DAZN is the world’s leading live sports streaming service, allowing fans in any country to watch sport in any language, on any device, live and on-demand. DAZN is available in over 200 territories, where it boasts the most extensive library of elite and emerging sports content.
In this episode of the Godel Pod, our host Francesca Platten, Client Director at Godel, is joined by Alex Fell, one of the VPs of Software Engineering at DAZN, to discuss front end architecture and how to balance technical innovation with stability.
Francesca Platten: Hi everyone. Welcome back to the Godel Pod. Today you are joined by myself, Francesca Platten, who is a Client Director here at Godel. And with me is the wonderful Alex Fell from DAZN. Alex, if you could please say hello to our listeners and give us a bit of an insight into you, what you do and what you do at DAZN.
Alex Fell: Hi. Thank you. My name is Alex Fell and I am one of the VPs of Engineering at a company called DAZN, D-A-Z-N, which is a sports streaming company. I’m in charge of a part of that platform called Front End Platforms, which is everything to do with our client applications. So mobiles, TVs and the web.
Francesca Platten: Amazing. So how did you get to be VP of Engineering at DAZN? Tell me right back from university up until now, what’s your history?
Alex Fell: Yeah, sure. From university I did a mixed multimedia kind of degree, which was half art and design and half technology. So I did a lot of music composition, but also a lot of graphic design and the theory around that. And that helped me springboard into creative agencies, digital creative agencies. I started work at the time when Flash and microsites were very popular, with a lot of people wanting high-concept, very flashy, very animated websites. I continued on that individual contributor route right up until I turned to software development for a product. That was at a company called YouView, which is a set top box platform provider that’s very prominent here in the UK. And so there I started to hone my skills on living room device development, set top box development.
It’s at that point where I started to become more interested in people leadership and leadership in general. From there I jumped to a number of different roles, hovering between technical architecture and people management and leadership. Eventually I got the call from DAZN. Somebody reached out and said, hey, your experience in this streaming background looks interesting, do you want to take a jump? And so I did. It’s been a roller coaster ever since then; streaming is a very exciting industry to be in.
Francesca Platten: Absolutely. From that sports streaming element, what do you find is constantly kind of up and coming? What do you find viewers want all the time?
Alex Fell: Well, one of the things that we are trying to address is the engagement that you have with your fans as part of being a technology platform rather than just a broadcast platform. We’ve seen an evolution in a lot of the ways in which companies, not just in sports streaming but in other fields, have tried to make technology more a part of their offering. So we have a particular segment in design that’s focused entirely on engagement of the user and trying to provide extra value as they are watching the game or engaging with other fans, both whilst the match or their content is live, so in-game analysis and statistics and gamification and things like this, but then offline as well, keeping up to date with your team and things like that. It’s all about how you can use the technology offering to engage people beyond just when they’re watching the match, which is the more traditional broadcast aspect of it.
Francesca Platten: Yeah, a lot going on at DAZN. I didn’t realize it had all those different divisions and things going on. So super exciting business, which is why I’m extremely excited to have you here. As Alex mentioned, he is a specialist in front end web architecture. That is what we’re going to jump into with our questions. Alex, if you’re ready. My first question for you is how has front end architecture evolved over time?
Alex Fell: I think a lot of the way in which we sort of approach architecture is kind of almost reactive to trends in consumer technology. So we think about the advent of the iPhone and what that meant for people. Prior to the iPhone, we were on a track where we had like the cloud that was very heavy. We did a lot of processing in the cloud and we didn’t do a lot of processing on the client or on the front end. It just was very minimal.
But as laptops, iPhones and tablets got more powerful, you started to get all this processing power that you were able to use on the client. You basically had a pocket computer much more powerful than what sent the rockets to the moon. So all sorts of crazy things became possible, and a lot of bigger companies leapt on this, like Facebook, and Snapchat with their lenses and things like that. Where previously we were on a track to keep the front end minimal with a lot of processing in the cloud, we then started to have a lot of processing in the client-side architecture. So these applications got very big.
What I find is that these things tend to yoyo between very light on the client, heavy on the back end, and then they go backwards and forwards, and we’re sort of in the middle of that same swing at the moment, because we’re starting to see the computing power in the cloud get very powerful as well. So a lot of things that you used to do on your phone, like processing images, are now being taken care of in the cloud. But because that technology is so quick now, it can be done near real time.
We’re seeing it yoyo back to being cloud heavy as well. It depends, really, on what the next technical evolution that comes out is and how we adapt to it. We’ve seen a lot of things to do with VR and AI as well; a lot of phones now have an AI chip embedded inside, and so we just need to see what people come up with next, really. So it’s really about reacting to these new technology trends and working out how best to structure your application around them.
Francesca Platten: So building on that, is it fair to say these big names and big players are the cause of this yoyo back and forth? Or is it consumer demand that drives it?
Alex Fell: I guess it is a little bit of both. Even if you think of our cloud providers and things like this, they are inventing technology to make it easier to move your code to the cloud so you have less on the front end. But then at the same time, we’re seeing a lot more from smartphone manufacturers as well, like flip phones and like I said, the AI chips and gyroscopes and things like that in the phones. So I think it’s a little bit of both, I would say.
Francesca Platten: Interesting. So how do we balance technical innovation with the need for stability and reliability?
Alex Fell: There is some great reference material out there, a lot of technical books. One that was doing the rounds a couple of years ago, and I’m sure is still doing the rounds now, is a great book called Accelerate, by Nicole Forsgren, Jez Humble and Gene Kim. In it they talk about four key metrics, key drivers, that are good for startups and good for seeing people grow at scale. The first one they call out in the book is Cycle Time, which is the time it takes you from conceiving an idea to getting it out into production. They also talk about Deployment Frequency, which is how many times you can push new features out to your consumers per day.
I think Facebook used to do something like 100 deployments a day, which is quite a lot. The second two relate more directly to your question. One is Change Failure Rate: how many deployments fail? And the last one is Mean Time to Recovery, which is basically how quickly you can recover if something goes wrong. Now, in terms of balancing innovation versus stability, one of the things you want to be doing is making sure you can test new things in small increments, often, so you haven’t got large changes going out into production, just small increments. And then if they fail, you can roll them back really quickly. We’re really lucky at DAZN in that we can roll back within a few seconds, like 30 seconds or so, if something goes wrong, and I’ve seen it even smaller in other places. Sometimes, though, you’re stuck with a very large deployment cycle: if you’ve got, let’s say, a set top box application or a phone application, you can’t necessarily roll that back so quickly.
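The two stability metrics Alex mentions, Change Failure Rate and Mean Time to Recovery, can be sketched in a few lines. This is a minimal illustration, not DAZN’s tooling; the `Deployment` record shape is a hypothetical one chosen for the example:

```typescript
// Hypothetical deployment record: when it went out, whether it failed,
// and (if it failed) when service was restored.
interface Deployment {
  deployedAt: number;    // epoch milliseconds
  failed: boolean;
  recoveredAt?: number;  // epoch milliseconds, only set for failures
}

// Change Failure Rate: the fraction of deployments that caused a failure.
function changeFailureRate(deps: Deployment[]): number {
  if (deps.length === 0) return 0;
  return deps.filter(d => d.failed).length / deps.length;
}

// Mean Time to Recovery: average minutes from a failed deploy to recovery.
function meanTimeToRecoveryMinutes(deps: Deployment[]): number {
  const failures = deps.filter(d => d.failed && d.recoveredAt !== undefined);
  if (failures.length === 0) return 0;
  const totalMillis = failures.reduce(
    (sum, d) => sum + (d.recoveredAt! - d.deployedAt), 0);
  return totalMillis / failures.length / 60_000;
}
```

The point of tracking both together is the trade Alex describes: frequent small deployments may nudge the failure count up, but a low MTTR is what makes that safe.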
In terms of the balance between innovation and the need for stability, there’s this idea of a fail-safe environment. You encourage your engineers to be able to fail safely: if you fail, that’s fine, because you almost have a safety net in that you can roll it back very quickly. And you’re trying to innovate in that you’re trying things really quickly and deploying just enough, not a big change. So it’s all about doing lots of small things quickly and adjusting as you go. If you follow those principles then you’re able to find that balance, or at least you have the flexibility.
Francesca Platten: Yeah. Would you say that having that ability to roll back quickly and making small incremental changes is the best way that companies and businesses can stay compatible with the latest technologies?
Alex Fell: Yes, I would say so. There’s maybe another layer around that you can add, which is basically not chaining yourself to a particular technology choice. Or, when you do make that technology choice, making sure you’re isolating it to just one part of your system, recognising that that part of your system has the dependency on that particular technology, and therefore isolating the impact it might have on anything else. Doing this, and keeping something like the core of your application technology-choice agnostic, shall we say, means that you’ll have greater flexibility in being able to adapt to future changes a bit more easily.
Francesca Platten: Absolutely. A really popular question when I reached out to some of our in-house front end web architecture division heads was that they were curious as to how we can ensure the security and privacy of software architecture. So I wanted to present that to you, Alex, and see what you thought.
Alex Fell: Well, I think it’s right to be concerned, and it’s an interesting question, because there is a whole branch of roles, jobs and subparts of the industry that deal specifically with security of code and security of deployments. Roles such as pen testers, penetration testers, specifically look for gaps and holes in systems; you can employ them or have them on-site. There is also something called SecOps, or security ops, and InfoSecOps, or information security ops. So there’s this whole broad range of skills and roles available in an organisation, and lots of literature and material out there for people looking into that internally.
One of the things I would say, day to day, is that automation is key. There are lots of great services available to try and catch these things as they go along, things like code scanning. A lot of software nowadays is built on open source software, where you’re basically taking a dependency on something somebody else has built, and there are a lot of automated systems that can check whether the package or dependency you’re downloading has a security vulnerability. Something that’s also useful here is version pinning, where you say: I know that this version is safe, so I will pin to this version and use only this version, so I know there are no other security flaws in it. That’s day to day. In a larger, more organisational sense, one of the things that has become apparent in the last few years, especially in the UK, is GDPR, where your user can request all the data you have on them, so you need to be able to store that data in a way that lets you retrieve and report on it.
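Version pinning as Alex describes it is most visible in a dependency manifest. In a hypothetical npm `package.json`, an exact version is pinned, while a caret range floats to newer releases, which is convenient but means an unreviewed update can slip in:

```json
{
  "dependencies": {
    "left-pad": "1.3.0",
    "some-ui-kit": "^4.2.0"
  }
}
```

Here `"1.3.0"` means only that exact version will ever be installed, whereas `"^4.2.0"` accepts any 4.x release from 4.2.0 upwards; `some-ui-kit` is an invented package name used purely for illustration.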
The scenario that comes up a lot is when you get a request from a user to wipe their data: I want everything personal you have about me wiped. And this has a big impact on how you actually design a system. One of the stints in my career was fintech, and of course one of the things you have to tackle there is auditing as well. So you might also decide to partition your systems such that only one part of the system needs auditing at a time.
If you have, let’s say, a bad architecture, you could easily spread that responsibility across multiple parts of the system, and that would mean your entire system would need to be audited, which is obviously not what you want and would take a long time. So both in terms of auditing and GDPR and things like this, you find that sometimes the design of the system is such that you isolate the part that holds the user-specific data. You might have heightened security on that particular piece, and a restriction on who can access it; oftentimes it can only be accessed by one particular system. So the design of your whole system, or series of systems, often revolves around the security as well. It’s a big thing for businesses to consider, not a side thought. That’s what I’m getting at here.
Francesca Platten: Yeah, absolutely. You mentioned automation testing; for the listeners who might be quite new to technology, would you be able to briefly explain in layman’s terms the difference between automation testing and manual testing?
Alex Fell: Sure, absolutely. Traditionally, the way in which you would test a piece of software is you would have a manual tester, a person who is given, let’s say, a test script and walks through it. The script would say: click on this, press this, or enter this piece of information. They would do that, wait to see what the output was, and check that the output matched what they expected. Automation testing is where that script, that test, is fed to a computer, which performs those operations on behalf of a manual tester. So the computer is able to simulate, let’s say, a click on a button and the entering of some information, and it’s able to wait until it sees the correct information. Take a calculator: if you put two plus two and press equals, it’s then waiting to see a four in the correct part of the screen. And if it doesn’t see a four, it says that’s a failure.
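Alex’s calculator example translates almost directly into code. This is a toy sketch: the "application under test" is a two-line function standing in for a real UI, and the scripted steps are the ones he describes (enter 2, press plus, enter 2, press equals, look for a 4):

```typescript
// A stand-in for the application under test: a trivial calculator.
function calculate(a: number, op: "+" | "-", b: number): number {
  return op === "+" ? a + b : a - b;
}

// A tiny "automated tester": it performs the scripted steps a manual
// tester would, then checks that the expected output appeared.
function runScriptedTest(): boolean {
  const shown = calculate(2, "+", 2); // simulate pressing 2 + 2 =
  return shown === 4;                 // did a 4 appear on screen?
}
```

In practice the simulated clicks and keystrokes would go through a browser-automation tool rather than a direct function call, but the shape is the same: perform the scripted actions, then assert on what the user would see.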
We see the application of this technology across the range in our industry, and it really helps you scale up, because of course you can do automation testing in parallel, which means you can do many tests at once. You can also run tests when people are sleeping: if you have a particularly long, complicated set of tests and you don’t want somebody sat in front of their computer for an hour, you can run them overnight. These are sometimes called nightly tests.
Francesca Platten: Yeah, makes sense.
Alex Fell: There’s also another part of this, closely related to automation, called a soak test. A soak test is where you do the same thing over and over again, like soaking a sponge; basically you’re repeating the same operation and looking to see if the system will crash or not.
And this can be a very repetitive thing for a manual QA tester to do, and again, you might run it overnight. Let’s say you’re uploading an image and cropping an image, something that might fail occasionally. You do it a thousand times and ask: over those 1,000 runs, did the application crash or not? So yeah, there are lots of exciting things we can get software to do that otherwise we’d have to sit in front of a computer for hours.
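The soak test Alex describes is just a loop with crash detection. In this sketch the operation is a placeholder (`cropImage` is an invented stand-in for "upload and crop an image"); the harness records how many runs survived and whether anything crashed:

```typescript
// Hypothetical operation under soak test; in a real run this might be
// "upload an image and crop it". This stand-in never fails.
function cropImage(runNumber: number): void {
  if (runNumber < 0) throw new Error("crash");
}

// Soak test: repeat the same operation many times and report whether
// the application survived every run, and how far it got if not.
function soakTest(runs: number): { completed: number; crashed: boolean } {
  for (let i = 0; i < runs; i++) {
    try {
      cropImage(i);
    } catch {
      return { completed: i, crashed: true }; // stop at the first crash
    }
  }
  return { completed: runs, crashed: false };
}
```

Run overnight with a large `runs` value, this is exactly the "did it crash over 1,000 repetitions?" question, without a person clicking the same button a thousand times.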
Francesca Platten: Are you seeing a shift, or quite a trend, within your industry towards automation and that sort of testing instead of manual, purely because of the parallel running and probably the more efficient way of doing things?
Alex Fell: Yes, across the board. Automation testing is a good way of getting, again, this idea of a safety net, purely because of the way in which you can scale it. It means your cycle time is less, so you can do more things. And because it runs in an automated way, you also don’t have to schedule it. So you have this idea of on-demand testing: let’s say I’ve made a change and pushed it to the system where I’m storing my code; I can then kick off an automated test straight away, without having to schedule something with a manual tester.
But what automation testing is not so good for, and I mean it’s getting better, but it’s not quite the same, is what’s called exploratory testing. Let’s say somebody has made a change to your application, and it’s quite a deep-seated change in that it could appear in lots of different places. Say you’ve changed your primary colour to red instead of blue, and that primary colour could be absolutely everywhere. What you could do there is ask a manual tester to do some exploratory testing: hey, go through all the different parts of the application, go through it in different sequences, out-of-order sequences, and see where things might have broken. With the advent of AI there are things called generative testing, where all those different combinations, going in and out of different flows, could effectively be generated by AI. But we’re still in the early days of this, and you still need a manual tester, somebody who knows the product, knows how things work, and ultimately knows how to break things, which manual QAs are exceptionally good at. They’re very good at breaking things, which is what you want them to do. This is still a key part of their job, and absolutely the best use for them in my opinion.
Francesca Platten: Absolutely. In terms of managing dependencies between different components and software architecture, what in your opinion would be the best approach?
Alex Fell: So when it comes to dependencies, when you have a dependency on someone, a team, or a piece of code, I think it’s almost like buying a house or buying anything: you want a strong contract between the two entities. Even if it’s got nothing to do with writing any code, even if it’s just, you deliver this by this date and I deliver this by this date, you want a strong contract, or at least an agreement, between the two of you that says: this is the boundary within which we both agree to operate. That’s the idea of a contract.
Now, in terms of code and technical systems, we also have mechanisms for describing contracts between different software systems, so we have things like API definitions. You say: my piece of code works like this and your piece of code works like that. And you have this idea of something called semantic versioning. So let’s say I own a piece of software, you interact with my piece of software, and I need to introduce a new piece of functionality, but it will break the way it works in some way.
What I can do is say, hey, I’m releasing a new version, and this version breaks compatibility with your system. The idea of saying I have a dependency on this version, and I’ve changed this version, means we have a strong contract and agreement between us. So let’s say you depend on version one and I’ve created version two. You say: that’s fine, other people might want to use version two, but I still have a dependency on version one. And there’s this idea of creating backwards compatibility between different systems. What often happens is people will run versions in parallel: I will keep running version one and version two at the same time, because I know that you have a dependency on version one, but new people might have a dependency on version two. And then over time we do something called deprecation. We say: I will stop supporting version one in six months, which basically gives you six months to move from version one to version two. So this is one of the ways in which we preserve those strong contracts. It can quickly get out of hand, though. One of the things Microsoft is really good at, because they are installed on a lot of legacy systems, is that Windows is backwards compatible for something like the last five or six versions, which means they’re still supporting Windows 7 functionality inside Windows 11.
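The "you depend on version one, I've created version two" rule Alex describes is the core of semantic versioning: a consumer locked to one major version can take any release within it, but a new major number signals a breaking change. A minimal compatibility check might look like this:

```typescript
// Under semantic versioning (major.minor.patch), a bump of the major
// number signals a breaking change; minor and patch bumps are additive.
// So a consumer pinned to major version N can accept any N.x.y release.
function isCompatible(dependsOn: string, offered: string): boolean {
  const major = (v: string) => Number(v.split(".")[0]);
  return major(dependsOn) === major(offered);
}
```

So a client built against "1.2.0" can safely take "1.9.3", but "2.0.0" is the signal to either migrate or keep the old major version running in parallel until it is deprecated, exactly the six-month window described above.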
Francesca Platten: I did not know that. Wow.
Alex Fell: Yes, Microsoft are very good at their backwards compatibility. Between developers, I think you typically only support maybe three versions. So once you get to version four, you might start deprecating version one, and you expect people to have upgraded. I suppose it’s a bit different when you have, let’s say, a B2B relationship, by which I mean a business to business relationship. Say I’m providing some data to you and I know that you’re a valued client: I might not change that API at all. I might keep that API running for as long as the relationship exists.
It’s a lot to consider. I think the key thing with dependencies is enforcing that strong contract and, when you make a change, using versioning, something like semantic versioning, to make sure you’ve effectively communicated the change, maintaining backwards compatibility, but also setting a precedent for how long you expect that support to go on.
Francesca Platten: Yeah, I guess this is how technology evolves, isn’t it? Once I bring out a new piece and it’s no longer compatible with what you need, that almost forces you to advance or update your technology. If we have that contract in place, I can keep running against that old back end, but if you then do another version, and another one on top of that, it kind of forces me to come up in parallel with you, if that makes sense. And I guess that’s why we’ve now got multiple versions of technologies, languages and things like that.
Alex Fell: That’s absolutely right. And a larger approach to this is something around API design, where the design of your API is structured in such a way that it allows evolution without what are called breaking changes. Which basically means you’re free to add things to your contract, but it’s just additive; it doesn’t break the underlying contract. Now, this is a very difficult thing to get right, but when you do get it right, it’s really very blissful for everybody involved, because it means you can just upgrade to the latest version knowing there are no breaking changes, and you get all the added functionality if you need it, or you can ignore it if you don’t. This is really the beauty of good API design, if you can get it right.
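One common way to keep a change purely additive, as described here, is to make every new field optional, so every old payload remains valid and old clients can ignore what they don’t understand. The interfaces below are invented for illustration (a sports-flavoured example, not a real DAZN API):

```typescript
// Version 1 of a hypothetical API response.
interface MatchSummaryV1 {
  homeTeam: string;
  awayTeam: string;
  score: string;
}

// An additive, non-breaking evolution: the new field is optional, so
// every valid V1 payload is still a valid payload for this type, and
// clients that predate it simply never look at it.
interface MatchSummary extends MatchSummaryV1 {
  attendance?: number; // added later; safe to ignore
}

function describe(m: MatchSummary): string {
  const crowd = m.attendance !== undefined ? ` (${m.attendance} fans)` : "";
  return `${m.homeTeam} ${m.score} ${m.awayTeam}${crowd}`;
}
```

The design choice is that the consumer, not the producer, decides whether to use the new field; nothing in the old contract changed, so no coordinated upgrade is needed.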
Francesca Platten: Absolutely. In terms of key performance metrics, what can we track from a front end perspective, and how can we optimise for them?
Alex Fell: It’s a good question. When we talk about performance, one of the champions here is Google. They’ve come up with lots of fun little abbreviations for what they consider important. By that I mean they’ve done a lot of research on, let’s say, web pages and Android applications to understand, from the user’s point of view, what good metrics to track are. One of the ones they talk about is Time to First Paint. They’re all these clever little acronyms, but what it basically means is: when you open your app or click on a link, what’s the first time you see something visual that indicates you are on the application?
Sometimes people try to game this as well. If you’ve ever opened something like Netflix or Disney Plus, you see an N straight away, or the Disney logo straight away. That’s because you need to get something on screen to tell the user that something is happening in the background, and this is the Time to First Paint metric that people are looking at: the time until you see the first paint.
There’s also another one, slightly different, called First Contentful Paint, which is the time when you first start seeing actual content, not just something visual: text or imagery, which I guess is like when you can see all the tiles. And then another key one is Time to Interactive, which is the first time you can move your mouse and it responds, or you press something on your keyboard, or you start scrolling on your phone and something actually happens. Time to Interactive is a really good indicator of when the user can start actually using your application. These are all packaged up into something Google refers to as Web Vitals, and if anybody wanted to Google that, you’d see all the research that comes along with it. And of course it’s not just the web; a lot of these principles apply to phones and TVs and other parts of the system.
Francesca Platten: So to kind of put it in a real world application, your Time to First Paint would be that initial N you see on Netflix. Then the first content, would that be selecting which user you are? Would that be that page?
Alex Fell: So it’s the first time you see some sort of text or imagery on your screen other than the logo. So after the logo, it would be that profile select screen. Yeah.
Francesca Platten: Is that their way of kind of buying some time while everything in the background is getting ready?
Alex Fell: And I think because Google is so big, you get, not a narrowing, but a lot of convergence of people around the same technique. So a lot of people show your logo while something is happening in the background, just to let you know that something is happening. I wouldn’t say it makes the web a less innovative place with everybody doing the same thing; at least it’s a tried and tested UX, user experience, trick or technique to let people know that something is happening.
Francesca Platten: I guess we’re all guilty of that, aren’t we? If you’re using something on your phone and nothing’s happening, you instantly get frustrated, or you click off it, or you try to refresh it. Whereas having this Time to First Paint element is a way of buying five, maybe ten seconds, which is sometimes all you need. I’m guilty: if I go on a shopping app and it doesn’t load immediately, I go off it, and they’ve lost my business really quickly, because I don’t have the patience to sit and wait. Whereas with Netflix, I will wait. So I guess it ties in quite well to keeping that user engaged while everything gets ready in the back.
Alex Fell: It does, yeah. And a lot of these UX patterns are due to the form factors we now interact with, which is something we try to take into consideration when designing apps for different form factors. One of the things you’ll find with mobile applications is that they have to be quite responsive, quite reactive, because there’s something about being able to touch something that’s very close to your face or in your hand, this tactile experience. Because it is tactile, by which I mean you’re touching it and directly interacting with it, you expect things to react. Like when you pick up a squishy ball and you squeeze it, it squishes, right? It’s that kind of psychology. Whereas let’s say you’re on a laptop with a big screen: you can move a window to the side, and if something’s not loading, it’s fine; maybe you go across to your other window and do something else while you’re waiting for it to load.
And the same thing for a TV. You’re usually on your sofa, you’re usually quite relaxed anyway. You’re usually not sat down for five minutes; you’re sat down for half an hour to a couple of hours if you’re watching a movie or a football match. So you’re happy to wait. A lot of this instant reaction stuff is less important on those types of form factors versus the mobile phone, where if something’s not working, you quickly move on to something else.
Francesca Platten: Absolutely. I know we have touched on a bit of testing, but I think we can dive a bit deeper into that element. So what role does testing play in front end architecture? And how do we ensure sufficient test coverage?
Alex Fell: So there is something called the testing pyramid, which is this idea that you have some low-level tests at the bottom, and as you go up, the tests become more complex and more what we call end to end, meaning they cover all parts of the system. The way the pyramid is structured, at the bottom you have tests that are low cost, easy to run and quick to run, and towards the top they’re higher cost. When we say cost, we mean in terms of maintenance and in terms of things that take longer to run. The key thing to ask yourself when you’re exploring the testing pyramid is: when do you stop? How many tests do you need in each layer? The real guidance you can get for that is what value they give you as the business and as the development team. Having a strong handle on this, and talking with your engineers and the business about what value those tests are giving you and where to invest, can let you know when to stop.
So, for example, at the bottom of the pyramid you have things called unit tests, which are usually non-visual and run very quickly, so you can run a thousand tests in two seconds or something like that. These are very quick tests that you run every time you make a change, and they give you validation of, let’s say, your business logic, making sure that the calculator’s two plus two equals four keeps working. These are very cheap to run, and investment here is quite good because they’re quite cheap to write. But if you only had this layer, you wouldn’t get the full picture of how your customer uses the product, because of course it’s often not just one system the user interacts with. It’s a layered application, right? You have your UI, your front end, at the top, then the server underneath, and then some databases behind. The higher up the pyramid you get, the more end to end the testing becomes: you really deploy your code somewhere and get a real user or an automated test to go and check it. Those tests have high value because they test the product the way the customer uses it.
But again, you don’t want to invest only in those tests, because your test suite would take ages to run and you wouldn’t be able to cycle through changes very quickly. So you need to find the balance that’s just right at each layer, to understand what enables you to go as fast as possible while giving you the value you need out of your development process.
Francesca Platten: When we say it takes ages to test these elements, what time frames are we talking about? Because “ages” in software can mean something very different to “ages” travelling to Dubai, for example.
Alex Fell: Yes, it is a very good point. It’s relative, I would say, to the speed of the other tests. One of the other aspects of these tests being expensive is not just time; it’s also the sandbox, by which I mean the isolation that is required to test each part of the system. Take the scenario where you’ve deployed some code and you’re running some tests: you wouldn’t want lots of people to be performing tests on that same system at the same time, because you might get false positives or false negatives. So usually you would run a test in isolation, which means you would spin up an environment, by which I mean you would deploy an entire series of systems to one environment, test them all, and then destroy them or throw them away. Ephemeral environments is the term: you spin them up, run a test and tear them down. And this can be quite costly in money terms: if you’re using the cloud, you get billed; if you’re using your own machines, it’s electricity and power. And then there’s the time to set the system up, the time to destroy it, and the actual test itself, which could be quite quick, but you have to do them in sequence. You couldn’t necessarily run them in parallel, because you generally want to do them in sequence in one environment. There are exceptions to that rule, but in general the principle holds, so you’re hurt by the fact that it has to be done in sequence.
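To make the sequencing cost concrete, here is a back-of-the-envelope sketch. The durations below are invented purely for illustration, not DAZN figures; the point is that setup and teardown dominate, and running suites one after another multiplies that overhead:

```javascript
// Illustrative arithmetic only: why sequential ephemeral environments are slow.
// All numbers are made-up example figures.
const setupMinutes = 10;   // deploy the whole system to a fresh environment
const testMinutes = 5;     // the end-to-end test itself is often the quick part
const teardownMinutes = 3; // destroy the environment afterwards

const suites = 6; // six end-to-end suites, run one after another in sequence

const totalMinutes = suites * (setupMinutes + testMinutes + teardownMinutes);
console.log(`sequential total: ${totalMinutes} minutes`); // 6 * 18 = 108
```

Even with a five-minute test, the pipeline spends most of its time building and destroying environments, which is exactly why end-to-end tests sit at the expensive top of the pyramid.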
Francesca Platten: Yeah. I guess with businesses that are doing a lot of innovation and product development, that means there’s a kind of constant need for testers, is that fair to say?
Alex Fell: Yes, absolutely yes.
Francesca Platten: So how do we balance that need for rapid new features, new developments with a clean and organized front end architecture?
Alex Fell: Yeah, so I think all companies struggle with this particular problem. You have rapid development happening, and you need to balance that by making sure you’re not tripping each other up. And I wouldn’t say there are any particular magic bullets for this. One of the things you can do is actually more cultural and people-related than it is software- or system-related, and that is making sure you have clear lines of communication between teams and the business. You’re rapidly iterating on your product and looking to push something out, and let’s say your part of the product might affect some other teams. If you don’t communicate that that change is happening, then you could easily cause a problem for somebody else. So having visibility on what you intend to release, in what manner, to which audience, et cetera, can really assist you along this journey. At DAZN we also try not to release around big sporting events, so we have code freezes and release windows and things like that.
Not everything is cultural and communication-driven. There are some technology approaches that can assist you with rapid feature development, and one is the idea of being able to deploy code that is dormant and then later gets switched on or off. This is a technique called feature flagging, often used alongside A/B testing. The idea is that you have a little switch inside your product, and the switch is off by default. Let’s use the example from earlier, where you’re switching a colour from blue to red.
So you’d have a little switch in your code that says turn it red, and by default it’s blue. What you do is make that change, put the switch in, and by default it’s off for everybody, so you can safely deploy the code and nothing changes for anybody. Then, remotely, without deploying any code, you turn that switch on: you send a little message to the application and it turns the feature switch on. Now, the great thing about this is that you can control who actually sees it. You might not want to enable it for all of your users at first; you might want to enable it for just a certain cohort or segment, as they call it, of your users. So, for example, dogs can only see black, white and red, so you might want to switch it on only for dogs. You might want to target specific users or user groups. This is particularly good when you have things like beta testers, where you can say this user is a beta-test user, so let’s turn on the new style, from blue to red, just for them and see how it goes.
And this is a really good way of rapidly iterating on features without doing big, chunky releases. You’re just putting code out there that’s dormant, and then you can switch it on, and if something goes wrong, you can just switch it off. It means you’re not waiting for a big deploy and then a rollback, which again relates to one of those metrics: frequency of deployments.
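A minimal sketch of the feature-flag pattern Alex describes might look like the following. The flag name, cohort rule and in-memory store are all hypothetical; in a real system (an in-house service or a hosted one such as LaunchDarkly) the flag state is delivered remotely rather than held in a local object:

```javascript
// Feature-flag sketch: code ships dormant and is switched on remotely
// for a cohort. Everything here (names, store, cohorts) is illustrative.
const flags = {
  // off by default: deploying this code changes nothing for anyone
  "header-colour-red": { enabled: false, cohorts: [] },
};

// Stands in for the remote "switch on" message: new flag state, no redeploy.
function updateFlag(name, enabled, cohorts = []) {
  flags[name] = { enabled, cohorts };
}

function isEnabled(name, user) {
  const flag = flags[name];
  if (!flag || !flag.enabled) return false;
  // An empty cohort list means "on for everyone"; otherwise target segments.
  return flag.cohorts.length === 0 || flag.cohorts.includes(user.cohort);
}

function headerColour(user) {
  return isEnabled("header-colour-red", user) ? "red" : "blue";
}

const betaUser = { id: 1, cohort: "beta" };
const regularUser = { id: 2, cohort: "general" };

console.log(headerColour(betaUser)); // "blue" while the flag is off

// Remotely enable the feature just for the beta cohort:
updateFlag("header-colour-red", true, ["beta"]);
console.log(headerColour(betaUser));    // now "red"
console.log(headerColour(regularUser)); // still "blue"
```

The rollback story falls out for free: `updateFlag("header-colour-red", false)` turns the feature off again for everyone, with no deployment in either direction.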
Francesca Platten: Yeah, it sounds like there’s quite a lot for a business to consider when trying to release something new. Can we afford to have this dormant code sat there? Do we have the in-house capabilities? But equally, we don’t want to interfere with or mess up what we already have and disrupt that testing process. So it’s a balance between what new things we can release and what we have the capability to maintain without all hell breaking loose. Is that fair to say?
Alex Fell: Yeah, that is fair to say. And sometimes you can get yourself into a situation, if you’re running a lot of these feature tests at the same time, where you’ve got to check the combinations. Because if you’ve turned too many things on and you haven’t particularly tested that combination beforehand, you could end up testing it in production and seeing what goes bang. And again, this is where your mean time to recovery can help you: if you can switch quickly and turn it back off, then you’re in a safe space. So you’re testing in production, in a sense.
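The combination problem is easy to see with a little arithmetic: N independent flags give 2^N on/off combinations, which quickly outgrows what can be tested before production. A hypothetical sketch that enumerates them:

```javascript
// Sketch of the flag-combination explosion: N independent on/off flags
// produce 2^N states. Flag names are illustrative.
function allCombinations(flagNames) {
  const combos = [];
  for (let mask = 0; mask < 2 ** flagNames.length; mask++) {
    const combo = {};
    // Each bit of the mask decides one flag's on/off state.
    flagNames.forEach((name, i) => {
      combo[name] = Boolean(mask & (1 << i));
    });
    combos.push(combo);
  }
  return combos;
}

console.log(allCombinations(["red-header", "new-player", "dark-mode"]).length); // 8
console.log(2 ** 10); // with ten flags, 1024 states: you only ever test a subset
```

With a handful of flags the full matrix is still testable; beyond that, teams lean on the fast switch-off that Alex mentions as the safety net.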
Francesca Platten: When that product has obviously been tested, the new features have been tested, and it’s ready to go out to the market, how do we incorporate feedback from customers, product users and stakeholders back into that architecture to make something better?
Alex Fell: Yeah. I think the real key here, again, is to try to keep that feedback loop as tight and as small as you can. One of the things I’ve seen work really well is when this information and feedback from customers is not hidden away from the development teams, the teams directly involved in producing the end product. Some really great examples I’ve seen are where we’ve had focus groups come in to test new versions of applications, with a camera in the room while they’re using the product, a TV product in this case. Then you get people watching that back and screaming at the screen: you need to press the right button, you need to press the right button! Of course, the user hasn’t spotted it. Maybe it made sense in the initial design, but when you get to real users you can see they’re having trouble actually using it. Even at the developer level, for the person who’s actually writing the code, being able to see real users and how they interact with the product they are making is incredibly useful.
So it’s not just about incorporating it at a business-development level and a product level; actually making sure that feedback gets right down to the team itself is really useful. And this is a really good way of forming teams and organisations where your development teams manage the part of the product they own end to end, so they’re really involved and really invested in the product they make. That makes for happy developers, and more productive developers as a result.
Francesca Platten: How important is feedback for you at DAZN? Is data something you utilise a lot in your job? How pivotal is it to what you do next and what you continue to do?
Alex Fell: Yes, a lot of organisations are really investing in data-driven decision making. I think sometimes you can rely on it too much, but there are lots of great data-mining tools. And there are lots of great roles being created inside organisations now; data scientist is one of the key ones, where you might be looking at a huge amount of data and need somebody to help process it, carry out the analysis and create useful outputs from that mass of data.
I’ve been at a number of different organisations where we have what’s called a self-service data platform, whereby the data is ingested into a set of systems; some people only have access to certain parts, but other parts are open to everybody. So I can go in, and there’s a little application I can use that can tell me how popular a certain type of device is, or how many users in Spain use a particular feature, and things like that. If I’m curious about how a certain part of the product is working, I can dip in and find that information.
This idea of self-service is really important, particularly in larger companies, where you have a lot of data flowing about. That’s data about how the product is doing; internally, it might also be about the team and people management and things like that. It’s also very important there to be making data-driven decisions. Sometimes you can have a tendency towards opinion-based or risky assumption-based decision making, where you assume one thing is right, but when you check the data it’s actually wrong. And you also have this idea of belief: you might believe a user will interact in a certain way, but actually they won’t. So it’s always good to validate your beliefs and your assumptions whenever you’re making those kinds of decisions.
Francesca Platten: Absolutely. I think what we’ve learned from this podcast is that there is a lot that goes into front end architecture and there’s a lot of things to consider. So it’s been an absolute pleasure to have you, Alex, because I imagine you’re an extremely busy individual. So thank you so much for your time. I do have one more question for you, which is obviously DAZN is massively tied to boxing. Are you a boxing fan and do you have a favourite boxer?
Alex Fell: I wouldn’t say I was a big boxing fan, but then AJ came to the DAZN offices in Hammersmith and people really got to meet him. If you look on our LinkedIn and our news pages, you’ll see lots of very smiley people.
Francesca Platten: I can imagine.
Alex Fell: Yes. Everybody’s always looking out for the next pay-per-view boxing event on DAZN, and they all get posted around internally so we can all see what’s going on. A lot of them we do end up watching as well, sometimes just to make sure the system is working, but also just to enjoy the fight. Boxing is very big in the UK, of course, though I should say that DAZN is global and we deal with lots of different content in lots of different regions. Football is very popular in Europe.
Francesca Platten: Interesting. Well, I’m glad you’ve become a boxing fan and that it’s opened up this whole new world for you. But equally, I’m so glad I’ve been able to talk to you about your speciality, which is front-end architecture. All views that Alex has shared today are his own and not those of his employer. But thank you so much, Alex. Once again, you’ve been a delight and I cannot wait for everybody to hear this podcast.
Alex Fell: Thank you very much. It’s been a delight, I’ve really enjoyed it. Thanks for taking the time.
Francesca Platten: Thank you so much.
[Outro] Francesca Platten: A huge thank you to all the listeners out there who’ve listened to this instalment of The Godel POD. If you like what you hear and would like to know when we’re releasing more episodes, please subscribe to the Godel page on Podbean or Spotify.