Long-serving Godel quality assurance engineer Elena Prilutskaya recently presented a seminar on context-driven software testing at the Leeds Tester Gathering. Here, Elena explains the principles behind the concept and how she, along with Godel’s software development and quality assurance teams, adapts to all manner of contextual differences within our partnerships.
Quality assurance plays a vital part in the software delivery process, as it is integral to building, releasing and improving any software project. For businesses, software delivery has to keep pace with rapidly changing stakeholder demands, which means that quality assurance functions must be able to adapt to contextual changes in requirements.
I have worked in quality assurance for nine years, for different clients in different domains, each with their own requirements. In that time, what I’ve grown to understand is that you need to take responsibility for your own decisions and choose the best techniques, rather than blindly following testing ‘best practices’. Godel champions this approach as essential to delivering the best results for its clients, and I have always found that it places a great deal of confidence in its quality assurance engineers.
What Does Context-Driven Software Testing Really Mean?
Context-driven testing is an approach that puts the value of varied user needs first: the end-users of a software project, and their requirements, are treated as the most important factor. The concept doesn’t tell you to ignore best practices – in fact, it encourages you to play with them, because the value of any testing practice depends heavily on the context.
When weighing up the context of a project and choosing the best testing approach to take, there are essentially four types of context-based testing to consider: context-oblivious, context-specific, context-imperial and context-driven.
Firstly, context-oblivious testing is when you are unaware of the relationship between the way you approach testing and your reasons for choosing that approach. It’s done without any thought for the match between testing practices and testing problems.
Context-specific testing is when you’re faced with a fixed set of problems in a project and have a set of solutions ready to use. Because the problems don’t change, you never adapt your solutions.
Context-imperial testing is when you know the best practices and try to apply them to every situation, without playing with them at all. For example, if you don’t have a specification but still need to test, a context-imperial approach would mean refusing to test until the specification arrives. Why not try to test in the meantime?
Finally, context-driven testing means learning about practices, then decomposing and recomposing them to fit what you actually need. To be context-driven, it’s important to consider the needs and goals of the project, its existing assets and liabilities, and its risks and requirements. The key skills are asking questions and exploring the current situation, focusing on the unique context of the project to tailor an individual approach.
The parallels between context-driven testing and the pillars of agile development are significant: both value individuals over tools, and both treat collaboration and responding to change – or context – as a necessity. However, context-driven testing is not limited to agile teams, and its values can be applied in all manner of software development environments.
To analogise the concept, think about driving. It seems as though the principles are the same in every country, right? You sit down behind the wheel and go. But you need to look at the context. When I came to the UK for the first time, I was absolutely shocked by what looked like chaos on the streets! I couldn’t begin to understand how all the traffic managed to avoid the crowds of pedestrians crossing the roads here, there and everywhere.
Over time, however, I began to consider the context of the situation. Really, the UK’s roads are organised, and there are rules in place. To sit behind the wheel and drive in the UK, I had to become context-driven and apply only the most relevant best practices.
Having worked on a number of different projects at Godel, I have a wealth of experience in applying context-driven values to my work. Godel quality assurance engineers fully integrate with an organisation’s in-house software development function – which means that the ability to adapt to different organisations’ cultures is a key part of Godel’s own culture.
Example Of Context-Driven Testing
For four years I was part of a Godel project, working with a UK organisation to build and test a very complex system. Functional requirements were constantly changing and delivery priorities shifted frequently. We had no specifications at all – only Jira story cards. And while we did have business analysts, they were business-oriented rather than technical.
So when approaching testing on this project, it was very important to look at the context rather than follow standard best practices. Firstly, our quality assurance sprints ran one week behind the development teams’ sprints. The contextual reason for this was that our requirements arrived as Jira story cards from business-oriented business analysts, so the development of each complex requirement often took a long time, which in turn delayed testing. Naturally, our client wanted completed functionality at the end of each sprint, so we tested end-to-end, which was much easier than testing each piece separately.
The developers on this project wrote functional tests themselves – it was great! At the time, there was no automated quality assurance (AQA) function, so this was chosen as the best practice for the context, to be sure we had good-quality code. We also worked to replace the functional tests written by our developers with automated regression testing, in order to save time and deliver to the client faster. Also to save time, we didn’t write test cases – instead, we wrote BDD acceptance test criteria and Jira cards, which our AQA function could then use as necessary.
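To give a flavour of what that looked like in practice, here is a minimal sketch of an acceptance criterion written in Given/When/Then form and automated as a regression test. The shopping-basket example and the use of pytest are purely illustrative assumptions, not the project’s actual code or tooling.

```python
# Hypothetical sketch: a BDD-style acceptance criterion from a Jira story,
# automated as a regression check. The Basket class and pytest usage are
# illustrative assumptions, not the project's actual code or tooling.


class Basket:
    """Minimal stand-in for the system under test."""

    def __init__(self):
        self._prices = []

    def add(self, price):
        self._prices.append(price)

    def total(self):
        return sum(self._prices)


def test_adding_an_item_updates_the_basket_total():
    # Given an empty basket
    basket = Basket()

    # When the customer adds an item priced at 10.00
    basket.add(10.00)

    # Then the basket total equals 10.00
    assert basket.total() == 10.00
```

Running a suite of small checks like this with pytest gives a team a fast regression safety net without maintaining separate test-case documents.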
Again taking the context of the project into account, it was vital that there was very strong communication within the team. The lack of a technical BA led us to appoint one of the quality assurance team members to take on this role and write requirements that helped the developers understand how the system should behave from a technical perspective.
The client is very happy with the work we deliver – because Godel teams take a unique approach to each client and each project, we are able to apply the best techniques for their unique requirements every time.
To explore how our clients feel about their partnerships with Godel and how our unique approach to software delivery works for each of them, visit www.godeltech.com/industries/