
The Changing Face Of Software Testing: Where Do We Go From Here?

14 January 2019 , by Charles Brittz

As testers, we've evolved our craft a lot over the years. Inadvertently, though, we've also erected a bunch of barriers to effective testing. I'm a testing professional who's been in the industry for nearly two decades, and I've seen this happen first-hand. In this article, I'll first discuss the main barriers we've created before diving into the mindset changes and practical strategies that I have found work to overcome them.


What I mean by testing

Testing means different things to different people, so it's worth clarifying what exactly I'm referring to. The testing I'm talking about is the kind done by a specialised team, normally known as the Quality Assurance (QA) or Test team. These are the teams I have worked in for many years. This team's testing happens separately from the testing that developers do themselves, for example with unit tests. The QA or Test team would:

  1. Take expected system behaviour and output as their yardstick,
  2. Devise extensive tests to check that the system behaves as expected,
  3. Execute these tests,
  4. Explore undiscovered behaviour, and
  5. Report the results back to the developers.

Testing is effective when it can reliably determine whether software is good enough to be released. Unfortunately, we've erected three key barriers to this. Here's what I've found them to be.

Testing is effective when it can reliably determine whether software is good enough to be released.

The key barriers we've erected

1. We've oversimplified things, making it harder to determine whether software is good enough

Over time, our effort to entrench testing in the Software Development Lifecycle (SDLC) has had some nasty side effects. In trying to build standardised, scalable processes to decide whether the code is worthy of release, we've oversimplified things. Software behaviour is complex and very much context-dependent. The common way that we test software doesn't properly cater to this complexity and is too rigid to deal with different contexts.

We often pass software through a set of consecutive test cycles. Each cycle comprises a range of tests, the results of which are aggregated into a simple pass/fail rate and a bug count. Only when code satisfies these simple checks is it deemed "good enough" to deploy to production. This is bad because:

  1. Having 'passed' a cycle doesn't tell the full story of the software's quality or release readiness: there might still be problems. After all, what does it really mean when we say software "has cleared three clean cycles" or that a test cycle "has a 50% pass/fail rate"?
  2. The existence of bugs doesn't mean that the software should not be released. Using a simple bug count and threshold to decide whether to release doesn't give enough context and therefore isn't useful. If the latest version of the software contains high-priority bugs, but not releasing it endangers the business as a whole, then what's the point of holding it back? A bare bug count never answers the question we should be asking: is our software release-ready? (The sketch after this list makes the contrast concrete.)
  3. Waiting to test only when all the code is done is simply too late, because the cost of fixing, or even understanding, issues that deep into the process is too high. Testing can, and should, happen much earlier.
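
To make this concrete, here is a minimal sketch in Python of a more context-rich alternative. The data model and names are hypothetical, not a prescribed tool; the point is that each finding carries its impact, so the report can answer "is this release-ready, and why?" rather than emitting a bare pass rate:

```python
# A minimal sketch (hypothetical names and data model) of a release report
# that keeps context instead of collapsing a cycle into pass/fail counts.
from dataclasses import dataclass, field


@dataclass
class Finding:
    summary: str         # what we observed
    severity: str        # e.g. "high", "medium", "low"
    affected_area: str   # the feature or flow at risk
    release_impact: str  # why this does (or does not) block the release


@dataclass
class ReleaseReport:
    change: str
    findings: list[Finding] = field(default_factory=list)

    def release_ready(self) -> tuple[bool, str]:
        """Answer the real question -- is this release-ready? -- with a reason."""
        blockers = [f for f in self.findings if f.severity == "high"]
        if blockers:
            reasons = "; ".join(
                f"{f.affected_area}: {f.release_impact}" for f in blockers
            )
            return False, f"high-severity risks remain ({reasons})"
        return True, "no findings that outweigh the value of releasing"


report = ReleaseReport(change="new checkout flow")
report.findings.append(Finding(
    summary="payment retries on timeout",
    severity="high",
    affected_area="checkout",
    release_impact="customers could be double-charged",
))
print(report.release_ready())
```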

All of this means that we don't give enough attention to the context of the change or the failures that were noted. These processes often also "activate" testing as a phase right at the end of the SDLC because there are so many gates to clear first. Often the time available to test is squashed to almost zero, which negates most of the value that testing is supposed to bring. We have designed cookie-cutter processes and we apply them to all situations only to be surprised when they consistently fail.

2. We're seen as gatekeepers and tolerated, making it more difficult to collaborate with developers on improving software

By hammering on about injecting our gated processes into the SDLC, we've acquired the label of "gatekeepers" amongst developers. This breeds resentment and an "us-versus-them" mindset amongst both developers and testers. Such a mindset is problematic, because it hampers collaboration. Without close collaboration between developers and testers, it's a whole lot harder to create the shared context necessary for effective testing.

The gatekeeper label also obscures the value that great testing actually brings: accurate and trustworthy information about a change or system and its readiness for release. This is closely linked to the problem of oversimplification, because great testing offers far more value than a "pass one clean cycle, then release" approach.

A big part of being tolerated rather than embraced is that testing is often seen as an entry-level career, a fallback for those who did not make it as programmers. That is, of course, complete nonsense: our craft is as important as every other discipline that contributes to the software delivery process. But we need to start showing up. We will show up if we first value the work that we do ourselves, and then demonstrate that value to the rest of the team with meaningful contributions. Instead, we keep talking about the same problems and complaining about the same issues year after year without offering any real, tangible solutions to improve testing, and so the label sticks. No doubt some of the processes that we nurtured and curated were intended to promote collaboration with teams and add value, but in reality they have often had the opposite effect.

3. Testing bureaucracy becomes an obstacle to agility and rapid releases, both of which are integral to improving software

We often insist on a signed-off specification or requirement document before we start our process. This slows everything down. Truthfully, I have yet to come across a complete and fully comprehensive specification anyway, so why force everyone to wait?

Another example is how we write and capture test cases. We don't need a million 100-step test cases that describe in minute detail what each step should do, because that is not where the real value of testing lies. The value lies in the information we gather when we test and exercise a system with our experiments.
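
As an illustration, compare a scripted, step-by-step test case with a single intention-revealing check. This is only a sketch; apply_discount and its rules are hypothetical stand-ins for whatever behaviour your system promises:

```python
# A sketch contrasting a 100-step script with one intention-revealing check.
# apply_discount and its rules are hypothetical stand-ins.
def apply_discount(price: float, loyalty_years: int) -> float:
    """Hypothetical behaviour: 5% off per loyalty year, capped at 25%."""
    return round(price * (1 - min(loyalty_years * 0.05, 0.25)), 2)


def test_discount_is_capped_for_long_standing_customers():
    # The information that matters: the cap holds, however loyal the customer.
    assert apply_discount(100.0, 10) == 75.0
```

The check states what the system should guarantee and why the result matters; anyone reading it gets that information without wading through steps.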

I am not saying stand back and let Rome burn, not at all. I'm saying that it is a problem that we try to enforce one standardised paradigm in all cases. We should be more sensitive to, and flexible around, each testing context. More on this later.

To overcome the barriers, we need to change how we think about testing

We need to make a paradigm shift. We need to start thinking of ourselves as the "providers of valuable information". This means that we must (i) make the information about what and how we test as valuable and relevant as possible and (ii) provide this information at the right time. If we use the familiar framework of People, Process and Tools we can tackle the simple changes that will dismantle these barriers.

We must (i) make the information about what and how we test as valuable and relevant as possible and (ii) provide this information at the right time.

People

We as testers are the people in this equation. We need to make these subtle shifts in our mindset to truly change for the better.

  • Good practice over best practice: I really despise the phrase "best practice", because our industry evolves too fast for anything to stick around long enough to be considered "the best". Stick to good practices instead: things that work for you and your context. Maybe it's something somebody else has used, or maybe it's completely off the wall but still works for you. I am always on the lookout for new things that can help or reduce effort, and our teams experiment with them in our context. If they work, great! If they don't, we chuck them out, regardless of popular opinion or some periodic report. This is more valuable for our teams because we are guided by what provides value to us, not by what some global practice suggests or dictates.
  • Skill enhancement/augmentation: We need to constantly enhance our technical skills, soft skills and general test skills to keep up with changing technologies. Formal training is cumbersome and often a budgeting nightmare, but you can easily supplement it with talks, videos, tutorials and white papers. Share these learnings in the team, discuss them, try them: it can be a great experience to discover things as a team, with approaches like pair testing. Working together is often more valuable for the entire project team because you build your understanding at a much deeper level.
  • Dev Skills in the QA team: Inject the hardcore development skills into the team for things like test automation. A trend over the last few years has definitely been to add actual dev skills into test teams to assist with test frameworks and test automation. Whether they are called Software Engineers in Test (SETs) or Developers in Test (DiTs) is not important — what is important is that they are part of the test team. Sure, they are hard to find but they are out there. While you're looking for them, it would be useful to start thinking about how to upskill those entering the job market through grad programs or internships.
  • Embed into product teams: This plays a huge role in moving the dial on how testers are perceived. Testing as a service has some major flaws; the divide that it creates is, in my opinion, not worth it. Embed testers in the teams they work with, make sure the teams working together sit in the same area, and work on the communication. This will allow you to pivot and be more nimble when making subtle changes in your project, instead of relying on documented communication structures to manage intent or changes to it.
  • Storytelling: Tell the test story. This can really help to foster a sense of engagement between you and the developers working on the project. Providing them with the context on how you thought about the change that was made or how a customer would engage with the change can make a world of difference for your relationship. Be open to feedback, what works and what does not, try new things, and experiment with your team, including the developers. This makes it easier for us all to collaborate and learn together.

Process

  • Keep it light: The more complicated your process becomes, the more difficult it is to follow and to get value out of. Let your process support the context of your business. The key is striking a balance: enough documentation, but not so much that you get caught up in it. We have trained ourselves to treat the Business Requirement Specification (BRS), the document that describes the characteristics of the proposed system from the end user's viewpoint, as the only source of truth. It is not, and treating it that way blinds us to the many features of the System Under Test (SUT) that might not be documented. There is no doubt valuable information in these documents, but treat the system itself as your source of information and check it against the BRS for differences or anomalies. Do more exploratory testing, and discover what the system actually does and how it behaves.

  • Constantly refine and iterate: Make small changes to your process and iterate, rather than trying to design the perfect process and make all the major changes in one go. Make each change significant enough to add value to the team, yet small enough not to create large-scale rework or extra work. Sometimes we need to spend time untangling messes that require substantial changes, and that is okay. If you can start small and share the end goal with the team, however, you will find that they quickly feed back to you what works and what does not.
  • Automation up, down and side to side in the Test Triangle: Turn the test triangle upside down and add as much automation at the lower levels as you can, including unit-level testing. This does not mean that we should be writing the unit tests ourselves, but we can contribute to optimising them in many forms. For example, we could review the unit tests with the developers who created them, or host walkthrough sessions on our understanding of the function, which helps the developers code for the things that we are going to look for (see the sketch after this list). Using the same tech stack as the developers also greatly enhances this process and adds value to the end result.
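
Here is a small sketch of what that collaboration might produce in practice: a developer's unit test extended with the edge cases a tester raised in a walkthrough session. The function normalise_username and its rules are hypothetical.

```python
# A sketch of tester-developer collaboration at the unit level: the developer's
# original happy-path case plus edge cases a tester raised in a walkthrough.
# normalise_username and the rules shown are hypothetical.
import pytest


def normalise_username(raw: str) -> str:
    """Hypothetical function under test: trim whitespace, lowercase."""
    return raw.strip().lower()


@pytest.mark.parametrize("raw, expected", [
    ("Alice", "alice"),      # developer's original happy path
    ("  alice  ", "alice"),  # tester: leading/trailing whitespace
    ("ALICE\t", "alice"),    # tester: mixed case with a tab
    ("", ""),                # tester: empty input should not crash
])
def test_normalise_username(raw, expected):
    assert normalise_username(raw) == expected
```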

Tools

Like any skilled worker, we need good tools but we can't solve the testing problem with tools only. We need to focus on the thinking that goes into testing as tools can't think (at least not yet!). Tools can help us execute our thinking and not the other way around. Navigating this world of tools can also be confusing so here are my top tips for staying on the right side of the tool decisions:

  • Stay within the tech-stack: There will always be exceptions, but if you treat this as a general rule you are setting yourself up for success. You will not only find awesome solutions, but you will also have the might and skills of the whole team at your disposal to solve issues and get solutions implemented. Make sure that the specific tools you use for testing, such as Test Management Tools, integrate with the tools the dev team is using. A tool that provides information outside the team's peripheral vision, forcing them to leave their workspace for follow-ups or updates to someone else's workflow, loses value and quickly loses support.
  • Fit for purpose: Pick tools for the specific problems they solve. Add these quality tools to your toolbox instead of tools that do this and that and a little bit of that and, in some instances, if you squint really hard, do this too. Even if you write your own little utility, treat it as a proper tool (if it is useful, of course) and socialise its use in the team (see the sketch after this list).
  • Build a collection of useful tools rather than one one-size-fits-all mega enterprise tool: The value of tools with specific functions is tremendous: each does exactly what it says it will do, and you will not be over-invested in technology that can become stale, or in tools that go out of support because of some acquisition strategy somewhere in the US.
  • Democratise the toolbox: As you add to this toolbox share it with the broader team, including the devs, show them what you can do with it and how it can help you solve problems. Chances are they will also find a use for it and will be able to share with you how it can be expanded and extended. Democratise the use and access to these tools for the wider team. Sharing is indeed caring!
  • Open Source First: Look for open source options first. There are some really awesome solutions out there that could solve your problem and potentially also benefit from your contributions. Extending these solutions and working around issues that crop up is infinitely easier if you can tinker with the code. As with my other suggestions, there are exceptions where a proprietary solution will be the only consideration, but don't stop looking at open source first. And be sure to give back to the communities you get these tools from, by contributing code or making a donation every now and again.
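
To illustrate the kind of "little utility" mentioned above, here is a sketch in Python of a tiny payload-diffing helper a tester might write, use, and then share with the wider team. Everything about it, names included, is illustrative rather than a recommended tool:

```python
# A hedged sketch of a small self-written utility worth socialising with the
# team: a tiny diff of expected vs actual JSON-like payloads.
from typing import Any


def diff_payloads(expected: dict[str, Any], actual: dict[str, Any],
                  path: str = "") -> list[str]:
    """Return a flat list of human-readable differences between two payloads."""
    differences = []
    for key in sorted(set(expected) | set(actual)):
        location = f"{path}.{key}" if path else key
        if key not in actual:
            differences.append(f"missing: {location}")
        elif key not in expected:
            differences.append(f"unexpected: {location}")
        elif isinstance(expected[key], dict) and isinstance(actual[key], dict):
            differences.extend(diff_payloads(expected[key], actual[key], location))
        elif expected[key] != actual[key]:
            differences.append(
                f"changed: {location} = {actual[key]!r} (expected {expected[key]!r})"
            )
    return differences


expected = {"user": {"name": "alice", "active": True}, "total": 3}
actual = {"user": {"name": "alice", "active": False}, "total": 3, "debug": True}
print("\n".join(diff_payloads(expected, actual)))
```

Shared with the wider team, a focused helper like this tends to find uses beyond testing, which is exactly the democratisation the previous point argues for.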

Tester 2.0 Loading...

Rome was not built in a day: overcoming the barriers that we have erected as testers will take time. Changing our thinking through the subtle shifts I have recommended is the key to doing this. In my own working environments, I've seen how positively teams respond to this change in thinking.


Charles Brittz is a passionate advocate for the testing craft and the value that it can bring organisations, if done right, and for the right reasons. He is a strong supporter of context-driven test approaches and embraces the learning opportunities that testing challenges afford him.

