Last week, SmartBear sponsored StarEast and took the team down to Orlando, Florida to chat with attendees. We had some great conversations about testing, our newest tools such as LoadNinja and Zephyr, and our awesome t-shirts!
While we were there, we also had the chance to sit in on a few sessions and learn from some of the very best in software testing. Here are a few lessons we learned from StarEast on continuous testing, learning, and change.
The Reality of Continuous Testing
Jeffery Payne of Coveros delivered “Cutting Through the Hype Around Continuous Testing” where he called into question what exactly continuous testing is, how it fits into the context of DevOps, and how we can actually use it in our work.
While Jeff had plenty of valuable advice and information surrounding continuous testing and DevOps, we found that his three realities of continuous testing were a good place to start:
- You will not realize significant benefits from DevOps without continuous testing - Some may think of DevOps as speed rather than quality, but with this mindset, we won’t be successful. This is further proven by the fact that continuous testing is mentioned as a key DevOps enabler in the 2018 Accelerate State of DevOps report, while the quality gap continues to grow between the highest and lowest performers. Quality is a significant component of DevOps, and we have to get everyone on board with that idea if we’re going to be successful.
- Continuous testing does not equal 100% test automation - Teams with low DevOps maturity think about it as speed and speed only -- they’re looking to automate everything without thinking it through to make sure they’re automating the right things, which means they’ll quickly start having quality problems. Most of the talk around testing has been about how we need to automate it, but there’s always going to be some level of manual testing -- and if we aren’t doing it, our end users will be.
- To be continuous, you can’t be siloed; you have to be a cross-functional team - The first aspect of continuous testing is being continuous, since testing happens before, during, and after each change. However, it also takes shifting left, or removing downstream blockers and fixing defects closer to their introduction, and shifting right, or leveraging customers and production data for feedback loops, to make continuous testing work. Additionally, presenting testing results in a way that supports business decisions will be beneficial to stakeholders. There are multiple ways teams can better enable testing activities, but following these overarching themes is paramount.
- Investment in enablers is essential to your continuous testing success - There are a few continuous testing enablers Jeff mentioned that will be key to your efforts. These include whole team quality, production-like environments, continuous integration, automated testing below the UI, service virtualization, infrastructure as code, and test data management. You cannot shift left or work continuously without these enablers.
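As a concrete sketch of two of those enablers working together -- continuous integration plus automated testing below the UI -- a minimal pipeline might look like the following. The job names, paths, and commands here are illustrative assumptions, not something from Jeff's talk:

```yaml
# Hypothetical CI workflow (GitHub Actions syntax): run the fast,
# below-the-UI test suites on every change so feedback is continuous.
name: continuous-testing
on: [push, pull_request]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: pip install -r requirements.txt   # assumed project layout
      - name: Unit and API tests (below the UI)
        run: pytest tests/unit tests/api       # assumed test directories
```

Slower UI suites and production-feedback checks would layer on top of a gate like this, rather than replace it.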
Look for Weirdness
In Paul Grizzaffi’s talk, “Well, That’s Random: Automated Fuzzy Browser Clicking,” he pointed out that if you’re testing the same thing over and over, you’re going to find the same bugs. To find bugs out in the wild, you have to go off the beaten path.
HiVAT (high-volume automated testing) is a family of testing techniques that catch defects that are hard to find with traditional automation -- fuzzing is one HiVAT technique for testing without an oracle, that is, without a single source of truth that tells you whether a test case passed or not.
Instead, fuzzing is interested in whether something bad happened, such as a site crash or a 404. By randomly varying our inputs, we can find out what might create a critical problem in the application that we might not otherwise have caught.
It’s impossible for testers to think of every single possible input (although the Big List of Naughty Strings is a good place to look for inspiration). Randomly creating inputs can actually help you look for things that are valid but not intuitive.
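To make the "no oracle" idea concrete, here is a minimal fuzzing sketch. The `parse` function and its hidden defect are hypothetical stand-ins for real application code; the harness never checks return values, only whether anything bad (an unhandled exception) happens:

```python
import random
import string

def fuzz(target, runs=1000, seed=42):
    """Call `target` with random strings and record inputs that raise.

    No oracle: we never inspect the return value, only whether
    "something bad" (an unhandled exception) happened.
    """
    rng = random.Random(seed)  # seeded so failures are reproducible
    alphabet = string.printable
    failures = []
    for _ in range(runs):
        payload = "".join(
            rng.choice(alphabet) for _ in range(rng.randint(0, 50))
        )
        try:
            target(payload)
        except Exception as exc:
            failures.append((payload, exc))
    return failures

# Toy function under test with a hidden defect: it chokes on '%',
# a stand-in for the unintuitive-but-valid inputs fuzzing surfaces.
def parse(value):
    if "%" in value:
        raise ValueError("unexpected escape sequence")
    return value.strip()

crashes = fuzz(parse)
```

Seeding the random generator is the design choice worth noting: it keeps the run reproducible, so any crashing input can be replayed and turned into a regular regression test.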
While Paul used a link clicker to automate randomization, he urged those who may not have the resources or time to build one to consider other ways to introduce randomization or an unintuitive path, and to explore whether HiVAT could help them find bugs they may never have found before -- in Paul’s case, it actually helped uncover some critical issues for his clients.
None of these techniques should replace what you’re doing today. Instead, they should be another piece of your test automation strategy. Paul emphasized that testing is an incomplete puzzle, and there’s always going to be something we haven’t thought of, but being creative and thinking outside the box is essential to keeping the testing mindset sharp.
Take Precautions Against Smelly Code
Angie Jones took us through the good, bad, and the smelly of test automation code in “What’s That Smell? Tidying Up our Test Code” and showed us that following the same patterns over and over again is a recipe for flaky tests and poor automation.
The problem is that many of us follow tutorials that are meant to help get started with automation, but they lack architectural complexity. This can result in code smells where the code technically works but violates fundamental design principles so that it’s difficult to maintain and manage -- and could even introduce new bugs.
Angie took us through a few common code smells one by one to teach us about tidying up our tests. Here are a few examples to keep in mind when creating your test automation scripts:
- Long Class - Why is it problematic to have a class with enough lines of code that it seems you’re endlessly scrolling? It often indicates there’s no single responsibility and that the class is doing more than it should be, which also makes it difficult to find things and maintain the code. To clean it up, separate concerns.
- Duplicate Code - It can be very tempting to copy and paste your code when something first works for you (or you’re trying to code quickly), but this means that any change needed has to be made in multiple spots -- plus, it can create other code smells. Eliminate duplication where you can in your code.
- Flaky Locator Strategy - Your locators might work for you now, but is your strategy intelligent, or is it asking for trouble? This is the top cause of flaky tests -- your locators are fragile and unreliable. To clean it up, see how you can stabilize your locators.
- Inefficient Waits - Hard-coded sleeps slow down your runtime and create flakiness. Additionally, different environments may require different wait times, which is why you should use an explicit wait such as WebDriverWait rather than a hard sleep.
- Multiple Points of Failure - Your framework is not supposed to determine whether a test has passed or failed -- it should be neutral, giving you information about your application. Clean it up by increasing its flexibility.
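As a small illustration of cleaning up the duplicate-code smell, here is a hypothetical before-and-after: instead of every test copy-pasting the same request body, one helper owns its shape, so a field rename happens in exactly one place. The payload fields are invented for the example:

```python
# Smell: each test builds the same login body by hand (copy-paste).
#   body = {"username": "amy", "password": "s3cret", "remember_me": False}
#
# Fix: extract the duplicated construction into a single helper.
def login_payload(user, password, remember=False):
    """Single source of truth for a (hypothetical) login request body."""
    return {"username": user, "password": password, "remember_me": remember}

def test_login_default():
    # Tests now express intent, not payload plumbing.
    assert login_payload("amy", "s3cret")["remember_me"] is False

def test_login_remembered():
    assert login_payload("amy", "s3cret", remember=True)["remember_me"] is True

test_login_default()
test_login_remembered()
```

The same extraction works for UI steps: a shared login helper or page object keeps a selector change from rippling through dozens of tests.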
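To show why an explicit wait beats a hard sleep, here is a framework-free polling helper that mirrors the idea behind Selenium's WebDriverWait; it is a sketch, not Selenium's actual implementation:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns something truthy, or time out.

    Unlike time.sleep(5), this returns the moment the condition holds,
    so fast environments don't pay the worst-case delay, while slow
    environments still get the full timeout before failing.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout:.1f}s")
        time.sleep(interval)
```

With Selenium itself, the equivalent is `WebDriverWait(driver, timeout).until(...)` with an expected condition, rather than a fixed `time.sleep` guess.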
What’s Changed...And What Hasn’t
Over the years, the ideas surrounding testing have changed and evolved -- just ask Dorothy Graham who has been in the industry for over 40 years and has given 394 presentations at 246 conferences in her career.
As we consider the lessons learned at StarEast and look toward the future of testing, one of Dorothy’s sentiments in particular rings true: the only thing constant is change. Even those who may have only been in the industry for a year can likely attest to this.
However, despite these changes -- like shifting left, transitioning to DevOps, continuous testing, and artificial intelligence -- there will always be a need for testing. While the role of the tester will continue to change and evolve, the need for someone to look at these technological changes with a testing mindset will remain.
The job of a tester isn’t going anywhere anytime soon, but we do need to polish the skills that will keep us relevant as technology changes and new products emerge. Continuous learning will be the key to success in our teams, our organizations, and in the world.
So, what’s changed, what’s coming, and how can we stay relevant? Check out our interviews with Michael Bolton, Angie Jones, and Paul Grizzaffi from StarEast to find out what they think.