Many of the trends I’m listing here took place earlier with other software development activities like functional automation testing, but only now are performance testing activities catching up.
Shift from Performance Testing to Performance Engineering
One of the most significant quality trends I've been seeing over the last few years is the move from doing performance testing alone to embracing a full software development lifecycle (SDLC) performance engineering practice.
But what is performance engineering?
Performance engineering is a cultural shift in the way organizations view their essential processes. Performance engineering embraces practices and capabilities that build quality and performance throughout an organization.
Todd DeCapua, author of Effective Performance Engineering, has said that performance engineering means understanding how all the parts of the system fit together, knowing which metrics matter, and building in performance quality from the first design.
Scott Moore, a performance testing consultant and speaker at the 2020 PerfGuild, expresses a similar belief when he says that the days of testing performance into a product are over. It's too risky, and it creates too much technical debt to go back and do it later.
This is the same type of shift that occurred with functional automation years ago. Rather than have one person responsible for automation, everyone on the sprint team becomes responsible.
So how can you help your company transform into a performance engineering culture?
You might think the first step is technical, but I have a tremendous piece of actionable, non-technical advice you can use right now: MIP it.
One thing you can implement anytime, anywhere with performance or any other quality concern is to MIP it (Mention in Passing).
Scott Barber explained to me that there was a time when he would “MIP” to anyone he passed in his office, “How's performance today?”
That person could be anyone, whether they were directly involved in his project or not.
In fact, he would MIP to anyone in the building, “How's performance today?” And at the end of the first week, he says performance increased significantly.
Be sure to check out Scott Moore’s Effective Performance Engineering session at this year’s PerfGuild. It’ll be a great introduction for folks who are new to performance engineering and want to learn some actionable ways to get started.
Of course, this is just a start; the real push is to get your agile team thinking about performance as early as possible, not after the fact.
Shift Left Performance
Similar to how you can get developers more involved in functional automation by using the same languages and tools they use in their day-to-day workflow, you can do the same to move performance testing earlier in your SDLC.
Because of this, there is a trend toward creating performance tests using developer-friendly tools and languages, which allows performance tasks to fit into your developers' current ecosystem.
For example, if you have Python developers, you can introduce a tool like Locust that lets you create Python-based performance-testing scripts. If you use Locust, everything is in Python; everything is in code.
You can even use PyCharm as an IDE.
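To make that concrete, here's a minimal sketch of what a Locust script can look like. The endpoints and task weights are hypothetical placeholders, not a recommendation for any particular application:

```python
# locustfile.py - a minimal sketch of a Locust performance script.
# The endpoints and task weights below are hypothetical placeholders.
from locust import HttpUser, task, between

class WebsiteUser(HttpUser):
    # Each simulated user waits 1-5 seconds between tasks.
    wait_time = between(1, 5)

    @task(3)  # weighted: browsing happens 3x as often as viewing the cart
    def browse_products(self):
        self.client.get("/products")

    @task(1)
    def view_cart(self):
        self.client.get("/cart")
```

You'd then run something like `locust -f locustfile.py --host https://staging.example.com` (the host is a placeholder) and drive the load from Locust's web UI, all without leaving the Python ecosystem.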
Another popular technology option is k6, which focuses on giving your developers the best experience for load testing. k6 takes a code-based approach to developing load test scripts.
These dev-friendly tools allow you to treat your performance testing scenarios and test cases like any other application code. This, in turn, enables you to shift your automated performance testing efforts earlier in the software development lifecycle.
Cristian Daniel Marquez Barrios, a front-end developer with over ten years of experience, gave me an excellent recommendation to get developers involved in creating more performant software.
He said that all front-end developers should be using, at a minimum, Google Lighthouse metrics that are available in Google Chrome tools.
When you open DevTools in Chrome and navigate to the Audits tab, you can run a Lighthouse audit. It reports your site's performance while simulating a mobile device from a desktop browser, which makes it a perfect tool to start with.
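If you want to go beyond the DevTools panel, Lighthouse also ships as a command-line tool you can script. Here's a rough sketch, assuming the Lighthouse CLI is installed via npm; the URL is a placeholder:

```python
# run_lighthouse.py - sketch of scripting a Lighthouse performance audit.
# Assumes the Lighthouse CLI is installed (npm install -g lighthouse);
# the URL is a hypothetical placeholder.
import json
import subprocess

url = "https://staging.example.com"
subprocess.run(
    [
        "lighthouse", url,
        "--only-categories=performance",  # skip SEO/accessibility audits
        "--output=json",
        "--output-path=report.json",
        "--chrome-flags=--headless",
    ],
    check=True,
)

with open("report.json") as f:
    report = json.load(f)

# Lighthouse scores are 0-1; multiply by 100 for the familiar scale.
score = report["categories"]["performance"]["score"] * 100
print(f"Lighthouse performance score for {url}: {score:.0f}")
```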
A pillar of performance engineering is that performance needs to be baked into your software from the beginning, with developers creating performant code in your product from the start.
Check out Alyson Henry & Matthew Andrus’ PerfGuild session, Why Good Test Scripts aren't Enough: Shifting Left and Automation as Keys to the Future of Testing, for a great example of how to achieve this.
This trend can help get your whole team focused on creating performant software from the beginning.
Another approach is to leverage your team's existing functional Selenium tests to drive browser-level performance tests and other performance engineering activities.
Browser-Based Performance Testing
You might be wondering why you shouldn't just use a non-UI, protocol-based approach.
Modern JavaScript front-end frameworks like Angular and React make it very difficult to create a performance script at the protocol level, which is the traditional way testers have been developing performance tests since the mid-nineties.
In such cases, the interaction between the UI and the server is nothing like what you'd find in a typical, older client-server application architecture.
With protocol tools, you would have to do additional scripting and programming to capture those kinds of transactions.
So, the process of setting up a script becomes increasingly tricky.
By focusing on browser-based performance script creation, you spend less time on the manual editing, correlation, and programming that would otherwise be necessary with a protocol-based recording tool.
Furthermore, because this approach uses a real browser, and more people are familiar with interacting with an actual browser than with a protocol, it helps shift performance activities left so that more of the team can get involved.
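As a taste of what this looks like in practice, here's a hedged sketch that reuses a functional Selenium flow to collect browser-level timings via the W3C Navigation Timing API; the URL is a placeholder:

```python
# selenium_perf.py - sketch: reusing a functional Selenium flow to capture
# browser-level timings via the Navigation Timing API.
# The URL is a hypothetical placeholder.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://staging.example.com")

    # Pull the browser's own measurements of the page load (milliseconds
    # relative to the start of navigation).
    timing = driver.execute_script(
        "const [nav] = performance.getEntriesByType('navigation');"
        "return {load: nav.loadEventEnd, firstByte: nav.responseStart};"
    )
    print(f"Time to first byte: {timing['firstByte']:.0f} ms")
    print(f"Page load time:     {timing['load']:.0f} ms")
finally:
    driver.quit()
```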
Nicole van der Hoeven’s PerfGuild talk will be about how to combine both browser and protocol-level performance scripting to create a cool hybrid load testing solution. (Don’t miss it!)
Does it replace protocol-level performance testing? No.
But it's another tool you can add to your performance toolbox to help transform your team toward performance engineering.
Shift-Right AI Performance Engineering
I first heard the term AIOps from Jonathon Wright. AIOps is about expanding artificial intelligence (AI) beyond functional testing and applying it to all shift-right software development activities, including performance testing.
Although many companies have started to make the shift right, they still tend to focus on simple analytics.
Jonathon believes that digital experience analytics are incredibly important, but that we need to stop focusing on a single web page or mobile app and start thinking about the entire customer journey.
Jonathon will present a real-world scenario using all open-source tools for his Shift-Right Digital Experience Analytics talk at the 2020 PerfGuild.
For example, an AIOps tool monitors things in production.
It can then take that monitoring information and automatically create a performance test model of what actually happened in production.
It can use that model to auto-generate your performance tests and make use of all the information produced in your SDLC pipeline from beginning to end.
It can also inform your decisions by using machine learning to surface key insights you might not have noticed in your pipeline.
James Pulley also believes there are excellent uses for AI and machine learning intelligence in performance testing.
If you look at what we do in performance testing and performance engineering in general, it is pattern-based. And the conventional AI used today is less expert-system-based and more pattern-recognition-based.
This type of AI doesn't ask questions and assign weights the way an expert system does.
Instead, it behaves more like it’s looking at a pattern and observing match conditions. Is it a normal condition or an exception? And if it's an exception, does it match other exception patterns?
When you’re talking about resource utilization and things of that nature, AI can be very useful.
AI is also great for looking at the behavior of systems: going through the logs and saying, “Hey, here's what 90 percent of our people are doing. Here's how our users normally navigate our site. Here are the most common user workflows,” and then building a model of what that should look like.
AI can then generate performance scripts for you based on how your real users actually behave in production. It can also help create test data to use.
This helps to create more realistic performance scenarios and a transaction mix for your tests.
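To illustrate the underlying idea (not any specific AI product), here's a toy sketch that mines an access log for the most-hit endpoints and turns them into a transaction mix for a load test. The log path and format are assumptions, and real AIOps tooling would go much further (sessionization, sequence modeling, anomaly detection):

```python
# transaction_mix.py - toy sketch of deriving a load-test transaction mix
# from production access logs. The log path/format are assumptions.
from collections import Counter

hits = Counter()
with open("access.log") as log:  # hypothetical combined-format access log
    for line in log:
        parts = line.split('"')
        if len(parts) > 1:
            fields = parts[1].split()  # e.g. ['GET', '/products', 'HTTP/1.1']
            if len(fields) >= 2:
                hits[fields[1]] += 1

total = sum(hits.values())
print("Suggested transaction mix:")
for path, count in hits.most_common(10):
    print(f"  {path}: {count / total:.1%}")
```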
Monitoring things in production is great, but how do you plan for the unknowns that tend to happen once your system is live?
Chaos Engineering/Resilience Testing
If the COVID-19 pandemic has taught us anything, it’s that we need to make sure our software is more resilient.
We have seen scenarios in production that most teams probably hadn’t anticipated. But how do you plan for unknown behavior?
Examples of companies that learned this the hard way include Robinhood, the SBA, and Epic Games, all of whose systems went down due to unexpected spikes.
The best approach is to proactively test how the system responds under stress.
You can then identify and fix failures before they impact your customers or cause damage to your reputation due to the poor publicity an outage could receive.
The idea of chaos engineering is to compare what you believe will happen in your distributed system against what actually happens.
To learn how to build resilient business software systems, you can use a chaos testing tool to break things in your environment on purpose and see whether the system actually fails the way you believed it would.
Chaos engineering is a growing area, and it's achieved through thoughtful, well-planned experiments that help reveal weaknesses in your systems.
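As a trivial illustration of such an experiment (not how any particular chaos tool works), here's a sketch that injects a failure by stopping a random Docker container and then checks whether a health endpoint stays up. The container names and URL are placeholders, and purpose-built tools are far safer for real use:

```python
# chaos_experiment.py - toy chaos experiment: stop a random container and
# verify the service's health endpoint survives. Names and URLs below are
# hypothetical placeholders.
import random
import subprocess
import requests

HEALTH_URL = "https://staging.example.com/health"  # hypothetical
candidates = ["app-replica-1", "app-replica-2", "app-replica-3"]

# Hypothesis: with one replica down, the service stays healthy.
victim = random.choice(candidates)
print(f"Stopping container: {victim}")
subprocess.run(["docker", "stop", victim], check=True)

try:
    response = requests.get(HEALTH_URL, timeout=5)
    healthy = response.status_code == 200
except requests.RequestException:
    healthy = False

print("Hypothesis held." if healthy else "Weakness found: service went down.")
# Clean up: restart the container after the experiment.
subprocess.run(["docker", "start", victim], check=True)
```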
Tammy Butow, Principal Site Reliability Engineer at Gremlin, suggested we think of chaos engineering as being like preventative medicine: a disciplined approach to identifying failures before they become outages.
Another, similar type of testing that can help is robustness testing.
Dawn Haynes, CEO & Testing Yogini at PerfTestPlus, has been a big advocate of adding more robustness testing to our test plans.
The industry glossary definition of robustness is the way software responds in the face of invalid inputs or unexpected or stressful environmental conditions.
Dawn likes to define it as applications consistently delivering value to customers without disappointing or frustrating them in any particular way.
So what would be a requirement for robustness? Maybe we just look at how our apps handle invalid inputs, or how we handle stressful environmental conditions.
But if we were to expand it and say it’s about the user experience, how robust are we for user interaction?
Do we fall down when we get surprised by something, or do we handle surprises in a way that does two things: informs the user about what's going on right now, and makes some sort of suggestion about what to do next?
If we're surprised by something, can we do anything other than fail? Or can we expect that throughout the run of a piece of software we will be surprised by something?
And if a user is surprised by something not expected, what should we do about it next? What are the options in the face of surprises?
I expect variance, chaos, and resilience testing to be significant areas of focus for teams in 2020 and beyond due to the COVID-19 scare.
That’s why I’m excited to be offering two sessions on how distortion (Paul McLean) and variance (James Pulley) can help you create more realistic load tests that plan for unknowns.
Performance in CI/CD
So where does performance fit into a DevOps pipeline?
Paola Rossaro, Co-Founder and CTO at Nouvola, believes there is a trend underway for performance tests to become part of your acceptance tests.
Automation is vital, and having a reliable reproduction of the production environment is a must; rapid feedback lets you resolve issues before they reach the user.
This is what we call continuous testing.
Continuous testing is a key component of continuous integration and continuous delivery, enabling testing at speed.
What good is a streamlined continuous delivery process if the only way to find out that your product's performance isn't working well is via a help ticket opened by a user?
Testing and continuous integration actually go hand in hand; each step in the continuous delivery process has testing associated with it.
So, from the development environment, through the integration and test environments with functional testing, to a pre-production environment with performance and acceptance testing, and on up to production, you then have synthetic monitoring, real-user monitoring, and general monitoring to make sure performance is holding up.
In all those steps, we can see continuous delivery and continuous testing with performance testing really do work together.
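As a simple example of wiring performance into a pipeline, here's a sketch of a gate script a CI job could run after a load test: it fails the build if the 95th-percentile response time exceeds a budget. The results file and threshold are assumptions:

```python
# perf_gate.py - sketch of a CI performance gate. Assumes a prior pipeline
# step wrote per-request response times (in ms) to results.csv, one per line.
import statistics
import sys

P95_BUDGET_MS = 800  # hypothetical performance budget

with open("results.csv") as f:
    times = [float(line) for line in f if line.strip()]

# statistics.quantiles with n=20 returns 19 cut points; index 18 is ~p95.
p95 = statistics.quantiles(times, n=20)[18]
print(f"p95 response time: {p95:.0f} ms (budget {P95_BUDGET_MS} ms)")

if p95 > P95_BUDGET_MS:
    sys.exit("Performance budget exceeded - failing the build.")
```

A non-zero exit code is all most CI systems need to fail the stage, so a gate like this slots into virtually any pipeline.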
Look for more teams to start integrating performance into their CI/CD pipeline in 2020 and beyond.
Performance Trends for 2020 – What did I Miss?
Those are my top six predictions for 2020. What do you think? Let me know in the comments.
Also, to properly prepare for these technologies and trends, I highly recommend you register for PerfGuild 2020 and get a jump start on many of the trends we covered here.
Register now for ===> PerfGuild 2020