5 Test Automation Trends of 2024 That You Should Question
This trend post differs from all the other trend posts you’ll read in the coming months. It’s candid and questions what’s going on in the test automation market. We’re not seeing much innovation from the vendors, but lots of marketing buzz and hype without benefits.
It’s not easy to ride the AI wave and integrate leading-edge technology into an existing tech stack that wasn’t built for it. But no one wants to be left behind. We get that.
This post is not about finger-pointing or being cynical. We want to shine a light on what’s behind some of the trends and hypes coming in 2024, so you can make an educated decision about whether they’re for you and tell marketing buzz apart from real value.
Let’s dive in with the already rising trend of GenAI in test automation.
GenAI is currently everywhere; there’s hardly an area of the software industry where you don’t bump into it. And we love GenAI. It’s great.
There are two major trends we see. One is generating test cases (spoiler alert: this is not the best way to use it today); the other is prompting your test steps.
At first glance, both seem to be the same. But they aren’t. Let’s dive into two practical examples outside of testing.
You ask GenAI to generate an anime drawing. It’s generated with Stable Diffusion, but on closer inspection, it’s unconvincing. It takes a lot of attempts until you get a solid result. Frankly, it would be faster and cheaper to hire someone on Upwork to draw it.
On the other hand, we have Photoshop’s Generative Fill, Adobe’s GenAI feature. You select a smaller area of the picture and say, for example, that you want to replace the bicycle with a motorbike. This way, the diffusion model gets a lot of context from the other parts of the picture.
As a result, the generated part, “the motorbike” in this example, fits well into the picture. Sure, it doesn’t always generate great results, but the ratio of high-quality output is much higher than in the first example.
If you generate the complete picture without further context, it’s time-consuming, and the result is mostly unsatisfying. If you use GenAI for smaller, context-driven parts, it’s a game-changer.
If you get a GenAI feature to generate test cases wholesale, it’s too general, and the output is not beneficial. Let’s say it generated 1,000 test cases, and 200 of them fail. What are you going to do with that information? What value do you get out of it? Do these test cases make sense from a user and tester perspective? Are they failing because of the subject under test (SUT) or because of the test cases’ quality?
The Photoshop Generative Fill equivalent in testing is prompting your test steps. Step by step, you tell the system what you’d like to do. It’s like exploratory testing, but instead of clicking through the application, you prompt. The system recognizes the interactions and creates the steps: the GenAI translates your request into actionable steps in the context of your SUT.
You can prompt what you want to do, and the system creates automated tests. This way, you have all the exploratory tests handy later.
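To make this concrete, here’s a minimal, hypothetical sketch of what prompting test steps could look like under the hood. Everything here, the `Step` record, `prompt_step`, and the stubbed model call, is an illustrative assumption, not any vendor’s actual API:

```python
# Hypothetical sketch: each natural-language instruction is resolved,
# against the current state of the SUT, into one concrete interaction
# that is also recorded as a reusable test step. The model call is
# stubbed; all names here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Step:
    instruction: str  # what the tester prompted
    action: str       # what the system actually did, e.g. "click #checkout"

recorded_steps: list[Step] = []

def prompt_step(instruction: str, page_context: str) -> Step:
    # In a real tool, a model would map the instruction plus the current
    # UI state to one concrete action; here we only stub that mapping.
    action = f"resolved '{instruction}' on {page_context}"
    step = Step(instruction, action)
    recorded_steps.append(step)  # the exploratory session becomes a test
    return step

prompt_step("put the cheapest item in the cart", page_context="catalog page")
prompt_step("go to checkout", page_context="cart page")
# recorded_steps can now be replayed as an automated regression test
```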
Besides GenAI, there are two other buzzwords on the hype train: AI and autonomous testing. And they will rise immensely in 2024.
You need to have AI to have a chance in the market. Customers and users are demanding it. Or, better said, everyone wants to be at the cutting edge of the latest technology, looking for tools with fancy AI features.
More and more vendors will claim they have AI and that their AI is super cool and the latest must-have, even though it only shines from the outside. Spoiler alert: sometimes it’s not even AI, and most of the time it’s not useful at all; quite the contrary.
We get it. AI is excellent if used in a meaningful way. I mean, we have an AI-driven virtual user for autonomous software testing 😉 And yes, that’s a lot of buzzwords in one phrase. But as long as it keeps up with its promises, it’s valid, unique, and the latest sh*** on the market.
Let’s go back to the AI trend of 2024. Vendors will add AI to their tools just to have AI. It looks fantastic, sounds fancy, and even gives a glimpse of usefulness, but at a closer look, it’s nothing you want in your automated testing.
Why, you’re wondering right now?
Let’s analyze a real-life example I ran into last week.
Someone claimed something like, “our artificial intelligence can recognize your software with its best-in-class models with a 99% success rate in re-runs.”
It sounds excellent, and it’s hard not to get trapped by the 99%. But what does it actually mean to have a success rate of 99%?
Let’s say you have 100 interactions in your test case – which is a small test case (don’t confuse test steps with interactions) – and each interaction passes with a 99% probability. The chance that a whole run passes is then 0.99^100, roughly 37%. In other words, almost two out of three runs fail, and on average one interaction per run breaks, for reasons that have nothing to do with the application under test.
That 1% per interaction compounds into massive flakiness.
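A quick back-of-the-envelope sketch in Python makes the compounding visible (assuming, generously, that interactions fail independently of each other):

```python
# How a 99% per-interaction success rate compounds over a whole test case,
# assuming interactions fail independently of each other.

def run_pass_probability(per_interaction_rate: float, interactions: int) -> float:
    """Probability that every single interaction in one run succeeds."""
    return per_interaction_rate ** interactions

for n in (10, 50, 100, 500):
    print(f"{n:>3} interactions at 99% each -> "
          f"{run_pass_probability(0.99, n):.1%} chance the run passes")

# Output:
#  10 interactions at 99% each -> 90.4% chance the run passes
#  50 interactions at 99% each -> 60.5% chance the run passes
# 100 interactions at 99% each -> 36.6% chance the run passes
# 500 interactions at 99% each -> 0.7% chance the run passes
```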
Automation is not helpful if it puts additional manual effort on your daily task list.
A 99% success rate gives you the feeling of super-stable automation. More and more of these examples will pop up in 2024. Remember to question those numbers and interpret them correctly. Don’t let the marketing and the numbers dazzle you.
The trend is increasingly going towards autonomous testing. I would even say it’s going to be the buzzword of the year 🏆
It will take some more time until full autonomy is possible. But in 2024, the first step in this direction will become popular: partially autonomous building blocks. That means closed actions are handled autonomously, like a login. You only say it’s a login, and the system recognizes whether it’s two-factor authentication and a code needs to be grabbed from the mobile phone or e-mail.
Such blocks need to be easy to oversee and must be closed actions (no dependencies), so that their requirements and functionality can be clearly defined. The task for the system is: recognize all the ways I could log in, with the result being that I am logged in.
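As a purely hypothetical illustration (this mirrors no vendor’s actual API), such a building block could be declared by its goal rather than by scripted steps:

```python
# Hypothetical sketch of a partially autonomous building block: the tester
# declares *what* must be true afterwards; discovering the concrete path
# (password form, 2FA code from phone or e-mail, SSO redirect) is the
# system's job. The whole interface is an illustrative assumption.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AutonomousBlock:
    """A closed action described by its goal, not by scripted steps."""
    goal: str
    success_check: Callable[[str], bool]  # verdict based on the resulting page
    inputs: dict = field(default_factory=dict)

login = AutonomousBlock(
    goal="user is logged in",
    success_check=lambda page_text: "Logout" in page_text,
    inputs={"user": "alice", "credential_source": "vault"},
)
```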
As it is with AI, you're going to hear "autonomous" everywhere, and most of the time, it has nothing to do with autonomy in testing. No one wants to be left behind. And since you don't get a solid, stable autonomous test solution overnight, vendors will bolt autonomous features onto old technology, or new vendors claiming to be autonomous will pop up.
Most of the autonomous testing features added in 2024 will be web crawlers: they follow all the links, take screenshots, then autonomously go through the links again and compare the stored screenshots with the live application. If a page differs from its screenshot, the application is flagged as behaving differently.
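A rough sketch of that pattern shows how little is going on (Playwright’s sync API is assumed for the browser part; function and path names are illustrative, and the naive byte-for-byte diff is deliberate):

```python
# Minimal sketch of the crawl-and-compare pattern: visit links, screenshot
# each page, diff the bytes against a stored baseline. Playwright's sync
# API is assumed; function and path names are illustrative.

from pathlib import Path
from playwright.sync_api import sync_playwright

def crawl_and_compare(base_url: str, links: list[str], baseline_dir: Path) -> None:
    baseline_dir.mkdir(parents=True, exist_ok=True)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        for link in links:
            page.goto(base_url + link)
            current = page.screenshot()
            name = link.strip("/").replace("/", "_") or "index"
            baseline = baseline_dir / f"{name}.png"
            if baseline.exists() and baseline.read_bytes() != current:
                print(f"{link}: page differs from its baseline screenshot")
            else:
                baseline.write_bytes(current)  # first run just records baselines
        browser.close()

crawl_and_compare("https://example.com", ["/", "/pricing"], Path("baselines"))
```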
That's not autonomous testing. There is no interaction with the system itself.
Be open to what's coming, but question shiny demos and marketing slides: is it autonomous testing, or is it just about being part of the buzz?
Since it's hard to come up with innovations, and you're limited if a tool is based on technology from the '90s, the strategy of acquiring smaller players in the market is on the rise in 2024.
Bigger players in the market will add tools and services to their portfolios to have a broader offering. The challenge is integrating them with the existing portfolio. Mostly, it ends in overlapping offerings, and this is quite confusing for customers. You don't know which tools or offers you should buy, or why you should get two tools that overlap functionality-wise. What do you solve with which tool?
The focus is less on automation and more on covering everything with the tool stack offered. It's a motley assortment of tools and services that can't be seamlessly integrated as they were never built to be a tool suite.
Nevertheless, it may suit some to have one vendor with the complete tech stack, and it may unsettle a lot of companies because of the overlaps. In the end, you may still end up buying tools from different vendors to have your needs perfectly met.
In 2024, more and more promises will be made, with enticing and intriguing lingo used in place of innovation and technological novelty.
We're back at the same point as before: distinguishing valuable new stuff from marketing buzz.
Chances are there will be something new launched. But we see a tendency: the (more prominent) players in the market who haven't brought anything new technology-wise in recent years mostly don't keep up with their promises.
That's part of the game, so nowadays, you need to sharpen your inspector skills and see what's worth evaluating and what's not.
There are tools out there with great technology that release amazing new stuff regularly. Since there's no label for that, dig in and inspect promises and tools closely before you commit to something you'll regret later.
It's not about finger-pointing, as I said in the intro. The game has changed, so different skills and questions are needed for using and evaluating test automation; that's all. And that's one of the significant shifts that will keep growing: less innovation, more buzz, and more inspector-like skills needed from users and decision-makers.