Breaking Boundaries with Extreme Shift-Left Testing
Would you go exploring the Amazon wilderness or walking in a small city? Would you go scuba diving or lie on a white-sand beach? Would you go surfing or take a cruise? Would you taste the exotic local cuisine or stick to what you eat at home?
Learn how testers' drive to explore brand-new territories can contribute to the early stages of building customer relationships in software development.
Whatever your choice, you're embarking on an adventure. The difference is the level of comfort, which comes down to the level of uncertainty in your trip.
When we talk about extreme sports, some of us are terrified, and others get excited. We're terrified by the danger and the discomfort. We're excited about pushing our own limits and proving we can do it.
If all of us were passionate about extreme adventures, there would have been no humankind. If all of us put safety first, there would probably still be no humankind, or at least we would have been stuck in a very early stage of our evolution, all living in a tiny village.
The popular notion about people doing extreme sports is that they're restless and blindly take high risks for no reason. In fact, extreme sports professionals spend years in training, preparation, and risk analysis. The risk is high, and it's often a solo activity with no one to help if something goes wrong, but they fully understand that risk and are prepared for it.
On the other hand, the fact that some people feel more comfortable playing by the rules doesn't mean they don't need excitement. Every sport takes years of training and practice to get better and better, and also requires care not to harm yourself while practicing or competing.
Now, what does it all have to do with software testing?!
Testers have traditionally been set to compete indoors, following a drawn track. For years, we've been taught that exploratory testing is done at the end of the cycle, when... well, there's something to explore.
Testers have generally felt comfortable with this position. They know their place; the tester's role has been clearly defined - find bugs, sometimes by exploring a jungle of untestable components spread across the shores of deep, muddy rivers full of bugs and piranhas and endless dark abysses.
It's funny, but it's not true.
Testers work with facts. The code is written and deployed. That's a fact. Whether it works or not is also a fact. There's actually no risk there. No uncertainty. It's a very comfortable... job, and there's a simple (not at all) explanation. Working with facts means clarity. Clarity creates serotonin.
But there's something missing for the extreme souls. What we haven't yet developed enough is the extreme sport of testing - Extreme shift-left testing.
We know about shift-left testing. Shift-left testing is testing early in the software development life cycle. Maybe you'd call it agile testing. Maybe you'd even throw in the Three Amigos technique.
These are good. These are the first steps toward shifting left. But there's more.
What's the difference?
In agile testing, testing still starts after the user story is written - after it has already been decided what to prioritize and what to build. The "What" is already defined. We're not at the zero coordinate; we're already much further to the right.
This is a problem, because a good portion of the timescale has already been wasted before the Developers and the Testers get involved. They're typically brought in too late, after months or even years of negotiations and prioritization.
It's not their job, you'd say. Well... their job is dramatically affected by what's not their job.
If you depend on something and have the tools, techniques, skills, and attitude to contribute to it, it's a pure and sad waste not to.
When does everything actually start, though? It depends. It depends on where we get the requirements from. Is it a tender for a big new customer? Are we developing our own product and have done significant market research to come up with the idea for a new feature? Is it a change request from a support ticket, or are we just following the roadmap we built two years ago?
The scope of this article is something new rather than something old. Imagine we're building and selling a new product, or engaging with a new customer for a long-term project.
Let me assume that if I asked you what extreme shift-left was, you would have said, "Testers are part of the Discovery phase". That's basically the best shift-left you can get out there right now.
But that's not extreme. There's a lot known already. It's pretty safe there.
What we're talking about today is bringing the testers to the Sales table.
There's a huge gap between the Marketing, Sales, and Development teams.
In teams with a whole-team approach, where the Product Owners are part of the team, the gap is by design - organizational design. If the Product Manager instead sits somewhere between Sales and Devs, a broken-telephone situation is guaranteed even if all the skills are available. It's a common problem.
If you ask a Development team about the Sales team, you'll not hear the end of it.
Why is that? Because they are two different universes. They are literally aliens to each other. There are no Sales and Development processes that are integrated. It's a waterfall design, even in the most agile organizations.
You might ask: "But what about startups?"
What about them?! Development teams don't do Sales. There's always someone on the team who's more "customer-oriented", and once the startup grows enough, the huge gap immediately separates them.
So, how do we close the gap, engage the team, get more involved in the "Why" question, and prevent unnecessary commitment mess?
Bring a tester to the Sales table.
It might be just me, but I can't work effectively if I don't know the "Why" and if I don't have the big picture. I'm just too curious all the time. That's a problem for my efficiency, but knowing the reasoning is liberating and inspiring. When you know the "Why", you can innovate. If you only care about the "What", that's limiting. Yes, it's a certainty, and perhaps you believe that's your job. But just imagine the freedom and joy of understanding that the "What" is not what it looks like. That the customer only thinks they have a problem that needs this new feature, when in fact they could make a small tweak to a routine and combine two of our existing features, whose expansion has been on the roadmap but never had a good business case to prioritize it.
Or your Sales team keeps selling the components with the highest technical debt, the ones that aren't configurable at all, as your core and most advanced technology, while you know you need to scrap them and just write two new ones. It will be faster and safer, and it will look pretty good.
How often do you have to reprioritize the entire Sprint right after yesterday's Sprint planning because you're suddenly informed that the Sales team is going to a conference next month and needs this cool new demo-only feature?
You might say: "But all of these are organizational issues! It's not the tester's job to solve them!"
True. True. True. You're right.
But I'm sure you know the key to change also, right? Baby steps.
Remember, we have to build a bridge over a wide, dark and deep abyss. It will not just appear out of nowhere.
The other key to consider is the sense of extreme. Not all testers want to go rock climbing, so you can't just send them into the wild.
They need to want it, and they need to study, prepare and train.
The good news is that testers have been negotiating for years, through learning and practicing testing psychology and bug advocacy.
For years, testers have practiced reporting severe issues in a clear and firm way, so there's no question about the priority. They've practiced reporting mild inconveniences in an engaging manner so they get prioritized and fixed. And they've practiced pointing out actual coding mistakes without hurting developers' feelings.
They know how to always fact-check and provide evidence - the first law of testing is to reproduce the bug, which is nothing less than providing clear evidence about a fact.
That's not what Sales does. Counterintuitive as it sounds, they have to take a controlled amount of risk, or they won't sell anything to anyone. They need to be well-prepared and trained to explore the unknown, but they don't have to be alone like a rock climber or a surfer. They can get support from the Team - the Testing team.
What Sales and Product Managers also don't do is say No. It's the first piece of advice you get when you enter Product management: learn to say No. Well, don't learn - get help.
Testers can say No. They're naturals at it: Does it work? No!
See, easy.
What else will you get from a tester at the Sales table?
First of all, high sensitivity to what could go wrong.
Overpromising and underestimating are the big problems in the Sales and Dev relationship. Testers are the ones who actually know how much it takes to get a feature to the customer - to clarify the requirements, to build and fix it, and to release it. Neither developers nor sales teams consider those. Ever. It's the law of software development.
Actually, I've seen developers at the Sales table. And that's the biggest mistake. They don't want to be there. One of the first things testers learn is to put themselves in the users' shoes. That's not what developers are taught. Developers are there "to estimate" and to evaluate the technical feasibility. It's too early for that, and they always, always consider solving the problem with code alone. Testers always consider how they'll test it, which already contains a more realistic estimation.
The next invaluable benefit of having a tester at the Sales table is first-hand knowledge of the "Why".
The tester is from the Team. They're "our people". They have the Team's trust because they work with facts. Neither the Product Managers nor Sales have the same level of trust, because they work with promises and assumptions that change over time. That's the nature of their work, but it's very difficult for the Team to accept and understand.
The third quality of a tester that's beneficial for Sales is early exploration. An unbelievable number of startups and SMEs don't have marketing teams. They don't know how to explore software.
Testers know how to explore. They can provide so many insightful details about a competitor's product that no marketing agency will. Use testers in your competitive analysis. That's what the mystery client practice is about, for example. The problem is that the Product managers who would typically perform it are not equipped with Exploratory testing techniques, and their focus is on "what the others have and we don't" rather than "what we're doing better". Only testers can demonstrate "How" we're doing better or worse than the competitors' products. A tester can also give you amazing details about the way a competitor works just by submitting a support ticket.
I feel there are even more benefits you can create for the entire Team and the business by daring to explore the unknown and by equipping yourself better, with a tester's tools, for the extremes and risks of Sales. It's unconventional and counter-organizational, but it's outside the black box and outside the RACI tables, and it can make a big difference in customer experience at the very early stages of the business relationship.
The testers, on the other hand, have the opportunity to find joy in the extremely early collaboration and to expand their understanding of the customer, the domain, and the problem. That level of understanding will help them get even better at their fact-based test and bug management and prioritisation. And adequate prioritisation is everything in life.
This blog post is written by:
Process geek, software tester, product owner, and mentor.
Hristina helps software teams establish efficient development and testing workflows and deliver high-quality value to their customers.