Anand Bagmar is a student of the craft. He is an example of what I would have loved to be had I not chosen to wear a business hat a decade ago. Anand published this post on his blog and asked some pretty good questions at the end.
How do I know? I have had an opportunity to see Anand in action at close quarters. He nailed it.
He then took to LinkedIn to ask people those questions. I made as sincere an attempt as possible to reply to each of Anand's questions, and I wanted to capture it in a blog post here.
I love automation that helps add value.
80% of the automation I have personally seen did not add test value.
I qualify it as test value because I believe automated consistency checks help testers progress and move on to fresh code that might have unidentified and so far unknown risks.
20% did add test value.
They were also about automating:
The numbers are approximate; I haven't measured them precisely.
While re-running tests looks obvious, it remains obvious only until a test is qualified as a "stable test". After that, people become oblivious to it unless an event forces them to look at it again.
I have seen automation with built-in re-runs before concluding whether a failure was intermittent. The cost of a re-run plus the value of the test matters: re-running a useless test because it failed won't help anyone. Not that people don't know this.
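To make the idea concrete, here is a minimal sketch of what a built-in re-run could look like (Python; the decorator name, the retry counts and the example check are all mine, invented for illustration, not any team's actual code):

```python
import functools
import time

def rerun_before_concluding(max_reruns=2, delay_seconds=5):
    """Re-run a failing check a couple of times before calling it a real failure.

    If the check passes on a re-run, the failure is flagged as intermittent
    instead of being reported straight away as a product defect.
    """
    def decorator(check):
        @functools.wraps(check)
        def wrapper(*args, **kwargs):
            attempts = 0
            while True:
                try:
                    return check(*args, **kwargs)
                except AssertionError:
                    attempts += 1
                    if attempts > max_reruns:
                        # Failed consistently across re-runs: worth investigating.
                        raise
                    # Failed, but not conclusively yet: note it and try again.
                    print(f"{check.__name__} failed (attempt {attempts}), re-running...")
                    time.sleep(delay_seconds)
        return wrapper
    return decorator

@rerun_before_concluding(max_reruns=2)
def check_login_page_loads():
    # ... drive the browser / API here and assert on the outcome ...
    assert True
```

Every re-run burns wall-clock time, which is why the value of the test has to justify it; wrapping a useless test in retries only makes a useless test slower.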
This question should be named the question of the century in the space of test automation (largely front end).
Taking inspiration from what Marcus Merrell from Sauce Labs mentioned at the Test Warez 2019 conference: Sauce Labs, which hosts a significant share of Selenium Grid usage, has seen that an incredibly high percentage of tests don't pass consistently.
I know orgs that want to achieve 90% stability with their tests. The orgs that achieve it have test and dev bound together very well.
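When I say stability, I roughly mean how consistently a test passes across recent runs. A minimal sketch of one way to measure it (Python; the run-history shape here is invented for illustration):

```python
from collections import defaultdict

def stability(run_history):
    """Percentage of recent runs in which each test passed.

    run_history: a list of dicts mapping test name -> "pass" / "fail",
    one dict per CI run (this shape is just for illustration).
    """
    passes = defaultdict(int)
    totals = defaultdict(int)
    for run in run_history:
        for test, outcome in run.items():
            totals[test] += 1
            passes[test] += outcome == "pass"
    return {test: 100.0 * passes[test] / totals[test] for test in totals}

history = [
    {"test_login": "pass", "test_checkout": "fail"},
    {"test_login": "pass", "test_checkout": "pass"},
    {"test_login": "fail", "test_checkout": "pass"},
]
for test, pct in stability(history).items():
    flag = "stable" if pct >= 90 else "needs attention"
    print(f"{test}: {pct:.0f}% ({flag})")
```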
Good question. Instead of talking about time to code, I want to talk about how much time people take to realize that they need to add new tests.
Answers to these questions help us understand whether people are thinking about new tests, and in what way.
In most cases, the product-under-test is available on multiple platforms – ex: Android & iOS Native, and on Web. In such cases, for the same scenario that needs to be automated, is the test implemented once for all platforms, or once per platform?
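To make the "once for all platforms" option concrete, here is a minimal sketch (Python; the screen classes and the checkout scenario are invented for illustration): the scenario is written once against an abstraction, and each platform supplies its own implementation of that abstraction.

```python
from abc import ABC, abstractmethod

class CheckoutScreen(ABC):
    """Platform-agnostic contract for the checkout flow (names invented for illustration)."""

    @abstractmethod
    def add_item_to_cart(self, item_id: str) -> None: ...

    @abstractmethod
    def place_order(self) -> str: ...

class WebCheckoutScreen(CheckoutScreen):
    def add_item_to_cart(self, item_id):
        print(f"[web] clicking add-to-cart for {item_id}")      # drive Selenium here

    def place_order(self):
        print("[web] clicking place-order")
        return "CONFIRMED"

class AndroidCheckoutScreen(CheckoutScreen):
    def add_item_to_cart(self, item_id):
        print(f"[android] tapping add-to-cart for {item_id}")   # drive Appium here

    def place_order(self):
        print("[android] tapping place-order")
        return "CONFIRMED"

def checkout_scenario(screen: CheckoutScreen):
    """The scenario is implemented once; only the screen implementation varies per platform."""
    screen.add_item_to_cart("sku-123")
    assert screen.place_order() == "CONFIRMED"

# The same scenario runs against each platform-specific implementation.
for screen in (WebCheckoutScreen(), AndroidCheckoutScreen()):
    checkout_scenario(screen)
```

The alternative, implementing the same scenario once per platform, multiplies the maintenance cost every time the scenario changes.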
Just yesterday (7th July 2020) someone told me that a particular Android app automation suite (front end) was taking 16 hours to run. I was like, "Really? Where is the fast feedback loop?"
The team has fantastic engineers and poor management. The team built every script the management asked for, but hey, the management just wanted to see 100% automation. Sure enough, they got it. After hearing that, my hunger to move towards a more minimalistic approach to testing has increased manifold.
In the cases I have seen:
The aspiration is to get it to run "automatically".
This is achieved mostly with web automation [80% of cases].
Mobile automation needs a trigger [20% of cases].
High-maturity tech companies achieve automatic runs via CI.
Automation is getting the same treatment testing got a few years ago. Automation and Tooling needs hardcore technology and software engineering skills. Not just tools and frameworks.
It varies per project, from what I have seen. It is also influenced by the culture of the org.
Best case: teams that have some sort of RCA / logging / observability built into their automation that helps isolate the issues (a rough sketch follows below).
Average case: taking time to find out whether it was a product failure or an automation failure.
Worst case: people first fixing their script to make it a pass, so that they get time to investigate what the real issue is.
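For the best case, here is a rough sketch of what "observability built into the automation" can look like (Python, pytest and Selenium; the `driver` fixture name and the artifacts folder are my assumptions, not anything standard to a particular team): on every failure, capture a screenshot and the browser console log so the investigation starts with evidence instead of a bare red.

```python
# conftest.py
import os
import pytest

ARTIFACTS_DIR = "artifacts"  # invented location for failure evidence

@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    """On every test failure, save a screenshot and the browser console log.

    Assumes the test requested a Selenium WebDriver via a fixture named `driver`;
    adjust the fixture name to whatever your suite actually uses.
    """
    outcome = yield
    report = outcome.get_result()
    if report.when == "call" and report.failed:
        driver = item.funcargs.get("driver")
        if driver is None:
            return
        os.makedirs(ARTIFACTS_DIR, exist_ok=True)
        driver.save_screenshot(os.path.join(ARTIFACTS_DIR, f"{item.name}.png"))
        try:
            logs = driver.get_log("browser")  # supported by Chrome-based drivers
        except Exception:
            logs = []
        with open(os.path.join(ARTIFACTS_DIR, f"{item.name}.browser.log"), "w") as fh:
            for entry in logs:
                fh.write(f"{entry}\n")
```

With evidence like this attached to every failure, "was it the product or the script?" stops being guesswork.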
Testing has two aspects:
Automation is a great candidate for known or previously explored territory.
Automation, tooling and testability are great candidates to aid new, unexplored territory. 98% of people (that I have seen) claim an interest in automation and build skills to automate the "running of tests", which falls as a subset of known or previously explored territory.
Within that, the obsession to show a pass or a green is a big driver (due to the culture of the org). Add to that flakiness on the front end, a lack of testability, and automation done as a silo activity, and the result is value and time taken away from things that could have been done instead.
The world is moving from “Anyone can test” to “Anyone can automate”.
Good questions, right? What questions do you have? Send me those on LinkedIn and I will try to answer them and keep this post updated.