Apart from automation, we are also seeing rapid growth in machine learning (ML) and artificial intelligence (AI) techniques.

AI and ML will certainly change the future of testing, starting with the testing scope and the workload. Imagine robotic automation tools that lighten the load; everybody has experienced working late because of a delay in a project. Debugging capacity improves as well: because AI bots can easily run 24/7, they can be set to work through debugging tasks overnight or over the weekend. As a result, the workload on human test engineers will be far lower than before.
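None of this requires anything exotic to get started. As a minimal, hypothetical sketch of the "working overnight" idea, the snippet below simply re-runs a test suite in a loop until morning and logs the results. It assumes a pytest-based suite in a tests/ directory (an assumption for the example); any test runner, cron job or CI pipeline would do just as well.

```python
# Hypothetical sketch: run a test suite repeatedly overnight and collect
# results for the morning. Assumes pytest is installed and the suite lives
# in ./tests; adapt paths and timing to your own project.
import datetime
import subprocess
import time

STOP_AT_HOUR = 7  # stop around 07:00 so results are ready for the team


def run_suite() -> int:
    """Run the test suite once, appending its output to a nightly log."""
    with open("nightly_run.log", "a") as log:
        result = subprocess.run(
            ["pytest", "tests", "--maxfail=50", "-q"],
            stdout=log,
            stderr=subprocess.STDOUT,
        )
    return result.returncode


if __name__ == "__main__":
    while datetime.datetime.now().hour != STOP_AT_HOUR:
        code = run_suite()
        print(f"Suite finished with exit code {code}")
        time.sleep(15 * 60)  # wait 15 minutes between runs
```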
There are already plenty of examples of how AI and ML techniques are entering our lives. For example, the San Francisco-based start-up Appdiff is bringing machine learning "bots" online as QAs. Another company, dinCloud, announced "James", a virtual robot QA. Infostretch announced it will offer artificial intelligence in software testing through a brand-new service called Predictive and Prescriptive QA. All of that happened in 2017.
We decided to take a closer look at the Appdiff case, which is quite interesting in itself.
Appdiff taught its machine learning algorithms to recognize whether the result of a given activity is likely to uncover a defect. Appdiff tests about 90% of the surface area of a typical mobile application. Compare that with human testers: it is quite rare for companies relying on manual testing to cover as much as 90% of a mobile application's surface area. As for the remaining 10%, it is usually either too costly or too complex for most companies to invest in testing it.

As you surely know, bots can interact with both people and machines. Still, testing, at least for us, requires more than interaction. Testers have an understanding of the business domain and a set of heuristics for exposing defects. They can put themselves in the shoes of both the best and the clumsiest software user. They pay special attention to the client's needs and have an intricate understanding of the solution's purpose, analyzing how the software is used and in what ways it can go wrong. The conclusion we draw here is that if you are smarter than the bot, you cannot simply be replaced.
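Appdiff has not published its internals, so purely as an illustration of the general idea, the toy sketch below trains a classifier on (invented) features of past interactions and uses it to rank candidate UI actions by how likely they are to uncover a defect. The feature names, training data and action names are all assumptions made up for the example.

```python
# Toy illustration (not Appdiff's actual internals, which are unpublished):
# a classifier that scores candidate UI actions by how likely they are to
# uncover a defect, so a bot can explore the "riskiest" parts of an app first.
from sklearn.linear_model import LogisticRegression

# Each row describes one past interaction:
# [screen_depth, inputs_on_screen, uses_network, is_rare_element];
# the label says whether that interaction exposed a defect.
X_history = [
    [1, 0, 0, 0],
    [3, 4, 1, 0],
    [2, 1, 0, 1],
    [5, 6, 1, 1],
    [1, 2, 0, 0],
    [4, 3, 1, 1],
]
y_history = [0, 1, 0, 1, 0, 1]

model = LogisticRegression().fit(X_history, y_history)

# Score new candidate actions and try the most promising ones first.
candidates = {
    "tap_checkout_button": [4, 5, 1, 0],
    "scroll_home_feed": [1, 0, 0, 0],
    "edit_profile_avatar": [3, 2, 1, 1],
}
scores = {
    name: model.predict_proba([features])[0][1]
    for name, features in candidates.items()
}
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: estimated defect-finding probability {score:.2f}")
```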
If you truly want to be a very good tester and not be replaced by AI and ML, then think differently. Do not rely on a common, one-size-fits-all testing strategy. Why? Because the complex systems we produce today differ fundamentally from the systems that traditional strategies were designed for.
So the traditional approaches and testing tools will definitely not meet the needs of testing these systems. Humans will have to invent new ways, and then reinvent them again.
Of course, since AI and ML are sometimes seen as the end of testing as a profession, we will take the liberty of pointing out that examples of AI going wrong are not uncommon.
On 19 March 2018, there was a fatal accident involving a self-driving car in Arizona, USA.
At the time of the accident the car was in autonomous mode, and it hit a pedestrian in Tempe, Arizona; the woman later died in hospital. It is also reported that there was a human safety driver inside the car at the time. The car was part of Uber's fleet, and this is the first reported fatal accident involving a self-driving vehicle. We wonder: who is to blame? The car, or the driver who did not attempt to take over control after seeing what was about to happen? After a US federal investigation, it is thought that the car did not stop because the system put in place to carry out emergency stops in dangerous situations had been disabled.

In fact, Tesla Motors was the first to disclose a death involving a self-driving car, back in 2016, when the sensors of a Model S driving in Autopilot mode failed to detect a large white 18-wheel truck and trailer crossing the highway. The car drove at full speed under the trailer, causing the collision that killed the 40-year-old behind the wheel of the Tesla.
Then there is the Microsoft failure with the AI chat bot Tay, released back in March 2016 (thankfully not a fatal one). Tay was designed as a talking teen AI chat bot, built to mimic and converse with users in real time. But people, not surprisingly, took advantage of Tay's machine learning capabilities and coaxed it into saying racist, sexist and generally offensive things. Tay started posting racist tweets on Twitter, and Microsoft had to take it down only days after release.

Besides Microsoft, Google also had a bad experience with the Google Photos app, released in 2015.
Google Photos labelled two black people as "gorillas". The app, released in May 2015, was supposed to automatically tag uploaded pictures using its own artificial intelligence software. After the "gorillas" tagging incident, Google apologized.
And then there is the moral dilemma of AI development.
In April 2018, many experts in the AI field harshly criticized their colleagues in South Korea who were working on the creation of a military killer robot (as reported by The Verge). The development of that robot has been tasked to the Korea Advanced Institute of Science and Technology (KAIST). The scientists condemned KAIST's work and called for a ban on the creation of military AI systems. It looks like the days of extinction foreseen by the Terminator are coming… But let's leave the doomsday thoughts for now. There are more pressing testing matters to discuss, such as the importance of security, which we will look at in the next part of 'The Future of Software Testing'.