
Summary: Webinar on Real Use Cases for Using AI in Enterprise Testing


By Testim,

We recently hosted a webinar on Real Use Cases for Using AI in Enterprise Testing, with an awesome panel consisting of Angie Jones and Shawn Knight, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share them with the community as well.

Below you will find the video recording, answers to questions we couldn't get to during the webinar (to be updated as we receive more answers), and some useful resources mentioned during the session. Please feel free to share these resources with other testers in the community via email, Twitter, LinkedIn, and other social channels. Also, reach out to me at raj@testim.io, or to any of the panel members, if you have any questions.

Video Recording

 

Q&A

Any pointers to the existing AI tools for testing?

@Raj: I am assuming this question is about what to know about existing AI tools in the market. If so, first and foremost, we need to figure out what problems we are trying to solve with an AI-based tool that cannot already be solved with other possible solutions. If you are looking to decrease time spent on maintenance, get non-technical folks involved in automation, and make your authoring and execution of UI tests much faster, then AI tools could be a good solution. I may be biased, but it is definitely worth checking out Testim and Applitools if any of the points I mentioned are your areas of interest or pain points.

As discussed in the webinar, there are currently a lot of vendors (including us) who use all these AI buzzwords. This may leave you confused or overwhelmed when choosing the right solution for your problems. My recommendation is:

  • Identify the problem you are trying to solve
  • Evaluate different AI tools and frameworks that could help solve that problem
  • Select the one that meets the needs of your project
  • Then, proceed with that tool

 

As a tester working with Automation, what should I do to not lose my job?

@Raj: First of all, I believe manual testing can never be replaced. We still need the human mind to think outside the box and explore the application to find different vulnerabilities in our product. AI will be used as a complement to manual testing.

Secondly, we need humans to train these AI bots to simulate human behavior and thinking. AI is still in its early stages and will take another 10 to 15 years to fully mature.

In summary, I think this is the same conversation we had 10 years ago when automated tools were first coming to market. Back then, we concluded that automated tools complement manual testing but do NOT replace it. The same analogy applies here: AI is going to complement manual testing, NOT replace it.

As long as people are open to constantly learning and acquiring different skillsets, automation is only going to make our lives easier, letting us pivot and focus on aspects that cannot be accomplished with automation. These mainly involve creativity, critical thinking, emotion, communication, and other things that are hard to automate. The same holds true for artificial intelligence. While we use AI to automate some processes to save us time, we can use that saved time to acquire other skills and stay abreast of the latest technology.

So the question here is not so much about automation or AI replacing humans, but about how we stay creative and relevant in today's society. That comes from constant learning, development, and training.

 

Are there any open source tools on the market for AI testing?

@Raj: Not really. We do have a small library that was added to the Appium project to give a glimpse of how AI can be used in testing: https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk. This is just a small sample of the overall capabilities.

 

What should be possible in testing with AI in 3 years' time? And how do you think testing has changed (or not)?

@Raj: We live in a golden age of testing, where so many new tools, frameworks, and libraries are available to help make testing more effective, easier, and more collaborative. We are already seeing the effects of AI-based testing tools in our daily projects, with new concepts being introduced in areas such as element location strategies, visual validation, app crawling, and much more.

In 3 years, I can see the following possibilities in testing:

  • Autonomous Testing

I think autonomous testing will be more mature, and a lot of tools will include AI in their toolsets. This means we can create tests based on actual flows performed by users in production. The AI can also observe repeated steps and cluster them into reusable components in your tests, for example login and logout scenarios. So now we have scenarios created from real production data instead of assumptions about what the user will do in production. In this way, we also get good test coverage based on real data. Testim already does this, and we are trying to make it better.
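The clustering idea above can be illustrated with a minimal sketch: given recorded step sequences from production sessions, find subsequences that recur across sessions and flag them as candidates for reusable components. The step names and session data here are hypothetical, purely for illustration.

```python
from collections import Counter

def find_reusable_components(sessions, min_len=2, min_count=2):
    """Find step subsequences repeated across recorded user sessions.

    Each session is a list of step names captured in production.
    Any contiguous subsequence of at least `min_len` steps that occurs
    at least `min_count` times becomes a candidate reusable component.
    """
    counts = Counter()
    for steps in sessions:
        for length in range(min_len, len(steps) + 1):
            for start in range(len(steps) - length + 1):
                counts[tuple(steps[start:start + length])] += 1
    return [seq for seq, n in counts.items() if n >= min_count]

# Two hypothetical production sessions sharing the same login flow.
sessions = [
    ["open_login", "enter_email", "enter_password", "submit", "view_dashboard"],
    ["open_login", "enter_email", "enter_password", "submit", "open_settings"],
]
components = find_reusable_components(sessions, min_len=4)
# The four-step login flow appears in both sessions, so it is flagged
# as a candidate for extraction into a single reusable "Login" component.
```

A real tool would of course work on much larger session logs and use fuzzier matching than exact subsequence equality, but the principle is the same: repetition across real user data is what identifies a reusable component.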

  • UI Based TDD

We have heard of ATDD, BDD, TDD, and also my favorite, SDD (Stack Overflow Driven Development) 🙂. In 3 years, we will have UITDD. What this means is that when developers get mockups for a new feature, the AI could scan the images in the mockups and start creating tests while the developer builds the feature in parallel. By the time the developer has finished implementing the feature, the AI would have already written tests for it based on these mockups, using the power of image recognition. We then just run the tests against the new feature and see whether they pass or fail.

  • AI for Mocking Responses

Currently, we mock server requests and responses either to test functionality that depends on other functionality that hasn't been implemented yet, or to make our tests faster by cutting out the response time of API requests. Potentially, AI could save commonly used API requests and responses and prevent unnecessary communication with servers when the same test is repeated again and again. As a result, your UI tests will be much faster, since response time is drastically improved with AI managing the interaction between the application and the servers.
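The core of that idea, stripped of the AI layer, is a caching transport: identical requests are served from a saved response instead of hitting the server again. This is a minimal sketch under my own assumptions; the class and function names are illustrative, not from any real library.

```python
import json

class CachingMockTransport:
    """Cache API responses keyed by method, URL, and body, so repeated
    identical requests in a test run skip the network round trip."""

    def __init__(self, real_fetch):
        self._real_fetch = real_fetch  # callable performing the real HTTP call
        self._cache = {}

    def fetch(self, method, url, body=None):
        key = (method, url, json.dumps(body, sort_keys=True))
        if key not in self._cache:
            self._cache[key] = self._real_fetch(method, url, body)
        return self._cache[key]

calls = []  # track how many times the "server" is actually contacted

def real_fetch(method, url, body):
    calls.append(url)
    return {"status": 200, "url": url}

transport = CachingMockTransport(real_fetch)
transport.fetch("GET", "/api/user")
transport.fetch("GET", "/api/user")  # second call is served from cache
# The server was contacted only once despite two identical requests.
```

Where AI could add value over this simple cache is in deciding which responses are safe to replay and when a cached response has gone stale, which a key-based lookup alone cannot judge.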

 

Will our jobs be replaced?

@Raj: Over the past decade, technology has evolved drastically. There have been many changes in the technology space, but one constant is human testers' interaction with these tools and how we use them for our needs. The same holds true for AI. Secondly, to train AI, we need good data combinations (which we call a training dataset). So to work with modern software, we need to choose this training dataset carefully, as the AI learns from it and builds relationships based on what we give it. It is also important to monitor how the AI learns as we feed it different training datasets. This will be vital to how the software is tested as well. We will still need human involvement in training the AI.

Finally, it is important to ensure that the security, privacy, and ethical aspects of the software are not compromised while working with AI. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all 'doom and gloom'; being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the 'look and feel' of an on-screen component is 'off' or wrong. Complete replacement of manual testers will only happen when AI exceeds these unique qualities of human intellect. There is a myriad of areas that will require in-depth testing to ensure the safety, security, and accuracy of all the data-driven technology and apps being created daily. In this regard, using AI for software testing is still in its infancy, with the potential for monumental impact.

 

Did you have to maintain several base images for each device size and type?

Has anyone implemented MBT to automate regression when code is checked in?

 

Resources relevant to discussions in the webinar
