I've been asked to share my insights on test automation tools: which is the best option, and why? This is a frequent question; people want to understand the pain points of each tool and identify the one that best addresses their software quality assurance challenges.
Throughout my career, I’ve dedicated myself to exploring new tools, evaluating their effectiveness, and determining whether they align with business needs. This involves implementing and running the tools, analyzing results, generating reports, tracking KPIs, and much more. The core question has always been: why choose one tool over another? Is the tool or framework we've selected truly the best, or is there something newer, easier, and better on the market? Let's explore that!
With the rapid advancement of AI, I’m seeing a surge of new tools, each claiming to offer magic solutions to all the challenges in software testing. This has sparked even more questions, and I believe now is the perfect time to dive into this topic.
Discussing test automation is essential, as it has long been a challenge to implement for several reasons: frameworks that demand advanced technical expertise, QA teams lacking skills in development languages, tools that require significant effort and time to maintain scripts, and projects with insufficient budget for dedicated QAs to work on automation, among many other factors.
Personally, I don't have a preferred tool, but I recognize that some must evolve or face obsolescence. In my 18+ years in Quality Assurance, I've seen test automation tools come and go in the blink of an eye, with many teams misdirecting efforts and failing to achieve the expected results.
So, what about now that we have AI to help us? Where do we start? There are so many options, and just as many questions.
This article won’t cover traditional frameworks like Selenium, JMeter, Cypress, WebdriverIO, TestCafé, or Playwright. Each deserves its own discussion, as mastering these tools provides a strong foundation for test automation. I’ve noticed that some frameworks, like Cypress, are actively adding AI integration options to stay competitive in the market. Ohhh, so… do I see them as obsolete now? Not at all, I did not say that! In my opinion, some test automation implementations can only be achieved through coding and customization, and these frameworks let you implement exactly what you need; if you master a programming language, you will achieve your goals perfectly.
Instead, I’ll focus on AI-powered, low-code tools that streamline test automation within e-commerce ecosystems. These tools enable quick and effortless generation of test cases and scripts, making test implementation more efficient. Let's dive into how they work.
Working within the eCommerce ecosystem to implement test automation can be challenging, especially when it comes to automating functionalities where you don't have control over the code to add or change any selector. E-commerce frameworks, in particular, require tools that enable test automation without the constant worry of managing locators. That's exactly what Testim.io promises: smart locator management for seamless automation.
This tool stands out with its user-friendly interface for recording test scenarios. While the record-and-play feature isn’t new, its ability to manage changes in scenarios and interfaces seamlessly sets it apart.
Where does AI come in for Testim? The tool uses AI to automatically update locators when changes occur, reducing the need to manually update code and recreate scenarios, which is a common pain point in test automation. This makes the testing process more efficient.
The tool uses AI to analyze the DOM (Document Object Model) and generate the best selector for the automated test scenario, not just grabbing whatever is in the code, but producing the best option. The time you would otherwise spend training your team to read the code and craft good locators (unique, stable, maintainable, performant…), the tool saves by using AI to provide them for you.
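To make the idea concrete, here is a minimal sketch (my own illustration, not Testim's actual algorithm) of what "generating the best selector" can mean: rank candidate selectors by stability, preferring test attributes and meaningful ids over brittle positional selectors.

```javascript
// Illustrative only: rank candidate selectors for an element description,
// preferring stable attributes over positional, index-based selectors.
function bestSelector(el) {
  // `el` is a plain description of a DOM node (tag, id, classes, index...),
  // e.g. produced by your own page crawler.
  const candidates = [];
  if (el.dataTestId) {
    candidates.push({ sel: `[data-testid="${el.dataTestId}"]`, score: 3 });
  }
  if (el.id && !/\d{3,}/.test(el.id)) {
    // Skip ids with long digit runs; they are often auto-generated and unstable.
    candidates.push({ sel: `#${el.id}`, score: 2 });
  }
  if (el.classes && el.classes.length) {
    candidates.push({ sel: `${el.tag}.${el.classes[0]}`, score: 1 });
  }
  // Last resort: a positional selector, which breaks whenever layout shifts.
  candidates.push({ sel: `${el.tag}:nth-of-type(${el.index})`, score: 0 });
  candidates.sort((a, b) => b.score - a.score);
  return candidates[0].sel;
}
```

A real tool weighs many more signals (visual position, text content, sibling structure), but the principle is the same: prefer what is least likely to change.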
As someone passionate about having control over the code and implementing my own assertions, validations, triggers, CI connections, and customizations, I appreciate that the tool also serves those who want to go further: there is a JavaScript editor for implementing your own customizations.
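For example, a custom validation you might drop into such an editor could look like this (a generic sketch; the function name and inputs are my own, not Testim's API):

```javascript
// Hypothetical custom validation step: assert that a displayed cart total
// matches the sum of its line items, failing the test with a clear message.
function validateCartTotal(items, displayedTotal) {
  const expected = items.reduce((sum, it) => sum + it.price * it.qty, 0);
  if (Math.abs(expected - displayedTotal) > 0.005) {
    throw new Error(
      `Cart total mismatch: expected ${expected}, saw ${displayedTotal}`
    );
  }
  return true;
}
```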
This tool is ideal for platforms where you don’t have control over the code, such as third-party frameworks. If you have full access to modify and update your code directly, I would suggest another tool.
This tool complements Testim.io, and the two integrate with each other. While Testim.io simplifies script creation and maintenance, LambdaTest excels at managing executions: it specializes in cloud-based grid execution and parallelism control, and offers a wide range of real browsers and operating systems for testing, which many other tools lack.
If you’re looking for a cloud-based testing solution, this tool has you covered. It’s similar to a well-known competitor, BrowserStack, in offering seamless cloud execution across a wide range of devices and versions. However, LambdaTest stands out by quickly advancing its AI integration and delivering unique features.
So, how about AI? LambdaTest uses AI to orchestrate your execution pipeline, running the most important scenarios first based on historical executions. It also uses AI to provide Visual Testing, comparing your interface against a baseline and checking that nothing is broken. Cool, right? Another thing I like that this tool does with AI is error analysis… yes, something we struggle to do manually. What kind of error is this? A flaky test? The tool understands that a test is unstable and retries it automatically.
How often, as a test automation engineer, have you spent time setting up retries and custom messages in post-execution code blocks? This tool promises that it's no longer necessary.
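For reference, this is the kind of wrapper many of us have hand-rolled and that these tools now claim to make unnecessary (a generic sketch, not tied to any tool's API):

```javascript
// Hand-rolled retry logic for a flaky async test step: try up to `attempts`
// times, log each failure, and only fail the run if every attempt fails.
async function withRetries(step, attempts = 3) {
  let lastError;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await step();
    } catch (err) {
      lastError = err;
      console.log(`Attempt ${i} failed: ${err.message}`);
    }
  }
  throw lastError;
}
```

Usage: `withRetries(() => clickCheckoutButton(), 3)` would retry a flaky click twice before reporting a genuine failure.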
LambdaTest has another interesting feature: an AI prompt where you can type commands and the AI generates the test code. The first try is interesting, but this functionality still needs to evolve to deliver good results and the expected executions; it was a little frustrating to see that the execution was not exactly what I, as a human, intended to ask the tool to do.
Considering all of this, in my opinion, both Testim.io and LambdaTest provide you with a really strong test automation environment.
ACCELQ does something that I really like and that caught my eye. The tool provides a way for you to upload your pages, DOM elements, and locators… something I have often missed in other tools. It offers a different way to think about your test automation. With this upload, the tool delivers something hard to measure: entire-system test coverage!
The tool maps the universe of your application and gives you an idea of how much of your system has been tested.
ACCELQ lets you provide your system URLs; the tool reads your DOM and uploads all the locators and actions using AI. After that, all the elements are mapped, and you can start using the interface by selecting actions: navigate here, click there, assert this message… building what you need entirely through the interface. But okay, many other tools let you create scenarios by choosing 'actions' backed by underlying Selenium commands in an interface, so what's new here?
What's new is this: when selectors change, there's no need to manually update your code. Simply re-upload your pages, and all scenarios will automatically update with the new elements. Your test coverage will also be refreshed, allowing you to track the updated percentage of your system's testing coverage.
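The coverage idea itself is simple to sketch (my own illustration of the concept, not ACCELQ's implementation): once every element is mapped, coverage is the share of mapped elements exercised by at least one scenario.

```javascript
// Illustrative coverage metric: percentage of mapped UI elements that are
// touched by at least one automated scenario.
function coveragePercent(mappedElements, exercisedElements) {
  if (mappedElements.length === 0) return 0;
  const exercised = new Set(exercisedElements);
  const covered = mappedElements.filter((el) => exercised.has(el)).length;
  return Math.round((covered / mappedElements.length) * 100);
}
```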
While creating scenarios, the user doesn't need to know JavaScript, Python, or any other programming language; the tool provides a way to write scenarios using pseudocode:
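The flavor is roughly this (an illustrative sketch of pseudocode-style steps, not ACCELQ's exact syntax):

```
Navigate to "https://example.shop/cart"
IF element "Empty cart message" is visible
    Click on "Continue shopping"
ELSE
    FOR each row in "Cart items"
        Verify "Price" is greater than 0
    Click on "Proceed to checkout"
Verify page title contains "Checkout"
```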
So, am I saying that with this tool any person with no tech skills can create my automated tests? No, I am not saying that! And I will never say such a thing. You will fail if you allocate the wrong people or profiles to automate your test scenarios.
To create successful test scenarios, the person must understand QA fundamentals and programming logic, including pseudocode, IF statements, and loops, to achieve good results.
Another good thing: integrating your runs with a CI pipeline can sometimes be challenging; ACCELQ provides an easy way to do it through its interface.
My reservation about this tool is the same as for any low-code tool: it does not seem to offer much flexibility for customization… and test automation frequently involves technical challenges such as elements not present in the DOM, iframes, and so on.
Katalon is not a single tool but an ecosystem of low-code testing tools that promises to help you with mobile and web functional testing, data testing, and API testing, using a pseudocode approach to scripting, along with analytics and reports…
It provides a simple way to create your scripts using the Cucumber framework and Gherkin language:
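A scenario written this way might look like the following (an illustrative example with hypothetical step names, not taken from Katalon's documentation):

```gherkin
Feature: Checkout
  Scenario: Registered user completes a purchase
    Given I am logged in as "test.user@example.com"
    And my cart contains 2 items
    When I proceed to checkout and confirm payment
    Then I should see the order confirmation page
    And I should receive an order number
```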
Katalon offers a collaborative marketplace similar to Jira Cloud, where you can download add-ons to enhance the tool's functionality. Examples include Jira and Slack integrations, smart locator creation, search queries, and tags.
Where does AI come in for Katalon? Using AI, Katalon generates test cases inside Jira tickets, something Object Edge has already developed for some of our customers using Node packages and REST API integrations.
The Katalon GPT integration is available in the Jira marketplace and generates test cases to be executed manually, linked directly to your Jira stories.
And how about automated test scripts? Katalon Studio has an option to use ChatGPT inside the studio… you need to connect your ChatGPT (OpenAI) account to take advantage of it:
With AI integration set up, you can leverage AI in Katalon Studio to generate and explain code options while writing your test scripts. Anyone who has used Visual Studio with Copilot will find it familiar. There’s no big magic in generating your script; it is a supportive option to speed up your work.
In the vast world of test automation, finding a solution that aligns seamlessly with your processes and business needs without significant adjustments can be challenging. To have AI working in your favor, you need data preparation: mapping definitions, prompt creation, and good-quality data to train your "beast".
The Object Edge TestGenAI is an AI-powered test case generator designed to work directly from its user-friendly interface or through references in Jira tickets. If a Jira reference is provided, the tool utilizes Jira API to pull the ticket’s title and description, offering contextual information that the Gemini AI engine uses to create relevant test cases. This content is processed using a carefully engineered prompt that guides Gemini AI to produce test cases with the structure, clarity, and detail needed for TestRail.
The tool accepts inputs including Jira stories, uploaded requirements documents, and customized prompts meeting your specific business needs.
The final output includes a JSON or CSV file, along with test cases created directly in TestRail. The JSON is pre-formatted for smooth integration with TestRail, while the CSV can be customized to match your QA team’s existing test case formats, ensuring consistency with your established patterns and processes.
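To illustrate the output side of such a pipeline, here is a hedged sketch of the final formatting step: shaping a generated case into TestRail-style JSON and a flat CSV row. The JSON field names follow TestRail's `add_case` endpoint as commonly documented, but verify them against your own instance; the input shape is my own assumption, not TestGenAI's internal format.

```javascript
// Shape an AI-generated test case into a TestRail-style add_case payload.
// Field names (title, custom_steps_separated) mirror TestRail's API docs;
// confirm against your TestRail configuration before relying on them.
function toTestRailJson(generatedCase) {
  return {
    title: generatedCase.title,
    custom_steps_separated: generatedCase.steps.map((s) => ({
      content: s.action,
      expected: s.expected,
    })),
  };
}

// Flatten the same case into one CSV row (title, numbered steps),
// escaping double quotes per the usual CSV convention.
function toCsvRow(generatedCase) {
  const steps = generatedCase.steps
    .map((s, i) => `${i + 1}. ${s.action}`)
    .join(' | ');
  return [generatedCase.title, steps]
    .map((field) => `"${field.replace(/"/g, '""')}"`)
    .join(',');
}
```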
With a single click, users can send test cases to TestRail, automating the process and allowing QA teams to focus on strategic tasks like review, refinement, and execution. The OE TestGen solution elevates test automation, providing an adaptable, intuitive solution aligned with the evolving needs of QA teams across different project environments.
About the Author
Patricia Nardelli
Quality Assurance Global Manager
Patricia is a computer scientist with 18 years of experience in technology, including software development, software quality, and management. Specialized in software quality and focused on automated testing, she combines technical knowledge with the best tools and techniques.