Selenium has been a popular test automation framework for well over a decade. But as applications have grown more complex in recent years, especially with the rise of JavaScript frameworks such as Angular, Vue, React, and Ember for building web applications, Selenium has found it hard to adapt to these technologies.

For example, if you are currently using Selenium, have you ever experienced any of the problems below?

  • Spending the majority of your valuable testing time fixing flaky tests?
  • Being unable to make progress on automation due to a lack of skilled programmers to write automated tests?
  • Not finding enough support in the open source community when new libraries and updates break existing tests, leaving you with no idea what to do?
  • Needing visual validation when a step fails, so you can see the exact reason for the failure?
  • Getting insufficient logging information when your tests fail?
  • Finding it hard to seamlessly integrate your automated tests into your CI/CD pipeline?

If you answered yes to any of the above questions, you are not alone. According to recent Gartner research, “Selenium is the de facto standard for front-end test automation of modern web technologies due to the flexible and powerful browser automation capabilities of WebDriver… It’s a sophisticated tool that isn’t easy to learn. It requires your team to have programming expertise and familiarity with object-oriented languages.” So Selenium is good if you have the necessary programming expertise on your team; if not, it becomes an obstacle.

There is no need to fear, as there are good alternatives to Selenium. Below are the top three widely used alternatives currently available to give you and your team the flexibility to build robust automation test suites and frameworks.

  1. Cucumber

This is an open source testing tool that focuses on BDD (Behavior Driven Development). It emphasizes writing requirements in plain English in the form of Given, When, and Then statements, commonly referred to as “Gherkin” syntax. You then convert these GWT statements into code using Java, JavaScript, Ruby, or Kotlin. This helps enforce collaboration and bring more clarity to requirements.
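For example, a requirement expressed in Gherkin syntax might look like this (the feature and steps are invented for illustration):

```gherkin
Feature: User login

  Scenario: Registered user logs in successfully
    Given a registered user "dana@example.com" with password "s3cret"
    When the user submits the login form with those credentials
    Then the dashboard page is displayed
```

Each Given/When/Then line is then bound to a step definition in your chosen language that performs the actual action.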

On the flip side, it still needs someone with a programming background to write the binding code that converts these GWT statements into usable actions. Also, the GWT format can become a maintenance nightmare, especially when many people collaborate and start making changes to different steps. Finally, it does not have any visual validation and has insufficient logging features, making it hard to troubleshoot errors.

  2. SikuliX

This is an open source GUI-based automation tool. It uses image recognition powered by OpenCV to automate anything with a UI, including desktop and web applications. It supports multiple scripting languages, including Python, Ruby, and JavaScript.

Although this tool has robust UI validation functionality, it lacks many features needed for stable automation, such as locating elements based on attributes, creating reusable components, and modularizing your tests for easier maintenance.

  3. Testim

Testim uses artificial intelligence (AI) for the authoring and execution of automated tests. The focus is on functional testing, end-to-end testing, and UI testing. The Dynamic Location strategy used within the tool sets it years apart from its competitors. The AI underneath the platform analyzes all the DOM objects of a page in real time and extracts the objects and their properties. It then decides the best strategy to locate a particular element based on this analysis. As a result, even if a developer changes an attribute of an element, the test continues to run, which leads to more stable tests. This is the major advantage of Testim over frameworks like Selenium, which use static locators. Also, the AI’s “self-healing” mechanism proactively detects problems in your tests and fixes them automatically for you. The result is authoring and execution of automated tests that are much faster and more stable.

Testim is not a completely codeless tool; you can use JavaScript and HTML to write complex programming logic (if needed) for your applications. This helps make the automation suite more extensible. It provides easy integration with your CI/CD pipeline and, most importantly, helps involve the whole team in automation, including technical and non-technical people.

In summary, Selenium has really good capabilities, but knowing there are other alternatives out there gives you more options to make your automation effort easier and more stable.


We recently hosted a webinar on AI and its influence on test automation with an awesome panel consisting of Jason Arbon, Oren Rubin, and Dionny Santiago, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share them with the community as well.

Below you will find the video recording of the webinar, the questions and answers we couldn’t get to during the webinar, and different resources for learning about AI and testing. Please feel free to reach out to me or any of the panel members with any questions.

Video Recording



I am a performance engineer and am working on AI for quality gates in load testing results… that needs to be a high priority for the “future,” which is “now.” How do you think bots can be used in this area?
@Jason: UI bots can help generate user-like load directly via the application. Though, for most load testing problems, I would recommend something like Charles Proxy, or internal ways to spin up load, and only use the ‘expensive’ UI-based bots to see how the app works E2E for the user under load.

With rapid changes in agile requirements, how do we make the machines learn or adapt to the changes every time?
@Jason: The AI bots most folks (vendors) are working on these days will auto-discover new features in the app and exercise them. We have a set of 5k+ tests written for common flows in apps, so if you add a new feature to your app that looks like something similar on another app, the bots will auto-apply that test case to the new build.

@Raj: The more tests you run, the smarter the AI becomes in detecting changes in the application. It will start detecting changes in the application’s UI and element attributes and start adapting the tests automatically to these changes thanks to its self-healing mechanism. It can identify flaky tests, optimize waits between steps, and also proactively fix issues for us before they occur.

With the BDD model, shift left, and the demand for testing at the unit and service/API layers, where does E2E testing stand?
@Jason: Dionny’s work can help generate valid permutations of existing API test case parameters/flows.  Also, clustering methods can help identify misbehaving servers via logs of activity during api testing or production.

Where can we find Dionny’s paper on AI testing that you were talking about?
Dionny’s paper

A lot of automation test scripts fail due to test data issues. Can we use AI to tackle those kinds of issues?
@Jason: That’s a broad category of failure types, but yes, ‘AI’ can be taught to auto-associate correct data with the right application states. Google also shared some ‘test selection’ findings using ML to help decide what to do with all those failing tests:

I wanted to understand: what does AI in testing really mean? Does it mean a machine will perform the testing? If so, will the machine already be provided with the scenarios that need to be tested? Is it the same as automation testing, where we also don’t need manual intervention?
@Jason: Generally, AI in testing means applying machine learning / AI techniques to test applications. There is also ‘testing AI,’ which refers to approaches for testing AI/ML-based products and features. There are a variety of ways to apply AI to testing: some leverage pre-written test cases, and the AI is used to automatically execute the tests, create variations, or analyze the results. Some AI-based systems are trained to mimic general human behavior and can execute basic ‘flow testing’ for many apps without pre-written test scenarios. The bots we build can read BDD/scenarios and execute them against a set of applications. As for the need for human intervention: as with automation, there is still the need for plenty of human intervention in AI-based testing approaches these days 🙂 Humans gather oracle/training data for the AI, humans measure the correctness of the ‘AI,’ and humans evaluate the significance of the AI-based test results as they relate to business/shipping decisions.

@Raj: In addition to what @Jason was saying, I wanted to mention that AI can have a positive impact on several facets of software testing, especially test automation. So many different tools and frameworks have come up trying to solve different kinds of problems related to test automation, but one problem that has been a constant challenge to date is “maintenance.” One of the main reasons for this is the use of static locators. With static locators, we use only one attribute of an element to uniquely identify it on a page, and if this changes, the test breaks; as testers, we then end up spending a considerable amount of time troubleshooting and fixing the problem. Based on research, about 30% of testers’ time is consumed just maintaining tests. Can you imagine the opportunity cost associated with this effort? It is mind-blowing. Testers’ time is valuable and better spent actually exploring the application and providing information to help stakeholders make informed decisions about the product. With AI-based testing we can overcome this problem by using dynamic locators. Dynamic locators use multiple attributes of an element to locate it on the page instead of a single attribute. This way, even if one attribute changes, the element can still be successfully located with the help of the other attributes that have already been extracted from the DOM by the AI.
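As a rough sketch of that idea (illustrative only, not Testim’s actual algorithm), a dynamic locator can be modeled as scoring candidate elements by how many of the originally recorded attributes still match, then picking the best-scoring one:

```javascript
// Score each candidate element by how many recorded attributes still match,
// then return the best match. All names and data here are invented.
function locate(candidates, recorded) {
  let best = null;
  let bestScore = -1;
  for (const el of candidates) {
    let score = 0;
    for (const [attr, value] of Object.entries(recorded)) {
      if (el[attr] === value) score++;
    }
    if (score > bestScore) {
      bestScore = score;
      best = el;
    }
  }
  return best;
}

// Attributes captured when the test was authored:
const recorded = { id: "login-btn", className: "btn primary", text: "Log in" };

// The page after a developer renamed the id:
const candidates = [
  { id: "signup-btn", className: "btn", text: "Sign up" },
  { id: "login-button", className: "btn primary", text: "Log in" },
];

console.log(locate(candidates, recorded).text); // "Log in"
```

Even though the id changed, the remaining attributes still identify the element, which is why such tests keep running.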

Can you guys elaborate on how AI-based tests learn acceptance criteria that normally have to be defined by humans?
@Jason: It depends on the AI system being used. The bots we build execute human-written test cases. Acceptance tests are written at a very high level of abstraction, and the bots do all the test execution. Reporting is as normal for test automation. In summary, just tell the bots what your acceptance criteria are.

What does an automation tester need to learn to align with the future of AI in testing?
@Jason: A good set of links to learn from is here: . You can also start to leverage and experiment with “AI” via the current testing vendors. If you are already familiar with Selenium/Appium-style testing, there is a new open source API that uses AI for element selection that you can use today:

Is AI platform-dependent, e.g., desktop applications vs. web/mobile?
It depends on the AI approach/solution. Many are platform-dependent. The bots we build, though, are not, which is a key feature: the machines are trained to recognize UI elements much like humans do, and humans are platform-independent ;).

Is there an Open Source project that allows to apply AI to locate the elements?
@Jason: Yes, for appium today and likely Selenium soon:

How can AI be used for improving test coverage ?
@Jason: AI can help generate many more valid test scenarios than a human could create. AI also enables reuse of test artifacts, so a test written for one app can also execute on a similar app with no human intervention.

@Raj: Now with AI, you can also connect your production apps to the testing cycle. This means we can create tests based on actual flows performed by users in production. Also, the AI can observe repeated steps and cluster them into reusable components in your tests, for example, login and logout scenarios. So now we have scenarios created from real production data instead of assuming what the user will do in production. In this way, we also get good test coverage based on real data.
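A toy illustration of that clustering idea (purely illustrative, not the product’s implementation): scan recorded user sessions for step sub-sequences that repeat across sessions; repeated runs become candidates for reusable groups such as login.

```javascript
// Count every step sub-sequence of the given length across sessions and
// return the ones seen more than once. All step names are invented.
function repeatedRuns(sessions, size) {
  const counts = new Map();
  for (const steps of sessions) {
    for (let i = 0; i + size <= steps.length; i++) {
      const key = steps.slice(i, i + size).join(" > ");
      counts.set(key, (counts.get(key) || 0) + 1);
    }
  }
  return [...counts].filter(([, n]) => n > 1).map(([key]) => key);
}

const sessions = [
  ["open", "enterUser", "enterPass", "submit", "browse"],
  ["open", "enterUser", "enterPass", "submit", "checkout"],
];

console.log(repeatedRuns(sessions, 4)); // [ "open > enterUser > enterPass > submit" ]
```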

Will AI testing replace Selenium, Appium, and all other tools and technologies?
@Jason: Asymptotically.

Is AI really better?
@Dionny: Traditional testing teams focus on either a single app or a small set of apps, whereas AI can learn from millions of different examples and apps. The more data we show the AI, the better it gets. Also, the AI never gets tired!

What are the immediate benefits of using AI?
@Raj: Apart from the benefits already mentioned above, AI can also help increase team collaboration. The field of test automation has historically been focused on technical testers. This can change with AI. Non-technical team members no longer need to fear code and technology; rather, AI will help bridge the gap between technical know-how and the authoring and execution of tests, making life easier for teams.

Will our jobs be replaced?
@Raj: Over the past decade, technologies have evolved drastically. There have been many changes in the technology space, but one constant is human testers’ interaction with these technologies and how we use them for our needs. The same holds true for AI. Secondly, to train the AI, we need good data combinations (which we call a training dataset). We need to choose this training dataset carefully, as the AI starts learning from it and creating relationships based on what we give it. It is also important to monitor how the AI learns as we give it different training datasets. This is vital to how the software is going to be tested as well. We still need human involvement in training the AI. Finally, it is important to ensure that while working with AI, the security, privacy, and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software, and we need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all ‘doom and gloom’; being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the ‘look and feel’ of an on-screen component is ‘off’ or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure the safety, security, and accuracy of all the data-driven technology and apps being created daily. In this regard, utilizing AI for software testing is still in its infancy, with the potential for monumental impact.

Will intelligent machines take over the world?
@Raj: Hollywood movies do have an influence on our lives, don’t they 🙂? At most of the conferences I speak at, there is this weird notion that in 3 years AI-powered robots are going to take over the world and we will become slaves to them. That sounds interesting on paper, but in reality I don’t think that is going to be the case.

Currently, some people believe that fully developed AI that can react and think like humans will exist by 2055, while others think it will take several hundred years. No one knows the exact answer yet. That being said, several organizations are trying to ensure the AI currently being developed is safe for human society. For example, the Future of Life Institute was formed for exactly this purpose and has the brightest minds in the AI field working on AI safety research. We also have groups like the World Economic Forum keeping a close eye on the impact of AI on society.

So, I do not think machines will take over the world just yet! 🙂

AI Resources


And there are more courses available online. Just search Google for “Deep Learning courses” or “Machine Learning courses.”


Free Resources/Courses




We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Shared Group Indicator, Numbered Test Steps, and New Base URL Parameter. Check them out and let us know what you think.

Shared Group Indicator

What is it?

When changing a shared step, users now get a notification that they are editing a shared step. Clicking “See affected tests” takes the user to the list of tests that use the shared step.



Why should I care?

You no longer have to worry about someone unknowingly changing a shared step, as the shared group indicator now lets you know the effects of a change before it is made. This is useful when teams are collaborating to build test suites and when multiple people are working on the same set of tests. Individuals now have more visibility into how their changes might impact overall testing.


Base URL as a Parameter

What is it?

Users now have the ability to access the base URL through a variable within custom actions. The new variable that automatically stores the URL value is named BASE_URL. Learn more

Why should I care?

You no longer have to add extra code to get the URL of the web page used in the test. Instead, you just use the BASE_URL parameter and perform any actions necessary inside our custom actions. For example, if you want to print the URL of the web page to ensure the same page is still displayed after a certain number of validations, you could just write:

console.log("The current base url is " + BASE_URL)


Numbered Test Steps

What is it?

Step numbers help to uniquely identify each step in a test. You now have the step number displayed next to the name of each and every step that is added to your test.

Why should I care?

Having numbered steps makes it easy to refer to a particular step in a test. This is helpful in cases where you want

  • To edit a particular step
  • To collaboratively work on a particular step of a test with other team members
  • To talk to our support team to debug a particular step in a test

By: Sofía Palamarchuk for Abstracta

If you work in the software industry, you’ve most likely heard the popular term “shift-left testing.” With Agile practices like TDD, BDD, CI, and DevOps becoming mainstream, “shift-left” is the answer to how testing fits in, and it must be done for those practices to become a reality. Instead of taking a backseat during the development process, testing is planned in advance and begins earlier in the SDLC (it therefore “shifts left”). It could even start before a single line of code is written! Making this shift changes the view of testing: instead of traditional QA, it transforms into QE, Quality Engineering.

What Does Shift-Left Testing Look Like?

Thanks to the rise of automation, and the aid of tools that use AI and machine learning, testers have more time to dedicate to being more strategic about their work, instead of having their hands tied, running tests manually every day.

For testers to be successful today, they have to not only be great at testing, but also be engineers of the Agile testing process by collaborating with development and operations while analyzing quality during every stage of development:

Shift Left Testing

Shift-left testing activities include:

  • Testers helping developers implement unit testing
  • Planning, creating, and automating integration test cases
  • Planning, creating, and employing virtualized services at every stage and component level
  • Gathering, prioritizing, and processing feedback

Several process changes occur when teams shift left. Instead of a developer waiting weeks to add his or her code to the rest of the team’s code, it can be done every day, or even several times a day. Instead of manually performing all the tests, most are automated and run every day, or even several times a day. And, instead of detecting problems at the end, the team as a whole analyzes quality as the development progresses.

Not sure if it’s the right move for your organization? Here are some of the pros and cons of shift left testing.



It’s well known that the sooner a bug is found, the cheaper it is to fix. One of the aims of Agile testing is detecting errors as soon as possible. With shift-left testing, it’s possible to detect in real time the exact moment an error was introduced into the system and resolve it in a timely manner. When testing is done with each build (especially during unit testing), the errors found are smaller, easier to detect and locate, and subsequently less costly to fix.


With the increased levels of automation when shifting left, teams can benefit from:

  • Increased test coverage since more tests can be run in the same amount of time
  • More time for testers to focus on more challenging and rewarding tasks
  • Reduced human error rate
  • Monitoring performance over time
  • Code quality checks
  • Built-in security checks
  • Reducing issues in production that users may encounter

Beyond these benefits, being able to start testing sooner invariably results in a higher quality product, as testers are less rushed to find all the errors at the end, when there’s little time left to fix them.


In today’s competitive technological landscape, the barriers to entry are minimal, so the best way to survive is to move fast and defend one’s position by innovating in iterations, which is possible thanks to Agile. While everyone can agree that it’s important to deliver software more quickly, it also mustn’t be rushed out the door (which can backfire). Shift-left testing answers the problem of accelerating development without sacrificing quality.

Another, yet less obvious, benefit of shifting left is that it can help businesses position themselves as attractive employers to top talent. Because it is becoming more mainstream, with about two thirds of IT workers reportedly using Agile or leaning toward it (according to a recent study by HP), it’s what the most forward-thinking software professionals expect from their teams. Therefore, if you want to be an attractive employer, or at least on par with the rest, it is important to adopt the modern practices that both testers and developers want to master in order to stay relevant in today’s labor market.



For shift-left testing to be a success, an often drastic change in culture must occur first, which requires a team effort. Teams are usually set in their traditional ways of working, and when they consider shifting, they must consider how the methods, processes, skills, tooling, etc. will need to change. Even more important, what will need to happen to get all the roles within the organization to align properly?


Yes, Agile and shift-left aim to eliminate testing as a bottleneck, but it is true that Agile teams can find themselves stuck waiting in a queue once all of the pieces come together in the performance and user acceptance testing phases, due to the complexity of environments and composite applications. One way to overcome this is to utilize service virtualization. Service virtualization emulates the behavior of essential components that will be present in production, enabling integration tests to take place much earlier in development. This is how you can eliminate that key bottleneck, while also benefiting from eliminating errors earlier on. Along with service virtualization, there are several tools for setting up automated systems and CI, such as Jenkins.
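A minimal in-process sketch of the idea (all service names and data are invented): code under test is written against a contract, and a stub honoring that contract stands in for the not-yet-available downstream service.

```javascript
// A stub that mimics the contract of a real pricing service, so integration
// tests can run before the real service exists.
function stubPricingService() {
  const canned = { "A-100": 42, "B-200": 19 }; // canned responses
  return {
    getPrice(sku) {
      if (!(sku in canned)) throw new Error("unknown sku: " + sku);
      return canned[sku];
    },
  };
}

// Code under test, written against the contract rather than a concrete service.
function orderTotal(pricing, items) {
  return items.reduce((sum, { sku, qty }) => sum + pricing.getPrice(sku) * qty, 0);
}

const pricing = stubPricingService();
console.log(orderTotal(pricing, [{ sku: "A-100", qty: 2 }, { sku: "B-200", qty: 1 }])); // 103
```

Dedicated service virtualization tools do the same thing over the wire, emulating protocol behavior and latency as well.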

A Worthwhile Undertaking

In the end, shift-left testing is certain to have pros that outweigh its cons. Testers will find themselves delegating some of their work to developers and assigning them more testing activities. In mature teams, the testers become “coaches,” training developers on how to write better code, avoid bugs, and own unit testing. The advantage of this is that the tester who used to be bogged down writing test cases now has time to delve deeper into the product: working on business cases, penetration testing, performance testing, implementing smarter testing solutions that use artificial intelligence, and so on. This sharing of the responsibility for testing leads to a higher level of quality, as more of the bases get covered, quicker!

What do you think? Still not on-board to shift left testing? Or have you managed to do so already?

About the Author: Sofía Palamarchuk

Sofia Palamarchuk

Sofía is the Chief Executive Officer and Chief Product Officer of the software testing services company Abstracta. With a B.S. in Computer Engineering, Sofía worked for many years in application performance optimization, system monitoring, and load testing for the corporate sector. With a solid background in performance tuning and automation, Sofía has become a business development leader and is responsible for managing all aspects of Abstracta’s US operations as well as its mobile testing tool, Monkop.


We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Updated Exports Parameters and New Groups Tab. Check them out and let us know what you think.

Updated Exports Parameters

What is it?

Flexible export parameters allow you to pass variables within a group, test, or collection of tests. Learn more

testim exports_parameters

Why should I care?  

When we use different export parameters across tests for dynamic data validation, it often gets difficult to keep track of which user-defined variables can be used in which groups or tests. For a better user experience and control of variables, we now have three export parameters:

  • Local export: Allows you to pass variables between steps in the same group.
  • Test export: Allows you to pass variables between steps and groups in the same test.
  • Global export: Allows you to pass variables between tests in the same test plan or test suite.

Each one has a clearly defined scope, which makes it a lot easier for users to understand the scope of the different variables used within groups and tests.
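To illustrate the three scopes with plain objects (the platform provides the real mechanism inside code steps; every name below is an invented stand-in):

```javascript
// Stand-in objects for the three scopes described above.
const localScope = {};  // Local export: later steps in the same group
const testScope = {};   // Test export: other groups in the same test
const globalScope = {}; // Global export: other tests in the same plan/suite

// A step passes an intermediate value to the next step in its group:
localScope.rowCount = 7;
// A login group exports a token for the rest of the test:
testScope.authToken = "tok-123";
// A setup test exports the environment name for every test in the plan:
globalScope.env = "staging";

console.log(localScope.rowCount, testScope.authToken, globalScope.env);
```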

Improved Toolbar Navigation

What is it?

A new Groups Tab has been added to the “+” menu. Learn more

Testim Groups Tab

Why should I care?

You now have the ability to switch back and forth between tests and test runs via the tabs.

Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.


We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Result Labels, Test Run Navigation Icon, and Grid Management. Check them out and let us know what you think.

Result Labels

What is it?

“Result Labels” allow you to name each remote run. On the “Suite Runs” and “Test Runs” pages, you can easily filter your runs by choosing a result label.

Testim Result Labels

Why should I care?  

You now have the ability to label your runs. This is especially useful when you need to drill down into specific runs based on environment, application version, sprint number, etc. For example, you can label your runs as “nightly-scheduler”, “v1.42.34”, “Jenkins”, “Troubleshooting”, or “Staging”.

Result labels can be added in the CLI using the parameter --result-label "<user defined name of the run>". Learn more

Test Run Navigation Icon

What is it?

The new navigation icon opens the results of a test in a new tab.

Testim test run navigation

Why should I care?

You now have the ability to switch back and forth between tests and test runs via the tabs.

Grid Management

What is it?

To run your tests remotely, you need to integrate with the Testim grid, your own local grid, or third-party grids like Sauce Labs and BrowserStack. Learn more

testim grid management

Why should I care?  

Grid management now offers the ability to easily manage multiple grids, providing an abstraction layer for your DevOps. The grid information is automatically added to the CLI based on the already configured grids and will appear in this format: --grid "<grid name>".

Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.


We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Hidden Parameters, Data-Driven Testing via config files, and Element Text condition. Check them out and let us know what you think.

Hidden Parameters

What is it?

When you use parameters in your tests, the values that are passed in during run time are saved and shown in the UI. Sometimes this information is sensitive and you may want the value to be hidden. This is now possible using the hidden parameters option available in the project settings page of the Testim editor. Learn more

Why should I care?  

You no longer have to worry about revealing sensitive information in your tests. This is especially true if your application is related to banking, security, insurance or any other domain that handles a large amount of sensitive data.

Data-driven testing now supports CSV, databases, and other external sources

What is it?

Users now have the ability to pass data sets at run time via config files. The newly added “overrideTestData” parameter in the beforeSuite hook allows users to pass multiple parameters to multiple tests at the same time. The same parameter can also be used to extract data from external sources such as CSV files, databases, etc.

Why should I care?  

Data-driven testing is no longer restricted to passing a JSON file within the tests. Now you have the flexibility to pass this data at run time through a single config file. You can also extract data from external sources and use it within your tests. Everything happens automatically for you at run time. This makes test data setup much more extensible and reusable. Learn more

Speaking of working with Excel: we already have detailed documentation of an alternate way to import Excel data into Testim. You can learn more here.

Element Text condition

What is it?

Testim provides several predefined conditions (“if statements”) to be used with steps, for example, whether an element is visible or not. We just introduced a new condition that checks whether an element has specific text. Just pass in a string, a regex, or a JS statement (you can use variables too!).

Why should I care?
Now you have the flexibility to add conditions based on an element’s text instead of just checking whether the element is visible on the screen. Learn more
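Conceptually, the condition evaluates a predicate like the following against the target element (a simplified stand-in for illustration, not Testim’s internal code):

```javascript
// Return true when the element's text matches the given string or regex.
function hasText(element, expected) {
  const text = element.innerText || "";
  return expected instanceof RegExp ? expected.test(text) : text.includes(expected);
}

const banner = { innerText: "Welcome back, Dana!" }; // mock element for illustration
console.log(hasText(banner, /Welcome/)); // true
console.log(hasText(banner, "Goodbye")); // false
```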


Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.


One of the most important factors related to automated tests is maintenance. Often more effort is spent maintaining tests than writing new ones; a recent study suggested that about 30% of testers’ time is spent on maintenance. This wastes valuable time and effort that could instead be spent testing the actual application.

Imagine a world where software could maintain tests without human interaction. That world has become a reality with Testim. We use Artificial Intelligence (AI) under the hood, which provides self-healing maintenance: problems are detected by the AI and automatically fixed without human intervention. We also help speed up the maintenance of tests by providing the following features within our platform:

  1. Version Control

At any given time, it is important to have a log of what changes were made to a particular test. That way, we can always revert to an older version of a test as and when required. Our platform provides this functionality: you can see the full version history by going to the Properties panel of the setup step and clicking “See old revisions”.

  2. Branching

At Testim, we firmly believe in the “shift left” paradigm, where development and testing start in parallel as early as possible in the software development lifecycle. Keeping this in mind, we give teams the ability to create separate branches for each team member while working on the same projects and tests. This way, no one can overwrite another team member’s changes, and the whole team can work on the same code base at any instant of time.

In our platform, we just need to select “Fork” to create a new branch, and we can also switch between existing branches.

  3. Scheduler

Users have the option of scheduling their tests. This helps run the tests automatically at a certain day and time without any manual intervention. We can also get notified via email in case of any errors.



As testers, we spend a considerable amount of time troubleshooting issues. To help with troubleshooting, our platform offers different options to narrow down the scope of the problem. These options are as follows:

  1. Screenshots

The screenshot feature explained in the “Authoring and Execution” section helps users see what the baseline image was and what actual image was found.

  2. Properties Panel

The properties panel captures error messages and displays them to the user. The user also has the option of interacting with the DOM to see which objects were extracted during the run.

  3. Test Logs

Logs are a rich source of information on what happened underneath the UI. We provide test logs when the user runs tests on our grid or a 3rd-party grid. The option can be found in the top section of the editor.

  4. Bug Reporting

One of the most time-consuming aspects of testing is that after finding a bug, we need to report it to the developer with relevant information to speed up troubleshooting and fixing.

With Testim, you can do this with a single click with the help of our Chrome extension. All the details related to the bug are automatically generated for you.

  5. Documentation

We put a lot of effort into documenting most of the features of the tool in our User Documentation, found under the “Educate” tab.

We also have detailed videos on how to troubleshoot your tests quickly:

Troubleshooting Part 1 – Element is not visible

Troubleshooting Part 2 – Element not found

Troubleshooting Part 3 – Timing issues

Troubleshooting Part 4 – Issues related to mouse hover

With the above features, Testim helps you create stable tests that are highly maintainable.

The posts below give a more in-depth analysis of Testim’s features that make authoring and executing tests really simple, and show how to create reusable components that improve the extensibility of the tool.


Artificial Intelligence (AI) and machine learning (ML) are advancing at a rapid pace. Companies like Apple, Tesla, Google, Amazon, Facebook and others have started investing more in AI to solve different technological problems in areas such as healthcare, autonomous cars, search engines, predictive modeling and much more. Applied AI is real. It’s coming fast. It’s going to affect every business, no matter how big or small. This being the case, how are we as testers going to adapt to this change and embrace AI? Here is a summary of the different things you need to know about using AI in software testing.

Let’s summarize how the testing practice has evolved over the last four decades:

  • In the 1980s, the majority of software development was waterfall and testing was manual
  • In the 1990s, we had bulky automation tools that were super expensive, unstable and had really primitive functionality. During the same time, different development approaches were being experimented with, such as Scrum, XP and RAD (Rapid Application Development)
  • From 2000, the era of open source frameworks began
    • People wanted to share their knowledge with the community
    • Started encouraging innovation and asking a community of like-minded people to help improve testing
    • Agile became a big thing, XP, Scrum, Kanban became a standard process in the SDLC
    • There was a need for faster release cycles, as people wanted more software features delivered faster
  • In the 2010s, it was all about scale: how to write tests fast and find bugs faster
    • Crowdtesting started
      • Encouraging other people to give feedback on the application, via free and paid services
    • Cloud testing started
      • People started realizing they need more
        • Server space
        • Faster processing
      • Started to realize the problem of maintenance and how expensive it is to buy hardware and software for maintaining your tests
      • Then we have
        • DevOps
        • Continuous Testing
        • CI/CD integration
  • I believe the Future will be about Autonomous Testing using Machine Learning and AI


Basics of AI

Let’s start by demystifying some of the terminology related to AI:

  • Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans
  • Machine Learning (ML) evolved from the study of pattern recognition and computational learning theory (the design and analysis of learning algorithms) in AI. It is a field of study that gives computers the ability to learn without being explicitly programmed
  • Deep Learning (DL) is one of many approaches to ML; others include decision tree learning, inductive logic programming, clustering and Bayesian networks. DL is based on artificial neural networks, inspired by the neurons in the human brain: each neuron learns and interconnects with other neurons to perform different actions based on different inputs


There are 3 widely used types of ML algorithms:

  • Supervised learning – We give the algorithm the right training data (input/output combinations) to learn from
    • Examples
      • Given a bunch of e-mails, identify the spam e-mails
      • Extracting text from audio
      • Fill out a loan application and find the probability of the user repaying the loan
      • Getting users to click on ads by learning their behavior
      • Recommendation engines on Amazon and Netflix, where customers are recommended products and movies
      • Amazon uses AI for logistics
      • Car Optimization
      • Autonomous cars
  • Unsupervised learning – We give the algorithm a bunch of unlabeled data and see what patterns it can find
    • Examples
      • Taking a single image and creating a 3D model
      • Market Segmentation
  • Reinforcement learning – Based on the concept of a reward function: good or bad behavior is rewarded or penalized, and the algorithm learns from it, e.g. training a dog
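To make the supervised case concrete, here is a toy 1-nearest-neighbour classifier: it “learns” from labelled input/output pairs and predicts the label of whichever training point is closest. The features and data below are entirely made up for illustration.

```javascript
// Toy supervised learning: 1-nearest-neighbour classification.
// Training data is a list of { features, label } pairs (made-up values).
function nearestNeighbor(train, point) {
  let best = null;
  let bestDist = Infinity;
  for (const { features, label } of train) {
    // Squared Euclidean distance to the training example
    const dist = features.reduce((s, f, i) => s + (f - point[i]) ** 2, 0);
    if (dist < bestDist) {
      bestDist = dist;
      best = label;
    }
  }
  return best;
}

// "spam" vs "ham" on two invented features: link count, exclamation marks
const train = [
  { features: [9, 7], label: "spam" },
  { features: [8, 6], label: "spam" },
  { features: [1, 0], label: "ham" },
  { features: [0, 1], label: "ham" },
];
console.log(nearestNeighbor(train, [7, 8])); // "spam"
console.log(nearestNeighbor(train, [1, 1])); // "ham"
```

Real systems use far richer features and models, but the shape is the same: labelled examples in, a predictive function out.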


Real-life AI applications that let you visually see how it works

  • Quick Draw from Google
  • Weka is an open source project that uses ML algorithms for data mining


What challenges can AI solve?

Let’s discuss the challenges the industry faced while transitioning to agile, and what still remains a challenge:

How can we use AI to solve testing problems?

There are many companies taking multiple approaches to solve different problems related to software testing and automation. Testim is one such company; it uses Dynamic Locators. The Artificial Intelligence (AI) underneath the platform analyzes all the DOM objects of a page in real time and extracts the objects and their properties. Finally, the AI decides the best strategy to locate a particular element based on this analysis. Thanks to this, even if a developer changes an attribute of an element, the test continues to run, which leads to more stable tests. As a result, authoring and executing automated tests is much faster and more stable.
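The core idea can be sketched as scoring a candidate element against many recorded attributes instead of relying on a single one, so a change to one attribute does not break the lookup. The attribute names and weights below are illustrative assumptions, not Testim’s actual algorithm.

```javascript
// Sketch of the dynamic-locator idea: weight several recorded attributes
// and pick the candidate with the highest match score. Weights and
// attribute names are illustrative assumptions.
function locatorScore(recorded, candidate) {
  const weights = { id: 3, text: 2, className: 1, tag: 1 };
  let score = 0;
  let total = 0;
  for (const [attr, weight] of Object.entries(weights)) {
    total += weight;
    if (recorded[attr] === candidate[attr]) score += weight;
  }
  return score / total; // fraction (0..1) of weighted attributes that match
}

const recorded = { id: "buy-btn", text: "Buy now", className: "btn", tag: "button" };
// A developer renamed the id, but every other attribute still matches:
const changed = { id: "purchase-btn", text: "Buy now", className: "btn", tag: "button" };
console.log(locatorScore(recorded, changed)); // 4/7 ≈ 0.57 — still a strong match
```

A locator built this way degrades gracefully: instead of failing on the first mismatched attribute, it picks whichever element on the page scores highest.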


Here is a detailed insight into how our AI works.

In today’s political climate, automation is a hot topic that is quickly dividing people across the country. While there’s no denying automation is the way of the future, many people worry that this new technology will mean fewer jobs for average workers. That being said, automation isn’t all bad!

There are a lot of big changes coming to the workforce thanks to automation, and many of those changes bring with them a new wave of job opportunities. The demand for technology and coding jobs is on the rise! This is big news for the future of industry, and these changes can be a positive thing with the right mentality. Read on to delve deeper into the rise of automation and the future of coding!


The rise of automation as we know it today.

It might be hard to grasp how drastically automation has changed in the last few years. Nowadays, more and more “human” jobs are being taken over by advances in automation technology. These changes first occurred in repetitive, labor-intensive positions: factory workers and farmers were quickly replaced by automated machines that could do the work in less time.

Humans have been using tools to produce faster results since the dawn of civilization. Cars replaced horses, while robots now replace humans in many labor-intensive positions. People began to push back when factory jobs became harder and harder to find, and now it’s not just these factory positions in jeopardy. Today, automation is taking over jobs in more creative fields and thought-related tasks. Sales positions, finance positions, and even healthcare positions are slowly being taken over by sophisticated automated systems. In just a few years, who knows what automation will be able to do next!

With the rise of automation comes the rise of coding.

There’s no denying that the workforce is changing as automation becomes more mainstream. The jobs of the past will become more and more obsolete, and today’s workers will need to develop new ways to stay competitive as employees. Coding is rising to meet this challenge. As the demand for automation grows, so does the need for skilled coders.

Coding positions are becoming the new factory jobs. As more workers gain higher education, these coding positions will serve as the new baseline for the workforce in the coming years. This might seem hard to come to grips with, but these changes can be positive! A rising technological workforce allows modern workers more freedom, as most of these positions can be performed remotely. Employers can advise their remote workers from anywhere in the world, making for a more cooperative international economy.

There will be a need for all types of coders, from .NET Core vs. .NET Framework to advanced technologies we haven’t developed yet. Not only does this mean there’s a demand for more workers, but the quality standards are also increasing. Employers are beginning to recognize that quantity and quality are not equal! That means the workers who are able to adapt their skills ambitiously will find greater success. The new workforce will rise to meet new challenges, and there are no limits to the new heights of technology!


Modern workers are programmers.

Many people today fall into the mindset that these changes to the workforce are negative. This isn’t the case! It’s just different. Throughout history, technological improvements have altered the workforce and brought change. These modern changes toward automation might seem strange to our eyes, but in a few years they’ll be normal practice. Technology is the only way to move forward to a brighter tomorrow, and our workers will be prepared for any new challenges!
