Selenium tests have a reputation for easily becoming fragile. We’ll look at some common causes of fragile Selenium tests, how you can alleviate some of these issues, and how Testim can provide extra value and robustness to your UI tests.

What’s a Fragile Test?

A Selenium test can be fragile just like any other automated test. A fragile test is one whose result is influenced by changes that seemingly shouldn’t affect it. An example most of us have encountered: we change one piece of code and break a test that we believe shouldn’t break. But other factors can influence a test too: test data, the current date and time or other pieces of context, or even other tests. Whenever these factors influence the test outcome, we have a fragile test.

Fragile Selenium Tests

Selenium is a tool to automate the browser, giving us the power to write automated tests for our (web) UI. This means we end up exercising more or less the entire application. With so many moving parts, Selenium tests can become fragile more easily than simple unit tests.

Let’s look at some causes of fragile Selenium tests and how we can make them more robust. The causes we’ll cover are:

  • tight coupling to the implementation
  • external services
  • asynchrony
  • fragile data
  • context

Tight Coupling to the Implementation

Just like unit or integration tests, Selenium tests can be written so that they’re tightly coupled to the implementation. When that’s the case, any change to the implementation means our tests need to change too, even if the public or visual behavior hasn’t changed.

For example, let’s say we’ve written our test to click on a button with ID “login-button.” When this ID is changed to something else, for whatever reason, our test will fail, even if the test case still works fine when performed manually.

This is because the test is tightly coupled to the specific implementation. How can we decouple our test from the implementation?

In this specific example, we could improve the test by having the test call a helper method that knows about the implementation. So instead of clicking the button with the ID “login-button,” we can make it call a method named “Login” and pass the necessary parameters. If all of our tests use this helper method, we’ll only have to change one piece of code when the implementation changes.
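To make this concrete, here’s a minimal sketch in Python with Selenium (the element IDs are assumptions about the application under test, not anything prescribed by Selenium):

from selenium.webdriver.common.by import By

# Hypothetical locators for the application under test. If the ID changes,
# we update it here, in one place.
LOGIN_BUTTON_ID = "login-button"

def login(driver, username, password):
    # Tests call this helper instead of locating elements themselves.
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, LOGIN_BUTTON_ID).click()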

Now, let’s take this a step further and group our helper methods inside a class per page. This is the Page Object design pattern. If something on a page changes, we know where to update the implementation for our tests—in the corresponding page object.
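A minimal page object sketch for the same hypothetical login page might look like this (the URL and locators are placeholders):

from selenium.webdriver.common.by import By

class LoginPage:
    # Everything the tests know about the login page lives in this class.
    URL = "https://example.com/login"  # hypothetical URL

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()

A test then reads as a sequence of user intentions: LoginPage(driver).open().login("alice", "secret").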

Can we do better? Yes, we can. Thanks to Dynamic Locators and AI, Testim can identify the page element we need even if its attributes have changed.


This means that the QA team can record tests in their browser and they will still run fine if some underlying implementation detail changes.

External Services

Another common cause of fragile tests, especially when testing a complete application, is that we rely on external services to work as we expect. These can be particularly hard to control. But when they don’t behave as they should for the test, or if they aren’t available, the test fails. And this can happen even if we haven’t changed or broken anything on our side.

In this case, we could make our tests more robust by creating a mock service and having our application call that service. Otherwise, we’re also testing the external service and setting ourselves up for fragile tests.

By creating a mock service, we have full control over the responses. We can define what data is returned, but we can also simulate error responses, timeouts, or slow responses. These are things that are harder to imitate when we’re using the real external service.
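As an illustration, here’s a minimal mock service sketch using only the Python standard library; the endpoint and payload are hypothetical, and in a real setup you’d point the application’s service URL at this server:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/rates":  # hypothetical endpoint
            self.send_response(200)
            body = json.dumps({"usd_eur": 0.92}).encode()
        else:  # simulate an error response for anything unexpected
            self.send_response(404)
            body = b'{"error": "not found"}'
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8081), MockServiceHandler).serve_forever()

From here, simulating a timeout or a slow response is as simple as adding a time.sleep() before responding.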

Asynchrony

When we’re testing an application, we often have test cases where we need to wait for a certain event. For example, we might have an application that makes an AJAX call and renders the result on the page. We now have to tell Selenium to wait until the result is rendered on the page. But for how long?

Luckily, Selenium has implicit and explicit waits. An explicit wait tells Selenium to wait until a certain condition is met. In our example, we would wait until the result of our AJAX call has been rendered; once it has, the test can continue.
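Here’s what an explicit wait looks like in Python; the page URL and element ID are placeholders:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/search")  # hypothetical page with an AJAX call
# Block for up to 10 seconds, polling until the result element is visible,
# instead of sleeping for a fixed amount of time.
result = WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "result"))
)
print(result.text)
driver.quit()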

Fragile Data

Most applications that we test with Selenium will require some data storage. Our Selenium tests can become fragile because of the data for two reasons: either the data changes or certain tests continue with data from previous tests.

Data Changes

When we set up a database for our tests, the amount of data we pump into it can become quite large. Consequently, when we make a small change to this test data for one test, we might break another test that depended on this data. This is especially true for larger test suites where it has become unclear which pieces of data are necessary for which tests.

A possible solution is to populate the database with separate data for each test. However, that can lead to a large amount of test data that can be difficult to maintain over time.

A better option is to populate your database as part of each test. You could start with basic data that doesn’t change, and then have each test add the data that it needs.
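One way to sketch this is a pytest fixture that builds a fresh database per test; here with SQLite for brevity, and a hypothetical products table:

import sqlite3
import pytest

@pytest.fixture
def db():
    # Fresh in-memory database per test: base schema only, so each test
    # adds exactly the data it needs and nothing leaks between tests.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
    yield conn
    conn.close()

def test_product_search(db):
    db.execute("INSERT INTO products (id, name) VALUES (1, 'Widget')")
    row = db.execute("SELECT name FROM products WHERE id = 1").fetchone()
    assert row[0] == "Widget"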

Interacting Tests

Another potential issue with data is when one test changes the data of another test. Take a simple example where a test changes some identifier of a record (a product ID for example). At the end of the test, it changes it back so that another test can find that same record. However, if the first test fails to revert the change, the second test won’t find the record and will also fail.

This is a case of interacting tests: a test fails because it has been influenced by another test, even though the failing test runs fine when it’s run individually.

The best solution is to give each test its own set of data. We can achieve this by spinning up a new database for each test. This might sound like a complex solution, but we could leverage containers for this: start a container with a database, fill it with the necessary data, run the test and end by shutting down the container. Rinse and repeat for other tests.
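Here’s a sketch of that container-per-test idea using the testcontainers Python library (an assumption on my part; the docker CLI or your own scripts work just as well):

import pytest
from testcontainers.postgres import PostgresContainer  # pip install testcontainers

@pytest.fixture
def pg_url():
    # Start a throwaway PostgreSQL container, hand its connection URL to
    # the test, and shut the container down afterwards.
    with PostgresContainer("postgres:16") as pg:
        yield pg.get_connection_url()

def test_with_own_database(pg_url):
    ...  # point the application under test at pg_url and seed its data here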

If a separate database for each test isn’t an option, we could look at solutions like in-memory databases or libraries that reset our database to the state before the test (like Respawn for .NET).

Context

The typical example of a fragile Selenium test is one that depends on context, such as the current date or time. This is true for every kind of test, so it also applies to Selenium tests.

When your tests depend on certain dates or times, they might suddenly fail when the conditions (temporarily) don’t apply. This is often the case on leap days, but depending on your business logic, other dates may cause your test to fail too.

In unit tests, the date and time can easily be mocked out, but it’s often much more difficult in Selenium tests. To make matters worse, you’re dealing with two clocks: the “server” clock (where your back-end application is running) and the browser’s clock.

There are many options here, and the solution for you depends on your specific case.

On the server side, you could wrap calls to the system clock in a service and inject a mock service anywhere you need it in your tests. In production, you would then inject the service that uses the real clock. There’s also libfaketime for Linux and Mac OS X, which might be an easier route.
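A minimal sketch of that clock abstraction on the server side (the class and function names are illustrative):

from datetime import datetime

class Clock:
    def now(self) -> datetime:
        return datetime.now()  # production: the real system clock

class FixedClock(Clock):
    def __init__(self, fixed: datetime):
        self._fixed = fixed

    def now(self) -> datetime:
        return self._fixed  # tests: whatever date the scenario needs

def is_leap_day(clock: Clock) -> bool:
    today = clock.now()
    return today.month == 2 and today.day == 29

assert is_leap_day(FixedClock(datetime(2024, 2, 29)))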

You can mock the browser’s date and time using TimeShift.js and inject it into your Selenium tests.
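If you only need something quick, a hand-rolled sketch can freeze Date.now by injecting a script with Selenium. Note this only overrides Date.now (not new Date()) and must be re-injected after every navigation, which is where a library like TimeShift.js earns its keep:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # hypothetical page under test
frozen_ms = 1_709_164_800_000  # 2024-02-29T00:00:00Z, a leap day
# Override Date.now in the page so client-side code sees a fixed time.
driver.execute_script("const t = arguments[0]; Date.now = () => t;", frozen_ms)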

Is It Really That Hard?

All of these problems have solutions, but they’re often non-trivial to implement, especially in legacy applications. Also, many of these solutions require time and effort from developers, which you may not be able to spare. But if the tester (a QA department, the product owner, or the end-user) can work together with the developers, then you can achieve solid tests more easily.

Also, take a look at Testim. It can cover more than Selenium does out of the box and it provides a useful platform for developers, the QA team, and end-users to collaborate.

What if you need to verify the contents of a downloaded Word or Excel file? Or if your application has email or SMS integration? And wouldn’t you like to have the tester create your UI tests instead of taking time from your developers?

Testim provides an easy recording function for non-developers to write tests. This also allows non-developers to report a bug and immediately include a test to reproduce the issue. Traditionally, similar recording tools have created fragile tests because they’re too close to the implementation. But as we mentioned, Testim takes a different approach and leads to more robust tests.

For QA professionals, there are powerful features, like running tests with different data sets, conditional steps, and screenshots. Testim also learns about your tests over time, making them more stable the more you run them, even when implementation details change.

Testim allows you to create robust UI tests for your application that need minimal maintenance and provide maximum value to developers, QA professionals, and end-users alike.

Author bio: This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: the Shared Group Indicator, Numbered Test Steps, and a new Base URL parameter. Check them out and let us know what you think.

Shared Group Indicator

What is it?

When changing a shared step, users now get a notification that they are editing a shared step. Clicking “See affected tests” takes the user to the list of tests that use the shared step.

Why should I care?

You no longer have to worry about someone changing a shared step unknowingly: the shared group indicator lets you see the effects of a change before it is made. This is useful when teams collaborate to build test suites and when multiple people work on the same set of tests. Individuals now have more visibility into how their changes might impact overall testing.

Base URL as a Parameter

What is it?

Users now have the ability to access the base URL through a variable within your custom actions. The new variable that automatically stores the URL value is named BASE_URL. Learn more

Why should I care?

You no longer have to add extra code to get the URL value of the web page used in the test. Instead, you just use the BASE_URL parameter and perform any actions necessary inside our custom actions. For example, if you want to print the URL of the web page to ensure the same page is still displayed after a certain number of validations, you could just write:

console.log("The current base url is " + BASE_URL)

Numbered Test Steps

What is it?

Step numbers help to uniquely identify each step in a test. The step number is now displayed next to the name of every step you add to your test.

Why should I care?

Numbered steps make it easy to refer to a particular step in a test. This is helpful in cases where you want

  • To edit a particular step
  • To collaboratively work on a particular step of a test with other team members
  • To talk to our support team to debug a particular step in a test

By: Sofía Palamarchuk for Abstracta

If you work in the software industry, you’ve most likely heard the popular term “shift-left testing.” With Agile practices like TDD, BDD, CI, and DevOps becoming mainstream, “shift-left” is the answer to how testing fits in, and it must happen for those practices to become a reality. Instead of taking a backseat during the development process, testing is planned in advance and begins earlier in the SDLC (hence “shifts left”). It could even start before a single line of code is written! Making this shift changes the view of testing: instead of traditional QA, it transforms into QE, Quality Engineering.

What Does Shift-Left Testing Look Like?

Thanks to the rise of automation, and the aid of tools that use AI and machine learning, testers have more time to be strategic about their work, instead of having their hands tied running tests manually every day.

For testers to be successful today, they have to not only be great at testing, but also be engineers of the Agile testing process by collaborating with development and operations while analyzing quality during every stage of development:

Shift Left Testing

Shift-left testing activities include:

  • Testers helping developers implement unit testing
  • Planning, creating, and automating integration test cases
  • Planning, creating, and employing virtualized services at every stage and component level
  • Gathering, prioritizing, and processing feedback

Several process changes occur when teams shift left. Instead of a developer waiting weeks to add his or her code to the rest of the team’s code, it can be done every day, or even several times a day. Instead of manually performing all the tests, most are automated and run every day, or even several times a day. And, instead of detecting problems at the end, the team as a whole analyzes quality as the development progresses.

Not sure if it’s the right move for your organization? Here are some of the pros and cons of shift left testing.

PROS

LOWER THE COST OF TESTING & DEVELOPMENT

It’s well known that the sooner a bug is found, the cheaper it is to fix. One of the aims of Agile testing is detecting errors as soon as possible. With shift-left testing, it’s possible to detect, in real time, the exact moment an error was introduced into the system and resolve it in a timely manner. When testing is done with each build (especially during unit testing), the errors that are found are smaller, easier to detect and locate, and consequently less costly to fix.

INCREASE EFFICIENCY & QUALITY

With the increased levels of automation when shifting left, teams can benefit from:

  • Increased test coverage since more tests can be run in the same amount of time
  • More time for testers to focus on more challenging and rewarding tasks
  • Reduced human error rate
  • Monitoring performance over time
  • Code quality checks
  • Built-in security checks
  • Reducing issues in production that users may encounter

Beyond these benefits, being able to start testing sooner invariably results in a higher quality product, as testers are less rushed to find all the errors at the end, when there’s little time left to fix them.

COMPETE MORE FIERCELY

In today’s competitive technological landscape, the barriers to compete are minimal, so the best way to survive is to move fast and defend one’s position by innovating in iterations, which is possible thanks to Agile. While everyone can agree that it’s important to deliver software more quickly, it also mustn’t be rushed out the door, or the release could backfire. Shift-left testing answers the problem of accelerating development without sacrificing quality.

Another, less obvious, benefit of shifting left is that it can help businesses position themselves as an attractive employer to top talent. Agile is becoming mainstream, with about two-thirds of IT workers reportedly using Agile or leaning towards it (according to a recent study by HP), so it’s what the most forward-thinking software professionals expect from their teams. Therefore, if you want to be an attractive employer, or at least on par with the rest, it is important to adopt the modern practices that both testers and developers want to master in order to stay relevant in today’s labor market.

CONS

EASIER SAID THAN DONE

For shift-left testing to be a success, an often drastic change in culture must occur first, which requires a team effort. Teams are usually set in their traditional ways of working, and when they consider shifting, they must consider how the methods, processes, skills, tooling, etc. will need to change. Even more important, what will need to happen to get all the roles within the organization to align properly?

RISK OF BOTTLENECKS

Yes, Agile and shift-left aim to eliminate testing as a bottleneck, but agile teams can find themselves stuck waiting in a queue once all of the pieces come together in the performance and user acceptance testing phases, due to the complexity of environments and composite applications. One way to overcome this is to utilize service virtualization. Service virtualization emulates the behavior of essential components that will be present in production, enabling integration tests to take place much earlier in development. This is how you can eliminate that key bottleneck, while also benefiting from eliminating errors earlier on. Along with service virtualization, there are several tools for setting up automated systems and CI, such as Jenkins.

A Worthwhile Undertaking

In the end, shift-left testing is certain to have pros that outweigh its cons. Testers will find themselves delegating some of their work to developers and assigning them more testing activities. In mature teams, the testers become “coaches,” training developers on how to write better code, avoid bugs, and own unit testing. The advantage is that the tester who used to be bogged down writing test cases now has time to delve deeper into the product: working on business cases, penetration testing, performance testing, implementing smarter testing solutions that use artificial intelligence like Testim.io, and so on. This sharing of responsibility for testing leads to a higher level of quality, as more of the bases get covered, quicker!

What do you think? Still not on-board to shift left testing? Or have you managed to do so already?

About the Author: Sofía Palamarchuk


Sofía is the Chief Executive Officer and Chief Product Officer of Abstracta, a software testing services company. With a B.S. in Computer Engineering, Sofía worked for many years in application performance optimization, system monitoring, and load testing for the corporate sector. With a solid background in performance tuning and automation, Sofía has become a business development leader and is responsible for managing all aspects of Abstracta’s US operations as well as its mobile testing tool, Monkop.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Updated Exports Parameters and a new Groups Tab. Check them out and let us know what you think.

Updated Exports Parameters

What is it?

Flexible export parameters allow you to pass variables within a group, test, or collection of tests. Learn more

testim exports_parameters

Why should I care?  

When we use different export parameters across tests for dynamic data validation, it often gets difficult to keep track of which user-defined variables can be used in which groups or tests. For a better user experience and control of variables, we now have three export parameters:

  • Local export: Allows you to pass variables between steps in the same group.
  • Test export: Allows you to pass variables between steps and groups in the same test.
  • Global export: Allows you to pass variables between tests in the same test plan or test suite.

Each one has a clearly defined scope, which makes it a lot easier for users to understand the scope of the different variables used within groups and tests.

New Groups Tab

What is it?

A new Groups Tab has been added to the “+” menu. Learn more

Testim Groups Tab

Why should I care?

You now have the ability to switch back and forth between tests and test runs via the tabs.

Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Result Labels, the Test Run Navigation Icon, and Grid Management. Check them out and let us know what you think.

Result Labels

What is it?

Result Labels allow you to name each remote run. On the “Suite Runs” and “Test Runs” pages, you can easily filter your runs by choosing a result label.

Testim Result Labels

Why should I care?  

You now have the ability to label your runs. This is especially useful when you need to drill down into specific runs based on environment, application version, sprint number, etc. For example, you can label your runs “nightly-scheduler”, “v1.42.34”, “Jenkins”, “Troubleshooting”, or “Staging”.

Result labels can be added to the CLI using the parameter --result-label "<user-defined name of the run>". Learn more

Test Run Navigation Icon

What is it?

The new navigation icon opens the results of a test in a new tab.

Testim test run navigation

Why should I care?

You now have the ability to switch back and forth between the test and its runs via browser tabs.

Grid Management

What is it?

To run your tests remotely, you need to integrate either with the Testim grid, your own local grid, or third-party grids like Sauce Labs and BrowserStack. Learn more

testim grid management

Why should I care?  

Grid management now offers the ability to easily manage multiple grids, providing an abstraction layer for your DevOps. The grid information is automatically added to the CLI based on the already configured grids and will appear in this format: --grid "<grid name>".

Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Hidden Parameters, data-driven testing via config files, and the Element Text condition. Check them out and let us know what you think.

Hidden Parameters

What is it?

When you use parameters in your tests, the values that are passed in at run time are saved and shown in the UI. Sometimes this information is sensitive and you may want the value to be hidden. This is now possible using the hidden parameters option, available on the project settings page of the Testim editor. Learn more

Why should I care?  

You no longer have to worry about revealing sensitive information in your tests. This is especially true if your application is related to banking, security, insurance or any other domain that handles a large amount of sensitive data.

Data-driven testing now supports CSV, databases, and other external sources

What is it?

Users now have the ability to pass data sets at run time via config files. The newly added “overrideTestData” parameter in the beforeSuite hook allows users to pass multiple parameters to multiple tests at the same time. The same parameter can also be used to extract data from external sources such as CSV files, databases, etc.

Why should I care?  

Data-driven testing is no longer restricted to passing a JSON file within the tests. Now you have the flexibility to pass this data at run time through a single config file. You can also extract data from external sources and use it within your tests. Everything happens automatically for you at run time. This makes test data setup much more extensible and reusable. Learn more

Speaking of working with Excel: we already have detailed documentation of an alternate way to import Excel data into Testim. You can learn more here.

Element Text condition

What is it?

Testim provides several predefined conditions (“if statements”) to be used with steps, for example, whether an element is visible or not. We just introduced a new condition that checks whether an element has specific text. Just pass in a string, a regex, or a JS statement (you can use variables too!).

Why should I care?
Now you have the flexibility to add conditions based on element text instead of just checking whether an element is visible on the screen. Learn more

Customers have access to these new features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn or Facebook.

 

Artificial Intelligence (AI) and machine learning (ML) are advancing at a rapid pace. Companies like Apple, Tesla, Google, Amazon, Facebook, and others have started investing more in AI to solve technological problems in healthcare, autonomous cars, search engines, predictive modeling, and much more. Applying AI is real. It’s coming fast. It’s going to affect every business, no matter how big or small. This being the case, how are we as testers going to adapt to this change and embrace AI? Here is a summary of the different things you need to know about using AI in software testing.

Let’s summarize how the testing practice has evolved over the last four decades:

  • In the 1980s, the majority of software development was waterfall and testing was manual
  • In the 1990s, we had bulky automation tools that were super expensive, unstable, and had really primitive functionality. During the same time, different development approaches were being experimented with, like Scrum, XP, and RAD (Rapid Application Development)
  • From 2000, the era of open source frameworks began
    • People wanted to share their knowledge with the community
    • Started encouraging innovation and asking a community of like-minded people to help improve testing
    • Agile became a big thing, XP, Scrum, Kanban became a standard process in the SDLC
    • There was a need for faster release cycles as people wanted more software features delivered faster
  • In the 2010s, it was all about scale: how to write tests fast and find bugs faster
    • Crowdtesting started
      • Encouraging other people to give feedback on the application. Free and Paid services
    • Cloud testing started
      • People started realizing they need more
        • Server space
        • Faster processing
      • Started to realize the problem of maintenance. How expensive it is to buy hardware and software for maintaining your tests
      • Then we have
        • DevOps
        • Continuous Testing
        • CI/CD integration
  • I believe the future will be about autonomous testing using machine learning and AI

 

Basics of AI

Let’s start by demystifying some of the terminology related to AI:

  • Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans
  • Machine Learning (ML) evolved from the study of pattern recognition and computational learning theory (the design and analysis of ML algorithms) in AI. It is a field of study that gives computers the ability to learn without being explicitly programmed
  • Deep Learning (DL) is one of many approaches to ML. Other approaches include decision tree learning, inductive logic programming, clustering, and Bayesian networks. DL is inspired by the neural networks of the human brain: each neuron keeps learning and interconnects with other neurons to perform different actions based on different responses

 

There are three widely used types of ML algorithms:

  • Supervised learning – We give the algorithm the right training data (input/output combinations) to learn from (see the sketch after this list)
    • Examples
      • Give a bunch of e-mails and identify which are spam
      • Extracting text from audio
      • Fill out a loan application and find the probability of the user repaying the loan
      • How to make user click on ads by learning their behavior
      • Recommendation engines on Amazon and Netflix, where customers are recommended products and movies
      • Amazon uses AI for logistics
      • Car Optimization
      • Autonomous cars
  • Unsupervised learning – We give the algorithm a bunch of data and see what it can find
    • Examples
      • Taking a single image and creating a 3D model
      • Market Segmentation
  • Reinforcement learning – Based on the concept of a reward function: rewarding good/bad behavior and letting the algorithm learn from it, e.g., training a dog
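To make “supervised learning” concrete, here’s a toy sketch of the spam example using scikit-learn (the library choice and the tiny dataset are mine, purely for illustration):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "meeting at 10am tomorrow",
          "free money click here", "project status update"]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam (the "right answers")

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)  # turn text into word counts

model = MultinomialNB().fit(features, labels)  # learn from labeled examples
test = vectorizer.transform(["free prize money"])
print(model.predict(test))  # expected: [1], i.e. spam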

 

Real-life AI applications where you can visually see how it works:

  • Quick Draw from Google
  • Weka is an open-source project that applies ML algorithms to data mining

 

What challenges can AI solve?

Let’s discuss the challenges the industry faced while transitioning to Agile and what still remains a challenge.

How can we use AI to solve testing problems?

There are many companies taking multiple approaches to solve different problems related to software testing and automation. Testim.io is one such company.

Testim.io uses Dynamic Locators. The Artificial Intelligence (AI) underneath the platform analyzes, in real time, all the DOM objects of a page and extracts the objects and their properties. The AI then decides the best location strategy to locate a particular element based on this analysis. Because of this, even if a developer changes the attribute of an element, the test still continues to run, which leads to more stable tests. As a result, the authoring and execution of automated tests are much faster and more stable.
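As a toy illustration of the general idea (emphatically not Testim’s actual AI), a locator that falls back across several element attributes might look like this in Python with Selenium:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_resilient(driver, candidates):
    # candidates: (strategy, value) pairs, strongest signal first. Use the
    # first locator that still matches something on the page.
    for by, value in candidates:
        try:
            return driver.find_element(by, value)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"no candidate matched: {candidates}")

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # hypothetical page
login = find_resilient(driver, [
    (By.ID, "login-button"),
    (By.CSS_SELECTOR, "button[type='submit']"),
    (By.XPATH, "//button[contains(., 'Log in')]"),
])
login.click()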

 

Here is a detailed look at how our AI works: https://www.softwaretestinghelp.com/testim-io-tool-tutorial/

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Loops, the Help Tooltip, and several tutorial videos. Check them out and let us know what you think.

Loops

What is it?
Adding conditions to your test lets you control whether some steps will run. You can add conditions to any type of step, including a group step. Now, with Loops, you can execute a group of steps until a condition returns false.

Want to run a set of steps continuously until a certain condition is met? Now you can. Loops give you the ability to run the same set of actions a predetermined number of times or as long as a condition isn’t met.

Why should I care?  
This allows you to reach a specific result directly from the group’s properties, or you can go over the different iterations from inside the group. If one of the iterations fails, you will be taken directly to the failed iteration when entering the group. Learn more

loops

Help Tooltip

What is it?
Authoring tests is easy (well, at least with Testim), but troubleshooting takes time. To help you troubleshoot faster, we’ve added tooltips to steps that fail. These tooltips will guide you through what you should look at and how you might troubleshoot a failed step.

Why should I care?
Now you can troubleshoot failed steps faster and independently.

Tutorial Videos

What is it?
Need help running Testim tests from your IDE? Or doing data-driven testing from an Excel file? These videos, each three minutes or less, will show you how. Check them out.

Filter Suite Runs by Suite

What is it?
A new filter has been added to our Suite runs view. Now you can filter the list of runs according to the name of your suite.

Why should I care?
This will allow you to easily find a particular run you did and compare different runs of the same suite. Learn more

filter by test suite run

Customers have access to these features and videos now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.

When we hear the phrase “record and playback,” a majority of people cringe with fear and skepticism, as they relate it to primitive, unstable, and flaky tests. Organizations have viewed it as a sign of vulnerability in automation and have discouraged teams from doing record and playback tests for many years now. The main reason is that it leads to:

  • Higher maintenance of tests
  • Less stable tests, since they break if any element changes
  • Unclear test coverage
  • Tests are highly coupled

This is not a new phenomenon; it has been the case for the past 20 years, during which the state of automation has evolved by leaps and bounds.

I am not going to refute the above points, as they are true in some cases, but people fail to realize there is a time and place for everything, including record and playback tests. These types of tests are valuable to:

  • Do fuzz testing (a.k.a. monkey testing), which involves recording large amounts of random data through a vast number of valid and invalid actions/assertions and observing the application under test. This helps to uncover issues like memory leaks and unexpected crashes, and to evaluate the system under extreme conditions that may otherwise be hard to reproduce with normal structured automated tests.

Monkey Testing

  • Perform automated exploratory testing, where the user tries out multiple scenarios and records multiple actions while simultaneously learning about the application and the tool used for automated testing.

a b testing

  • Help in load testing, by quickly recording a bunch of tests and simulating thousands of users concurrently performing the same set of recorded actions on the application.

load testing

  • Get the whole team involved in test automation, irrespective of their skill sets.

Now, you may be thinking, “Why am I highlighting the advantages and disadvantages of record and playback tests?” The answer is that we at Testim.io recognized these factors and came up with a hybrid approach that solves the problems of record and playback by building a platform based on Artificial Intelligence (AI).

Testim.io follows a hybrid approach: we give organizations and users the ability to record and play back tests, while at the same time giving users the flexibility to programmatically manipulate these recorded tests. These tasks can be performed easily using the built-in functionality of the platform. It also gives teams the freedom to add their own wrappers around the platform (if needed) using JavaScript and HTML.

hybrid testing approach

To increase the stability of tests irrespective of how they are written, Testim.io uses Dynamic Locators. The AI underneath the platform analyzes, in real time, all the DOM objects of a page and extracts the object trees and their properties. It then decides on the best location strategy to locate a particular element based on this analysis. The more tests you run, the smarter the AI becomes at increasing the stability of the automated tests. So even if your strategy for automation is only record and playback, re-running the recorded tests multiple times helps make those tests stable, even if the attribute of an element changes in the future. As a result, the authoring and execution of tests are much faster.

In summary, there are various approaches to test automation, each with its own merits and demerits. Understanding and using the approach that makes the most sense in the context of the project is crucial for better testing with automation tools, platforms, and frameworks. As these options continue to mature, it will become all the more important to follow the hybrid approach to cater to the different skill sets, needs, and expectations of teams and organizations. The hybrid approach to testing is the new era of test automation.

Curious to see how we implement the hybrid approach? Sign up for our free trial.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. With that said, we’re thrilled to release a few of your most requested features: negative validation, extract value, and integration with test management. Check them out and let us know what you think.

Negative Validation

What is it?

Validations are among the most important steps in our tests. They allow us to see if our app does what we expect it to do. Testim provides a wide range of validation options so you can choose the ones that best suit your needs. Now you can also choose a negative validation.

  • Element Not Visible – Make sure an element is not visible to the user.

Why should I care?  
Use the Element Not Visible validation to check that an element does not exist or is not visible on the page. Use this validation to make sure an element disappeared from the screen or wasn’t shown in the first place. Learn more

negative validation

Extract Value

What is it?

Extract value lets you copy values directly from your application to be used in later steps. For example, if you have text in your application such as a username, name, or account number, use extract value to validate that it appears on another page. You can also use it to extract a value for a future calculation in a later step.

Why should I care?
Now you can use the new parameter in a validation step, set text, custom steps, etc. So if you want to change the element you selected, you don’t have to re-record the step. Learn more

extract value

Integrate Test Management

What is it?

Testim can now sync your test results with TestRail so you can get a side by side view of your manual and automated tests.

Why should I care?

If you’re doing exploratory testing, you can now link your manual and automated tests to your requirements and defects for end-to-end traceability. Learn more

integrate test management

Customers have access to these features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.
