
We recently hosted a webinar on Common Automation Pitfalls and Solutions with an awesome panel consisting of Jonathan Lipps, Philip Lew and me. There were a lot of great discussions on this topic and we wanted to share this with the community as well.

Several topics related to automation were covered in this webinar, including:

  • How does automation fit in the SDLC?
  • Is the Testing Pyramid still relevant in this day and age?
  • How does automation complement manual testing?
  • How does it fit into the CI/CD pipeline?
  • How do you come up with a good test automation strategy?
  • What are the common pitfalls we fall into while doing test automation?
  • What are some good automation practices to keep in mind?
  • What are the new trends in automation?

Automation is not a “one size fits all” solution. It does not solve all testing problems; it is just one aspect of the overall testing process. This webinar helps to instill that mindset. It is also a good resource both for testers starting out with test automation and for experienced testers looking for more ideas to build and maintain robust automation frameworks.

Below are some quotes from the conversation:

Phil: “I think when we talk about automation strategy, a lot of folks start digging down really quickly into tools and frameworks and things like that. But to me, I think that’s the easy part. The hard part is determining why and how we are going to show value.”

Jonathan: “Don’t call something a flake, investigate, figure out exactly what the problem is with it and then either fix that problem in your app or your infrastructure or consider doing away with the test entirely.”

Me: “When you’re trying to build automation frameworks, try to also make it as simple as possible, because in that way you can further avoid a lot of pitfalls in running and maintaining it.”

Below is the recorded video of the webinar-

If you have any feedback or ideas for other webinar topics, we would love to hear that as well. You can email us at raj@testim.io and share them.

Selenium tests have the reputation of easily becoming fragile tests. We’ll look at some common causes of fragile Selenium tests, how you can alleviate some of these issues, and how Testim can provide extra value and robustness to your UI tests.

What’s a Fragile Test?

A Selenium test can be fragile just like any other automated test. A fragile test is one whose result is affected by changes that seemingly shouldn’t influence it. An example that most of us have encountered is when we change one piece of code and break a test that we believe shouldn’t break. But there can be other influencing factors: test data, the current date and time or other pieces of context, or even other tests. Whenever these factors influence the test outcome, we can say we have a fragile test.

Fragile Selenium Tests

Selenium is a tool to automate the browser, giving us the power to write automated tests for our (web) UI. This means we end up exercising more or less our entire application. With that many moving parts, Selenium tests can become fragile more easily than simple unit tests.

Let’s look at some causes of fragile Selenium tests and how we can improve them so they become more robust. The causes we’ll cover are:

  • tight coupling to the implementation
  • external services
  • asynchrony
  • fragile data
  • context

Tight Coupling to the Implementation

Just like unit or integration tests, Selenium tests can be written so that they’re tightly coupled to the implementation. When the implementation then changes, our tests need to change too, even if the public or visual behavior hasn’t changed.

For example, let’s say we’ve written our test to click on a button with ID “login-button.” When this ID is changed to something else, for whatever reason, our test will fail, even if the test case still works fine when performed manually.

This is because the test is tightly coupled to the specific implementation. How can we decouple our test from the implementation?

In this specific example, we could improve the test by having the test call a helper method that knows about the implementation. So instead of clicking the button with the ID “login-button,” we can make it call a method named “Login” and pass the necessary parameters. If all of our tests use this helper method, we’ll only have to change one piece of code when the implementation changes.

Now, let’s take this a step further and group our helper methods inside a class per page. This is the Page Object design pattern. If something on a page changes, we know where to update the implementation for our tests—in the corresponding page object.
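Here is a rough sketch of what such a page object could look like with the official selenium-webdriver bindings for Node.js; the element IDs and field names are made up for illustration:

// login-page.js -- a minimal page object sketch (locators are illustrative).
const { By } = require('selenium-webdriver');

class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }

  // The only place in the test code that knows the concrete locators.
  async login(username, password) {
    await this.driver.findElement(By.id('username')).sendKeys(username);
    await this.driver.findElement(By.id('password')).sendKeys(password);
    await this.driver.findElement(By.id('login-button')).click();
  }
}

module.exports = { LoginPage };

A test then simply calls new LoginPage(driver).login('user', 'secret'), so when the “login-button” ID changes, only this one file needs an update.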

Can we do better? Yes, we can. Thanks to Dynamic Locators and AI, Testim can identify the page element we need, even if its attributes have changed.


This means that the QA team can record tests in their browser and they will still run fine if some underlying implementation detail changes.

External Services

Another common cause of fragile tests, especially when testing a complete application, is that we rely on external services to work as we expect. These can be particularly hard to control. But when they don’t behave as they should for the test, or if they aren’t available, the test fails. And this can happen even if we haven’t changed or broken anything on our side.

In this case, we could make our tests more robust by creating a mock service and having our application call that service. Otherwise, we’re also testing the external service and setting ourselves up for fragile tests.

By creating a mock service, we have full control over the responses. We can define what data is returned, but we can also simulate error responses, timeouts, or slow responses. These are things that are harder to imitate when we’re using the real external service.
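As an illustration of how small such a mock can be, here is a stand-in HTTP service using only Node.js built-ins; the route, port and payload are made up for the example:

// mock-service.js -- a tiny stand-in for an external service (route, port and payload are illustrative).
const http = require('http');

const server = http.createServer((req, res) => {
  if (req.url === '/api/exchange-rate') {
    // Happy path: return a fixed, predictable response.
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ currency: 'EUR', rate: 1.1 }));
  } else {
    // Everything else simulates a server error.
    res.writeHead(500);
    res.end();
  }
});

server.listen(3001, () => console.log('Mock service listening on port 3001'));

Point the application under test at this URL during the test run; adding a delay with setTimeout before responding is an easy way to simulate a slow service.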

Asynchrony

When we’re testing an application, we often have test cases where we need to wait for a certain event. For example, we might have an application that makes an AJAX call and renders the result on the page. We now have to tell Selenium to wait until the result is rendered on the page. But for how long?

Luckily, Selenium has explicit and implicit waits. They allow us to tell Selenium to wait until a certain condition is met. In our example, we would wait until the result of our AJAX call has been rendered. When that has happened, we can continue our test.
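For example, with the selenium-webdriver bindings for Node.js, an explicit wait looks roughly like this; the URL, element IDs and timeout are illustrative:

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://example.com/search');           // illustrative URL
    await driver.findElement(By.id('search-button')).click(); // triggers the AJAX call

    // Explicit wait: block until the result is rendered, up to 10 seconds.
    const result = await driver.wait(until.elementLocated(By.id('search-result')), 10000);
    console.log(await result.getText());
  } finally {
    await driver.quit();
  }
})();

Prefer explicit waits over fixed sleeps: the test continues as soon as the condition is met instead of always waiting the full duration.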

Fragile Data

Most applications that we test with Selenium will require some data storage. Our Selenium tests can become fragile because of the data for two reasons: either the data changes or certain tests continue with data from previous tests.

Data Changes

When we set up a database for our tests, the amount of data we pump into it can become quite large. Consequently, when we make a small change to this test data for one test, we might break another test that depended on this data. This is especially true for larger test suites where it has become unclear which pieces of data are necessary for which tests.

A possible solution is to populate the database with separate data for each test. However, that can lead to a large amount of test data that can be difficult to maintain over time.

A better option is to populate your database as part of each test. You could start with basic data that doesn’t change, and then have each test add the data that it needs.
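A sketch of that idea in a Mocha-style test; the helper functions here are hypothetical placeholders for your own data layer:

// Placeholder helpers -- in a real suite these would talk to your test database.
async function truncateTestTables() { /* clear the tables the tests touch */ }
async function seedBasicData() { /* insert stable reference data shared by all tests */ }
async function insertProduct(product) { /* insert a row for this specific test */ }

describe('product search', () => {
  beforeEach(async () => {
    await truncateTestTables();  // start every test from a clean, known state
    await seedBasicData();
    // Data only this test needs, created right here so the dependency is explicit.
    await insertProduct({ id: 'P-100', name: 'Blue widget', price: 9.99 });
  });

  it('finds the product by name', async () => {
    // ...drive the UI with Selenium and assert on the result...
  });
});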

Interacting Tests

Another potential issue with data is when one test changes the data of another test. Take a simple example where a test changes some identifier of a record (a product ID for example). At the end of the test, it changes it back so that another test can find that same record. However, if the first test fails to revert the change, the second test won’t find the record and will also fail.

This is a case of interacting tests: a test fails because it has been influenced by another test, even though the failing test runs fine when it’s run individually.

The best solution is to give each test its own set of data. We can achieve this by spinning up a new database for each test. This might sound like a complex solution, but we could leverage containers for this: start a container with a database, fill it with the necessary data, run the test and end by shutting down the container. Rinse and repeat for other tests.
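One way to sketch this is to drive the Docker CLI from the test setup. The image, port mapping and password below are illustrative, and a real setup would also wait for the database to accept connections before seeding it:

// Spin up a throwaway PostgreSQL container per test (image, port and password are illustrative).
const { execSync } = require('child_process');

function startDatabaseContainer() {
  const containerId = execSync(
    'docker run -d --rm -e POSTGRES_PASSWORD=test -p 5433:5432 postgres:15'
  ).toString().trim();
  // Note: this sketch omits waiting for the database to become ready.
  return containerId;
}

function stopDatabaseContainer(containerId) {
  execSync(`docker stop ${containerId}`); // --rm removes the container once it stops
}

Each test then starts a container, loads the data it needs, runs, and stops the container again.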

If a separate database for each test isn’t an option, we could look at solutions like in-memory databases or libraries that reset our database to the state before the test (like Respawn for .NET).

Context

The typical example of a fragile test occurs because of context, like tests that depend on certain dates or times. This is true for every kind of test, so it also applies to Selenium tests.

When your tests depend on certain dates or times, they might suddenly fail when the conditions (temporarily) don’t apply. This is often the case on leap days, but depending on your business logic, other dates may cause your test to fail too.

In unit tests, the date and time can easily be mocked out, but it’s often much more difficult in Selenium tests. To make matters worse, you’re dealing with two clocks: the “server” clock (where your back-end application is running) and the browser’s clock.

There are many options here, and the solution for you depends on your specific case.

On the server-side, you could wrap calls to the system clock in a service and inject a mock service anywhere you need it in your tests. In production, you would then inject the service that uses the real clock. There’s also libfaketime for Linux and Mac OS X, which might be an easier route.
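On a Node.js back end, that clock wrapper can be as small as the sketch below; the module and function names are made up for the example:

// clock.js -- the application asks this module for the time instead of calling Date directly.
let timeProvider = () => new Date();

module.exports = {
  now: () => timeProvider(),
  // Only tests call this, to pin the clock to a fixed moment (e.g. February 29th).
  setTimeProvider: (provider) => { timeProvider = provider; },
};

Production code calls clock.now(); a test calls clock.setTimeProvider(() => new Date('2020-02-29T12:00:00Z')) before exercising the date-sensitive behavior.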

You can mock the browser’s date and time using TimeShift.js and inject this in your Selenium tests.

Is It Really That Hard?

All of these problems have solutions, but they’re often non-trivial to implement, especially in legacy applications. Also, many of these solutions require time and effort from developers, which you may not be able to spare. But if the testers (a QA department, the product owner, or even the end-user) can work together with the developers, you can achieve solid tests more easily.

Also, take a look at Testim. It can cover more than Selenium does out of the box and it provides a useful platform for developers, the QA team, and end-users to collaborate.

What if you need to verify the contents of a downloaded Word or Excel file? Or if your application has email or SMS integration? And wouldn’t you like to have the tester create your UI tests instead of taking time from your developers?

Testim provides an easy recording function for non-developers to write tests. This also allows non-developers to report a bug and immediately include a test to reproduce the issue. Traditionally, similar recording tools have created fragile tests because they’re too close to the implementation. But as we mentioned, Testim takes a different approach and leads to more robust tests.

For QA professionals, there are powerful features, like running tests with different data sets, conditional steps, and screenshots. Testim also learns about your tests over time, making your tests more stable the more you run the tests, even when implementation details change.

Testim allows you to create robust UI tests for your application that need minimal maintenance and provide maximum value to developers, QA professionals, and end-users alike.

Author bio: This post was written by Peter Morlion. Peter is a passionate programmer who helps people and companies improve the quality of their code, especially in legacy codebases. He firmly believes that industry best practices are invaluable when working towards this goal, and his specialties include TDD, DI, and SOLID principles.

We were recently at the STP Spring 2019 conference. Testim was one of the sponsors for the event. We were also there to give a talk on implementing ATDD in large-scale agile projects and a workshop on paired Session Based Exploratory Testing. It was an amazing conference in terms of the content, speakers, attendees and the location.

The conference was held at the Hyatt Regency next to the SFO airport and San Francisco Bay. It was a beautiful location and was easily accessible to everyone. As for the conference, there was a great collection of talks and workshops for attendees to learn from and apply the concepts in their daily project activities. The content included different testing strategies/approaches that can be applied to manual/automated testing, applying AI in software testing, different leadership techniques and traits that can be applied in agile testing, testing in DevOps/Continuous Delivery and performance testing.

This is one of the reasons why we have continued to sponsor STP conferences over the past couple of years: they make testing inclusive by bringing people from different countries together in one location to share their experiences and learn from each other.

We met a lot of our friends from SauceLabs, Applitools and other companies at the conference. We also had our own sponsor booth.

NOTE: In case you are interested in test-driving Testim yourself, just fill in your details here and we will hook you up with:

  • Freebies for you and your team
  • Unlimited access to Testim for 14 days
  • 24/7 Customer Support
  • 1 Hour Free Test Design and Automation Consultation with me

As mentioned earlier, on behalf of Testim, I also gave a talk and a workshop. The talk was titled “ATDD (Acceptance Test Driven Development) Is A Whole Team Approach – A Real Case Study”. It was about my real-life experience implementing ATDD in a large-scale agile project. I discussed the problems my team had before implementing ATDD and how I trained the entire team of 25 people on different practices to encourage collaboration and learning and to reinforce the mindset of One Team, One Goal. I also discussed the process changes that happened due to ATDD and how my team could leverage test automation throughout this process, and finally shared the lessons learned from the implementation.

The workshop I did was titled “Unwrapping the box of Paired Testing”. In this workshop, I shared different testing strategies for doing quick tours of your applications, based on my real-life experiences. I discussed what Session Based Exploratory Testing is and used the template I created to do paired exploratory testing on live applications.

Below are some articles I wrote covering some of the details discussed in my talk and workshop:

The power of Session Based Exploratory Testing

A Quick Guide to Implementing ATDD

Finally, we took part in a 5-minute lightning talk series STP hosted called “Testing Stories”, where people shared insightful real-life testing stories in five minutes each.


Overall, we had a great time at the conference and look forward to the next event to meet new people and make lasting relationships. Thanks again to the STP organizers for putting on a great show.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Email Validation, Advanced Scheduler, and For Each Loop. Check them out and let us know what you think.

Email Validation

What is it?

You now have the ability to generate email addresses, send emails and validate the contents of an email within the Testim IDE itself.

Why should I care?

There is no longer a need to use third-party email vendors such as Guerrilla Mail and Shark Lasers for email validation. All of this is handled within Testim, so there is no context switching. The validations can be done with a click of a button, as shown below.

Advanced Scheduler

What is it?

You now have the ability to run tests in parallel, add a results label, choose branches and set a timeout for a scheduler run with the Advanced Scheduler feature.

Why should I care?

With this new feature, you now have more control over your scheduled runs: you can make them run faster by adding parallelism, label each scheduled run, pick the branch you want to run the tests on, and set a timeout value to control when a run should be aborted.

For Each Loop

What is it?

You now have the ability to iterate over any list of similar items and perform repeated actions.

Why should I care?

Iterating over rows in a table, clicking multiple checkboxes in a list, or validating the order of a list of similar items just got a lot easier with the For Each Loop functionality. You simply choose this option and select the element you want to click repeatedly to perform certain validations. It does not matter which element in the list of similar items you select, as the loop always starts from the first element and iterates over its siblings.

Click on this demo test to learn how the For Each Loop functionality works.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Advanced Analytics and the Results Export feature. Check them out and let us know what you think.

Advanced Analytics

What is it?

You will now be able to see an aggregated failure summary report on the suite runs page. The report contains a snapshot of the test failures and a pie chart. To help with debugging, clicking any error group or one of the pie chart segments will filter the view to show only the tests that failed on the selected error. This speeds up the troubleshooting of failed tests by pinpointing the root cause of run failures.

Why should I care?

Sometimes, a single problem in one shared step (for example, a “login” group) can cause many tests to fail. With the release of this feature, you will now be able to quickly hone in on those high-impact problems and dramatically reduce the time it takes to stabilize the suite.

 

Results Export Feature

What is it?

You will now see an export button on the results pages (suite runs, test runs and single suite view). Clicking this button will download the results data as a CSV file, which can then be imported into Excel or Google Sheets.

NOTE: Only the data that is currently presented in the UI will be put into the CSV file. For example, if “Last 24 hours” is selected and the status filter is set to “Failed”, the CSV file will only include failed tests from the last 24 hours.

Why should I care?

You now have the ability to easily share test results across teams, with people who use Testim and people who don’t. This also gives you the flexibility to feed the generated CSV file into any external tool or framework for more customized reporting. The possibilities are endless.

Testim gives users the flexibility to run tests on different screen resolutions. But sometimes this can get confusing when some tests run on one resolution and newly created tests run on a different one. Below are two simple tips to set the screen resolution for a particular test and to apply it globally to all the tests in the project.

Tip 1: To ensure a test runs on a particular screen resolution each time you run it, follow the steps below:

  • Navigate to the properties panel of the setup step (the first step in the test)
  • Click on the “Choose Other” option to select a resolution from the existing config list, OR
  • Click on the edit config icon next to the existing resolution
  • Set the desired screen resolution
  • Give the newly created resolution a name
  • Then click “Save”

 

Tip 2: To apply an existing or new resolution to all the tests in your Test List, follow the steps below:

  • Navigate to the Test List view
  • Click on “Select All”
  • Click on the “Set configuration for selected tests” icon
  • Choose the resolution you want applied to all the tests

NOTE: Test configs can also be overridden at runtime via the --test-config parameter in the CLI and the Override default configurations option in the Scheduler.

The Software Development Lifecycle (SDLC) consists of various roles and processes that have to mesh together seamlessly to release high-quality software. This holds true right from the initial planning phase all the way to production release and monitoring. In terms of roles, we have designers, business analysts (BA), developers, testers, scrum masters, project managers, product owners (PO) and technical architects (TA), who bring varying levels of experience and skills to the project. They collaborate to discuss and implement different aspects of the software. In terms of processes, based on team size, release schedules, availability of resources and the complexity of the software, the amount of process can vary from none at all to strict quality gates at different phases of the SDLC.

What is the Knowledge Gap?

As teams start to collaborate on different features of the software, they often run into situations like these:

  • The PO comes up with a feature to implement and each team member has a different interpretation of how the feature should work
  • The BA writes requirements that are hard to understand and implement, due to a lack of understanding of the technical aspects of the system
  • The TA explains how a feature should be implemented using technical jargon that is hard for designers, POs, BAs and testers to understand
  • The developer builds the feature without paying attention to its testability
  • The tester sits in on code reviews and the developers assume he or she has the same level of technical expertise as them when explaining their implementation
  • The tester does not get enough time to complete testing due to tight release schedules
  • The developer fails to do development testing, making the testers responsible for the quality of the product
  • There is no clear distinction of responsibilities on who does what task in the SDLC; everyone assumes someone else will do the tasks and, as a result, most of them never get done

And so on…

Now, you may ask: why do teams get into the above situations more often than expected? The answer is that there is a knowledge gap in teams. This is especially true when teams have a mix of entry-level, mid-level and expert-level people, and each one makes different assumptions about the skill set, experience and domain knowledge every individual brings to the table.

These gaps can also appear at a more granular level when teams are using different tools. For example, when customers use Testim, we have observed firsthand that different developers and testers think about and use our platform differently.

  • Manual testers see Testim as a time saving tool that helps them quickly create and run stable tests, and as something that can help them in reducing the amount of manual effort it takes to test applications
  • Automation Engineers see Testim as an integrated tool that helps them to do coded automated testing with the help of JavaScript and API testing in one single platform instead of using multiple tools for functional and API testing
  • Developers see Testim as a quick feedback tool that helps them to run several UI tests quickly and get fast feedback on the application under test. They also recognize the ability to do more complex tests by interacting with databases and UI, all in one single platform
  • Release Engineers see Testim as a tool that can be easily integrated in their CI/CD pipeline. They also recognize the ability to trigger specific tests on every code check in; to ensure the application is still working as expected
  • Business and other stakeholders view Testim as a collaborative tool that can help them easily get involved in the automation process irrespective of their technical expertise. They also recognize the detailed reports they get from the test runs that eventually helps them to make go/no go decisions

As we can see, within the same team, people have different perceptions of tools that are being used within the project as well. These are good examples of knowledge gaps.

How do we identify knowledge gaps?

Identifying knowledge gaps in the SDLC is critical not only to ensure the release of high-quality software but also to sustain high levels of team morale, productivity, job satisfaction and the feeling of empowerment within teams. The following questions help to identify knowledge gaps:

  • What are the common problems that occur in each phase of the SDLC?
  • What processes are in place during the requirements, design, development, testing, acceptance and release phases of the SDLC?
  • How often does a requirement change due to scope creep?
  • How effective is the communication between different roles in the team?
  • Are the responsibilities of each team member clearly identified?
  • How visible is the status of the project at any instant of time?
  • Do developers/testers have discussions on testability of the product?
  • How often are release cycles pushed to accommodate for more development and testing?
  • Are the teams aware of what kind of customers are going to use the product?
  • Have there been lapses in productivity and team morale?
  • Is the velocity of the team stable? How often does it fluctuate and by how much?

In terms of tools being used:

  • How are teams using different tools within the project? Are they using tools the right way?
  • Are all the resources sufficiently trained to use different tools within the project?
  • Does one sub-group within a team have more problems in using a tool than others?
  • How effective is a particular tool in saving time and effort to perform different tasks?

Answering these questions as a team helps identify the knowledge gaps and gets everyone thinking about solutions to these problems.

How do we bridge the knowledge gap?

There are 5 main factors that help to bridge the knowledge gap in teams. They are as follows:

  1. Training

Sufficient training needs to be given to designers, developers, testers, scrum masters and project managers to help them do their jobs better, both in the context of the project and in using different tools. With that training, designers understand how mockups need to be designed so that developers can effectively implement the feature; testers can attend code reviews without feeling intimidated and use tools more effectively; developers understand why thinking about the testability of the feature being implemented is important and realize that tools can aid their development testing effort; the scrum master can better manage tasks in the project; project managers can help the team collaboratively meet release schedules and deadlines; and finally, stakeholders can get a high-level overview of the project once they learn what reports are generated from different tools and how to read them.

  2. Visibility

If we want teams to work together seamlessly, like a well-oiled machine in an assembly plant, we need to make the results of everyone’s effort visible to the entire team. It is important for each of us to know how our contributions help the overall goal of releasing a high-quality product on schedule. There are various ways to increase visibility in teams, such as:

  • Checklists – a list of items to be done in each phase of the SDLC helps everyone be aware of what is expected of them. This is especially helpful when the team consists of members with varying skill sets and experience. When the items in the list are marked as DONE, there is no ambiguity about which tasks have been completed
  • Visual Dashboards – another solution is having visual dashboards that give a high-level overview of the project health and status. This helps not only stakeholders but also individual contributing team members. Dashboards can be created on whiteboards, easel boards or in software that is accessible to everyone. Every day, during stand-up meetings, the team should make it a point to review the dashboard and ensure everyone is aware of the high-level project status. For example, in Testim we provide a dashboard showing what percentage of test runs passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps have changed and the most flaky tests in your test suite; all these details can be filtered by the current day, a 7-day or a 30-day period
  3. Clear definition of responsibilities

Responsibilities need to be clearly defined in teams. Each person needs to know why he or she is on the team and what tasks they need to accomplish on a daily, weekly, monthly and quarterly basis. Goals, objectives and expectations for each team member need to be clearly discussed with their respective peers and managers. This prevents most of the confusion that can occur around task completion.

  4. Empowering the team

In this day and age, where individuals are more technical and skilled than ever, what they often lack is some level of empowerment and autonomy. Contrary to the popular belief that there needs to be one leader for the whole project, the leadership responsibility should be divided across the roles in the team. There should be one point of contact each from the design, development and testing teams. These points of contact, who in most cases also lead their respective sub-teams, meet with the other leads and ensure all the sub-teams within the project are on the same page and working towards the same goals and objectives. This way, the whole team is empowered.

  5. Experimentation

Once the gaps are identified, the whole team needs to sit together (usually in retrospective meetings after sprints) to discuss different solutions to the problems. Based on this, the team needs to experiment with different solutions and see what works well and what doesn’t. This constant experimentation and feedback loop makes the team more creative and empowers them to come up with solutions that work for them.

In summary, the “knowledge gap” has been one of the major obstacles preventing teams from reaching their fullest potential. Identifying and reducing these gaps will help increase efficiency and, as a result, lead to faster release cycles with higher quality. Testim can be used as one of the aids in this process. Sign up now to see how it helps to increase collaboration and bridge the gap.

 

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page is moved around, finding the target element may require scrolling even though it wasn’t required when the test was initially recorded. Testim does this automatically with auto scroll.  

NOTE: Users also have the option to disable this feature if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element’s location on the page changes and scrolling is needed. With auto scroll, Testim automatically scrolls to elements outside the viewport.

Scheduler Failure Notifications

What is it?

Users now have the ability to get email notifications on every failed test that ran on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” feature, users receive a notification every time a scheduler run fails, giving you instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get a notification only once, when a scheduler run first fails, and no new email is sent out until the failed scheduler run is fixed.

 

What are Loops?

Loops are one of the most powerful concepts in programming, and they can sometimes be a little hard to understand. At Testim, we made it easier for users to use loops in their tests by building them into the framework itself. This means the user does not need to write code to repeat a group of steps a certain number of times.

Why do we need Loops?

Loops are useful when we need to repeat an action several times in our test.

For example, say we want to check whether the “Post” button on Twitter works consistently. We could have a scenario where we want to click the button 50 times to ensure it works without any problems. In this scenario, are we going to record 50 steps to click the button 50 times, or just record the step once and repeat it 50 times?

This is where Loops can be our best friend. It helps in repeating an action several times without needing to repeat the steps again and again. As a result, we save a lot of time in test authoring and execution.

How to use Loops?

Loops can still be a hard concept to grasp, so here is a quick tip on how to easily use them within Testim.

Let’s say we have the code below:

for (let i = 1; i < 4; i++) {
  // Steps to repeat 3 times
}

What we are doing here is –

  • We are initializing a variable “i” to be equal to 1 before the loop starts. This is our Iterator.
  • We are specifying a condition to repeat the steps a certain number of times by giving “i<4”. In this case we want to repeat a set of actions 3 times.
  • At the end of each iteration we increment our Iterator. In this case, we do “i++”, which increments the variable i from 1 to 2, 2 to 3 and 3 to 4. Eventually we exit the loop when i = 4, since the condition checks whether i < 4 (remember, 4 is NOT LESS than 4 but EQUAL to it)

The same logic applies to Loops in Testim as well, where we have 3 steps:

Step 1 – Initialize the Iterator

Step 2 – Specify a condition to repeat the steps a number of times

Step 3 – Increment the Iterator

This is what we do in this Demo Example:

 

STEP 1 – Initialize the Iterator

 

STEP 2 – Specify a condition to repeat the steps a number of times

  • Group the steps to be repeated into one single reusable component
  • Go to the Properties Panel and add custom code
  • Give a condition to repeat the group a certain number of times

 

STEP 3 – Increment the Iterator

  • Add a custom action as the last step inside the newly created group
  • Increment the Iterator

 

The same steps apply to any set of actions you may want to repeat using our Loops functionality. You can also use do…while loops, which follow a similar concept, as sketched below.
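For reference, here is the same idea written as a do…while loop, which runs the body once before checking the condition; the body comment stands in for your grouped steps:

let i = 1;         // Step 1 – initialize the iterator
do {
  // Steps to repeat 3 times
  i++;             // Step 3 – increment the iterator
} while (i < 4);   // Step 2 – condition that keeps the group repeating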

Also remember, Testim supports loops based on element conditions apart from just custom code.

We recently hosted a webinar on Real Use Cases for using AI in Enterprise Testing with an awesome panel consisting of Angie Jones and Shawn Knight, with me as the moderator. There were a lot of great discussions on this topic and we wanted to share them with the community as well.

Below you will find the video recording, answers to questions that we couldn’t get to during the webinar (this will be updated as and when we get more answers) and some useful resources mentioned in the webinar. Please feel free to share these resources with other testers in the community via email, Twitter, LinkedIn and other social channels. Also, reach out to me at raj@testim.io or to any of the panel members in case of any questions.

Video Recording

 

Q&A

Any pointers to the existing AI tools for testing?

@Raj: I am assuming this question is about things to know about existing AI tools in the market. If that is the case, then first and foremost, we need to figure out what problems we are trying to solve with an AI-based tool that cannot already be solved with other possible solutions. If you are looking to decrease time spent on maintenance, get non-technical folks involved in automation and make your authoring and execution of UI tests much faster, then AI tools could be a good solution. I may be biased here, but it is definitely worth checking out Testim and Applitools if any of the points I mentioned is your area of interest or pain point.

As discussed in the webinar, there are currently a lot of vendors (including us) who use all these AI buzzwords. This may leave you confused or overwhelmed when choosing a solution for your problems. My recommendation is:

  • Identify the problem you are trying to solve
  • Pick different AI tools, frameworks to help solve that problem
  • Select the one that meets the needs of your project
  • Then, proceed with that tool

 

As a tester working with Automation, what should I do to not lose my job?

@Raj: First of all, I believe manual testing can never be replaced. We still need the human mind to think outside the box and explore the application to find different vulnerabilities in our product. AI will be used to complement manual testing.

Secondly, we need humans to train these AI bots to try to simulate human behavior and thinking. AI is still in its initial stages and is going to take another 10-15 years to completely mature.

In summary, I think this is the same conversation we had 10 years ago when automated tools were coming onto the market. Then, we concluded that automated tools help to complement manual testing BUT NOT replace it. The same analogy applies here: AI is going to complement manual testing BUT NOT replace it.

As long as people are open to constantly learning and acquiring different skill sets, automation is only going to make our lives easier while we pivot and focus on other aspects that cannot be accomplished with automation. This mainly involves creativity, critical thinking, emotion, communication and other things that are hard to automate. The same holds true for Artificial Intelligence. While we use AI to automate some processes to save time, we can use that saved time to focus on acquiring other skills and staying abreast of the latest in technology.

So the question here is not so much about automation and AI replacing humans, but about how we stay creative and relevant in today’s society. That is done through constant learning, development and training.

 

Do we have any open source tools on the market for AI testing?

@Raj: Not really. We do have a small library that was added to the Appium project to give a glimpse of how AI can be used in testing — https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk. This is just a small sample of the overall capabilities.

 

What should be possible in testing with AI in 3 years’ time? And how do you think testing has changed (or not)?

@Raj: We live in a golden age of testing, where there are so many new tools, frameworks and libraries available to us to help make testing more effective, easier and more collaborative. We are already seeing the effects of AI-based testing tools in our daily projects with the introduction of new concepts in the areas of element location strategies, visual validation, app crawling and much more.

In 3 years, I can see the following possibilities in testing:

  • Autonomous Testing

I think autonomous testing will be more mature and a lot of tools will include AI in their toolset. This means we can create tests based on actual flows performed by users in production. The AI can also observe and find repeated steps and cluster them into reusable components in your tests, for example login and logout scenarios. So now we have scenarios that are created based on real production data instead of us assuming what the user will do in production. This way, we also get good test coverage based on real data. Testim already does this and we are trying to make it better.

  • UI Based TDD

We have heard of ATDD, BDD, TDD and also my favorite, SDD (StackOverflow Driven Development) 🙂 . In 3 years, we will have UITDD. What this means is that when developers get mockups for a new feature, the AI could potentially scan through the images in the mockups and start creating tests while the developer builds the feature in parallel. By the time the developer has finished implementing the feature, the AI would have already written tests for it based on these mockups using the power of image recognition. We just need to run the tests against the new feature and see whether they pass or fail.

  • AI for Mocking Responses

Currently we mock server requests and responses to test functionality that depends on other functionality which hasn’t been implemented yet, or to make our tests faster by decreasing the response time of API requests. Potentially, AI can be used to save commonly used API requests and responses and prevent unnecessary communication with servers when the same test is repeated again and again. As a result, your UI tests will be much faster, as the response time is drastically improved with AI managing the interaction between the application and the servers.

 

Will our jobs be replaced?

@Raj: Over the past decade, technologies have evolved drastically. There have been so many changes in the technology space, but one constant is human testers’ interaction with these technologies and how we use them for our needs. The same holds true for AI. Secondly, to train the AI we need good data combinations (which we call a training dataset). To work with modern software, we need to choose this training dataset carefully, as the AI starts learning from it and creating relationships based on what we give it. It is also important to monitor how the AI is learning as we give it different training datasets. This is going to be vital to how the software is tested as well. We will still need human involvement in training the AI.

Finally, it is important to ensure that, while working with AI, the security, privacy and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all ‘doom-and-gloom;’ being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the ‘look and feel’ of an on-screen component is ‘off’ or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure safety, security, and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy with the potential for monumental impact.

 

Did you have to maintain several base images for each device size and type?

Has anyone implemented MBT to automate regression when code is checked in?

 

Resources relevant to discussions in the webinar
