Author

Raj Subramanian


Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release two of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page has moved, finding the target element may require scrolling even though scrolling wasn’t required when the test was initially recorded. Testim handles this automatically with Auto Scroll.

NOTE: Users also have the option to disable this feature if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element’s location on the page changes and scrolling becomes necessary. With Auto Scroll, Testim automatically scrolls to elements outside the viewport.

Scheduler Failure Notifications

What is it?

Users can now receive an email notification for every failed test run on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” option, users receive a notification every time a scheduler run fails, giving instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get a notification only once, when a scheduler run first fails; no new email is sent until the failed scheduler run is fixed.

 

What are Loops?

Loops are one of the most powerful concepts in programming, and they can sometimes be a little hard to understand. At Testim, we made loops easier to use by building them into the framework itself. This means you do not need to write code to repeat a group of steps a certain number of times.

Why do we need Loops?

Loops are useful when we need to repeat an action several times in our test.

For example –  Say we want to check whether the “Post” button on Twitter works consistently. We could have a scenario where we want to click the button 50 times to ensure it works consistently without any problems. In this scenario, are we going to record 50 steps to click the button 50 times or just record the step once and repeat it 50 times?

This is where Loops can be our best friend. It helps in repeating an action several times without needing to repeat the steps again and again. As a result, we save a lot of time in test authoring and execution.

How to use Loops?

Loops can still be a hard concept to grasp, so here is a quick tip on how to easily use them within Testim.

Let’s say we have the following code:

for (i = 1; i < 4; i++) {
  // Steps to repeat 3 times
}

What we are doing here is:

  • We initialize a variable “i” to 1 before the loop starts. This is our Iterator.
  • We specify the condition “i < 4” to repeat the steps a certain number of times. In this case we want to repeat a set of actions 3 times.
  • At the end of each pass through the loop we increment our Iterator. Here, “i++” increments i from 1 to 2, 2 to 3, and 3 to 4. We eventually exit the loop when i = 4, because the condition “i < 4” is no longer true (remember, 4 is NOT LESS than 4 but EQUAL).
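The loop above can be made runnable with a counter, confirming the body executes 3 times and that i ends at 4 when the loop exits:

```javascript
// The same for loop, with a counter to confirm the body runs 3 times.
let count = 0;
let i;
for (i = 1; i < 4; i++) {
  count++; // steps to repeat 3 times
}
console.log(count); // 3
console.log(i);     // 4 (the condition i < 4 failed, so we exited)
```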

The same logic applies to Loops in Testim as well, where we have 3 steps:

Step 1 – Initialize the Iterator

Step 2 – Specify a condition to repeat the steps a number of times

Step 3 – Increment the Iterator

This is what we do in this Demo Example, where we:

 

STEP 1 – Initialize the Iterator

 

STEP 2 – Specify a condition to repeat the steps a number of times

  • Group the steps to be repeated in one single reusable component
  • Go to the Properties Panel and add Custom code
  • Give condition to repeat the group a certain number of times

 

STEP 3 – Increment the Iterator

  • Add a custom action as the last step inside the newly created group
  • Increment the Iterator

 

The same steps apply to any set of actions you may want to repeat using our Loops functionality. You can also use do…while loops, which follow similar concepts.
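The do…while variant of the same pattern can be sketched like this; the body runs once before the condition is checked, then repeats while the condition holds:

```javascript
// do...while version of the same three-step pattern.
let step = 1;           // Step 1: initialize the iterator
const performed = [];
do {
  performed.push(step); // the steps to repeat
  step++;               // Step 3: increment the iterator
} while (step < 4);     // Step 2: the repeat condition, checked after each pass
console.log(performed); // [ 1, 2, 3 ]
```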

Also remember, Testim supports loops based on element conditions, not just custom code.

We recently hosted a webinar on Real Use Cases for Using AI in Enterprise Testing with an awesome panel consisting of Angie Jones and Shawn Knight, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share them with the community as well.

Below you will find the video recording, answers to questions we couldn’t get to during the webinar (to be updated as we get more answers), and some useful resources mentioned in the webinar. Please feel free to share these resources with other testers in the community via email, Twitter, LinkedIn, and other social channels. Also, reach out to me at raj@testim.io, or to any of the panel members, with any questions.

Video Recording

 

Q&A

Any pointers to the existing AI tools for testing?

@Raj: I am assuming this question is about things to know when evaluating existing AI tools in the market. If so, first and foremost, we need to figure out what problems we are trying to solve with an AI-based tool that cannot already be done with other solutions. If you are looking to decrease time spent on maintenance, get non-technical folks involved in automation, and make authoring and execution of UI tests much faster, then AI tools could be a good solution. I may be biased, but it is definitely worth checking out Testim and Applitools if any of the points I mentioned is your area of interest or pain point.

As discussed in the webinar, there are currently a lot of vendors (including us) who use all these AI buzzwords. This may leave you confused or overwhelmed when choosing a solution for your problems. My recommendation is:

  • Identify the problem you are trying to solve
  • Pick different AI tools, frameworks to help solve that problem
  • Select the one that meets the needs of your project
  • Then, proceed with that tool

 

As a tester working with Automation, what should I do to not lose my job?

@Raj: First of all, I believe manual testing can never be replaced. We still need the human mind to think outside the box and explore the application to find different vulnerabilities in our product. AI will be used as a complement to manual testing.

Secondly, we need humans to train these AI bots to simulate human behavior and thinking. AI is still in its initial stages and is going to take another 10-15 years to completely mature.

In summary, I think this is the same conversation we had 10 years ago when automated tools were coming into the market. Then, we concluded that automated tools help to complement manual testing but NOT replace it. The same analogy applies here: AI is going to complement manual testing but NOT replace it.

As long as people are open to constantly learning and acquiring different skillsets, automation is only going to make our lives easier while we pivot and focus on other aspects that cannot be accomplished with automation: creativity, critical thinking, emotion, communication, and other things that are hard to automate. The same holds true for Artificial Intelligence. While we use AI to automate some processes to save time, we can use that saved time to acquire other skills and stay abreast of the latest technology.

So the real question here is not about automation/AI replacing humans, but about how we stay creative and relevant in today’s society. That is done by constant learning, development, and training.

 

Do we have some open source tool on the market for AI testing?

@Raj: Not really. We do have a small library that was added to the Appium project to give a glimpse of how AI can be used in testing: https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk. This is just a small sample of the overall capabilities.

 

What should be possible in testing with AI in 3 years’ time? And how do you think testing has changed (or not)?

@Raj: We live in a golden age of testing, where so many new tools, frameworks, and libraries are available to help make testing more effective, easier, and more collaborative. We are already seeing the effects of AI-based testing tools in our daily projects, with the introduction of new concepts in element location strategies, visual validation, app crawling, and much more.

In 3 years, I can see the following possibilities in testing-

  • Autonomous Testing

I think autonomous testing will be more mature, and a lot of tools will include AI in their toolsets. This means we can create tests based on actual flows performed by users in production. The AI can also observe and find repeated steps and cluster them into reusable components in your tests, for example login and logout scenarios. So now we have scenarios created from real production data instead of assumptions about what the user will do in production, and we also get good test coverage based on real data. Testim already does this, and we are trying to make it better.

  • UI Based TDD

We have heard of ATDD, BDD, TDD, and also my favorite, SDD (StackOverflow Driven Development) 🙂. In 3 years, we will have UITDD. What this means is that when developers get mockups for a new feature, the AI could scan the images in the mockups and start creating tests while the developer builds the feature in parallel. By the time the developer has finished implementing the feature, the AI would already have written tests for it based on these mockups, using the power of image recognition. We just need to run the tests against the new feature and see whether they pass or fail.

  • AI for Mocking Responses

Currently we mock server requests/responses to test functionality that depends on other functionality which hasn’t been implemented yet, or to make our tests faster by decreasing the response time of API requests. Potentially, AI can be used to save commonly used API requests/responses and prevent unnecessary round trips to the servers when the same test is repeated again and again. As a result, your UI tests will be much faster, as response times are drastically improved with the AI managing the interaction between the application and the servers.
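The caching mechanism behind this idea can be sketched in a few lines (the AI-driven part is speculative; a plain cache illustrates the saving — `fakeServer` is a stand-in, not a real API):

```javascript
// Identical requests are answered from a cache, so the "server"
// is only hit once per unique request.
const cache = new Map();
let serverCalls = 0;

function fakeServer(url) {   // stands in for a real network round trip
  serverCalls++;
  return `response for ${url}`;
}

function cachedRequest(url) {
  if (!cache.has(url)) cache.set(url, fakeServer(url));
  return cache.get(url);
}

cachedRequest("/api/items");
cachedRequest("/api/items"); // served from the cache, no server call
console.log(serverCalls); // 1
```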

 

Will our jobs be replaced?

@Raj: Over the past decade, technologies have evolved drastically. There have been many changes in the technology space, but one constant is human testers’ interaction with these technologies and how we use them for our needs. The same holds true for AI. Secondly, to train the AI we need good data combinations (which we call a training dataset). So, to work with modern software, we need to choose this training dataset carefully, as the AI learns from it and creates relationships based on what we give it. It is also important to monitor how the AI is learning as we give it different training datasets. This is going to be vital to how the software is tested as well. We would still need human involvement in training the AI.

Finally, it is important to ensure while working with AI the security, privacy and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all doom and gloom; being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the look and feel of an on-screen component is “off” or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure the safety, security, and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy, with the potential for monumental impact.

 

Did you have to maintain several base images for each device size and type?

Has anyone implemented MBT to automate regression when code is checked in?

 

Resources relevant to discussions in the webinar

It’s no surprise that mobile is big and only getting bigger. There are currently 8.7 billion mobile connections in the world, more than a billion more active devices than there are people. Ensuring seamless user experiences for mobile users requires extensive software testing. However, testing mobile is much easier said than done – until now!

We are thrilled to unveil Testim Mobile. SIGN UP for our Mobile Beta program to experience super-fast automated test creation and execution with minimal maintenance.

For those who are familiar with Testim, you now get all of the same AI capabilities you are accustomed to in the web version, available on mobile. With this release, we have become the only fully integrated AI test automation platform that supports Web, Mobile Web, and Native applications.

For those who are not familiar with Testim, we help engineering teams of all sizes across the globe improve test coverage, speed up release cycles and reduce risk in their delivery process. Companies like Wix, Globality, Autodesk, Swisscom and LogMeIn use Testim to integrate testing into their evolving Agile software development processes.

Playback Tests on Real Devices or Emulators

To get the true behavior of the application, it is recommended to playback tests on real devices, and not settle for an emulator. The main challenge with real device coverage is the plethora of available devices and configurations, which can include screen resolutions, performance levels, different firmwares from manufacturers, different operating systems and much more.

With Testim, you can seamlessly play back tests directly on a physical device, not just on an emulator.

Record actions in real time

When the user performs gestures such as tap, text inputs, scroll etc on the device/emulator, they are recorded in real time in the Testim editor. This allows you to see exactly which actions are being captured.

Get screenshots for each and every step

As we start recording different flows and creating multiple tests, it becomes critical to visually see the UI elements of the app when recording and playing back the tests. Testim provides screenshots for each and every step that is recorded. Once the same test is played back, you can see the baseline and the expected image side by side. This way you know exactly what is happening under the hood.

Assertions

One of the most important capabilities of any automation framework is its ability to do assertions, and this holds true for mobile as well. With Testim, you can perform various validations to ensure the UI elements on the page are displayed as expected.

Feedback on each step

The user also gets feedback on each step, with a “Green” or “Red” icon in the top-left portion of each step indicating whether it Passed or Failed, as shown below.

Ease of Use

The downfall of most automation frameworks is the complexity of setting up the physical device with the framework, recording/playing back tests, and maintaining and organizing tests into meaningful components. With Testim, everything is handled within the Testim editor, just as it is for users accustomed to Web and Mobile Web automation with Testim. There is no context switching between different windows, tabs, tools, or frameworks. Just pair the device and start recording. It is that simple; we take care of the rest. All the features you have been using, such as branching, version control, test suites/plans, and test runs, work the same in Testim Mobile.

In addition, you can maintain your Web, Mobile Web and Native automation projects all in one place and easily switch between projects and manage your tests.

Robust CLI actions

Users have the ability to create custom Node.js scripts that can be executed from the CLI (Command Line Interface). This gives users the flexibility to not only perform UI-level validations on the mobile application but also to perform more advanced actions such as database validations.
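A validation beyond the UI might look something like the hypothetical Node.js sketch below. The record and field names are illustrative; a real script would fetch the row from a database rather than hard-code it:

```javascript
// Hypothetical non-UI validation of the kind a custom Node.js script
// could perform, e.g. checking a value that came from a database query.
function validateRecord(record) {
  return record.status === "active" && record.count > 0;
}

const record = { status: "active", count: 3 }; // stands in for a DB row
console.log(validateRecord(record)); // true
```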

Inserting Custom Code

Custom JavaScript code can be added to perform additional validations if needed, for example manipulating or validating parameters.

Control structures built within Testim

Save time by using the while loop to repeat steps. This eliminates the need to duplicate steps or write code to perform certain actions on your mobile application repeatedly. This feature helps to build simple and robust tests.

Multi Application Recording

With Testim Mobile you have the flexibility to record actions from multiple applications within the same test. This means you can easily switch between applications during recording and perform end-to-end testing without any additional configuration.

 

Upcoming features in future releases

We will be releasing more features for Testim Mobile in the upcoming months. Here is a quick sneak peek at some of them:

Support for advanced gestures

Users will be able to record/playback more gestures on their applications that includes double tap, nested scroll, multi-touch, long press menus and much more.

Pixel level validation

Testim provides step-by-step baseline and expected images out of the box as a basic means of debugging your test runs. In addition, you will also be able to do visual validation. This feature is helpful when you need to ensure the UI of the mobile application is exactly what it is supposed to be, across different devices.

Support for multiple IMEs

With multiple IMEs (Input Method Editors) available in the Android ecosystem, it makes sense to give users the flexibility to record/play back tests with the desired IME. We will soon be able to use alternative input methods such as on-screen keyboards, speech input, etc. on our devices and record/play back tests with Testim Mobile.

 

Get started with Mobile Native App Automation

Don’t take our word for it: test-drive the Mobile Beta and find out firsthand how AI-driven mobile automation can be super fast, stable, and simple.

It is also worth mentioning that users who sign up for the program get a personalized onboarding process from our team, which includes 24/7 customer support. The best part is that everything is FREE. We are mainly interested in your feedback and would like to showcase our AI-powered mobile automation framework.

So, what are you waiting for? Enroll in our beta program and take your first step toward easy and seamless mobile native app test automation using AI.

As you may already know, Testim provides the flexibility to extend the functionality of our platform by giving teams the ability to add their own JavaScript code. That being said, there may be a situation where you need to use built-in JavaScript properties and methods within the Testim JavaScript editor. For example, say you want to find out if an element is enabled on the page; you could do this:

if (!element.disabled) {
  return true;
}

Now how do you know that you could use the built-in disabled property here? The answer is simple. Just follow the steps below:

  1. Open any web page on your Chrome browser
  2. Right click on an element and select “Inspect”
  3. Navigate to Console tab
  4. Start typing “$0.”
  5. Then observe all the built-in properties and methods that Chrome supports

The same properties and methods are supported by Testim as well. Additionally, we support commonly used JavaScript methods such as reload(), split(), trim(), and much more. For more JavaScript examples, check out our help document here.
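Two of the string methods mentioned above are easy to try anywhere JavaScript runs (reload() belongs to the browser’s location object, so it is left out of this runnable sketch):

```javascript
// trim() removes surrounding whitespace; split() breaks a string
// into an array around a separator.
const raw = "  hello,world  ";
const trimmed = raw.trim();       // "hello,world"
const parts = trimmed.split(","); // [ "hello", "world" ]
console.log(trimmed);
console.log(parts.length); // 2
```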

Authoring and execution of tests are important aspects of test automation. Tests should be simple to write, to understand, and to execute across projects. The chosen platform should give you the flexibility to both record and play back tests and to write custom code to extend the functionality of the automation framework. I recently came across Angie’s article on 10 features every codeless test automation tool should offer. She does a great job of discussing the different aspects of test automation that need to be an integral part of any automation platform.

Angie’s breakdown appeals to the heart and soul of what we set out to do when we built Testim. Starting from her explanation of why record and playback tools fail (we discuss some of these issues in this post as well) to the different challenges mentioned in her article.

We are really proud of what we do at Testim and wanted to address how we handle some of the important aspects of test automation pointed out in her article. We also highlight how we use AI to solve the “maintenance” issue, which is arguably the hardest challenge of test automation.

 

  1. Smart element locators

Testim’s founder (Oren Rubin) coined the term “Smart element locators” in 2015, when he gave the first demo of Testim. He showed us how AI can be used to improve locators. Using hundreds of locators instead of a single static one, Testim can learn from each run and improve over time.

With static locators (e.g. CSS-Selector/XPath), we use only one attribute of an element to uniquely identify it on a page and if this changes, the test breaks and as testers, we end up spending a considerable amount of time troubleshooting the problem and fixing it. Based on research, about 30% of testers’ time is consumed in just maintaining tests. Can you imagine the opportunity cost associated with this effort?

A tester’s time is valuable and better spent actually exploring the application and providing information to help stakeholders make informed decisions about the product. With AI-based testing we can overcome this problem by using dynamic locators: instead of a single attribute, we use multiple attributes of an element to locate it on the page. This way, even if one attribute changes, the element can still be located with the help of the other attributes the AI has already extracted from the DOM. Testim is based on this dynamic location strategy, making your tests more resilient to change.
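The idea behind multi-attribute matching can be sketched as a scoring function. This is an illustration of the concept, not Testim’s actual algorithm, and the attribute names are made up:

```javascript
// Score each candidate element by how many recorded attributes still match,
// and pick the best. Even if one attribute (here, the id) changed,
// the element is still found.
function bestMatch(candidates, recorded) {
  let best = null;
  let bestScore = -1;
  for (const el of candidates) {
    let score = 0;
    for (const [attr, value] of Object.entries(recorded)) {
      if (el[attr] === value) score++;
    }
    if (score > bestScore) {
      bestScore = score;
      best = el;
    }
  }
  return best;
}

const recorded = { id: "buy-btn", className: "btn primary", text: "Buy" };
const candidates = [
  { id: "buy-button", className: "btn primary", text: "Buy" }, // id changed
  { id: "help", className: "link", text: "Help" },
];
console.log(bestMatch(candidates, recorded).text); // "Buy"
```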

  2. Conditional waiting

Testim supports conditional waits, which can be added with the click of a button. We provide built-in wait functionality based on element visibility, text (or regex), code-based (custom) waits using JavaScript, waits based on a downloaded file, and of course the hardcoded “sleep” (which is generally not advisable in tests unless you have no other option).
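Generically, a conditional wait is a polling loop: check a condition, sleep briefly, and give up after a timeout. The sketch below illustrates the concept in plain JavaScript; it is not Testim’s API:

```javascript
// Poll a condition until it returns true or the timeout elapses.
async function waitFor(condition, timeoutMs = 2000, intervalMs = 50) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Timed out waiting for condition");
}

// Usage: the condition becomes true after 100 ms, well before the timeout.
let visible = false;
setTimeout(() => { visible = true; }, 100);
waitFor(() => visible).then((ok) => console.log(ok)); // true
```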

  3. Control structures

Testim supports “if” statements and “loops”. Looping can be applied at the test level by parameterizing your tests with different datasets (aka Data Driven), or on a specific subset of actions (something super hard that only Testim supports). The conditions for stopping a loop can be simple and predefined (such as an element or text being visible) or more complex with custom code. This has been an integral part of Testim since the first version.

  4. Easy assertions

Assertions are among the most widely performed actions in test automation: you want to validate an element based on different conditions. For example, if a particular element needs to appear on a particular page of your website, we need an assertion to validate the presence of that element. With Testim, we made it easy for users to add assertions with a single mouse click and built all of them into the tool itself.

Users have various validation options that include:

  • Validate element visible
  • Validate element not visible
  • Validate element text
  • Validate via API call
  • Validate file download
  • Validation via custom JS code running in the browser (great for custom UI)
  • Validation via custom JS code running in node.js (great for pdf and DB validations)
  • Visual validation – via the Applitools integration

Testim integrates seamlessly with Applitools, a Visual Assertion platform, which allows you to validate not only the text, but also its appearance like font and color.
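Under the hood, a text assertion boils down to comparing an element’s text with an expectation. The sketch below is illustrative only; in Testim these validations are added with a click rather than code, and the element here is a plain object:

```javascript
// Generic text validation: throw if the trimmed text differs from expected.
function validateElementText(el, expected) {
  const actual = el.textContent.trim();
  if (actual !== expected) {
    throw new Error(`Expected "${expected}" but found "${actual}"`);
  }
  return true;
}

// A plain object stands in for a DOM element.
console.log(validateElementText({ textContent: "  Checkout  " }, "Checkout")); // true
```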

  5. Modification without redo

Testim not only supports easy modification of steps; the platform also supports full version control, including creating branches and auto-sync with GitHub.

In Testim, you can add a step at any point in your test.

You can also easily delete or modify any step in your test.

  6. Reusable steps

Testim supports reusability by letting you group several steps together and reuse the same group in another test (passing in different parameters, e.g. for a login).

For example, logging into an application is one of the most commonly repeated sequences in test automation. In Testim, you can create a reusable “Login” group by selecting the steps you want to group together and clicking “Add new Group” as shown below.

 

 

Not only does Testim support the creation of reusable components as explained above, the platform also maximizes the use of reusable components through a feature called Group Context. Imagine you have one or more components (e.g. a gallery of images) within a page or across several pages, and you need to easily state which component to perform the group of actions on. Although this is relatively doable in code (via the Page Object design pattern), it was extremely hard to implement in low-code tools until the release of Group Context. Testim is currently the only codeless platform that supports this.

  7. Cross-browser support

Testim uses Selenium under the hood and supports test execution on ALL browsers, including mobile web and mobile native (iOS/Android), which is currently in private beta. Sign up for a free trial to execute tests on different browser combinations, including Chrome, Safari, Edge, Firefox, and IE11.

  8. Reporting

It is vital to get quick feedback on your test runs, especially for root cause analysis. The reports generated should be easy to read and need to have relevant information on the state of the test. In Testim, there are different levels of reporting to help users know exactly what happened within each test run. Some of the features worth mentioning include:

  • Screenshots

While each test is recorded, the platform takes screenshots of all the Passed and Failed results for each step. As a result, users find it easier to troubleshoot problems and understand what happens under the hood.

  • Feedback on each step

The user gets feedback on each Passed or Failed step in a test via a “Green” or “Red” icon in the top-left portion of the step, as shown below.

  • Entire DOM captured on failure

On failure, the user also has the option of interacting with the real HTML DOM and seeing which objects were extracted during the run.

  • Test Logs

Logs are a rich source of information on what happened underneath the AUT. Testim provides test logs when the user runs tests on the grids. The option can be found in the top section of the editor.

  • Suite and Test Runs

We have suite and test run views that enable the user to get granular details on each test run: when the test ran, what the result was, the duration of the run, the level of concurrency, which browser the test ran on, and much more. We also have filters to drill down based on different options.

 

  • Reports

We make it easy to get a high-level health check of your tests using the Reports feature. No other testing platform currently gives this level of detail about your tests, all in one single dashboard. We give details that include the percentage of test runs that passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps changed, and the most flaky tests in your test suite, and all these details can be filtered by the current day, a 7-day, or a 30-day period.

  9. Ability to Insert Code

Testim gives organizations the flexibility to extend the functionality of our platform using JavaScript, either by running it in the browser, where Testim can help by finding elements for you in the DOM (aka dependency injection), or by running the code on Node.js, which allows loading many common libraries (including for accessing databases or inspecting downloaded PDFs).

In addition, Testim has test-hooks to run before or after each test/suite.

For example, if you want to validate a particular price on a web page, you can grab the price, convert the string to a number, and do the necessary validation. In the example below, we validate that the price is not over $1000.
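That price check can be sketched like this (the function name and price strings are illustrative; in a real test the text would come from the page):

```javascript
// Strip currency symbols and thousands separators, convert to a number,
// and check the price against a limit.
function priceNotOver(priceText, limit) {
  const price = Number(priceText.replace(/[^0-9.]/g, ""));
  return price <= limit;
}

console.log(priceNotOver("$949.99", 1000));   // true
console.log(priceNotOver("$1,250.00", 1000)); // false
```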

  10. CI/CD Integration

Testim easily integrates with ALL CI servers (e.g. Jenkins, CircleCI, VSTS) by literally copying the automatically generated CLI command and pasting it into your build server. We integrate with ALL 3rd-party hosted grids that support Selenium and Appium (Amazon/SauceLabs/BrowserStack…). We also support locally hosted grids and provide our own Testim grids.

Apart from all the above-mentioned features that help build stable automated tests, we also have a notable feature that makes a tester’s life a lot easier by saving time when reporting bugs.

 

Capture Feature

One of the most time-consuming aspects of testing is bug reporting: after finding a bug, we need to report it to the developer with relevant information to speed up the troubleshooting and fixing of issues.

With Testim, you can do this with a single click with the help of our Chrome extension. All the details related to the bug are automatically generated for you in a matter of seconds.

 

In summary, we wanted to build a tool that helps with authoring, maintenance, and collaboration, which we consider the 3 pillars of test automation. Hopefully this post helps to highlight that.

We would also love to hear your feedback about our tool, so feel free to reach out to us, not only by trying out Testim for FREE but also by getting a free consultation on test design and test automation for your existing testing frameworks and practices. Remember, as Steve Jobs said, “Simplicity is the ultimate sophistication”, and this is the basis on which Testim was created.

Testim has been #1 in innovation among its competitors for several years now, as can be seen from the new features we have been constantly releasing to make automation faster, more stable, and much more collaborative than ever. Continuing in this vein, we are excited to bring you our next big feature, which we are calling Group Context. Imagine you have one or more components (e.g. a gallery of images) within a page or across several pages, and you need to easily state which component to perform the group of actions on. Although this is relatively doable in code (via the Page Object design pattern), it was extremely hard to implement in codeless tools until the release of Group Context. Testim is currently the only codeless platform that supports this.

For those of you not familiar with Testim, let’s start by defining “Context”.

“Context” is key in real life and in coding: you provide the necessary information in order to perform an action. For example, say you make a reservation at a restaurant for a party of four; you would be required to provide a name for the reservation. Here the name is the “contextual information” and making the reservation is the “context”. Similarly, in test automation, it is important to know the context of elements at the DOM level in order to perform an action.

Context in Test Automation

Context is all the more important when you have reusable components in your automation suite. For example, let’s take the web page below.

 

It is a simple web page containing a gallery of items: in this case, names of places with descriptions and an option to book them as part of a reservation. Each item in the gallery has similar elements, such as an image, text, and a button, because the code for generating those instances is the same. So, at a high level, all the gallery items are exactly the same except for the data shown in each one.

Say we create a reusable group to:

  1. Validate whether an image and text are present in gallery item 1, which is “Madan”
  2. Validate whether there is a “Book” button in the gallery item
  3. Click on the “Book” button
  4. Do some validations in the “Checkout” screen

It would look something like this:

 

Now, what if I want to use the group on item 2 of the gallery, which is “Shenji,” instead of “Madan” (the 1st item)?

 

Typically, we would have to create another reusable group to make it work for gallery item 2, which is time-consuming and does not make sense when the whole gallery shares the same DOM structure.

When using code (e.g. Selenium), you can use the Page Object design pattern, and just pass the context in the constructor, either as a locator or a WebElement (see slides 40 and 41 here).
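To make the Page Object idea concrete, here is a minimal sketch of a page object that takes its context in the constructor. The class name and CSS selectors are hypothetical and chosen for illustration; they are not Testim’s or Selenium’s actual API.

```javascript
// Hypothetical sketch of the Page Object pattern with context passed in the
// constructor: each GalleryItem scopes its element lookups to a root selector,
// so one page object definition works for any item in the gallery.
class GalleryItem {
  constructor(rootSelector) {
    this.root = rootSelector; // the "context" for every lookup below
  }
  imageSelector() {
    return `${this.root} img`;
  }
  bookButtonSelector() {
    return `${this.root} button.book`;
  }
}

// The same reusable page object, applied to two different gallery items:
const madan = new GalleryItem('.gallery .item:nth-of-type(1)');
const shenji = new GalleryItem('.gallery .item:nth-of-type(2)');
console.log(madan.bookButtonSelector());  // .gallery .item:nth-of-type(1) button.book
console.log(shenji.bookButtonSelector()); // .gallery .item:nth-of-type(2) button.book
```

In a real Selenium suite, the constructor would typically receive a locator or a WebElement rather than a raw selector string, but the principle is the same: only the context changes, not the steps.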

In codeless/low-code platforms, this wasn’t possible until the release of our new feature, “Group Context.” Now you can maximize reuse by assigning entire groups to different elements of the page, and across pages, with a single click. Taking the above example, you do not have to reassign all the elements/properties of the steps within the group when you want to perform the same set of actions on another item in the gallery with the exact same DOM structure. This means we can use the same group on gallery item 2, which is “Shenji,” without any rework, by just choosing the Context -> Custom option from the properties panel, as shown below.

 

Use Cases for Group Context

Group Context can be used in many other scenarios, such as:

  • Repeating elements: when you have similar elements repeating in the page and want to execute the same steps on all of them.
  • Table rows: when you want to execute the same steps on different rows in a table.
  • Tabs or frames: when you want to reuse a group of steps recorded on one tab on other tabs in the same or a different page.
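The table-rows case above can be sketched in the same spirit: the steps stay fixed and only the row’s root selector (the context) varies. The table structure and selector names below are made up for illustration.

```javascript
// Hedged sketch: reusing one set of step selectors across table rows by
// varying only the row's root selector, i.e. the context. The table markup
// assumed here (table.bookings, td.name, button.book) is hypothetical.
function rowContext(rowIndex) {
  const root = `table.bookings tbody tr:nth-of-type(${rowIndex})`;
  return {
    name: `${root} td.name`,
    bookButton: `${root} td.actions button.book`,
  };
}

// The same "steps" (here, just selector lookups) applied to rows 1..3:
for (let i = 1; i <= 3; i++) {
  console.log(rowContext(i).bookButton);
}
```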

 

Summary

We work hard to constantly improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. We strongly believe this feature will benefit teams and help make automation much smarter. Group Context is currently in beta, and we would love your feedback.

Please use our chat support to give us feedback or e-mail us at product@testim.io. Happy Testing!!!

We recently hosted a webinar on AI and its influence on test automation with an awesome panel consisting of Jason Arbon, Oren Rubin, and Dionny Santiago, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share them with the community as well.

Below you will find the video recording of the webinar, the list of questions and answers that we couldn’t get to during the webinar, and different resources to learn about AI and testing. Please feel free to reach out to me at raj@testim.io, or to any of the panel members, in case of any questions.

Video Recording

 

Q&A

I am a performance engineer and am working on AI for quality gates in load testing results… that needs to be a high priority for the “future,” which is “now.” How do you think bots can be used in this area?
@Jason: UI bots can help generate user-like load directly via the application. Though, for most load-testing problems, I would recommend something like Charles Proxy, or internal ways to spin up load, and only use the ‘expensive’ UI-based bots to see how the app works E2E for the user under load.

 

With rapid changes in agile requirements, how do we make the machines learn or adapt to the changes every time?
@Jason: The AI bots most folks (vendors) are working on these days will auto-discover new features in the app and exercise them. At test.ai we have a set of 5k+ tests written for common flows in apps, so if you add a new feature to your app that looks similar to something on another app, the bots will auto-apply that test case to the new build.

@Raj: The more tests you run, the smarter the AI becomes at detecting changes in the application. It will start detecting changes in the application’s UI and element attributes, and start adapting the tests automatically to these changes thanks to its self-healing mechanism. It can identify flaky tests, optimize waits between steps, and also proactively fix issues for us before they occur.

 

With the BDD model, shift left, and the demand for testing at the unit and service/API layers, where does E2E testing stand?
@Jason: Dionny’s work can help generate valid permutations of existing API test case parameters/flows. Also, clustering methods can help identify misbehaving servers via logs of activity during API testing or production.

 

Where can we find Dionny’s paper on AI testing, you were talking about?
Dionny’s paper

 

A lot of automation test scripts fail due to test data issues. Can we use AI to tackle those kinds of issues?
@Jason: That’s a broad category of failure types, but yes, ‘AI’ can be taught to auto-associate correct data with the right application states. Google also shared some ‘test selection’ findings using ML to help decide what to do with all those failing tests: https://testing.googleblog.com/2018/09/efficacy-presubmit.html

 

I wanted to understand what AI in testing really means. Does it mean a machine will perform the testing? If so, will the machine already be given the scenarios that need to be tested? Is it the same as automation testing, since there we also don’t need manual intervention?
@Jason: Generally, AI in testing means applying machine learning / AI techniques to test applications. There is also ‘Testing AI,’ which refers to approaches to test AI/ML-based products and features. There are a variety of ways to apply AI to testing: some leverage pre-written test cases, and the AI is used to automatically execute the tests, create variations, or analyze the results. Some AI-based systems are trained to mimic general human behavior and can execute basic ‘flow testing’ for many apps, without pre-written test scenarios. The bots we build at test.ai can read BDD/Scenarios and execute them against a set of applications. As for the need for human intervention: like automation, there is still plenty of need for human intervention in AI-based testing approaches these days 🙂 Humans gather oracle/training data for the AI, humans measure the correctness of the ‘AI,’ and humans evaluate the significance of the AI-based test results as they relate to business/shipping decisions.

@Raj: In addition to what @Jason was saying, I wanted to mention that AI can have a positive impact on several facets of software testing, especially test automation. So many different tools and frameworks have come up trying to solve different kinds of problems related to test automation, but one problem that has been a constant challenge to date is maintenance. One of the main reasons for this is the use of static locators. With static locators, we use only one attribute of an element to uniquely identify it on a page, and if this changes, the test breaks, and as testers we end up spending a considerable amount of time troubleshooting the problem and fixing it. Based on research, about 30% of testers’ time is consumed just maintaining tests. Can you imagine the opportunity cost associated with this effort? It is mind-blowing. Testers’ time is valuable and better spent actually exploring the application and providing information to help stakeholders make informed decisions about the product. With AI-based testing we can overcome this problem by using dynamic locators. Dynamic locators are a concept where we use multiple attributes of an element to locate it on the page instead of a single attribute. This way, even if one attribute changes, the element can still be successfully located with the help of the other attributes that have already been extracted from the DOM by the AI.
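A toy illustration of the dynamic-locator idea: instead of depending on one static attribute, score candidates by how many recorded attributes still match. The scoring scheme below is invented for illustration and is not Testim’s actual algorithm.

```javascript
// Toy sketch of "dynamic locators": score each candidate element by the
// fraction of recorded attributes it still matches, then pick the best one
// above a threshold. Invented for illustration, not a vendor's real algorithm.
function matchScore(recorded, candidate) {
  const attrs = Object.keys(recorded);
  const matches = attrs.filter((a) => candidate[a] === recorded[a]).length;
  return matches / attrs.length;
}

function locate(recorded, candidates, threshold = 0.5) {
  let best = null;
  let bestScore = 0;
  for (const c of candidates) {
    const score = matchScore(recorded, c);
    if (score > bestScore) {
      best = c;
      bestScore = score;
    }
  }
  return bestScore >= threshold ? best : null;
}

// The button's id changed in a new build, but class and text still match
// (score 2/3), so the element is still located:
const recorded = { id: 'post-btn', class: 'btn primary', text: 'Post' };
const candidates = [
  { id: 'tweet-btn', class: 'btn primary', text: 'Post' },
  { id: 'cancel-btn', class: 'btn', text: 'Cancel' },
];
console.log(locate(recorded, candidates).id); // tweet-btn
```

A static locator keyed only on `id` would have failed here; the multi-attribute score degrades gracefully instead.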

 

Can you guys elaborate on how AI-based tests learn acceptance criteria that normally have to be defined by humans?
@Jason: Depends on the AI system being used.  The bots at test.ai execute human-written test cases.  Acceptance tests are written at a very high level of abstraction, and the bots do all the test execution.  Reporting is as normal for test automation.  In summary, just tell the bots what your acceptance criteria are.

 

What does an automation tester need to learn to align with the future of AI in testing?
@Jason: A good set of links to learn from is here: https://www.linkedin.com/pulse/links-ai-curious-jason-arbon/ . You can also start to leverage and experiment with “AI” via the current testing vendors. If you are already familiar with Selenium/Appium-style testing, there is a new open-source API that uses AI for element selection that you can use today: https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk

 

Is AI platform dependent, e.g., desktop applications versus web/mobile?
@Jason: Depends on the AI approach/solution. Many are platform dependent. The bots we build at test.ai, though, are not platform dependent, which is a key feature. The bots are platform-independent because the machines are trained to recognize UI elements much like humans do, and humans are platform independent ;).

 

Is there an open-source project that allows applying AI to locate elements?
@Jason: Yes, for appium today and likely Selenium soon:  https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk

 

How can AI be used for improving test coverage?
@Jason: AI can help generate many more valid test scenarios than a human could create. AI also enables reuse of test artifacts, so a test written for one app can also execute on a similar app with no human intervention.

@Raj: Now with AI, you can also connect your production apps to the testing cycle. This means we can create tests based on actual flows performed by users in production. Also, the AI can observe and find repeated steps and cluster them into reusable components in your tests, for example login and logout scenarios. So now we have scenarios that are actually created based on real production data instead of us assuming what the user will do in production. In this way, we also get good test coverage based on real data.

 

Will AI testing replace Selenium, Appium, and all the other tools and technologies?
@Jason: Asymptotically.

 

Is AI really better?
@Dionny: Traditional testing teams focus on either a single app, or a small set of apps; whereas, AI can learn from millions of different examples and apps. The more data we show the AI, the better it gets. Also, the AI never gets tired!

 

What are the immediate benefits of using AI?
@Raj: Apart from the benefits already mentioned in the answers above, AI can also help increase team collaboration. The field of test automation has historically been focused on technical testers. This can change with AI: non-technical team members no longer need to fear code and technology; rather, AI will help bridge the gap between technical know-how and the authoring and execution of tests, making life easier for teams.

 

Will our jobs be replaced?
@Raj: Over the past decade, technologies have evolved drastically; there have been so many changes in the technology space, but one constant is human testers’ interaction with them and how we use them for our needs. The same holds true for AI. Secondly, to train the AI, we need good data combinations (which we call a training dataset). So, to work with modern software, we need to choose this training dataset carefully, as the AI starts learning from it and creating relationships based on what we give it. It is also important to monitor how the AI learns as we feed it different training datasets; this is going to be vital to how the software is tested as well. We would still need human involvement in training the AI. Finally, it is important to ensure that while working with AI, the security, privacy, and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all ‘doom and gloom’; being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the ‘look and feel’ of an on-screen component is ‘off’ or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure the safety, security, and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy, with the potential for monumental impact.

 

Will intelligence machines take over the world?
@Raj: Hollywood movies do have an influence on our lives, don’t they 🙂? At most of the conferences I speak at, there is this weird notion that in 3 years, AI-powered robots are going to take over the world and we will become slaves to them, which sounds interesting on paper, but in reality I don’t think that is going to be the case.

Currently, some people believe fully developed AI that can react and think like humans will be developed by 2055, and others think it will take several hundred years for that to happen. No one knows the exact answer yet. That being said, several organizations are trying to ensure the AI currently being developed is safe for human society. For example, the Future of Life Institute was formed for exactly this purpose and has the brightest minds in the AI field working on AI safety research. We also have groups like the World Economic Forum keeping a close eye on the impact of AI on society.

So, I do not think machines will take over the world, just yet!!! 🙂

 

AI Resources

Courses

And there are more courses available online; just search for “Deep Learning courses” or “Machine Learning courses.”

 

Free Resources/Courses

 

Books

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Shared Group Indicator, Numbered Test Steps, and New Base URL Parameter. Check them out and let us know what you think.

Shared Group Indicator

What is it?

When trying to change a shared step, users will now get a notification that they are editing a shared step. Clicking on “See affected tests” takes the user to the list of tests that use the shared step.

 

 

Why should I care?

You no longer have to worry about someone unknowingly changing a shared step, as the shared group indicator now lets you know the effects of a change before it is made. This is useful when teams are collaborating to build test suites and when multiple people are working on the same set of tests. Now individuals have more visibility into how their changes might impact overall testing.

 

Base URL as a Parameter

What is it?

Users now have the ability to access the base URL through a variable within custom actions. The new variable that automatically stores the URL value is named BASE_URL. Learn more

Why should I care?

You no longer have to add extra code to get the URL value of the web page used in the test. Instead, you just use the BASE_URL parameter and perform any actions necessary inside our custom actions. For example, if we want to print out the URL of the web page to ensure the same page is still displayed after a certain number of validations, we could just say:

console.log("The current base url is " + BASE_URL)
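Going one step further, here is a hedged sketch of a check one might run inside a custom action using that value. Only the BASE_URL variable itself comes from the release notes; the helper function and the expected host are hypothetical, and the logic is wrapped in a plain function so it can run anywhere.

```javascript
// Hedged sketch: use the platform-injected BASE_URL value to confirm the test
// is still on the expected host. checkBaseUrl and expectedHost are hypothetical
// names; in a Testim custom action, BASE_URL would be provided automatically.
function checkBaseUrl(BASE_URL, expectedHost) {
  console.log('The current base url is ' + BASE_URL);
  return BASE_URL.includes(expectedHost);
}

checkBaseUrl('https://app.example.com/checkout', 'example.com'); // → true
```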

 

Numbered Test Steps

What is it?

Step numbers help to uniquely identify each step in a test. You now have the step number displayed next to the name of each and every step that is added to your test.

Why should I care?

Having numbered steps helps you easily refer to a particular step in a test. This is helpful in cases where you want:

  • To edit a particular step
  • To collaboratively work on a particular step of a test with other team members
  • To talk to our support team to debug a particular step in a test

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Result Labels, Test Run Navigation Icon, and Grid Management. Check them out and let us know what you think.

Result Labels

What is it?

The “Result Labels” feature allows you to name each remote run. On the “Suite Runs” and “Test Runs” pages, you can easily filter your runs by choosing a result label.

Testim Result Labels

Why should I care?  

You now have the ability to label your runs. This is especially useful when you need to drill down into specific runs based on environment, application version, sprint number, etc. For example, you can label your runs “nightly-scheduler”, “v1.42.34”, “Jenkins”, “Troubleshooting”, or “Staging”.

Result labels can be added in the CLI using the parameter --result-label "<user-defined name of the run>". Learn more

Test Run Navigation Icon

What is it?

The new navigation icon opens the results of a test in a new tab.

Testim test run navigation

Why should I care?

You now have the ability to switch back and forth between a test and its test runs via the tabs.

Grid Management

What is it?

To run your tests remotely, you need to integrate either with the Testim grid, your own local grid, or 3rd-party grids like Sauce Labs and BrowserStack. Learn more

testim grid management

Why should I care?  

Grid management now offers the ability to easily manage multiple grids, providing an abstraction layer for your DevOps. The grid information is automatically added to the CLI based on the already configured grids and will appear in this format: --grid "<grid name>".

Customers have access to these new features now. Check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter, LinkedIn, or Facebook.
