
Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Email Validation, Advanced Scheduler, and For Each Loop. Check them out and let us know what you think.

Email Validation

What is it?

You now have the ability to generate email addresses, send emails and validate the contents of an email within the Testim IDE itself.

Why should I care?

There is no longer a need to use 3rd-party email vendors such as Guerilla Mail and Shark Lasers to do email validations. All of this is handled within Testim, so there is no context switching. The validations can be done with a click of a button, as shown below.


Advanced Scheduler

What is it?

You now have the ability to run tests in parallel, add results labels, choose branches, and set a timeout for a scheduler run with the Advanced Scheduler feature.

Why should I care?

With this new feature, you now have more control over your scheduled runs: you can make them run faster by adding parallelism, label each scheduled run, pick which branch you want to run the tests on, and set a timeout value to control when a run needs to be aborted.

For Each Loop

What is it?

You now have the ability to iterate over any list of similar items and perform repeated actions.


Why should I care?

Iterating over rows in a table, clicking multiple checkboxes in a list, or validating the order of a list of similar items just got a lot easier with the For Each Loop functionality. You simply choose this option and select the element you want to repeatedly click to perform certain validations. It does not matter which element in the list of similar items is selected, as the loop always starts from the first element and iterates over its siblings.

Click on this demo test to learn how the For Each Loop functionality works.

Testim gives you the ability to override timeouts within a test, outside a test, and across a group of tests. This helps control how long tests wait for a particular condition to be met, after which they fail gracefully once the set timeout period expires. The different ways to handle timeouts are as follows:

Tip 1: Timeouts within a step

Every step you record in Testim has a default timeout value of 30 seconds. You can override this value by following the steps below:

  • Navigate to the properties panel of the step
  • Select the “Override timeout” option
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

Tip 2: Timeouts within Test Configs

You have the ability to change the timeout for all the tests using a particular test config. You can do this by following the steps below:

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on the edit config icon next to the existing resolution
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

NOTE: You can also edit the Step delay value in the edit config screen.

Tip 3: Setting timeout for test runs

To abort a test run after a certain timeout has elapsed, you can use the CLI --timeout option. The default value is set to 10 minutes.

Usage example:

npm i -g @testim/testim-cli && testim --token "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ" --project "vZDyQTfE" --grid "Testim-grid" --timeout 120000

This timeout value can also be set in the Advanced Scheduler screen, as shown below.


Exploratory testing has been around for several decades now. Every tester has knowingly or unknowingly practiced it in their daily testing activities. There are various definitions and methodologies surrounding this testing approach, one of which is session-based exploratory testing (SBET). Some confuse this approach with ad-hoc testing without realizing it is far more powerful and structured. Here is a formal introduction to this testing approach and how to use it in your daily testing activities.

What is SBET?

SBET consists of time-boxed, uninterrupted testing sessions focused on a particular goal (a module, feature, or scenario). There are different approaches and templates used for it.

Advantages of SBET

SBET can be used within any domain, project, or application where you want quick feedback about the application instead of writing detailed test cases (scripted testing). You get more flexibility in exploring the product and can use your creativity within the boundaries of the session’s goal.

How to do it?

I personally have had a lot of success pairing up with another tester/developer, where we both execute the same scenario on different devices/environments and discuss our observations. For example, say I am testing a mobile web application; I will have my colleague test the web app on an Android tablet while I use an Apple phone. We both execute the same scenario and discuss the observations. Just by doing this, you can uncover lots of rendering issues, inconsistencies, and unexpected behavior.

Structure of SBET

SBET usually follows the structure below:

  • 45-90 minute time-boxed sessions
  • Have a charter/goal document to guide the session
  • Note down test ideas/scenarios
  • Paraphrase/debrief the observations
  • Discuss observations with a developer/business person
  • Log defects based on the discussion

All the session notes are contained in what is called a Charter Document. This document contains all the details about the session: the goal of the session, the resources used, a task breakdown with the time spent on different tasks, session notes containing helpful information along with test ideas and observations, issues uncovered during the session, and any screenshots (if necessary).

This way, everyone knows the details about the session and how much time was spent on it. The document can be attached to a story or any repository where you house your test artifacts.
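For illustration, a minimal charter could look like the outline below. The headings and example values are just one possible format, not a prescription:

Charter: Explore the checkout flow for payment validation errors
Time box: 60 minutes
Resources: Staging environment, test credit card numbers
Task breakdown: 10 min setup, 40 min testing, 10 min debrief
Session notes / test ideas: (recorded during the session)
Issues found: (with links to logged defects)
Screenshots: (attached if necessary)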

Doing a number of SBET sessions helps to:

  • Get a better idea about the product features
  • Uncover bugs that would be otherwise hard to find with scripted/automated testing
  • Identify high risk areas
  • Identify mundane, time-consuming manual testing tasks that are good candidates for automation

How does it fit into automation?

Doing SBET helps set the stage for automation. It helps you learn about the application and think about different scenarios to automate. It is good to have SBET and high-level automated tests running in parallel, as this gives you good coverage of the application. The time you invest in automation depends on your context, i.e., how many people are available to do automation, their skill sets, the cost vs. value of doing automation, the timeline, and what tools/frameworks you are using.

After a month or two of getting to know the product by doing SBET, you can start doing some time-boxed experimentation with the different automation tools that are available. Then you can see in practice what fits your needs. Once you identify the tool, you can start automating the different scenarios.

How does SBET fit into Agile projects?

Given the flexibility SBET provides, the next question that quite often comes to mind is: when is the right time to do SBET? The answer is that it depends on the context of the project. If you are the lone tester or have only 2-3 people on the testing team, you can start doing ET sessions on each user story. Once you get a fair understanding of the functionality of the application, you can start writing high-level test cases and picking out scenarios for test automation based on the knowledge gained from these sessions.

If you are working on a large-scale agile project and have a big test team, then you could follow the approach below:

  • For each story, discuss the acceptance criteria. Based on that discussion, identify scenarios that can/cannot be automated
  • For those scenarios that have to be tested manually, figure out the risk and impact associated with the story. For example, if the story is about implementing the payment functionality of a banking system, there are high risks and a huge impact to the customer and the organization if the feature is not implemented correctly and we do not get proper test coverage. On the other end, if a story is about increasing the font size on a web page from 12 points to 15 points, the risks and impact to the customer are much smaller. Do customers really care if the font size was not changed correctly? The answer could be yes, but the impact is minimal, as customers would still be able to perform the required transactions in the application. But if the payment system is not working, then customers cannot make a payment, which is a huge deal
  • Once we have identified a story as high risk and high impact, we can write high-level test cases covering the acceptance criteria and some edge cases. This can then be supplemented by one or more ET sessions to explore certain aspects of the functionality in more detail

Once an ET session is complete, all the documentation generated from the session (usually ET charters filled with information) can be attached to the specific story for better traceability, letting stakeholders know the details about the ET session, including the different issues uncovered. This way, everything is documented and available for future reference.

During the regression testing phase, one or more of these ET charters can be reused to perform additional sessions. Some of the scenarios from an ET session can be converted into high-level test cases or automated test cases. Thus, ET sessions can start right from the story testing phase and extend all the way to the acceptance testing phase.

Remember, SBET is NOT a replacement for scripted test case execution but is performed as a COMPLEMENT to it. It is an approach that helps exercise the creativity and experience of the tester to get more information about the product. As a result, stakeholders can make informed decisions.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Advanced Analytics and Results Export. Check them out and let us know what you think.

Advanced Analytics

What is it?

You will now be able to see an aggregated failure summary report inside the suite runs page. The report contains a snapshot of the test failures and a pie chart. To help with debugging, clicking any error group or one of the pie chart segments filters the view to show only the tests that failed on the selected error. This speeds up the troubleshooting of failed tests by pinpointing the root cause of run failures.

Why should I care?

Sometimes, a single problem in one shared step (for example, a “login” group) can cause many tests to fail. With the release of this feature, you will now be able to quickly home in on those high-impact problems and dramatically reduce the time it takes to stabilize the suite.

 

Results Export Feature

What is it?

You will now see an export button on the results page (suite runs, test runs, and single suite view). Clicking this button downloads the results data as a CSV file, which can then be imported into Excel or Google Sheets.

NOTE: Only the data that is currently presented in the UI will be put into the CSV file. For example, if “Last 24 hours” is selected and the status filter is set to “Failed”, the CSV file will only include failed tests from the last 24 hours.

Why should I care?

You now have the ability to easily share test results across teams, with people who use Testim and those who do not. This also gives you the flexibility to feed the generated CSV file to any external tool or framework for more customized reporting. The possibilities are endless.
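As a simple illustration, here is a minimal Node.js sketch that reads an exported CSV and tallies tests by status. The file name and the “Status” column header are assumptions, so adjust them to match the actual export; note also that the naive comma split below does not handle quoted fields:

const fs = require('fs');

// Assumed file name; adjust to match your actual export
const [header, ...rows] = fs.readFileSync('results.csv', 'utf8')
  .trim()
  .split('\n')
  .map(line => line.split(',')); // naive split; no quoted-comma handling

const statusIndex = header.indexOf('Status'); // assumed column header
const counts = {};
for (const row of rows) {
  counts[row[statusIndex]] = (counts[row[statusIndex]] || 0) + 1;
}
console.log(counts); // e.g. { Passed: 12, Failed: 3 }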

Testim gives users the flexibility to run tests on different screen resolutions. But sometimes this can get confusing when some tests run on a certain resolution and newly created tests run on a different one. Below are two simple tips to set the screen resolution for a particular test and to apply it globally to all the tests in the project.

Tip 1: To ensure a test runs on a particular screen resolution each time you run it, follow the steps below

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on the “Choose Other” option to select a resolution from the existing config list, OR
  • Click on the edit config icon next to the existing resolution
  • Set the desired screen resolution
  • Give a name for the newly created resolution
  • Then click “Save”

 

Tip 2: To apply an existing/new resolution to all the tests in your Test List, follow the steps below

  • Navigate to the Test List view
  • Click on “Select All”
  • Click on the “Set configuration for selected tests” icon
  • Choose the required resolution you want to be applied to all the tests

NOTE: Test configs can also be overridden at runtime via the --test-config parameter in the CLI and the Override default configurations option in the Scheduler.
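Usage example (following the earlier CLI example; the config name “1280x800” is hypothetical):

npm i -g @testim/testim-cli && testim --token "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ" --project "vZDyQTfE" --grid "Testim-grid" --test-config "1280x800"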

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page has moved, finding the target element may require scrolling even though scrolling wasn’t required when the test was initially recorded. Testim handles this automatically with auto scroll.

NOTE: You also have the option to disable this feature, if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element’s location on the page changes and scrolling becomes necessary. With auto scroll, Testim automatically scrolls to elements outside the viewport.

Scheduler Failure Notifications

What is it?

Users now have the ability to get email notifications for every failed test that ran on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” feature, users receive a notification every time a scheduler run fails, giving you instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get a notification only once, when a scheduler run first fails; no new email is sent until the failed scheduler run is fixed.

 

What are Loops?

Loops are one of the most powerful concepts in programming, and they can sometimes be a little hard to understand. At Testim, we made it easier for users to use loops in their tests by building them into the framework itself. This means the user does not need to write code to repeat a group of steps a certain number of times.

Why do we need Loops?

Loops are useful when we need to repeat an action several times in our test.

For example, say we want to check whether the “Post” button on Twitter works consistently. We could have a scenario where we want to click the button 50 times to ensure it works without any problems. In this scenario, are we going to record 50 steps to click the button 50 times, or just record the step once and repeat it 50 times?

This is where loops can be our best friend. They help in repeating an action several times without needing to record the steps again and again. As a result, we save a lot of time in test authoring and execution.

How to use Loops?

Loops can still be a hard concept to grasp, so here is a quick tip on how to easily use loops within Testim.

Let’s say we have the code below:

for (i = 1; i < 4; i++) {
    // Steps to repeat 3 times
}

What we are doing here is:

  • We initialize a variable “i” to be equal to 1 before the loop starts. This is our iterator.
  • We specify a condition to repeat the steps a certain number of times by giving “i < 4”. In this case, we want to repeat a set of actions 3 times.
  • At the end of each iteration we increment our iterator. In this case, “i++” increments the variable i from 1 to 2, 2 to 3, and 3 to 4. We eventually exit the loop when i = 4, as the condition checks whether i < 4 (remember, 4 is NOT LESS than 4 but EQUAL to it)

The same logic applies to loops in Testim as well, where we have 3 steps:

Step 1 – Initialize the Iterator

Step 2 – Specify a condition to repeat the steps a number of times

Step 3 – Increment the Iterator

This is what we do in this Demo Example, where we:

 

STEP 1: Initialize the Iterator

 

STEP 2: Specify a condition to repeat the steps a number of times

  • Group the steps to be repeated in one single reusable component
  • Go to the Properties Panel and add Custom code
  • Give a condition to repeat the group a certain number of times

 

STEP 3: Increment the Iterator

  • Add a custom action as the last step inside the newly created group
  • Increment the Iterator

 

The same steps apply to any set of actions you may want to repeat using our Loops functionality. You can also use do…while loops, which follow similar concepts, as sketched below.
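For reference, here is a minimal JavaScript sketch of a do…while loop. The body always runs at least once, because the condition is checked only after each iteration:

let i = 1;
do {
    // Steps to repeat: these run once before the first condition check
    i++;
} while (i < 4);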

Also remember, Testim supports loops based on element conditions, in addition to custom code.

We recently hosted a webinar on Real Use Cases for Using AI in Enterprise Testing with an awesome panel consisting of Angie Jones and Shawn Knight, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share them with the community as well.

Below you will find the video recording, answers to questions that we couldn’t get to during the webinar (to be updated as we get more answers), and some useful resources mentioned in the webinar. Please feel free to share these resources with other testers in the community via email, Twitter, LinkedIn, and other social channels. Also, reach out to me at raj@testim.io or to any of the panel members in case of any questions.

Video Recording

 

Q&A

Any pointers to the existing AI tools for testing?

@Raj: I am assuming this question is about things to know about existing AI tools in the market. If so, first and foremost, we need to figure out what problems we are trying to solve with an AI-based tool that cannot already be solved with other possible solutions. If you are looking to decrease time spent on maintenance, get non-technical folks involved in automation, and make your authoring and execution of UI tests much faster, then AI tools could be a good solution. I may be biased, but it is definitely worth checking out Testim and Applitools if any of the points I mentioned is your area of interest or pain point.

As discussed in the webinar, there are currently a lot of vendors (including us) who use all these AI buzzwords. This may leave you confused or overwhelmed when choosing a solution for your problems. My recommendation is:

  • Identify the problem you are trying to solve
  • Pick different AI tools and frameworks to help solve that problem
  • Select the one that meets the needs of your project
  • Then, proceed with that tool

 

As a tester working with Automation, what should I do to not lose my job?

@Raj: First of all, I believe manual testing can never be replaced. We still need the human mind to think outside the box and explore the application to find different vulnerabilities in our product. AI will be used as a complement to manual testing.

Secondly, we need humans to train these AI bots to simulate human behavior and thinking. AI is still in its initial stages and is going to take another 10-15 years to completely mature.

In summary, I think this is the same conversation we had 10 years ago when automated tools were coming into the market. Then, we concluded that automated tools help to complement manual testing BUT NOT replace it. The same analogy applies here: AI is going to complement manual testing BUT NOT replace it.

As long as people are open to constantly learning and acquiring different skill sets, automation is only going to make our lives easier while we pivot and focus on other aspects that cannot be accomplished with automation. This mainly involves things related to creativity, critical thinking, emotion, communication, and other things that are hard to automate. The same thing holds true for artificial intelligence. While we use AI to automate some processes to save time, we can use that saved time to focus on acquiring other skills and staying abreast of the latest technology.

So the question here is not so much about automation/AI replacing humans, but about how we stay creative and relevant in today’s society. That is done by constant learning, development, and training.

 

Do we have any open source tools on the market for AI testing?

@Raj: Not really. We do have a small library that was added to the Appium project to give a glimpse of how AI can be used in testing: https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk. This is just a small sample of the overall capabilities.

 

What should be possible in testing with AI in 3 years’ time? And how do you think testing will have changed (or not)?

@Raj: We live in a golden age of testing, where there are so many new tools, frameworks, and libraries available to help make testing more effective, easier, and more collaborative. We are already seeing the effects of AI-based testing tools in our daily projects, with the introduction of new concepts in areas such as element location strategies, visual validation, app crawling, and much more.

In 3 years, I can see the following possibilities in testing:

  • Autonomous Testing

I think autonomous testing will be more mature, and a lot of tools will include AI in their toolset. This means we can create tests based on actual flows performed by users in production. The AI can also observe and find repeated steps and cluster them to make reusable components in your tests, for example, login and logout scenarios. So now we have scenarios that are created based on real production data instead of us assuming what the user will do in production. In this way, we also get good test coverage based on real data. Testim already does this, and we are trying to make it better.

  • UI Based TDD

We have heard of ATDD, BDD, TDD, and also my favorite, SDD (StackOverflow Driven Development) 🙂 . In 3 years, we will have UITDD. What this means is that when developers get mockups for a new feature, the AI could potentially scan through the images in the mockups and start creating tests while the developer builds the feature in parallel. By the time the developer has finished implementing the feature, the AI will have already written tests for it based on these mockups using the power of image recognition. We just need to run the tests against the new feature and see whether they pass or fail.

  • AI for Mocking Responses

Currently we mock server requests/responses to test functionality that depends on other functionality which hasn’t been implemented yet, or to make our tests faster by decreasing the response time of API requests. Potentially, AI can be used to save commonly used API requests/responses and prevent unnecessary communication with servers when the same test is repeated again and again. As a result, your UI tests will be much faster, as response time is drastically improved with AI managing the interaction between the application and the servers.

 

Will our jobs be replaced?

@Raj: Over the past decade, technologies have evolved drastically. There have been many changes in the technology space, but one thing that remains constant is human testers’ interaction with these technologies and how we use them for our needs. The same holds true for AI. Secondly, to train the AI we need good data combinations (which we call a training dataset). To work with modern software, we need to choose this training dataset carefully, as the AI starts learning from it and creating relationships based on what we give it. It is also important to monitor how the AI learns as we give it different training datasets. This is going to be vital to how the software is tested as well. We will still need human involvement in training the AI.

Finally, it is important to ensure that while working with AI, the security, privacy, and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration, just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all ‘doom and gloom’; being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the ‘look and feel’ of an on-screen component is ‘off’ or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are myriad areas that will require in-depth testing to ensure the safety, security, and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy, with the potential for monumental impact.

 

Did you have to maintain several base images for each device size and type?

Has anyone implemented MBT to automate regression when code is checked in?

 

Resources relevant to discussions in the webinar

“When I talk to my colleagues and tell them that we have a team of developers that are performing manual testing guided by a test-plan nobody believes me… ”

Let me just start by saying that I’m very proud of our team and how far we have come.

Since my arrival, we have tripled in size and significantly improved our quality standards. When I joined Testim, we were a small R&D team based in Israel. We had no QA department or defined testing process and had to implement it all from scratch. There are many challenges in trying to create a process where testing is integrated into our CI/CD workflows. For one, there is awareness: the process of assimilating the idea that we have to test the features we produce and acknowledging there are consequences if we don’t. We have a very talented and speed-oriented dev team. But it comes to a point when you have to stop and take a step back to really plan for the future. As we strive for continuous improvement, we have to periodically evaluate our process from end to end, sometimes making small adjustments which can improve the flow holistically. We had to assimilate the idea that it is better to “waste” a week of testing than 2 weeks of refactoring, or worse, to lose our customers’ trust in our product.

I’m very happy to say that one of the things I love most about our team is that we do not have inner battles (at least not anymore). We truly understand that we are all working for the same cause and moving as a team to get the job done.

Some history…

I started at Testim as a one-man show. I arrived with a background in quality assurance and solid automation experience from working for larger companies. The startup environment was new to me. I took the first month to learn the culture, build rapport with the people, understand the customers and their objectives, and observe the various internal and external processes end to end. From these learnings, I could conclude which things needed to be changed, what was good and should be preserved, and which tools or methodologies could best suit our needs.

For example, the product-dev-testing workflow needed some tuning. With that said, what suits an enterprise would be terrible overhead for a fast-paced startup. A startup typically requires a lot of risk analysis, fine-tuning, and wise tool selection. We needed flexible tools that were easy to use and configure, as our processes would quickly evolve over time. We did not have time for lengthy implementations involving consultants or systems integrators; we needed to deliver value instantly without any setbacks.

I see a lot of people recruited to a small startup from a big enterprise who, without fully evaluating their surroundings, start to incorporate heavy test management, reporting, and bug tracking tools just because “this is the way it should be done.” Frequently they encounter resistance from the team, because the team simply does not see the need, or because it creates an unnecessary burden on the routine without really delivering added value to the current work process. The thing is that there is no “single right way” to do things, and each team needs to collaborate to create an efficient workflow that works for them.

The 5 steps we took to build our quality-driven development process

  1. Effective Product-Dev-Testing flows – We needed more concrete requirements and tech designs, with lean and effective documentation. We also needed to establish the entrance and exit points for all of the “stops” in our cycle (and how the ping-pong would be addressed during the cycle). That took some work, but I think we are in a very good place now, and we continue to learn and iterate.
  2. Incorporating quality initiatives into our roadmap – An important thing to realize: plans tend to change in a fast-paced startup. With that said, if we don’t bring things up for the agenda, they will not go into our mindset (and eventually into our roadmap). I mapped the things I needed to implement with regard to quality and automation, ensuring they were part of the plan and the scheduled work procedure. One more thing I needed to change was proper time estimation for feature releases. While we do need to move fast and give reasonable time estimates, we started incorporating testing and automation coverage time into our assessments for each feature. That way, the time we need to release a well-tested and “covered” product is properly evaluated.
  3. Tool choice – We decided on tools, frameworks, and document formats that best suited our needs and worked on perfecting the flow with every sprint. The key guideline was to make the tool work for us without making us work for the tool.
  4. Testing and automation coverage – We worked on reaching a point where our own automation gives us a very solid picture of our product, using risk analysis to decide what needs to be performed manually. In addition, we made it part of the process for each and every one of us to write automated coverage for the features we deliver. The standardized approach and guidelines are maintained with insights from me and our Chief Architect. We also implemented an effective CI/CD process and pipeline to ensure our fast pace and quality standards go hand in hand.
  5. Ownership & teamwork – It is true that in a startup there is a lot more versatility in everybody’s roles. With that said, over time we learned our strong points and took responsibility for the things where we could contribute to making our work more productive. I took the quality assurance and test automation part because it was my background. There was a time when testing was my sole responsibility. As time went by, we grew to understand that a bottleneck is not a good place to be stuck in (if you know what I mean). That led us to adopt a whole new approach. We always worked as a team, and as such, everybody feels responsible for bringing their task to an end with maximum quality and fast delivery time. A part of that is taking responsibility and testing your own work. When I tell my colleagues that we have a team of developers writing test automation and performing manual testing according to a test plan, nobody believes me. It’s actually very simple math: we had a lot of work to do and a limited staff. The alternative was to deliver features without testing or to delay releases, and that was not an option. Test your work and be responsible for it; it’s as simple as that. Once you push that deploy button, you have to be confident that you did what you could to assure its quality. Psychologically speaking, when you know that you don’t have a “gatekeeper” checking after you, you do a more thorough job. With that said, at times you do need an additional set of eyes, or it’s better to let someone else take a look after you have boiled your own code for a long time; that’s where I come in, or someone else from the team who is available. The popular conception is that only testers can test a product. But what is the difference between a tester and a developer? The main difference is that a tester decided to learn how to test and a developer decided to learn how to develop. Is it possible for someone to learn how to do both? Traditionally, organizations preferred these to be separate functions. However, today’s Agile and DevOps practices try to reduce the handoffs between various stages of the process to create a continuous workflow. Once you teach a developer how to test and give them the correct mindset for testing, they can tackle any feature they develop. Since they know the code and its weak points, they can do an amazing job of testing it. Everybody who “touches” the product and its features at any stage of the development lifecycle has to do their best to deliver quality work. The bottom line is, as I mentioned before, there is no “one size fits all” model; it takes time and continuous improvement to adopt a process that works best for your team.

I think the main key here is making quality an organizational initiative. Whether you are in product, engineering, sales, or marketing, focusing on the customer’s experience should be everyone’s responsibility. We try to leverage as much data as possible to provide cross-functional team transparency and not let emotion and anecdotal assessments get in the way of our decisions. We all share the same agenda, and my mission is to influence and foster quality awareness and maintain a productive workflow that combines detailed code review, effective CI/CD, unit testing, and some TDD elements: our winning recipe for success.

 

It’s no surprise that mobile is big and only getting bigger. There are currently 8.7 billion mobile connections in the world, more than a billion more active devices than there are people. Ensuring seamless user experiences for mobile users requires extensive software testing. However, testing mobile is much easier said than done – until now!

We are thrilled to unveil Testim Mobile. SIGN UP for our Mobile Beta program to experience super-fast automated test creation and execution with minimal maintenance.

For those who are familiar with Testim, all of the AI capabilities you are accustomed to in the web version are now available on mobile. With this release, we have become the only fully integrated AI test automation platform that supports Web, Mobile Web, and Native applications.

For those who are not familiar with Testim, we help engineering teams of all sizes across the globe improve test coverage, speed up release cycles and reduce risk in their delivery process. Companies like Wix, Globality, Autodesk, Swisscom and LogMeIn use Testim to integrate testing into their evolving Agile software development processes.

Playback Tests on Real Devices or Emulators

To capture the true behavior of the application, it is recommended to play back tests on real devices rather than settling for an emulator. The main challenge with real-device coverage is the plethora of available devices and configurations, which can include screen resolutions, performance levels, different firmware from manufacturers, different operating systems, and much more.

With Testim, you can seamlessly play back tests directly on a physical device, not just on an emulator.

Record actions in real time

When the user performs gestures such as taps, text input, and scrolling on the device/emulator, they are recorded in real time in the Testim editor. This allows you to see exactly which actions are being captured.

Get screenshots for each and every step

As we start recording different flows and creating multiple tests, it becomes critical to see the UI elements of the app while recording and playing back the tests. Testim provides screenshots for each and every step that is recorded. When the same test is played back, you can see the baseline and the expected image side by side. This way you know exactly what is happening under the hood.

Assertions

One of the most important capabilities of any automation framework is its ability to do assertions, and this holds true for mobile as well. With Testim, you can do various validations to ensure the UI elements on the page are displayed as expected.

Feedback on each step

The user also gets feedback on whether each step passed or failed, shown as a green or red icon in the top-left portion of each step, as shown below.

Ease of Use

The downfall of most automation frameworks is the complexity that goes into setting up the physical device with the framework, recording/playing back tests, and maintaining and organizing the tests into meaningful components. With Testim, everything is handled within the Testim editor, just as it is for users who are accustomed to doing Web and Mobile Web automation with Testim. There is no context switching between different windows, tabs, tools, or frameworks. Just pair the device and start recording. It is that simple; we take care of the rest. All the features you have been using, such as branching, version control, test suites/plans, and test runs, work the same in Testim Mobile.

In addition, you can maintain your Web, Mobile Web and Native automation projects all in one place and easily switch between projects and manage your tests.

Robust CLI actions

Users have the ability to create custom Node.js scripts that can be executed from the CLI (Command Line Interface). This gives users the flexibility to not only perform UI-level validations on the mobile application but also perform more advanced actions such as database validations, as sketched below.
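For example, a CLI action could run a small Node.js script like the sketch below. The “pg” client, connection string, table, and query are all hypothetical and only illustrate the idea of asserting against backend state alongside UI checks:

// Illustrative database validation; the client, table, and query
// are assumptions, not part of Testim itself
const { Client } = require('pg');

async function main() {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const res = await client.query(
    'SELECT count(*) AS n FROM orders WHERE status = $1', ['completed']);
  await client.end();
  if (Number(res.rows[0].n) === 0) {
    console.error('Expected at least one completed order');
    process.exit(1); // a non-zero exit code fails the step
  }
}

main().catch(err => { console.error(err); process.exit(1); });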

Inserting Custom Code

Custom JavaScript code can be added to perform additional validations (if needed). We can perform manipulation/validation of parameters, as in the sketch below.
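As a simple illustration, a custom validation step might check the shape of an incoming parameter before the test continues. The parameter name “orderTotal” is hypothetical:

// Hypothetical parameter validation; "orderTotal" is assumed to be
// a parameter passed into this custom step
if (typeof orderTotal !== 'number' || orderTotal <= 0) {
  throw new Error('orderTotal should be a positive number, got: ' + orderTotal);
}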

Control structures built within Testim

Save time by using the while loop to repeat steps. This eliminates the need to duplicate steps or write code to perform certain actions repeatedly on your mobile application. This feature helps you build simple and robust tests.

Multi Application Recording

With Testim Mobile you have the flexibility to record actions from multiple applications within the same test. This means you can easily switch between applications during recording and perform end-to-end testing without any additional configuration.

 

Upcoming features in future releases

We will be releasing more features for Testim Mobile in the upcoming months. Here is a quick sneak peek at some of them:

Support for advanced gestures

Users will be able to record/play back more gestures on their applications, including double tap, nested scroll, multi-touch, long-press menus, and much more.

Pixel level validation

Testim provides step-by-step baseline and expected images out of the box as a basic means of debugging your test runs. In addition, you will also be able to do visual validation. This feature is helpful when you need to ensure the UI of the mobile application is exactly what it is supposed to be, across different devices.

Support for multiple IMEs

With multiple IMEs (Input Method Editors) available in the Android ecosystem, it makes sense to give users the flexibility to record/play back tests with the desired IME. You will soon be able to use alternative input methods, such as on-screen keyboards and speech input, on your devices and record/play back tests with Testim Mobile.

 

Get started with Mobile Native App Automation

Don’t take our word for it: test drive the Mobile Beta and find out first hand how AI-driven mobile automation can be super fast, stable, and simple.

It is also worth mentioning that users who sign up for the program get a personalized onboarding process from our team, which includes 24/7 customer support. The best part of all this is that everything is FREE. We are most interested in your feedback and would love to showcase our AI-powered mobile automation framework.

So, what are you waiting for? Enroll in our beta program and take your first step towards easy and seamless mobile native app test automation using AI.
