Testim gives you the ability to override timeouts within a test, outside a test and across a group of tests. This helps control how long tests wait for a particular condition to be met; once the set timeout period expires, the test fails gracefully. The different ways to handle timeouts are as follows:

Tip 1: Timeouts within a step

Every step you record in Testim has a default timeout value of 30 seconds. You can override this value by following these steps:

  • Navigate to the properties panel of the step
  • Select “Override timeout” option
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

Tip 2: Timeouts within Test Configs

You can change the timeout for all tests that use a particular test config by following these steps:

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on the edit config icon next to the existing resolution
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

NOTE: You can also edit the Step delay value in the edit config screen

Tip 3: Setting timeout for test runs

To abort a test run after a certain timeout has elapsed, you can use the CLI --timeout option. The default value is 10 minutes.

Usage example:

npm i -g @testim/testim-cli && testim --token "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ" --project "vZDyQTfE" --grid "Testim-grid" --timeout 120000

The timeout is given in milliseconds, so 120000 aborts the run after 2 minutes.

This timeout value can also be set in the Advanced Scheduler screen as shown below.


Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release two of your most requested features: Advanced Analytics and Results Export. Check them out and let us know what you think.

Advanced Analytics

What is it?

You will now be able to see an aggregated failure summary report inside the suite run page. The report contains a snapshot of the test failures and a pie chart. To help with debugging, clicking any error group or pie chart segment filters the view to show only the tests that failed on the selected error. This speeds up the troubleshooting of failed tests by pinpointing the root cause of run failures.

Why should I care?

Sometimes, a single problem in one shared step (for example, a “login” group) can cause many tests to fail. With the release of this feature, you will now be able to quickly hone in on those high-impact problems and dramatically reduce the time it takes to stabilize the suite.

 

Results Export Feature

What is it?

You will now see an export button on the results pages (suite runs, test runs and single suite view). Clicking this button downloads the results data as a CSV file, which can then be imported into Excel or Google Sheets.

NOTE: Only the data that is currently presented in the UI will be put into the CSV file. For example, if “Last 24 hours” is selected and the status filter is set to “Failed”, the CSV file will only include failed tests from the last 24 hours.

Why should I care?

You can now easily share test results across teams, with people who use Testim and those who don’t. This also gives you the flexibility to feed the generated CSV file into any external tool or framework for more customized reporting. The possibilities are endless.

Testim gives users the flexibility to run tests on different screen resolutions. But this can get confusing when some tests run on one resolution and newly created tests run on another. Below are two simple tips to set the screen resolution for a particular test and to apply it globally to all the tests in the project.

Tip 1: To ensure a test runs on a particular screen resolution each time you run it, follow these steps:

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on “Choose Other” option to select a resolution from the already existing config list OR
  • Click on the edit config icon next to the existing resolution
  • Set the desired screen resolution you want
  • Give a name for the newly created resolution
  • Then click “Save”

 

Tip 2: To apply an existing or new resolution to all the tests in your Test List, follow these steps:

  • Navigate to the Test List view
  • Click on “Select All”
  • Click on the “Set configuration for selected tests” icon
  • Choose the required resolution you want to be applied to all the tests

NOTE: Test configs can also be overridden at runtime via the --test-config parameter in the CLI and the Override default configurations option in the Scheduler.
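Usage example (a sketch; the token and project placeholders must be replaced with your own values, and "1280x800" stands in for whatever your config is named):

testim --token "<your-token>" --project "<project-id>" --grid "Testim-grid" --test-config "1280x800"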

The Software Development Lifecycle (SDLC) consists of various roles and processes that have to mesh together seamlessly to release high-quality software. This holds true right from the initial planning phase all the way to production release and monitoring. In terms of roles, we have designers, business analysts (BA), developers, testers, scrum masters, project managers, product owners (PO) and technical architects (TA), who bring varying levels of experience and skill to the project. They collaborate to discuss and implement different aspects of the software. In terms of processes, depending on team size, release schedules, availability of resources and the complexity of the software, processes can range from none at all to strict quality gates at different phases of the SDLC.

What is the Knowledge Gap?

As teams start to collaborate on different features of the software, they often run into situations like these:

  • Each team member interprets the requirements differently; when the PO proposes a feature, everyone has a different picture of how it should work
  • A majority of the team finds it hard to understand the technical jargon used to describe the working of the system, for example when the TA explains how a feature should be implemented in terms that designers, POs, BAs and testers cannot follow
  • The BA writes requirements that are hard to understand and implement, due to a lack of understanding of the technical aspects of the system
  • Developers assume the testers have the same level of technical expertise as them when explaining changes in code during code reviews
  • Developers fail to do development testing, assume the testers will test the newly implemented feature completely, and make testers solely responsible for the quality of the product
  • The developer builds the feature without paying attention to its testability
  • The tester does not get enough time to complete testing due to tight release schedules
  • There is no clear distinction of responsibilities in the team; everyone assumes someone else will do a task, so many tasks never get done

And so on…

Now you may ask: why do teams get into these situations more often than expected? The answer is that there is a knowledge gap within teams. This is especially true when teams have a mix of entry-level, mid-level and expert-level resources, and each one makes different assumptions about the skill set, experience and domain knowledge every individual brings to the table.

These gaps can also appear at a more granular level when teams use different tools. For example, when customers use Testim, we have observed firsthand that different developers and testers think about and use our platform differently.

  • Manual testers see Testim as a time-saving tool that helps them quickly create and run stable tests, reducing the amount of manual effort it takes to test applications
  • Automation engineers see Testim as an integrated tool that lets them do coded automated testing with JavaScript, plus API testing, in one platform instead of using separate tools for functional and API testing
  • Developers see Testim as a quick feedback tool that lets them run several UI tests quickly and get fast feedback on the application under test. They also recognize the ability to do more complex tests by interacting with databases and the UI, all in one platform
  • Release engineers see Testim as a tool that can be easily integrated into their CI/CD pipeline. They also recognize the ability to trigger specific tests on every code check-in to ensure the application is still working as expected
  • Business and other stakeholders view Testim as a collaborative tool that helps them get involved in the automation process irrespective of their technical expertise. They also recognize the detailed reports they get from the test runs, which eventually help them make go/no-go decisions

As we can see, within the same team, people have different perceptions of tools that are being used within the project as well. These are good examples of knowledge gaps.

How do we identify knowledge gaps?

Identifying knowledge gaps in the SDLC is critical not only to releasing high-quality software but also to sustaining high levels of team morale, productivity, job satisfaction and the feeling of empowerment within teams. The following questions help to identify knowledge gaps:

  • What are the common problems that occur in each phase of the SDLC?
  • What processes are in place during the requirements, design, development, testing, acceptance and release phases of the SDLC?
  • How often does a requirement change due to scope creep?
  • How effective is the communication between different roles in the team?
  • Are the responsibilities of each team member clearly identified?
  • How visible is the status of the project at any instant of time?
  • Do developers/testers have discussions on testability of the product?
  • How often are release cycles pushed to accommodate more development and testing?
  • Are the teams aware of what kind of customers are going to use the product?
  • Have there been lapses in productivity and team morale?
  • Is the velocity of the team stable? How often does it fluctuate and by how much?

In terms of tools being used:

  • How are teams using different tools within the project? Are they using tools the right way?
  • Are all the resources sufficiently trained to use different tools within the project?
  • Does one sub-group within a team have more problems in using a tool than others?
  • How effective is a particular tool in saving time and effort to perform different tasks?

Answering these questions as a team helps identify the knowledge gaps and prompts thinking about solutions to these problems.

How do we bridge the knowledge gap?

There are five main factors that help bridge the knowledge gap in teams:

  1. Training

Sufficient training needs to be given to designers, developers, testers, scrum masters and project managers to help them do their jobs better, both in the context of the project and in using its tools. With this training, designers understand how mockups need to be designed so that developers can implement features effectively; testers can attend code reviews without feeling intimidated and, with sufficient technical grounding, use tools more effectively; developers understand why the testability of a feature matters and realize tools can aid their development testing effort; the scrum master can better manage tasks in the project; project managers can help the team collaboratively meet release schedules and deadlines; and stakeholders can get a high-level overview of the project once they learn which reports the different tools generate and how to read them.

  2. Visibility

If we want teams to work together seamlessly, like a well-oiled machine in an assembly plant, we need to make the results of everyone’s effort visible to the entire team. It is important for everyone to know how their contributions advance the overall goal of releasing a high-quality product on schedule. There are various ways to increase visibility in teams, such as:

  • Checklists – a list of items to be done in each phase of the SDLC helps everyone be aware of what is expected of each of them. This is especially helpful when the team consists of members with varying skill sets and experience. If the items in the list are marked as DONE, there is no ambiguity about which tasks have been completed
  • Visual dashboards – another solution is visual dashboards giving a high-level overview of the project’s health and status. This helps not only stakeholders but also individual contributing team members. Dashboards can be created on whiteboards, easel boards or software that is accessible to everyone. Every day, during stand-up meetings, the team should make it a point to address the dashboard and ensure everyone is aware of the high-level project status. For example, in Testim we provide a dashboard showing what percentage of test runs passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps have changed and the most flaky tests in your test suite; all of these details can be filtered by the current day, a 7-day or a 30-day period
  3. Clear definition of responsibilities

There need to be clearly defined responsibilities in teams. Everyone needs to know why they are on the team and what tasks they need to accomplish on a daily, weekly, monthly and quarterly basis. Goals, objectives and expectations for each team member need to be clearly discussed with their respective peers and managers. This prevents most of the confusion around task completion.

  4. Empowering the team

In this day and age, when individuals are more technical and skilled than ever, what many of them lack is some level of empowerment and autonomy. Contrary to the popular belief that there needs to be one leader for the whole project, leadership responsibility should be divided among the roles in the team, with one point of contact each from the design, development and testing teams. These points of contact, who in most cases also lead their respective sub-teams, meet with the other leads and ensure all the sub-teams within the project are on the same page and working towards the same goals and objectives. This way, the whole team is empowered.

  5. Experimentation

Once the gaps are identified, the whole team needs to sit together (usually in retrospective meetings after sprints) to discuss different solutions to the problems. The team then needs to experiment with different solutions and see what works well and what doesn’t. This constant experimentation and feedback loop makes the team more creative and empowers them to come up with solutions that work for them.

In summary, the “knowledge gap” has been one of the major obstacles keeping teams from reaching their fullest potential. Identifying and reducing these gaps will increase efficiency and, as a result, lead to faster release cycles with higher quality. Testim can be one of the aids in this process. Sign up now to see how it helps increase collaboration and bridge the gap.

 

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release two of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page has moved, finding the target element may require scrolling even though it wasn’t required when the test was initially recorded. Testim handles this automatically with auto scroll.

NOTE: You also have the option to disable this feature, if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element’s location on the page changes and scrolling becomes necessary. With auto scroll, Testim automatically scrolls to elements outside the viewport.

Scheduler Failure Notifications

What is it?

Users now have the ability to get email notifications on every failed test that ran on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” feature, users receive a notification every time a scheduler run fails, giving instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get notified only once when a scheduler run starts failing; no new email is sent until the failed scheduler run is fixed.

 

What are Loops?

Loops are one of the most powerful concepts in programming, and sometimes they can be a little hard to understand. At Testim, we made it easier for users to use loops in their tests by building them into the platform itself. This means the user does not need to write code to repeat a group of steps a certain number of times.

Why do we need Loops?

Loops are useful when we need to repeat an action several times in our test.

For example, say we want to check whether the “Post” button on Twitter works consistently. We might want to click the button 50 times to ensure it works without any problems. In this scenario, are we going to record 50 steps to click the button 50 times, or record the step once and repeat it 50 times?

This is where loops can be our best friend. They let us repeat an action several times without recording the same steps again and again. As a result, we save a lot of time in test authoring and execution.

How to use Loops?

Loops can still be a hard concept to grasp, so here is a quick tip on how to easily use them within Testim.

Let’s say we have the code below:

for (let i = 1; i < 4; i++) {
  // Steps to repeat 3 times
}

What we are doing here is:

  • We initialize a variable “i” equal to 1 before the loop starts. This is our iterator.
  • We specify a condition, “i < 4”, to repeat the steps a certain number of times. In this case we want to repeat a set of actions 3 times.
  • At the end of each pass through the loop we increment our iterator. Here, “i++” increments i from 1 to 2, 2 to 3 and 3 to 4. We eventually exit the loop when i = 4, since the condition i < 4 no longer holds (4 is not LESS than 4, but EQUAL).

The same logic applies to loops in Testim, where we have 3 steps:

Step 1 – Initialize the Iterator

Step 2 – Specify a condition to repeat the steps a number of times

Step 3 – Increment the Iterator

This is what we do in this demo example, where we:

 

STEP 1: Initialize the Iterator

 

STEP 2: Specify a condition to repeat the steps a number of times

  • Group the steps to be repeated in one single reusable component
  • Go to the Properties Panel and add Custom code
  • Give a condition to repeat the group a certain number of times

 

STEP 3: Increment the Iterator

  • Add a custom action as the last step inside the newly created group
  • Increment the Iterator

 

The same steps apply to any set of actions you may want to repeat using our Loops functionality. You can also use do…while loops, which follow a similar concept, as sketched below.
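For reference, here is how the same three-step pattern looks as a do…while loop in plain JavaScript (a sketch, not Testim-specific code):

let i = 1;                 // Step 1: initialize the iterator
do {
  // Steps to repeat 3 times
  i++;                     // Step 3: increment the iterator
} while (i < 4);           // Step 2: the condition is checked after each pass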

Also remember, Testim supports loops based on element conditions apart from just custom code.

“When I talk to my colleagues and tell them that we have a team of developers that are performing manual testing guided by a test-plan nobody believes me… ”

Let me just start by saying that I’m very proud of our team and how far we have come.

Since my arrival, we have tripled in size and have significantly improved our quality standards. When I joined Testim, we were a small R&D team based in Israel. We had no QA department or defined testing process and had to implement it all from scratch. There are many challenges in trying to create a process where testing is integrated into our CI/CD workflows. For one, there is awareness: the process of assimilating the idea that we have to test the features we produce and acknowledging there are consequences if we don’t. We have a very talented and speed-oriented dev team. But it comes to a point where you have to stop and take a step back to really plan for the future. As we strive for continuous improvement, we have to periodically evaluate our process from end to end, sometimes making small adjustments that can improve the flow holistically. We had to assimilate the idea that it is better to “waste” a week of testing than two weeks of refactoring. Or worse – to lose our customers’ trust in our product.

I’m very happy to say that one of the things I love most about our team is that we do not have inner battles (at least not anymore). We truly understand that we are all working for the same cause and moving as a team to get the job done.

Some history…

I started at Testim as a one-man show. I arrived with a background in quality assurance and solid automation experience working for larger companies. The startup environment was new to me. I took the first month to learn the culture, build rapport with the people, understand the customers and their objectives, and observe the various internal and external processes end to end. From these learnings, I could conclude which things needed to change, what was good and should be preserved, and which tools or methodologies would suit our needs best.

For example, the product-dev-testing workflow needed some tuning. That said, what suits an enterprise would be terrible overhead for a fast-paced startup. A startup typically requires a lot of risk analysis, fine-tuning and wise tool selection. We needed flexible tools that were easy to use and configure, as our processes would quickly evolve over time. We did not have time for lengthy implementations involving consultants or systems integrators; we needed to deliver value instantly, without setbacks.

I see a lot of people recruited to a small startup from a big enterprise who, without fully evaluating their surroundings, start to incorporate heavy test management, reporting and bug tracking tools just because “this is the way it should be done.” Frequently they encounter resistance from the team, which simply does not see the need, or finds that the tools create an unnecessary burden on the routine without delivering added value to the current work process. The thing is that there is no single right way to do things, and each team needs to collaborate to create an efficient workflow that works for them.

The 5 steps we took to build our quality-driven development process

  1. Effective Product-Dev-Testing flows – We needed more concrete requirements and tech designs, with lean and effective documentation. We also needed to establish the entrance and exit points for all of the “stops” in our cycle (and how the ping-pong between them would be addressed). That took some work, but I think we are in a very good place now, and we continue to learn and iterate.
  2. Incorporating quality initiatives into our roadmap – An important thing to realize: plans tend to change in a fast-paced startup. That said, if we don’t bring things up on the agenda, they will not enter our mindset (and eventually our roadmap). I mapped the things I needed to implement with regard to quality and automation, ensuring they were part of the plan and the scheduled work procedure. One more thing I needed to change was proper time estimation for feature releases. While we do need to move fast and give reasonable estimates, we started incorporating testing and automation coverage time into our assessments for each feature. That way, the time we need to release a well-tested and “covered” product is properly evaluated.
  3. Tool choice – We decided on tools, frameworks and document formats that best suited our needs and worked on perfecting the flow with every sprint. The key guideline was to make the tool work for us without making us work for the tool.
  4. Testing and automation coverage – We worked on reaching a point where our own automation gives us a very solid picture of our product, with risk analysis to decide what needs to be performed manually. In addition, we made it part of the process for each and every one of us to write automated coverage for the features we deliver. The standardized approach and guidelines are maintained with insights from me and our Chief Architect. We also implemented an effective CI/CD process and pipeline to ensure our fast pace and quality standards go hand in hand.
  5. Ownership & teamwork – It is true that in a startup there is a lot more versatility in everybody’s roles. That said, over time we learned our strong points and took responsibility for the things where we could make our work more productive. I took the quality assurance and test automation part because it was my background. There was a time when testing was my sole responsibility. As time went by, we grew to understand that a bottleneck is not a good place to be stuck in (if you know what I mean). That led us to adopt a whole new approach. We always worked as a team, and as such, everybody feels responsible for bringing their task to completion with maximum quality and fast delivery time. A part of that is taking responsibility for testing your own work. When I tell my colleagues that we have a team of developers writing test automation and performing manual testing according to a test plan, nobody believes me. It’s actually very simple math: we had a lot of work to do and a limited staff. The alternative was to deliver features without testing or to delay releases, and that was not an option. Test your work and be responsible for it – it’s as simple as that. Once you push that deploy button, you have to be confident that you did what you could to assure its quality. Psychologically speaking, when you know that you don’t have a “gatekeeper” checking after you, you do a more thorough job. That said, at times you do need an additional set of eyes, or it’s better to let someone else take a look after you have boiled your own code for a long time – and that’s where I come in, or someone else from the team who is available. The popular conception is that only testers can test a product. But what is the difference between a tester and a developer? Mainly that a tester decided to learn how to test and a developer decided to learn how to develop. Is it possible for someone to learn to do both? Traditionally, organizations preferred these to be separate functions. However, today’s Agile and DevOps practices try to reduce the handoffs between stages of the process to create a continuous workflow. Once you teach developers how to test, and give them the correct mindset for testing, they can tackle any feature they develop; since they know the code and its weak points, they can do an amazing job of testing it. Everybody who “touches” the product and its features at any stage of the development lifecycle has to do their best to deliver quality work. The bottom line, as I mentioned before, is that there is no “one size fits all” model; it takes time and continuous improvement to adopt a process that works best for your team.

I think the main key here is making quality an organizational initiative. Whether you are in product, engineering, sales or marketing, focusing on the customer experience should be everyone’s responsibility. We try to leverage as much data as possible to provide cross-functional transparency and not let emotion and anecdotal assessments get in the way of our decisions. We all share the same agenda, and my mission is to influence and foster quality awareness while maintaining a productive workflow that combines detailed code review, effective CI/CD, unit testing and some TDD elements – our winning recipe for success.

 

It’s no surprise that mobile is big and only getting bigger. There are currently 8.7 billion mobile connections in the world – more than a billion more active devices than there are people. Ensuring seamless user experiences for mobile users requires extensive software testing. However, testing mobile is much easier said than done – until now!

We are thrilled to unveil Testim Mobile. SIGN UP for our Mobile Beta program to experience super-fast automated test creation and execution with minimal maintenance.

For those who are familiar with Testim, you now get all of the same AI capabilities you are accustomed to in the web version, available in mobile. With this release, we have become the only fully integrated AI test automation platform that supports Web, Mobile Web and Native applications.

For those who are not familiar with Testim, we help engineering teams of all sizes across the globe improve test coverage, speed up release cycles and reduce risk in their delivery process. Companies like Wix, Globality, Autodesk, Swisscom and LogMeIn use Testim to integrate testing into their evolving Agile software development processes.

Playback Tests on Real Devices or Emulators

To get the true behavior of the application, it is recommended to play back tests on real devices rather than settling for an emulator. The main challenge with real device coverage is the plethora of available devices and configurations: screen resolutions, performance levels, different firmware from manufacturers, different operating systems and much more.

With Testim, you can seamlessly play back tests directly on a physical device, not just on an emulator.

Record actions in real time

When the user performs gestures such as taps, text input and scrolls on the device/emulator, they are recorded in real time in the Testim editor. This lets you see exactly which actions are being captured.

Get screenshots for each and every step

As we start recording different flows and creating multiple tests, it becomes critical to see the UI elements of the app while recording and playing back tests. Testim provides screenshots for each and every recorded step. Once the test is played back, you can see the baseline image and the current run’s screenshot side by side. This way you know exactly what is happening under the hood.

Assertions

One of the most important capabilities of any automation framework is its ability to do assertions, and this holds true for mobile as well. With Testim, you can perform various validations to ensure the UI elements on the page are displayed as expected.

Feedback on each step

The user also gets feedback on each step: whether it passed or failed is shown with a green or red icon in the top left portion of the step, as shown below.

Ease of Use

The downfall of most automation frameworks is the complexity of setting up the physical device with the framework, recording and playing back tests, and maintaining and organizing the tests into meaningful components. With Testim, everything is handled within the Testim editor, just as it is for users accustomed to doing Web and Mobile Web automation with Testim. There is no context switching between different windows, tabs, tools or frameworks. Just pair the device and start recording. It is that simple. We take care of the rest. All the features you have been using, such as branching, version control, test suites/plans and test runs, work the same in Testim Mobile.

In addition, you can maintain your Web, Mobile Web and Native automation projects all in one place and easily switch between projects and manage your tests.

Robust CLI actions

Users have the ability to create custom Node.js scripts that are executed via the CLI (Command Line Interface). This gives users the flexibility to perform not only UI-level validations on the mobile application but also more advanced actions such as database validations; see the sketch below.
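As an illustration, such a Node.js validation might look something like this sketch. The pg client, the DATABASE_URL variable and the orders table are assumptions made for this example, not part of Testim’s API:

const { Client } = require('pg'); // assumed Postgres client (npm install pg)

async function assertOrderPersisted(orderId) {
  // Hypothetical check that a booking made through the UI reached the database
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  const res = await client.query('SELECT status FROM orders WHERE id = $1', [orderId]);
  await client.end();
  if (res.rows.length === 0 || res.rows[0].status !== 'confirmed') {
    throw new Error(`Order ${orderId} was not confirmed in the database`);
  }
}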

Inserting Custom Code

Custom JavaScript code can be added to perform additional validations if needed, including the manipulation and validation of parameters, as sketched below.
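For instance, a custom code step could validate and normalize an incoming parameter. This is only a sketch: the phoneNumber parameter is made up, and we assume (as in Testim’s web custom actions) that values assigned to exports become available to later steps:

// Fail the step if the hypothetical parameter looks wrong
if (!/\d/.test(phoneNumber)) {
  throw new Error(`Unexpected phone number: ${phoneNumber}`);
}
// Assumed exports mechanism: pass the cleaned value on to subsequent steps
exports.normalizedPhone = phoneNumber.replace(/[^0-9+]/g, '');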

Control structures built within Testim

Save time by using the while loop to repeat steps. This eliminates the need to duplicate steps or write code to perform repeated actions on your mobile application, and helps you build simple and robust tests.

Multi Application Recording

With Testim Mobile you have the flexibility to record actions from multiple applications within the same test. This means you can easily switch between applications during recording and perform end-to-end testing without any additional configuration.

 

Upcoming features in future releases

We will be releasing more features for Testim Mobile in the upcoming months. Here is a quick sneak peek at some of them:

Support for advanced gestures

Users will be able to record and play back more gestures on their applications, including double tap, nested scroll, multi-touch, long-press menus and much more.

Pixel level validation

Testim provides step-by-step baseline and expected images out of the box as a basic means of debugging your test runs. In addition, you will also be able to do visual validation. This feature is helpful when you need to ensure the UI of the mobile application is exactly what it is supposed to be, across different devices.

Support for multiple IMEs

With multiple IMEs (Input Method Editors) available in the Android ecosystem, it makes sense to give users the flexibility to record and play back tests with the desired IME. You will soon be able to use alternative input methods, such as on-screen keyboards and speech input, on your devices and record/play back tests with Testim Mobile.

 

Get started with Mobile Native App Automation

Don’t take our word for it: test drive the Mobile Beta and find out firsthand how AI-driven mobile automation can be super fast, stable and simple.

It is also worth mentioning that users who sign up for the program get a personalized onboarding process from our team, including 24/7 customer support. The best part of all this: everything is FREE. We are most interested in your feedback and would like to showcase our AI-powered mobile automation framework.

So, what are you waiting for? Enroll in our beta program and take your first step towards easy and seamless mobile native app test automation using AI.

The authoring and execution of tests is an important aspect of test automation. Tests should be simple to write, to understand and to execute across projects. The chosen platform should give you the flexibility both to record and play back tests and to write custom code that extends the functionality of the automation framework. I recently came across Angie’s article on 10 features every codeless test automation tool should offer. She does a great job of discussing different aspects of test automation that need to be an integral part of any automation platform.

Angie’s breakdown appeals to the heart and soul of what we set out to do when we built Testim, from her explanation of why record and playback tools fail (we discuss some of these issues in this post as well) to the different challenges mentioned in her article.

We are really proud of what we do at Testim and wanted to address how we handle some of the important aspects of test automation pointed out in her article. We also highlight how we use AI to solve the “maintenance” issue which is arguably the hardest challenge of test automation.

 

  1. Smart element locators

Testim’s founder, Oren Rubin, coined the term “smart element locators” in 2015, when he gave the first demo of Testim and showed how AI can be used to improve locators. By using hundreds of locators instead of a single static one, Testim can learn from each run and improve over time.

With static locators (e.g. CSS selectors/XPath), we use only one attribute of an element to uniquely identify it on a page; if that attribute changes, the test breaks, and as testers we end up spending a considerable amount of time troubleshooting and fixing the problem. Research suggests that about 30% of testers’ time is consumed just maintaining tests. Can you imagine the opportunity cost of this effort?

A tester’s time is valuable and better spent actually exploring the application and providing information to help stakeholders make informed decisions about the product. With AI-based testing we can overcome this problem by using dynamic locators: a concept where we use multiple attributes of an element to locate it on the page instead of a single attribute. This way, even if one attribute changes, the element can still be located successfully with the help of the other attributes the AI has already extracted from the DOM. Testim is based on this dynamic location strategy, making your tests more resilient to change.

  2. Conditional waiting

Testim supports conditional waits, which can be added with the click of a button. We provide built-in waits based on element visibility, text (or regex), code-based (custom) waits using JavaScript, waits based on a downloaded file and, of course, the hardcoded “sleep” (which is generally not advisable in tests unless you have no other option). A sketch of a code-based wait follows.
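As a sketch of what a code-based wait might look like, assuming the snippet is retried until it returns a truthy value and that element has been mapped into the step (the name and the expected “Ready” text are illustrative):

// Succeed once the element is attached, laid out (visible) and shows the expected text
return element
  && element.offsetParent !== null
  && /Ready/.test(element.innerText);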

  3. Control structures

Testim supports “if” statements and “loops”. Looping can be applied at the test level by parameterizing your tests with different datasets (aka data-driven testing) or to a specific subset of actions (something super hard, that only Testim supports). The conditions that stop the loops can be simple and predefined (such as an element or text being visible) or more complex, using custom code. This has been an integral part of Testim since the first version.

  4. Easy assertions

Assertions are among the most widely performed actions in test automation: you want to validate an element based on different conditions. For example, if a particular element needs to appear on a particular page of your website, you add an assertion to validate the presence of that element. With Testim, we made it easy for users to add assertions with a single mouse click and built all of them into the tool itself.

Users have various validation options that include:

  • Validate element visible
  • Validate element not visible
  • Validate element text
  • Validate via API call
  • Validate file download
  • Validation via custom JS code running in the browser (great for custom UI)
  • Validation via custom JS code running in node.js (great for pdf and DB validations)
  • Visual validation – via Applitools integration

Testim integrates seamlessly with Applitools, a Visual Assertion platform, which allows you to validate not only the text, but also its appearance like font and color.

  5. Modification without redo

Testim not only supports easy modification of steps; the platform also supports full version control, including creating branches and auto-sync with GitHub.

In Testim, you can add a step at any point in your test.

You can also easily delete or modify any step in your test

  6. Reusable steps

Testim supports reusability by grouping several steps together and letting you reuse the same group in another test (passing different parameters, e.g. for a login).

For example, the steps to log into an application form one of the most commonly repeated flows in test automation. In Testim, you can create a reusable “Login” group by selecting the steps you want to group together and clicking “Add new Group” as shown below.

 

 

Not only does Testim support the creation of reusable components as explained above, the platform also maximizes their reuse through a feature called Group Context. Imagine you have one or more components (e.g. a gallery of images) within a page or across several pages, and you need to easily state which component to perform the group of actions on. Although this is relatively doable in code (via the Page Object design pattern), it was extremely hard to implement in low-code tools until the release of Group Context. Testim is currently the only codeless platform that supports this.

  7. Cross-browser support

Testim uses Selenium under the hood and supports test execution on ALL browsers, even mobile web and mobile native (iOS/Android), the latter currently in private beta. Sign up for a free trial to execute tests on different browser combinations, including Chrome, Safari, Edge, Firefox and IE11.

  8. Reporting

It is vital to get quick feedback on your test runs, especially for root cause analysis. The generated reports should be easy to read and need to contain relevant information on the state of the test. In Testim, there are different levels of reporting to help users know exactly what happened within each test run. Some of the features worth mentioning include:

  • Screenshots

While each test runs, the platform takes screenshots of the passed and failed results for each step. As a result, users find it easier to troubleshoot problems and understand what happens under the hood.

  • Feedback on each step

The user gets feedback on each step in a test: passed or failed is shown with a green or red icon in the top left portion of the step, as shown below.

  • Entire DOM captured on failure

On failure, the user also has the option of interacting with the real HTML DOM to see which objects were extracted during the run.

  • Test Logs

Logs are a rich source of information on what happened underneath the AUT. Testim provides test logs when the user runs tests on the grids. The option can be found in the top section of the editor.

  • Suite and Test Runs

We have suite and test run views that enable the user to get granular details on each test run: when the test ran, the result, the duration, the level of concurrency, which browser the test ran on and much more. We also have filters to drill down based on different options.

 

  • Reports

We make it easy to get a high-level health check of your tests using the Reports feature. No other existing testing platform gives this level of detail about your tests, all in one single dashboard. The details include what percentage of test runs passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps have changed and the most flaky tests in your test suite; all of these can be filtered by the current day, a 7-day or a 30-day period.

  9. Ability to Insert Code

Testim gives organizations the flexibility to extend the platform’s functionality using JavaScript, either by running it in the browser, where Testim can help by finding elements for you in the DOM (aka dependency injection), or by running the code in Node.js, which allows loading many common libraries (for example, to access databases or inspect downloaded PDFs).

In addition, Testim has test-hooks to run before or after each test/suite.

For example, if you want to validate a particular price on a web page, you can grab the price, convert the string to a number and do the necessary validation. In the example below, we validate that the price is not over $1000.
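A minimal sketch of such a step, assuming the price element is passed in as a parameter named priceElement (an illustrative name) and that throwing an error fails the step:

const text = priceElement.innerText;                  // e.g. "$999.99"
const price = Number(text.replace(/[^0-9.]/g, ''));   // strip "$" and commas
if (Number.isNaN(price)) {
  throw new Error(`Could not parse a price from "${text}"`);
}
if (price > 1000) {
  throw new Error(`Price ${price} exceeds the $1000 limit`);
}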

  10. CI/CD Integration

Testim easily integrates with ALL CI servers (e.g. Jenkins, CircleCI, VSTS): literally just copy the automatically generated CLI command and paste it into your build server. We integrate with ALL 3rd-party grid hosting providers that support Selenium and Appium (Amazon, Sauce Labs, BrowserStack and more). We also support locally hosted grids and provide our own Testim grids.

Apart from all the features mentioned above that help build stable automated tests, we also have a notable feature that makes testers’ lives a lot easier by saving time when reporting bugs.

 

Capture Feature

One of the most time-consuming aspects of testing is bug reporting: after finding a bug, we need to report it to the developer with relevant information to speed up the troubleshooting and fixing of the issue.

With Testim you can do this with a single click, with the help of our Chrome extension. All the details related to the bug are automatically generated for you in a matter of seconds.

 

In summary, we wanted to build a tool that helps with authoring, maintenance and collaboration, which we consider the three pillars of test automation. Hopefully this post helps to highlight that.

We would also love to hear your feedback about our tool, so feel free to reach out to us: try Testim for FREE and get a free consultation on test design and test automation for your existing testing frameworks and practices. Remember, as Steve Jobs said, “Simplicity is the ultimate sophistication” – and this is the basis on which Testim was created.

Testim has been #1 in innovation among its competitors for several years now, as can be seen from the new features we constantly release to make automation faster, more stable and more collaborative than ever. Continuing in this direction, we are excited to bring you our next big feature, which we are calling Group Context. Imagine you have one or more components (e.g. a gallery of images) within a page or across several pages, and you need to easily state which component to perform a group of actions on. Although this is relatively doable in code (via the Page Object design pattern), it was extremely hard to implement in codeless tools until the release of Group Context. Testim is currently the only codeless platform that supports this.

For those of you not familiar with Testim, let’s start by defining “context”.

“Context” is key both in real life and in coding: you provide the necessary information in order to perform an action. For example, say you make a reservation at a restaurant for a party of four; you would be required to provide a name for the reservation. Here the name is the “contextual information” and making the reservation is the “context”. Similarly, in test automation, it is important to know the context of elements at the DOM level in order to perform an action.

Context in Test Automation

Context is all the more important when you have reusable components in your automation suite. For example, take the web page below.

 

It is a simple web page containing a gallery of items; in this case, names of places with descriptions and an option to book them as part of a reservation. Each item in the gallery has similar elements, such as an image, text and a button, because the code generating those instances is the same. So, at a high level, all the gallery items are exactly the same except for the data showing up in each one.

Say we create a reusable group to:

  1. Validate whether an image and text are present in gallery item 1, which is “Madan”
  2. Validate whether there is a “Book” button in the gallery item
  3. Click on the “Book” button
  4. Do some validations in the “Checkout” screen

It would look something like this:

 

Now, what if I want to use the group on item 2 of the gallery, which is “Shenji”, instead of “Madan” (the 1st item)?

 

Typically, we would have to create another reusable group to make it work for gallery item 2, which is time-consuming and makes no sense when the whole gallery shares the same DOM structure.

When using code (e.g. Selenium), you can use the Page Object design pattern and simply pass the context into the constructor, either as a locator or as a WebElement (see slides 40 and 41 here); a sketch of the pattern follows.
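For context, here is a minimal sketch of that pattern using selenium-webdriver in JavaScript; the GalleryItem class and the CSS selectors are made up for this example:

const { By } = require('selenium-webdriver');

class GalleryItem {
  constructor(root) {   // root: the WebElement for one gallery card
    this.root = root;
  }
  async title() {
    return this.root.findElement(By.css('.title')).getText();
  }
  async book() {
    await this.root.findElement(By.css('button.book')).click();
  }
}

// Usage: scope the same page object to different cards.
// const cards = await driver.findElements(By.css('.gallery-item'));
// await new GalleryItem(cards[1]).book();   // item 2, "Shenji"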

In codeless/low-code platforms, this wasn’t possible until the release of our new “Group Context” feature. Now you can maximize reuse by assigning entire groups to different elements of the page, and across pages, with a single click. Taking the above example, you would not have to reassign all the elements and step properties within the group in order to perform the same set of actions on another item in the gallery with the exact same DOM structure. This means we can use the same group on gallery item 2, “Shenji”, without any rework, just by choosing the Context -> Custom option from the properties panel as shown below.

 

Use Cases for Group Context

Group Context can be used in many other scenarios, such as:

  • Repeating elements: when you have similar elements repeating in the page and want to execute the same steps on all of them.
  • Table rows: when you want to execute the same steps on different rows in a table.
  • Tabs or frames: when you want to reuse a group of steps recorded in one tab on other tabs in the same or a different page

 

Summary

We constantly work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. We strongly believe this feature will benefit teams and help make automation much smarter. Group Context is currently in beta and we would love your feedback.

Please use our chat support to give us feedback or e-mail us at product@testim.io. Happy Testing!!!
