Quite often, some elements and attributes on a page change dynamically. For example, say you have a currency conversion application. Today, 1 US Dollar might equal 0.89 Euros; the very next day it could be 0.90 Euros, since exchange rates change daily. In Testim, you have the following ways to handle dynamically changing elements:

Tip 1: Validating Dynamically Changing Elements

Whenever you want to validate dynamically changing elements, Testim provides export parameters to handle this situation.

For example, say you want to validate the price of your flight trip. The price changes daily based on multiple factors. In this case, you can use export parameters: you store the element's value in an export variable, then use that variable as a parameter in different steps and in different validations on a page, as in the sketch below.
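As an illustration, here is a minimal sketch of a custom JavaScript validation step that consumes an exported value. It assumes a prior step exported the displayed price into a variable named exportedPrice; the variable name and the currency format are assumptions for this example, not product defaults:

// Hypothetical custom validation: `exportedPrice` is assumed to be a
// parameter exported by an earlier step, e.g. "$423.10".
const numericPrice = Number(String(exportedPrice).replace(/[^0-9.]/g, ''));
if (!Number.isFinite(numericPrice) || numericPrice <= 0) {
  // Throwing fails the step and surfaces the bad value in the error message.
  throw new Error('Price is missing or not a positive number: ' + exportedPrice);
}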

Tip 2: Generating random values during run time

Quite often there is a need to generate random values containing text, numbers, and certain prefixes at run time. This is possible using the Generate Random Value step in Testim. The value gets stored in a variable that can be used in other steps as well. By default, the variable name is randomValue.
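If you need more control than the built-in step offers, a custom JavaScript step can produce a similar value. This is only an illustrative sketch; the user_ prefix is an arbitrary example, and randomValue simply mirrors the default variable name mentioned above:

// Sketch of a custom step exporting a prefixed random value.
// `exports.randomValue` makes the value available to later steps.
exports.randomValue = 'user_' + Math.random().toString(36).slice(2, 10);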

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Multi Tab Indicator, Advanced Merge, Failed Test Retry Flag, and Refresh Option. Check them out and let us know what you think.

Multi Tab Indicator

What is it?

You now have a numeric value that displays in the top right section of each step that executes in multiple tabs.

Why should I care?

There is no longer a need to open a step and look at the screenshots or other related information to know which tab the step ran on. With the tab number indication, you have more visibility into test runs that involve multiple tabs/windows.

Advanced Merge

What is it?

When merging changes into a branch, a modal window will now pop up showing the different changes being merged into the branch. The changes are categorized into Tests, Shared Steps, and Suites.

At the top level, users can see what changed in each category: how many items were created, updated, or deleted. Expanding an item displays more details about individual changes.

Why should I care?

You now have better visibility and confidence before merging branches. Every change going into the branch is clearly laid out in the modal window.

Failed Test Retry Flag

What is it?

When this flag is set, a failed test will be executed repeatedly until either the test passes or the maximum number of retries has been reached (in which case the test finishes with a failed status). This flag is passed in via the CLI using the below syntax:

--retries <max_num_of_retries>
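For example, a CLI run that retries each failed test up to three times might look like this (the token, project, and grid values are placeholders, following the usage example later in this post):

testim --token "<your_token>" --project "<project_id>" --grid "Testim-grid" --retries 3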

Why should I care?

You now have the ability to re-run failed tests automatically with the use of this flag. So even if a test fails once or twice because of some unexpected issue, it will automatically run again and get a chance to pass on a subsequent attempt.

NOTE: When a test passes after one or more retries, it will be indicated in the UI as shown below

Refresh Option

What is it?

This new option will completely reload the page before proceeding to the next step.

Why should I care?

You no longer have to add a custom action to reload the page. Now there is a one-click option to do it.

Testim gives you the flexibility to add regular expressions (RegEx) to make string searching and validation easier. This is extremely helpful for extracting required information from a web page, or when you need to validate strings that have a dynamically changing portion.

For example, say you want to validate the label “Price” in the page below.

The price value is going to change dynamically based on the itinerary booked every time you run the test. So, if you want to ensure the label “Price” is displayed correctly on the page no matter what the price value may be, RegEx can be of great help here.

You could add /^Price/ in the Expected Value field in the Properties panel of the test step within Testim. This validates that the text starts with the word “Price”, allowing the rest of the text to be dynamic while still passing the validation.
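In plain JavaScript terms, the validation behaves like this small sketch (the sample label text is made up for illustration):

// The trailing value changes on every run; /^Price/ only anchors the start.
const label = 'Price: $423.10';     // dynamic text captured from the page
console.log(/^Price/.test(label));  // true, because the label starts with "Price"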

Commonly used RegEx patterns and their syntax include the following (a few standard examples, for illustration):
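  • ^ – matches the beginning of a string (e.g. /^Price/)
  • $ – matches the end of a string (e.g. /USD$/)
  • . – matches any single character
  • * – matches zero or more occurrences of the preceding token
  • + – matches one or more occurrences of the preceding token
  • \d – matches any digit (e.g. /\d+/ matches the digits in “Price: 423”)
  • \w – matches any word character (letter, digit, or underscore)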

More references on how to use RegEx can be found below-

https://docs.testim.io/advanced-steps/advanced-text-validations

https://www.w3schools.com/jsref/jsref_obj_regexp.asp

https://regexone.com/



Testim gives you the ability to play back tests in incognito mode. You may want to use incognito mode to see the true behavior of the application without any cached data. This is similar to running tests on the grid, where each test runs on a new browser instance without any cached data (the same as running in incognito mode). The following tips help you play back tests in incognito mode:

NOTE: Ensure you allow Testim to run in incognito before playing the tests.

Tip 1: Running a single test in incognito

If you want to play a test you just created in incognito, follow the below steps

  • Click on the drop down arrow next to the play button
  • Click on “Run in Incognito mode”

Tip 2: Running multiple tests in Incognito

Multiple tests can be run in incognito mode by using the CLI. Each time a test runs on the grid, a fresh browser instance opens without any cached data, so running the tests through the CLI as sketched below runs them all in incognito at once.
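A minimal sketch of such a run (the token, project, and grid values are placeholders, mirroring the usage example later in this post):

npm i -g @testim/testim-cli && testim --token "<your_token>" --project "<project_id>" --grid "Testim-grid"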

NOTE: Tests cannot be run in incognito mode from the Test List view. The CLI needs to be used to run multiple tests in incognito.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Email Validation, Advanced Scheduler, and the For Each Loop. Check them out and let us know what you think.

Email Validation

What is it?

You now have the ability to generate email addresses, send emails and validate the contents of an email within the Testim IDE itself.

Why should I care?

There is no longer a need to use 3rd party email vendors such as Guerilla Mail and Shark Lasers to do email validations. All of this is handled within Testim, with no context switching required. The validations can be done with a click of a button as shown below.

Advanced Scheduler

What is it?

You now have the ability to run tests in parallel, add a results label, choose branches, and set a timeout for a scheduled run with the Advanced Scheduler feature.

Why should I care?

With this new feature you have more control over your scheduled runs: you can make them run faster by adding parallelism, label each scheduled run, pick and choose which branch to run the tests on, and set a timeout value to control when a run should be aborted.

For Each Loop

What is it?

You now have the ability to iterate over any list of similar items and perform repeated actions.

Why should I care?

Iterating over rows in a table, clicking multiple checkboxes in a list, or validating the order of a list of similar items just got a lot easier with the For Each Loop functionality. You simply choose this option and select the element you want to act on repeatedly. It does not matter which element in the list of similar items you select, as the loop always starts from the first element and iterates over its siblings, as the sketch below illustrates.
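Conceptually, in plain JavaScript the loop behaves like the sketch below. This is not Testim's implementation, and the selector is an assumption; it only illustrates the sibling-iteration behavior:

// Starting from any selected item, walk the list from the first sibling onward.
const selected = document.querySelector('.list-item');   // any item in the list (hypothetical selector)
const first = selected.parentElement.firstElementChild;  // the loop always starts at the first sibling
for (let el = first; el !== null; el = el.nextElementSibling) {
  el.click();                                            // the repeated action, e.g. clicking a checkbox
}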

Click on this demo test to learn how the For Each Loop functionality works.

Testim gives you the ability to override timeouts within a test, outside a test, and across a group of tests. This controls how long tests wait for a particular condition to be met; if the timeout expires first, the test fails gracefully. The different ways to handle timeouts are as follows:

Tip 1: Timeouts within a step

Every step you record in Testim has a default timeout value of 30 seconds. You can override this value by following the below steps

  • Navigate to the properties panel of the step
  • Select “Override timeout” option
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

Tip 2: Timeouts within Test Configs

You have the ability to change the timeout for all the tests using a particular test config. You can do this by following the below steps-

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on the edit config icon next to the existing resolution
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

NOTE: You can also edit the Step delay value in the edit config screen

Tip 3: Setting timeout for test runs

To abort a test run after a certain timeout has elapsed, you can use the CLI --timeout option. The default value is 10 minutes.

Usage example:

npm i -g @testim/testim-cli && testim --token "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ" --project "vZDyQTfE" --grid "Testim-grid" --timeout 120000

The timeout value is given in milliseconds, so 120000 aborts the run after two minutes.

This timeout value can also be set in the Advanced Scheduler screen as shown below


Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Advanced Analytics and Results Export. Check them out and let us know what you think.

Advanced Analytics

What is it?

You will now be able to see an aggregated failure summary report inside the suite runs page. The report contains a snapshot of the test failures and a pie chart. To help with debugging, clicking any error group or one of the pie chart segments will filter the view to show only the tests that failed on the selected error. This speeds up the troubleshooting of failed tests by pinpointing the root cause of run failures.

Why should I care?

Sometimes a single problem in one shared step (for example, a “login” group) can cause many tests to fail. With the release of this feature, you will now be able to quickly home in on those high-impact problems and dramatically reduce the time it takes to stabilize the suite.

 

Results Export Feature

What is it?

You will now see an export button on the results pages (suite runs, test runs, and single suite view). Clicking this button downloads the results data as a CSV file, which can then be imported into Excel or Google Sheets.

NOTE: Only the data that is currently presented in the UI will be put into the CSV file. For example, if “Last 24 hours” is selected and the status filter is set to “Failed”, the CSV file will only include failed tests from the last 24 hours.

Why should I care?

You now have the ability to easily share test results across teams, with people who use Testim and those who don't. This also gives you the flexibility to feed the generated CSV file into any external tool or framework for more customized reporting. The possibilities are endless.

Testim gives users the flexibility to run tests on different screen resolutions. But sometimes this can get confusing, with some tests running on one resolution and newly created tests running on another. Below are two simple tips to set the screen resolution for a particular test and to apply it globally to all the tests in the project.

Tip 1: To ensure a test runs on a particular screen resolution each time you run it, follow the below steps

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on “Choose Other” option to select a resolution from the already existing config list OR
  • Click on the edit config icon next to the existing resolution
  • Set the desired screen resolution you want
  • Give a name for the newly created resolution
  • Then click “Save”

 

Tip 2: To apply an existing/new resolution to all the tests in your Test List, follow the below steps

  • Navigate to the Test List view
  • Click on “Select All”
  • Click on the “Set configuration for selected tests” icon
  • Choose the required resolution you want to be applied to all the tests

NOTE: Test configs can also be overridden at runtime via the --test-config parameter in the CLI and the Override default configurations option in the Scheduler, as sketched below.
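For illustration, a CLI run pinned to a named config might look like this (the config name and the other values are placeholders):

testim --token "<your_token>" --project "<project_id>" --grid "Testim-grid" --test-config "1920x1080"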

The Software Development Lifecycle (SDLC) consists of various roles and processes that have to mesh together seamlessly to release high-quality software. This holds true right from the initial planning phase all the way to production release and monitoring. In terms of roles, we have designers, business analysts (BAs), developers, testers, scrum masters, project managers, product owners (POs), and technical architects (TAs), who bring varying levels of experience and skill to the project. They collaborate to discuss and implement different aspects of the software. In terms of processes, depending on team size, release schedules, availability of resources, and the complexity of the software, teams can range from having no process at all to strict quality gates at different phases of the SDLC.

What is the Knowledge Gap?

As teams start to collaborate on different features of the software, they often run into situations like the following:

  • The PO comes up with a feature to implement, and each team member has a different interpretation of how the feature should work
  • The BA writes requirements that are hard to understand and implement, due to a limited grasp of the technical aspects of the system
  • The TA explains how a feature should be implemented using technical jargon that is hard for designers, POs, BAs, and testers to follow
  • The developer builds the feature without paying attention to its testability
  • The tester sits in on code reviews, and the developers assume he/she has the same level of technical expertise as them when explaining their implementation
  • The tester does not get enough time to complete their testing due to tight release schedules
  • The developer fails to do development testing and makes testers responsible for the quality of the product
  • There is no clear distinction of responsibilities on who should do which task in the SDLC, and everyone assumes someone else will do it; as a result, many tasks never get done

And so on…

Now you may ask: why do teams get into the above situations more often than expected? The answer is that there is a knowledge gap within teams. This is especially true when teams have a mix of entry-level, mid-level, and expert-level people, and each one makes different assumptions about the skill set, experience, and domain knowledge every individual brings to the table.

These gaps can also stem from a more granular level when teams are using different tools. For example, when customers use Testim, we have observed firsthand that different developers/testers think about and use our platform differently.

  • Manual testers see Testim as a time saving tool that helps them quickly create and run stable tests, and as something that can help them in reducing the amount of manual effort it takes to test applications
  • Automation Engineers see Testim as an integrated tool that helps them to do coded automated testing with the help of JavaScript and API testing in one single platform instead of using multiple tools for functional and API testing
  • Developers see Testim as a quick feedback tool that helps them to run several UI tests quickly and get fast feedback on the application under test. They also recognize the ability to do more complex tests by interacting with databases and UI, all in one single platform
  • Release Engineers see Testim as a tool that can be easily integrated in their CI/CD pipeline. They also recognize the ability to trigger specific tests on every code check in; to ensure the application is still working as expected
  • Business and other stakeholders view Testim as a collaborative tool that helps them get involved in the automation process irrespective of their technical expertise. They also value the detailed reports from test runs, which eventually help them make go/no-go decisions

As we can see, within the same team, people have different perceptions of tools that are being used within the project as well. These are good examples of knowledge gaps.

How do we identify knowledge gaps?

Identifying knowledge gaps in the SDLC is critical not only to ensure the release of high-quality software but also to sustain high levels of team morale, productivity, job satisfaction, and the feeling of empowerment within teams. The following questions help to identify knowledge gaps:

  • What are the common problems that occur in each phase of the SDLC?
  • What processes are in place during the requirements, design, development, testing, acceptance and release phases of the SDLC?
  • How often does a requirement change due to scope creep?
  • How effective is the communication between different roles in the team?
  • Are the responsibilities of each team member clearly identified?
  • How visible is the status of the project at any instant of time?
  • Do developers/testers have discussions on testability of the product?
  • How often are release cycles pushed to accommodate for more development and testing?
  • Are the teams aware of what kind of customers are going to use the product?
  • Have there been lapses in productivity and team morale?
  • Is the velocity of the team stable? How often does it fluctuate and by how much?

In terms of tools being used:

  • How are teams using different tools within the project? Are they using tools the right way?
  • Are all the resources sufficiently trained to use different tools within the project?
  • Does one sub-group within a team have more problems in using a tool than others?
  • How effective is a particular tool in saving time and effort to perform different tasks?

Answering these questions as a team helps identify the knowledge gaps and gets everyone thinking about solutions to these problems.

How do we bridge the knowledge gap?

There are five main factors that help to bridge the knowledge gap in teams. They are as follows:

  1. Training

Sufficient training needs to be given to designers, developers, testers, scrum masters, and project managers to help them do their jobs better, both in the context of the project and in using different tools. Doing this helps designers understand how mockups need to be designed so that developers can effectively implement the feature. Testers can attend code reviews without feeling intimidated and, with sufficient technical training, use tools more effectively. Developers understand why thinking about the testability of a feature is important and realize that tools can aid their development testing effort. The scrum master can better manage tasks in the project, and project managers can help the team collaboratively meet release schedules and deadlines. Finally, stakeholders can get a high-level overview of the project when they learn what reports the different tools generate and how to read them.

  2. Visibility

If we want teams to start working together seamlessly, like a well-oiled machine in an assembly plant, we need to make the results of everyone's effort visible to the entire team. It is important for everyone to know how their contributions serve the overall goal of releasing a high-quality product on schedule. There are various ways to increase visibility in teams, such as:

  • Checklists – a list of items to be done in each phase of the SDLC helps everyone be aware of what is expected of them. This is especially helpful when the team consists of members with varying skill sets and experience. If the items in the list are marked as DONE, there is no ambiguity about which tasks have been completed
  • Visual Dashboards – Another solution is having visual dashboards that give a high-level overview of the project's health and status. This helps not only stakeholders but also individual contributing team members. Dashboards can be created on whiteboards, easel boards, or in software that is accessible to everyone. Every day during stand-up meetings, teams should make it a point to review the dashboard and ensure everyone is aware of the high-level project status. For example, Testim provides a dashboard showing what percentage of test runs passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps changed, and the most flaky tests in your test suite; all these details can be filtered to the current day, a 7-day, or a 30-day period.
  3. Clear definition of responsibilities

There need to be clearly defined responsibilities in teams. Everyone needs to know why they are on the team and which tasks they need to accomplish on a daily, weekly, monthly, and quarterly basis. Goals, objectives, and expectations for each team member need to be clearly discussed with their respective peers/managers. This prevents most of the confusion that can arise around task completion.

  4. Empowering the team

In this day and age, where individuals are more technical and skilled, what they often lack is empowerment and autonomy. Contrary to the popular belief that there needs to be one leader for the whole project, leadership responsibility should be divided among the roles in the team. There needs to be one point of contact each from the design, development, and testing teams. Each point of contact, who in most cases also helps lead their respective sub-team, meets with the other leads to ensure all the sub-teams within the project are on the same page and working toward the same goals and objectives. This way, the whole team is empowered.

  5. Experimentation

Once the gaps are identified, the whole team needs to sit together (usually in retrospective meetings after sprints) to discuss different solutions to the problems. Based on this, the team needs to experiment with different solutions and see what works well and what doesn't. This constant experimentation and feedback loop makes the team more creative and empowers them to come up with solutions that work for them.

In summary, the “knowledge gap” has been one of the major obstacles preventing teams from reaching their fullest potential. Identifying and reducing these gaps increases efficiency and, as a result, leads to faster release cycles with higher quality. Testim can be one of the aids in this entire process. Sign up now to see how it helps increase collaboration and bridge the gap.

 

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page has moved, finding the target element may require scrolling, even though no scrolling was needed when the test was initially recorded. Testim now does this scrolling automatically with Auto Scroll.

NOTE: Users also have the option to disable this feature if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element's location on the page changes and scrolling becomes necessary. With Auto Scroll, Testim automatically scrolls to elements outside the viewport.
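Conceptually, Auto Scroll is equivalent to bringing the target element into view before interacting with it, as in this plain JavaScript sketch (the selector is hypothetical; this is not Testim's implementation):

// Bring the target element into the viewport, then interact with it.
const target = document.querySelector('#checkout-button');  // hypothetical target element
target.scrollIntoView({ block: 'center' });
target.click();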

Scheduler Failure Notifications

What is it?

Users now have the ability to get email notifications on every failed test that ran on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” feature, users receive a notification every time a scheduled run fails, giving you instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get a notification only once when a scheduler run fails, and no new email is sent until the failed scheduler run is fixed.

 
