Author

francis@testim.io

Introduction

We work hard to improve the functionality and usability of our platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media, but we wanted to provide a monthly recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with Testim. We’d love to hear your feedback.

Customers have access to these features now; check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. If you’re an existing user, please send us your feedback.

GitHub

Using GitHub? Then check out these two new enhancements we’ve made to the integration.

What is it?

  1. Align your tests with your branches automatically. Testim now supports webhooks: your tests will automatically be forked with every new branch.
  2. Easily report bugs in Git Issues straight from your browser. All the relevant bug info will automatically be populated in Testim – learn more about bug capture.

Why should I care?
Merging branches in GitHub? Testim will automatically merge the corresponding tests so you don’t have to do it manually. Isn’t that what test automation is all about, more automation? Learn more about branches.

In order to integrate testing into your development cycle, it’s important to integrate your test automation platform with your source control. This allows you to align your tests with branches so that you can test different versions of code simultaneously or in different sequences.
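As an illustration of how branch-driven test syncing could work behind the scenes, here is a minimal sketch of a handler for a GitHub `create` webhook event. The `ref` and `ref_type` payload fields come from GitHub’s webhook format; the test-forking step itself is purely hypothetical and not Testim’s actual API.

```python
def handle_create_event(payload):
    """Handle a GitHub 'create' webhook event.

    GitHub sends a 'create' event when a branch or tag is created;
    'ref_type' distinguishes the two, and 'ref' holds the new name.
    Returns the branch name a test-sync step would act on, or None.
    """
    if payload.get("ref_type") != "branch":
        return None  # ignore tag creation events
    branch = payload["ref"]
    # Hypothetical sync step: fork the test suite for the new branch.
    # fork_tests(branch)  # illustrative only, not a real API call
    return branch

# Example: a new feature branch triggers a sync; a new tag does not.
print(handle_create_event({"ref": "feature/login", "ref_type": "branch"}))
print(handle_create_event({"ref": "v1.0", "ref_type": "tag"}))
```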

UI/UX Facelift

What is it?

As we strive to improve your user experience, we try to make everything more convenient by reducing the number of clicks it takes to perform common actions. Some of these include:

  1. Editing a step’s properties
  2. Viewing a step’s screenshot
  3. Deleting a step

Why should I care?

The new UI allows you to do all of this in a single click: just hover over a step and click the appropriate button.

We made the steps bigger so the thumbnails can contain more information. This way you don’t have to open the screenshot to understand the context. Step descriptions can also be longer.

We also removed clutter by hiding uncommon actions (e.g. overriding a timeout) for a cleaner UI. But don’t worry: none of the actions are gone.


And one more thing… It’s much prettier! We’d love to hear what you think of the new UI. Please share your thoughts on Twitter or Facebook.

Struggling to get answers to these test automation questions? Watch the recorded webinar.

  • What constitutes an adequate degree of test automation coverage?
  • Does every single function in the application need a corresponding test associated with it or just the core functionality?
  • Does every application need a full end to end automation suite built out or just enough to satisfy QAs and the business? 

Then watch the recorded webinar, Cracking Your Test Automation Code: The Path to CI/CD, in which Bas Dijkstra, test automation consultant at On Test Automation, and Oren Rubin, CEO of Testim, discuss the do’s, don’ts, and misconceptions of test automation.

Bring your questions and get answers on:

  • Why your code impacts your approach to testing
  • What’s the right mix of unit, functional, end-to-end, UI, and other types of testing
  • How to create your test automation strategy and tactical execution plan
  • Which pitfalls to avoid so they don’t increase your cost of quality

This round-table discussion will include insights from these industry experts:

  • Bas Dijkstra, On Test Automation
  • Oren Rubin, CEO of Testim

About the presenters:

Bas Dijkstra
Bas is an independent test automation professional who has been in the test automation and service virtualization field for over 10 years, designing and developing test automation and service virtualization solutions that enhance and improve test teams and test processes. Find out more about Bas on his LinkedIn profile. For questions and more information, you can always send him an email at bas@ontestautomation.com or give him a nudge via @_basdijkstra on Twitter.

Oren Rubin
Oren has over 20 years of experience in the software industry, building mostly test-related products for developers at IBM, Wix, Cadence, Applitools, and Testim.io. In addition to being a busy entrepreneur, Oren is a community activist and the co-organizer of the Selenium-Israel meetup and the Israeli Google Developer Group meetup. He has taught at the Technion and mentored at the Google Launchpad Accelerator.

Totango is the leading enterprise-grade customer success platform that helps recurring revenue businesses proactively impact business outcomes with customer success. With solutions to empower customer success teams or entire companies, Totango enables every employee to participate in customer success.

The Problem

Like any growing company, Totango was short-staffed on engineering resources, and each team member owned multiple tasks. Developers were responsible for doing their own testing, mostly focusing on unit-level tests, while the overlay team supported the first-level testing efforts. While the team supported a variety of different test automation functions, they never had enough time to invest in automating user interface-level testing. The team considered Selenium and experimented with it.

“We didn’t have the appetite and desire to work with something like Selenium or one of its variants,” said Oren Raboy, co-founder and VP of Engineering at Totango. “We experimented with it but realized that it’s a steep learning curve and required a lot of effort just to get some basic value out of building such a system. It was brittle and slow.”

The Solution

Totango believes that everyone on the team should be able to write tests and own code quality. Totango’s R&D team members picked up Testim quickly, using it for front end test automation coverage.

“Testim’s simple user interface makes it easy for our developers to add tests without a steep learning curve,” says Oren. “We increased coverage by 30% in 3 months and found that very little maintenance is required, so we can focus on increasing coverage.”

Totango was able to create a UI regression suite that gave the team confidence that things were not breaking while allowing them to add, expand, and grow without investing a ton of resources into their UI test automation. The regression suite is extremely valuable for the app development team as it serves as a safety net they can rely on when they make changes that break things.

Totango is now working to leverage Testim not only to help with code coverage but also to run various test scenarios in production, giving them app-level production monitoring to mitigate production malfunctions above and beyond code regressions.

The Results

For quite some time, the engineering team did UI-level testing manually. In order to keep up with customer demands and add products to their suite without slowing down their daily releases, the team needed to optimize their frontend testing efforts. Before Testim, Totango would occasionally have a regression at the UI level that forced the team to stop working and go back and fix the issues. Now the team can find problems quicker and react to them faster, giving them more coverage without slowing down the development pace.

“We had a lot of automated testing, just not at the UI level,” explained Oren. “With Testim, we’ve been able to place quality front and center and make it a company-wide priority vs. a topic only QA is responsible for. The end result is an increase in velocity, quality, and customer satisfaction.”

 

It’s inevitable. Agile has matured, and in addition to speed, the new set of challenges is about accountability and proving its worth. Just as with other areas of the business, Agile must answer to traceability, auditability, and compliance. After all, what good is a methodology that delivers fast but ultimately fails to deliver more value? Many organizations are now demanding answers, and teams new to Agile are starting out in an uphill battle with risk-averse management and globally dispersed project teams, in addition to technical debt. Even though the executive team made the decision to adopt Agile practices, that decision comes with its own set of expectations that must be met. And it’s quantitative metrics that will be relied on to satisfy those expectations.

So which metrics should you put in place? Is it more effective to track activity within Agile tools, such as JIRA? Or is it better to track metrics within the software itself? The important thing to realize, though, is what’s really being asked: is technology actually improving its impact on the business in a tangible way? Or, said another way, as phrased by the Kanban community, is IT becoming more fit for purpose? Answering this question, of course, requires a clear understanding of what it is that the business expects from its interactions with IT.

To ensure metrics remain relevant, they should be inspected and potentially evolved to ensure they still align with the organization’s short- and long-term goals. Certain metrics may be created to align with the organization’s specific agile maturity phase, and as an organization moves from one phase to another, certain metrics may no longer be relevant. Inspection and adaptation will ensure metrics align with the specific goals.

Measure the amount of working software value that reaches customers’ hands, such as the percentage of story points completed within a sprint. Velocity trends (not absolute values) are also helpful. Using story points alone is a crude way of doing this; it is much better to also estimate the business value of each story (so each story gets two estimates, one for complexity and one for value), so you can easily measure the value produced by the team. The tough part is that assigning a value to a story is not easy. In some circumstances, value cannot be a KPI for IT projects because (1) business value is subjective, especially if business value points are used; (2) some projects have no direct way to calculate business value (think regulatory projects that must be done to comply with regulations); and (3) business value varies too much from project to project based on the perceived business benefit, making it hard to measure objectively.
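To make the two-estimate idea concrete, here is a small sketch that totals complexity points (classic velocity) and business-value points for the stories completed in a sprint. The story names and all numbers are made up for illustration.

```python
# Each completed story carries two estimates: complexity (story points)
# and business value (value points). All figures are illustrative.
completed_stories = [
    {"name": "checkout flow", "complexity": 5, "value": 8},
    {"name": "audit logging", "complexity": 3, "value": 2},
    {"name": "search filters", "complexity": 8, "value": 5},
]

velocity = sum(s["complexity"] for s in completed_stories)    # classic velocity
value_delivered = sum(s["value"] for s in completed_stories)  # value produced

print(velocity, value_delivered)  # 16 15
```

Tracking both sums per sprint lets you see cases where velocity stays flat but the value delivered trends up (or down), which a single story-point number would hide.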

KPIs

That said, KPIs to track during a transformation to Agile include those that fall under the categories of time, quality, and effort. Instead of measuring traditional key performance indicators (KPIs) like profitability or transactions per hour, executives must shift their focus to how well the enterprise is able to deal with changing customer preferences and a shifting competitive landscape.

Quality

In terms of quality, nothing measures a product’s value or worth better than ROI. The trick is to make sure the return is defined clearly and is measurable. The most straightforward way to do that is to tie it to a numeric value: sales, downloads, new members, positive comments, social media likes, etc.

The number of defects is a simple metric to track that speaks volumes about not only the quality of the product but also team dynamics and effectiveness. Good defect metrics to track include the number of defects found during development, after release, or by customers or people outside of the team. It can also be insightful to track the number of defects carried over to a future release, the number of support tickets, and the percentage of manual vs. automated test coverage.
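One simple metric derived from those counts is the defect escape rate: the share of all defects that were found after release rather than during development. A sketch with illustrative numbers:

```python
def escape_rate(found_in_dev, found_after_release):
    """Fraction of all defects that escaped to production."""
    total = found_in_dev + found_after_release
    if total == 0:
        return 0.0  # no defects recorded at all
    return found_after_release / total

# Illustrative release numbers: 40 defects caught during development,
# 10 reported after release (by customers or people outside the team).
print(escape_rate(40, 10))  # 0.2
```

A falling escape rate over several releases suggests the team is catching more defects before customers do; a rising one is an early warning even if the raw defect count looks stable.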

Effort

Sprints are fundamental to Agile iterations; KPIs, therefore, become fundamental to sprints. The sprint structure is established early in the development process and, while tweaked as needed to remain focused on objectives, should stay in place unless it proves unreliable in achieving those objectives.

Velocity tracks the number of user story points completed per sprint. Its effectiveness at depicting anything of value outside of the technology team is debatable. It’s often a misunderstood metric because it’s confused with a team’s productivity. It’s not uncommon for executive management to demand that a higher number of story points be completed per sprint in an effort to increase speed to market. However, more often than not this results in a lower-quality product. Nonetheless, it can be useful if it’s understood and reported on as an effort-based metric instead of a time- or quality-based one.

When measuring Velocity, it’s important to keep in mind that different project teams will have different velocities. Each team’s velocity is unique. If team A has a velocity of 50 and team B has a velocity of 75, it doesn’t mean that team B has higher throughput. Since each team’s estimation culture is unique, their velocity will be as well. Resist the temptation to compare velocity across teams. Measure the level of effort and output of work based on each team’s unique interpretation of story points. 

Time

Scrum teams organize development into time-boxed sprints. At the outset of the sprint, the team forecasts how much work they can complete during a sprint. A sprint burndown report then tracks the completion of work throughout the sprint. The x-axis represents time, and the y-axis refers to the amount of work left to complete, measured in either story points or hours. The goal is to have all the forecast work completed by the end of the sprint.
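The burndown described above reduces to a simple computation: start from the forecast total and subtract work as it completes, yielding the y-axis value for each day of the sprint. A sketch with made-up numbers:

```python
def burndown(total_points, completed_per_day):
    """Return the remaining work (story points) at the end of each day."""
    remaining = total_points
    series = []
    for done in completed_per_day:
        remaining -= done
        series.append(remaining)
    return series

# A 10-day sprint forecast at 40 points; daily completions are illustrative.
# A final value of 0 means the team met its forecast.
print(burndown(40, [0, 5, 3, 6, 4, 2, 7, 5, 4, 4]))
```

Plotting that series against the ideal straight line from 40 to 0 is what a sprint burndown chart shows; a curve that hugs the ideal line is the "consistently meets its forecast" pattern the next paragraph describes.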

A team that consistently meets its forecast is a compelling advertisement for agile in their organization. But don’t let that tempt you to fudge the numbers by declaring an item complete before it really is. It may look good in the short term, but in the long run only hampers learning and improvement. 

SUMMARY

KPIs are vital to the strategic goals of an organization: in addition to revealing whether the direction of a project is on course, they assist in informing key business decisions. Metrics are just one part of building a team’s culture. They give quantitative insight into the team’s performance and provide measurable goals for the team. While they’re important, don’t get obsessed. Listening to the team’s feedback during retrospectives is equally important in growing trust across the team, quality in the product, and development speed through the release process.

SOURCES

  1. Banking on Kanban
  2. https://www.atlassian.com/agile/metrics

 

Testing has been in the spotlight of software development as organizations continually seek to optimize development cycles. Continuous Delivery and its promise of frequent releases, even as often as hourly, is a big factor driving executives to find ways to shave time off of any eligible technical process. As enterprises embark on their DevOps transformational journeys, delivering at lightning speed without compromising on quality takes more than cultural changes and process improvements. Test automation is key in helping project teams write and manage the smallest number of test conditions while getting the maximum coverage.

The case for automation: if the scenario is highly quantitative, technology is superior to humans. That’s clearly the case in the current technology landscape, where, according to the 2015 OpenSignal report, massive fragmentation has resulted in over 24,000 distinct Android devices that all run different versions of the Android OS, plus the several variations of iOS devices currently in use. The web is no different. It’s impossible to test every scenario manually, so automation must be leveraged. But therein lies the main point: automated testing is a tool to be leveraged as needed instead of relied upon to dictate an entire testing strategy.

Logic would suggest that if automating a few test scenarios yields fast results, then automating any and every eligible scenario will shorten the test cycle even more; alas, that’s usually not how it goes. Automation endeavors take a good deal of planning and forethought in order to add value at the anticipated level. More often than not, the overhead of maintenance gets in the way, especially in CI/CD environments where tests are automatically triggered and someone needs to analyze the reports and fix locators, a task that could take hours. This won’t work for organizations that are truly DevOps. If full-stack developers are going to be fully responsible for taking code to production, then they will need tools to automate the process across infrastructure, testing, deployment, and monitoring. This continuous framework enables weekly, daily, and hourly releases. Leading DevOps organizations like Netflix and Amazon are said to deploy hundreds to thousands of times per day.

What’s more, studies reveal that a high percentage of projects utilizing automated testing fall short of the anticipated ROI or fail altogether. These transformations fall short due to their duration, ramp-up time, skill-set gaps, and maintenance overhead. If the benefits of automated testing aren’t significant enough to mitigate risk, then speedy releases could become more of a liability than an asset.

There are varying levels of automation just as there are companies of all different shapes and sizes. Instead of one or the other, it’s more fitting that automated and manual testing are looked at as complementary. Agile and DevOps have created new testing challenges for QA professionals who must meet the requirements for rapid delivery cycles and automated delivery pipelines.

DEMANDS OF CONTINUOUS TESTING

Testing used to have the luxury of having its own phase or at least a set timeframe to stabilize new code prior to pushing to production. No longer. In the era of DevOps, QA must adopt a truly agile mindset and continually be prepared to shift focus to testing new features and functionality as they become available – that could be weekly, daily, or hourly.

From web, mobile, and connected devices to IoT, a quick inventory of current technology will reveal a multitude of different devices and literally thousands of potential ways that these technologies can be combined. Across networks, apps, operating systems, browsers, and API libraries, the number of combinations explodes, with each combination requiring its own verification and testing support. As the push towards DevOps continues, the upward trend towards continuous testing is inevitable. As a result, testers and/or developers will need to deliver in brief and rapid iterations.

Beyond automation, what does this look like? QA, Engineering, or whoever is responsible for testing will need to shift left and engage in rounding out user story acceptance criteria to ensure accuracy and completeness. In addition, active participation in sprint planning will help ensure that user stories are truly decoupled and independent, or, if dependencies are necessary, that story precursors and relationships are in place. Partnering with Product Owners helps confirm that the scope for a given sprint or iteration is realistic.

And finally, in support of shifting left, collaboration between Dev and Ops ensures the necessary support in the event that a code check-in causes a batch of automated tests to fail. The culprit will need to be identified manually, so it’s important that QA is prepared so as not to pose a blocker for the release. All of these activities are largely what comprises testing outside of automation. It’s a broader role than the traditional QA specialist’s, one that requires initiative and the willingness to embrace the always-on mentality that’s necessary to support DevOps efforts.

SUMMARY

Software development practices have changed and the change brings with it new challenges for the testing arena. The growing need for test automation in addition to the profusion of new testing scenarios will constantly evolve the role of testing and QA.  Yes, the world is changing for testers—in a good way. The days of exhaustive manual testing are certainly numbered, and automated testing is an essential part of the delivery pipeline. But testing has never been more important.

SOURCES

  1. IT Revolution DevOps Guide
  2. World Quality Report 2016-17
  3. Open Signal: Android Fragmentation

 

Introduction
We work hard to improve the functionality and usability of our platform, constantly adding new features. Our sprints are weekly with minor updates being released sometimes every day, so a lot is added over the course of a month. We share updates by email and social media but wanted to provide a recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with the product. We’d love to hear your feedback.

Test History

Is past performance an indicator of future success? The folks on Wall Street don’t think so… But we do! This is why we are pleased to offer the new Test History feature in Testim.  


The new Test History feature allows users to slice and dice test results to get more actionable insights.

Why should I care?
This gives users much more insight from test results. Quickly analyze how many tests ran during the selected time frame, how many succeeded, the average duration of each run, how your tests perform across multiple environments, and whether certain environments or scenarios consistently fail. This allows project teams to see improvement trends across scenarios or environments over time. Click here to learn more.

What’s included?
Test History allows you to filter based on the following parameters:

  • Run time
  • Specific tests / All tests
  • Last X runs
  • Status
  • Browser

Create Dependencies Between Tests
This new capability allows users to create different kinds of dependencies between tests. As a best practice, we recommend keeping tests as isolated as possible. However, we recognize that sometimes you need the ability to create dependencies between tests.

Why should I care?
Now users can create a logical sequence of tests. Working closely with our customers, we’ve seen project teams that need to run a set sequence of activities. A general testing framework does not allow you to create a sequence, forcing users to create one long test, which may result in performance issues and failures. By creating dependencies in test plans, you can create shorter, discrete tests, order them in sequence, and share data between them. Click here to learn more.
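Under the hood, running dependent tests means executing them in an order that respects the dependency graph. A minimal sketch using a topological sort (the test names and their dependencies are invented for illustration, not Testim’s API):

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Each test maps to the tests it depends on; names are illustrative.
dependencies = {
    "login": [],
    "add_to_cart": ["login"],
    "checkout": ["add_to_cart"],
    "order_history": ["checkout", "login"],
}

# static_order() yields every test only after all of its dependencies.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

This is also why a cycle (test A depends on B, which depends on A) cannot be scheduled: the sorter raises an error, which is the graph-level version of the isolation best practice mentioned above.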

Setting up Cookies
Cookies is a reserved parameter name for specifying a set of cookies you would like Testim to set for your application. You can set cookies for each browser session before the test starts or even before the page is loaded. Use cookies to make browser data available to your tests. The cookies parameter is set as an array, where each cookie object needs to contain name and value properties.

Why should I care?
Websites use cookies to personalize user experience. By using this feature, you can test different types of personalized experiences. Click here to learn more.
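For illustration, the cookies parameter described above is an array of objects, each with name and value properties. A minimal sketch of building and validating such an array (the cookie names and values are made up; any property beyond name and value is an assumption):

```python
# The cookies parameter is an array of cookie objects; each object
# must contain "name" and "value" properties.
cookies = [
    {"name": "session_id", "value": "abc123"},
    {"name": "experiment_group", "value": "personalized-homepage"},
]

def valid_cookies(cookie_list):
    """Check that every cookie object has the required properties."""
    return all("name" in c and "value" in c for c in cookie_list)

print(valid_cookies(cookies))  # True
```

Setting a cookie like the hypothetical experiment_group above before the page loads is how you could drive the personalized-experience testing this section describes.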

Improved Scrolling
A more flexible way of scrolling through your page.

Why should I care?
Traditionally, scrolling in test automation is set by pixels, so you would tell your script to skip 60 pixels. Offering a more flexible scrolling ability makes your tests more adaptive; for example, you can now scroll to a specific element or simulate mouse-wheel scrolling. Click here to learn more.

How do I get my hands on these hot new features?
Customers have access to these features now; check them out and let us know what you think. If you’re not a customer, test drive the new features by signing up for a free trial to experience code-free and maintenance-free test automation.

Be smart & save time...
Simply automate.

Get Started Free