Test automation


Engagio is a two-year-old marketing software startup that helps marketers land and expand their target accounts. The company is rapidly growing, with more than 150 customers and 45 employees. Based in San Mateo, Engagio was founded by Jon Miller, who was previously the Co-Founder of Marketo.

Today, I had the pleasure of speaking with Helge Scheil, VP of Engineering at Engagio, who shared why he selected Testim, his overall development philosophy, and the results his team was able to achieve. Check out the series of videos or read the Q&A from our conversation below.

Q: Can you tell us a little about what your software does?

A: We help B2B marketers drive new business and expand relationships with high-value accounts. Our marketing orchestration software helps marketers create and measure engagement in one tool, drive ongoing success, and measure impact easily. Engagio orchestrates integrated account-based programs, providing the scale benefits of automation with the personalization benefits of the human touch. Our software complements Salesforce and existing marketing automation solutions.

Q: What does your development process look like?
A: Our developers work in 2-week sprints, developing features rapidly and deploying to production daily without any production downtime. We’re running on AWS and have the entire develop-test-build-deploy process fully automated. Each developer can deploy at any time, assuming all quality criteria have been met.

Q: What tools do you use to support your development efforts?

A: We are using Atlassian JIRA and Confluence for product strategy, roadmaps, requirements management, work breakdown, sprint planning and release management. We’re using a combination of Codeship, Python (buildbot), Docker, Slack, JUnit and Testim for our continuous build/test/deploy. We have integrated Testim into our deployment bot (which is integrated into Slack).

Q: Prior to Testim, how were you testing your software? What were some of the challenges?

A: Our backend had great coverage with unit and integration tests, including the API level. On the front-end we had very little coverage and the small amount we had was written in Selenium, which was very time-consuming with little fault-tolerance and many “flaky” failures.

Q: What were some things you tried to do to solve these challenges?

A: We tried simply allocating more engineering time to Selenium tests. We considered hiring automation engineers but weren’t too fond of the idea because we believe in engineers being responsible for quality out of the gate, including regression test creation and maintenance.

Q: Were there other solutions you evaluated? Why did you select Testim?

A: We didn’t really consider any other solutions after we started evaluating Testim, which was the first vendor we evaluated. Despite some skepticism around the “record and play” concept, we found that Testim’s tolerance to UI and feature changes (meaning “tests don’t fail”) is much greater than we had expected. Other solutions we considered rely on pixels, images, or coordinates, which are inherently sensitive and non-tolerant to changes. We also found that, unlike with Selenium, non-engineers (e.g. product managers, functional QA engineers) can write tests.

Q: After selecting Testim, can you walk me through your getting started experience?

A: During the first week of implementation the team got together to learn the tool and started creating guinea pig tests, which evolved into much deeper tests with more intricate feature testing. Those tests ended up in our regression test suite, which runs nightly. Rather than allocating two days per sprint, we decided to crank up the coverage with a one-day blitz.

Q: After using Testim, what were some of the benefits you experienced?

A: We were able to increase our coverage by 4-5x within 6 weeks of using Testim. We can write more tests in less time and maintenance is not as time-consuming. We integrated Testim via its CLI and made “run test <label>” commands available in our “deployment” Slack channel as well as a newly created “regression_test” channel. Any deployment that involves our web app now automatically runs our smoke tests. In addition, we run full regression tests nightly. Running four cloud/grid Testim VMs in parallel, we’re able to run our full regression test suite in roughly 10 minutes.

Q: How was your experience working with Testim’s support team?

A: Testim’s responsiveness was extraordinary. We found ourselves asking questions late in the evenings and over the weekends, and the team was there to help us familiarize ourselves with the product. If the answer didn’t come immediately in the middle of the night, we would have it by the time we got started the next day.

Thank you to everyone who participated in our round table discussion on The Future of Test Automation: How to prepare for it? We had a fantastic turnout with lots of solid questions from the audience. If you missed the live event, don’t worry…

You can watch the recorded session any time:

Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim shared their thoughts on:
  • The current state of test automation
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • How to create your test automation destiny
In this session they also covered:
  • Tips and techniques for balancing end to end vs. unit testing
  • How testing is moving from the back end to the front end
  • How to overcome mobile and cloud testing challenges
  • Insights into how the roles of developers and testers are evolving
  • Skills you should start developing now to be ready for the future of testing

Some of the audience questions they answered:

  • How do we know what is the right amount of test coverage to live with a reasonable amount of risk?
  • What is the best way to get developers to do more of the testing?
  • How do you deal with dynamic data? Is the best practice to read a DB and compare the results to the front end?
  • Does test automation mark the end of manual testing as we know it?

There were several questions that we were not able to address during the live event, so I followed up with the panelists afterwards to get their answers.

Q: What is Alan’s idea of what an automated UI test should be?

As much as I rant about UI automation, I wrote some a few weeks ago. The Unity Developer Dashboard provides quick access to a lot of Unity services for game developers. I wrote a few tests that walk through workflows and ensure that the cross-service integration is working correctly.

The important bit is that I wrote tests to find issues that could only be found with UI automation. If validation of the application can be done at any lower level, that’s where the test should be written.

Q: The team I work on builds complex machines with an Android UI and a separate backend. What layer would you suggest concentrating more testing effort on?

I’d weight my testing heavily on the backend and push as much logic as possible out of the Android UI and into the backend, where I can test more, and test faster.

Q: Some legacy applications are really difficult to unit test. What are your suggestions for handling these kinds of applications?

Read Working Effectively with Legacy Code by Michael Feathers, and you’ll find that adding unit tests to legacy code isn’t as hard as you thought it was.

Q: How do you implement modern testing to complement automation efforts?

My mantra in Modern Testing is, Accelerate the Achievement of Shippable Quality. As “modern” testers, we sometimes do that by writing automated tests, but more often, we look at the system we use to make software – everything from the developer desktop all the way to deployment and beyond (like getting customer feedback), and look for ways we can optimize the system.

For example, as a modern tester, I make sure that we are running the right tools (e.g. static analysis) as part of the build process, that we are taking unit testing seriously and finding all the bugs that can be found by unit tests during unit testing. I try to find things to make it easier for the developers I work with to create high quality and high value tests (e.g. wrappers for templates, or tools to help automate their workflow). I make sure we have reliable and efficient methods for getting feedback from our customers, and that we have a tight loop of build-measure-learn based on that feedback.

Q: Alan Page could you give an example of a test that would be better tested (validated) at a lower level (unit) as opposed to UI level?

It would be easier to think of a test that would not be better validated at that level. Let’s say your application has a sign-in page. One could write UI automation to try different combinations of user names, email addresses, and passwords, but you could write tests faster (and run them massively faster) if you just wrote API tests to sign up users.

Of course, you’d still want to test the UI in this case, but I’d prefer to write a bunch of API tests to verify the system, and then exploratory test the UI to make sure it’s working well with the back end, has a proper look and feel, etc.
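The speed difference is easy to see in miniature. As an illustration only (the validation rules below are hypothetical stand-ins for a real sign-in service, not any particular product’s API), checking many input combinations below the UI takes milliseconds, with no browser in the loop:

```python
# Illustrative sketch: exercising sign-in logic below the UI.
# `validate_signin` is a made-up stand-in for a real API endpoint.
import re

def validate_signin(email: str, password: str) -> bool:
    """Toy validation rules standing in for a real sign-in service."""
    email_ok = re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email) is not None
    password_ok = len(password) >= 8
    return email_ok and password_ok

# Dozens of combinations run in milliseconds -- no browser needed.
cases = [
    ("user@example.com", "s3cretpass", True),
    ("user@example.com", "short", False),
    ("not-an-email", "s3cretpass", False),
    ("", "", False),
]
for email, password, expected in cases:
    assert validate_signin(email, password) == expected
print("all sign-in combinations checked")
```

The same combinations driven through a browser would take minutes, which is why the UI layer is better reserved for exploratory checks of look, feel, and integration.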

Q: How critical is today for a QA person to be able to code? In other words, if you are a QA analyst with strong testing/automation skills, but really have not had much coding experience, what would be the best way to incorporate some coding into his or her profile? Where would you start?

Technology is evolving at a rapid pace, and the same applies to tools and programming languages. That being said, it would be good for testers to know the basics of some programming language in order to keep up with this pace. I would not say this is critical, but it will definitely be good to have, and with so many online resources available, it is easier than ever for testers to gain technical knowledge.

Some of the best ways I have known to incorporate coding into his/her profile would be via:

  • Online Tutorials and courses (Udemy, Coursera, Youtube videos)
  • Pairing with developers when they are programming. You can ask them basic questions of how things work, as and when they are coding. This is a nice way to learn
  • Attending code reviews helps to gain some insight into how the programming language works
  • Reading solutions to different problems on Stack Overflow and other forums
  • Volunteering to implement a simple feature in your system/tool/project by pairing with another developer
  • Organizing/Attending meetups and lunch ‘n’ learns focused on a particular programming language and topic
  • Choosing a mentor who can guide you and give you weekly assignments to complete, with clear goals and deadlines for deliverables

Q: My developers really like reusing cucumber steps, but I couldn’t make them write these steps. The adoption problem is getting the budget reallocated. Any advice for what I should do?

Reusing cucumber steps is not necessarily a bad thing. It could also mean that the steps you have written are really good and people can use them for other scenarios. In fact, this is a good thing in BDD (Behavior Driven Development) and makes automating these steps easier.

But if the developers are lazy and reuse steps that do not make sense in a scenario, then we have a problem. In this case, what I would do is try to make the developers understand why a particular step may not make sense for a scenario and discuss how to rewrite it. This continuous practice of spot feedback helps instill the habit of writing good cucumber steps. I would also raise the point in retrospectives and team meetings and discuss it with the entire team. This helps build a common understanding of the expectations.

In terms of budget reallocation, I would talk to your business folks and project manager about the value of writing cucumber steps: how it brings clarity to requirements, helps catch defects early, and saves a lot of time and effort that would otherwise be spent on rework of stories due to unclear requirements and expectations for a feature.

Q: Can we quickly Capture Baseline Images using AI?

What exactly do you want the AI part to do? Currently, it’s not there, but there are tools (e.g. Applitools and Percy.io) which can create a baseline very fast. I would expect AI to help in the future with setting the regions that must be ignored (e.g. a field showing today’s date), and the closest thing I know is Applitools’ layout comparison (looking at and comparing the layout of a page rather than the exact pixels, so the text can differ and the number of lines can change, but it still counts as a match).

Q: What are your thoughts on Automatic/live static code analysis?

Code analysis is great! It can help prevent bugs and add to code consistency inside the organization. The important thing to remember is that it never replaces functional testing and it’s merely another (orthogonal) layer which also helps.

Q: When we say ‘automated acceptance tests’, do we mean automated REST API tests? Which automation tool is good to learn?

No. They usually mean E2E (functional) tests, though acceptance tests should include anything related to approving a new release, and in some cases, this includes load/stress testing and even security testing.

Regarding good tools: for functional testing, I’m very biased toward Testim.io, but many prefer to code and choose Selenium or its mobile counterpart Appium (though Espresso and EarlGrey are catching on in popularity).

For API testing, there are too many to list, from the giant HP (StormRunner) to medium-sized Blazemeter, to small and cool solutions like APIfortress, Postman, Loadmill, and of course Testim.io.

Q: Why not call them full stack tests instead of e2e?

Mostly because e2e is used more often in the industry – but I actually prefer to use the Google naming convention and just call tests small, medium, or large. Full stack / end-to-end tests fall in the large category.

“If it’s worth building, it’s worth testing” — Kent Beck, pioneer of Test Driven Development

Imagine this situation. It is 4:45 pm on a Friday afternoon, and a new feature on the company’s web application for generating sales reports is pushed to production. At 11:30 pm that night, the lead developer gets a frantic call from a customer — the new feature broke an existing business-critical feature. What if the team could have prevented the break in the first place? By including test automation from the beginning of the development process, this is possible.

What Is Testing?

Testing is crucial to many agile software development processes. Testing enables developers to know ahead of time if everything will work as expected. With a well-written set of tests, developers can know whether or not new additions to a codebase will break existing features and behavior. Testing processes become the crystal ball of the software development process.


Testing can be automated or performed manually, but automated testing allows software development teams to test code more quickly, frequently, and accurately. Software testers and developers can then free up their time and focus on the more difficult tasks at hand.

How To Develop For Testing

The key to being successful with test automation in the software development life cycle is to introduce it as early as possible. While many developers recognize the importance of testing their software, the testing process is often delayed until the end of the development cycle. Testing may even be dropped completely in order to make a deadline or meet budgetary restrictions.

Those not using test automation may view testing as a burden or roadblock to developing and delivering an application. A well-written set of tests, however, can end up saving time during the development process. The key to this is to write them as soon as new features are developed. This practice is commonly known as Test Driven Development. Writing tests as one develops features also encourages better documentation and leads to smaller changes in the codebase at a given time. Taking smaller steps in creating and changing a codebase enables the developer to make sure that what he/she adds maintains the health of the codebase.


Just as one can adapt his/her development workflow to Test Driven Development, it is important to also adapt the way tests are written when leveraging automated tests. Automated tests typically contain three parts: the setup, the action to be tested, and the validation. The best tests are those that test just one item, so developers know exactly what breaks and how to fix it. Tests that combine multiple actions are more difficult to create and maintain as well as slower to run. Most importantly, complicated tests do not tell the developers exactly what is broken and still require additional debugging/exploration to get to the root of the problem.
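The three parts can be sketched in a few lines. This is a minimal illustration only; the `Cart` class is a made-up example, not code from any particular project:

```python
# Minimal sketch of the setup / action / validation structure.
# The Cart class is a hypothetical system under test.
class Cart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

def test_adding_item_updates_total():
    # 1. Setup: put the system into a known state.
    cart = Cart()
    cart.add("notebook", 3.50)
    # 2. Action: perform exactly one behavior under test.
    cart.add("pen", 1.25)
    # 3. Validation: assert one observable outcome.
    assert cart.total() == 4.75

test_adding_item_updates_total()
print("test passed")
```

Because the test exercises one behavior and asserts one outcome, a failure points directly at the adding-an-item logic rather than leaving the developer to debug a chain of combined actions.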

Automated testing can be broken down into different types of tests, such as web testing, unit testing, and usability testing. While it may not be possible to have complete automated test coverage for every application, a combination of different types of testing can provide a comprehensive test suite that can be augmented by hands-on testing as well.

Platforms like Testim enable developers to automate web testing across multiple browsers, as if a user was testing the application hands-on. This enables both developers and testers to uncover issues that cannot be discovered using hands-on testing methods.

What causes a test to fail?

The purpose of testing software is to identify bad code. Bad code either does not function as expected or breaks other features in the software. Testing is important to developers as it allows them to quickly correct bugs and maintain a healthy codebase that enables a team of developers to develop and ship new features.

However, tests can fail for reasons other than bad code. When doing hands-on testing, a failed test can even be the result of human error. With some automated testing suites, if a test is written for a button on a webpage with a certain identifier, and the identifier changes, that test will fail the next time it is run.


The failure of the button test may cause the developer to think something is wrong with their code, and as a result, they may spend hours digging through endless lines of code to suss out the issue, only to find that it was the result of a bad test and not bad code.
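A toy sketch of this failure mode, using a tiny in-memory stand-in for a page rather than a real browser (all element names here are invented for illustration):

```python
# Illustrative sketch of a locator tied to a single attribute breaking
# when that attribute changes; the tiny "DOM" is just a list of dicts.
old_page = [{"id": "submit-btn", "tag": "button", "text": "Submit"}]
# After a refactor the button still exists, but its id was renamed.
new_page = [{"id": "send-btn", "tag": "button", "text": "Submit"}]

def find_by_id(page, element_id):
    return next((el for el in page if el["id"] == element_id), None)

assert find_by_id(old_page, "submit-btn") is not None  # test passes
assert find_by_id(new_page, "submit-btn") is None      # same test now "fails"

# A locator combining several attributes survives the id rename:
def find_button_by_text(page, text):
    return next(
        (el for el in page if el["tag"] == "button" and el["text"] == text),
        None,
    )

assert find_button_by_text(new_page, "Submit") is not None
print("multi-attribute locator still finds the button")
```

The application behavior never changed; only the locator broke. Tests that weigh multiple attributes of an element, rather than pinning a single id, are far less likely to produce this kind of false alarm.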

How Does Testim Help With Testing?

Testim gives developers and testers a way to quickly create, execute, and maintain tests. It does this by adapting to the changes that are made during the software development cycle. Through machine learning algorithms, Testim enables developers to create tests that can learn over time. This could lead to things like automated testing that adapts to small changes like the ID of a button, which would create more reliable tests that developers and testers can trust to identify bad code.

Tests that are quick to create and run will transform the software development process into one that ships code quicker, so developers can spend their time developing.

According to the 2017 Test Benchmark Report, survey respondents want to achieve 50%-75% test automation.
Join this round-table discussion with Alan Page, QA Director at Unity Technologies, and Oren Rubin, CEO of Testim, as they discuss what companies need to start doing now to achieve their 5-year testing plans.
Date: Tuesday, February 27
Time: 9:00am PT
In this session they will cover:
  • The current state of test automation
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • How to create your test automation destiny


  • Get tips and techniques for balancing end to end vs. unit testing
  • See how testing is moving from the back end to the front end
  • Learn how to overcome mobile and cloud testing challenges
  • Gain insights into how the roles of developers and testers are evolving
  • Learn which skills you should start developing now to be ready for the future of testing
  • Ask the panelists your own questions live


We work hard to improve the functionality and usability of our autonomous testing platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media but wanted to provide a monthly recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with Testim.

API Testing

What is it?

Do you need to validate data that appears in the app against data from your back-end? Do you need to extract data to use in your tests? Then we have the feature for you.

We provide two types of API steps:

  • API action – Use this step when you need to get data and use it in a calculation, or save it for later use in the test.
  • Validate API – Use this step to validate an element against the API result data.

Why should I care?  
This capability makes it easy to test the UI and API simultaneously. There is no need to toggle between two different systems or integrate results. Author a UI test and automatically author an API test to ensure the application under test is working correctly. Read more
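Conceptually, an API validation step boils down to comparing a value rendered in the UI against the same value returned by the back end. The sketch below illustrates the idea only; it is not Testim’s implementation, and the response shape and field names are made up:

```python
# Conceptual sketch of validating a UI value against an API result.
# The response body and field names below are invented for illustration.
import json

# Stand-in for an API response the test fetched earlier in the flow.
api_response = json.loads('{"account": {"name": "Acme Corp", "open_deals": 3}}')

# Stand-in for text scraped from the element under test in the UI.
ui_text = "Open deals: 3"

expected = api_response["account"]["open_deals"]
shown = int(ui_text.rsplit(":", 1)[1])
assert shown == expected
print("UI matches API:", shown)
```

The value of doing this inside one tool is that the fetch, the extraction, and the comparison live in the same test, instead of being stitched together across separate UI and API test suites.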


Debug Step Parameters

What is it?

We’ve learned that enriched debugging can be a huge time saver. Debug the console errors and network errors automatically and see the failed steps in the DOM. You can see in each step all the parameters used during the run.

Why should I care?
This feature is helpful when you want to debug your runs and need to figure out which parameters were used in each step. Learn more


Company and Project Structure

What is it?

Large enterprises need the ability to manage multiple projects and various groups inside their organization. This feature makes it easy for admins to grant individual users access to specific projects. Earlier this year we added support for a company structure, with a company owner managing permissions and owners per project.

Why should I care?
This gives you more flexibility in managing the different groups inside your company and allows control over who has access to which projects. Learn more


Customers have access to these features now. Check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.


Should the goals and principles of DevOps be the same for every company? Is it possible for every company to reach CI/CD? 

Software delivery teams come in all different shapes and sizes. Each team has its own DNA that has organically evolved through generations of diverse experiences, skill sets, tools, technologies and sets of processes.

So, if every project team looks entirely different than the next, where do you start?

RESERVE YOUR SEAT for this roundtable discussion, where we will cover different types of organizational situations and the common as well as unique challenges each may face in its DevOps transition. We will review ways to address specific inefficiencies in each situation’s development process, as well as how to plan accordingly and minimize risk along the way.

Date: Wednesday, November 15
Time: 9:00am PT

We will cover:

  • How to transition to DevOps and align your development, testing and release processes
  • How to plan for your DevOps transition based on your current operational reality
  • A breakdown of common as well as unique DevOps obstacles
  • A method for systematically resolving your development issues to enable test automation

 RESERVE YOUR SEAT and bring your questions for the panelists:

  • Tanya Kravtsov is the Director of QA at Audible. She is building a new QA organization to support innovative web product development at scale. Previously, as head of automation and continuous delivery at ROKITT, senior QA manager at Syncsort, and VP at Morgan Stanley, Tanya focused on quality, automation, and DevOps practices, working with internal and external customers to transform development and testing processes. Tanya is passionate about process automation, continuous integration, and continuous delivery. She is a founder of the DevOpsQA NJ Meetup group and a frequent speaker at STAREAST, QUEST, and other conferences and events. Follow her on Twitter @DevOpsQA.
  • Bob Crews is the President of Checkpoint Technologies. He is a consultant and trainer with almost three decades of I.T. experience, including full life-cycle work involving development, requirements management, and software testing. He is also the President of the Tampa Bay Quality Assurance Association. He has consulted and trained for over 200 different organizations in areas such as effectively using automated testing solutions, test planning, implementing automated frameworks, and developing practices which ensure the maximum return on investment with automated solutions.

We’re excited to announce that Testim has raised $5.6M in Series A funding, led by Lightspeed Venture Partners. In the last year, we’ve raised a total of $8 million from early investors including Foundation Capital and Spider Capital, and joined Heavybit.

This is a big deal for us and we’re incredibly thrilled by what it means. The funds will support our mission to help your engineering team make application testing autonomous and integral to their agile development cycle. We started this company because we’ve used plenty of automation tools and found them hard to use, with little confidence in the stability of the tests. I’m sure you’ve had similar experiences. Together, we’ve built a family of customers including Wix.com, NetApp, and Walmart, grown our team to 20+ employees, and become the fastest-growing provider of autonomous testing with a 34% compound monthly growth rate.

This funding is the fuel we need to continue investing in our customers’ success by growing our engineering team, expanding our customer success team, and supporting your evolving processes. In the last 4 months we’ve added tons of enhancements to help teams automate their testing even faster.

In the next year, and with your ongoing input, we’re excited to tackle:

  • Make testing mobile native applications a breeze
  • Add more self-learning capabilities to our algorithms, making our tests even more stable
  • Integrate Testim.io with more development tools (well, we already support Slack, Jira, Git, CI tools and other software development tools)

We’d love to hear what you think of our product.  Test drive Testim today by signing up for a free trial to experience code and maintenance free test automation.

Our client is the market-leading data authority for the hybrid cloud. They provide a full range of hybrid cloud data services that simplify the management of applications and data across cloud and on-site environments to accelerate digital transformation. Their solution empowers global organizations to unleash the full potential of their data to expand customer engagement, foster greater innovation, and optimize their operations.

The Challenge – Achieve CI/CD

Their application was complex and included many user flows, requiring the creation of thousands of functional tests with the goal of shifting left. In addition, they needed to test as close to development as possible.

Selenium was their first choice, and a team of over a dozen testers was assembled to begin the task. After a few months it became apparent that it was going to take more time and manpower to achieve their goal of CI/CD. The tests were complex and took three days to author. To make things worse, the tests would often break, leading the team to spend extra time maintaining and fixing them. Shifting left would also require teaching the developers Selenium.

The lack of tools to troubleshoot a failed test (e.g. screenshots to compare, detailed error messages pointing to the right step, test history, or the parameters used across the flows) led to a long time-to-resolution involving a number of team members. A lot of time was wasted, not only trying to figure out why a test failed, but also explaining the discoveries to the developers.

“Maintaining tests took a lot of my time. When developers run tests that fail, it becomes more of a burden than a source of confidence in bug prevention. Both groups had to stop what they were doing to figure out if it’s the functionality or if it’s the test. We found ourselves spending more time trying to stabilize the tests than actually testing.” – Company QA Manager

The Solution

They implemented Testim’s solution, which uses machine learning for the authoring, execution, and maintenance of automated test cases. With Testim they were able to capture the scenarios in minutes, as well as complement them with custom JavaScript. The team was able to spend more time creating new tests and validating the status of their application.

“Today we have hundreds of tests running on every pull, giving the developers feedback on their code within minutes. The developers themselves easily update test scenarios so QA can focus on increasing coverage. We also significantly reduced our cost of quality: the rich information we get allows us to reduce the time to troubleshoot by 80%.”

The Results

As a rapidly growing enterprise, they needed a way to optimize processes through automation. This plays a significant role in their move to agile development. Within a couple of months of using Testim, the team was able to create hundreds of UI test scenarios. Today, they are proud to say that they are authoring tests in under an hour, compared to the three days it was originally taking.

“Before Testim it would take 3 days to author a single test in Selenium; now, even for the less experienced tester, writing tests takes under an hour. Developers can update tests on the fly and figure out where the tests are failing without any additional help.”

Now, troubleshooting a failed test takes a fraction of the time. There is an indication of which step failed, including JS syntaxes, screenshots comparing prior runs, and access to the DOM with detailed error messages.

However, the biggest impact was the reduction in maintenance. Tests are stable and trustworthy, so when a test fails the user knows it is either due to a bug or because the test requires a change in the flow. The team focused most of its time on increasing coverage, knowing that the tests adapt to UI changes made by the development team.

Today, we are proud to say that they are fully CI/CD, testing on every code push, and running thousands of tests every day.


We work hard to improve the functionality and usability of our autonomous testing platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media but wanted to provide a monthly recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with Testim. We’d love to hear your feedback.

Customers have access to these features now. Check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing.

Group Usage Across Tests

What is it?

Groups allow you to create reusable flows instead of rewriting them again and again. Group usage allows you to quickly filter the tests that are using a specific group.

Why should I care?  

Whether you make changes to a specific group or have a number of tests that failed and want to trace the commonality among them, group usage allows you to quickly filter the relevant tests as well as their results. This feature has been sought out by a large number of our customers and is already getting love from some of our users. Read more here

Scheduler – Email Notifications

What is it?

Easily schedule tests to run on your nightly build or monitor your production app, and make sure you’re notified when something fails.

Why should I care?

Plan your schedule and choose your use case:

  • Monitor – Will run the tests on a set interval. Use this option to monitor the health of your application and alert when your service is down.
  • Nightly run – Schedule the tests to run on certain days of the week and at a certain time of day. Use this option to automatically trigger test runs such as a nightly regression. By morning you’ll have an email linking to the failed test results.
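If you were wiring up the same two use cases yourself with plain cron instead of Testim’s scheduler, the schedules would look something like the sketch below. The `run-tests.sh` wrapper script, its path, and its flags are hypothetical, shown only to illustrate the interval-versus-calendar distinction between the two modes.

```shell
# Hypothetical crontab entries mirroring the two scheduler use cases.

# Monitor: run a lightweight health-check suite every 15 minutes,
# around the clock, and alert if the service is down.
*/15 * * * * /opt/ci/run-tests.sh --suite monitor

# Nightly run: trigger the full regression at 02:00, Monday through Friday,
# so the failed-test results are waiting in your inbox by morning.
0 2 * * 1-5 /opt/ci/run-tests.sh --suite regression --notify email
```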

Read the docs to learn more.

Email Scheduler

Mobile Web Pilot Program

What is it?

Testim now supports the authoring and execution of tests for mobile web.

Why should I care?  

The number of mobile devices has surpassed the number of desktop computers. We all consume content through our mobile devices, and users expect your application to provide a superior experience regardless of the medium. Responsive websites look and behave differently than desktop sites, and hence require different tests.

We are currently operating a closed beta for a select group of customers. Reach out to your Testim representative if you are interested in taking part in the program.

We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.

As many yoga instructors do, they encourage students to find balance. It’s an effective bit of advice in that it puts the onus on the practitioner. It’s also a popular concept for software testing, as industry experts often recommend that software teams find the balance between automation and manual testing practices. Another similarity is the trajectory of their adoption rates. One does not dive into yoga, but slowly adopts it over time until it becomes part of a daily routine. The same goes for test automation: you can’t expect to start automating everything from scratch. Good test automation is the culmination of work over time.

Why not all manual?

Manual testing is fine, and was once the status quo, but with the wide adoption of Agile and continuous delivery methodologies, it just doesn’t scale. Every enhancement or new feature rolled out for an app must have a corresponding set of tests to ensure the new functionality works and does not break the previous version. Checking all of the file directories, databases, workflows, integrations, rules, and logic manually would be extremely time consuming.

For companies looking to improve time to market and increase test coverage, automation provides significant advantages over manual testing. A typical environment can host thousands of test cases of varying complexity, executed effortlessly and nonstop. As a result, automation shrinks the time required to perform mundane, repetitive test cases from days to hours, and even minutes if you run them in parallel. That inherent velocity increase is what attracts teams to automated testing.
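As a rough, generic illustration of why parallel execution matters (this is not Testim-specific code; the test cases are simulated with a short sleep), compare serial and parallel runs of the same suite:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_test_case(case_id):
    """Stand-in for a single test; a real suite would drive a browser or API."""
    time.sleep(0.1)  # simulate the test's runtime
    return f"case-{case_id}: passed"

cases = list(range(20))

# Serial: total wall time is roughly the sum of every test's runtime.
start = time.time()
serial_results = [run_test_case(c) for c in cases]
serial_seconds = time.time() - start

# Parallel: total wall time approaches (suite size / workers) * test runtime,
# so ten workers cut the wait by roughly an order of magnitude here.
start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    parallel_results = list(pool.map(run_test_case, cases))
parallel_seconds = time.time() - start

print(f"serial:   {serial_seconds:.1f}s")
print(f"parallel: {parallel_seconds:.1f}s")
```

The same ordering of results comes back either way; only the wall-clock time changes, which is exactly the effect parallel test execution buys you at scale.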

Why not all automation?

The benefits of automation are obvious. Automation provides the ability to react quickly to ever-changing business requirements and to generate new tests continuously. Is it reasonable, then, to assume that the more you automate, the more benefits you reap? Why not just automate everything?

The short answer: it’s very difficult. According to the World Quality Report 2017–18 (Ninth Edition), these are the top challenges enterprises face with regard to test automation.

Traditionally, you would automate only once your platform was stable and the user flow fairly consistent. This was primarily due to the amount of time it took to write a test (sometimes hours or even days), plus the time it took to update and maintain tests as the code changed. Until the user flow stabilized, the ROI was not justified.

Feedback from our customers suggests this has changed. We recently heard from a number of clients who automate everything, even as early as when wireframes are ready. We have reduced authoring time by about 70% and maintenance time by about 90%. Creating a user flow now takes a few minutes with Testim, and so does updating it. Reducing the time per test from hours or days to a few minutes makes automation much more affordable, providing immediate ROI from faster feedback loops.

So should you strive for 100% automation? No, but you can reach 80%–90%, which was unheard of until recently. There are still scenarios that only humans can handle, such as Face ID. There are also other aspects of your Software Development Lifecycle (SDLC) on which automation depends.


There is an ongoing struggle to keep up with the rate dictated by customer pressure and the competition to produce on a continuous basis. And not just produce for production’s sake, but to deliver a quality product that’s been thoroughly tested. That’s why we created Testim…

To help software teams:

  • Increase coverage by 30% in 3 months
  • Reduce number of missed bugs by 37%   
  • Increase team productivity by 62%
  • Reduce regression testing costs by 80%  
  • Speedup testing schedules by 3 months


  1. https://techbeacon.com/world-quality-report-2017-18-state-qa-testing
  2. https://dzone.com/articles/automated-vs-manual-testing-how-to-find-the-right
  3. https://blog.qasource.com/the-right-balance-of-automated-vs.-manual-testing-service
  4. http://sdtimes.com/software-testing-is-all-about-automation/#sthash.5Idkps2O.dpuf
  5. http://sdtimes.com/report-test-automation-increasing/#sthash.HlHQmD2l.dpuf
  6. https://appdevelopermagazine.com/4935/2017/2/8/test-automation-usage-on-the-rise/
  7. https://techbeacon.com/devops-automation-best-practices-how-much-too-much

Be smart & save time...
Simply automate.

Get Started Free