Oren Rubin


We’re excited to announce that Testim has raised $5.6M in Series A funding, led by Lightspeed Venture Partners. In the last year, we’ve raised a total of $8 million from early investors including Foundation Capital and Spider Capital, and joined Heavybit.

This is a big deal for us and we’re incredibly thrilled by what it means. The funds will support our mission to help your engineering team make application testing autonomous and an integral part of their agile development cycle. We started this company because we had used plenty of automation tools and found them hard to use, with little confidence in the stability of the tests. I’m sure you’ve had similar experiences. Together, we’ve built a family of customers including NetApp and Walmart, grown our team to 20+ employees, and become the fastest-growing provider of autonomous testing, with a 34% compound monthly growth rate.

This funding is the fuel we need to continue investing in our customers’ success by growing our engineering team, expanding our customer success team and supporting your evolving processes. In the last four months we’ve added tons of enhancements to help teams automate their testing even faster.

In the next year, and with your ongoing input, we’re excited to tackle:

  • Making testing of native mobile applications a breeze
  • Adding more self-learning capabilities to our algorithms, making our tests even more stable
  • Integrating with more development tools (well, we already support Slack, Jira, Git, CI tools and other software development tools)

We’d love to hear what you think of our product. Test-drive Testim today by signing up for a free trial to experience code-free, maintenance-free test automation.


Google recently released Puppeteer, a Node library which provides an API to control headless Chrome. Within 24 hours it received great feedback from the community:

  • 6,685 stars on GitHub
  • 2.2K likes and 1.2K shares on Twitter

So why should we care? Here’s a snippet from its GitHub documentation:

Puppeteer’s GitHub documentation Q&A

In Google’s own words, there isn’t much difference from Selenium.

The awesomeness of Selenium is that its team convinced ALL browser vendors to support the same low-level API (and this took years! Try convincing Apple, Microsoft, and Google to work together), and even implemented this API in more than 10 languages (including JS).

Most of Puppeteer’s API is very similar to Selenium’s (or the alternatives’), e.g.:

  • Google’s launch() method vs. Selenium’s init()
  • goto() vs. url()
  • close() vs. end()
  • type() vs. setValue()
  • click() even stayed the same
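To make the mapping concrete, here’s a minimal Puppeteer sketch of a typical flow (the URL and selectors are hypothetical, chosen only for illustration); each line notes the roughly equivalent Selenium/WebdriverIO-style call from the list above:

```javascript
// Minimal Puppeteer sketch (Chrome only); URL and selectors are illustrative.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();   // ~ Selenium's init()
  const page = await browser.newPage();
  await page.goto('https://example.com');     // ~ url()
  await page.type('#search', 'testing');      // ~ setValue()
  await page.click('#submit');                // click() stayed the same
  await browser.close();                      // ~ end()
})();
```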

Google could have adopted the same Selenium API and contributed the changes to the Selenium repo. But the biggest issue isn’t the API. It’s splitting the community and not contributing to the Selenium code base. With Google’s resources and talented developers, they could have contributed to the Selenium project, which is currently supported by a few amazing volunteers, with some parts closing down for lack of resources.

Selenium is known to be relatively slow compared to operating directly on the browser. This is caused by its architecture, which requires a Selenium Grid or Selenium Standalone server acting as a proxy (and even just starting it takes a while). This architecture is great when your tests need to run on multiple browsers in parallel, but you pay the overhead even when working with a single browser. Helping the Selenium community speed this up, even if just for Chrome, would have been more beneficial than creating their own (Chrome-only) solution.

Puppeteer is a step in the right direction. Google is an innovative company that pushes the web forward with great ideas and specs, amazing developer tools, and now, it seems, an effort to improve UI test automation, which we all know is extremely challenging.

Standardization leads to innovation. With Selenium, not only can you run those tests on other browsers, but the entire industry relies on those standards to create additional (commercial) products. For example: Sauce Labs, BrowserStack, Applitools, CrossBrowserTesting, and the list goes on and on.

I would love to hear your opinion about Puppeteer and Selenium.

Oren Rubin

As we work with our clients, some themes keep recurring. The most common is that development organizations are short-staffed: they cannot hire enough developers. At the same time, the business pressure to ship product features does not let up. So what does the head of engineering do? None of the options are good:

  1. Delay the release – Not acceptable to the business.
  2. Reduce the feature set and only deliver a subset of what the customers expect – Risk of failing to meet customer commitments and losing their business.
  3. Prioritize new feature development over quality and hope for the best.

Unfortunately #3 is what many R&D teams are forced into. Many development executives prioritize the expansion of the development team, not the QA team. For many growing development teams, adding QA engineers is a luxury. And using sophisticated frameworks like Selenium requires skilled engineers.

Yet not doing UI testing invites disaster. The cost of fixing bugs increases over time, so bugs caught later in the release cycle are more expensive to fix. Relying on manual UI testing, while better than nothing, provides poor coverage and scale. Many people rationalize the decision not to test the UI, hoping that developers will test their own code and whatever their code affects, and that the team can live without automated UI testing. This often works, until a bug creeps in that stops all forward movement. Suddenly you’re in a fire drill, pulling valuable resources from multiple departments in the organization. After all, developers only test a specific part of the app, in a specific environment.

Does Selenium fix this problem? Not really. The results are OK, but it requires dedicated, skilled engineers to set it up and, more importantly, to maintain. Unmaintained tests lose their value the minute the application changes. And the application is always changing.

The challenge our customers face is how to increase coverage and maintain acceptable quality levels, update their tests to fit code changes and keep up with fast release cycles. If only there was a way to automate the maintenance of tests… possibly leveraging machine learning to ensure the tests keep up with the pace of software delivery…and that the team can grow in this area without adding people.

Unit Tests are in the house!
I’m a big advocate for TDD (Test Driven Development). Research shows TDD has a high correlation with quality code, as it prevents bugs from forming in the first place. Also, the cost of authoring tests and fixing bugs is lower.

This is true for the back-end. As for the front-end, although MVC- and MVVM-based frameworks do a great job of providing easier tools for writing unit tests, we still see low adoption. Some claim this stems from frontend code being mostly glue code; others say it’s harder to test DOM and event-driven (asynchronous) code.

The need for frontend automation rises
Nowadays, a huge part of any project resides in the frontend. Not only have past challenges remained, the plot thickens as we have more environments (e.g. mobile and tablet), more operating systems (Windows and Mac are both popular), and more browsers to support (IE11, Chrome, Safari, Firefox, Opera, Edge, not to mention old versions of IE... may it rest in peace already).

Percentage of core logic – backend vs. frontend

As more R&D teams have shifted to agile, QA has become the bottleneck. A big part of being agile is frequent releases. When a six-month project includes a three-week testing cycle, that’s one thing; but when a team wants to deploy weekly or even daily without bugs, QA is challenged to shrink that cycle to 1-2 hours. Feedback to the developer should be near-instantaneous, as research shows that the longer the feedback cycle, the longer it takes to fix the problem.

Why testing frontend is difficult
With the move to frontend, high maintenance overhead resurfaced. The maintenance for frontend test automation was as high as 50% twenty years ago. It has only improved marginally in recent years and is still as high as 30%.

To mimic a user interaction, you must first locate the element you wish to act upon. There are two major approaches to locate an element:

  • Visual based – look at the pixels
    Frameworks such as Sikuli use this technique, but it’s super fragile, as rendering on each OS/browser/display-adapter combination will generate different pixels. Some display adapters are non-deterministic in nature. To have stable tests you must be a computer vision guru.
  • Object based – look at the object hierarchy (DOM).
    Frameworks such as Selenium spread like wildfire. Unlike twenty years ago, better software design patterns have emerged (such as the Page Object), which help separate the tests’ business logic from the implementation and encourage reuse.
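As a rough illustration of the Page Object idea (the class, selectors, and the tiny stub driver below are all hypothetical; in real use the driver would wrap Selenium’s API):

```javascript
// Minimal Page Object sketch. The `driver` is any object exposing
// type()/click(); in real use it would wrap a Selenium WebDriver.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
    // Selectors live in one place: when the UI changes, only this
    // class needs updating, not every test that logs in.
    this.selectors = { user: '#username', pass: '#password', submit: '#login' };
  }
  login(user, pass) {
    this.driver.type(this.selectors.user, user);
    this.driver.type(this.selectors.pass, pass);
    this.driver.click(this.selectors.submit);
  }
}

// Stub driver that just records actions, standing in for a real browser.
const actions = [];
const stubDriver = {
  type: (sel, text) => actions.push(['type', sel, text]),
  click: (sel) => actions.push(['click', sel]),
};

new LoginPage(stubDriver).login('alice', 'secret');
console.log(actions.length); // prints 3
```

The test exercises business logic (“log in”) while the selectors stay encapsulated, which is exactly what reduces the maintenance cost when the UI changes.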

Nonetheless, both approaches result in flaky, unstable tests that require high maintenance: for every UI (code) change, a test change is required as well (as both rely on the UI). It’s easier for those practicing the shift-left paradigm, where quality is inherent in the development cycle, as finding the cause is easier, but they suffer the same number of failures. For those with a QA team, it’s still catastrophic, as almost every morning starts with figuring out why tests failed and whether it’s a bug or merely a code change.



For end-to-end testing, some companies turn to crowdsourcing to alleviate the pain. This works great for regression tests which require no prior knowledge of the app; the only ramp-up is writing down the instructions in plain English. The big advantage is the low cost of people willing to click all day long. The problem arises when one tries to implement the shift-left paradigm, which requires testing not only the main branch but also dozens of feature branches. Not only does this make it less cost-effective, but developers are accustomed to getting feedback in seconds or minutes, not hours.
Imagine a developer promoting a hotfix for a critical bug, yet having to wait for someone on the other side of the world to wake up and test the new version.
Communication is also challenging, as the developer and tester have no experience working together.
As for exploratory testing, crowdsourcing is still great. It allows one to test different devices and different networks from all around the world. The rose in this garden is the seductive business model of paying only for proven bugs; its thorn is that every little bug gets reported, and the devs find themselves trying to prove that some of them are not real bugs. To date, crowdsourcing does not provide a sustainable solution when shifting to continuous delivery.


As mentioned in the previous post, the shift of code between backend and frontend has been a major transformation.

During the mainframe era, most code resided on the server, moving to the client when personal computers came into play. It shifted back to the servers with the web, and in the last five years we’re seeing code shift back to the client side as Single Page Applications (SPAs) become more popular (and I assume most of you have already heard of Angular or React or Ember or Vue, and the list grows bigger every day).

For back-end testing, most find API testing sufficient, as it covers most of the server code with high fidelity. Unit tests weren’t popular until recent years.

As for front-end testing, historically little code resided on the client side, and API (backend) testing covered most of the app’s code. Many companies relied on manual testing. Automation was expensive for five main reasons:

  1. Highly trained people – test automation engineers are developers in every way: they write complicated code, spin VMs and browsers up and down, and deal with large scale when it comes to performance testing.
  2. Finding great developers is hard. Finding great developers who want to write test automation for a living has about the same odds as Shaquille O’Neal sinking ten consecutive free throws. 40% of a test automation engineer’s time is spent on authoring tests (1).
  3. High maintenance – In the agile world, UI changes are frequent. This means that changing the UI requires changing the UI tests as well. 30% of a test automation engineer’s time is spent on maintaining tests (1).
  4. Long ramp-up – it usually takes weeks or months until you have a dozen tests.
  5. Few environments – OSes and browsers were not as fragmented as they are today. Windows ruled the desktop world, and IE had a 95% market share.


Many platforms tried to alleviate slow authoring and the required dev skills by recording the user’s behavior and creating a test, with the ability to play it back again and again. Unfortunately, those tests required massive maintenance, making the total ROI even lower than with dedicated developers.

Visual Validation – an image is worth 1000 validations

UI testing, as opposed to API testing, covers more than functional correctness (the data shown); it also includes visual validation, as visual bugs are not merely cosmetic: they impact usability and user experience. Since UI validations were usually associated with E2E testing, most attempts took screenshots of the entire page as the validation (as opposed to a small component). Those attempts suffered from the following challenges:

  1. Stability – Display adapters are non-deterministic by nature, and rendering the same page twice might result in different images. The diffs stem mostly from anti-aliasing and sub-pixel shifts. The human eye won’t notice, but a naive pixel comparison might report a 5% difference.
  2. Maintenance – Changing a single component that appears across pages (e.g. a logo) can mean that hundreds of baseline images are now incorrect and have to be reviewed.
  3. Storage – Storing huge volumes of large screenshots in a lossless format is heavy for any version control repository, especially the widely adopted Git.
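To illustrate the stability point, here’s a hedged sketch of tolerance-based image comparison (the function name and thresholds are made up for illustration): rather than demanding pixel-perfect equality, it ignores small per-channel deltas caused by anti-aliasing and sub-pixel shifts, and reports only the fraction of channels that truly changed.

```javascript
// Sketch: compare two images as flat arrays of channel values (0-255).
// Small per-channel deltas (anti-aliasing noise) are treated as identical;
// only differences beyond `channelTolerance` count toward the diff ratio.
function diffRatio(base, candidate, channelTolerance = 8) {
  if (base.length !== candidate.length) throw new Error('size mismatch');
  let changed = 0;
  for (let i = 0; i < base.length; i += 1) {
    if (Math.abs(base[i] - candidate[i]) > channelTolerance) changed += 1;
  }
  return changed / base.length;
}

const baseline = [255, 255, 255, 0, 0, 0, 120, 120];
const rendered = [252, 255, 250, 3, 0, 0, 200, 120]; // noise + one real change
console.log(diffRatio(baseline, rendered)); // 0.125 (1 of 8 channels changed)
```

A validation would then fail only when the ratio crosses an agreed limit, instead of flagging every re-render as a 5% difference.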

Visual validation thus remained scarce until recently.
We’ll mention some solutions in the next blog post – stay tuned!

[1] Source:


I’ve been a software developer for many years. I’ve seen two major transformations: one is the transition between frontend and backend code. The other is in release cycles and speed.

Frontend-backend code – Those who have worked many years in software development remember the eras of shifting code back and forth, from the server to the client and back. During the days of the mainframe, most code resided on the server, and it moved to the client with the emergence of personal computers. It shifted back to the servers with the transition to the web. In the last five years, we’re seeing code shift back to the client side as Single Page Applications (SPAs) become more popular (and I assume most of you have already heard of AngularJS or ReactJS or Ember.js or Vue.js, and the list grows bigger every day), as well as mobile.

Release cycles – In recent years we have seen a transition from waterfall development to agile. Waterfall cycles were long (e.g. Microsoft releasing a new version of Windows every 2 years), and QA teams would spend months manually testing the software. The short cycles in agile drive the need for automation and Test Driven Development (TDD). The road to automation is still bumpy so some organizations only test the backend via API, while most heavily rely on manual testing. Few claim to have sufficient coverage (both client and server) to release a version knowing it’s completely safe (AKA Continuous Deployment). Lack of proper coverage is risky given the renaissance of code shifting to the frontend and the focus on user experience.

The latter is the outcome of legacy tools for functional testing that haven’t taken advantage of technological evolution: improved computing power at a lower cost, cloud infrastructure & software as a service, real-time big data processing in microseconds, deep user analytics, and behavioral flows. Combining those technologies and thinking about quality in a different way can help us leapfrog to a future where tests are automatically created with every new feature. Reducing investment in testing yet improving quality and user experience is not a myth. If driving can be autonomous – so can tests. Over the next couple of blog posts I’ll describe the evolution of software development and quality assurance and why it leads to autonomous testing.
