One of the most important factors in automated testing is maintenance. Often, more effort is spent maintaining tests than writing them; a recent study suggested that about 30% of testers' time is spent on maintenance. This wastes valuable time and effort that could instead be spent on testing the actual application.
Imagine a world where software could maintain tests without human interaction. That world has become a reality with Testim.io. We use Artificial Intelligence (AI) under the hood to provide self-healing maintenance, i.e., problems are detected by the AI and fixed automatically, without human intervention.
Testim.io also helps speed up the maintenance of tests by providing the following features within our platform:
At any given time, it is important to have a log of what changes were made to a particular test. This way, we can always revert to an older version of a test when required. Our platform provides this functionality: the full version history is shown by going to the Properties panel of the setup step and clicking on “See old revisions”.
At Testim.io, we firmly believe in the “Shift Left” paradigm, where development and testing must start in parallel as early as possible in the software development lifecycle. Keeping this in mind, we provide the functionality for each team member to create a separate branch while working on the same projects and tests. This way, no one can overwrite another team member's changes, and teams can work on the same code base at any instant of time.
In our platform, we just need to select “Fork” to create a new branch, and we can also switch between existing branches.
Users have the option of scheduling their tests. This helps run the tests automatically at a certain day and time without any manual intervention. We can also get notified via email in case of any errors.
As testers, we spend a considerable amount of time troubleshooting issues. To help with troubleshooting, our platform offers different options to narrow down the scope of the problem. These options are as follows:
The screenshot feature explained in the “Authoring and Execution” section lets users compare the baseline image with the actual image found during the run.
The properties panel captures error messages and displays them to the user. The user also has the option of interacting with the DOM and seeing what objects were extracted during the run.
Logs are a rich source of information on what happened underneath the UI. We provide test logs when the user runs tests on our grid or a 3rd-party grid. The option can be found in the top section of the editor.
One of the most time-consuming aspects of testing comes after finding a bug: reporting it to the developer with the relevant information needed to speed up troubleshooting and fixing.
With Testim.io you can do this with a single click using our Chrome extension. All the details related to the bug are automatically generated for you.
We put in a lot of effort to document most of the features of the tool in our User Documentation found under the “Educate” tab.
We also have detailed videos on how to troubleshoot your tests quickly
Artificial Intelligence (AI) and machine learning (ML) are advancing at a rapid pace. Companies like Apple, Tesla, Google, Amazon, Facebook and others have started investing more in AI to solve technological problems in areas such as healthcare, autonomous cars, search engines, predictive modeling and much more. Applied AI is real. It’s coming fast. It’s going to affect every business, no matter how big or small. This being the case, how are we as testers going to adapt to this change and embrace AI? Here is a summary of the different things you need to know about using AI in software testing.
Let’s summarize how the testing practice has evolved over the last four decades:
In the 1980s, the majority of software development followed the waterfall model, and testing was manual
In the 1990s, we had bulky automation tools that were super expensive, unstable, and had really primitive functionality. During the same time, different development approaches were being experimented with, such as Scrum, XP, and RAD (Rapid Application Development)
From 2000, the era of open source frameworks began:
People wanted to share their knowledge with the community
Communities of like-minded people started encouraging innovation and helping each other improve testing
Agile became a big thing, XP, Scrum, Kanban became a standard process in the SDLC
There was a need for faster release cycles, as people wanted more software features delivered faster
In the 2010s, it was all about scale: writing tests fast and finding bugs faster
People were encouraged to give feedback on applications, through both free and paid services
Cloud testing started
People started realizing they needed more
They started to realize the problem of maintenance, and how expensive it was to buy the hardware and software needed to maintain tests
Then we have the future, which I believe will be about autonomous testing using machine learning and AI
Basics of AI
Let’s start by demystifying some of the terminology related to AI:
Artificial Intelligence (AI) is an area of computer science that emphasizes the creation of intelligent machines that work and react like humans.
Machine Learning (ML) evolved from the study of pattern recognition and computational learning theory (the design and analysis of ML algorithms) in AI. It is a field of study that gives computers the ability to learn without being explicitly programmed.
Deep Learning (DL) is one of many approaches to ML. Other approaches include decision tree learning, inductive logic programming, clustering, and Bayesian networks. DL is based on artificial neural networks, inspired by the neurons in the human brain: each neuron keeps learning and interconnects with other neurons to perform different actions based on different responses.
There are three widely used types of ML algorithms:
Supervised learning – We give the algorithm the right training data (input/output combinations) to learn from. Examples include:
Given a bunch of emails, identify the spam ones
Extracting text from audio
Predicting the probability of a user repaying a loan from their loan application
Learning user behavior to predict which ads they will click on
Recommendation engines on Amazon and Netflix, where customers are recommended products and movies
Amazon uses AI for logistics
Unsupervised learning – We give the algorithm a bunch of unlabeled data and see what patterns it can find. For example:
Taking a single image and creating a 3D model
Reinforcement learning – Based on the concept of a reward function: good and bad behavior is rewarded or penalized, and the algorithm learns from it, e.g., training a dog
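To make the supervised case concrete, here is a minimal sketch of the idea: a toy spam classifier "trained" on labeled examples by counting word frequencies per label. The training data and the scoring rule are purely illustrative, not a production algorithm.

```javascript
// Labeled training data: input (email text) / output (spam or ham).
const trainingData = [
  { text: "win a free prize now", label: "spam" },
  { text: "claim your free money", label: "spam" },
  { text: "meeting agenda for monday", label: "ham" },
  { text: "project status and next steps", label: "ham" },
];

// "Training": count how often each word appears under each label.
function train(examples) {
  const counts = { spam: {}, ham: {} };
  for (const { text, label } of examples) {
    for (const word of text.split(/\s+/)) {
      counts[label][word] = (counts[label][word] || 0) + 1;
    }
  }
  return counts;
}

// "Prediction": score a new email against the learned word counts.
function classify(model, text) {
  let spamScore = 0, hamScore = 0;
  for (const word of text.split(/\s+/)) {
    spamScore += model.spam[word] || 0;
    hamScore += model.ham[word] || 0;
  }
  return spamScore > hamScore ? "spam" : "ham";
}

const model = train(trainingData);
console.log(classify(model, "free prize money"));       // "spam"
console.log(classify(model, "monday project meeting")); // "ham"
```

Real supervised learners (naive Bayes, logistic regression, neural networks) follow the same shape: learn parameters from labeled input/output pairs, then apply them to unseen inputs.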
Here are some real-life AI applications where you can visually see how this works:
Weka is an open source project that uses ML algorithms for data mining.
What challenges can AI solve?
Let’s discuss the challenges the industry faced while transitioning to agile, and what still remains a challenge:
How can we use AI to solve testing problems?
There are many companies taking multiple approaches to solve different problems related to software testing and automation. Testim.io is one such company.
Testim.io uses Dynamic Locators: the Artificial Intelligence (AI) underneath the platform analyzes, in real time, all the DOM objects of a page and extracts the objects and their properties. The AI then decides the best location strategy for a particular element based on this analysis. Because of this, even if a developer changes an attribute of an element, the test continues to run, which leads to more stable tests. As a result, the authoring and execution of automated tests are much faster and more stable.
One of the good practices of writing automated tests is creating reusable components that can be used in different parts of our test suite.
Why is this important?
Creating reusable components is important because it
Helps to increase the readability of the automated tests
Saves effort by not repeating the same set of steps in different parts of the tests
Means any change to a reusable step needs to be made in only one place and is reflected throughout the tests, across different projects
Makes the automated tests more extensible
Testim.io ensures reusability through “Grouping” and “Parameterization”.
Any number of related steps can be grouped into one reusable component.
For example, the “Login” scenario is one of the most commonly used sets of steps in any application. To create a reusable “Login” step, select the steps we want to group together and click on “Add new Group” as shown below.
Our platform gives the option of testing the application with various input combinations via parameterization.
This can be achieved in various ways. One way is to provide all the input parameters we need to test the application in the form of a JSON file in the Setup step (the first step of our tests), as shown below.
Then add the variable names used in the JSON file to the appropriate fields of the step, as shown below.
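As a hypothetical illustration, such a parameter file for a login-and-search flow might look like the following (the field names here are made up for the example; use whatever names your own steps reference):

```json
{
  "username": "test.user@example.com",
  "password": "s3cret-password",
  "destination": "New York"
}
```

Each key becomes a variable that can then be referenced from the corresponding fields of the recorded steps.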
Another important aspect of automation is building your tests such that they are extensible.
Why is this important?
As the product and teams grow, there will be a need to test more complex functionality, which requires building upon already existing tests. This being the case, automation suites need to be simple and understandable, and it should be easy to add more tests to existing suites with low coupling and high cohesion.
For example, say we want to validate the “Select Destination” button from our previous examples. The way to do this would be:
Click on “Add custom action”
Give a name to the New Step and click on “Confirm”
Click on “PARAMS” and Select “HTML” for this example
Add Custom Code
The new step with Custom Code gets added to the list of already existing steps
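As a sketch of the kind of validation logic such a custom step could run (the function and element names here are illustrative, not Testim's exact API; in the platform the step would receive the real DOM node, while this sketch uses a plain mock object so the logic can be shown on its own):

```javascript
// Hypothetical validation a custom step could perform on the
// "Select Destination" button passed in as a parameter.
function validateSelectDestinationButton(element) {
  if (element.disabled) {
    throw new Error('"Select Destination" button is disabled');
  }
  if (element.textContent.trim() !== "Select Destination") {
    throw new Error(`Unexpected button text: "${element.textContent}"`);
  }
  return true;
}

// Demonstration with a mock element standing in for the real DOM node.
const mockButton = { disabled: false, textContent: " Select Destination " };
console.log(validateSelectDestinationButton(mockButton)); // true
```

Throwing an error on a failed check is a common convention for failing a custom step, since an uncaught exception marks the step as failed.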
The above features help to make the automation suite more reusable and extensible.
Authoring and execution of tests is an important aspect of test automation. Tests should be simple to write, understand, and execute across projects. The automation framework or tool chosen should give the flexibility to record and play back tests, as well as write custom code to extend the functionalities of the framework.
This is where Testim.io can help you out. We follow a Hybrid Approach and make authoring and execution of tests really simple in such a way that both technical and non-technical members can collaborate and start writing automated tests quickly. This is achieved with the use of “Dynamic Locators”.
What are Dynamic Locators?
The Artificial Intelligence (AI) underneath the platform analyzes, in real time, all the DOM objects of a page and extracts the objects and their properties. The AI then decides the best location strategy for a particular element based on this analysis. Because of this, even if a developer changes an attribute of an element, the test continues to run, which leads to more stable tests. As a result, the authoring and execution of automated tests are much faster and more stable.
As we can see in the image above, the AI parses all the DOM objects and lists them in the Properties Panel, along with rankings for every location strategy for that particular element. In this way, even if an attribute of an element changes, the AI can use a different location strategy from the already parsed list of DOM objects.
Thus, the user does not have to worry about flaky tests.
Some of the basic authoring and execution features Testim.io provides to its customers, are explained below.
How to create a test
We create a new test by clicking on “Create New” or “New Test”.
Recording and Saving a test
Once we click the “Record” button, we can record different user actions in our application. After recording, click the “Stop Recording” button to finish, then use the “Save” button to save the test.
Validations and Assertions
Our platform makes validating different attributes of elements and APIs really simple. We provide various options for users, such as:
Validate element visibility
Validate element text
Pixel level validation
API level validation
While each test is recorded, the platform takes screenshots of the passed and failed results of each and every step. As a result, users find it easier to troubleshoot problems and understand what happens under the hood.
Feedback on each step
The user also gets feedback on whether each step passed or failed, via a green or red icon in the top left portion of each step, as shown below.
Testim.io provides the ability to label each and every test a user creates. There are two reasons why we may want to label a test:
Helps to identify the reason the test was created in the first place
Helps to run all tests with the same label at once through our CLI feature
We create labels by clicking on the “Label” button and either selecting an existing label or creating a new one.
At Testim.io, we have taken the effort to provide users with all the documentation they will need to use the different features of our platform. Most answers about using our platform can be found by clicking on the “Educate” tab and visiting our documentation site, as shown below.
With the above features, Testim.io makes the authoring and execution of tests really fast and simple for our users. Within a matter of seconds, a user can record, replay, and save a test. This is, surprisingly, one of the most overlooked aspects of test automation, and our platform takes care of it for our users.
Engagio is a two-year-old marketing software startup that helps marketers land and expand their target accounts. The company is rapidly growing, with more than 150 customers and 45 employees. Based in San Mateo, Engagio was founded by Jon Miller, who was previously the co-founder of Marketo.
Today, I had the pleasure of speaking with Helge Scheil, VP of Engineering for Engagio, who shared why he selected Testim, his overall development philosophy, and the results his team was able to achieve. Check out the series of videos or read the Q&A from our conversation below.
Q: Can you tell us a little about what your software does?
A: We help B2B marketers drive new business and expand relationships with high-value accounts. Our marketing orchestration software helps marketers create and measure engagement in one tool, drive ongoing success, and measure impact easily. Engagio orchestrates integrated account-based programs, providing the scale benefits of automation with the personalization benefits of the human touch. Our software complements Salesforce and existing marketing automation solutions.
Q: What does your development process look like?
A: Our developers work in 2-week sprints, developing features rapidly and deploying to production daily without any production downtime. We’re running on AWS and have the entire develop-test-build-deploy process fully automated. Each developer can deploy at any time, assuming all quality criteria have been met.
Q: What tools do you use to support your development efforts?
A: We are using Atlassian JIRA and Confluence for product strategy, roadmaps, requirements management, work breakdown, sprint planning and release management. We’re using a combination of Codeship, Python (buildbot), Docker, Slack, JUnit and Testim for our continuous build/test/deploy. We have integrated Testim into our deployment bot (which is integrated into Slack).
Q: Prior to Testim, how were you testing your software? What were some of the challenges?
A: Our backend had great coverage with unit and integration tests, including the API level. On the front-end we had very little coverage and the small amount we had was written in Selenium, which was very time-consuming with little fault-tolerance and many “flaky” failures.
Q: What were some things you tried to do to solve these challenges?
A: We were trying to simply allocate more time for Selenium tests to engineers. We considered hiring automation engineers but weren’t too fond of the idea because we believe in engineers being responsible for quality out of the gate, including the regression tests creation and maintenance.
Q: Were there other solutions you evaluated? Why did you select Testim?
A: We didn’t really consider any other solutions after we started evaluating Testim, which was our first vendor to evaluate. Despite some skepticism around the “record and play” concept, we found that Testim’s tolerance (meaning “tests don’t fail”) to UI and feature changes is much greater than we had expected. Other solutions that we considered rely on pixels/images/coordinates, which are inherently sensitive and non-tolerant to changes. We also found that non-engineers (e.g. product managers, functional QA engineers) can write tests, which is unlike Selenium.
Q: After selecting Testim, can you walk me through your getting started experience?
A: During the first week of implementation, the team got together to learn the tool and started creating guinea pig tests, which evolved into much deeper tests with more intricate feature testing. Those tests ended up in our regression test suite, which is run nightly. Rather than allocating two days per sprint, we decided to crank up the coverage with a one-day blitz.
Q: After using Testim, what were some of the benefits you experienced?
A: We were able to increase our coverage by 4-5x within 6 weeks of using Testim. We can write more tests in less time and maintenance is not as time consuming. We integrated Testim via its CLI and made “run test <label>” commands available in our “deployment” Slack channel as well as a newly created “regression_test” channel. Any deployment that involves our web app now automatically runs our smoke tests. In addition to that we run nightly full regression tests. Running four cloud/grid Testim VMs in parallel we’re able to run our full regression test suite in roughly 10 minutes.
Q: How was your experience working with Testim’s support team?
A: Testim’s responsiveness was extraordinary. We found ourselves asking questions late in the evenings and over the weekends, and the team was there to help us familiarize ourselves with the product. If a question wasn’t answered immediately in the middle of the night, we would have the answer by the time we got started the next day.
Thank you to everyone who participated in our round table discussion on The Future of Test Automation: How to prepare for it? We had a fantastic turnout with lots of solid questions from the audience. If you missed the live event, don’t worry…
You can watch the recorded session any time:
Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim shared their thoughts on:
The current state of test automation
Today’s test automation challenges
Trends that are shaping the future
The future of test automation
How to create your test automation destiny
In this session they also covered:
Tips and techniques for balancing end to end vs. unit testing
How testing is moving from the back end to the front end
How to overcome mobile and cloud testing challenges
Insights into how the roles of developers and testers are evolving
Skills you should start developing now to be ready for the future of testing
Some of the audience questions they answered:
How do we know what is the right amount of test coverage to live with a reasonable amount of risk?
What is the best way to get developers to do more of the testing?
How do you deal with dynamic data, is the best practice to read a DB and compare the results to the front end?
Does test automation mark the end of manual testing as we know it?
There were several questions that we were not able to address during the live event, so I followed up with the panelists afterwards to get their answers.
Q: What is Alan’s idea of what an automated UI test should be?
As much as I rant about UI automation, I wrote some myself a few weeks ago. The Unity Developer Dashboard provides quick access to a lot of Unity services for game developers. I wrote a few tests that walk through workflows and ensure that the cross-service integration is working correctly.
The important bit is that I wrote tests to find issues that could only be found with UI automation. If validation of the application can be done at any lower level, that’s where the test should be written.
Q: The team I work on builds complex machines with an Android UI and a separate backend. Which layer would you suggest concentrating more testing effort on?
I’d weight my testing heavily on the backend and push as much logic as possible out of the Android UI and into the backend, where I can test more, and test faster.
Q: Some legacy applications are really difficult to unit test. What are your suggestions in handling these kind of applications?
Q: How do you implement modern testing to complement automation efforts?
My mantra in Modern Testing is: Accelerate the Achievement of Shippable Quality. As “modern” testers, we sometimes do that by writing automated tests, but more often we look at the system we use to make software – everything from the developer desktop all the way to deployment and beyond (like getting customer feedback) – and look for ways we can optimize that system.
For example, as a modern tester, I make sure that we are running the right tools (e.g. static analysis) as part of the build process, that we are taking unit testing seriously and finding all the bugs that can be found by unit tests during unit testing. I try to find things to make it easier for the developers I work with to create high quality and high value tests (e.g. wrappers for templates, or tools to help automate their workflow). I make sure we have reliable and efficient methods for getting feedback from our customers, and that we have a tight loop of build-measure-learn based on that feedback.
Q: Alan Page could you give an example of a test that would be better tested (validated) at a lower level (unit) as opposed to UI level?
It would be easier to think of a test that would not be better validated at that level. Let’s say your application has a sign-in page. One could write UI automation to try different combinations of user names, email addresses, and passwords, but you could write tests faster (and run them massively faster) if you just wrote API tests to sign up users. Of course, you’d still want to test the UI in this case, but I’d prefer to write a bunch of API tests to verify the system, and then exploratory test the UI to make sure it’s working well with the back end, has a proper look and feel, etc.
Q: How critical is today for a QA person to be able to code? In other words, if you are a QA analyst with strong testing/automation skills, but really have not had much coding experience, what would be the best way to incorporate some coding into his or her profile? Where would you start?
Technology is evolving at a rapid pace, and the same applies to tools and programming languages. That being said, it would be good for testers to know the basics of some programming language in order to keep up with this pace. I would not say this is critical, but it is definitely good to have, and with so many online resources available, it is easier than ever for testers to gain technical knowledge.
Some of the best ways I know to incorporate coding into your profile are:
Online Tutorials and courses (Udemy, Coursera, Youtube videos)
Pairing with developers while they are programming. You can ask them basic questions about how things work as they code; this is a nice way to learn
Attending code reviews helps to gain some insight into how the programming language works
Reading solutions to different problems on Stack Overflow and other forums
Volunteering to implement a simple feature in your system/tool/project by pairing with another developer
Organizing/Attending meetups and lunch ‘n’ learns focused on a particular programming language and topic
Choosing a mentor who can guide you and give you weekly assignments to complete, with clear goals and deadlines for deliverables
Q: My developers really like reusing cucumber steps, but I couldn’t make them write these steps. The adoption problem is getting the budget reallocated. Any advice for what I should do?
Reusing cucumber steps may not be necessarily a bad thing. It could also mean that the steps you may have written are really good and people can use them for other scenarios. In fact, this is a good thing in BDD (Behavior Driven Development) and helps in easier automation of these steps.
But if developers are being lazy and reusing steps that do not make sense in a scenario, then we have a problem. In that case, I would try to make the developers understand why a particular step may not make sense for a scenario and discuss how to rewrite it. This continuous practice of spot feedback helps instill the habit of writing good cucumber steps. I would also raise the point in retrospectives and team meetings and discuss it with the entire team, to reach a common understanding of expectations.
In terms of budget reallocation, I would talk to your business folks and project manager about the value of writing cucumber steps: how they bring clarity to requirements, help catch defects early, and save a lot of time and effort that would otherwise be spent on rework of stories due to unclear requirements and expectations for a feature.
Q: Can we quickly Capture Baseline Images using AI?
What exactly do you want the AI part to do? Currently it’s not there, but there are tools (e.g. Applitools and Percy.io) which can create a baseline very fast. I would expect AI to help in the future with setting the regions that must be ignored (e.g. a field showing today’s date). The closest thing I know of is Applitools’ layout comparison (looking at and comparing the layout of a page rather than the exact pixels, so the text can differ and the number of lines can change, but it still counts as a match).
Q: What are your thoughts on Automatic/live static code analysis?
Code analysis is great! It can help prevent bugs and adds to code consistency inside the organization. The important thing to remember is that it never replaces functional testing; it’s merely another (orthogonal) layer that also helps.
Q: When we say ‘automated acceptance tests’, do they mean REST API automated tests? Which automation tool is good to learn?
No. They usually mean E2E (functional) tests, though acceptance tests should include anything related to approving a new release; in some cases this includes load/stress testing and even security testing.
Regarding good tools: for functional testing, I’m very biased toward Testim.io, but many prefer to code and choose Selenium or its mobile counterpart Appium (though Espresso and EarlGrey are catching on in popularity).
For API testing, there are too many to list, from giant HP (StormRunner), to medium-sized BlazeMeter, to small and cool solutions like API Fortress, Postman, Loadmill, and of course Testim.io.
Q: Why not call it full-stack tests instead of e2e?
Mostly because e2e is used more often in the industry – but I actually prefer Google's naming conventions and just call tests small, medium, or large. Full-stack / end-to-end tests fall into the large category.
According to the 2017 Test Benchmark Report, survey respondents want to achieve 50%-75% test automation.
Join this round-table discussion with Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim as they discuss what companies need to start doing now to achieve their 5 year testing plans.
We work hard to improve the functionality and usability of our autonomous testing platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media, but wanted to provide a monthly recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with Testim.
What is it?
Do you need to validate data that appears in the app against data from your back-end? Do you need to extract data to use in your tests? Then we have the feature for you.
We provide two types of API steps:
API action – Used when you need to get data and use it for a calculation, or save it for later use in the test.
Validate API – Used to validate an element against the API result data.
Why should I care? This capability makes it easy to do UI and API testing simultaneously. There is no need to toggle between two different systems or integrate results. Author a UI test and automatically author an API test to ensure the application under test is working correctly. Read more
Debug Step Parameters
What is it?
We’ve learned that enriched debugging can be a huge time saver. Debug console errors and network errors automatically, and see the failed steps in the DOM. In each step, you can see all the parameters used during the run.
Why should I care? This feature is helpful when you want to debug your runs and need to figure out which parameters were used in each step. Learn more
Company and Project Structure
What is it?
Large enterprises need the ability to manage multiple projects and various groups inside their organization. This feature makes it easy for admins to grant individual users access to specific projects. Earlier this year, we added support for a company structure, with a company owner managing permissions and owners per project.
Why should I care? This gives you more flexibility in managing the different groups inside your company and allows control over who has access to which projects. Learn more
Customers have access to these features now. Check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.