
Thank you to everyone who participated in our round table discussion on The Future of Test Automation: How to prepare for it? We had a fantastic turnout with lots of solid questions from the audience. If you missed the live event, don’t worry…

You can watch the recorded session any time:

Alan Page, QA Director at Unity Technologies, and Oren Rubin, CEO of Testim, shared their thoughts on:
  • The current state of test automation
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • How to create your test automation destiny
In this session they also covered:
  • Tips and techniques for balancing end-to-end vs. unit testing
  • How testing is moving from the back end to the front end
  • How to overcome mobile and cloud testing challenges
  • Insights into how the roles of developers and testers are evolving
  • Skills you should start developing now to be ready for the future of testing

Some of the audience questions they answered:

  • How do we know what is the right amount of test coverage to live with a reasonable amount of risk?
  • What is the best way to get developers to do more of the testing?
  • How do you deal with dynamic data? Is the best practice to read a DB and compare the results to the front end?
  • Does test automation mark the end of manual testing as we know it?

There were several questions that we were not able to address during the live event, so I followed up with the panelists afterwards to get their answers.

Q: What is Alan’s idea of what an automated UI test should be?

As much as I rant about UI automation, I wrote some myself just a few weeks ago. The Unity Developer Dashboard provides quick access to a lot of Unity services for game developers. I wrote a few tests that walk through workflows and ensure that the cross-service integration is working correctly.

The important bit is that I wrote tests to find issues that could only be found with UI automation. If validation of the application can be done at any lower level, that’s where the test should be written.

Q: The team I work on builds complex machines with an Android UI and a separate backend. What layer would you suggest we concentrate more testing effort on?

I’d weight my testing heavily on the backend and push as much logic as possible out of the Android UI and into the backend, where I can test more, and test faster.

Q: Some legacy applications are really difficult to unit test. What are your suggestions for handling these kinds of applications?

Read Working Effectively with Legacy Code by Michael Feathers, and you’ll find that adding unit tests to legacy code isn’t as hard as you thought it was.
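To make that concrete, here is a minimal sketch of one technique from the book, a characterization test: before touching legacy code, you pin down what it currently does and protect that behavior. The legacy_pricing module, the calculate_invoice signature, and the asserted values are hypothetical.

```python
# A minimal characterization-test sketch (pytest-style). The legacy_pricing
# module, the calculate_invoice signature, and the asserted values are
# hypothetical examples, not a real codebase.
from legacy_pricing import calculate_invoice


def test_characterize_invoice_for_typical_order():
    # We are not asserting what the code *should* do; we record what it
    # *currently* does, so later refactoring can be verified against it.
    invoice = calculate_invoice(customer_id=42, items=[("widget", 3), ("gadget", 1)])

    assert invoice.total == 157.50           # value observed from the current code
    assert invoice.discount_applied is True  # current behaviour, right or wrong
```

Once a handful of characterization tests lock the current behavior in place, you can introduce seams and refactor toward proper unit tests with far less risk.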

Q: How do you implement modern testing to complement automation efforts?

My mantra in Modern Testing is to accelerate the achievement of shippable quality. As “modern” testers, we sometimes do that by writing automated tests, but more often we look at the system we use to make software (everything from the developer desktop all the way to deployment and beyond, like getting customer feedback) and look for ways we can optimize that system.

For example, as a modern tester, I make sure that we are running the right tools (e.g. static analysis) as part of the build process, and that we are taking unit testing seriously and finding all the bugs that can be found by unit tests during unit testing. I try to find things that make it easier for the developers I work with to create high-quality, high-value tests (e.g. wrappers for templates, or tools to help automate their workflow). I make sure we have reliable and efficient methods for getting feedback from our customers, and that we have a tight build-measure-learn loop based on that feedback.
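As one hedged illustration of the “right tools in the build” point, here is a minimal sketch of gating a build on static analysis. It assumes a Python codebase linted with flake8; the source path and complexity threshold are placeholders, and your stack may use a completely different analyzer.

```python
# Sketch: fail the build when static analysis reports issues.
# Assumes flake8 is installed; "src/" and the complexity threshold are placeholders.
import subprocess
import sys


def main() -> int:
    result = subprocess.run(["flake8", "src/", "--max-complexity", "10"])
    if result.returncode != 0:
        print("Static analysis found issues; failing the build.")
    return result.returncode


if __name__ == "__main__":
    sys.exit(main())
```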

Q: Alan Page, could you give an example of a test that would be better tested (validated) at a lower level (unit) as opposed to the UI level?

It would be easier to think of a test that would not be better validated at that level. Let’s say your application has a sign-in page. One could write UI automation to try different combinations of user names, email addresses, and passwords, but you could write tests faster (and run them massively faster) if you just wrote API tests to sign up users.

Of course, you’d still want to test the UI in this case, but I’d prefer to write a bunch of API tests to verify the system, and then exploratory test the UI to make sure it’s working well with the back end, has a proper look and feel, etc.
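To illustrate the kind of API-level tests that answer has in mind, here is a minimal sketch using pytest and requests. The /api/users endpoint, the payload shape, and the expected status codes are assumptions rather than a real API.

```python
# Sketch of API-level sign-up tests (pytest + requests). The endpoint,
# payload fields, and status codes are hypothetical.
import pytest
import requests

BASE_URL = "https://staging.example.com"  # assumed test environment


@pytest.mark.parametrize("username, email, password, expected_status", [
    ("alice", "alice@example.com", "S3cure!pass", 201),  # valid sign-up
    ("bob",   "not-an-email",      "S3cure!pass", 400),  # malformed email
    ("",      "carol@example.com", "S3cure!pass", 400),  # missing username
    ("dave",  "dave@example.com",  "123",         400),  # weak password
])
def test_sign_up(username, email, password, expected_status):
    response = requests.post(f"{BASE_URL}/api/users", json={
        "username": username,
        "email": email,
        "password": password,
    })
    assert response.status_code == expected_status
```

Dozens of combinations like these run in seconds at the API level, leaving the UI layer to a much smaller set of exploratory and look-and-feel checks.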

Q: How critical is it today for a QA person to be able to code? In other words, if you are a QA analyst with strong testing/automation skills but have not had much coding experience, what would be the best way to incorporate some coding into his or her profile? Where would you start?

Technology is evolving at a rapid pace, and the same applies to tools and programming languages. That being said, it would be good for testers to know the basics of some programming language in order to keep up with this pace. I would not say this is critical, but it is definitely good to have, and with so many online resources available it is easier than ever for testers to gain technical knowledge.

Some of the best ways I know of to incorporate coding into your profile are:

  • Online tutorials and courses (Udemy, Coursera, YouTube videos)
  • Pairing with developers while they are programming. You can ask them basic questions about how things work as they code; this is a nice way to learn
  • Attending code reviews to gain some insight into how the programming language works
  • Reading solutions to different problems on Stack Overflow and other forums
  • Volunteering to implement a simple feature in your system/tool/project by pairing with another developer
  • Organizing or attending meetups and lunch ’n’ learns focused on a particular programming language or topic
  • Choosing a mentor who can guide you and give you weekly assignments to complete. Set clear goals and deadlines for deliverables

Q: My developers really like reusing Cucumber steps, but I couldn’t make them write these steps. The adoption problem is getting the budget reallocated. Any advice for what I should do?

Reusing Cucumber steps is not necessarily a bad thing. It could mean that the steps you have written are really good and people can use them for other scenarios. In fact, this is a good thing in BDD (Behavior Driven Development) and makes automating these steps easier.

But if developers are being lazy and reusing steps that do not make sense in a scenario, then we have a problem. In that case, I would try to make the developers understand why a particular step may not make sense for a scenario and discuss how you would rewrite it. This continuous practice of spot feedback helps instill the habit of writing good Cucumber steps. I would also raise the point in retrospectives and team meetings and discuss it with the entire team. This helps the whole team come to a common understanding of the expectations.

In terms of budget reallocation, I would talk to your business folks and project manager about the value of writing Cucumber steps: how it brings clarity to requirements, helps catch defects early, and saves the time and effort that would otherwise be spent reworking stories because of unclear requirements and expectations for a feature.
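For context, this is the kind of parameterized, reusable step the answer has in mind. Cucumber step definitions are typically written in Java or JavaScript; the sketch below uses Python’s behave library, which follows the same Gherkin conventions, and the pages, selectors, and context attributes are assumptions.

```python
# Sketch of reusable, parameterized step definitions using Python's behave.
# context.browser (a Selenium WebDriver) and context.base_url are assumed to be
# set up in environment.py; the pages and selectors are hypothetical.
from behave import given, when, then
from selenium.webdriver.common.by import By


@given('the user is on the "{page}" page')
def step_open_page(context, page):
    # One navigation step serves every scenario that needs to open a page.
    context.browser.get(f"{context.base_url}/{page}")


@when('the user submits the "{form_name}" form')
def step_submit_form(context, form_name):
    selector = f'form[name="{form_name}"] [type="submit"]'
    context.browser.find_element(By.CSS_SELECTOR, selector).click()


@then('the message "{message}" is shown')
def step_check_message(context, message):
    assert message in context.browser.page_source
```

Steps written this way are genuinely worth reusing, which also strengthens the budget conversation: each new scenario becomes mostly plain Gherkin instead of new automation code.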

Q: Can we quickly Capture Baseline Images using AI?

What exactly do you want the AI part to do? Right now AI isn’t really needed for that: there are tools (e.g. Applitools and Percy.io) which can create a baseline very fast. I would expect AI to help in the future with setting the regions that must be ignored (e.g. a field showing today’s date). The closest thing I know of is Applitools’ layout comparison, which looks at and compares the layout of a page rather than the exact pixels, so the text can differ and the number of lines can change while it still counts as a match.

Q: What are your thoughts on automatic/live static code analysis?

Code analysis is great! It can help prevent bugs and adds to code consistency inside the organization. The important thing to remember is that it never replaces functional testing; it’s merely another (orthogonal) layer that also helps.

Q: When we say “automated acceptance tests”, do we mean automated REST API tests? Which automation tool is good to learn?

No. They usually mean E2E (functional) tests, though acceptance tests should include anything related to approving a new release; in some cases this includes load/stress testing and even security testing.

Regarding good tools: for functional testing, I’m very biased toward Testim.io, but many prefer to code and choose Selenium or its mobile counterpart Appium (though Espresso and EarlGrey are catching up in popularity).

For API testing there are too many to list, from the giant HP (StormRunner) to medium-sized BlazeMeter, to small and cool solutions like APIfortress, Postman, Loadmill, and of course Testim.io.

Q: Why not call them full-stack tests instead of E2E?

Mostly because E2E is used more often in the industry. I actually prefer to use Google’s naming convention and just call tests small, medium, or large; full-stack / end-to-end tests fall into the large category.

According to the 2017 Test Benchmark Report, survey respondents want to achieve 50%-75% test automation.
Join this round-table discussion with Alan Page, QA Director at Unity Technologies, and Oren Rubin, CEO of Testim, as they discuss what companies need to start doing now to achieve their five-year testing plans.
Date: Tuesday, February 27
Time: 9:00am PT
In this session they will cover:
  • The current state of test automation
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • How to create your test automation destiny

RESERVE YOUR SEAT to:

  • Get tips and techniques for balancing end-to-end vs. unit testing
  • See how testing is moving from the back end to the front end
  • Learn how to overcome mobile and cloud testing challenges
  • Gain insights into how the roles of developers and testers are evolving
  • Discover the skills you should start developing now to be ready for the future of testing
  • Ask the panelists your own questions live

2017 has been a phenomenal year for Testim, full of exponential growth and product enhancements.

We are proud that so many teams across the globe are using our products to support their software quality initiatives. We want to thank our customers for helping make 2017 a huge success! As the leading provider of autonomous testing for Agile teams, we would like to recap some of our shared accomplishments.

100+ Customers Worldwide – Earlier this year, we earned the trust of our 100th customer. Our customers span a dozen countries, executing more than 1M automated tests per month. We are thrilled to support their CI/CD efforts: reducing risk, shortening time to market, and releasing higher-quality software.

Lightspeed Venture Partners Invests $5.6 Million in Testim – Lightspeed, a Silicon Valley-based early-stage venture capital firm focused on accelerating disruptive innovations and trends in the enterprise and consumer sectors, invested in Testim this past year. This is Testim’s second round of funding in 12 months. The funds will support Testim’s mission to help engineering teams make application testing autonomous and an integral part of their agile development cycle.

Testim Delivers Many Highly Requested Features – Our customers are the ones that make the Testim community so special and inspire us to innovate. Some of the year’s most requested customer features include:

We want to express our utmost appreciation and gratitude to our customers, partners and peers in the industry for their continued support. We are thrilled to welcome 2018 and look forward to even more shared success together. Keep your eye out for mobile native, company structure, advanced reporting and much, much more.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media, but we wanted to provide a monthly recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with Testim.

Test Reruns

What is it?

An easy and fast way to rerun a test with the exact parameters used in another run.

Why should I care?

When running large suites, you often want to rerun the smaller subset of tests that failed. Since tests fail for a number of reasons, this new capability lets you check for temporary errors or verify fixes. Reruns let you run a test with the same parameters as a previous run at the click of a button. This includes all dynamic parameters, from the CLI, Test Data, and parameters exported from tests that ran before this one (via exportsGlobal). Learn more

test suite rerun

To Reuse or not to Reuse

What is it?

Reusing actions is one of the basic principles of programming. Testim has always supported reuse, and now we are pushing toward even more developer best practices. For every new group, API call, or custom code step that you create, Testim will prompt you for a (meaningful) name and ask whether you would like to make that step shareable.

Why should I care?

This feature will save you time since you can author a step once and call it from any suite of tests. It also makes it faster to author tests when you need to change a shared group but do not want the change to affect other tests in the suite. You can find more information on Reuse in our docs.

test reuse

Mobile Web

What is it?
Testim now supports the authoring and execution of tests for mobile web.

Why should I care?
The number of mobile devices has surpassed the number of desktop computers. We all consume content through our mobile devices, and users expect your application to provide a superior experience regardless of the medium. Responsive websites look and behave differently than their desktop counterparts and hence require different tests. Learn more
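As a rough illustration of why the mobile run needs its own checks (outside of Testim, and purely as an assumption-laden sketch), the same page can be exercised under Chrome’s mobile emulation with Selenium; the URL, viewport metrics, and the hamburger-menu selector below are hypothetical.

```python
# Sketch: exercising a responsive page under Chrome's mobile emulation.
# The URL, viewport metrics, and the element checked are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

mobile_emulation = {
    "deviceMetrics": {"width": 390, "height": 844, "pixelRatio": 3.0},
    "userAgent": "Mozilla/5.0 (iPhone; CPU iPhone OS 15_0 like Mac OS X) "
                 "AppleWebKit/605.1.15 (KHTML, like Gecko) Mobile/15E148",
}

options = webdriver.ChromeOptions()
options.add_experimental_option("mobileEmulation", mobile_emulation)

driver = webdriver.Chrome(options=options)
try:
    driver.get("https://staging.example.com")
    # Responsive layouts often swap the desktop nav for a hamburger menu,
    # so the mobile run asserts on different elements than the desktop run.
    assert driver.find_element(By.CSS_SELECTOR, ".hamburger-menu").is_displayed()
finally:
    driver.quit()
```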

mobile web

Customers have access to these features now. Check them out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear your feedback on the new features; please share your thoughts on Twitter or Facebook.
