
Do you ever ask yourself these questions?

  • How do I know if my quality is improving?
  • What did the team do this week and what was the impact?
  • Do we have enough test coverage in the sprint?

With Testim’s new Managerial Reports, you never have to worry about getting answers to these questions again. Get deep insight into project status and quality metrics that provide granular details on execution cycles, active runs, run durations, and success rates – all available online or sent weekly to your inbox.

[Image: Testim Test Automation Managerial Reports]

These reports, dashboards, and KPIs quickly summarize the effort the team invested over the course of the week, identify tests that require attention, and show whether additional effort is needed to improve your quality score. Easily track trends week over week and see how your quality coverage is improving.

[Image: Testim Test Automation Managerial Reports]

For the next two months, Testim is offering these reports free of charge. After that, additional licensing will be required to take advantage of these new insights. Sign in to Testim to see the new reports now, or contact your account manager with any questions.

According to the US Software Engineering Institute, 50% of defects are due to bad specifications.

Are you managing a software team or responsible for looking at their end-to-end delivery processes holistically?

Then join this round-table discussion with Sam Hatoum, CEO of Xolv.io, as he shares how he has implemented proper specifications on the projects he’s worked on to increase velocity by up to 35% and reduce defect rates to 2%.

Date: Tuesday, April 24
Time: 12:00pm PT
RESERVE MY SEAT

Why should you attend this session?

  • Get tips for balancing the tradeoff between velocity and quality
  • Learn how to smooth the transition between development phases while ensuring full alignment
  • See how your test automation strategy becomes a result of your upfront specification planning
  • Succeed in reducing defects before a single line of code is written
  • Minimize the handoffs between Engineering, Quality and Product
  • Capture some quick wins to implement quality practices that will influence cultural change, speed up release cycles, and improve your customer’s experience

Thank you to everyone who participated in our round table discussion on The Future of Test Automation: How to prepare for it? We had a fantastic turnout with lots of solid questions from the audience. If you missed the live event, don’t worry…

You can watch the recorded session any time:

Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim shared their thoughts on:
  • The current state of test automation
  • Today’s test automation challenges
  • Trends that are shaping the future
  • The future of test automation
  • How to create your test automation destiny
In this session they also covered:
  • Tips and techniques for balancing end-to-end vs. unit testing
  • How testing is moving from the back end to the front end
  • How to overcome mobile and cloud testing challenges
  • Insights into how the roles of developers and testers are evolving
  • Skills you should start developing now to be ready for the future of testing

Some of the audience questions they answered:

  • How do we know what is the right amount of test coverage to live with a reasonable amount of risk?
  • What is the best way to get developers to do more of the testing?
  • How do you deal with dynamic data, is the best practice to read a DB and compare the results to the front end?
  • Does test automation mark the end of manual testing as we know it?

There were several questions that we were not able to address during the live event, so I followed up with the panelists afterwards to get their answers.

Q: What is Alan’s idea of what an automated UI test should be?

As much as I rant about UI automation, I wrote some myself just a few weeks ago. The Unity Developer Dashboard provides quick access to a lot of Unity services for game developers. I wrote a few tests that walk through workflows and ensure that the cross-service integration is working correctly.

The important bit is that I wrote tests to find issues that could only be found with UI automation. If validation of the application can be done at any lower level, that’s where the test should be written.

Q: The team I work on builds complex machines with an Android UI and a separate backend. What layer would you suggest concentrating more testing effort on?

I’d weight my testing heavily on the backend and push as much logic as possible out of the Android UI and into the backend, where I can test more, and test faster.

Q: Some legacy applications are really difficult to unit test. What are your suggestions for handling these kinds of applications?

Read Working Effectively with Legacy Code by Michael Feathers, and you’ll find that adding unit tests to legacy code isn’t as hard as you thought it was.

Q: How do you implement modern testing to complement automation efforts?

My mantra in Modern Testing is “Accelerate the Achievement of Shippable Quality.” As “modern” testers, we sometimes do that by writing automated tests, but more often we look at the system we use to make software – everything from the developer desktop all the way to deployment and beyond (like getting customer feedback) – and look for ways we can optimize that system.

For example, as a modern tester, I make sure that we are running the right tools (e.g. static analysis) as part of the build process, and that we are taking unit testing seriously and finding all the bugs that can be found by unit tests during unit testing. I try to find things that make it easier for the developers I work with to create high-quality and high-value tests (e.g. wrappers for templates, or tools to help automate their workflow). I make sure we have reliable and efficient methods for getting feedback from our customers, and that we have a tight build-measure-learn loop based on that feedback.

Q: Alan Page could you give an example of a test that would be better tested (validated) at a lower level (unit) as opposed to UI level?

It would be easier to think of a test that would not be better validated at that level. Let’s say your application has a sign-in page. One could write UI automation to try different combinations of user names, email addresses, and passwords, but you could write tests faster (and run them massively faster) if you just wrote API tests to sign up users.

Of course, you’d still want to test the UI in this case, but I’d prefer to write a bunch of API tests to verify the system, and then exploratory test the UI to make sure it’s working well with the back end, has a proper look and feel, etc.
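
To make that concrete, here is a minimal sketch of the API-level approach, assuming a hypothetical /api/sign-in endpoint and using Python’s requests library with pytest; the URL, payload shape, and status codes are illustrative, not a real Testim or Unity API.

    # test_sign_in_api.py – exercises credential combinations at the API
    # level; each case takes milliseconds and needs no browser.
    import pytest
    import requests

    BASE_URL = "https://staging.example.com"  # hypothetical test environment

    @pytest.mark.parametrize("email, password, expected_status", [
        ("valid@example.com", "correct-password", 200),  # happy path
        ("valid@example.com", "wrong-password", 401),    # bad credentials
        ("not-an-email", "anything", 400),               # malformed input
        ("", "", 400),                                   # empty input
    ])
    def test_sign_in_combinations(email, password, expected_status):
        response = requests.post(
            f"{BASE_URL}/api/sign-in",
            json={"email": email, "password": password},
            timeout=10,
        )
        assert response.status_code == expected_status

Dozens of such combinations run in seconds, leaving the browser for the exploratory pass over look and feel that Alan describes.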

Q: How critical is it today for a QA person to be able to code? In other words, if you are a QA analyst with strong testing/automation skills but not much coding experience, what would be the best way to incorporate some coding into your profile? Where would you start?

Technology is evolving at a rapid pace, and the same applies to tools and programming languages. That being said, it would be good for testers to know the basics of some programming language in order to keep up with this pace. I would not say it is critical, but it is definitely good to have, and with so many online resources available, it is easier than ever for testers to gain technical knowledge.

Some of the best ways I know to incorporate coding into your profile are:

  • Online tutorials and courses (Udemy, Coursera, YouTube videos)
  • Pairing with developers while they are programming. You can ask them basic questions about how things work as they code. This is a nice way to learn
  • Attending code reviews to gain insight into how the programming language works
  • Reading solutions to different problems on Stack Overflow and other forums
  • Volunteering to implement a simple feature in your system/tool/project by pairing with another developer
  • Organizing/attending meetups and lunch ‘n’ learns focused on a particular programming language and topic
  • Choosing a mentor who can guide you and give you weekly assignments to complete. Set clear goals and deadlines for deliverables

Q: My developers really like reusing Cucumber steps, but I can’t get them to write new steps. The adoption problem is getting the budget reallocated. Any advice for what I should do?

Reusing Cucumber steps is not necessarily a bad thing. It could also mean that the steps you have written are really good and people can use them for other scenarios. In fact, this is a good thing in BDD (Behavior-Driven Development) and makes automating these steps easier.
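
As a rough illustration of healthy step reuse, here is a minimal sketch assuming the Python behave BDD framework; the step wording and the sign_in() helper are hypothetical.

    # steps/auth_steps.py – one parameterized step definition that many
    # scenarios can reuse, e.g.:
    #   Given I am signed in as "alice"
    #   Given I am signed in as "bob"
    from behave import given, then

    def sign_in(username):
        # Hypothetical helper, stubbed so the sketch is self-contained.
        return {"user": username, "authenticated": True}

    @given('I am signed in as "{username}"')
    def step_signed_in(context, username):
        context.session = sign_in(username)

    @then('my session should be authenticated')
    def step_session_is_authenticated(context):
        assert context.session["authenticated"]

Because the step is parameterized, new scenarios reuse it instead of duplicating near-identical steps.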

But if developers are lazily reusing steps that do not make sense in a scenario, then we have a problem. In this case, I would try to help the developers understand why a particular step may not make sense for a scenario and discuss how to rewrite it. This continuous practice of on-the-spot feedback helps instill the habit of writing good Cucumber steps. I would also raise the point in retrospectives and team meetings and discuss it with the entire team; this helps build a common understanding of the expectations.

In terms of budget reallocation, I would talk to your business folks and project manager about the value of writing Cucumber steps: it brings clarity to requirements, helps catch defects early, and saves a lot of time and effort that would otherwise be spent reworking stories due to unclear requirements and expectations for a feature.

Q: Can we quickly capture baseline images using AI?

What exactly do you want the AI part to do? Currently it’s not there yet, but there are tools (e.g. Applitools and Percy.io) which can create a baseline very fast. I would expect AI to help in the future with setting the regions that must be ignored (e.g. a field showing today’s date); the closest thing I know of is Applitools’ layout comparison (comparing the layout of a page rather than the exact pixels, so the text can differ and the number of lines can change, but it still counts as a match).

Q: What are your thoughts on Automatic/live static code analysis?

Code analysis is great! It can help prevent bugs and improve code consistency across the organization. The important thing to remember is that it never replaces functional testing; it’s merely another (orthogonal) layer that also helps.
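
For a concrete sense of what static analysis catches, here is a tiny illustration; the snippet and the choice of flake8 as the linter are assumptions, and any mainstream analyzer would flag the same issues.

    # pricing.py – a deliberately buggy snippet; a linter such as flake8
    # flags both problems before any functional test ever runs.
    import json  # flagged: 'json' imported but unused (F401)

    def total(prices):
        result = 0
        for p in prices:
            result += p
        return resutl  # flagged: undefined name 'resutl' (F821) – a typo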

Q: When we say “automated acceptance tests,” do we mean automated REST API tests? And which automation tool is good to learn?

No. They usually mean E2E (functional) tests, though acceptance tests should include anything related to approving a new release, and in some cases, this includes load/stress testing and even security testing.

Regarding good tools: for functional testing, I’m very biased toward Testim.io, but many prefer to code and choose Selenium or its mobile counterpart Appium (though Espresso and EarlGrey are catching up in popularity).

For API testing there are too many to count, from giant HP (StormRunner), to the medium-sized BlazeMeter, to small and cool solutions like API Fortress, Postman, Loadmill, and of course Testim.io.

Q: Why not call them full-stack tests instead of e2e?

Mostly because e2e is used more often in the industry – but I actually prefer to use the Google naming conventions and just call tests small, medium, or large. Full-stack / end-to-end tests fall into the large category.

Time is money. How many times have you heard that?

We are truly a “startup nation,” constantly racing against the clock to deliver multiple features and execute sprint content to meet our customers’ demands. This intense pace is an everyday reality. As a QA engineer who has worked with and in different organizations, I have experienced it up close and personal.

On one side there are owners and investors – they want to see growth. On the other side there are customers – they want features and capabilities that work. And then there is us, testers – we want to deliver quality.

But, how do we fit in this never-ending race?

Let’s start by defining software quality. How would you measure it? Well, how would you define a high-quality watch, car, or clothing item?

Could it be that when using the product you can feel that its maker used good materials (even if that means a higher price)? If you use it for a long time, will it hold up to standard wear and tear? Is it designed to be comfortable? Fun to use? Does it break or overheat when you accelerate to high speeds or drive long distances?

With that said, a Mercedes costs 10 times more than a Honda. Does that mean that a Honda is not a good quality car?

All of these examples teach us that quality is based on a perceived notion of price to value.

  • Does the product serve its purpose in a good way?
  • Can it stand up to our “use demands” in a reliable, long-lasting way?

Price can be interpreted in different ways as well – for example, implementation and maintenance time. Don’t be fooled: breaking this perception is a lot easier than building it in the eyes of our users. I can think of more than one car brand that has managed to break our perception of it in the last decade or so.

The farther you run, the more expensive it is to go back.

What I’m about to write will not shock anyone, and still, you would be surprised by the number of organizations I see that just don’t seem to internalize this idea: the earlier you incorporate quality practices into the product’s life cycle, the less money you will spend on fixing things in the long run. I have seen a far-reaching feature get invented, designed, and developed, only for the team to realize that it differs from what its creators intended, or does not serve its purpose in the best way.

What can possibly go wrong? Nothing much, just:

  • Features developed contrary to the customer’s tastes/needs
  • Modules that do not carry out the action for which they were designed
  • Time-consuming debates on whether something was a bug or whether the specification was not unequivocal and clear enough
  • Going back and forth, working on and fixing the same module several times
    • As a result: lack of time to deliver all of the estimated “sprint content” = fewer features, less progress
  • Features that aren’t clear enough for the user and result in complaints or support issues
  • Unsatisfied customers
  • Bad user experience
  • Bugs in production
  • Wasting time = money

Make a decision, start the change!

Start by understanding the importance of having a wide-ranging testing infrastructure in your organization. Sure, it will not happen overnight. Still, it is an investment that produces huge value.

Suit up, sneakers on, and start the process:

  • Start designing a work procedure that includes clear written requirements. People never completely understand one another, and if there is no written documentation, it becomes challenging to determine what to develop and what to test. I am realistic: in a startup there is no place or time for detailed requirements. However, we can design a format that is short enough to be written mid-race and clear enough for the developer to understand what to build.
  • Implement the use of a knowledge base and management tools that suit your needs. Let’s define “knowledge base”: as far as I’m concerned, it can be a Google Drive, but you need one place to go when you need to know how a feature works. As for management tools, let’s just say that is a topic for a whole other article; what you need to understand is that there are free requirements-management, bug-management, and process-driven tools that can help you organize your everyday tasks.
  • Start building a work process that suits your company’s needs and corporate culture. Just as there are rules to running a marathon, there have to be rules to running a startup. When do you start the run? When can the tester join? What criteria indicate that the lap is over? What can and can’t be done in the middle of a sprint? Every game needs its set of rules.
  • Implement productive communication between Product and QA. How? For one thing, lead by personal example. Start by fostering a productive and positive discussion. Each individual requires their own approach so they will not “lock up” into a defensive position and become wary of your words. Take some time to learn your environment and how to approach each member.
  • Assign a quality lead that will provide you with the added value of a productive process.
  • Incorporate quality practices as early as possible in concept, design, and development. You will be surprised how effective it is to review requirements (even before one line of code has been written) and how much time it can save you in the long run.
  • Make an organizational-level decision to make quality a high-priority issue, and let QA do their job with the appropriate amount of authority.

Two points I would like to add to all that has been written:

  1. It is a process and a mindset that take time to develop, but the investment pays for itself.
  2. It’s not magic – there will always be bugs! The question is how many, and of what severity.

Remember the question we asked at the beginning of the article? (How do we fit in this never-ending race?)

I would like to wrap up this article with a quote by the famous tester Michael Bolton:

“I wish more testers understood that testing is not about ‘building confidence in the product’. That’s not our job at all. If you want confidence, go ask a developer or a marketer. It’s our job to find out what the product is and what it does, warts and all. To investigate. It’s our job to find out where confidence isn’t warranted – where there are problems in the product that threaten its value.”

Practical steps to shift left

Last month we published “Do you know how much your quality costs?”, where we discussed the importance of shifting left and shortening the feedback loop as it relates to the cost of quality. According to Bank of America, a bug discovered in development takes about 5 hours and $390 to fix. Discovering a bug in QA costs about 30x as much – 127 hours to fix – due to the time it takes to reproduce, the back and forth between dev and QA, and the need to trace back through the code. Fixing a bug in production is estimated at $72,000. Shifting left keeps testing as close to development as possible and gives you a fast, continuous feedback loop.

[Chart: Direct cost to find/fix a defect]

Four key ingredients to achieving shift left:

  1. Test automation – Traditional QA processes result in months of testing before you get feedback. A productive tester can get about 20 tests done in a single day. Assuming your test suite includes dozens of tests, there are two ways to get the feedback loop down to minutes: hire hundreds of testers or automate your tests. Automated tests don’t get tired, take days off, or go out for lunch.
  2. High concurrency – Shifting left means giving near real-time feedback, i.e. minutes. Running dozens of tests, each taking roughly 30-60 seconds, with the expectation of getting the results back in, say, 5 minutes, can only be done if you run your tests in parallel. The higher your level of parallelism, the shorter the feedback loop. Running your tests in parallel requires a grid: a cloud-based infrastructure of virtual environments that lets you spin different browsers up and down.
  3. Build-test-deploy automation – Shifting left means optimizing the handover across the different stages of the development process. The development cycle includes dozens of handovers, and human intervention can always delay the process, hence you want to automate the handovers using Continuous Integration (CI) servers. CI servers monitor events throughout your development cycle, such as a new build, and can be programmed to trigger a set of actions once a certain event occurs, such as running your automated tests.
  4. Suites – It is best practice to optimize your quality assurance process by segmenting your tests into suites and running different suites at different stages of your development cycle. For example, your full regression might include thousands of tests. Running all those tests every time a developer pushes code would be time- and resource-consuming, and if 2-3 key scenarios fail, there is no point in waiting for the entire suite to finish. You might want to create a small subset of tests, composed of your high-priority flows, and run that “sanity” suite often, while running the more extensive suites only if the sanity suite passes (see the sketch after this list).
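
To make the suite idea concrete, here is a minimal sketch assuming pytest; the test names, the sign_in() helper, and the “sanity” marker are hypothetical, and the parallel run assumes the pytest-xdist plugin.

    # conftest.py – register the hypothetical "sanity" marker so pytest
    # recognizes it and doesn't warn about an unknown marker.
    def pytest_configure(config):
        config.addinivalue_line("markers", "sanity: high-priority smoke tests")

    # test_flows.py – tag the high-priority flows; untagged tests belong
    # to the extensive regression suite.
    import pytest

    def sign_in(email, password):
        return True  # hypothetical helper, stubbed so the sketch runs

    @pytest.mark.sanity
    def test_login():
        assert sign_in("user@example.com", "secret")

    def test_profile_settings_regression():
        assert True  # lower-priority flow, full regression only

A CI server could then run `pytest -m sanity -n 8` on every push (the -n flag comes from pytest-xdist) for fast parallel feedback, and schedule the full `pytest` run only after the sanity suite passes.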

 
