Author

Daniel Gold


“When I talk to my colleagues and tell them that we have a team of developers performing manual testing guided by a test plan, nobody believes me…”

Let me just start by saying that I’m very proud of our team and how far we have come.

Since my arrival, we have tripled in size and significantly improved our quality standards. When I joined Testim, we were a small R&D team based in Israel. We had no QA department or defined testing process and had to build it all from scratch. There are many challenges in trying to create a process where testing is integrated into our CI/CD workflows. For one, there is awareness: assimilating the idea that we have to test the features we produce, and acknowledging that there are consequences if we don’t. We have a very talented and speed-oriented dev team. But there comes a point when you have to stop and take a step back to really plan for the future. As we strive for continuous improvement, we have to periodically evaluate our process from end to end, sometimes making small adjustments that improve the flow holistically. We had to internalize the idea that it is better to “waste” a week of testing than two weeks of refactoring – or worse, to lose our customers’ trust in our product.

I’m very happy to say that one of the things I love most about our team is that we do not have internal battles (at least, not anymore). We truly understand that we are all working for the same cause and moving as a team to get the job done.

Some history…

I started at Testim as a one-man show. I arrived with a background in quality assurance and solid automation experience from working for larger companies. The startup environment was new to me. I took the first month to learn the culture, build rapport with the people, understand the customers and their objectives, and observe the various internal and external processes end to end. From these learnings, I could conclude what needed to change, what was good and should be preserved, and which tools or methodologies would best suit our needs.

For example, the product-dev-testing workflow needed some tuning. That said, what suits an enterprise would be terrible overhead for a fast-paced startup. A startup typically requires a lot of risk analysis, fine-tuning, and wise tool selection. We needed flexible tools that were easy to use and configure, as our processes would evolve quickly over time. We did not have time for lengthy implementations involving consultants or systems integrators; we needed to deliver value instantly, without setbacks.

I see a lot of people recruited to a small startup from a big enterprise who, without fully evaluating their surroundings, start to incorporate heavy test management, reporting, and bug tracking tools just because “this is the way it should be done.” Frequently they encounter resistance because the team simply does not see the need, or because the tools create an unnecessary burden on the routine without delivering real added value to the current work process. The thing is, there is no single “right way” to do things; each team needs to collaborate to create an efficient workflow that works for them.

The 5 steps we took to build our quality-driven development process

  1. Effective product-dev-testing flows – We needed more concrete requirements and tech designs, with lean and effective documentation. We also needed to establish the entrance and exit points for each of the “stops” in our cycle (and how the ping-pong between them would be addressed). That took some work, but I think we are in a very good place now, and we continue to learn and iterate.
  2. Incorporating quality initiatives into our roadmap – An important thing to realize: plans tend to change in a fast-paced startup. That said, if we don’t put things on the agenda, they will not enter our mindset (and eventually our roadmap). I mapped the things I needed to implement with regard to quality and automation, ensuring they became part of the plan and the scheduled work. Another thing I needed to change was time estimation for feature releases. While we do need to move fast and give reasonable estimates, we started incorporating testing and automation coverage time into our assessments for each feature. That way, the time we need to release a well-tested and “covered” product is properly accounted for.
  3. Tool choice – We decided on tools, frameworks, and document formats that best suited our needs, and worked on perfecting the flow with every sprint. The key guideline was to make the tools work for us without making us work for the tools.
  4. Testing and automation coverage – We worked on reaching a point where our own automation gives us a very solid picture of our product, with risk analysis to determine what still needs to be tested manually. In addition, we made it part of the process for each and every one of us to write automated coverage for the features we deliver (see the sketch after this list). The standardized approach and guidelines are maintained with insights from me and our Chief Architect. We also implemented an effective CI/CD process and pipeline to ensure our fast pace and quality standards could go hand in hand.
  5. Ownership & teamwork – It is true that in a startup there is a lot more versatility in everybody’s roles. That said, over time we learned our strong points and took responsibility for the areas where we could make our work more productive. I took the quality assurance and test automation part because it was my background. There was a time when testing was my sole responsibility. As time went by, we grew to understand that a bottleneck is not a good place to be stuck in (if you know what I mean), which led us to adopt a whole new approach. We have always worked as a team, and as such, everybody feels responsible for bringing their task to completion with maximum quality and fast delivery. Part of that is taking responsibility for testing your own work. When I tell my colleagues that we have a team of developers writing test automation and performing manual testing according to a test plan – nobody believes me. It’s actually very simple math. We had a lot of work to do and a limited staff. The alternative was to deliver features without testing or to delay releases, and that was not an option. Test your work and be responsible for it – it’s as simple as that. Once you push that deploy button, you have to be confident that you did what you could to ensure its quality. Psychologically speaking, when you know there is no “gatekeeper” checking after you, you do a more thorough job. That said, at times you do need an additional set of eyes, or it’s better to let someone else take a look after you have been boiling in your own code for a long time – and that’s where I come in, or someone else from the team who is available. The popular conception is that only testers can test a product. But what is the difference between a tester and a developer? Mainly, that a tester decided to learn how to test and a developer decided to learn how to develop. Is it possible for someone to learn how to do both? Traditionally, organizations preferred to keep these as separate functions. However, today’s Agile and DevOps practices try to reduce the handoffs between the various stages of the process to create a continuous workflow. Once you teach developers how to test and give them the right testing mindset, they can tackle any feature they develop. Since they know the code and its weak points, they can do an amazing job of testing it. Everybody who “touches” the product and its features at any stage of the development lifecycle has to do their best to deliver quality work. The bottom line, as I mentioned before, is that there is no “one size fits all” model – it takes time and continuous improvement to adopt a process that works best for your team.
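To make step 4 concrete, here is a minimal sketch of what developer-owned test coverage can look like. The feature function (applyDiscount) and its rules are hypothetical, and I’m using TypeScript with Node’s built-in test runner purely for illustration – the point is the habit of shipping a feature together with its own automated checks, whatever your stack is.

import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical feature: apply a percentage discount to an order total.
// (Illustrative only – not actual Testim code.)
function applyDiscount(total: number, percent: number): number {
  if (total < 0) throw new RangeError("total must be non-negative");
  if (percent < 0 || percent > 100) throw new RangeError("percent must be 0-100");
  return Math.round(total * (1 - percent / 100) * 100) / 100;
}

// The developer who wrote the feature also writes its coverage:
// the happy path, the boundaries, and the failure modes.
test("applies a standard discount", () => {
  assert.equal(applyDiscount(200, 10), 180);
});

test("handles the boundary values", () => {
  assert.equal(applyDiscount(200, 0), 200); // no discount
  assert.equal(applyDiscount(200, 100), 0); // full discount
});

test("rejects invalid input instead of silently mispricing", () => {
  assert.throws(() => applyDiscount(200, 150), RangeError);
  assert.throws(() => applyDiscount(-5, 10), RangeError);
});

Running checks like these on every push, as part of the CI/CD pipeline from step 4, is what lets a fast pace and quality standards go hand in hand: the deploy button stays behind a green build.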

I think the main key here is making quality an organizational initiative. Whether you are in product, engineering, sales, or marketing, focusing on the customer’s experience should be everyone’s responsibility. We try to leverage as much data as possible to provide cross-functional transparency and not let emotion or anecdotal assessments get in the way of our decisions. We all share the same agenda, and my mission is to influence and foster quality awareness while maintaining a productive workflow that combines detailed code review, effective CI/CD, unit testing, and some TDD elements – our winning recipe for success.

 

Time is money. How many times have you heard that?

We are truly a “startup nation,” constantly racing against the clock to deliver features and execute sprint content to meet our customers’ demands. This intense pace is an everyday reality. As a QA engineer who has worked with and in different organizations, I have experienced it up close and personal.

On one side there are owners and investors – they want to see growth. On the other side there are customers – they want features and capabilities that work. And then there is us, the testers – we want to deliver quality.

Where does software quality lie in the release process?

Let’s start by defining software quality and how to measure it. In the context of software engineering, software quality refers to how well the software meets its functional and non-functional requirements. As for measuring it: there are various metrics associated with quality, and the subject is complex. It’s like asking, “How would you define a high-quality watch, car, or clothing item?”

Could it be that when using the product you can feel that its maker used good materials (even if it means a higher price)? If you use it for a long time, will it hold up to standard wear and tear? Is it designed to be comfortable? Fun to use? Does it break or overheat when you accelerate to high speeds or drive long distances?

With that said, a Mercedes costs 10 times more than a Honda. Does that mean that a Honda is not a good quality car?

All of these examples teach us that quality is based on a perceived notion of price to value.

  • Does the product serve its purpose well?
  • Can it stand up to our “use demands” in a reliable, long-lasting way?

Price can be interpreted in different ways as well – for example, implementation and maintenance time. Don’t be fooled: breaking this perception in the eyes of our users is a lot easier than building it. I can think of more than one car brand that has managed to break our perception of it in the last decade or so.

The farther you run, the more expensive it is to go back.

What I’m about to write will not be a shock to anyone, and still, you would be surprised by the number of organizations I see that just don’t seem to internalize this idea: the earlier you incorporate quality practices into the product’s life cycle, the less money you will spend on fixing it in the long run. I have seen a wide-scope feature get conceived, designed, and developed, only for the team to realize that it differs from the original intent, or does not serve its purpose well.

What can possibly go wrong? Nothing much, just:

  • Features developed contrary to the customer’s tastes/needs
  • Modules that do not carry out the action for which they were designed
  • Time-consuming debates on whether something was a bug or the specification was simply not unequivocal and clear enough
  • Going back and forth, working on/fixing the same module several times
    • As a result: lack of time to perform all of the evaluated “sprint content” = less features, less progress
  • Features that aren’t clear enough for the user and result in complaints or support issues
  • Unsatisfied customers
  • Bad user experience
  • Bugs in production
  • Wasting time = money

Make a decision, start the change!

Start by understanding the importance of having a wide-ranging testing infrastructure in your organization. Sure, it will not happen overnight. Still, it is an investment that produces huge value.

Suit up, sneakers on, and start the process:

  • Start designing a work procedure that includes clear written requirements. People never completely understand one another, and without written documentation it becomes challenging to determine what to develop and what to test. I am realistic: there is no place or time for detailed requirements in a startup. However, we can design a format that is short enough to be written at our pace and clear enough for the developer to understand what to build.
  • Implement the use of a knowledge base and management tools that suit your needs. Let’s define “knowledge base”: as far as I’m concerned, it can be a Google Drive, but you still need one place anyone can go to learn how a feature works. As for management tools, that is a topic for a whole other article; what you need to understand is that there are free requirements management, bug tracking, and process-driven tools that can help you organize your everyday tasks.
  • Start building a work process that suits your company’s needs and corporate culture. Just as there are rules to running a marathon, there have to be rules to running a startup. When do you start the run? When can the tester join? What criteria indicate that the lap is over? What can and can’t be done in the middle of a sprint? Every game needs its set of rules.
  • Implement productive communication between product and QA. How? For one thing, lead by personal example: start by modeling a productive and positive discussion. Every individual requires a different approach so they will not “lock up” in a defensive position instead of hearing your words. Take some time to learn your environment and how to approach each member.
  • Assign a quality lead that will provide you with the added value of a productive process.
  • Incorporate quality practices as early as possible in concept, design, and development. You will be surprised how effective it is to review requirements (even before a single line of code has been written) and how much time it can save you in the long run.
  • Make an organizational-level decision to make quality a high-priority issue, and let QA do their job with the appropriate amount of authority.

Two points I would like to add to all that has been written:

  1. It is a process and a mindset shift that takes time, but it will repay the investment.
  2. It’s not magic – there will always be bugs! The question is how many, and of what severity.

Remember the question we asked at the beginning of the article? (How do we fit in this never-ending race?)

I would like to wrap up this article with a quote from the well-known tester Michael Bolton:

“I wish more testers understood that testing is not about ‘building confidence in the product’. That’s not our job at all. If you want confidence, go ask a developer or a marketer. It’s our job to find out what the product is and what it does, warts and all. To investigate. It’s our job to find out where confidence isn’t warranted – where there are problems in the product that threaten its value.”
