What did the team do this week and what was the impact?
Do we have enough test coverage in the sprint?
With Testim’s new Managerial Reports, you never have to worry about getting answers to these questions again. Get deep insight into project status and quality metrics which provide granular details into execution cycles, active runs, run durations and success rates – all available online or sent weekly to your inbox.
These reports, dashboards and KPIs quickly summarize the team’s effort invested over the course of the week, identify tests that require attention, and show whether additional effort is needed to improve your quality score. Easily track trends week over week and see how your quality coverage is improving.
For the next two months Testim is offering these reports free of charge. Moving forward, additional licensing will be required to take advantage of these new insights. Sign into Testim to see the new reports now or contact your account manager with any questions.
We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. To that end, we’re thrilled to release a few of your most requested features: negative validation, extract value, and integration with test management. Check them out and let us know what you think.
What is it?
Validations are among the most important steps in our tests: they let us confirm that our app does what we expect it to do. Testim provides a wide range of validation options so you can choose the ones that best suit your needs. Now you can also choose a negative validation.
Element Not Visible – Make sure an element is not visible to the user.
Why should I care? Use the element not visible validation to check that an element does not exist or is not visible on the page. Use it to make sure an element disappeared from the screen or wasn’t shown in the first place. Learn more
What is it?
Extract value lets you copy values directly from your application to be used in later steps. For example, if you have text in your application such as username, name or account number use extract value to validate that it appears on another page. You can also use it to extract a value for future calculation at a later step.
Why should I care? You can now use the extracted value in a validation step, a set text step, custom steps, and more. And if you want to change the element you selected, you no longer have to re-record the step. Learn more
Integrate Test Management
What is it?
Testim can now sync your test results with TestRail so you can get a side by side view of your manual and automated tests.
Why should I care?
If you’re doing exploratory testing, you can now link your manual and automated tests to your requirements and defects for end-to-end traceability. Learn more
Customers have access to these features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.
According to the US Software Engineering Institute, 50% of defects are due to bad specifications.
Are you managing a software team or responsible for looking at their end-to-end delivery processes holistically?
Then, join this round-table discussion with Sam Hatoum, CEO of Xolv.io as he shares how he’s implemented proper specifications throughout projects he’s worked on to increase velocity by up to 35% and reduce defect rates down to 2%.
Engagio is a two year old marketing software startup that helps marketers land and expand their target accounts. The company is rapidly growing with more than 150 customers and 45 employees. Based in San Mateo, Engagio was founded by Jon Miller who was previously the Co-Founder of Marketo.
Today, I had the pleasure of speaking with Helge Scheil, VP of Engineering at Engagio, who shared why he selected Testim, his overall development philosophy, and the results his team was able to achieve. Check out the series of videos or read the Q&A from our conversation below.
Q: Can you tell us a little about what your software does?
A: We help B2B marketers drive new business and expand relationships with high-value accounts. Our marketing orchestration software helps marketers create and measure engagement in one tool, drive ongoing success, and measure impact easily. Engagio orchestrates integrated account-based programs, providing the scale benefits of automation with the personalization benefits of the human touch. Our software complements Salesforce and existing marketing automation solutions.
Q: What does your development process look like? A: Our developers work in 2-week sprints, developing features rapidly and deploying to production daily without any production downtime. We’re running on AWS and have the entire develop-test-build-deploy process fully automated. Each developer can deploy at any time, assuming all quality criteria have been met.
Q: What tools do you use to support your development efforts?
A: We are using Atlassian JIRA and Confluence for product strategy, roadmaps, requirements management, work breakdown, sprint planning and release management. We’re using a combination of Codeship, Python (buildbot), Docker, Slack, JUnit and Testim for our continuous build/test/deploy. We have integrated Testim into our deployment bot (which is integrated into Slack).
Q: Prior to Testim, how were you testing your software? What were some of the challenges?
A: Our backend had great coverage with unit and integration tests, including the API level. On the front-end we had very little coverage and the small amount we had was written in Selenium, which was very time-consuming with little fault-tolerance and many “flaky” failures.
Q: What were some things you tried to do to solve these challenges?
A: We tried simply allocating more engineering time to Selenium tests. We considered hiring automation engineers but weren’t too fond of the idea, because we believe engineers should be responsible for quality out of the gate, including creating and maintaining the regression tests.
Q: Were there other solutions you evaluated? Why did you select Testim?
A: We didn’t really consider any other solutions after we started evaluating Testim, which was the first vendor we evaluated. Despite some skepticism around the “record and play” concept, we found that Testim’s tolerance to UI and feature changes (meaning “tests don’t fail”) is much greater than we had expected. Other solutions we considered rely on pixels/images/coordinates, which are inherently sensitive and non-tolerant to changes. We also found that non-engineers (i.e. product managers, functional QA engineers) can write tests, which is unlike Selenium.
Q: After selecting Testim, can you walk me through your getting started experience?
A: During the first week of implementation the team got together to learn the tool and started creating guinea-pig tests, which evolved into much deeper tests with more intricate feature testing. Those tests ended up in our regression test suite, which runs nightly. Rather than allocating two days per sprint, we decided to crank up the coverage with a one-day blitz.
Q: After using Testim, what were some of the benefits you experienced?
A: We were able to increase our coverage by 4-5x within 6 weeks of using Testim. We can write more tests in less time and maintenance is not as time consuming. We integrated Testim via its CLI and made “run test <label>” commands available in our “deployment” Slack channel as well as a newly created “regression_test” channel. Any deployment that involves our web app now automatically runs our smoke tests. In addition to that we run nightly full regression tests. Running four cloud/grid Testim VMs in parallel we’re able to run our full regression test suite in roughly 10 minutes.
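The Slack integration described above can be sketched as a small wrapper around the Testim CLI. This is a hypothetical illustration, not Engagio’s actual bot: the flag names and defaults below are assumptions, so check the official Testim CLI documentation for the real options.

```python
import subprocess

# Hypothetical sketch of how a Slack "run test <label>" command might shell
# out to the Testim CLI. Flag names and placeholder values are assumptions
# for illustration, not Testim's documented interface.

def build_testim_command(label, token="<API_TOKEN>", project="<PROJECT_ID>",
                         grid="<GRID_NAME>", parallel=4):
    """Build the argument list for a labeled Testim run."""
    return [
        "testim",
        "--token", token,
        "--project", project,
        "--grid", grid,
        "--label", label,
        "--parallel", str(parallel),  # e.g. four cloud VMs, as in the interview
    ]

def run_label(label):
    """Invoke the CLI; report success when the suite passed (exit code 0)."""
    result = subprocess.run(build_testim_command(label))
    return result.returncode == 0

print(" ".join(build_testim_command("smoke")))
```

A deployment bot could call `run_label("smoke")` after each web-app deploy and post the boolean result back to the Slack channel.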
Q: How was your experience working with Testim’s support team?
A: Testim’s responsiveness was extraordinary. We found ourselves asking questions late in the evenings and over the weekends, and the team was there to help us familiarize ourselves with the product. If an answer didn’t come immediately in the middle of the night, we would have it by the time we got started the next day.
Thank you to everyone who participated in our round table discussion on The Future of Test Automation: How to prepare for it? We had a fantastic turnout with lots of solid questions from the audience. If you missed the live event, don’t worry…
You can watch the recorded session any time:
Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim shared their thoughts on:
The current state of test automation
Today’s test automation challenges
Trends that are shaping the future
The future of test automation
How to create your test automation destiny
In this session they also covered:
Tips and techniques for balancing end to end vs. unit testing
How testing is moving from the back end to the front end
How to overcome mobile and cloud testing challenges
Insights into how the roles of developers and testers are evolving
Skills you should start developing now to be ready for the future of testing
Some of the audience questions they answered:
How do we know what is the right amount of test coverage to live with a reasonable amount of risk?
What is the best way to get developers to do more of the testing?
How do you deal with dynamic data? Is the best practice to read a DB and compare the results to the front end?
Does test automation mark the end of manual testing as we know it?
There were several questions that we were not able to address during the live event, so I followed up with the panelists afterwards to get their answers.
Q: What is Alan’s idea of what an automated UI test should be?
As much as I rant about UI automation, I wrote some myself a few weeks ago. The Unity Developer Dashboard provides quick access to many Unity services for game developers. I wrote a few tests that walk through workflows and ensure that the cross-service integration is working correctly.
The important bit is that I wrote tests to find issues that could only be found with UI automation. If validation of the application can be done at any lower level, that’s where the test should be written.
Q: The team I work on builds complex machines with an Android UI and a separate backend. Which layer would you suggest concentrating more testing effort on?
I’d weight my testing heavily on the backend and push as much logic as possible out of the Android UI and into the backend, where I can test more, and test faster.
Q: Some legacy applications are really difficult to unit test. What are your suggestions for handling these kinds of applications?
Q: How do you implement modern testing to complement automation efforts?
My mantra in Modern Testing is, Accelerate the Achievement of Shippable Quality. As “modern” testers, we sometimes do that by writing automated tests, but more often, we look at the system we use to make software – everything from the developer desktop all the way to deployment and beyond (like getting customer feedback), and look for ways we can optimize the system.
For example, as a modern tester, I make sure that we are running the right tools (e.g. static analysis) as part of the build process, that we are taking unit testing seriously and finding all the bugs that can be found by unit tests during unit testing. I try to find things to make it easier for the developers I work with to create high quality and high value tests (e.g. wrappers for templates, or tools to help automate their workflow). I make sure we have reliable and efficient methods for getting feedback from our customers, and that we have a tight loop of build-measure-learn based on that feedback.
Q: Alan Page could you give an example of a test that would be better tested (validated) at a lower level (unit) as opposed to UI level?
It would be easier to think of a test that would not be better validated at that level. Let’s say your application has a sign-in page. One could write UI automation to try different combinations of user names, email addresses, and passwords, but you could write tests faster (and run them massively faster) if you just wrote API tests to sign up users. Of course, you’d still want to test the UI in this case, but I’d prefer to write a bunch of API tests to verify the system, and then exploratory test the UI to make sure it’s working well with the back end, has a proper look and feel, etc.
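Alan’s sign-in example can be made concrete with a small sketch. The `sign_up` function below is a stand-in for a real HTTP call (e.g. a POST to a user-creation endpoint); its name, fields, and validation rules are invented for illustration, not taken from any real API.

```python
# Sketch of testing sign-up at the API level instead of through the UI.
# sign_up() is a stub standing in for a real HTTP endpoint; the names and
# rules here are invented for illustration.

def sign_up(username, email, password):
    """Pretend API endpoint: returns (status_code, message)."""
    if not username:
        return 400, "username required"
    if "@" not in email:
        return 400, "invalid email"
    if len(password) < 8:
        return 400, "password too short"
    return 201, "created"

# Dozens of input combinations can be checked in milliseconds, far faster
# than driving a browser through the sign-in form for each one.
cases = [
    (("alice", "alice@example.com", "s3cretpass"), 201),
    (("alice", "not-an-email", "s3cretpass"), 400),
    (("alice", "alice@example.com", "short"), 400),
    (("", "alice@example.com", "s3cretpass"), 400),
]

for args, expected_status in cases:
    status, _ = sign_up(*args)
    assert status == expected_status, (args, status)

print("all API-level sign-up cases passed")
```

The UI would then get a much smaller set of exploratory checks for look and feel and for the wiring between front end and back end.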
Q: How critical is it today for a QA person to be able to code? In other words, if you are a QA analyst with strong testing/automation skills but not much coding experience, what would be the best way to incorporate some coding into your profile? Where would you start?
Technology is evolving at a rapid pace, and the same applies to tools and programming languages. That being said, it would be good for testers to know the basics of some programming language in order to keep up. I would not say this is critical, but it is definitely good to have, and with so many online resources available, it is easier than ever for testers to gain technical knowledge.
Some of the best ways I know to incorporate coding into your profile are:
Online Tutorials and courses (Udemy, Coursera, Youtube videos)
Pairing with developers when they are programming. You can ask them basic questions of how things work, as and when they are coding. This is a nice way to learn
Attending code reviews helps to gain some insight into how the programming language works
Reading solutions to different problems on Stack Overflow and other forums
Volunteering to implement a simple feature in your system/tool/project by pairing with another developer
Organizing/Attending meetups and lunch ‘n’ learns focused on a particular programming language and topic
Choose a mentor who could guide you and give you weekly assignments to complete. Set clear goals and deadlines for deliverables
Q: My developers really like reusing cucumber steps, but I couldn’t make them write these steps. The adoption problem is getting the budget reallocated. Any advice for what I should do?
Reusing cucumber steps may not be necessarily a bad thing. It could also mean that the steps you may have written are really good and people can use them for other scenarios. In fact, this is a good thing in BDD (Behavior Driven Development) and helps in easier automation of these steps.
But if the developers are lazy and reuse steps that do not make sense in a scenario, then we have a problem. In this case, what I would do is try to make developers understand why a particular step may not make sense for a scenario and discuss how to rewrite it. This continuous practice of on-the-spot feedback helps instill the habit of writing good cucumber steps. I would also raise the point in retrospectives and team meetings and discuss it with the entire team, to reach a common understanding of the expectations.
In terms of budget reallocation, I would talk to your business folks and project manager on the value of writing cucumber steps and how it helps to bring clarity in requirements, helps to catch defects early and saves a lot of time and effort which would otherwise be spent on re-work of stories due to unclear requirements and expectations for a feature.
Q: Can we quickly Capture Baseline Images using AI?
What exactly do you want the AI part to do? Currently AI isn’t needed for that: there are tools (e.g. Applitools and Percy.io) that can create a baseline very fast. I would expect AI to help in the future with setting the regions that must be ignored (e.g. a field showing today’s date), and the closest thing I know is Applitools’ layout comparison (comparing the layout of a page rather than the exact pixels, so the text can differ and the number of lines can change, but it still counts as a match).
Q: What are your thoughts on Automatic/live static code analysis?
Code analysis is great! It can help prevent bugs and add to code consistency inside the organization. The important thing to remember is that it never replaces functional testing and it’s merely another (orthogonal) layer which also helps.
Q: When we say “automated acceptance tests”, do we mean automated REST API tests? Which automation tool is good to learn?
No. They usually mean E2E (functional) tests, though acceptance tests should include anything related to approving a new release, and in some cases, this includes load/stress testing and even security testing.
Regarding good tools: for functional testing, I’m very biased toward Testim.io, but many prefer to code and choose Selenium or its mobile counterpart Appium (though Espresso and EarlGrey are catching up in popularity).
For API testing, there are too many to list, from giant HP (StormRunner), to medium-sized BlazeMeter, to small and cool solutions like API Fortress, Postman, Loadmill, and of course Testim.io.
Q: Why not call them full-stack tests instead of e2e?
Mostly because e2e is used more often in the industry. I actually prefer the Google naming conventions and just call tests small, medium, or large; full-stack/end-to-end tests fall into the large category.
Growing up as testers in the software industry, we often wrestle with questions such as:
How do I learn about software testing?
I like my job but how do I get to the next level?
Am I good at my job?
People use so much testing jargon, and I do not understand any of it.
I try to communicate effectively, but people still do not understand me.
After testing software for over a decade, reading articles and blogs, interacting with practitioners from all over the world, and analyzing my successes and failures as a tester, I discovered that everything comes down to three key factors that pave the path to becoming a strong tester. These factors form the Strong Tester Model shown below.
Factor 1 – Motivation
“Run Your Own Race” – As testers, we constantly compare ourselves with other people, try to do more without paying attention to what our goals are, and end up stressed from being overworked and concentrating on lower-priority things. In life and in testing, we need to remember that the ONLY person we are competing with is OURSELVES. We need to identify our strengths and answer to our own conscience, rather than comparing ourselves with others who have totally different sets of goals.
Embrace Your Talents – Recognize your strengths and weaknesses. Embrace them and play to your strengths, with well-defined goals and deadlines. Hold yourself accountable.
Go Explore – We will find our true passion only when we explore different things and take chances. Everyone starts from somewhere and no one is an overnight success. So start your journey and exploration. Try anything at least once with full dedication and determination. Remember “If you take a chance in life sometimes good things happen and sometimes bad things happen. But if you don’t take a chance, nothing happens.”
Tips and tricks for sustained and continuous motivation:
Have inspirational quotes to get you going when you are down or feel lost. Everyone has a trigger point, what is yours?
Have a Testing Notebook to note down all things testing when you read, explore and talk to people. Looking at your notes will spark different ideas for new techniques, strategies, articles, talks and so on. The opportunities are endless.
Use Mind Maps to visualize your thoughts and goals. This gives you something concrete to think about and helps in prioritizing each one of them.
Listen to inspiring podcasts and read motivational books.
Do deep work.
Have trusted Mentors to help you out in your journey. They help to challenge ideas, brainstorm solutions and guide you. Meet with them regularly via Skype or on a 1:1 basis.
Factor 2 – Communication
Intercultural communication – In the corporate world, we work with people from different cultures and regions. Be cognizant of cultural differences and the usage of idioms and phrases, and help colleagues adapt to those differences. The learning goes both ways: people from different cultures keep an open mind and learn from the locals, and vice versa.
Body Language – About 55% of our communication is through body language. Thus, having effective body language is important when working with other people. Be a good listener, have proper eye contact and pay attention.
Tone of Voice – Raising our voice in meetings to express our opinions or concerns does not work. When we raise our voice, the point does not get across to the other person. The only thing people notice is that someone is shouting, and it automatically makes them react in a less amicable way.
Mental Models – People create their own mental models about objects, systems, ideas, description and so on. It is important to notice this and be open to hearing other people’s ideas.
Know your audience – Different people need different kinds of information. We need to modify our testing story based on our audience.
Safety Language – Prevent digging a hole for yourself by using safety language. Use hedging phrases like “could be”, “may have”, and “would it be”. For example, say “This defect could be reproduced under these conditions” instead of “This defect will be reproduced only in this way.”
Factor 3 – Education
Validated Learning – The “Lean Startup” principles of Build->Measure->Learn hold true in testing as well. Always build a minimal viable product or proof of concept, then solicit feedback. Based on the feedback, keep making the product better. Follow an iterative approach.
Developer – Tester pairing
Pairing while writing unit tests helps to identify gaps in development testing and these gaps can be addressed by Testers during their testing process.
Pairing in code reviews helps to find vulnerabilities in code.
Pairing also helps in learning the programming language.
Pair Testing with Testers/Developers
Paired Testing helps in continuous exchange of ideas/observations and helps to better explore the system
We gain experience only by committing mistakes. Remember “Things are never as bad as they feel or good as they sound.”
Attend conferences – Track sessions help you learn about different topics relevant to the industry. If you do not like one session, feel free to go to another. You and your company invest a lot of money, so take advantage of it.
Do your research – Before going to a conference, identify people you want to network with, then meet up with them during the conference to learn and exchange ideas.
Hallway conversations and networking – A lot of learning takes place outside the conference rooms, in the hallways and at networking events. Make sure you exchange business cards; on the back of each card, note down hints about the person and follow up with him/her after the conference.
Share your ideas, thoughts and problems with the community. Use blogs, LinkedIn and Twitter to help other people like you.
“If you want different results, you need to be willing to do things differently and different things” (from 12 Week Year)
“If it’s worth building, it’s worth testing” — Kent Beck, pioneer of Test Driven Development
Imagine this situation. It is 4:45 pm on a Friday afternoon, and a new feature on the company’s web application for generating sales reports is pushed to production. At 11:30 pm that night, the lead developer gets a frantic call from a customer — the new feature broke an existing business-critical feature. What if the team could have prevented the break in the first place? By including test automation from the beginning of the development process, this is possible.
What Is Testing?
Testing is crucial to many agile software development processes. Testing enables developers to know ahead of time if everything will work as expected. With a well-written set of tests, developers can know whether or not new additions to a codebase will break existing features and behavior. Testing processes become the crystal ball of the software development process.
Testing can be automated or performed manually, but automated testing allows software development teams to test code more quickly, frequently, and accurately. Software testers and developers can then free up their time and focus on the more difficult tasks at hand.
How To Develop For Testing
The key to being successful with test automation in the software development life cycle is to introduce it as early as possible. While many developers recognize the importance of testing their software, the testing process is oftentimes delayed until the end of the development cycle. Testing may even be dropped completely in order to make a deadline or meet budgetary restrictions.
Those not using test automation may view testing as a burden or roadblock to developing and delivering an application. A well-written set of tests, however, can end up saving time during the development process. The key to this is to write them as soon as new features are developed. This practice is commonly known as Test Driven Development. Writing tests as one develops features also encourages better documentation and leads to smaller changes in the codebase at a given time. Taking smaller steps in creating and changing a codebase enables the developer to make sure that what he/she adds maintains the health of the codebase.
Just as one can adapt his/her development workflow to Test Driven Development, it is important to also adapt the way tests are written when leveraging automated tests. Automated tests typically contain three parts: the setup, the action to be tested, and the validation. The best tests are those that test just one item, so developers know exactly what breaks and how to fix it. Tests that combine multiple actions are more difficult to create and maintain as well as slower to run. Most importantly, complicated tests do not tell the developers exactly what is broken and still require additional debugging/exploration to get to the root of the problem.
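The three-part structure described above can be sketched in a short example. This uses Python’s standard `unittest` module; the `ShoppingCart` class is a toy invented for illustration, not from any real codebase.

```python
import unittest

# Illustration of the three-part test structure: setup, the action under
# test, and a single validation. ShoppingCart is a toy class for the example.

class ShoppingCart:
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)

class CartTotalTest(unittest.TestCase):
    def test_total_sums_item_prices(self):
        # Setup: a cart with two known items
        cart = ShoppingCart()
        cart.add("book", 12.50)
        cart.add("pen", 1.25)
        # Action: compute the total
        total = cart.total()
        # Validation: exactly one assertion, so a failure points at one thing
        self.assertAlmostEqual(total, 13.75)

# Run the single test case programmatically.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CartTotalTest))
```

Because the test checks only one behavior, a failure immediately tells the developer which behavior broke, with no extra debugging to locate the problem.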
Automated testing can be broken down into different types of tests, such as web testing, unit testing, and usability testing. While it may not be possible to have complete automated test coverage for every application, a combination of different types of testing can provide a comprehensive test suite that can be augmented by hands-on testing as well.
Platforms like Testim enable developers to automate web testing across multiple browsers, as if a user was testing the application hands-on. This enables both developers and testers to uncover issues that cannot be discovered using hands-on testing methods.
What causes a test to fail?
The purpose of testing software is to identify bad code. Bad code either does not function as expected or breaks other features in the software. Testing is important to developers because it allows them to quickly correct bugs and maintain a healthy codebase that enables a team of developers to develop and ship new features.
However, tests can fail for reasons that are not bad code. When doing hands-on testing, a failed test can even be the result of human error. With some automated testing suites, if a test is written for a button on a webpage with a certain identifier, and the identifier changes, that test will then fail the next time it is run.
The failure of the button test may cause the developer to think something is wrong with their code, and as a result, they may spend hours digging through endless lines of code to suss out the issue, only to find that it was the result of a bad test and not bad code.
How Does Testim Help With Testing?
Testim gives developers and testers a way to quickly create, execute, and maintain tests. It does this by adapting to the changes that are made during the software development cycle. Through machine learning algorithms, Testim enables developers to create tests that can learn over time. This could lead to things like automated testing that adapts to small changes like the ID of a button, which would create more reliable tests that developers and testers can trust to identify bad code.
Tests that are quick to create and run will transform the software development process into one that ships code quicker, so developers can spend their time developing.
According to the 2017 Test Benchmark Report, survey respondents want to achieve 50%-75% test automation.
Join this round-table discussion with Alan Page, QA Director at Unity Technologies and Oren Rubin, CEO of Testim as they discuss what companies need to start doing now to achieve their 5 year testing plans.
We work hard to improve the functionality and usability of our autonomous testing platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and social media, but wanted to provide a monthly recap summarizing the big and small things we delivered to improve your success with Testim.
What is it?
Do you need to validate data that appears in the app against data from your back-end? Do you need to extract data to use in your tests? Then we have the feature for you.
We provide two types of API steps:
API action – Use when you need to get data for a calculation, or to save it for later use in the test.
Validate API – Use to validate an element against the API result data.
Why should I care? This capability makes it easy to do UI and API testing simultaneously. There is no need to toggle between different systems or integrate results. Author a UI test and automatically author an API test to ensure the application under test is working correctly. Read more
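The idea behind validating UI data against an API result can be sketched in a few lines. Both functions below are stubs standing in for a real HTTP call and a real DOM lookup; the endpoint, account ID, and field names are invented for illustration.

```python
# Sketch of the idea behind a "Validate API" step: fetch a value over the
# API and compare it with what the UI renders. Both lookups are stubs.

def fetch_account_balance_api(account_id):
    """Stand-in for a real HTTP call returning back-end JSON."""
    fake_backend = {"acct-42": {"balance": 1250.00}}
    return fake_backend[account_id]

def read_balance_from_ui(account_id):
    """Stand-in for reading the rendered balance text from the page."""
    fake_page_text = {"acct-42": "$1,250.00"}
    return fake_page_text[account_id]

def validate_ui_against_api(account_id):
    """True when the UI's displayed balance matches the back-end value."""
    api_balance = fetch_account_balance_api(account_id)["balance"]
    ui_text = read_balance_from_ui(account_id)
    ui_balance = float(ui_text.replace("$", "").replace(",", ""))
    return api_balance == ui_balance

assert validate_ui_against_api("acct-42")
print("UI matches back-end data")
```

In a real test the two stubs would be replaced by an HTTP request and a UI element lookup, with the comparison done in a single validation step.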
Debug Step Parameters
What is it?
We’ve learned that enriched debugging can be a huge time saver. Debug console errors and network errors automatically and see the failed steps in the DOM. In each step, you can see all the parameters used during the run.
Why should I care? This feature is helpful when you want to debug your runs and need to figure out which parameters were used in each step. Learn more
Company and Project Structure
What is it?
Large enterprises need the ability to manage multiple projects and various groups inside their organization. This feature makes it easy for admins to grant individual users access to specific projects. Earlier this year we added support for a company structure, with a company owner managing permissions and owners per project.
Why should I care? This gives you more flexibility in managing the different groups inside your company and allows control over who has access to which projects. Learn more
Customers have access to these features now. Check it out and let us know what you think. If you’re not a customer, sign up for a free trial to experience autonomous testing. We’d love to hear what you think of the new features. Please share your thoughts on Twitter or Facebook.
Time is money. How many times have you heard that?
We are truly a “startup nation”, constantly racing against the clock to deliver features and execute sprint content to meet our customers’ demands. This intense pace is an everyday reality. As a QA engineer who has worked with and in different organizations, I have experienced it up close and personal.
On one side there are owners and investors – they want to see growth. On the other side, there are customers – they want features and capabilities that work. And then there is us, Testers – we want to deliver quality.
But, how do we fit in this never-ending race?
Let’s start by defining software quality. How would you measure it? Well, how would you define a high-quality watch, car, or clothing item?
Could it be that, in using the product, you can feel that its maker used good materials (even if that means a higher price)? If you use it for a long time, will it hold up to standard wear and tear? Is it designed to be comfortable? Fun to use? Does it break or overheat when you accelerate to high speeds or drive long distances?
That said, a Mercedes costs ten times more than a Honda. Does that mean a Honda is not a good-quality car?
All of these examples teach us that quality is based on a perceived notion of price to value.
Does the product serve its purpose in a good way?
Can it stand up to our “use demands” in a reliable, long-lasting way?
Price can be interpreted in different ways as well – for example, implementation and maintenance time. Don’t be fooled: in the eyes of our users, breaking this perception is a lot easier than building it. I can think of more than one car brand that has managed to break our perception of it in the last decade or so.
The farther you run, the more expensive it is to go back.
What I’m about to write will not be a shock to anyone, and still, you would be surprised how many organizations I see that just don’t seem to internalize this idea. The earlier you incorporate quality practices into the product’s life cycle, the less money you will spend on fixing it in the long run. I have seen a broad feature get invented, designed, and developed, only for the team to realize that it differs from what was originally intended, or does not serve its purpose in the best way.
What can possibly go wrong? Nothing much, just:
Features developed contrary to the customer’s tastes/needs
Modules that do not carry out the action for which they were designed
Time-consuming debates over whether something was a bug, or whether the specification simply wasn’t clear and unambiguous enough
Going back and forth, reworking and fixing the same module several times
As a result: no time to complete all of the planned sprint content = fewer features, less progress
Features that aren’t clear enough for the user and result in complaints or support issues
Bad user experience
Bugs in production
Wasting time = money
Make a decision, start the change!
Start by understanding the importance of having a wide-ranging testing infrastructure in your organization. Sure, it will not happen overnight. Still, it is an investment that produces huge value.
Suit up, sneakers on, and start the process:
Start designing a work procedure that includes clear, written requirements. People never completely understand one another, and without written documentation it becomes hard to pin down what to develop and what to test. I am realistic: there is no place or time for detailed requirements in a startup. However, we can design a format that is short enough to be written mid-race and clear enough for the developer to understand what to build.
Implement a knowledge base and management tools that suit your needs. Let’s define “knowledge base”: as far as I’m concerned, it can be a Google Drive folder, but you still need one place to go when you need to know how a feature works. As for management tools, that’s a topic for a whole other article; what you need to understand is that there are free requirements-management, bug-tracking, and process-driven tools that can help you organize your everyday tasks.
Start building a work process that suits your company’s needs and corporate culture. Just as there are rules to running a marathon, there have to be rules to running a startup. When does the run start? When can the tester join? What criteria indicate that the lap is over? What can and can’t be done in the middle of the sprint? Every game needs its set of rules.
Implement productive communication between product and QA. How? For one thing, lead by personal example. Start by fostering a productive and positive discussion. Every individual requires a different approach so that they won’t lock into a defensive position and tune out your words. Take some time to learn your environment and how to approach each team member.
Assign a quality lead who will provide you with the added value of a productive process.
Incorporate quality practices as early as possible in the concept, design, and development stages. You will be surprised how effective it is to review requirements (even before a single line of code has been written) and how much time it can save you in the long run.
Make an organization-level decision to make quality a high-priority issue, and let QA do their job with the appropriate amount of authority.
Two points I would like to add to all that has been written:
It is a process and a mindset that take time, but they will repay the investment.
It’s not magic – there will always be bugs! The question is how many, and how severe.
Remember the question we asked at the beginning of the article? (How do we fit in this never-ending race?)
I would like to wrap up this article with a quote by the renowned tester Michael Bolton:
“I wish more testers understood that testing is not about “building confidence in the product”. That’s not our job at all. If you want confidence, go ask a developer or a marketer. It’s our job to find out what the product is and what it does, warts and all. To investigate. It’s our job to find out where confidence isn’t warranted; where there are problems in the product that threaten its value.”