
When we hear the phrase “Record and Playback”, most people cringe with fear and skepticism, associating it with primitive, unstable, flaky tests. For years, organizations have viewed it as a sign of weakness in automation and have discouraged teams from relying on record and playback tests. The main criticisms are that it leads to:

  • Higher test maintenance
  • Lower stability, since tests break when any element changes
  • Unclear test coverage
  • Highly coupled tests

This is not a new phenomenon; it has been the case for the past 20 years, a period in which the state of automation has evolved by leaps and bounds.

I am not going to refute the points above, as they are true in some cases, but people fail to realize there is a time and place for everything, and that includes record and playback tests. These types of tests are valuable to:

  • Do fuzz testing (a.k.a. monkey testing), which involves recording large amounts of random data through a vast number of valid and invalid actions/assertions and observing the application under test (see the sketch after this list). This helps uncover issues like memory leaks and unexpected crashes, and helps evaluate the system under extreme conditions that would otherwise be hard to reach with normal, structured automated tests.

  • Perform automated exploratory testing, where the user tries out multiple scenarios and records multiple actions while simultaneously learning about the application and the tool used for automated testing.

  • Help in load testing, by quickly recording a set of tests and simulating thousands of users concurrently performing the same recorded actions on the application.

  • Get the whole team involved in test automation, irrespective of their skill sets.
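
As a concrete illustration of the fuzz/monkey idea above, here is a minimal, hypothetical browser-side sketch in TypeScript. It assumes nothing about any particular tool; it simply fires random actions at whatever elements it finds:

    // Hypothetical monkey-test sketch: fire random clicks and random text
    // at random elements, then watch the console for crashes or errors.
    function monkeyTest(iterations: number): void {
      for (let i = 0; i < iterations; i++) {
        const elements = document.querySelectorAll<HTMLElement>('button, a, input');
        if (elements.length === 0) return;
        const target = elements[Math.floor(Math.random() * elements.length)];
        if (target instanceof HTMLInputElement) {
          target.value = Math.random().toString(36).slice(2); // random input data
        } else {
          target.click(); // random (possibly invalid) action
        }
      }
    }

    // Surface unexpected crashes while the monkey runs.
    window.addEventListener('error', (e) => console.error('Crash detected:', e.message));
    monkeyTest(1000);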

Now you may think, “Why is he highlighting the advantages and disadvantages of record and playback tests?” The answer is that we at Testim.io recognized these factors and came up with a hybrid approach to solve the problems with record and playback, by building a platform based on Artificial Intelligence (AI).

Testim.io follows a hybrid approach where we give organizations and users the ability to record and play back tests, while at the same time giving users the flexibility to programmatically manipulate those recorded tests. These tasks can be performed easily using built-in functionality of the platform. It also gives teams the freedom to add their own wrappers around the platform (if needed) using JavaScript and HTML.

To increase the stability of tests irrespective of how they are written, Testim.io uses Dynamic Locators. The AI underneath the platform analyzes all the DOM objects of a page in real time and extracts the object trees and their properties. It then decides on the best strategy for locating a particular element based on this analysis. The more tests you run, the smarter the AI becomes at keeping the automated tests stable. So even if your automation strategy is pure record and playback, re-running the recorded tests multiple times helps make those tests stable, even if an element's attributes change in the future. As a result, authoring and execution of tests are much faster.
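
To make the idea concrete, here is a toy sketch of what a dynamic-locator strategy could look like. This is an illustration only, not Testim's actual algorithm; the selectors and weights are invented:

    // Toy "dynamic locator": rank several candidate selectors for the same
    // element and fall back gracefully when the preferred one breaks.
    interface Candidate { selector: string; weight: number; }

    function locate(candidates: Candidate[]): HTMLElement | null {
      // Try candidates from highest to lowest confidence.
      for (const c of [...candidates].sort((a, b) => b.weight - a.weight)) {
        const el = document.querySelector<HTMLElement>(c.selector);
        if (el) return el; // a real system could also re-weight on success
      }
      return null;
    }

    const loginButton = locate([
      { selector: '#login', weight: 0.9 },                      // id: strong, but may change
      { selector: '[data-test="login"]', weight: 0.8 },         // dedicated test hook
      { selector: 'form button[type="submit"]', weight: 0.5 },  // structural fallback
    ]);
    loginButton?.click();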

In summary, there are various approaches to test automation, each with its own merits and demerits. Understanding and using the approach that makes the most sense in the context of the project is crucial to better testing with automation tools, platforms, and frameworks. As these options continue to mature, it will become all the more important to follow the hybrid approach to cater to the different types of skill sets, needs, and expectations of teams and organizations. The hybrid approach to testing is the new era of test automation.

Curious to see how we implement the hybrid approach? Sign up for our free trial.

Do you ever ask yourself these questions?

  • How do I know if my quality is improving?
  • What did the team do this week and what was the impact?
  • Do we have enough test coverage in the sprint?

With Testim’s new Managerial Reports, you never have to worry about getting answers to these questions again. Get deep insight into project status and quality metrics, with granular details on execution cycles, active runs, run durations, and success rates – all available online or sent weekly to your inbox.

These reports, dashboards, and KPIs quickly summarize the effort the team invested over the course of the week, identify tests that require attention, and show whether additional effort is needed to improve your quality score. Easily track trends week over week and see how your quality coverage is improving.

For the next two months, Testim is offering these reports free of charge. Moving forward, additional licensing will be required to take advantage of these new insights. Sign in to Testim to see the new reports now, or contact your account manager with any questions.

Growing up as testers in the software testing industry, we wrestle with a lot of thoughts and questions, such as:

  • How do I learn about software testing?
  • I like my job, but how do I get to the next level?
  • Am I good at my job?
  • People use so much testing jargon, and I do not understand any of it.
  • I try to communicate effectively, but people still do not understand me.

Based on testing software for over a decade, reading articles and blogs, interacting with practitioners from all over the world, and analyzing my successes and failures as a tester over the years, I have found that everything comes down to three key factors that pave the path to becoming a strong tester. These factors form the Strong Tester Model shown below.

Factor 1: Motivation

  • “Run Your Own Race” – As testers, we constantly compare ourselves with other people, try to do more without paying attention to our own goals, and end up stressed from being overworked and from concentrating on lower-priority things. In life and in testing, we need to remember that the ONLY person we are competing with is OURSELVES. We need to identify our strengths and answer to our own conscience rather than comparing ourselves with others who have entirely different goals.
  • Embrace Your Talents – Recognize your strengths and weaknesses. Embrace them, and build on your skill sets with well-defined goals and deadlines. Hold yourself accountable.
  • Go Explore – We will find our true passion only when we explore different things and take chances. Everyone starts from somewhere and no one is an overnight success. So start your journey and exploration. Try anything at least once with full dedication and determination. Remember “If you take a chance in life sometimes good things happen and sometimes bad things happen. But if you don’t take a chance, nothing happens.”
  • Tips and tricks for sustained and continuous motivation:
    • Have inspirational quotes to get you going when you are down or feel lost. Everyone has a trigger point, what is yours?
    • Have a Testing Notebook to note down all things testing when you read, explore and talk to people. Looking at your notes will spark different ideas for new techniques, strategies, articles, talks and so on. The opportunities are endless.
    • Use Mind Maps to visualize your thoughts and goals. This gives you something concrete to think about and helps in prioritizing each one of them.
    • Listen to inspiring podcasts and read motivational books.
    • Do deep work.
    • Have trusted mentors to help you in your journey. They challenge ideas, brainstorm solutions, and guide you. Meet with them regularly, via Skype or in person.

Factor 2: Communication

  • Intercultural communication – In the corporate world, we work with people from different cultures and regions. Be cognizant of cultural differences and of your use of idioms and phrases, and help colleagues adapt to those differences. The learning goes both ways: people from different cultures keep an open mind and learn from the locals, and vice versa.
  • Body Language – About 55% of our communication is through body language. Thus, having effective body language is important when working with other people. Be a good listener, have proper eye contact and pay attention.
  • Tone of Voice – Raising our voice in meetings to express opinions or concerns does not work. When we raise our voice, the point does not get across to the other person; the only thing people notice is that someone is shouting, and it automatically makes them react less amicably.
  • Mental Models – People create their own mental models about objects, systems, ideas, description and so on. It is important to notice this and be open to hearing other people’s ideas.
  • Know your audience – Different people need different kinds of information. We need to modify our testing story based on our audience.
  • Safety Language – Avoid digging a hole for yourself by using safety language. Use hedging phrases like “could be”, “may have”, “would it be”, etc. For example, say “This defect could be reproduced under these conditions” instead of “This defect will be reproduced only in this way.”

Factor 3: Education

  • Validated Learning – The “Lean Startup” principle of Build -> Measure -> Learn holds true in testing as well. Always build a Minimum Viable Product or Proof of Concept, then solicit feedback. Based on the feedback, keep making the product better. Follow an iterative approach.
  • Pairing
    • Developer – Tester pairing
      • Pairing while writing unit tests helps to identify gaps in development testing and these gaps can be addressed by Testers during their testing process.
      • Pairing in code reviews helps to find vulnerabilities in code.
      • Pairing also helps in learning the programming language.
    • Pair Testing with Testers/Developers
      • Paired testing enables a continuous exchange of ideas and observations and helps you explore the system more thoroughly.
  • We gain experience only by making mistakes. Remember: “Things are never as bad as they feel or as good as they sound.”
  • Conferences
    • The track sessions help to learn about different topics relevant to the industry. If you do not like one session feel free to go to another one. A lot of money is invested by you and your company, so take advantage of it.
    • Do your research – Before going to a conference, identify people you want to network with, then meet up with them during the conference to learn and exchange ideas.
    • Hallway conversations and networking – A lot of learning takes place outside the conference rooms, in the hallways and at networking events. Be sure to exchange business cards; on the back of each card, note down hints about the person and follow up with them after the conference.
    • Share your ideas, thoughts and problems with the community. Use blogs, LinkedIn and Twitter to help other people like you.

“If you want different results, you need to be willing to do things differently and different things” (from The 12 Week Year).

In case of any questions, please feel free to contact me at raj@testim.io or visit my website at www.rajsubra.com

To get inspired, visit this page which is my source of inspiration – http://www.rajsubra.com/inspirational-books-articles/

Finally, my YouTube videos can be found here – http://www.rajsubra.com/my-youtube-channel/

 

Time is money. How many times have you heard that?

We are truly a “startup nation”, constantly racing against the clock to deliver features and execute sprint content to meet our customers’ demands. This intense pace is an everyday reality. As a QA engineer who has worked with and in different organizations, I have experienced it up close and personal.

On one side there are owners and investors – they want to see growth. On the other side, there are customers – they want features and capabilities that work. And then there is us, Testers – we want to deliver quality.

But, how do we fit in this never-ending race?

Let’s start by defining software quality. How would you measure it? Well, how would you define a high-quality watch, car, or item of clothing?

Could it be that when using the product you can feel that its maker used good materials (even if that means a higher price)? If you use it for a long time, will it still hold up against standard wear and tear? Is it designed to be comfortable? Fun to use? Does it break or overheat when you accelerate to high speeds or drive long distances?

With that said, a Mercedes costs 10 times more than a Honda. Does that mean a Honda is not a good-quality car?

All of these examples teach us that quality is based on a perceived notion of price to value.

  • Does the product serve its purpose in a good way?
  • Can it stand up to our “use demands” in a reliable, long-lasting way?

Price can be interpreted in different ways as well – for example, implementation and maintenance time. Don’t be fooled: breaking this perception in the eyes of our users is a lot easier than building it. I can think of more than one car brand that has managed to break our perception of it in the last decade or so.

The farther you run, the more expensive it is to go back.

What I’m about to write will not shock anyone, and still, you would be surprised how many organizations I see that just don’t seem to assimilate this idea. The earlier you incorporate quality practices into the product’s life cycle, the less money you will spend on fixing it in the long run. I have seen broad features invented, designed, and developed, only for the team to realize the result differs from the original intent or does not serve its purpose well.

What can possibly go wrong? Nothing much, just:

  • Features developed contrary to the customer’s tastes/needs
  • Modules that do not carry out the action for which they were designed
  • Time-consuming debates on whether something was a bug or whether the specification was simply not clear and unequivocal enough
  • Going back and forth, working on and fixing the same module several times
    • As a result: not enough time to complete all of the planned “sprint content” = fewer features, less progress
  • Features that aren’t clear enough for the user and result in complaints or support issues
  • Unsatisfied customers
  • Bad user experience
  • Bugs in production
  • Wasting time = money

Make a decision, start the change!

Start by understanding the importance of having a wide-ranging testing infrastructure in your organization. Sure, it will not happen overnight. Still, it is an investment that produces huge value.

Suit up, sneakers on, and start the process:

  • Start designing a work procedure that includes clear written requirements. People never completely understand one another, and without written documentation it becomes challenging to know what to develop and what to test. I am realistic; there is no place or time for detailed requirements in a startup. However, we can design a format that is short enough to write mid-race and clear enough for the developer to understand what to build.
  • Implement a knowledge base and management tools that suit your needs. Let’s define “knowledge base”. As far as I’m concerned, it can be a Google Drive, but you need one place to go when you need to know how a feature works. As for management tools, that is a topic for a whole other article; for now, understand that there are free requirement-management, bug-management, and process-driven tools that can help you organize your everyday tasks.
  • Start building a work process that suits your company’s needs and corporate culture. Just as there are rules to running a marathon, there have to be rules to running a startup. When do you start the run? When can the tester join? What criteria indicate that the lap is over? What can and can’t be done in the middle of the sprint? Every game needs its set of rules.
  • Implement productive communication between product and QA. How? For one thing, lead by personal example. Start by fostering a productive and positive discussion. Every individual requires a different approach so they will not lock up in a defensive position instead of hearing your words. Take some time to learn your environment and how to approach each member.
  • Assign a quality lead who will provide you with the added value of a productive process.
  • Incorporate quality practices as early as possible in concept, design, and development. You will be surprised how effective it is to review requirements (even before a single line of code has been written) and how much time it can save you in the long run.
  • Make an organizational-level decision to treat quality as a high-priority issue, and let QA do their job with the appropriate amount of authority.

Two points I would like to add to all that has been written:

  1. It is a process and a mindset shift that takes time, but it will return the investment.
  2. It’s not magic; there will always be bugs! The question is in what quantity and of what severity.

Remember the question we asked at the beginning of the article? (How do we fit in this never-ending race?)

I would like to wrap up this article with a quote by the famous tester Michael Bolton:

“I wish more testers understood that testing is not about “building confidence in the product”. That’s not our job at all. If you want confidence, go ask a developer or a marketer. It’s our job to find out what the product is and what it does, warts and all. To investigate. It’s our job to find out where confidence isn’t warranted, where there are problems in the product that threaten its value.”

Many yoga instructors encourage their students to find balance. It’s effective advice in that it puts the onus on the practitioner. It’s also a popular concept in software testing, as industry experts often recommend that software teams find the balance between automation and manual testing practices. Another similarity is the trajectory of adoption. One does not dive into yoga, but adopts it slowly over time until it becomes part of a daily routine. The same goes for test automation: you can’t expect to start automating everything from scratch. Good test automation is the culmination of work over time.

Why not all manual?

Manual testing is fine, and was once the status quo, but with the wide adoption of Agile and continuous delivery methodologies, it just doesn’t scale. Every enhancement or new feature rolled out for an app must have a corresponding set of tests to ensure the new functionality works and does not break the previous version’s code. Checking all of the file directories, databases, workflows, integrations, rules, and logic manually would be extremely time-consuming.

For companies looking to improve time to market and increase test coverage, automation provides significant advantages over manual testing. A typical environment can host thousands of test cases of varying complexity, executed effortlessly and nonstop. As a result, automation shrinks the time required to perform mundane, repetitive test cases from days to hours, and even minutes if you run in parallel. That inherent increase in velocity is what attracts teams to automated testing.

Why not all automation?

The benefits of automation are obvious. Automation provides the ability to react quickly to ever-changing business requirements and to generate new tests continuously. Is it reasonable, then, to assume that the more you automate, the more benefits you reap? Why not just automate everything?

The short answer: it’s very difficult. The World Quality Report 2017-18, Ninth Edition, details the top challenges enterprises face with test automation.

Traditionally, you would automate only once your platform was stable and the user flow fairly consistent. This was primarily due to the amount of time it took to write a test, sometimes hours or even days, plus the time it took to update and maintain tests as code changed. Until your user flow stabilized, the ROI was simply not justified.

Feedback from our customers suggests this has changed. We recently heard from a number of clients who automate everything, starting as early as when wireframes are ready. We reduced authoring time by about 70% and maintenance time by about 90%. Creating a user flow now takes a few minutes with Testim, and so does updating it. Reducing the time per test from hours or days to a few minutes makes automation much more affordable, providing immediate ROI from faster feedback loops.

So should you strive for 100% automation? No, but you can get to 80-90%, which was unheard of until recently. There are still scenarios that only a human can test, such as Face ID. There are also other aspects of your Software Development Lifecycle (SDLC) that automation depends on.

Summary

There is an ongoing struggle to keep up with the pace dictated by customer pressure and competition, to produce on a continuous basis. And not just produce for production’s sake, but deliver a quality product that’s been thoroughly tested. That’s why we created Testim…

To help software teams:

  • Increase coverage by 30% in 3 months
  • Reduce number of missed bugs by 37%   
  • Increase team productivity by 62%
  • Reduce regression testing costs by 80%  
  • Speedup testing schedules by 3 months

Sources

  1. https://techbeacon.com/world-quality-report-2017-18-state-qa-testing
  2. https://dzone.com/articles/automated-vs-manual-testing-how-to-find-the-right
  3. https://blog.qasource.com/the-right-balance-of-automated-vs.-manual-testing-service
  4. http://sdtimes.com/software-testing-is-all-about-automation/#sthash.5Idkps2O.dpuf
  5. http://sdtimes.com/report-test-automation-increasing/#sthash.HlHQmD2l.dpuf
  6. https://appdevelopermagazine.com/4935/2017/2/8/test-automation-usage-on-the-rise/
  7. https://techbeacon.com/devops-automation-best-practices-how-much-too-much

Do you know how much quality costs? Have you ever calculated it? If you do, you’ll learn that you spend thousands of dollars and hours of productivity on every bug. From the cost of labor, to the cost of deployment, to the cost of finding bugs in production, no bug fix is free.

Many companies and academics, from IBM to Microsoft to large banks, have performed studies to quantify the cost of finding bugs. It’s been well known for many years that costs go up the closer to production you get, but the actual numbers are a bit staggering. The findings show the dramatic financial difference between discovering a bug in the wild and finding it during development.

  • Fixing a bug in production – Very expensive. These take time to fix, and while repairs take place, your business experiences pain. Your teams work around the clock on the repair, and your developer must rework code they wrote months ago and have since forgotten about.
  • Fixing a bug in QA – Somewhat expensive. The QA person has to report the bug, and a manager has to decide who should be assigned to it. The developer has to step away from their daily tasks, sometimes impacting the current sprint, reproduce the bug, and go back to edit code from a previous sprint that they have likely already begun to forget.
  • Fixing a bug in development – Least expensive. It takes a few minutes for the developer to fix. If you have an immediate feedback loop, your developer gets feedback minutes after pushing code. If a test fails, it’s still within the current sprint, and the developer will recognize the code and its implications because everything is fresh in their mind. The bug didn’t escalate, and the likelihood of an on-time release is much higher.

Shift left is the term coined for a software development cycle that includes a constant and immediate feedback loop. You may have heard stories about how Facebook put a billion dollars’ worth of engineering into creating this feedback loop. The company even wrote a PHP-to-C++ compiler called HipHop to speed up the loading of its test environments.

If Facebook is willing to spend a billion dollars to get that feedback loop, you can spend a few weeks ensuring your own loop is as fast and reliable as possible. This means fast builds, fast test batteries, and a short gap between developer compiling and developer testing.

This requires certain infrastructure that we’ll cover in a future post, but organizations that have deployed it have seen a decrease in the cost of quality and an increase in on-time delivery. You may even have to disassemble your build pipeline to replace a few slower-moving pieces.

Your goal here should be one minute or less between build and test. This may seem impossible, or it may already be the way things work for your team; it can require a lot of work or a little, depending on the existing systems. No matter the environment, there is one fundamental truth for all software development: the faster the developer can see their results in a running binary, the faster they can fix problems, and the lower the overall cost of quality.

Want to know how to shift left? Stay tuned.


Google recently announced the release of Puppeteer, a Node library that provides an API to control headless Chrome. Within 24 hours it received great feedback from the community:

  • 6,685 stars on GitHub
  • 2.2K likes and 1.2K shares on Twitter

So why should we care? Puppeteer’s GitHub documentation includes a Q&A comparing it to other automation tools, and in Google’s own words, there isn’t much difference from Selenium.

The awesomeness of Selenium is that it convinced ALL browser vendors to support the same low-level API (and this took years; try convincing Apple, MS, and Google to work together), and this API has even been implemented in more than 10 languages (including JS).

Most of Puppeteer’s API is very similar to Selenium’s (or the Webdriver.io/NightwatchJS alternatives), e.g. (see the sketch after this list):

  • Google’s launch() method vs. Selenium’s init()
  • goto() vs. to url()
  • close() vs. end()
  • type() vs. setValue()
  • click() even stayed the same
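
For a side-by-side feel, here is a minimal Puppeteer sketch in TypeScript using exactly the calls listed above; the URL and selectors are made up for illustration:

    // Minimal Puppeteer flow; comments note the rough Selenium/WebdriverIO analogue.
    import puppeteer from 'puppeteer';

    (async () => {
      const browser = await puppeteer.launch();        // ~ init()
      const page = await browser.newPage();
      await page.goto('https://example.com');          // ~ url()
      await page.type('#search', 'test automation');   // ~ setValue()
      await page.click('#submit');                     // click() stayed the same
      await browser.close();                           // ~ end()
    })();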

Google could have adopted the same Selenium API and contributed the changes to the Selenium repo. But the biggest issue isn’t the API; it’s splitting the community and not contributing to the Selenium code base. With Google’s resources and talented developers, they could have contributed to the Selenium project, which currently depends on a few amazing volunteers, with some parts stalling for lack of resources.

Selenium is known to be relatively slow compared to operating directly on the browser. This is caused by its architecture, which requires a Selenium Grid or Selenium Standalone server acting as a proxy (even just starting it takes a while). This architecture is great when your tests need to run on multiple browsers in parallel, but you pay the overhead when working with a single browser. Helping the Selenium community speed this up, even if only for Chrome, would have been more beneficial than creating their own, Chrome-only solution.

Puppeteer is a step in the right direction. Google is an innovative company that pushes the web forward with great ideas and specs and amazing developer tools, and now it seems set on improving UI test automation, which we all know is extremely challenging.

Standardization leads to innovation. With Selenium, not only can you run those tests on other browsers; the entire industry relies on those standards to create additional (commercialized) products. For example: Testim.io, Saucelabs, BrowserStack, Applitools, Cross Browser Testing, and the list goes on and on.

I would love to hear your opinion about Puppeteer and Selenium.

Cheers,
Oren Rubin
CEO, Testim.io

It’s inevitable. Agile has matured, and in addition to speed, the new challenges are about accountability and proving its worth. Just as with other areas of the business, Agile must answer to traceability, auditability, and compliance. After all, what good is a methodology that delivers fast but ultimately fails to deliver more value? Many organizations now demand proof, and teams new to Agile start out in an uphill battle with risk-averse management and globally dispersed project teams, on top of technical debt. Even though the executive team made the decision to adopt Agile practices, that decision comes with its own set of expectations that must be met. And it is qualitative metrics that will be looked to in order to satisfy this expectation.

So which metrics should you put in place? Is it more effective to track activity within Agile tools such as JIRA? Or is it better to track metrics within the software itself? The important thing to realize is what’s really being asked: “Is Technology actually improving its impact on the business in a tangible way?” Or, said another way, as phrased by the Kanban community: is IT becoming more fit for purpose? Answering this question, of course, requires a clear understanding of what the Business expects from its interactions with IT.

To ensure metrics remain relevant, they should be inspected regularly and allowed to evolve so they still align with the organization’s short- and long-term goals. Certain metrics may be created to match the organization’s specific agile maturity phase, and as the organization moves from one phase to another, some metrics may no longer be relevant. Inspection and adaptation will keep metrics aligned with your specific goals.

Measure the amount of working software that reaches customers’ hands, such as the percentage of story points completed within a sprint. Velocity trends (not absolute values) are also helpful. Story points are a crude way of doing this; it is much better to also estimate the business value of each story (so each story gets two estimates, one for complexity and one for value), and then you can easily measure the value produced by the team (see the sketch below). The tough part about value is that it is not easy to assign to a story. In some circumstances, value cannot be a KPI for IT projects because (1) business value is subjective, especially if business-value points are used; (2) some projects have no direct way to calculate business value (think regulatory projects that must be done to comply with regulations); and (3) business value varies too much from project to project based on the perceived benefit, making it hard to measure objectively.
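
Here is a tiny TypeScript sketch of the two-estimates idea; the numbers are invented and the Story shape is hypothetical:

    // Each story carries two estimates: complexity (effort) and business value.
    interface Story { complexity: number; value: number; done: boolean; }

    const sprint: Story[] = [
      { complexity: 5, value: 8, done: true },
      { complexity: 3, value: 2, done: true },
      { complexity: 8, value: 13, done: false },
    ];

    // Value actually shipped this sprint, independent of effort spent.
    const valueDelivered = sprint
      .filter((s) => s.done)
      .reduce((sum, s) => sum + s.value, 0); // = 10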

KPIs

That said, KPIs to track during a transformation to Agile include those that fall under the categories of time, quality, and effort. Instead of measuring traditional key performance indicators (KPIs) like profitability or transactions per hour, executives must shift their focus to how well the enterprise deals with changing customer preferences and a shifting competitive landscape.

Quality

In terms of quality, nothing measures a product’s value or worth better than ROI. The trick is to make sure the return is clearly defined and measurable. The most straightforward way to do that is to tie it to a number: sales, downloads, new members, positive comments, social media likes, etc.

The number of defects is a simple metric that speaks volumes not only about the quality of the product but also about team dynamics and effectiveness. Good defect metrics include the number of defects found during development, after release, or by customers and people outside the team. It can also be insightful to track the number of defects carried over to a future release, the number of support tickets, and the percentage of manual vs. automated test coverage.

Effort

Sprints are fundamental to Agile iterations, so KPIs become fundamental to sprints. The KPIs designed at the start of development should, while tweaked as needed to stay focused on objectives, remain in place unless they prove unreliable in achieving those objectives.

Velocity tracks the number of user story points completed per sprint. Its effectiveness at depicting anything of value outside the technology team is debatable. It’s often misunderstood because it’s confused with a team’s productivity. It’s not uncommon for executive management to demand a higher number of story points per sprint in an effort to increase speed to market; more often than not, this results in a lower-quality product. Nonetheless, velocity can be useful if it’s understood and reported as an effort-based metric rather than a time- or quality-based one.

When measuring velocity, it’s important to keep in mind that different project teams will have different velocities. Each team’s velocity is unique. If team A has a velocity of 50 and team B has a velocity of 75, it doesn’t mean that team B has higher throughput. Since each team’s estimation culture is unique, its velocity will be too. Resist the temptation to compare velocity across teams; measure the level of effort and output of work based on each team’s own interpretation of story points.

Time

Scrum teams organize development into time-boxed sprints. At the outset of the sprint, the team forecasts how much work they can complete during a sprint. A sprint burndown report then tracks the completion of work throughout the sprint. The x-axis represents time, and the y-axis refers to the amount of work left to complete, measured in either story points or hours. The goal is to have all the forecast work completed by the end of the sprint.
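
As a small worked example of the burndown described above, this sketch computes the y-axis (remaining work) from daily completions; the sprint length and numbers are assumed:

    // Remaining-work series for a sprint burndown chart.
    const forecast = 40;                                     // points committed at sprint start
    const completedPerDay = [5, 3, 0, 8, 6, 4, 7, 5, 2, 0]; // assumed 10-day sprint

    let remaining = forecast;
    const burndown = completedPerDay.map((done) => (remaining -= done));
    console.log(burndown); // y-axis values per day; the x-axis is the day index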

A team that consistently meets its forecast is a compelling advertisement for agile in their organization. But don’t let that tempt you into fudging the numbers by declaring an item complete before it really is. It may look good in the short term, but in the long run it only hampers learning and improvement.

SUMMARY

KPIs are vital to the strategic goals of an organization: in addition to revealing whether a project is on course, they help inform key business decisions. Metrics are just one part of building a team’s culture. They give quantitative insight into the team’s performance and provide measurable goals for the team. While they’re important, don’t get obsessed. Listening to the team’s feedback during retrospectives is equally important to growing trust across the team, quality in the product, and development speed through the release process.

SOURCES

  1. Banking on Kanban
  2. https://www.atlassian.com/agile/metrics

 

Testing has been in the spotlight of software development as organizations continually seek to optimize development cycles. Continuous Delivery, with its promise of frequent releases, even as often as hourly, is a big factor driving executives to shave time off any eligible technical process. As enterprises embark on their DevOps transformation journeys, delivering at lightning speed without compromising quality takes more than cultural changes and process improvements. Test automation is key in helping project teams write and manage the smallest number of test conditions while getting the maximum coverage.

The case for automation: when the scenario is highly quantitative, technology is superior to humans. That’s clearly the case in the current technology landscape where, according to the 2015 OpenSignal report, massive fragmentation has produced over 24,000 distinct Android devices running different versions of the Android OS, plus the several variations of iOS devices currently in use. The web is no different. It’s impossible to test every scenario manually, so automation must be leveraged. But therein lies the main point: automated testing is a tool to be leveraged as needed, not relied upon to dictate an entire testing strategy.

Logic would suggest that if automating a few test scenarios yields fast results, automating every eligible scenario will shorten the test cycle even more; alas, that’s usually not how it goes. Automation efforts take a good deal of planning and forethought to add value at the anticipated level. More often than not, the maintenance overhead dominates, especially in CI/CD environments where tests are triggered automatically and someone must analyze the reports and fix locators, a task that can take hours. This won’t work for organizations that are truly DevOps. If full-stack developers are going to be fully responsible for taking code to production, they will need tools that automate the process across infrastructure, testing, deployment, and monitoring. This continuous framework enables weekly, daily, and hourly releases. Leading DevOps organizations like Netflix and Amazon are said to deploy hundreds to thousands of times per day.

What’s more, studies reveal that a high percentage of projects utilizing automated testing fall short of the anticipated ROI or fail altogether. These efforts fall short due to their duration, ramp-up time, skill-set gaps, and maintenance overhead. If the benefits of automated testing aren’t significant enough to mitigate risk, then speedy releases become more of a liability than an asset.

There are varying levels of automation, just as there are companies of all shapes and sizes. Instead of one or the other, it’s more fitting to view automated and manual testing as complementary. Agile and DevOps have created new testing challenges for QA professionals, who must meet the requirements of rapid delivery cycles and automated delivery pipelines.

DEMANDS OF CONTINUOUS TESTING

Testing used to have the luxury of its own phase, or at least a set timeframe to stabilize new code prior to pushing to production. No longer. In the era of DevOps, QA must adopt a truly agile mindset and continually be prepared to shift focus to testing new features and functionality as they become available – that could be weekly, daily, or hourly.

From web, mobile, and connected devices to IoT, a quick inventory of current technology reveals a multitude of devices and literally thousands of ways these technologies can be combined. Across networks, apps, operating systems, browsers, and API libraries, the number of combinations explodes, with each combination requiring its own verification and testing support. As the push toward DevOps continues, the upward trend toward continuous testing is inevitable. As a result, testers and developers will need to deliver in brief, rapid iterations.

Beyond automation, what does this look like? QA, Engineering, or whoever is responsible for testing will need to shift left and engage in rounding out user story acceptance criteria to ensure accuracy and completeness. Active participation in sprint planning will help ensure that user stories are truly decoupled and independent, or, where dependencies are necessary, that story precursors and relationships are in place. Partnering with Product Owners helps confirm that the scope of a given sprint or iteration is realistic.

And finally, in support of shifting left, collaborate with Dev and Ops to ensure the necessary support when a code check-in causes a batch of automated tests to fail. The culprit will need to be investigated manually, so it’s important that QA is prepared and does not become a blocker for the release. All of these activities largely comprise testing outside of automation. It’s a broader role than the traditional QA specialist, requiring initiative and the willingness to embrace the always-on mentality necessary to support DevOps efforts.

SUMMARY

Software development practices have changed and the change brings with it new challenges for the testing arena. The growing need for test automation in addition to the profusion of new testing scenarios will constantly evolve the role of testing and QA.  Yes, the world is changing for testers—in a good way. The days of exhaustive manual testing are certainly numbered, and automated testing is an essential part of the delivery pipeline. But testing has never been more important.

SOURCES

  1. IT Revolution DevOps Guide
  2. World Quality Report 2016-17
  3. Open Signal: Android Fragmentation

 

Introduction
We work hard to improve the functionality and usability of our platform, constantly adding new features. Our sprints are weekly, with minor updates sometimes released every day, so a lot is added over the course of a month. We share updates by email and on social media, but we wanted to provide a recap of the latest enhancements, summarizing the big and small things we delivered to improve your success with the product. We’d love to hear your feedback.

Test History

Is past performance an indicator of future success? The folks on Wall Street don’t think so… But we do! This is why we are pleased to offer the new Test History feature in Testim.  


The new Test History feature allows users to slice and dice test results to get more actionable insights.

Why should I care?
This gives users much more insight into test results. Quickly analyze how many tests ran during a selected time frame, how many succeeded, the average duration of each run, how your tests perform across multiple environments, and whether certain environments or scenarios consistently fail. This allows project teams to see improvement trends across scenarios or environments over time. Click here to learn more.

What’s included?
Test History allows you to filter based on the following parameters:

  • Run time
  • Specific tests / All tests
  • Last X runs
  • Status
  • Browser

Create Dependencies Between Tests
This new capability allows users to create different kinds of dependencies between tests. As a best practice, we recommend keeping tests as isolated as possible. However, we recognize that sometimes you need the ability to create dependencies between tests.

Why should I care?
Now users can create a logical sequence of tests. Working closely with our customers, we’ve seen project teams required to run a set sequence of activities. A general testing framework does not let you create a sequence, forcing users to create one long test, which may result in performance issues and failures. By creating dependencies in test plans, you can build shorter, discrete actions, order them in sequence, set dependencies, and share data between tests. Click here to learn more.

Setting up Cookies
Cookies is a reserved parameter name for specifying a set of cookies you would like Testim to set for your application. You can set cookies for each browser session before the test starts or before the page is even loaded. Use cookies to make browser data available to your tests. The cookies parameter is an array, where each cookie object must contain name and value properties (see the sketch below).
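
Based on the shape described above (an array of objects with name and value properties), a cookies parameter could look like this sketch; the cookie names and values are invented:

    // Hypothetical cookies parameter: an array of { name, value } objects.
    const cookies = [
      { name: 'session_id', value: 'abc123' },   // pre-authenticated session
      { name: 'ab_test_group', value: 'B' },     // drive a personalized variant
    ];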

Why should I care?
Websites use cookies to personalize user experience. By using this feature, you can test different types of personalized experiences. Click here to learn more.

Improved Scrolling
A more flexible way of scrolling through your page.

Why should I care?
Traditionally, scrolling in test automation is pixel-based: you would set your script to skip, say, 60 pixels. Offering more flexible scrolling makes your tests more adaptive; for example, scroll to a specific element or use the mouse wheel (sketched below). Click here to learn more.
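
To illustrate the difference in plain browser terms (the selector is hypothetical), compare pixel-based scrolling with element-based scrolling:

    // Brittle: skip a fixed number of pixels regardless of layout changes.
    window.scrollBy(0, 60);

    // Adaptive: scroll until a specific element is in view.
    const row = document.querySelector('#results-row-42');
    row?.scrollIntoView({ behavior: 'smooth', block: 'center' });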

How do I get my hands on these hot new features?
Customers have access to these features now; check them out and let us know what you think. If you’re not a customer, test drive the new features by signing up for a free trial to experience code-free and maintenance-free test automation.

Be smart & save time...
Simply automate.

Get Started Free