While working on daily tasks in agile teams, we often have the feeling that we are busy with multiple tasks all day long, yet when we review our work at the end of the day, we realize we haven't accomplished anything concrete. The main reason for this is that our work environments are filled with distractions, from unnecessary and unproductive meetings to people checking messages on their phones, email, and Slack channels. As a result, we feel demotivated, less productive, and burnt out by the end of the day.

After years of working in the tech industry and going through the above experiences, I decided to take a hiatus from my job and reflect on my personal and career growth. In 2017, I started a six-month journey of self-exploration and discovery. I read books, listened to podcasts on mindfulness, productivity, leadership, and self-motivation, and interviewed successful people. When I finally went back into the workforce, I tried to apply the various concepts I had learned from this journey to my daily tasks at the workplace.

The concepts I learned helped me focus better, improved my critical thinking skills, taught me ways to prioritize my tasks, and made me more approachable to people in my personal and professional life. Below you will find the different hacks, tools, tips, and tricks that I learned and practiced, which can help anyone become a highly accomplished and productive tester while working in a vastly chaotic and fast-paced environment.

Different Hacks to become mindful and productive

There are two books worth mentioning here that have deeply influenced the way I do things: Deep Work by Cal Newport and Procrastinate on Purpose by Rory Vaden. In Deep Work, the author discusses the science behind, and practical steps for, focusing without distraction on cognitively demanding tasks. You can refer to this blog post for more detailed information – Deep Work. In Procrastinate on Purpose, the author presents a methodology for prioritizing your tasks, which he terms the "Focus Funnel".

In a nutshell, this is how it works: say you have a task, TASK A. You decide whether to work on TASK A by putting it through the focus funnel.

Step 1: Can TASK A be eliminated?

Step 2: If NO, can TASK A be automated?

Step 3: If NO, can TASK A be delegated?

Step 4: If NO, can TASK A be delayed further by procrastinating on purpose?

Step 5: If NO, then you work on TASK A by concentrating on it.
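The funnel above can be sketched as a small decision function. This is just an illustration of the ordering of the questions; the yes/no answers would of course come from your own judgment, not from code.

```python
# A minimal sketch of the "Focus Funnel" from Procrastinate on Purpose.
# The boolean parameters are illustrative stand-ins for the questions
# you would answer yourself for each task.

def focus_funnel(can_eliminate, can_automate, can_delegate, can_delay):
    """Return the action to take for a task, checked in funnel order."""
    if can_eliminate:
        return "eliminate"
    if can_automate:
        return "automate"
    if can_delegate:
        return "delegate"
    if can_delay:
        return "procrastinate on purpose"
    # Only work that survives every filter deserves your concentration now.
    return "concentrate"
```

For example, `focus_funnel(False, True, False, False)` returns `"automate"`, while a task that survives all four filters returns `"concentrate"`.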

Priority Dilution is when you delay the most important tasks by allowing your attention to shift to less important but urgent tasks. Priority Concentrate is when you concentrate on the most important tasks, because that is your priority NOW. You can find more information in the book, but I have used these ideas to help my daily decision making.

Based on the above readings, the research I have done, and applying these concepts in real life, I came up with different hacks to become a mindful tester. They can be broadly classified into three categories, each containing different tips and tools that can make you more efficient:

  • Planning Hacks
  • Mindfulness Hacks
  • Social Hacks

PLANNING HACKS


Daily and Weekly Planning

  • Every morning, spend just 5–10 minutes reviewing what tasks need to be accomplished for the day with the help of a to-do list.
  • Prioritize the list based on the focus funnel described in the initial section. My motto is to finish the top three items on my list every day; the rest get carried over to the next day.
  • Schedule blocks of uninterrupted time, each focusing on one particular task. I usually try to do three blocks per day (about 45 minutes each).
  • At the end of the day, spend 5 minutes reviewing what tasks were accomplished, what gets carried over, and what needs to be accomplished the next day.
  • As part of my job I need to do five things: learning, reading, writing, conference presenting, and working with customers.
  • I want to make sure I dedicate time to each one of them, so I already know the minimum number of time blocks I need for each: two time blocks per week for learning, writing, and conference-related tasks, with reading done during the breaks between time blocks. Customers are always the first priority and usually take 1–3 time blocks a day, depending on what customer tasks need to be accomplished that particular day.
  • I usually keep Monday through Wednesday as my customer days, and keep Thursday and Friday for writing and for tasks that need my creativity and thinking.

You can always customize the above routines based on your tasks and context.

Meetings

There are great talks about how unproductive meetings have a huge negative impact on companies and people. Check out the TED talks by David Grady and Jason Fried (co-founder of Basecamp and 37signals) for more information.

Based on these talks, it is clear that unproductive meetings have a detrimental impact on overall workplace productivity. So how do you avoid them? Here are some tips to help you make that decision.

  • First of all, decide whether a meeting is necessary to discuss a particular issue. Is this something that can be solved by talking to the person directly? Can it be an email conversation?
  • If you have decided a meeting is necessary, invite no more than 7–8 people. Research suggests that having more than 8 people in a meeting prevents clear decisions from being made at the end of it. Remember, if you have 15–20 people in a meeting, it is a conference, not a meeting.
  • Meeting invites need to have a clear title and agenda
  • Everyone needs to come prepared for the meeting
  • Start and finish meetings on time
  • Have clear action items and follow up on them

Remote Meetings

How many of you have experienced these situations when attending remote meetings?

  • Not announcing who is in the meeting room
  • Not paying attention to food chomping, coffee slurping, and the sounds generated by putting a laptop, notepad, or coffee mug down on the table. This may seem trivial to people in the room, but for the person joining remotely it sounds like a loud noise going right through their ear buds, especially when wearing noise-cancelling headphones. I have been there and done that.
  • Not sharing screens while going over presentations, or when someone is talking about something he/she is projecting on the screen in the room. The remote employee is left to tap into their visualization techniques, make assumptions about what the presenter is showing, and create their own interpretation of things. That is a really useful technique for meditation, but not so much for work meetings.
  • Finally, the thing that annoys me the MOST: lots of little conversations happen throughout the room during the meeting, and it sounds like the remote employee is in a fish market with no clue what is happening.

So, how to avoid these problems?

  • Be cognizant of the fact that there are remote employees/attendees in the meeting
  • Ensure you announce the people who are present in the room
  • The facilitator should ensure there is a webcam so that the attendees of the meeting can see each other; this gives a feeling of inclusion.
  • Ensure there is only one conversation taking place at any point during the meeting. Also, check in periodically with remote attendees in case they have questions or things to add to the current conversation.
  • Try to use remote collaboration tools like Google Hangouts, Skype, Zoom, WebEx, and other software that helps bring everyone together and encourages more collaboration.
  • Act like an adult: stop slamming things down on the table, banging on the table, or chomping on food near the speakerphone.

E-Folders

Organizing papers into physical folders has been a productivity hack for decades, and the same applies to electronic content. On a daily basis we receive numerous emails, and we also accumulate a lot of content that clutters our desktop screens. A good way to handle this electronic clutter is to use email filters to automatically sort incoming messages into their respective folders, and to maintain a folder structure on our laptops that puts relevant content into the appropriate buckets.

Emails

Email has become the universal de facto standard for communication. Research shows that a staggering 269 billion emails are sent globally each day. It's estimated that by the end of 2021, over 316 billion emails will be sent each day and there will be 4.1 billion email users – more than half the world's population.

That being the situation, how do we ensure our email communications are useful, productive, and quick to read? Here are some tips:

  • An email should MOSTLY be 3–4 bullet points highlighting the key things we want to convey. If there is more information to convey, we are better off talking to the person directly or calling over the phone.
  • An email chain with more than two rounds of replies is an immediate RED FLAG; it should be stopped then and there. It is like a virus that will start spreading and affecting everyone's productivity and time. It is a sign that the people involved need to talk directly or, in the worst case, have a short meeting ONLY with the people necessary to get clarity.
  • An email should fit within a normal 11–13″ laptop screen without needing to scroll.

Reminders

On a daily basis, there are numerous follow-ups to do, timely tasks to accomplish, and miscellaneous things to take care of at a certain time of the day, week, or month. To ensure we do not forget any of these, it is a good idea to set reminders. There are various ways to do this; I personally set reminders using Google Calendar, sticky notes, and Asana, the task management tool. The sticky notes go on my desk, and Google Calendar reminders are accessible anywhere, at any time, as they seamlessly sync across all my devices.

Coming to work early

This is one of the most overlooked aspects of productivity. When we come to work early and no one is in the office, we can get a lot done before the regular day even starts. For example, say your office's usual work hours are 9 AM – 6 PM. Just by coming in at 7 AM and getting some high-priority work done before the day starts, you gain a huge feeling of accomplishment and enable yourself to do highly focused, uninterrupted sessions of work.

Working from Home

Nowadays, more companies are encouraging their employees to adopt flexible work-from-home options, as those employees are able to get a lot more work done: they do not waste time commuting to and from the office or getting distracted by constant interruptions at the workplace. According to a recent study, employees who work from home at least once a month are 24% more likely to feel happy and productive at work. Another study found that companies that allow remote work see 25% less turnover than companies that don't.

MINDFULNESS HACKS

Focusing/De-focusing

The human mind can focus for a maximum of about 45 minutes at a stretch, after which it is necessary to take a 5- to 20-minute break to recharge. In Deep Work, Cal Newport suggests having one-hour time blocks with 10-minute breaks in between, and doing 3–4 such blocks of highly focused, productive work per day.

I currently have a 100% remote job, which makes it all the more important to have routines for doing focused work. I end up doing 3–4 timeboxed sessions per day with a 5- to 10-minute break between them.

Breaking for Lunch

We need to break for lunch and physically get out of the office to recharge. We can do many activities during the lunch break: reading, listening to podcasts, watching TV, hanging out with co-workers, or just sitting in the sun and enjoying nature. Some people choose to work out during this time as well. Scientific research shows the value of well-defined lunch breaks in recharging our minds and keeping us productive for the rest of the day.

Being Mindful and doing focused work

On average, people spend about 4 hours a day on their smartphones, and half of that time is spent on Facebook, Instagram, Snapchat, Twitter, and YouTube. That is 20 hours out of a 40-hour workweek. This being the case, it becomes all the more important to track how much time you spend on your phone if you want to increase productivity at work. Apple came out with Screen Time, a feature built into the operating system that tracks phone usage. Another app I personally use, called Moment, also has a feature to exclude apps you want to ignore in your usage tracking, such as listening to Spotify while working.

There is also research suggesting that it takes 23 minutes and 15 seconds to regain concentration on your original task after an interruption. At work we face constant interruptions that prevent us from doing good-quality work. To avoid them, we can do the following:

  • Book a conference room or find a quiet place to do focused work
  • Put our phones and laptops on Do Not Disturb mode to prevent getting distracted from messages from our phones or other communication channels like Slack

Finally, I have personally found a lot of value in meditating before starting the day. There are various benefits to meditation, one of which is becoming more mindful and focused in work and life. I use the Headspace app for meditation in the morning. It has different guided exercises to help reduce stress, increase creativity, be more productive, and be aware of your breathing throughout the day.

SOCIAL HACKS

Being Social

After family, work is where we spend the majority of our lifetime. That being the case, why not make it a fun experience and get to know our team and co-workers on a personal level? It is good to be social and approachable. Here are some tips for doing this:

  • Try to hang out with your coworkers one day a week or month. You could do a social activity together and get to know each other.
  • Make it a point to smile and say "Hi" to at least two people every day. This simple gesture can change your work and personal life dramatically.

Appreciate Good Work

People value words of encouragement and appreciation more than monetary benefits; numerous studies have proved this point. That being the case, it helps to build better relationships through words of appreciation. We can send notes of appreciation via email or thank-you notes, and let peers know when someone does a great job that made a positive impact on another individual or team.


We could say automation is the whole raison d'être for software development. As developers, we seek to employ automation in order to solve problems with more efficiency than before. And we solve problems not only for our clients or employers but also for ourselves. We write scripts and software utilities to automate the packaging and deployment of our applications. We employ plugins and other tools that can automatically check our code for common mistakes and even fix some of them.

Another instance of automation is browser automation. And that’s what this post is all about. If the term doesn’t ring a bell, never fear. The post will do justice to its title and answer the question it poses. And after defining the term, we’ll proceed to show scenarios where browser automation is the right tool for the job. Then, to wrap up the article, we’re going to give you tips so you can get started with browser automation ASAP. That’s what, why, and how in just a single post.

Let’s get started.

Browser Automation: Definition

We’ll start by defining browser automation. We could try something like “‘Browser automation’ means to automate the usage of a web browser” and leave it at that. But that would make for a definition that’s both technically correct and useless unless we define automation. That word is one that we often take for granted, so I think it might be useful to actually take a step back and define it.

Defining Automation

Here’s what Merriam Webster has to say about automation:

1: the technique of making an apparatus, a process, or a system operate automatically
2: the state of being operated automatically
3: automatically controlled operation of an apparatus, process, or system by mechanical or electronic devices that take the place of human labor

Interesting. Now take a look at Wikipedia’s definition:

Automation is the technology by which a process or procedure is performed with minimum human assistance

Bots Don’t Get Bored

What do those two definitions have in common? At least for me, the point that’s really obvious is that automation seeks to remove human intervention from the equation. And why would we want to do that? While we humans are great at a lot of things, we’re also terrible at a lot of things—especially tasks of a repetitive nature. When performing repetitive, boring tasks, we tend to get…well, bored. Our mind easily zooms out of focus as we enter autopilot mode, and soon we’re making mistakes.

But since we’re a pretty smart species, we came up with a device that’s way better—and faster—than we are at performing repetitive tasks. And of course, you know I’m talking about the computer. With all of that in mind, here comes my upgraded definition for browser automation:

Browser automation is the process of automatically performing operations on a web browser, in order to achieve speed and efficiency levels that wouldn’t be possible with human intervention.

It’s far from being a perfect definition, but it’s already something we can work with.

Browser Automation: Scenarios for Usage

Why would someone want to automate the operation of a web browser? As it turns out, there are plenty of use cases for browser automation, and that’s what this section will cover.

Automatic Verification of Broken Links

It’s frustrating to click on a link only to see the infamous “404 Not Found” message. If you have a site, then you should definitely fix the broken links in it or, alternatively, delete them. But before you go about doing that, you first need to find them. This might not prove too much of a problem if your site has just a handful of pages. But think about a complex database-backed portal, with hundreds or even thousands of pages, mostly dynamically generated!

Now, what before was a minor nuisance becomes a herculean task. And that’s where browser automation fits in. You can employ tools that will automatically go through your site, verifying every link and reporting the ones that are broken.
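To make this concrete, here is a minimal sketch of such a checker using only Python's standard library. The helper names are my own, and a production crawler would also need to resolve relative URLs, throttle its requests, retry transient failures, and respect robots.txt.

```python
# A minimal link-checker sketch: extract hrefs from a page, then probe each
# one with a HEAD request and report anything that answers with an error.

from html.parser import HTMLParser
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def extract_links(html):
    """Return every href found in an HTML document, in order."""
    parser = LinkExtractor()
    parser.feed(html)
    return parser.links


def check_link(url, timeout=10):
    """Return (url, status_code); status is None if the host is unreachable.
    Hypothetical helper for illustration only."""
    try:
        with urlopen(Request(url, method="HEAD"), timeout=timeout) as resp:
            return url, resp.status
    except HTTPError as err:
        return url, err.code          # e.g. 404 for a broken link
    except URLError:
        return url, None              # DNS failure, refused connection, etc.
```

Running `extract_links` over each page of a site and flagging any `check_link` result of 400 or above gives you a crude but workable broken-link report.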

Performance Testing

Performance is a huge concern when talking about software development. In this era of high-speed connections, most users will get frustrated if the site they’re trying to access is even slightly slower than they’d expected. Besides, Google itself penalizes slower sites on its search result pages.

Browser automation can also help with that. It’s possible to employ browser automation tools to do load and performance testing on your websites. This way, you can not only verify your web app’s performance on the average case but also predict its behavior under the stress of traffic that’s higher than usual.

Web Data Extraction

When the World Wide Web was invented 30 years ago, its purpose was to allow researchers to easily propagate their works. In other words, humans put stuff on the web for other humans to consume. In the decades that followed, we watched a rise in the non-human use of the web.

Browser automation definitely plays a part in this. Web data extraction, also known as web scraping, is another use case for browser automation tools. From data mining to content scraping to product price monitoring, the sky is the limit for the uses of web data extraction.
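As a toy illustration of the scraping side, here is a sketch that pulls product prices out of an HTML snippet. The `span class="price"` structure is hypothetical; real sites vary widely, and their terms of service are worth checking before you scrape them.

```python
# A toy web-scraping sketch: collect the text of every <span class="price">
# element in a page, using only the standard library's HTML parser.

from html.parser import HTMLParser


class PriceScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self._in_price = False
        self.prices = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and ("class", "price") in attrs:
            self._in_price = True

    def handle_endtag(self, tag):
        if tag == "span":
            self._in_price = False

    def handle_data(self, data):
        if self._in_price:
            self.prices.append(data.strip())


def scrape_prices(html):
    """Return the text content of every price span, in document order."""
    scraper = PriceScraper()
    scraper.feed(html)
    return scraper.prices
```

A price-monitoring job would fetch the page on a schedule, run something like `scrape_prices` over it, and alert when a value changes.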

Automated Testing

Last but not least, we have what's probably the poster child of browser automation use cases: automated testing. Yes, we just talked about performance testing and broken link verification, and those things are also automated tests. But here we're talking about general, end-to-end functional tests. For instance, you might want to check that when a user enters an invalid login and/or password on a login screen, an error message is displayed.

Such tests really shine when you can effectively use them as regression tests. If a problem you already fixed returns in the future, you have a safety net that will warn you. And that safety net is way faster and more efficient than human testers—at a fraction of the cost.
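To make the idea concrete, here is a minimal sketch of the invalid-login check described above. `LoginPage` is a stand-in that validates against a fixed, made-up account so the example is self-contained; in a real suite the same interface would wrap a browser-automation driver such as Selenium WebDriver.

```python
# A sketch of an end-to-end style regression check for a login screen.
# The credentials and error text are illustrative fixtures, not a real API.

VALID_USER, VALID_PASSWORD = "alice", "s3cret"


class LoginPage:
    """Stand-in for a page object that would drive a real browser."""

    def __init__(self):
        self.error_message = None

    def log_in(self, username, password):
        if (username, password) != (VALID_USER, VALID_PASSWORD):
            self.error_message = "Invalid username or password"
            return False
        return True


def test_invalid_login_shows_error():
    page = LoginPage()
    # A wrong password must fail and surface a visible error message.
    assert not page.log_in("alice", "wrong-password")
    assert page.error_message == "Invalid username or password"
```

Wired up to a real driver, the same test becomes the safety net described above: run it on every build, and a regression in the login flow fails the suite immediately.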

How to Get Started With Browser Automation

Learning browser automation can seem like a daunting task. It's an enormous topic, and there's a lot to know. But it's no different from any other area in tech. Approach it the way you would approach learning a new programming language or framework: by doing it.

First, think of at least one use case for browser automation in your current organization. We’ve just shown you some, and I’m sure you can think of many more. Some people call this “scratching your own itch,” and it’s an effective way of motivating yourself to learn something.

As soon as you have a small, discrete problem you think you can solve with browser automation, start looking around for tutorials on how to get started with some of the available tools. When you get stuck, look for help in the documentation of the tool you're trying to use. You can also search for help on Stack Overflow under the "browser automation" tag. And of course, there's always Google.

Build a minimum viable example of browser automation. As soon as you get something that works, no matter how simple it is, that's a milestone. You can use it as a foundation upon which to build more sophisticated and complex approaches.

Where to Go From Here?

Today’s post was meant to give you a quick primer on browser automation. We started by defining the term, then proceeded to show some common use cases for the technique. Finally, we gave you tips on how to get started.

As I like to say when writing introductory articles like this one, this was just the tip of the iceberg. There’s much more to browser automation than what could be covered by a single blog post. Where do you go from here then?

There’s no silver bullet: the answer is to keep studying and practicing. Continue to evolve your first minimum test suite and learn from it. You should also keep an eye out for what’s happening in the world. There are interesting developments, such as the use of machine learning to help developers with the creation, running, and maintenance of test cases.

Additionally, stay tuned in this blog for more automation-related content. Thanks for reading and see you next time!

This post was written by Carlos Schults. Carlos is a .NET software developer with experience in both desktop and web development, and he’s now trying his hand at mobile. He has a passion for writing clean and concise code, and he’s interested in practices that help you improve app health, such as code review, automated testing, and continuous build.

We were recently at the STP Spring 2019 conference. Testim was one of the sponsors of the event, and we were also there to give a talk on implementing ATDD in large-scale agile projects and a workshop on paired session-based exploratory testing. It was an amazing conference in terms of the content, speakers, attendees, and location.

The conference was held at the Hyatt Regency next to the SFO airport and San Francisco Bay – a beautiful location, easily accessible to everyone. As for the conference itself, there was a great collection of talks and workshops for attendees to learn from and apply to their daily project activities. The content covered different testing strategies and approaches for manual and automated testing, applying AI in software testing, leadership techniques and traits for agile testing, testing in DevOps/continuous delivery, and performance testing.

This is one of the reasons we have continued to sponsor STP conferences over the past couple of years: they make testing inclusive by bringing people from different countries together in one location to share their experiences and learn from each other.

We met a lot of our friends from SauceLabs, Applitools and other companies at the conference. We also had our own sponsor booth.

NOTE: In case you are interested in test driving Testim yourself, just fill in your details here and we will hook you up with:

  • Freebies for you and your team
  • Unlimited access to Testim for 14 days
  • 24/7 Customer Support
  • 1 Hour Free Test Design and Automation Consultation with me

As mentioned earlier, on behalf of Testim, I also gave a talk and a workshop. The talk was titled "ATDD (Acceptance Test Driven Development) Is A Whole Team Approach – A Real Case Study". It was about my real-life experiences implementing ATDD in a large-scale agile project. I discussed the problems my team had before implementing ATDD and how I trained the entire team of 25 people on different practices to encourage collaboration and learning and to instill the mindset of One Team, One Goal. I also discussed the process changes that happened due to ATDD, how my team leveraged test automation throughout this process, and finally shared the lessons learned from the implementation.

The workshop I did was titled "Unwrapping the box of Paired Testing". In it, I shared different testing strategies for doing quick tours of your applications, based on my real-life experiences. I discussed what session-based exploratory testing is and used a template I created to do paired exploratory testing on live applications.

Below are some articles I wrote covering some of the details discussed in my talk and workshop:

The power of Session Based Exploratory Testing

A Quick Guide to Implementing ATDD

Finally, we took part in a five-minute lightning talk series STP hosted called "Testing Stories", in which people shared really insightful real-life testing stories in just five minutes.


Overall, we had a great time at the conference and look forward to the next event to meet new people and make lasting relationships. Thanks again to the STP organizers for putting on a great show.

Testim gives you the ability to override timeouts within a test, outside a test, and across a group of tests. This helps control how long tests wait for a particular condition to be met; if the set timeout period expires first, the test fails gracefully. The different ways to handle timeouts are as follows:

Tip 1: Timeouts within a step

Every step you record in Testim has a default timeout value of 30 seconds. You can override this value by following these steps:

  • Navigate to the properties panel of the step
  • Select “Override timeout” option
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

Tip 2: Timeouts within Test Configs

You have the ability to change the timeout for all the tests using a particular test config. You can do this by following the below steps-

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click on the edit config icon next to the existing resolution
  • Change the default timeout value from 30 seconds to the desired timeout value
  • Click on Save

NOTE: You can also edit the Step delay value in the edit config screen

Tip 3: Setting timeout for test runs

To abort a test run after a certain timeout has elapsed, you can use the CLI --timeout option. The default value is 10 minutes.

Usage example:

npm i -g @testim/testim-cli && testim --token "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ" --project "vZDyQTfE" --grid "Testim-grid" --timeout 120000

This timeout value can also be set in the Advanced Scheduler screen.


Exploratory testing has been around for several decades now; every tester has knowingly or unknowingly practiced it in their daily testing activities. There are various definitions and methodologies surrounding this testing approach, one of which is session-based exploratory testing (SBET). Some confuse this approach with ad hoc testing, without realizing it is far more powerful and structured. Here is a formal introduction to the approach and how to use it in your daily testing activities.

What is SBET?

SBET consists of time-boxed, uninterrupted testing sessions focused on a particular goal (a module, feature, or scenario). There are different approaches and templates used for it.

Advantages of SBET

SBET can be used in any domain, project, or application where you want quick feedback about the application instead of writing detailed test cases (scripted testing). You get more flexibility in exploring the product, and you get to use your creativity within the boundaries of the session's goal.

How to do it?

I personally have had a lot of success pairing up with another tester or developer: we both execute the same scenario on different devices or environments and discuss our observations. For example, say I am testing a mobile web application; I will have my colleague test the web app on an Android tablet while I use an Apple phone. Then we both execute the same scenario and compare notes. Just by doing this you can uncover lots of rendering issues, inconsistencies, and unexpected behavior.

Structure of SBET

SBET usually follows this structure:

  • 45-90 minute Time Boxed sessions
  • Have Charter/Goal document to guide the session
  • Note down test ideas/scenarios
  • Paraphrase/Debrief the observations
  • Discuss Observations with a developer/business person
  • Log Defects based on the discussion

All the session notes are contained in what is called a Charter Document. This is a document that contains all the details about the session including the goal of the session, necessary resources used in the session, task breakdowns containing time spent on performing different tasks during the session, session notes containing helpful information along with the test ideas and observations, issues uncovered during the session and any screenshots (if necessary).

So everyone knows the details about the session and how much time was spent on it. The document can be attached to a story or any repository where you house your test artifacts.
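The charter fields described above can be sketched as a small data structure. The field names below follow this article's description; they are illustrative, and any real team would adapt them to its own template.

```python
# A sketch of the charter document's contents as a Python dataclass,
# mirroring the fields described in the text above.

from dataclasses import dataclass, field
from typing import List


@dataclass
class SessionCharter:
    goal: str                                                # focus of the session
    resources: List[str] = field(default_factory=list)       # builds, devices, accounts
    task_breakdown: List[str] = field(default_factory=list)  # tasks and time spent
    session_notes: List[str] = field(default_factory=list)   # test ideas, observations
    issues: List[str] = field(default_factory=list)          # bugs uncovered
    screenshots: List[str] = field(default_factory=list)     # file paths, if any


# Hypothetical usage during a session:
charter = SessionCharter(goal="Explore checkout flow on mobile web")
charter.session_notes.append("Coupon field accepts expired codes")
charter.issues.append("Total price not re-rendered after coupon applied")
```

Serializing such a record (to JSON, a wiki page, or a story attachment) gives the traceability the next paragraph describes.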

Doing a number of SBET sessions helps to

  • Get a better idea about the product features
  • Uncover bugs that would be otherwise hard to find with scripted/automated testing
  • Identify high risk areas
  • Identify mundane, time-consuming manual testing tasks that are good candidates for automation

How does it fit into automation?

Doing SBET helps set the stage for automation. It helps you learn about the application and think about different scenarios to automate. It is good to have SBET and high-level automated tests running in parallel, as together they give you good coverage of the application. The time you invest in automation depends on your context, i.e., how many people are available to do automation, their skill sets, the cost vs. value of automating, the timeline, and what tools and frameworks you are using.

After a month or two of getting to know the product by doing SBET, you can start doing some time boxed experimentation with different tools that are available for automation. Then you can practically see what fits your needs. Once you identify the tool, you can start automating the different scenarios.

How does SBET fit into Agile projects?

Given the flexibility SBET provides, the next question that quite often comes to mind is – when is the right time to do SBET? The answer is: it depends on the context of the project. If you are the lone tester or have only 2-3 people on the testing team, you can start doing ET sessions on each user story. Once you get a fair understanding of the functionality of the application, you can start writing high-level test cases and pick out scenarios for test automation based on the knowledge gained from these sessions.

If you are working on a large-scale agile project and have a big test team, then you could follow the approach below:

  • For each story, discuss the acceptance criteria. Based on that discussion, identify scenarios that can/cannot be automated
  • For those scenarios that have to be tested manually, figure out the risk and impact associated with the story. For example, if the story is about implementing the payment functionality of a banking system, there are high risks and a huge impact to the customer and the organization if the feature is not implemented correctly and we do not get proper test coverage. On the other end, if a story is about increasing the font size on the web page from 12 points to 15 points, the risks and impact to the customer are a lot lower. Do customers really care if the font size was not changed correctly? The answer could be yes, but the impact is minimal, as customers would still be able to perform the required transactions in the application. But if the payment system is not working, then customers cannot make a payment, which is a huge deal
  • Once we have identified the story as high risk and high impact, we can write high-level test cases covering the acceptance criteria and some edge cases. These can then be supplemented by one or more ET sessions to explore certain aspects of the functionality in more detail

Once an ET session is complete, all the documentation generated from the session (usually ET charters filled with information) can be attached to the specific story for better traceability, letting stakeholders know the details of the ET session, including the different issues uncovered. This way, everything is documented and available for future reference.

During the regression testing phase, one or more of these ET charters can be reused to perform additional sessions. Some of the scenarios from an ET session can be converted into high-level test cases or automated test cases. Thus, ET sessions can start right from the story testing phase and extend all the way to the acceptance testing phase.

Remember, SBET is NOT a replacement for scripted test case execution but is performed COMPLEMENTARY to it. It is an approach that helps in exercising the creativity and experience of the tester to get more information about the product. As a result, stakeholders can make informed decisions.

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Advanced Analytics and Results Export. Check them out and let us know what you think.

Advanced Analytics

What is it?

You will now be able to see an aggregated failure summary report on the suite runs page. The report contains a snapshot of the test failures and a pie chart. To help with debugging, clicking any error group or one of the pie chart segments filters the view to show only the tests that failed with the selected error. This speeds up the troubleshooting of failed tests by pinpointing the root cause of run failures.

Why should I care?

Sometimes, a single problem in one shared step (for example, a “login” group) can cause many tests to fail. With the release of this feature, you will now be able to quickly home in on those high-impact problems and dramatically reduce the time it takes to stabilize the suite.
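The aggregation behind such a failure summary is essentially a grouping of failed tests by error message. A rough sketch of the idea (the test names and error strings below are made up for illustration):

```python
from collections import Counter

# Hypothetical sample of failed test results, similar in spirit to what
# a run report might contain. Names and fields are illustrative.
failures = [
    {"test": "checkout_flow", "error": "Element not found: #login-button"},
    {"test": "profile_update", "error": "Element not found: #login-button"},
    {"test": "search_results", "error": "Timeout waiting for page load"},
    {"test": "add_to_cart", "error": "Element not found: #login-button"},
]

# Group failures by error message, mirroring an aggregated summary report.
error_groups = Counter(f["error"] for f in failures)

for error, count in error_groups.most_common():
    print(f"{count} test(s) failed with: {error}")
```

In this toy data, a single shared-step problem (the login button) accounts for three of the four failures, which is exactly the kind of high-impact root cause the report surfaces.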

 

Results Export Feature

What is it?

You will now see an export button on the results pages (suite runs, test runs and single suite view). Clicking this button downloads the results data as a CSV file, which can then be imported into Excel or Google Sheets.

NOTE: Only the data that is currently presented in the UI will be included in the CSV file. For example, if “Last 24 hours” is selected and the status filter is set to “Failed”, the CSV file will only include failed tests from the last 24 hours.

Why should I care?

You can now easily share test results across teams, with people who use Testim and those who do not. This also gives you the flexibility to feed the generated CSV file into any external tool or framework for more customized reporting. The possibilities are endless.
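As a sketch of that workflow, here is how an exported results file might be processed with Python's standard csv module. The column names below are assumptions for illustration, so check them against your actual export:

```python
import csv
import io

# Hypothetical excerpt of an exported results file; the actual column
# names in the CSV export may differ.
exported = io.StringIO(
    "Test Name,Status,Duration\n"
    "Login flow,Passed,12.4\n"
    "Checkout,Failed,30.1\n"
    "Search,Passed,8.9\n"
)

rows = list(csv.DictReader(exported))

# Pull out the failing tests and compute a simple pass rate.
failed = [r["Test Name"] for r in rows if r["Status"] == "Failed"]
pass_rate = sum(r["Status"] == "Passed" for r in rows) / len(rows)

print(f"Failed tests: {failed}")      # Failed tests: ['Checkout']
print(f"Pass rate: {pass_rate:.0%}")  # Pass rate: 67%
```

The same parsed rows could just as easily be pushed into a spreadsheet, a database or a custom reporting dashboard.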

Testim gives users the flexibility to run tests on different screen resolutions. But sometimes this can get confusing, where some tests run on a certain resolution and newly created tests run on a different one. Below are two simple tips to set the screen resolution for a particular test and to apply it globally to all the tests in the project.

Tip 1: To ensure a test runs on a particular screen resolution each time you run it, follow these steps

  • Navigate to the properties panel of the setup step (first step in the test)
  • Click the “Choose Other” option to select a resolution from the existing config list, OR
  • Click the edit config icon next to the existing resolution
  • Set the desired screen resolution you want
  • Give a name for the newly created resolution
  • Then click “Save”

 

Tip 2: To apply an existing/new resolution to all the tests in your Test List, follow these steps

  • Navigate to the Test List view
  • Click on “Select All”
  • Click on the “Set configuration for selected tests” icon
  • Choose the required resolution you want to be applied to all the tests

NOTE: Test configs can also be overridden at runtime via the --test-config parameter in the CLI and the Override default configurations option in the Scheduler.

The Software Development Lifecycle (SDLC) consists of various roles and processes that have to mesh together seamlessly to release high-quality software. This holds true right from the initial planning phase all the way to production release and monitoring. In terms of roles, we have designers, business analysts (BAs), developers, testers, scrum masters, project managers, product owners (POs) and technical architects (TAs), who bring varying levels of experience and skill sets to the project. They collaborate to discuss and implement different aspects of the software. In terms of processes, based on team size, release schedules, availability of resources and complexity of the software, the number of processes can vary from none at all to strict quality gates at different phases of the SDLC.

What is the Knowledge Gap?

As teams start collaborating on different features of the software, they often run into situations like these:

  • The PO comes up with a feature to implement and each team member has different interpretations of how the feature should work
  • The BA writes requirements that are hard to understand and implement, due to lack of understanding of the technical aspects of the system
  • The TA explains how a feature should be implemented using technical jargon that is hard for designers, POs, BAs and testers to understand
  • The developer develops the feature without paying attention to the testability of the feature
  • The tester sits in on code reviews and the developers assume he/she has the same level of technical expertise as them when explaining their implementation
  • The tester does not get enough time to complete their testing due to tight release schedules
  • The developer fails to do development testing and makes testers responsible for the quality of the product
  • There is no clear distinction of responsibilities on who does what task in the SDLC, and everyone assumes someone else will do the tasks. As a result, a majority of them never get done

And so on…

Now, you may ask: why do teams get into the above situations more often than expected? The answer is that there is a knowledge gap in teams. This is especially true when teams have a mix of entry-level, mid-level and expert-level resources, and each one makes different assumptions about the skill set, experience and domain knowledge every individual brings to the table.

Also, these gaps can stem from a more granular level when teams are using different tools. For example, when customers use Testim, we have observed firsthand that different developers/testers think about and use our platform differently.

  • Manual testers see Testim as a time saving tool that helps them quickly create and run stable tests, and as something that can help them in reducing the amount of manual effort it takes to test applications
  • Automation Engineers see Testim as an integrated tool that helps them to do coded automated testing with the help of JavaScript and API testing in one single platform instead of using multiple tools for functional and API testing
  • Developers see Testim as a quick feedback tool that helps them to run several UI tests quickly and get fast feedback on the application under test. They also recognize the ability to do more complex tests by interacting with databases and UI, all in one single platform
  • Release Engineers see Testim as a tool that can be easily integrated into their CI/CD pipeline. They also recognize the ability to trigger specific tests on every code check-in, to ensure the application is still working as expected
  • Business and other stakeholders view Testim as a collaborative tool that can help them easily get involved in the automation process irrespective of their technical expertise. They also recognize the detailed reports they get from the test runs that eventually helps them to make go/no go decisions

As we can see, within the same team, people have different perceptions of tools that are being used within the project as well. These are good examples of knowledge gaps.

How do we identify knowledge gaps?

Identifying knowledge gaps in the SDLC is critical not only to ensure the release of high-quality software but also to sustain high levels of team morale, productivity, job satisfaction and the feeling of empowerment within teams. The following questions help to identify knowledge gaps:

  • What are the common problems that occur in each phase of the SDLC?
  • What processes are in place during the requirements, design, development, testing, acceptance and release phases of the SDLC?
  • How often does a requirement change due to scope creep?
  • How effective is the communication between different roles in the team?
  • Are the responsibilities of each team member clearly identified?
  • How visible is the status of the project at any instant of time?
  • Do developers/testers have discussions on testability of the product?
  • How often are release cycles pushed to accommodate for more development and testing?
  • Are the teams aware of what kind of customers are going to use the product?
  • Have there been lapses in productivity and team morale?
  • Is the velocity of the team stable? How often does it fluctuate and by how much?

In terms of tools being used:

  • How are teams using different tools within the project? Are they using tools the right way?
  • Are all the resources sufficiently trained to use different tools within the project?
  • Does one sub-group within a team have more problems in using a tool than others?
  • How effective is a particular tool in saving time and effort to perform different tasks?

Answering these questions as a team helps to identify the knowledge gaps and helps in thinking about solutions to these problems.

How do we bridge the knowledge gap?

There are 5 main factors that help to bridge the knowledge gap in teams. They are as follows:

  1. Training

Sufficient training needs to be given to designers, developers, testers, scrum masters and project managers to help them do their jobs better, both in the context of the project and in using different tools. Doing this helps designers understand how mockups need to be designed so that developers can effectively implement the feature. Testers can attend code reviews without feeling intimidated and, with sufficient technical training, use tools more effectively. Developers will understand why thinking about the testability of the feature being implemented is important, and will realize that tools can aid their development testing effort. The scrum master can better manage tasks in the project, and project managers can ensure they help the team collaboratively meet release schedules and deadlines. Finally, stakeholders can get a high-level overview of the project when they learn what reports are generated by different tools and how to interpret them.

  2. Visibility

If we want teams to start working together seamlessly, like a well-oiled machine in an assembly plant, we need to make the results of everyone’s effort visible to the entire team. It is important for us to know how our contributions help with the overall goal of releasing a high-quality product by the scheduled date. There are various ways to increase visibility in teams, such as:

  • Checklists – a list of items to be done in each phase of the SDLC that helps everyone be aware of what is expected of them. This is especially helpful when the team consists of members with varying skill sets and experience. If the items in the list are marked as DONE, then there is no ambiguity in terms of what tasks have been completed
  • Visual Dashboards – Another solution is having visual dashboards giving a high-level overview of the project’s health and status. This helps not only stakeholders but also individual contributing team members. These can be created on whiteboards, easel boards or in software that is accessible to everyone. Every day, during stand-up meetings, the team should make it a point to review the dashboard and ensure everyone is aware of the high-level project status. For example, in Testim we provide a dashboard showing what percentage of test runs passed, the number of active tests, the average duration of tests, how many new tests were written, how many tests were updated, how many steps have changed and the most flaky tests in your test suite; all these details can be filtered to the current day, a 7-day or a 30-day period
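Dashboard metrics like the ones mentioned can, in principle, be derived from raw run records. A toy sketch of the arithmetic (the record fields here are illustrative, not Testim's actual data model):

```python
# Rough sketch of deriving dashboard-style metrics from raw test-run
# records. Field names and values are hypothetical.
runs = [
    {"test": "login", "passed": True,  "duration_sec": 10.0},
    {"test": "login", "passed": False, "duration_sec": 12.0},
    {"test": "cart",  "passed": True,  "duration_sec": 20.0},
    {"test": "cart",  "passed": True,  "duration_sec": 18.0},
]

# Percentage of runs that passed, distinct tests, and average run time.
pass_pct = 100 * sum(r["passed"] for r in runs) / len(runs)
active_tests = len({r["test"] for r in runs})
avg_duration = sum(r["duration_sec"] for r in runs) / len(runs)

print(f"Pass rate: {pass_pct:.0f}%")    # Pass rate: 75%
print(f"Active tests: {active_tests}")  # Active tests: 2
print(f"Avg duration: {avg_duration}s") # Avg duration: 15.0s
```

Filtering the `runs` list by timestamp before computing these values would give the per-day, 7-day or 30-day views described above.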
  3. Clear definition of responsibilities

There need to be clearly defined responsibilities in teams. Everyone needs to know why they are on the team and what tasks they need to accomplish on a daily, weekly, monthly and quarterly basis. Goals, objectives and expectations for each team member need to be clearly discussed with their respective peers/managers. This prevents a majority of the confusion that may occur around task completion.

  4. Empowering the team

In this day and age, where individuals are more technical and skilled, what they often lack is some level of empowerment and autonomy. Contrary to the popular belief that there needs to be one leader for the whole project, leadership responsibility should be divided among the roles in the team. There needs to be one point of contact each from the design, development and testing teams. Each point of contact, who in most cases also helps to lead their respective sub-team, meets up with the other leads and ensures all the sub-teams within the project are on the same page and working towards the same goals and objectives. This way, the whole team is empowered.

  5. Experimentation

Once the gaps are identified, the whole team needs to sit together (usually in retrospective meetings after sprints) to discuss different solutions to problems. Based on this, the team needs to experiment with different solutions and see what works well/doesn’t. This constant experimentation and feedback loop helps to make the team more creative and empowers them to come up with solutions that work for them.

In summary,

the “knowledge gap” has been one of the major obstacles preventing teams from reaching their fullest potential. Identifying and reducing these gaps will help increase efficiency and, as a result, lead to faster release cycles with higher quality. Testim can be used as one of the aids in this entire process. Sign up now to see how it helps to increase collaboration and bridge the gap.

 

Introduction

We work hard to improve the functionality and usability of our autonomous testing platform to support your software quality initiatives. This month we’re thrilled to release a few of your most requested features: Auto Scroll and Scheduler Failure Notifications. Check them out and let us know what you think.

Auto Scroll

What is it?

When an element on the page has moved, finding the target element may require scrolling even though scrolling wasn’t required when the test was initially recorded. Testim now does this automatically with Auto Scroll.

NOTE: Users also have the option to disable this feature, if required.

Why should I care?

You no longer have to worry about tests failing with “element not visible/found” errors when an element’s location changes on the page and scrolling becomes necessary. With Auto Scroll, Testim automatically scrolls to elements outside the viewport.

Scheduler Failure Notifications

What is it?

Users now have the ability to get email notifications on every failed test that ran on the grid using the scheduler.

Why should I care?

With the new “Send notifications on every failure” feature, users receive a notification every time a scheduler run fails, giving instant feedback on failed scheduler runs. This is unlike the “Notify on error” option, where users get a notification only once, when a scheduler run first fails; no new email is sent until the failed scheduler run is fixed.
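The difference between the two notification options can be sketched as simple logic (illustrative only, not Testim's actual implementation):

```python
# Sketch of the two notification behaviors described above.
# results: list of run outcomes (True = passed, False = failed).
def notifications_sent(results, mode):
    sent = 0
    previously_failing = False
    for passed in results:
        if not passed:
            if mode == "every_failure":
                sent += 1                 # email on every failed run
            elif not previously_failing:
                sent += 1                 # email only on the first failure
        previously_failing = not passed   # track whether we are in a failing streak
    return sent

runs = [True, False, False, False, True, False]
print(notifications_sent(runs, "every_failure"))    # 4
print(notifications_sent(runs, "notify_on_error"))  # 2
```

With the same run history, "Send notifications on every failure" emails on all four failed runs, while "Notify on error" emails only when a failing streak begins.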

 

We recently hosted a webinar on Real Use Cases for Using AI in Enterprise Testing with an awesome panel consisting of Angie Jones and Shawn Knight, with me as the moderator. There were a lot of great discussions on this topic, and we wanted to share it with the community as well.

Below you will find the video recording, answers to questions that we couldn’t get to during the webinar (these will be updated as and when we get more answers) and some useful resources mentioned in the webinar. Please feel free to share these resources with other testers in the community via email, Twitter, LinkedIn and other social channels. Also, in case of any questions, reach out to me at raj@testim.io or to any of the panel members.

Video Recording

 

Q&A

Any pointers to the existing AI tools for testing?

@Raj: I am assuming this question is about things to know with existing AI tools in the market. If this is the case, then first and foremost, we need to figure out what problems we are trying to solve with an AI-based tool that cannot already be solved with other possible solutions. If you are looking to decrease time spent on maintenance, get non-technical folks involved in automation and make your authoring and execution of UI tests much faster, then AI tools could be a good solution. I may be biased, but it is definitely worth checking out Testim and Applitools if any of the points I mentioned is your area of interest/pain point as well.

As discussed in the webinar, there are currently a lot of vendors (including us) who use all these AI buzzwords. This may leave you confused or overwhelmed when choosing a solution for your problems. My recommendation is:

  • Identify the problem you are trying to solve
  • Pick different AI tools, frameworks to help solve that problem
  • Select the one that meets the needs of your project
  • Then, proceed with that tool

 

As a tester working with Automation, what should I do to not lose my job?

@Raj: First of all, I believe manual testing can never be replaced. We still need the human mind to think outside the box and explore the application to find different vulnerabilities in our product. AI will be used to complement manual testing.

Secondly, we need humans to train these AI bots to simulate human behavior and thinking. AI is still in its initial stages and is going to take another 10-15 years to completely mature.

In summary, I think this is the same conversation we had 10 years ago when automated tools were coming onto the market. Then, we concluded that automated tools help to complement manual testing but NOT replace it. The same analogy applies here: AI is going to complement manual testing but NOT replace it.

As long as people are open to constantly learning and acquiring different skill sets, automation is only going to make our lives easier while we pivot and focus on other aspects that cannot be accomplished with automation. This mainly involves things related to creativity, critical thinking, emotion, communication and other things that are hard to automate. The same holds true for Artificial Intelligence. While we use AI to automate some processes to save us time, we can use that saved time to focus on acquiring other skills and staying abreast of the latest technology.

So the question here is not so much about automation/AI replacing humans, but about how we stay creative and relevant in today’s society. That is done by constant learning, development and training.

 

Do we have any open source tools on the market for AI testing?

@Raj: Not really. We do have a small library that was added to the Appium project to give a glimpse of how AI can be used in testing — https://medium.com/testdotai/adding-ai-to-appium-f8db38ea4fac?sk. This is just a small sample of the overall capabilities.

 

What should be possible in testing with AI in 3 years’ time? And how do you think testing will have changed (or not)?

@Raj: We live in a golden age of testing, where there are so many new tools, frameworks and libraries available to us to help make testing more effective, easier and more collaborative. We are already seeing the effects of AI-based testing tools in our daily projects, with the introduction of new concepts in the areas of element location strategies, visual validation, app crawling and much more.

In 3 years, I can see the following possibilities in testing:

  • Autonomous Testing

I think autonomous testing will be more mature and a lot of tools will include AI in their toolsets. This means we can create tests based on actual flows performed by users in production. The AI can also observe and find repeated steps and cluster them into reusable components in your tests, for example login and logout scenarios. So now we have scenarios that are created based on real production data instead of us assuming what the user will do in production. This way, we also get good test coverage based on real data. Testim already does this, and we are trying to make it better.
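The clustering idea can be illustrated with a toy sketch: count repeated step windows across recorded flows, and flag sequences that occur in every flow as candidates for a reusable group (all step names below are made up):

```python
from collections import Counter

# Toy sketch of clustering repeated step sequences from recorded user
# flows into reusable groups. Flow and step names are illustrative.
flows = [
    ["open", "enter_user", "enter_pass", "click_login", "search"],
    ["open", "enter_user", "enter_pass", "click_login", "add_to_cart"],
    ["open", "enter_user", "enter_pass", "click_login", "checkout"],
]

# Count every 4-step window across all recorded flows.
window = 4
counts = Counter(
    tuple(flow[i:i + window])
    for flow in flows
    for i in range(len(flow) - window + 1)
)

# Sequences that appear in every flow are candidates for a reusable
# group (here, a "login" component).
reusable = [seq for seq, n in counts.items() if n == len(flows)]
print(reusable)  # [('open', 'enter_user', 'enter_pass', 'click_login')]
```

A production-grade system would obviously use far more sophisticated sequence mining, but the intuition is the same: recurring step patterns become shared, reusable components.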

  • UI Based TDD

We have heard of ATDD, BDD, TDD and also my favorite, SDD (StackOverflow Driven Development) 🙂 . In 3 years, we will have UITDD. What this means is that when developers get mockups to develop a new feature, the AI could potentially scan through the images in the mockups and start creating tests while the developer builds the feature in parallel. By the time the developer has finished implementing the feature, the AI will already have written tests for it based on these mockups, using the power of image recognition. We just need to run the tests against the new feature and see whether they pass or fail.

  • AI for Mocking Responses

Currently, we mock server requests/responses to test functionality that depends on other functionality which hasn’t been implemented yet, or to make our tests faster by decreasing the response time of API requests. Potentially, AI can be used to save commonly used API requests/responses and prevent unnecessary communication with servers when the same test is repeated again and again. As a result, your UI tests will be much faster, as the response time is drastically improved with AI managing the interaction between the application and the servers.
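The mechanics of that idea resemble simple request/response memoization. A minimal sketch (purely illustrative; a real AI-driven version would also have to decide what is safe to cache and when to invalidate it):

```python
# Minimal sketch of the caching idea behind AI-managed response mocking:
# repeated identical requests are served from a local cache instead of
# hitting the server. fetch_from_server is a stand-in for a real HTTP call.
response_cache = {}
server_calls = 0

def fetch_from_server(method, url):
    global server_calls
    server_calls += 1
    return {"status": 200, "body": f"response for {url}"}

def cached_request(method, url):
    key = (method, url)
    if key not in response_cache:      # first time: go to the server
        response_cache[key] = fetch_from_server(method, url)
    return response_cache[key]         # repeats: served from cache

# Running the same test twice only hits the server once per request.
cached_request("GET", "/api/items")
cached_request("GET", "/api/items")
print(server_calls)  # 1
```

The saved round trips are where the speed-up comes from; the AI's job would be learning which request/response pairs are stable enough to reuse.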

 

Will our jobs be replaced?

@Raj: Over the past decade, technologies have evolved drastically. There have been so many changes in the technology space, but one constant is human testers’ interaction with them and how we use them for our needs. The same holds true for AI. Secondly, to train the AI, we need good data combinations (which we call a training dataset). So to work with modern software, we need to choose this training dataset carefully, as the AI starts learning from it and creating relationships based on what we give it. It is also important to monitor how the AI is learning as we give it different training datasets. This is going to be vital to how the software is tested as well. We will still need human involvement in training the AI.

Finally, it is important to ensure while working with AI the security, privacy and ethical aspects of the software are not compromised. All these factors contribute to better testability of the software. We need humans for this too.

In summary, we will continue to do exploratory testing manually but will use AI to automate processes while we do this exploration. It is just like automation tools, which do not replace manual testing but complement it. So, contrary to popular belief, the outlook is not all “doom and gloom”: being a real, live human does have its advantages. For instance, human testers can improvise and test without written specifications, differentiate clarity from confusion, and sense when the “look and feel” of an on-screen component is “off” or wrong. Complete replacement of manual testers will only happen when AI exceeds those unique qualities of human intellect. There are a myriad of areas that will require in-depth testing to ensure the safety, security and accuracy of all the data-driven technology and apps being created on a daily basis. In this regard, utilizing AI for software testing is still in its infancy, with the potential for monumental impact.

 

Did you have to maintain several base images for each device size and type?

Has anyone implemented MBT to automate regression when code is checked in?

 

Resources relevant to discussions in the webinar
