From creating frameworks, solving crazy issues, writing basic methods and functions, and learning new technologies, to debating which mouse and keyboard to use. All things QA.
QA Guide: Test Planning
Before we can get into the programming side of QA, we must understand the core aspects of QA and its methodologies. Test planning is the first step in the QA lifecycle, so it's naturally the first thing we'll talk about. As we discuss this, please note that every organization is different and has access to different resources, so take each piece and translate it to what you need.
Without Test Planning
As Agile flows continue to be adopted by more and more companies, teams need to adapt and modularize their releases. As a result, QA now has to match how agile those teams are. In my previous post, I said that QA can struggle in agile environments if the team is inexperienced or if it isn't integrated correctly. An easy sign of this struggle is the lack of test planning and/or test cases. Imagine that you are tasked with testing a brand-new website for managing user roles that are used throughout the rest of your company's products. It was designed to centralize where your customers' admins manage the users in their organization. Now imagine that this is the very first time you have heard about this website and you have never seen the portal, but they still want to release it by the end of the week. Also imagine that the documentation on features, functionality, and requirements is non-existent, or worse, inaccurate. How would you go about testing this? Not only is this an actual scenario that has happened, it is a perfect example of how QA was not integrated into the development process correctly. At this point, there is very little time to create an effective test plan or test cases, but QA still needs to get it done.
In this situation, the testing becomes more exploratory than functional or integration testing, and that ultimately slows down the current release and all future QA efforts on that project. I have even seen QA teams create the test plan and test cases after the product has been released, because that's when they finally have time. All of this is wrong. So how do you do test planning correctly?
What is a Test Plan?
A test plan is exactly what it says: a plan for how you will be testing the subject. In order to create this plan, you need to know how the entire product works. A common misconception is that QA only works alongside the developers. Instead, QA should be working more closely with the product managers. This doesn't mean that QA doesn't work with developers at all, but the Product Manager is the unbiased party who decides on product approval before release and has all the details of what the acceptance criteria actually are.
Because of this, the test plan will be a reflection of the product details and acceptance criteria, but instead of assigning features and development time, you assign test case writing, documentation, environment setup, and different types of testing. I like to create a literal Test Plan document that has your plan in writing. This is not only a way to protect you and your team, but it gives the developers and product managers a clear understanding of what you will be testing and how.
How To Make a Test Plan
Here's the scenario: you need to test a simple login page because each user now has a new, cookie-based token that gets passed after credential validation. The token determines which content the user can access and which actions they can take, and it expires after 30 days.
How would you create the test plan? Let's go through the steps I train my teammates to use:
1. Evaluate the resources you have available
How many QA members do you have access to? How much time have you been given to test? Is any of it already automated? These are the kinds of questions you need to ask. Your two basic resources, time and people, will determine how much you can spend on each piece of the test plan.
2. Divide the test subject into more manageable "parts" or "sections"
For our scenario, depending on our resources, I would first split it into these parts:
a) Existing users are assigned a token on login
b) New users are assigned a token on login
c) Tokens expire after 30 days for all users
d) The assigned tokens map to the correct permissions
At this point, I see a few unknowns that I'd like to ask the Product Manager about:
a) Is there any token security you want implemented in version one of this? Having worked with tokens, I know there are ways to "mimic" or assign yourself a token. If the API does not have the proper validations, then someone who is not supposed to have certain permissions can basically do whatever they want (I sketch a couple of checks for this right after these questions).
b) What happens if a user is disabled or blocked because they missed a payment or they switched teams and need access to different features of our product? If their admins change their permissions, do they get a new token? If they do, when do they get it? Does that token get a new 30-day expiration? Do we completely delete the old token?
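That first question is one where I would already rough out a couple of negative checks at the API level while planning. Below is a minimal sketch in Python with pytest and requests; the base URL, endpoint paths, cookie name, and user accounts are all hypothetical placeholders, not the product's real API.

```python
# Sketch only: every URL, endpoint, cookie name, and account below is a
# made-up placeholder for whatever the real product actually exposes.
import requests

BASE_URL = "https://example.test"

def test_tampered_token_is_rejected():
    """A token the server never issued should not grant any access."""
    forged = "AAAA.FAKE.TOKEN"  # not issued by the real auth service
    response = requests.get(f"{BASE_URL}/api/admin/users",
                            cookies={"auth_token": forged})
    assert response.status_code in (401, 403)

def test_token_does_not_grant_extra_permissions():
    """A read-only user's token must not allow admin-only actions."""
    session = requests.Session()
    login = session.post(f"{BASE_URL}/api/login",
                         json={"username": "read.only@example.test",
                               "password": "correct-password"})
    login.raise_for_status()
    response = session.delete(f"{BASE_URL}/api/admin/users/42")
    assert response.status_code in (401, 403)
```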
I'm sure there are more, but notice that those two questions are different kinds of cases. The first is more of a corner case, but it is something I have seen taken advantage of. The second is much more common, but it was not pointed out in the requirements we were given. As you continue through test planning, test case creation, and even testing itself, questions like these will keep coming up.
For example, as you start testing, you realize that the actual login now takes considerably longer because of the additional API calls. Is there an acceptable latency benchmark that must be met or is the product manager okay with the loss of performance? Make sure you ask!
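Once the Product Manager commits to a number, that benchmark can become a test of its own. Here is a minimal sketch, again assuming a hypothetical /api/login endpoint and a made-up 2-second budget:

```python
# Sketch only: the URL, credentials, and the 2-second budget are placeholders
# until the Product Manager confirms an acceptable benchmark.
import time
import requests

BASE_URL = "https://example.test"
LOGIN_BUDGET_SECONDS = 2.0  # hypothetical budget; confirm with the Product Manager

def test_login_latency_is_within_budget():
    start = time.monotonic()
    response = requests.post(f"{BASE_URL}/api/login",
                             json={"username": "existing.user@example.test",
                                   "password": "correct-password"})
    elapsed = time.monotonic() - start
    assert response.status_code == 200
    assert elapsed <= LOGIN_BUDGET_SECONDS, (
        f"Login took {elapsed:.2f}s, budget is {LOGIN_BUDGET_SECONDS}s")
```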
3. Determine scope and environment variables
Scope refers to what is considered relevant to the test subject. For example, should we include the test case of changing an existing user's password and then logging in? Should we include the test case of the same user trying to log in on 100 machines at the same time? There will always be an infinite number of tests that could be run, but because we have a finite amount of resources, we must decide what falls into the test scope.
Environment variables (or platforms) are part of the test scope. Which environments should these tests be run in? When dealing with web apps, this usually comes down to browsers and browser versions. Do we run these tests in Chrome and Firefox only? Do we run these tests in Edge and Opera too? Each environment becomes a multiplier. If we decide to test five different browsers and we also want to test two different versions for each browser, then each test will need to be run ten times!
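To make that multiplier concrete, here is roughly how I would parametrize a suite across a small browser matrix with pytest and Selenium. The URL and page title are placeholders, and growing the matrix (more browsers, more versions) is just a matter of growing the params:

```python
# Sketch only: assumes local Chrome and Firefox installs; a Selenium Grid or
# cloud provider would be the usual way to widen the matrix further.
import pytest
from selenium import webdriver

BROWSERS = {
    "chrome": webdriver.Chrome,
    "firefox": webdriver.Firefox,
}

@pytest.fixture(params=list(BROWSERS))
def driver(request):
    """Every test that uses this fixture runs once per browser in the matrix."""
    driver = BROWSERS[request.param]()
    yield driver
    driver.quit()

def test_login_page_loads(driver):
    driver.get("https://example.test/login")  # hypothetical URL
    assert "Login" in driver.title  # hypothetical page title
```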
Mobile apps deal with iOS, Android, Amazon, Windows, and sometimes others. For those in the mobile space, you know that there are tons of versions for each of these (especially Android) and it's impossible to test all of them. This is why there are "supported environments" sections for many popular products.
4. Divide the parts into specific test cases
Once you have the more manageable parts, divide them into the specific test cases you and your team will eventually write. I will write another post to go over test cases in depth, but for now, we'll use what I think is the minimum for a good test case:
a) Pre-conditions and pre-requisites - what is required to properly run the test? This is to help control the environment that the tester has. For example, "clear cache and cookies".
b) Acceptance criteria - what are the criteria the tester needs to look for to determine whether a test passes or fails? Sometimes these can be copied straight from what the Product Manager has detailed, but sometimes you will need to get these details by asking the Product Manager.
c) Steps - each step should state the action (or actions) to take and the expected result of that step. Always include screenshots or video for these. A test case should be as simple as possible and easy to follow.
For our scenario, the Product Manager has decided to include the "forgot password" test case and the "admin changes user's permissions" test case. They've also decided that the token should remain the same and not reset its 30-day expiration. Since those cases deal with an existing user, we will put them under our "Existing users are assigned a token on login" section. Continue this process for the rest of your sections and you'll end up with an easy-to-follow hierarchy.
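If it helps to see those minimum pieces together, here is one way to capture a single test case as structured data before it goes into whatever test management tool or spreadsheet you use. The wording is made up for our login-token scenario:

```python
# Sketch only: a plain-data representation of one test case, with hypothetical
# wording. Your test management tool holds the same fields.
forgot_password_then_login = {
    "title": "Forgot Password flow, then login",
    "preconditions": [
        "Existing user with a known token",
        "Clear cache and cookies",
    ],
    "acceptance_criteria": "The user's token is unchanged after logging in again",
    "steps": [
        {"action": "Go to the login page and complete the forgot password flow",
         "expected": "Password reset confirmation is shown"},
        {"action": "Log in using the new password",
         "expected": "User lands on the home page"},
        {"action": "Inspect the user's token",
         "expected": "Token matches the one issued before the reset"},
    ],
}
```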
5. Create the Test Plan document and get it approved
Now that you have your questions answered and test sections and cases specified, you can easily create the Test Plan document. This document will be the base that you and your team use like a checklist, and you need to make sure it is easy to follow. (Remember that this document is meant to be an approved "base", but that you and your team must be able to adjust and adapt as needed.) Use the points we've discussed so far:
a) List the resources available - this is optional, but at least put the names of the members that the Product Manager and developers can rely on. It's frustrating not to know which QA members are part of the project or who the lead is.
b) List the environments - which platforms and versions you are planning to test against.
c) List the Test Sections and Test Cases - this is almost always just the names of the sections and the names of the test cases with steps. (If you have an ID or URL for these tests, then you can add them too.) For example, this would suffice for the "Existing user" section of our scenario:
Existing users are assigned a token on login
- Forgot Password flow, then login
1. Go to login page and complete forgot password flow
2. Login using new password
3. User's token should be the same
- Admin changes user's permissions
1. As an admin, change user's permissions and save
2. As the user, login
3. User's token should be the same
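When the team later automates this section, those same outlines translate almost directly into scripts. Here is a rough Selenium sketch of the first case; the selectors, URLs, and cookie name are hypothetical, and the forgot-password flow is left as a product-specific stub:

```python
# Sketch only: selectors, URLs, and the cookie name are hypothetical, and the
# forgot-password flow is a stub because every product implements it differently.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://example.test"

def login(driver, username, password):
    driver.get(f"{BASE_URL}/login")
    driver.find_element(By.ID, "username").send_keys(username)
    driver.find_element(By.ID, "password").send_keys(password)
    driver.find_element(By.ID, "submit").click()

def complete_forgot_password_flow(driver, email, new_password):
    # Request the reset email, follow the link, set the new password.
    # Left unimplemented here because the real steps depend on your product.
    raise NotImplementedError

def test_token_is_unchanged_after_password_reset():
    driver = webdriver.Chrome()
    try:
        login(driver, "existing.user@example.test", "old-password")
        token_before = driver.get_cookie("auth_token")["value"]  # hypothetical cookie name
        complete_forgot_password_flow(driver, "existing.user@example.test", "new-password")
        login(driver, "existing.user@example.test", "new-password")
        token_after = driver.get_cookie("auth_token")["value"]
        assert token_before == token_after
    finally:
        driver.quit()
```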
The benefit of getting this document looked at by the Product Manager and developers is that they can see exactly how you've determined the testing efforts and they can either make edits or approve them. The entire goal is to get everyone on the same page. The last thing you want to do in QA is assume that something "should" work this way or assume that "this is what my Product Manager would want".
Is it the end already?
Test planning is the very first step in QA and for good reason. The better you get at test planning, the more your credibility will increase and the easier your job will become.
“What does QA actually do?”
Ah yes, it's time for my first post! My name is Carlos Kidman and I am a QA Automation Engineer by day and a singing/songwriting, book-loving, article-reading, Magic: The Gathering-streaming, soccer-loving dad by night.
Although this is my intro post, I wanted to start things out by speaking to a comment I once heard at work, and I think some context for the scenario will help set the stage.
We had just undergone some pretty big lay-offs at the company (basically half the people, of which I was one) and people were understandably shaken. I was looking for another position when I was hired back on because they had gotten rid of the entire QA team except one (crazy story for another time). Immediately upon returning, I heard one of the higher-ups say something that really worried me.
"What does QA actually do?"
The way it was said implied something along the lines of "do we really need QA people?" Anyone in QA would immediately be concerned, because you know your role in the company and the crucial responsibility you have in the software development lifecycle and toward your customers.
I have read many articles where the author talks about how QA should be a major part of the overall process and how automation is becoming a necessity. Obviously I agree, but I have yet to read a post about why QA is important and why a good team of QA Engineers is a must. Although there are many reasons, I would like to go over a few to help shed more light on this.
QA Engineers look at and think about products differently than developers
I'm not saying that developers don't care about the end-user, but most of the time, developers are tasked with specific features or stories that have (hopefully) specific requirements. QA Engineers are tasked with validating that those features and stories are not only done correctly, but continue to work within the overall product and ultimately for the end-user. In this way, QA is also testing the overall user experience (UX). There have been countless times where I've pointed out things that may seem obvious, but the product managers and developers just didn't see it that way.
For example, one of our products contained a list of companies that would automatically be generated and displayed when the user navigated to the page. It usually took anywhere from 3-5 seconds for that page to load and the entire list to be displayed. The issue was that there was no obvious visual cue that a list would eventually appear on the page. We knew the data was coming, but what would a user think if they got to the page and the content was empty? Once I pointed this out and recommended adding a loading image, it was quickly added and the "feel" of that flow was greatly improved.
Along those lines, I have heard the argument, "We will add it when our customers request it." While this may work for certain features, if we had that same mentality here, how expensive would that simple loading image become? ("Expensive" refers to the cost of resources needed to complete something.) What if we don't add the loading image and our customers get to that page, scratch their heads not knowing what's going on, and then leave the page or close the window altogether? Once they "request" it, the developers will be working on something entirely different and have to stop what they are doing to pick up this new feature, IF that request ever makes it out of the backlog. Product managers have to spec it, developers have to build it, QA needs to test it, and then it has to be pushed out in a release. All of that is time and money wasted on such a small feature!
No amount of Unit Tests can replace QA
Again, to some this is obvious, but it's an argument I have also encountered, especially since I do QA Automation. Let me make this part clear: unit tests are a great development practice (writing them first is the heart of Test-Driven Development, or TDD) and they can quickly tell the developer that the critical functionality of the program still works as they make changes. That's it, and that's coming from a programmer who writes entire frameworks and tests for a living! There will always be a need for human, manual QA, because testing the "feel" and experience of an app cannot be done by scripts.
On top of that, there are entire QA methodologies that are used constantly throughout the QA process. From Test Plan and Test Case creation at the beginning, to the controlled and recorded environments used to produce detailed repro steps, there is an entire QA lifecycle that no one else in the organization would have the time for or know how to do without a learning curve. Just like programming, good QA takes practice, because it isn't just clicking around and looking at things in an app. To further illustrate this, here is the definition of "Engineer" from Oxford Dictionaries: "a person who designs, builds, or maintains engines, machines, or public works." Those first three verbs! Hallelujah! I could not have said it better myself, so thank you, Oxford. QA needs to be just as agile and flexible as the other teams while also adhering to the organization and methodologies that good QA requires.
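Coming back to the unit-test point for a second, here is a toy, entirely made-up example. A test like this can tell a developer that a password rule still holds after a refactor, but it says nothing about whether the login page is understandable, responsive, or pleasant to use, and that gap is where QA lives.

```python
# Toy example: a unit test exercises one small function in isolation.
def is_valid_password(password: str) -> bool:
    """Made-up rule: at least 8 characters including a digit."""
    return len(password) >= 8 and any(ch.isdigit() for ch in password)

def test_rejects_short_password():
    assert not is_valid_password("abc1")

def test_accepts_long_password_with_digit():
    assert is_valid_password("correct-horse-7")
```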
Good QA and proper integration of QA into the development lifecycle will not only increase the quality of the product, but will also speed up the release process
One of the biggest myths I've run into is that QA is just another layer that ends up slowing the release process. This is almost always the result of one of two things:
a) QA is inexperienced, or the QA processes have only recently been added to the lifecycle.
b) QA has not been integrated into the lifecycle correctly.
There can be a combination of both, but just because you may have had a bad experience doesn't mean that QA itself is wrong. If QA is a new process being added, then the expense and "growing pains" should be considered when making that decision. I would recommend hiring someone who is experienced in QA (mid or senior level) to run the team and proceeding from there. The worst mistake you can make is thinking that QA is easy or just a quick checklist and hiring an all entry-level team. As with any new team, invest carefully and seriously.
However, most of the time it's the second point. I have seen many teams include QA in the wrong way. I'm sure you've heard or read that QA should be part of every step in the development lifecycle, but it needs to actually be done! From scrum meetings to daily standups, include the QA team so they are aware of what is happening. They cannot create a Test Plan if they don't know what to plan for. They cannot create an accurate Test Case if the requirements keep getting tweaked without them knowing.
I was once asked to create a new test script that validated that certain links navigated to the correct page, because the links contained tokens that could expire or change. When I was first asked to do this, there were twelve of these links and, when clicked, they would redirect the user to a page in the same window. They needed to know if any of the links were no longer on the page or if they redirected incorrectly. For additional context, I was working on another project that was also "critical", but this was deemed more critical. I was told this at noon and given two hours to get it completed. Talk about agile! An hour in, the script was done and I was wrapping up my tests when they began to fail over and over again, even though they had been working perfectly just minutes before. I began looking through my code to try to find the issue. After some debugging, I realized that clicking the links now opened them in a new tab, and no one had told me! Okay. I added the logic to handle the new tabs, and as I was getting ready to turn it in, three of the tests started failing over and over again. After some more debugging I learned that they had removed three of the links without telling me. This happened a couple more times, but you see the pattern. The project that should have taken me an hour ended up taking the rest of my day. Management delegated the requirements to the developers and the testing to me, but they didn't manage the actual process, and it became very expensive since the rest of my day was spent maintaining one script. It also meant I had to push the testing of the other critical project to the next day.
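As a side note for the technically curious, handling that switch from same-window navigation to new tabs is a small change in a Selenium script. Here is a rough sketch with a made-up page and link text; a real script would also wait explicitly for the new tab to appear before switching:

```python
# Sketch only: the URL and link text are made up. The point is switching to a
# newly opened tab (if there is one) before validating where the link went.
from selenium import webdriver
from selenium.webdriver.common.by import By

def test_link_redirects_to_expected_page():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.test/links")  # hypothetical page under test
        original_tab = driver.current_window_handle
        driver.find_element(By.LINK_TEXT, "Pricing").click()

        # If the click opened a new tab, switch to it before checking the URL.
        new_tabs = [h for h in driver.window_handles if h != original_tab]
        if new_tabs:
            driver.switch_to.window(new_tabs[0])

        assert driver.current_url.startswith("https://example.test/pricing")
    finally:
        driver.quit()
```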
I can understand the speculation that QA is not as crucial as other teams or processes in the development lifecycle, but I can promise you that it is one of the most important pieces and can be a huge blessing if implemented correctly. Imagine if the managers had a quick meeting once they got the "emergency feature request" so the developers and QA were on the same page. Perhaps we could have solidified the requirements there instead of constantly changing them throughout the day.
In the end, since part of the product approval lands on QA, it is easy to point the finger and lay blame. If you or your organization is struggling with QA, take a step back and try to identify where things can be improved. If you can't pinpoint it yourself, I recommend taking some time to talk to the people on the QA team and asking for their ideas on what could be better. I think you will be pleasantly surprised.
There is more I could list and discuss, but I think it's best to save it for another post. I will come back to this topic in future posts and get more technical at different levels, with beginner posts like "Getting Started with Selenium" and more advanced posts like "Using Docker to Run Tests in Parallel". Anyway, thanks for reading and make sure to tune in for more!