Text
Full stack SDET - a story of conflict and misunderstandings

The latest bandwagon folks seem to be jumping on is a fixation on what a full stack SDET is, or rather the claim that it doesn't exist. If this sounds similar to past vendettas like manual testing, automated testing, and others, it's because it is! In this article I demystify what it means to be a full stack anything so that we can hopefully put this to rest once and for all. Let's get to it.
What is a stack?
Since we are talking about stacks, I think we should first define what a darn stack is. In computer and technology parlance a stack can mean many things depending on the context, but they all share one common attribute: a stack is a collection of things grouped together, and more specifically "stacked" on top of each other.
In the context of software development, a "stack" refers to a set of software subsystems or components stacked on top of each other to create a complete platform. This can include the operating system, the web server, the database server, and the programming languages used to write application software. Two popular examples are the LAMP stack (Linux, Apache, MySQL, PHP/Python/Perl) and the MEAN stack (MongoDB, Express.js, AngularJS, Node.js).
What is a Test Engineer?
A Test Engineer is a professional who specializes in designing, implementing, and conducting tests to ensure the quality, functionality, and reliability of products or software at every stage of the development process. There are test engineers who focus on front-end testing and others who focus on testing the back-end of a software application.
Using the example above, a front-end Test Engineer may focus on component validation of a UI built in AngularJS, while a back-end Test Engineer may focus on validating the API (Express.js).
What is a Full Stack Test Engineer?
A Full Stack Test Engineer is a professional who specializes in testing both the front-end and back-end components of software applications. This role encompasses a broad range of testing activities across the entire stack of technologies used in the development of a software product. A Full Stack Test Engineer needs to have a comprehensive understanding of all layers of the application, including the user interface, the business logic, the database layer, and the network interactions. Their expertise allows them to conduct tests that assess the functionality, performance, security, and usability of the application as a whole.
Using the example above, a full stack engineer would be involved in every aspect and technology used for the application being developed: from validating the front end written in PHP, to the back-end web services running on Apache, all the way down to the database layer in MySQL. If you think about it, this is actually the role most Test Automation Engineers perform today. This is called end-to-end testing.
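To make that concrete, below is a minimal sketch of what such an end-to-end check could look like in Perl, touching the API layer and then the database layer. The endpoint, table, credentials, and schema are hypothetical, made up purely for illustration.

use strict;
use warnings;
use HTTP::Tiny;
use DBI;
use Test::More tests => 3;

# Exercise the web tier: create an order through a (hypothetical) REST endpoint.
my $http     = HTTP::Tiny->new;
my $response = $http->post_form(
    'http://localhost/orders/create',
    { sku => 'ABC-123', qty => 2 },
);
ok( $response->{success}, 'order endpoint returned a 2xx status' );
like( $response->{content}, qr/"order_id"/, 'response contains an order id' );

# Drop down to the database tier and confirm the row actually landed in MySQL.
my $dbh = DBI->connect( 'DBI:mysql:database=shop;host=localhost', 'tester', 'secret',
    { RaiseError => 1 } );
my ($count) = $dbh->selectrow_array(
    'SELECT COUNT(*) FROM orders WHERE sku = ? AND qty = ?', undef, 'ABC-123', 2 );
is( $count, 1, 'exactly one matching order row was written' );
$dbh->disconnect;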
So, where is the conflict/misunderstanding?
For a good example, check out this LinkedIn discussion: Do I support Full Stack QA/SDET. The author claims that there is no such thing as a full-stack QA, for the simple reason that it isn't practical to become an expert in all layers of the stack. The old "Jack of all trades, master of none" mantra.
Some even went as far as calling the term a "marketing gimmick". So I ask: if we are all professionals in the technology field, don't we know that the term stack is commonly used in computer engineering to signify layers? For example, protocol stack, memory stack, etc. This is not a marketing gimmick!
Check out the thread and you will see a barrage of comments from the author's followers concurring with the point of view that there is no such thing as a full-stack SDET.
Conclusion/Summary
In summary, I think the notion of a full-stack SDET is no less plausible than the notion of a full-stack software engineer. Full stack engineers are not masters of all trades; they still have their specialties, but they have the knowledge and capability to work comfortably at any layer.
Full-stack means that you understand end-to-end systems from an architecture perspective. Of course there are specialists like front-end folks and database experts, but that's not half a stack; that's a specialist working only at a specific level in the stack.
Whether you're in healthcare, fintech, or social media, a stack is a stack regardless of implementation details (React, Vue, Mongo, Oracle, etc.). What counts is not the tool (tools are easily learned) but the understanding of what the tool is and its purpose in the stack.
IMO, that's what makes a full-stack anything: someone with this tool-agnostic understanding of a "stack".
Text
What Test Automation Isn't
I hear and read a lot of so-called "experts" talk about how test automation is a development activity, but they don't really go into what that means. They seem to imply that you just need to know how to code in a programming language, and some even provide "free" resources on YouTube, etc. But is it really about programming? Or is it about product development, which includes requirements gathering, system design, programming, testing, and the other activities involved in creating systems? I think it's the latter, and I'll try to dig into some of this in this article.
Getting to know the basics
There are no shortcuts; to be an excellent test automation engineer (just like an excellent software engineer), you must first learn the basics. Forget about frameworks and tools that aim to solve a specific problem and focus your valuable energy instead on learning system analysis and design, the different parts of a system, memory management, database design, logic, statistics, and at least one low[er]-level (like C) and one high-level language (like Java or C++).
This is what it takes to create systems and applications and it’s the same way those frameworks and tools being touted are built; and it’s the same way you should be building your test automation. Once you understand how systems are designed and built, you’ll not only understand those frameworks and tools available for test automation and use them effectively but also be able to create your own. And the rest will be history!
It's not about programming
Programming is just one building block (an important one) but not one that will make or break you. A programming language can be learned well enough in a week (from a syntax and language-mechanics perspective) to get going. The problem with learning a programming language first is that you risk using that language as a hammer, and every problem will start looking like a nail. This is not necessarily bad, but it can be limiting when it comes to solutions.
The building block approach
By learning the underlying reasons why systems are built and how, as well as all of the moving pieces and parts that make up a "system" individually, you would be better poised to dissect any problem into pieces and parts as well as build more effective systems.
Why do you think college curriculums (at least in the US) focused on Computer Science or Electrical/Computer Engineering start by teaching you the basics? Why do you think in primary school, you start with regular arithmetic and then move on to algebra, geometry, etc? Same principle. First, you learn building blocks, and then you learn more advanced concepts. People shouldn’t get confused by that. This is by design.
Conclusion
Yes! When it comes to test automation everyone needs to start somewhere. But where you start matters and has an impact on the quality of the product produced. Yes, you can start at programming and work backward, but is that really efficient? Perhaps, or perhaps not. It isn't when developing products, and isn't automation a product that verifies/validates another product?
I think of it as building blocks that make up the whole. Yes, you can learn the whole without necessarily learning about the building blocks that make it up. But you will only know how to use that collection of building blocks. Learning a tool is an example of this.
Or you can learn the different building blocks that make up all systems independently and be better equipped to operate any tool since you understand the principles under the hood.
Can you start anywhere and still be successful? I reckon you can; it's just not what I'm advocating for, because I feel it does more harm than good.
Text
Let's cut testing!

Finance folks don't really understand testing, but they do understand money. And testing costs money. So they assume that by investing in test automation, they are automating testing and are the first to say, "Let's cut testing."
I'm not really endorsing that we cut testing by the way. But I'm glad the hook got you into my article. This post is more about making a pitch for funding test automation and how to be effective at communicating the need.
So, what is test automation?
First, I'll tell you what it isn't. Test Automation is not really about testing, and it is certainly not automating testing either! Test Automation is about development support. The application your developers are cranking code out for must be tested, and the best people to test that code via automated methods are the developers writing it. We refer to this activity as "verification", i.e., are we building the product right? (This is a question developers ask themselves continually as they are writing code.)
Then there is "validation testing", i.e., did we build the right thing? (This is a question that cannot be asked until the product or feature is done.) Can this be automated? Certainly. But it is still just a confirmatory activity, like verification testing.
Wait, what? So, what does this have to do with funding automation?
My point here is that when talking about ROI for test automation these three things need to be separated:
1. Verification (confirmatory - checking that we are building the product right)
2. Validation (confirmatory - checking that we built the right thing)
3. Testing (experiments and explorations - to find the things we don't know about yet)
Numbers 1 and 2 can be automated. Number 3 cannot. You can certainly use tooling and automation to help you, but how can you program a computer to find an unknown if even you do not know what you are looking for? As we all know computer programs only do what they are told. For Number 3 you need skilled people in the art and science of testing.
Conclusion
So, by splitting these three (verification, validation, and testing), we are no longer lumping everything into a "testing" bucket, making it all sound disposable or implying that we can somehow automate testing. We have also put it in language that the money folks can relate to: yes, developers need to make sure they are building the product right (let's fund test automation); yes, we need to make sure that we built the right product for our customers (let's fund test automation); and yes, the product team also needs to test to identify risks we are not yet aware of (the unknown unknowns), so we also need testing.
Text
Developers can't test - a bedtime story

Automated tests written after the fact or not by developers are tech debt. BDD as an end-to-end testing framework is a supercharged tech debt creator.
Now that I have your attention, allow me to blow your mind a little bit more. The best developers I know are also the best at testing their software. Not only that, but they are the best at imagining how their product will be used (and abused) by their customers. They also acknowledge that they can only accomplish this when they build the product by themselves. By the way, I have only met about half a dozen of these unicorns in my career to date. Why do you think that is? I'll go into it in this article.
The Lone Ranger
The best developers prefer to work alone. I've asked my friend Oscar (a real developer and unicorn) about the reason for this, and the answer is quite simple: "Because developers are taught, either by their organizational policies or product test teams, that they shouldn't have to test because we have specialists that take care of it, and they know this is just not true! So it's the path of least resistance to just ignore the status quo and focus on producing high-quality, tested software (by themselves) for others to test." At least this is the case for Oscar. I'm picking on him because I've worked with him at a few companies and have gotten first-hand experience of what he can do alone; that is, build products of high quality the first time, in record time, and on time (as promised). One example is a multi-agent-system-based point-of-sale application that is dynamically configured depending on the context it is deployed in (this was in the early 2000s, when AI wasn't a mainstream fad). This was a personal project of his, as he is also an entrepreneur (another trait of good developers), and one that I helped him test.
The first delivery I saw of this product was 99% complete and bug-free. Why was that? Read on.
Batman and Robin
The best developers value QA specialists' involvement from the beginning and incorporate feedback starting at ideation. When Oscar came up with the idea for his product, he immediately reached out to me because he knew that, as an entrepreneur myself, I had owned and operated retail stores in the past, so I was the perfect partner for this new endeavor.
I later found out, though, that the product wasn't really a point-of-sale system but a way to prove that the multi-agent system engine he had built the year before, for a hobby product that teaches a player how to win at Blackjack (which I also helped him with), could in fact be used in multiple products and for multiple purposes.
Just like superheroes team up and combine their superpowers in fantasy series, developers and quality specialists (the good ones) team up in product development. The result: a product delivered with only two bugs, both edge cases we missed during design.
Superman
It so happens that the best developers and quality specialists are outcasts from development communities, forcing them into a life of solitude. Not in the sense that they are alone, but, just like Superman was forced to live as a common person for fear of persecution, they must hide their true talents from a team that may feel threatened by this unknown power they possess. Little does the team know that they, too, can have this power if they only open up to different perspectives.
Transformers - More than meets the eye!
So how do we fix this? I would take a multi-faceted approach.
The first thing we have to do is acknowledge there is a problem. The problem is the false perception that developers shouldn't test because they can't. This is just not true. Developers don't test because we don't teach the ones who don't know they should, and we don't trust the ones who want to.
Second, assess your teams and figure out who on the team is of the Batman and Robin type and who isn't. Pair those that are with those that aren't.
Third, create a safe space where the newly formed superteams can come and share their challenges, solutions, and triumphs. Focus your efforts on those who are trying to get there by praising them to elevate their strengths.
This doesn't have to be a huge transformation. What we are trying to accomplish here is a two-pronged shift in mindset. The developers think they shouldn't test, and the test specialists think that developers can't test. These two completely false mindsets need to be reversed.
Conclusion
Once you are able to overcome this "developers can't test and test specialists are the only ones who can" problem (which I will go out on a limb here and assume is present in a lot of organizations), you'll not only deliver high-quality software the first time, you'll be able to do it faster as well. If you find yourself struggling with continuous testing and continuous delivery to production and cannot pinpoint why, do a quick sniff test around your organization and see whether it is rooted in, or at least related to, what I just described here.
What are your thoughts on this topic?
Text
AI Code Assistants, Developer Happiness at all costs?
I took a glance at a white paper and associated research that my boss pointed me to regarding AI assistants for code generation. It was not only an eye-opening moment for me (as I had not thought about the implications of coding assistants - most likely because I've been focused on the good ways AI can help test and quality engineers), but it was also very scary.
The white paper and accompanying research comment on a June 2023 post on the GitHub blog written by Thomas Dohmke. The full article can be found here: https://bit.ly/3u5yMA6, with all the gory details, including a link to download the PDF of the full research. Three main findings are reported as a result of this research; I will cover two of them here along with my opinion on them.
Finding #1
Less than a year after its general availability, GitHub Copilot is turbocharging developers writing software.
This finding is based on an analysis of a sample of around 934,533 current GitHub Copilot users. The claim is that, on average, Copilot users accept nearly 30% of code suggestions and report an increase in productivity as a result. When comparing senior to junior developers, the latter see the greater benefit.
And therein lies the problem I perceive. Junior developers will undoubtedly accept recommendations presented by an IntelliSense-type prompt, potentially without thinking about the downstream ramifications of their actions. Another problem is that we seem to focus on "speed" instead of "quality". Speed is but one aspect of the Iron Triangle (faster, better, cheaper). With the advent of AI assistance for actually producing the code, we are taking care of "faster" and "cheaper" (two birds, one stone), but let's not neglect better! Let's invest the time and money we are saving into paying close attention to the quality of the code being generated and ensuring that it is in line with our organization's code quality guidelines (DRY principles, for example).
Finding #2
We estimate these generative AI developer productivity benefits could boost global GDP by over $1.5 trillion USD by 2030 by helping to meet growing demand for software.
The main reason being touted for this $1.5 trillion boost in GDP by 2030 is again linked to "productivity gains", based on the estimate that approximately 30% of code being developed is contributed by Copilot. Further, their data suggests productivity increases will grow as developers get over the learning curve and as projected Copilot improvements land.
The Copilot improvements are the key here, as well as the part that is not covered by the data or the research. The analysis only covered productivity over an initial period of time and seems to ignore what happens after the code is developed and checked in. Enter the GitClear analysis of over 153 million lines of code, aimed at answering two questions:
a. Are there measurable side effects to committing AI-generated code?
b. What are the implications of the widespread adoption of AI programming assistants?
They concluded, based on the 6 metrics they tracked and analyzed, that "the output quality of AI-generated code resembles that of a developer unfamiliar with the projects they are altering. Just like a developer assigned to a brand new repository, code generation tools are prone to corrupting the DRY-ness of the project". I'm not surprised by this at all, and it points to some potentially serious risks in the realm of quality.
The rationale behind this is that the suggestions from Copilot seemed to be biased toward adding code (like a junior developer might do) as opposed to activities related to refactoring (as a senior developer might do), like moving, updating, or deleting existing code.
From my perspective, adding more code faster without thinking about future maintenance can create more tech debt as time goes on. I reckon that using Copilot for MVPs, POCs, and other green field experiments where normally net new code is always added is a good use case for it. It would be interesting to see how Copilot behaves when dealing with existing and legacy code bases.
Conclusion/Suggestions
It remains to be seen if the power of Copilot can be harnessed via the deployment of organizational policies and guidelines that drive the incorporation of Copilot into the development landscape. Some examples of this:
Junior developers can use AI assistants for their own learning and upskilling but should pair with more seasoned developers when dealing with production code.
Senior developers are exposed to the guidelines and encouraged to provide code faster and maintain the engineering organization's coding standards.
Limit the use of Copilot and other assistants for greenfield projects like POCs, MVPs, and limited-scope low-risk projects aimed at getting products into customers' hands faster to facilitate rapid feedback.
Encourage the initial use of AI assistants to create lower-risk code that can be utilized for in-sprint test automation.
Encourage the use of AI assistants for infrastructure as code where the scope is more limited and can be used as a learning platform for all software developers as they go through "the learning curve" to get to know the tool.
Create a Community of Practice within the organization where users of GitHub Copilot and other AI assistants can share lessons learned and good practices.
The above community can also be harnessed to collect feedback that can be aggregated and categorized to provide feedback to GitHub Copilot for product improvements and feature suggestions.
In general, I feel optimistic about GitHub Copilot, not because it generates code faster, but because (just to name a few):
It is a new tool that has garnered broad support from the development community.
It has the capability, via this captive audience that has embraced it, to include all of the code quality features that are currently lacking and that today rely on developer experience.
It can help folks in the testing community and other adjacent communities learn proper coding as well as participate in the numerous automation initiatives throughout an organization: pipelines, test automation, etc.
What other suggestions do you have? Feel free to leave them in the comments or ping me directly if you'd like to chat about it.
Text
Autonomous Testing - a vision
Autonomous testing embodies a transformative approach to the evolution of traditional quality assurance and quality engineering within a product engineering organization. Autonomous testing is not merely about automating existing test cases; it's also not about replacing humans; it's about reimagining the testing paradigm to empower continuous delivery, enhance product quality, and drive innovation. In this article, I dive into what I consider a comprehensive vision of autonomous testing for any product engineering organization.
I will first state the vision, and then I'll list each critical component of the vision and add a brief description of it.
First the vision statement
"In our pursuit of excellence and innovation, we envision a future where autonomous testing becomes the cornerstone of our product engineering efforts. By leveraging advanced AI and machine learning technologies, we aim to create a self-improving, adaptive testing ecosystem that not only accelerates our delivery cycles but also ensures the highest standards of quality and reliability in our products. This will enable us to consistently exceed customer expectations, adapt swiftly to changing market demands, and uphold our commitment to excellence."
The above vision not only contains a declaration of the purpose, goals, and desired outcomes but is future-focused, ambitious, and inspirational. If you have never had to create a vision before, think about these components and ensure you include them. Now, let's get on to the meat and bones of the vision.
Key Components of the Vision
The below pieces are things you'll get to work on and will become your mission. From an implementation perspective, think of each one of these (e.g. Self-Healing Test Systems) becoming an Epic in your product backlog. You can then plan the development, tool sourcing, staff upskilling, etc., and stay on track while keeping the team focused on "the vision."
Intelligent Test Automation
Develop a suite of intelligent test automation tools that can autonomously generate, execute, and evaluate test cases based on the evolving needs of the product, significantly reducing manual effort and enhancing test coverage.
I list this first because it is the bread and butter of autonomous testing; without this capability, the vision falls flat on its face.
Self-Healing Test Systems
Implement self-healing mechanisms within the testing frameworks that can automatically detect and correct errors in test scripts, thereby minimizing downtime and ensuring that testing processes are always efficient and effective.
This is another key enabler in autonomous systems. Think how much time QEs waste in updating locators! Self-inflicted technical debt will be a thing of the past with autonomous testing.
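To make the idea concrete, here is a minimal sketch of the locator-fallback flavor of self-healing, written in Perl against Selenium::Remote::Driver. The element and candidate locators are hypothetical, and a real self-healing system would go further (learning new locators, reporting drift, and updating the locator repository automatically).

use strict;
use warnings;
use Selenium::Remote::Driver;

# Try each candidate locator in order; fall back when the primary one breaks.
sub find_with_fallback {
    my ( $driver, @candidates ) = @_;    # each candidate: [ locator, scheme ]
    for my $i ( 0 .. $#candidates ) {
        my ( $locator, $scheme ) = @{ $candidates[$i] };
        my $element = eval { $driver->find_element( $locator, $scheme ) };
        if ($element) {
            warn "self-heal: primary locator failed, matched '$locator'\n" if $i > 0;
            return $element;
        }
    }
    die "all candidate locators failed\n";
}

# Usage (hypothetical element): most specific locator first, then progressively looser ones.
my $driver = Selenium::Remote::Driver->new( browser_name => 'firefox' );
my $submit = find_with_fallback(
    $driver,
    [ 'submit-btn',            'id'    ],
    [ 'button[type="submit"]', 'css'   ],
    [ '//form//button[1]',     'xpath' ],
);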
Predictive Analytics and Risk-Based Testing
Utilize predictive analytics to assess potential risk areas within the codebase, prioritizing testing efforts where they are most needed. This approach ensures optimal allocation of resources, focusing on high-risk areas to prevent critical issues.
This is an area where humans will benefit greatly and should pay close attention. The more the system learns, the more accurate the risk assessments will be.
Continuous Learning and Adaptation
Foster a system where testing tools and processes continuously learn from previous test cycles, user interactions, and code changes. Adaptive testing frameworks will anticipate future testing needs and adjust strategies accordingly, ensuring resilience and relevance.
Make sure you loop the teams' retrospectives into this process as well!
Seamless Integration with Development Processes
Achieve a deep integration of autonomous testing within the continuous integration and delivery pipeline (CI/CD), allowing for real-time feedback and the ability to swiftly address issues as they arise. This integration ensures that verification and validation activities are not a bottleneck but a facilitator of speed and innovation.
Quality Insights and Decision Support
Provide actionable insights and data-driven recommendations to stakeholders through advanced analytics and reporting. This will empower decision-makers to understand quality metrics deeply, make informed choices, and strategize effectively.
Collaborative Ecosystem
Cultivate a collaborative ecosystem that brings together developers, test engineers, quality analysts, and AI technologies in a symbiotic relationship. This collaboration ensures that autonomous testing complements human expertise, leading to innovative solutions and superior product quality.
This is one of the areas where low-code/no-code platforms thrive. But more importantly, it is a must if you want your vision to become a reality!
Cultural Shift Towards Quality Engineering
Drive a cultural shift from reactive testing of the quality of the product to a holistic quality engineering approach, where quality is embedded throughout the development lifecycle. This involves redefining roles, responsibilities, and mindsets to embrace proactive quality enhancement and continuous improvement.
Realizing the Vision
Strategy/Mission
To turn this vision into reality, the organization will invest in cutting-edge technologies, upskill its workforce, and foster a culture of innovation and continuous learning. It will also involve collaboration with technology partners and the adoption of best practices in AI and machine learning.
Create an environment where autonomous testing accelerates product development cycles, enhances product quality, and enables the organization to stay ahead in a competitive landscape. This vision of autonomous testing is not just about technological advancement but about fundamentally transforming how quality is perceived and achieved in product engineering.
Summary
What would your autonomous testing vision look like? What are some of the things you'll include as part of it? Leave them in the comments and let's start the conversation. Autonomous testing is where the industry is headed, and I'm eager to get started on this journey. Are you?
Text
About Manual vs Automated
In my opinion, focusing energies on nonexistent dichotomies takes away from the real work and focus of system verification and validation: system design, test strategy, risk identification and reduction, test techniques to incorporate based on system design and strategy, controlled and uncontrolled experiments to both verify and validate the product, etc.
In other words (basic triage here), if the important work is to understand testing and all that it encompasses (as mentioned above), then focusing on manual vs. automated first is like focusing, in an ER situation, on placing band-aids on scratches while the patient's bleeding is synchronized with their heartbeat. Focusing on the distinction first is a bit backwards (not to mention controversial; see my last sentence).
From my perspective, once a person learns system design (the building blocks of a system), testing techniques (what to do in each situation), testing strategy (your overall approach), etc., then, by design and as part of that learning, they will have already intuitively grasped the distinction this false dichotomy is trying to make. It is a really hard distinction to make because manual and automated testing are intertwined, which is why it's controversial to folks who already understand this.
Text
About Flaky Tests
A test is an experiment through which we aim to learn information. In automation (that is, when you are in the process of getting a computer to execute all or part of your experiment), all tests begin as a "flaky test". This is not only expected, it is the testing portion of automation, which includes more than just automating steps. During this portion we are looking for unknowns and how they impact our test (experiment). It's an iterative process. It is also an investigation to reveal the unknowns, or "flakiness", of everything about the experiment I'm automating: the environment, the variables, the constants, the content, etc. So I don't call a test "flaky" just because it fails for an unknown reason; all automated tests start out flaky. I call a test flaky while it's in the investigation portion and we are working to reveal all of the unknowns mentioned above. To summarize: is a test strategy really flaky if it takes "flakiness" into account as I have outlined here?
Text
Sometimes the applications we are testing require some level of verification of reports that are output by the application under test in a variety of formats. One of these formats is the ever-popular PDF. An option for verifying PDFs might be the PDFlib Text Extraction Toolkit (TET) from PDFlib GmbH. This toolkit appears to be very powerful and supports the extraction of text and images, as well as all objects that make up a PDF file. The example below uses Squish's API to drive an application developed with the Qt framework. As well, while the script examples here are in Perl, you can replace it with the language of your choice, as long as it offers similar services to the programmer. In the following example:
The function get_pdf_objs conceptually uses the external application tet.exe (Text Extraction Toolkit) in combination with Perl's system function to extract all text and images from a PDF file and store them in a text file or image files, respectively.
parse_pdf is not really a parser; in this example its only role is to produce an array of text strings extracted from the text file produced by get_pdf_objs.
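Here is a minimal sketch of what those two helpers could look like. The tet.exe invocation and output file naming are assumptions for illustration; check the TET documentation for the exact command-line options your version supports.

use strict;
use warnings;

sub get_pdf_objs {
    my ($pdf_file) = @_;
    # Hand the PDF to the TET command-line tool (assumed here to write a .txt file
    # next to the PDF; adjust the invocation to match your TET installation).
    my $rc = system( 'tet.exe', $pdf_file );
    die "tet.exe failed on $pdf_file\n" if $rc != 0;
    ( my $txt_file = $pdf_file ) =~ s/\.pdf$/.txt/i;
    return $txt_file;
}

sub parse_pdf {
    my ($txt_file) = @_;
    open my $fh, '<', $txt_file or die "cannot open $txt_file: $!";
    chomp( my @lines = <$fh> );
    close $fh;
    return @lines;    # one string per extracted line of text
}

# Usage: extract the text, then compare it to known good values.
my @extracted = parse_pdf( get_pdf_objs('monthly_report.pdf') );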
Using Perl or any of the other supported scripting languages, we can easily script the handling of PDF files using an external tool like PDFlib TET and custom utility functions developed for this purpose. The sketch above illustrates the concept; the overall process is outlined below:
Start the application
Navigate the UI until a report is generated
Print that report
Pass the file to a PDFLib function to extract text, images, etc.
Parse the extracted text, images, etc. and compare to known good values
I'd like to know your thoughts via comments.
Text
Testing with Perl - Test::More
The More module, available via CPAN, is part of the Test::* series and provides a wide range of testing functions. In the previous post we learned that TAP is just a simple text-based interface between testing modules that are part of a test harness; Test::More is one of these modules that utilizes TAP and expands on the simplicity of the protocol by providing various ways to say "ok". It includes, among other things (a quick sketch follows the list):
Better diagnostics (not just OK / NOT OK, but also for example WHY)
Capability to skip tests given a range of criteria
Capability to test future features (using TODO)
Compare complicated data structures
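Here is a minimal sketch showing each of these in action; the RUN_INTEGRATION environment flag and the values being compared are made up purely for illustration.

use strict;
use warnings;
use Test::More tests => 4;

our $TODO;    # used by the TODO block below

is( 2 + 2, 4, 'basic arithmetic still holds' );    # failure diagnostics explain the WHY

is_deeply(                                         # compare complicated data structures
    { user => 'fv', roles => [ 'admin', 'tester' ] },
    { user => 'fv', roles => [ 'admin', 'tester' ] },
    'nested structures match',
);

SKIP: {
    # skip tests given a range of criteria
    skip 'integration environment not reachable', 1 unless $ENV{RUN_INTEGRATION};
    ok( 1, 'integration-only check' );
}

TODO: {
    # test a future feature; reported, but does not fail the run
    local $TODO = 'feature not implemented yet';
    ok( 0, 'future feature behaves as designed' );
}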
Let's jump right in and use the above-referenced module to test a web application available at myapp.com (a fictitious website). As part of our test we should:
Navigate the login page
Make sure we are in the correct starting point
Enter our login credentials
Login to the site
Verify our landing page after the login operation.
Implemented as a Perl test script that uses Selenium::Remote::Driver and Test::More we might write the above as:
1: $driver->get($page);
2: my $loc1 = $driver->get_current_url();
3: is ( $loc1, LOGINHOME, 'Verify landing page' ); # THIS IS TEST 1
4: login_as( $user_id, $password );
5: my $loc2 = $driver->get_current_url();
6: is ( $loc2, APPHOME, 'Verify landing page after login' ); # THIS IS TEST 2
In line 1 we navigate to our starting page (the login page) using Selenium's get method. In line 2 we declare a variable that will hold the URL we just navigated to, as returned by Selenium's get_current_url method. Then, in line 3, we utilize Test::More's is() function to assert that the page we landed on, '$loc1', is in fact the expected one, 'LOGINHOME'. Line 4 executes a page object whose sole purpose is to log in to the web application under test. After the login operation we once again get the URL of the page we landed on, '$loc2', and compare it to the page we expect to be on after the login, which is 'APPHOME'. NOTE: I used the term Selenium above for readability to refer to Selenium::Remote::Driver - the Perl binding to WebDriver. Below is the output that would be sent to the harness in a PASS case:
ok 1 - Verify landing page
ok 2 - Verify landing page after login
And the output for a FAIL case:
not ok 1 - Verify landing page
not ok 2 - Verify landing page after login
One of the features of Test::More is that whenever a test fails, it gives you meaningful information (the whys) that you can use when evaluating the failure. For example, for the failure above, the following would be sent to the error output:
# Failed test 'Verify landing page'
# at C:\Users\Freddy Vega\SsApp\create_decision_tree.pl line 107.
# got: 'http://myapp.com/myloginpage'
# expected: 'http://myapp.com/Account/Login'
# Failed test 'Verify landing page after login'
# at C:\Users\Freddy Vega\SsApp\create_decision_tree.pl line 113.
# got: 'http://myapp.com/Apphome'
# expected: 'http://myapp.com/Home'
# Looks like you failed 2 tests of 2.
As you can probably see by now, testing with Perl means not having to re-invent the wheel every time a testing problem arises. In our solution we were able to use Selenium::Remote::Driver to drive our application under test (a web app). We used Test::More to make our assertions while testing, and TAP::Harness to tie it all together and produce results that can later be mined, interpreted, and presented to different audience types (management, users, developers, etc.). In the next post in this series I'll tell you about test harnesses and how you can combine them with other tools to help you design a robust automation framework.
Quote
The only part of software development that may not be predictable is when the programmer is actually coding.
FV
Text
More data driven functional testing using Perl and MySQL
So I stumbled upon this conversation, I don't remember how, but the question asked was about a "quick sample", if there is such a thing, of using a database to store test data for functional tests. Specifically, this question was asked in a forum of Selenium users. I find that Perl has a good set of modules for, well, for EVERYTHING! Literally! Including interfacing with databases. It is very well documented, by the way; check out http://dbi.perl.org/ for complete details as well as online docs with loads of reference material. Basically, in order to get data to use in your test scripts you can do the following (again, this is in Perl and I'm using DBI with the mysql driver):
1. Define the test data by writing the SQL queries needed to get the data from the database
2. Create a connection to the database
3. Execute the query and stuff the data into an array
4. Iterate through the data in the array until there is no more data
5. Clean up
use DBI;

my $query = "SELECT email, pword, open_url, provider_name
             FROM messaging_test_data_tbl
             WHERE is_active = true
             LIMIT 4";

my $dbh = DBI->connect( "DBI:$dbtype:$dbname", $dbun, $dbpw,
    { RaiseError => 1, PrintError => 1, AutoCommit => 1 } );

my $sth = $dbh->prepare($query);

### GET THE TEST DATA FROM THE DB
$sth->execute()
    or die "Cannot execute the query: " . $sth->errstr . "\n";

### AND THEN ITERATE USING fetchrow_array ###
while ( my ( $email, $pword, $open_url, $provider_name ) = $sth->fetchrow_array() ) {

    # do all of your test stuff with each row of data returned within this while loop

}
$dbh->disconnect;
exit;
Basically, no matter what language you are using, you can follow a similar flow. And don't forget to check out the linked documentation; it's a good reference no matter what language you are programming your tests in.
Text
Run a test suite from the command line using Perl
So you have a bunch of functionality you've automated and you'd like to execute the scripts unattended, both on a set schedule and on certain triggers.
In this quick post I show you one way of running a test suite from the command line using Perl, executing the tests sequentially (i.e., one after the other). The script is appropriately named run_test_suite.pl.
The code
1: #!C:/Perl64/bin/perl
2: use strict;
3: use warnings;
4:
5: my $result_dir = "C:\\Automation\\Tests\\$ARGV[0]\\";
6: opendir (my ($dh), $result_dir) or die "can't open dir: $!";
7:
8: while (readdir $dh){
9:     if ($_ =~ /pl$/){
10:         system ( $result_dir . $_ );
11:     }
12: }
This script takes one parameter [line 5], the name of a directory in the C:\Automation\Tests folder, which is where the test scripts should reside. We use this value to read all of the files in the directory [line 6] and, using a regular expression, only act on the ones that end in pl [line 9] (since we're looking for Perl files). Finally, we use the system function to execute each script [line 10].
Once you have the above script, assuming you have perl.exe in your path and Perl mapped to open your pl files, to run it manually type:
c:\>run_test_suite.pl functional_tests
Alternatively, you can add the Perl script to your services manager or crontab to be run periodically, or add it to your CI flow to be executed when certain conditions are met.
Quote
To avoid criticism, do nothing, say nothing, be nothing.
Text
Method to configure your Selenium Desired Capabilities object
Before you can call setup_selenium you have to build the desired capabilities data structure that you will pass to it. There are many ways of doing this; here is one way of accomplishing it in Perl:
sub set_capabilities {

    my $self     = shift;
    my $browser  = shift;
    my $platform = shift;

    my ( $grid_server, $os_platform, $port, $app_name, %desired_capabilities );

    given ($platform) {
        when ('WIN7x64') {
            $grid_server = X64_HUB_IP;
            $port        = X64_HUB_PORT;
            $os_platform = 'VISTA';    # The enum has not been updated yet VISTA = WIN7
        }
        when ('WIN7x86') {
            $grid_server = X86_HUB_IP;
            $port        = X86_HUB_PORT;
            $os_platform = 'VISTA';    # The enum has not been updated yet VISTA = WIN7
        }
        when ('XPx86') {
            $grid_server = X86_HUB_IP;
            $port        = X86_HUB_PORT;
            $os_platform = 'XP';
        }
        when ('MACx86') {
            $grid_server = MAC_HUB_IP;
            $port        = MAC_HUB_PORT;
            $os_platform = 'MAC';
        }
        when ('MACx64') {
            $grid_server = MAC_HUB_IP;
            $port        = MAC_HUB_PORT;
            $os_platform = 'MAC';
        }
        when ('VISTAx86') {
            $grid_server = X86_HUB_IP;
            $port        = X86_HUB_PORT;
            $os_platform = 'VISTA';
        }
        when ('VISTAx64') {
            $grid_server = X64_HUB_IP;
            $port        = X64_HUB_PORT;
            $os_platform = 'VISTA';
        }
        default {
            $grid_server = DEF_HUB_IP;
            $port        = DEF_HUB_PORT;
            $os_platform = 'WINDOWS';
        }
    }
    given ($browser) {
        when ('IE8')  { $browser = 'internet explorer'; $app_name = 'IE8'; }
        when ('IE9')  { $browser = 'internet explorer'; $app_name = 'IE9'; }
        when ('IE11') { $browser = 'internet explorer'; $app_name = 'IE11'; }
        when ('FFX')  { $browser = 'firefox'; $app_name = 'FFX'; }
        when ('CHR')  { $browser = 'chrome';  $app_name = 'CHR'; }
        when ('SFR')  { $browser = 'safari';  $app_name = 'SFR'; }
        when ('OPR')  { $browser = 'opera';   $app_name = 'OPR'; }
        default       { $browser = 'firefox'; $app_name = 'FFX'; }
    }
    unless ( $browser eq 'chrome' ) {

        %desired_capabilities = (
            remote_server_addr => $grid_server,
            browser_name       => $browser,
            platform           => $os_platform,
            port               => $port,
            application_name   => $platform . $app_name
        );
    }
    else {
        %desired_capabilities = (
            remote_server_addr => $grid_server,
            browser_name       => $browser,
            platform           => $os_platform,
            port               => $port,
            application_name   => $platform . $app_name,
            proxy              => { proxyType => 'system' }
        );

    }
    return %desired_capabilities;

}
Then in your test script you call the set_capabilities method, passing to it the $browser and $platform the test is being executed on. You'll then get back a properly formatted %desired_capabilities hash with all the desired and chosen attributes:
my %desired_capabilities = Custom::SsApp::set_capabilities( undef, $browser, $platform );
Text
A method to setup your Selenium session
I find that it's easier to delegate all of the Selenium setup to other worker methods. Doing so cuts down on boilerplate code in your test script and also leads to more readable, and therefore more maintainable, test code. This is one way you can accomplish the above in Perl:
sub setup_selenium {

    my $self         = shift;
    my $browser      = shift;
    my $capabilities = shift;
    my $driver;
    my %desired_capabilities;

    if ($capabilities) {
        %desired_capabilities = %$capabilities;
    }

    # Start selenium with capabilities if passed in from test script
    unless ( keys %desired_capabilities == 0 ) {

        $driver = eval {
            Selenium::Remote::Driver->new(%desired_capabilities);
        };
        return $@ if $@;    # Return with error if capability not matched
    }

    # Or just start it with default settings
    else {
        $driver = eval {
            Selenium::Remote::Driver->new( browser_name => $browser,
                proxy => { proxyType => 'system' } );
        };
        return $@ if $@;    # Return with error if capability not matched
    }
    return $driver;

}
Then, in your test script, you first call set_capabilities and pass to it the $browser and $platform being tested on, which returns a hash with all of your desired capabilities:
my %desired_capabilities = Custom::SsApp::set_capabilities( undef, $browser, $platform );
Then you pass a reference to that hash to the setup_selenium method to get back an instantiated WebDriver:
my $driver = Custom::SsApp::setup_selenium( undef, undef, \%desired_capabilities );
The above $driver can now be used throughout your script to navigate and interact with the web application or site.
Text
Wrapped screenshot method for Selenium
Wrapping WebDriver's screenshot function seems like a good idea. One of the first benefits that comes to mind is the ability to not only control the file nomenclature but also manipulate things like file locations, size limits, formats, etc.
As always, there are many ways to implement this; here's one way in Perl:
use MIME::Base64;    # provides decode_base64, used below

sub get_screenshot {

    my $self = shift;
    # Random number used in file nomenclature
    my $random_number = int(rand(99999));
    # This is the BASE file path you'd like to save your screenshots to
    my $file_path = "C://Screenshots//";

    open (FH, '>', $file_path . $self->{test_id} . $random_number . $self->{browser} . $self->{test_timestamp} . 'screenshot.png');
    # We have to do this to the handle otherwise you'll get garbage
    binmode FH;

    # Use Webdriver's screenshot function
    my $png_base64 = $self->{driver}->screenshot();
    # Write the screenshot image to the disk
    print FH MIME::Base64::decode_base64($png_base64);
    close FH;

}
Then in your test scripts, whenever you need a screenshot, you can just call your own custom get_screenshot function:
$webapp->get_screenshot( 'NORMAL', 'Stop_Campaign_ID_' . $campaign->{test_timestamp} . '_' );
Since you are calling this function from your test script, you can include as much or as little information as you would like in your call to get_screenshot. Things like test name, test description, etc., are all values that the test script should at least be aware of and, as such, should be available when building your screenshot nomenclature.