simplythetest
Simply The Test
Testing software and beyond. Blog of Josh Grant, also known as @joshin4colours on Twitter
simplythetest · 26 days ago
On Writing
I'm breaking new ground with this post: I'm writing about why I write. The topic and tone may be a bit different from usual, but I hope you enjoy this for one reason or another.
AI Is Eating The World
I've been keeping a blog on software testing and software development since 2012. I don't write every day like other (very good) technical bloggers, but I have written quite a bit. I've also recently started writing more personal and less software-oriented posts on Medium. Basically, I've been blogging online for a while, and I've enjoyed writing casually since grade school.
So far, it's mostly worked out. I like to write about various topics, and some people have read my blogs and even commented on them. Everyone's happy about this arrangement, or at least, no one has seriously complained to me about it.
Fast forward to the past year or two. The hot new technology is AI - mostly meaning large language models from machine learning done at scale. These AIs have come in hot, disrupting basically everything you can think of. Whole industries, full-time jobs, culture, copyright, software development: you name it, someone is trying to "disrupt" it with AI. One area that I've been watching from a distance is using AI for writing. People have used AI to write school essays for them (kind of bad!) while others use AI to improve their English writing in professional settings because English isn't their first language (kind of good!). The list goes on. I recently heard that the fairly well-known writer Paul Ford was told people have used ChatGPT to write in his style, and he didn't seem overly bothered by it. It got me thinking: could these AI tools be used to write like me?
So I tried it: I asked ChatGPT to write a 500-word article in the style of Simply the Test - the blog you are reading right now - and...it did. It wasn't perfect by any means, but it was pretty good. It even chose a topic related to software testing without prompting. Reader, I had no idea what to think about this. I'm no famous career writer, but the situation is still bizarre. Imagine how someone like Stephen King feels.
The whole situation got me thinking about writing, why I write, and why other people write. And here we are.
So Why Do I Write?
Like I wrote above, I've always enjoyed writing. I've always enjoyed reading, but writing down thoughts and ideas has appealed to me for a long time. There's something about writing that I simply enjoy: putting together a thought and capturing it in a distinct way. Learning to touch type and getting access to computers accelerated things. I've always liked computers for the way you can not just create but also destroy or revert things should you have the desire. Now, I can just open a laptop, click a few buttons and start typing away. It's great.
This is why I cannot (cannot!) get my head around folks using AI tools for creative writing. I understand using these tools in other contexts. Many business emails probably should write themselves. Writing essays for a university degree or some course would be much easier if a robot did most of it for you, not that I think this is a good idea or one I endorse. But writing for the sake of writing, done by an LLM? I just don't see the point. Of course book writing has been flooded by AI-generated crap, and it turns out you can even "write" like well-known authors thanks to AI. But my question is: why would anyone bother with this? Why would someone read something a human barely put any effort into creating? And why would you create writing out of robotic machinery in the first place?
I have never and likely will never use AI-based tools in my personal writing, even as assistance. It just doesn't make sense to me. It's not that I'm tool agnostic. I use proselint all the time when I write. Proselint is an automated prose "linter" that checks for things like repeated words, outdated language, and so on. It's a good tool alongside a spellchecker. I've also dabbled with Grammarly, which I won't link to because I now think it's kind of terrible, but it's a similar tool. But the idea of "generating" ideas from some third party or revising my work according to some trillion-dimensional LLM seems like a step backwards for me. I like rolling around an idea or two in my head, sitting down, and writing it all down. Why would I want to do less of that?
I have considered the idea of writing some kind of machine learning script that will train a model based on my writing and then create some sample blog entry based on parameters. For me that would be more of a machine learning hobby project than a writing project. Ultimately the result would be something I'd still evaluate. If it was good writing, and it totally could be, I'd publish it somewhere noting the process. It wouldn't be something I wrote, it would be, well, something else. It's also not something I'd imagine using beyond some toy examples and outputs because I like writing and I like my approach to writing.
Which of course, brings me to my writing approach.
A Little Bit of My Writing Approach
Now if you've done any amount of significant writing you may be thinking "actually, it would be great if I could get a tool to do some of my writing for me!" I feel that sentiment. There can sometimes be nothing worse than a writer in the middle of writing (or editing). But for me, writing is all about capturing an idea, getting it into a workable form, and then sending it out to the world. That's the main part I like: coming up with an idea, putting that idea into a concrete form, then pushing the post button. Something from nothing. Words where there were none. This sort of thing. This is also why I tend to write more technically minded pieces. What's more concrete than comparing two Python build tools or showing how to use RobotFramework with Sauce Labs? These are tangible ideas brought to a solid, usable form. Maybe these aren't groundbreaking concepts, but they're interesting to me and possibly interesting to someone else. It's still cool to me to have an idea and then bring that idea from the aether to a physical form.
Another motivation for my writing is to include punny titles and quips in posts where I can. Honestly, I would write a whole book just to get a good joke in. Sometimes this works well, and sometimes less so.
I've also thought a bit about the process of my writing since I recently moved my drafts to GitHub into a public repo, which you can find here. Previously this repo was private, but after noticing that GitHub now renders Markdown files correctly, I thought "why not" and made the whole thing public. You can see my commit history, which typically shows one file per commit, with each file corresponding to a single post on here or on Medium. One could say I've made my writing and writing process open source. One could, anyway.
Generally my process is pretty lightweight. I don't use any fancy editing tools, and in fact am generally not a good editor at all. I've written pieces with professional editors and cannot recommend having a good editor enough. They will make your writing many, many times better. Again: I like this setup. I get to pick how I do things and this is what I'm doing. No AI required.
So Why Should Anyone Write?
Why indeed. There's so much writing out there already, why create more? Mainly because we're humans and that's what humans do.
Even if you've decided to write a creative blog post or how-to article using a newfangled AI setup, you should still think about why you're writing what you're writing. If it's to check a professional task off a to-do list, then that's fine: we all have our lives to live. But if you're doing something for a creative or personal endeavor, take a bit of time to really think about what you're doing.
I like thinking. It's an important part of my writing process, maybe the most important. For others, that may be less the case. Someone might want to see the end product on a particular platform or publication, while others might want to have the wording just right. Maybe you prefer writing first and crafting multiple drafts instead of trying to "one-shot" it. But you should probably have a purpose or clear goal for your writing, even if you introduce machine learning into the process. I really do believe that the process is critical for writing; it informs and gives purpose to the writing itself. Without a purpose or reason for your writing, well, what's the point really?
simplythetest · 2 months ago
Why I Love Get By Text
I have a confession: I've been writing browser tests lately, and my preferred approach to locators is becoming get by text or get by label.
I am aware that some of you might want to throw some full wine bottles at me now. But I stand by it.
Over the course of my career as a test automation specialist, I've worked with a bunch of web applications for which I automated browser tests. One of the most critical aspects of writing browser tests is finding good locators to hook into in order to drive the application. Naturally, since there are plenty of options, there are also plenty of opinions on what kind of locator strategies to use. Typically these follow some kind of pattern like this:
1. Use id attributes that are permanent, if you can. If you can't, then
2. Use data-testid or other custom attributes specifically for automation if you can. If this isn't an option, then
3. Use class attributes, which tend to be stable. If you can't do this, then
4. Use CSS properties to specify elements. And if all the above aren't options, then
5. Use text or xpath locators or something and hope for the best.
Generally patterns like this are a good heuristic for identifying locators. However, the nature of front-end web applications has gradually changed over the past decade. Most front-ends are now generated through frameworks and not through hand-written HTML, CSS and JS. A result of such frameworks is that elements can't always be directly manipulated by developers, and you need to rely on the capabilities of the framework. Browsers (and computers more generally) have gotten faster and more efficient. And lastly, tooling has evolved greatly for browser automation. WebDriver, the protocol behind Selenium, is a web standard now, and there are lots of other tools that can be used.
Based on all this progress, one would imagine that there's been progress on how to choose or use locators well with modern and maybe less-modern web apps and pages. One would be, I think, disappointed to find out there hasn't been much progress here. Finding and maintaining locators is pretty similar to how things looked many years ago. Front-end developers still hesitate to add custom attributes for testing sometimes. Newer web frameworks dynamically create elements, so id attributes are either not present or not reliable enough for automation. No one understands CSS, still.
What to do based on this state of affairs? I've been using Playwright lately for browser automation, and Playwright provides a getByText() method for finding elements. I started using it out of convenience at first and, well, I'm convinced it's a good approach. Why? Because - frankly - it works well.
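To make this concrete, here's a minimal sketch of what a getByText-based Playwright test can look like. The URL, labels, and messages here are invented for illustration:

```ts
import { test, expect } from '@playwright/test';

test('submit the signup form', async ({ page }) => {
  await page.goto('https://example.com/signup'); // hypothetical page

  // Locate elements by the text and labels users actually see,
  // rather than by ids, classes, or CSS selectors.
  await page.getByLabel('Name').fill('Josh');
  await page.getByText('Submit').click();

  await expect(page.getByText('Thanks for signing up')).toBeVisible();
});
```

If the button's text ever changes, the test fails with an obvious message and the fix is a one-line edit.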
The thing about text in web applications, whether that be labels next to inputs or placeholder text, is that it's actually fairly stable. Most buttons with the text Submit will continue to have the text Submit for a long time. And if the text does change on an element it is straightforward and obvious to update your tests. Plus, text doesn't tend to go away: moving from Angular to React to Vue to Svelte still means your Name field has a label of "Name" that end users will see.
One big objection to using text is localization and internationalization, which can be a valid point. However, if your web app has five options for language, does that mean the logic and workflows change as well? They might, but if they don't, you can likely test one language and still feel confident in the test results. If you can't use text-based locators, then you'll have to evaluate your strategy anyway.
I am a big fan of the adage "What's the simplest thing that could possibly work". When it comes to finding elements by text, this advice seems to hold true.
simplythetest · 3 months ago
The Software Quality Floor
Lately I've been thinking about software quality, and in particular software product quality versus software technical quality. I might expand on this difference but for now I want to focus on software product quality.
Years ago, the insightful Cory Doctorow coined the term "enshittification" to describe the process of a software product having its quality decline over a period of time. There's now a Wikipedia entry for enshittification, so if you're curious you can get more details there. The main idea behind enshittification is that there are factors (typically business-driven) that drive down a software product's quality over time. This may apply to non-software products, but typically it focuses on software. There are many examples out there, such as X/Twitter. For me personally, this platform has gradually had its quality erode to the point where I really do not use it anymore.
One of the assumptions of the enshittification theory is that a piece of software starts with a relatively high level of quality, and in turn that quality can be traded off for other aspects of the product. For example, Google made its search worse to boost its ad business. Even so, lots of people still use Google search, perhaps not even being aware of the reduction in quality.
This is where I make a claim: software quality can be lowered or traded off, but there is still a quality floor; once quality falls below it, users and customers will no longer use or buy the software product.
There is some economic theory to back up such claims, but typically those are stated in terms of supply and demand of a good, not really in terms of quality. But I think it's still true when quality is taken into consideration.
Going back to my example of X/Twitter, some product quality decisions gradually caused me to stop using the product. The appearance of obvious spam accounts, excessive and deceptive advertising tweets, and pushy paid service offerings caused my use of X/Twitter to decline, eventually leading me to basically leave the platform. The quality of the product is simply too low for me to be interested in using it.
I think this is an important aspect of product quality. We tend to think of product quality in terms of "delighting" users and customers, increasing quality or at least keeping a consistent experience. However, there can be a cost to reducing quality, direct or not.
simplythetest · 4 months ago
Some Assorted Thoughts On AI
Some thoughts on AI, Large Language Models (LLMs), and Machine Learning, in no particular order.
AI
From all the discussion around AI, it seems like AI in the workplace is a form of automation. Of course the canonical guide to how automation can disrupt the workplace in the United States is Player Piano.
There is a very small slice of the workforce of human workers who can basically be replaced one-to-one with machines or software. These people know it, and so do their bosses.
I simply do not believe that AI will become sentient and destroy humanity.
I saw someone asking about AI use in "3rd and 4th party dependencies" in some software systems. AI might just force companies to understand in detail how their software dependencies work.
AI might just change professional software development the most out of any industry, but that remains to be seen.
As with many things, artists will likely show us the true value of AI, in some form or another.
I don't even know what to do or say about OpenAI.
As with other software products, software quality is a thing that exists and matters. One day I'm sure a lot of Silicon Valley AI companies will realize this.
AI also seems to be something that will expose the sheer weirdness of humanity, with products like this one: https://www.404media.co/i-tested-the-ai-that-calls-your-elderly-parents-if-you-cant-bothered/
LLMs
Note to Self: Learn some machine learning stuff in Python.
Remember Big Data? Hadoop? These used to be hot technologies. Large Language Models may follow a similar trajectory as folks figure out they don't necessarily need the "large" part of them.
LLMs are truly disruptive, and will probably be incorporated into lots of things over time. I think this is largely a good thing.
Unlike the AI hype, LLMs have been building as an area of research and practice in data science and software engineering for a while now.
Machine Learning
One of the most memorable demos I've seen in a software development/engineering context was for a machine learning algorithm. In the first half of the demo, there was an explanation of the algorithm, its assumptions and the models used. This part made effectively no sense to me since I'm not really too deep in data science. The second half showed how this model could be pointed at a plain spreadsheet of fake employee data, with sheets containing lists of employees by office location, salary, department, and so on. The model took a plain English query and returned data, and it was frankly amazing. It just worked, flawlessly. I feel like this is still how a lot of people who aren't experts view machine learning today.
simplythetest · 5 months ago
I Hate Authentication
I wake up one morning, and my kids want to watch something on a streaming service. I go to that service on my television, and find the service has logged me out. Groggy, grumpy, and unsure of the password since it uses my partner's account and email, I think "I hate authentication".
I start my day of work, noticing there's a meeting that's been added right at the beginning of the day. It seems urgent; my boss organized it. I try to open up the meeting in the video conference app, and it requires Google Authentication, which in turn requires my Google password, which I keep in a password manager, which also requires a password. As I fumble around multiple apps and in turn end up late to this meeting, I think "I hate authentication".
As part of my work as a software developer, I research how to use a service's public API. I find exactly the endpoint I need, and even find some documented examples that line up with my use case. I am elated, until I notice that the API requires some kind of token in order to access it. As I enter my second hour of research to find what and where these tokens are, I think "I hate authentication".
Late in the work day, I want to log in to a tool I use at work, when it prompts me: "Password expired. Please update your old password with a new one". No other information is provided. As I notice I might be late for my train, I think "I hate authentication".
At the end of a long day, I want to sit and enjoy a movie I've been looking forward to. I remember I still can't login to that streaming service from earlier, and my partner is out for the evening. As I crack open my beer, with a mixture of sadness and anger, I think "I hate authentication".
simplythetest · 7 months ago
Thinking (Too) Big: Targeting 1 Billion Users
Today I read a great article from 404 Media that discussed how one startup founder wants to use AI to create music. The entire article is great, but I was struck by this one sentence from the startup founder (emphasis mine):
"We didn’t just want to build a company that makes the current crop of creators 10 percent faster or makes it 10 percent easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people."
It's quite a statement: the idea of building an application for 1 billion people seems to be something this founder and CEO has thought about. In this case, he's thinking about 1 billion users for an AI app that creates music.
I think this is a terrible approach to product development.
I'm not going to discuss the value (or lack thereof) of using technology to automatically create music, other than saying that this is definitely something that could be done. I'm more interested in the idea of building for a large number of users from essentially the founding of a company. One billion is a big number, and humans are naturally inept at reasoning about big numbers.
First, let's look at some comparisons: the population of China is around 1.4 billion people. The population of Europe and the USA combined is around 1 billion people. These numbers include babies, infants, people in comas, people without regular access to computers, and so on, so it's not likely that every person in these regions could use an AI product. If we include only people who could use this kind of product, we're looking at a user base larger than the Western world, roughly speaking.
That's a lot of people. What are your SLAs and what's your uptime strategy for that? How do you support that many people even being on your website?
This leads into the next problem: what's your business model? Running a business means having costs, so you need to generate revenue to offset this and either turn a profit or exist long enough to exit the market. What's your plan for this?
Targeting a billion users means your options are limited. The first thing you could try is to charge for your product, but that will be difficult if not impossible. You'll necessarily need to collect payments in multiple markets and currencies. More crucially, you'll have to convince at least several millions of people to hand over money for your product. That's a tough sell, particularly if your product is optional, which almost all software products are.
A more well-trodden path is to make your product completely free for end users. Facebook does this: anyone can create a Facebook account for free and start using it. Of course, this means you'll have to figure out some way of generating revenue from this user base. Advertising comes to mind. Less savoury practices such as data collection and sale come to mind as well.
And finally, the most pressing matter: what exactly makes you want to build something for one billion people? Why start there?
The wonderful Gerald Weinberg once wrote that things are the way they are because they got that way. While this quote predates Weinberg's writing, it is a great way to think about product work. Something used by billions of people was previously used by millions of people, and that product was used by thousands of people, and so on. A billion people is a lot of people: thinking of what they want or need uniformly is basically impossible.
Now, music is one thing that a billion people might have in common, whether it's music enjoyment or performance. However, this is still a big nut to crack, and likely would need some deep questioning: what about enjoying music is common to such a large group of people? How big is the difference between listening to music and creating music? These aren't just product questions to be discussed in requirements meetings. Questions of this nature are philosophical and profound, not to mention extremely complex.
Overall, I'm skeptical of any startup plan that includes "impacting 1 billion" people as a first order goal. The numbers don't support it.
simplythetest · 8 months ago
Looking back at 2024
All I can truly say about this year is that things were bananas.
Here's a look back at 2024 from me:
Software Development
It's cliché, but honestly: learn the basics, it will pay off over time
There's a strange convergence of Java and JavaScript in terms of general usability. This might get interesting over the next while.
In some ways, Node's user experience is pretty good overall, perhaps on par with Java
Honestly I haven't used AI-tooling too much to generate code, so take that for what it's worth
This year I got to dive into Playwright for browser testing and so far, Playwright has been pretty decent. There are some occasional things I miss from the Selenium WebDriver ecosystem, but Playwright is geared towards modern test automation so I do enjoy it
This post on replacing authentication and authorization with login and permissions was a notable one this year. Sometimes solutions to technical problems aren't always that technical, and really need a bit of a shift in thinking more than anything.
General Tech
Social media has greatly shifted in the past few years. I still think no one quite knows what to do there.
LLMs and small-language models could have a moment in the coming years. Remember Big Data?
There are so many intelligent folks outside of IT departments that could really be excellent in IT, and vice versa.
Years ago I saw someone refer to Silicon Valley as "Silly Valley". That person was not wrong.
I used to think cryptocurrencies and blockchain were a fad destined to die away, and now I'm afraid I was incorrect about that.
Computers can and should be used for fun, non-serious stuff.
I had an interesting conversation with some software developers this year around software quality and automation. One had the opinion that in the age of AI everything, test engineers are going to become more valuable since they will actually understand what's going on with particular projects. I think I agree but we will need to see how this all plays out.
Parenting
Raising kids is hard, regardless of your particular life situation.
I never thought going to Subway for a quick lunch would be so enjoyable to do, but here we are.
Something surprising to me is how much I enjoy drawing and painting with kids, particularly young kids.
Canadiana
I'm pretty fortunate to have seen as much of Canada as I have, this country is big and small at the same time.
Toronto is definitely a border town.
There are entire books to be written about the Kendrick/Drake battle from Drake's perspective.
I recently saw a video in which an American describes moving to Canada and living here for over a year. She pointed out that Canada has its own celebrities and business culture, among other things. I'm really hoping Americans catch onto this.
I'm utterly fascinated with the Arctic and Inuit culture. So close and yet so far away.
Generally Speaking
Art is truly amazing.
Taylor Swift is fine, get over it.
The election of Trump certainly means there's going to be a lot of strife over the next few years.
I've been thinking about this article on the tyranny of the US penny since I read it. There is a lot going on there.
Really hoping I can up my math game this year.
simplythetest · 9 months ago
The Hazy Divide Between Manual and Automated Testing
"Are you a manual tester or an automated tester?"
I've heard this phrase or something like it for most of my career. In many organizations, there's some kind of distinction between manual testing and automated testing. In some cases there may even be different roles for manual testing ("QA Analyst") and automated testing ("Software Engineer in Test"). Much ink has been spilled on whether this division is helpful or not, so I'll simply mention that this distinction exists in some form in a lot of places.
Something I've noticed is that this distinction is getting more blurry all the time, in ways that I didn't expect. Of course this has been true for a while but things like machine learning are highlighting the haziness of this divide.
Here's something I did today that really drove home this point: I opened up a whiteboarding app wanting to see if I could make a mind map. I couldn't find the actual controls to create one, so I typed "Create mind map to test email" into an AI prompt, clicked a button and - poof - a reasonable mind map was created with branches like Security testing, Performance testing and so on.
A basic "manual" testing skill - creating a mind map to test an area of software - has been automated to some extent. Perhaps the manual is becoming the automated.
I've seen "automated" testing is being automated with things like low-code and no-code tools and code generation tools. While complete test code generation fully replacing SEIT work is I think a computer science research fever dream, these tools for test automation have come a long way. Now imagine an individual on a team who can operate both manual and automated testing via automation, which is not an unlikely scenario.
Also note that there's nothing stopping someone without a job title including "quality" or "test" from using the above tools to do their work.
The wide gap between manual and automated testing is closing. What happens next could be anyone's guess.
simplythetest · 10 months ago
Testing Specialists Shouldn't Advocate for the Customer
A long time ago, when I started in software, I read about what software testers should do as part of their roles. One thing that came up was the notion of testers being advocates for the customer. In this model, testers would find issues with software and advocate that certain issues be addressed and remedied "on behalf of the customer". After all, a tester can find issues, and who else can defend what the customer wants and needs better than a testing expert?
It turns out that a bunch of different people could advocate for the customer, and should.
For example, a product manager or product owner may regularly get direct feedback from customers, through interviews or surveys for example, where actual end users of software tell them exactly what they do and don't want. Customer support engineers see firsthand problems customers run into and help address those issues, possibly feeding information back into product development teams. And let's not forget our dear friends in customer success and sales, as these people regularly talk to customers and potential customers about all kinds of things. These roles have the ability to advocate for the customer since they actually know what the customer might want.
Sometimes it feels correct for testers in development organizations to want to advocate for end users. In some cases, testers are the end users. A tester who works for a bank and uses an online portal for that bank as a customer might have some insights in actually using the software. This might be even more true for software such as test case management or test automation, if you find a tester working for a company creating such software. Often, this isn't actually the case: a tester may be an excellent software professional contributing to software they will never use. In fact, they might not even come into contact with anyone who does use it. I've worked on a couple of software products that I certainly will not use for one reason or another.
The reason I point this out is that testers who try to advocate for "the customer" might have good intentions but actually not be helpful. Guesses - even educated ones - might lead to addressing quality issues that don't really exist. This wastes time and effort for everyone. Arguing with folks in product over hypotheticals can be a real drain on resources and enthusiasm. It also can create or worsen the perception of "testing as bottleneck" that slows everything down, since testers may chase unnecessary issues.
Naturally I believe quality is everyone's responsibility, but good teams work together.
simplythetest · 11 months ago
Programming is Fun Again
This past weekend, I had the distinct pleasure of attending PyBay, a Python conference in the Bay Area. PyBay is always good, with excellent speakers and an even better community of people who come together to geek out over Python.
One thing that I took away from this year's PyBay was the idea of having fun - and only fun - with programming. This is something I've been thinking about for the past year or two. I've been getting into making art by painting over the past two years or so, in part because painting does not make use of computers. I realized over the years that I associate computers generally with "professional" activities, and not "artistic" activities. This distinction has worked well for me (hello test automation) but I'm learning that computers can be used for fun things in addition to professional business things.
This past week, PyBay had two presentations that showed how you can use computers - with some Python, incidentally - for fun. The first talk was Playing Improv ...with Python! by the wonderful Pamela Fox. This talk showed how to play improv games with either human players or language model players. On top of being a nice introduction to chat-based LLMs, Pamela also showed that LLMs can be used for fun and silly things. It was a great talk.
Another talk that got me thinking about play was Scrolling Animated ASCII Art in Python by the enjoyable Al Sweigart. Al showed various examples of "scroll art", which is ASCII art generated by code and printed to a console line-by-line. Partly animation, partly word art, scroll art is totally cool. Al also mentioned that new forms of art are always being created, and that scroll art is a straightforward approach to teaching programming on virtually any computing device, which ties in nicely with the lo-fi artistic aspect.
In both cases, the goal of programming or using software was to have fun. In particular, scroll art is based on barebones programming constructs: only loops, conditionals, variables, and ASCII or Unicode characters are used to create scroll art programs. The goal is to create something aesthetic and enjoyable. Computers can do that.
The theme of making computers fun again does tie into the greater Python ecosystem and community. There is definitely a good community vibe when it comes to taking chances and trying out new, fun ideas. Python allows humans to be human, on some level.
As a test developer, I've often been on the other end of computers, working to make sure that programs function properly for mostly commercial customers. Like other programmers, I learn specific tools, languages, and frameworks in order to solve business problems. I enjoy this as a career and frankly it's a pretty good one. But that doesn't mean computers are always for work; they can be for fun, too.
simplythetest · 1 year ago
JavaScript In Anger: Global Test Timeouts
For the past number of years, I've worked with a variety of NodeJS-based test frameworks. These frameworks include Mocha, Jasmine, and Playwright. Interestingly - to me at least - these frameworks tend to have patterns and constructs in common. While some of these patterns are good (using a config file by default, for example), there is one that I frankly hate: implementing a global test timeout, and that timeout is turned on by default.
Dear reader, I don't like this feature. Not in the slightest.
The idea is straightforward: there is some timeout value - let's say 30 seconds - where if a test run's total execution time exceeds this timeout, the tests either fail or raise an error with a message along the lines of "your test run has exceeded 30 seconds". In many cases, this timeout is by default around 30-60 seconds. Having such a feature configured this way is, flatly, bonkers.
Test frameworks come in all shapes and sizes, just like software projects. Some are lean and mean, with a small number of relatively fast tests. Others are large and slow, with test code that is far from optimal. If you are creating a framework for test automation, you should anticipate that each of these kinds of projects might use your tooling. For some, it may take 30 seconds just to set up and connect to the system under test. Your tests will fail out before they even get started.
Remembering the power of defaults in software, a default value can greatly influence how your tooling works. A failed test is a failed test, regardless of the reason. If the reason ends up being that someone on a different team years ago decided that 30 seconds is "too long" for a test run, then that can lead to some awkward conversations. Coding frameworks should help your team and work, not be a stumbling block. Having a test fail because it took longer than some unclear value seems like a violation of the design principle of least astonishment.
Of course there are cases where you can disable this timeout, but that just makes me wonder why it was enabled in the first place. For example, in Jasmine there is a DEFAULT_TIMEOUT_INTERVAL value that is the max time a spec can take before timing out and failing. But why not disable this by default? How long should an arbitrary spec take to run? It's a completely open question based on context. These are questions and discussions that could be avoided simply by not turning this feature on by default.
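For what it's worth, here's a minimal sketch of how a team might raise the Jasmine value in a helper file loaded before the specs run; the ten-minute figure is an arbitrary illustration, not a recommendation:

```ts
// spec/helpers/timeout.ts (hypothetical helper file name)
// Raise Jasmine's per-spec timeout. A very large value effectively
// disables it for slow end-to-end suites.
jasmine.DEFAULT_TIMEOUT_INTERVAL = 10 * 60 * 1000; // ten minutes, in ms
```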
I will say I can see some good use cases for a global test timeout. In mature continuous delivery pipelines, tests that exceed some runtime threshold could be taken out of the pipeline and re-evaluated, but this would require other pieces to be in place. A global timeout could also be a rough signal of performance problems, but again, other checks and balances would likely need to be in place for this to work.
Having timeouts like this enabled by default is something that does not spark joy in me, and I think the same goes for many teams working with JS testing for the first time.
simplythetest · 1 year ago
Enable, Execute, Expect
Much ink has been spilled (metaphorically) around the topic of writing good unit tests. One of the best patterns I've seen for thinking about how to write good unit tests is the Arrange-Act-Assert pattern, also known as Triple AAA tests. Here's a selection of articles illustrating this pattern:
The Automation Panda's Take on AAA tests
Jay Cruz and the Three A's of Unit Testing on Dev.to
Learning unit testing from Microsoft
and so on.
The gist is that every unit test should have three parts: arrange your test with fixtures and elements to test, perform an act on these fixtures that you wish to test, and then assert the actual result of the action is the same as you expect.
This is a great pattern, except for one tiny issue: in some languages, there isn't really an "assert".
Increasingly, instead of the syntax
assert expectedCondition == actualCondition
we see something like
expect(expectedCondition).toBe(actualCondition)
This is true in the JavaScript/TypeScript world, as well as in the Ruby world. In JS you have this pattern in libraries like Jest, while in Ruby the RSpec framework uses expect as well. This could lead to developers who never actually use an assert keyword in code, which leaves the Triple AAA mnemonic in shambles.
Hence, here is the recipe for Triple EEE tests:
Enable test fixtures as needed
Execute a step to be tested
Expect a result and verify this happens.
A nice acronym, exactly expected.
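As a quick illustration, here's a made-up Jest-style test following the Triple EEE recipe; the ShoppingCart class is hypothetical:

```ts
import { describe, expect, it } from '@jest/globals';
import { ShoppingCart } from './cart'; // hypothetical module under test

describe('ShoppingCart', () => {
  it('totals the prices of added items', () => {
    // Enable: set up the fixture under test
    const cart = new ShoppingCart();

    // Execute: perform the steps being tested
    cart.add({ name: 'tea', price: 4 });
    cart.add({ name: 'mug', price: 11 });

    // Expect: verify the result is what we expect
    expect(cart.total()).toBe(15);
  });
});
```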
simplythetest · 1 year ago
Ask Why But Don't Expect An Answer
Lately, I've been thinking about code, maintenance and some other bigger thoughts. Often, things will occur over the course of working with software that make us go "But why?". Asking why something is the way it is is a necessary skill for a software tester, but like other skills it should be learned and worked on over time.
A long time ago, JB Rainsberger wrote something about working with legacy code. He was working with a piece of old code that made him go "why would someone write code like this?" or similar. (I'm not able to find the exact post of his.) His advice, however, was to ask "why" but not expect any answer. After all, the programmer who wrote the code was no longer there to answer it, and the answer wouldn't help the refactoring effort anyway.
I've been reflecting on this, and it strikes me as great advice for testers, developers and a bunch of other people as well. Asking why something (like code) was done a particular way is a natural question for people to ask. It can help provide context, as well as satisfy a need to understand the problem better. However, sometimes the explanation doesn't exist or isn't helpful. "Time pressure" doesn't exactly elucidate the problem, nor does reaching out to someone long gone from the situation.
Asking why is always something we can do to help us solve a problem, but we shouldn't always expect an answer, let alone a good one.
simplythetest · 1 year ago
A Short Digression on Test Data
Lately I've been thinking about test automation and test data. I've come to a rather philosophical opinion.
Test data is always the problem, and test data is never the problem.
Test data is always the problem: reading posts on generating test data for unit tests, as well as on wrangling test data for various environments, turns up a lot of conflicting thoughts. Some say to never use random test data, some say they love it. Most people agree having "realistic" testing data is helpful, but often for security or practical reasons user data can't be directly imported and used for testing internally. I think there's so much conflicting experience in part because test data is unique to each team working with it. The test data needed for an online banking application is nothing like the data needed to test a mobile video game, and both are nothing like the test data needed for enterprise accounting software. Throw in the fact that automation and manual approaches also have different needs and you'll see even more confusion.
Test data is unique to every team's problem. Even similar domains or applications may have subtle differences. Teams need to work things out themselves eventually.
Of course there's more to testing than simply data management, which leads to the second part of this, that test data is never the problem.
One of the biggest challenges I've had as a test automation specialist is getting people to understand what test automation is and why it's valuable. Even when I can achieve this and get support, test writing and strategy are still tricky sometimes. Getting developers - and their product managers! - to test and to test well is often a bigger problem than whether we have good test data or not. In the words of the wonderful Gerald Weinberg, it's always a people problem.
All problems with testing are due to problems of test data, until they aren't.
simplythetest · 1 year ago
The Possible and Impossible: Mathematical Thinking for Planning
Sometimes, even knowing what is possible can be big step in the right direction.
In the past I've written about impossibility theorems in mathematics. These are a class of theorems that definitively conclude that, given a set of constraints and conditions, something is impossible. A straightforward example is the impossibility of tiling the plane with regular pentagons. Try finding some regular pentagon shapes and making them tile a floor; you will find you cannot do this.
On the flip side, there is another class of theorems in mathematics called existence theorems. These are theorems that show that, definitively, something exists given some assumptions. Existence theorems may not provide any information on how to find the thing that exists, or even provide a clue to what its value is. They may not even guarantee there's only one of the thing that exists. The only information an existence theorem (in general) provides is that a thing exists. One example is the contraction mapping theorem: loosely speaking, if a function brings any two points closer together by some fixed factor - "contracting" distances - then iteratively applying the function converges to a fixed point, and in this case the fixed point is even unique.
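For the curious, the usual statement of this result (the Banach fixed point theorem) goes roughly like this:

```latex
Let $(X, d)$ be a complete metric space and $f : X \to X$ a contraction,
i.e. there is a constant $0 \le k < 1$ such that
\[
  d(f(x), f(y)) \le k \, d(x, y) \quad \text{for all } x, y \in X.
\]
Then $f$ has a unique fixed point $x^*$ with $f(x^*) = x^*$, and for any
starting point $x_0$ the iterates $x_{n+1} = f(x_n)$ converge to $x^*$.
```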
So what do existence and impossibility theorems have to do with, well, anything other than higher mathematics?
Human beings tend to like certainty. We like knowing certain things will happen or won't happen, and avoid situations with uncertainty. Impossibility theorems and existence theorems are ways to figure out that, yes, something is certain. Whether that thing is certainly impossible or certainly possible may help us reckon with what to do next. This certainty may provide clues to how to find a solution, or to stop looking for one, or even to reformulate our assumptions.
In any case, knowing what is possible can be a big help, at least on a psychological level.
simplythetest · 2 years ago
So You Want To Hire an SEIT
As software companies grow so do their engineering teams. A role that comes along with growing teams is the software engineer in test (SEIT), a software developer who primarily works with test automation and related testing infrastructure. SEITs can be extremely valuable to engineering teams as they can enable good automation practices for shipping quality code faster, and they can be a great touchpoint for test engineers and operations engineers to work with development teams. They can also be a bit tricky to hire since SEITs don't have quite the same skills as application developers or test engineers.
Let's take a look at how you might approach hiring SEITs at different levels.
Junior/Intern
I've worked with several interns (in Ontario these people are called co-op students) and working with test automation is a great introduction to being a software engineer of any kind.
When to hire
Juniors are great on teams with established SEITs and test engineers. In some organizations there are entire teams dedicated to test automation efforts. If you need someone to bang out a lot of test code or take partial ownership of a large test automation project, a junior hire could be a great choice.
As in other software development roles, hiring a junior usually comes with an expectation that the junior candidate will receive some mentorship and early career training and experience.
What to look for
Candidates for a junior SEIT should show characteristics of a junior test engineer and/or application developer. On the testing side, an ideal candidate should show curiosity about software or systems, the ability to develop at least basic mental models of how a piece of software might work, and good communication skills, written or oral. On the development side, an ideal candidate should show at least a basic aptitude for programming from school or otherwise. They should be able to read, write and understand code in some programming language, even a language that your organization doesn't use. Previous experience working in software development is nice but not a requirement. It can be safe to assume that junior candidates have worked with at most one test automation tool and likely none.
Key questions to ask
"How would you test a pen?": a classic test engineering question to access a candidate's reasoning skills, communication skills and question asking ability.
"Write a function to reverse a list in a given language": a basic but straightforward assessment of programming skills. The language can be set by the interviewer or chosen by the candidate.
"What is something you achieved that you are proud of?": a question to get some insight into the candidate's motivations and interests.
Intermediate
Teams getting started with test automation or parts of test automation could benefit from hiring experienced SEITs. Experienced SEITs can write test code but also set up needed infrastructure and coordinate with developers, operations engineers, and testers. Intermediate SEITs may also be able to start test automation efforts for new projects as needed.
When to hire
The main reason to look for an intermediate SEIT is when automation efforts exist but have grown past being managed part-time by single individuals. For example, a team may have written some Selenium-based tests to cover some scenarios, and at first a small number of tests can be handled by the team or specific members of the team without a test automation speciality. Once the number of tests grows to a certain size and/or complexity, that team may want to have a dedicated person looking after them. This could also be the case with automated performance tests, service-level tests, and so on. Or a team may seek to level up on test automation skills.
What to look for
Test automation is a speciality of software development, and as I've said before (if slightly reworded), SEITs are developers. Evaluating them as you would other intermediate developers isn't unreasonable. Look for one or more roles where they worked as SEITs, DevOps engineers, or in other forms of automation. Note however that SEITs may have written many, many lines of code but aren't necessarily skilled in traditional areas of programming. Most SEITs wouldn't be able to code low-level algorithms like linked lists or binary trees and may have never really used map/reduce/fold approaches. Intermediate SEITs should be able to discuss tools they've used well, and strong intermediates can identify differences between tools and levels of testing. And they can definitely write tests; any intermediate SEIT should be able to write a few tests for at least one layer of an application, such as unit or component tests, API-level tests, and/or end-to-end tests.
Key questions to ask
"Write three automated tests in a given language to test this class": This question assess a candidate's knowledge of test development code writng
"Here is an example login test. How would you improve it?": A possibly fun exercise that helps evaluate a candidate's knowledge and technical communication skills.
"Suppose a software bug is found using test code that you wrote. What would be your first reaction?": Teams handle bug reporting in various ways with varying levels of success, so this question can shed some light on a candidate's experiences.
"Describe a time that you and a team mate - could be an app developer, tester or SEIT - disagreed. How did you resolve your differences?": Classic interview question for verifying what a candidate's personality is like.
"What's your favourite continuous integration tool and why"
Senior
If you work for an organization that has experienced growth and is now "established" in some sense, you may find some senior SEITs or other roles similar to that. In medium- to large-sized companies - companies a typical person may have heard of - there are often whole testing teams and sometimes engineering teams dedicated strictly to internal infrastructure. If you consider automated tests as infrastructure, this means you'll need dedicated folks to oversee project health. This includes writing and maintaining test code, but also managing people and computational resources. Good SEITs know how to put things together from a testing standpoint, and how to organize accordingly.
When to hire
This is definitely a bit of an "I'll know it when I see it" situation. Teams can often get pretty far by hiring and retaining good intermediate SEITs or a combination of SEITs, testers and DevOps folks. In many larger organizations, test frameworks can grow to require their own dedicated team leads and managers. This is often where senior SEITs come in. Another scenario is when test automation efforts are expected to ramp up and someone knowledgeable in test automation can guide the process.
What to look for
Senior SEITs have worked with multiple forms of test automation at most or all layers of an application. They will likely be well versed in more than one area of test automation, and will have built test frameworks from scratch as well as worked with existing ones. Depending on the circumstance, they may also have some team lead or engineering management experience, and so may be able to work with more junior members on a team in addition to coding and configuring infrastructure. One of the most valuable aspects that senior SEITs can bring to the table is an informed opinion on test automation topics.
Key questions to ask
"Suppose you were asked to review a test suite of 1000 tests. How would you approach this?": This question gets at how a candidate approaches a relatively complex situation. The idea here is spur discussion more than provide a flat answer.
"What's your preferred test framework and why? What's your least preferred?": In light of hiring someone who brings experience and opinions, this question asks for some of those opinions directly. It can also be a good indicator of communication abilities and temperament.
"Describe a time that you and a team mate - could be an app developer, tester or SEIT - disagreed. How did you resolve your differences?": Again, a classic interview question that could be critical if the candidate in question has to manage people.
"Are you willing to mentor junior app developers and SEITs?"
"Explain the value of end-to-end testing to me if I were the CEO of the company": Test automation can seem like an esoteric speciality to those not in engineering organizations. Senior developers of all kinds (in my opinion) should be able to talk in business terms in addition to technical terms.
Happy testing!
simplythetest · 2 years ago
Breaking Down Problems Like a Mathematician
This week I changed a toilet seat on my main floor toilet. This was a substantial achievement, and I was able to do this thanks to my extensive mathematical training.
If you've ever changed a toilet seat on a standard toilet (in North America, anyway), you'll know it is fairly straightforward: undo the screws or latches on the old seat, remove the seat from the porcelain, remove any screws or nuts, and attach the new seat. I've done this a few times in my life and it is not too difficult.
However, this toilet seat was not easy to replace.
It was a style installed by the previous owners of my house that did not have easy-to-remove latches or nuts. The hinges were chrome and attached only by thick screws that went through the porcelain, held in place by plastic nuts. The nuts were fastened extra tight and would not budge. It took me multiple attempts with multiple wrenches, and I still could not remove even one nut.
The turning point was realizing the entire problem (remove the old seat and install the new one) depended on solving one small aspect of the problem (remove one nut from a bolt). If I could solve this one small problem, I could solve the entire larger problem.
In mathematics, sometimes a large problem can be reduced to a smaller problem or to a set of smaller problems. For example, properties of square matrices are often related, and knowing one aspect of a square matrix - for example, that it has a determinant of 4 - means you know a lot about the matrix in question. With a determinant of 4 (that is, not zero), we know the matrix has full rank and that it is invertible. Many problems in dynamical systems theory come down to knowing the eigenvalues of a square matrix, so even knowing the determinant can help provide information about the entire system.
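As a tiny worked example, with the numbers chosen for convenience:

```latex
\[
A = \begin{pmatrix} 2 & 0 \\ 1 & 2 \end{pmatrix}, \qquad
\det A = 2 \cdot 2 - 0 \cdot 1 = 4 \neq 0,
\]
so $A$ has full rank and is invertible, with
\[
A^{-1} = \frac{1}{4} \begin{pmatrix} 2 & 0 \\ -1 & 2 \end{pmatrix}.
\]
```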
In my case, I knew I had to remove the plastic nut. Using a hammer and chisel, I hacked off the nut, which loosened the bolt enough that I could shift and remove the second nut on the other side and, in turn, remove the old seat. I had reduced my larger problem to a smaller one.
This shift in thinking also had a psychological benefit. All of a sudden a difficult multi-step problem became a focused single-step one. I moved the problem forward.
Math is surprisingly helpful to learn as a way to learn problem solving and thinking, even in situations that look like you're going down the drain.
Friendly Announcement: If you liked this post or ones like it, sign up for my math newsletter