paradife-loft · 4 years
Text
while doing some various Ghost Research™ for fic purposes over the past week, one piece of information I’ve seen mentioned or implied in a couple of places* is that, while generally mourning & veneration for the dead is only performed for ancestors, people older than you - that rule doesn’t necessarily hold for Zhongyuan Jie/the Hungry Ghost festival, since part of the deal there is making offerings for any visiting ghosts that might show up to cause trouble, particularly those that were forgotten/not buried right/otherwise have reasons to be displeased and out to Cause Problems?
anyway, what I’m thinking about here because of that specific detail is. yearly visits from small unmourned violently killed toddler ghost Jin Rusong.
hot, late summer visits to Jinlintai, and a private corner in one of Jin Guangyao’s half-enclosed gardens, set up with a few plates of offerings in the style he grew up setting out, for the child he cannot publicly perform his grief for, for the child poisoned before he was even given life by his father’s failings. folk religion generally disdained by the cultivator elite, but - these customs his mother taught him are still a part of him, and his son still crosses over into the living, sad and bewildered and looking for all of what he was cut short from. how can he deny him?
(Lan Xichen watches as he lights paper money and lanterns, and offers a silent hand on his arm. he can feel the cold yin of the spirit lingering faintly nearby. he understands enough not to offer to help release it.)
#look I simply think that jinlintai deserves to be. very very haunted. #(I would really like to find a more detailed and thorough discussion of the festival so I could feel more comfortable stating anything #without approximately eight billion hedges on implications? #but ah well) #'wouldn't JRS have gotten a soul-calming ceremony and not be ghost material tho?' I hear you say and well #yes probably he would have but consider: ghost kiddo #and if he is too young to have done MUCH cultivation practise at all when he dies.... SHRUG SHRUG? #I feel like he has plenty of reason to become a hungry dissatisfied spirit when he dies #(and unless the Nies also do not do those ceremonies... we have canonical evidence that they are not a foolproof thing Either) #anyway sometimes you are hanging out kind of melancholy in a lovely blooming flower garden and it is just #you and your sworn brother/boyfriend and the unrestful ghost of your sworn brother/boyfriend's dead child who nobody talks about #('doesn't it make more sense for him to mourn his kid with qin su rather than lxc' I hear and also: yes however) #(there is simply a xiyao Vibe that is Parts Of Us That Cannot Be Acknowledged Otherwise That We Share With Each Other and so..) #(also yeah I do think it's kind of implied that a lot of superstitious/religious practises are not... taken v seriously by cultivators?) #(and so there's more reasons for why this wouldn't be a Jin Family Affair if none of the rest of them grew up with it) #(meanwhile LXC exists in the liminal space of We Were On The Run Together Outside of Cultivator Society And Normal Rules history) #no good things for the poor sad cultivators #Jin Guangyao #I suppose I will just end up. writing this post instead of fic bc that's very responsible of me
31 notes · View notes
Grade Book
Word Count: 1600+ (oneshot) [AO3]
Genre: Angst/Fluff
Characters: Korosensei, Class E (mentioned), the Second Reaper (mentioned)
Summary: When he was a man, the Reaper kept meticulous records of those he killed, as a mark of pride in his own work. Now that he’s Korosensei, what he wants to leave behind for good is a record of pride in his beloved students.
Written for the @assclasszine.
~0~
The Reaper is a methodical man.
It would be a rookie mistake to leave evidence of his work around his apartment, he knows that. Nobody but himself ever comes inside it. Even then, when he vacates his various residences after some time, he leaves them emptier than they were when he first moved in, in body and soul, and it feels as if no one ever lived in them at all. He is a spirit, a god of slaughter, and the spaces he passes through bear no trace of human presence, only death.
At least, that’s the way it’s supposed to be, according to both his reputation and his own standards for what a legendary assassin is made of. But the Reaper is only human, after all, and he can in fact succumb to the average human compulsions. He’s fairly certain that it’s only humans that feel the need to meticulously list and organize things, the pleasure centers of the brain stimulated when a pattern is found and adhered to. He theorizes that it comes from the desire of a weak species to find some order or control over their lives, which can be ended or thrown into irreparable disarray out of absolutely nowhere.
The Reaper is not weak, and needs no such reassurance. He has very little life to upset in the first place. But he finds the process comforting anyway.
This time around he has been lucky enough to rent an apartment that comes with a desk. When he returns home with his most recent mission completed, he retrieves his blank black binder and a ballpoint pen from his suitcase, and sits down at it. He’s always surprised at how pleasant he finds the mixed scents of looseleaf paper, old wood, and fresh ink.
First he documents the details of the mission, taking it all down in a cipher of his own creation to hide his own location and methods, as well as the names of his employers. He doesn’t assume it to be unbreakable, but he supposes it will give anyone who doesn’t know him quite a job to do in solving it. He feels neither fear nor doubt when he sets out to kill. At least, this is what he tells himself.
This habit used to be for study purposes, back when he was in training himself. He used to have a section for reflecting on the mistakes he’d made, working on ways to do better. He makes no mistakes as a full-fledged killer, and when that section reappears in recent entries it is reserved only for the failings of his apprentice. Now, instead, he sticks his targets’ identification photos firmly onto the pages, front and center, along with the photos he takes to give his employers proof that his job has been completed as ordered.
He writes down biological observations, the initial information his employers gave him on the targets (as well as whatever connection the two share), any specifications they may have given him for the kill, and the weapons and methods he used in bringing about their deaths. He is tempted sometimes to put in the pictures and text clippings from the various newspaper articles about them — even the pitiful scraps that the largely overlooked ones get, in remembrance of average lives — but always decides against it. It isn’t his own personally gathered data, and he’s not some run-of-the-mill serial killer, after all, gathering trophies and memorabilia from a hobby.
The Reaper is a professional, the best of the best. His work is his life, and it is only fitting that one of his very few indulgences in that life is documenting that exceptional work. Statistics are not all of what makes him the world’s most perfect assassin, of course. People in his circles discuss what does, behind his back in hushed, bitter tones. He has heard many of their conclusions over the years, all of them wrong. The conclusion that he himself has drawn — which certainly lends it credence as the right one — is that his success comes from two things. It’s not only the core of ice that’s long since replaced his heart, allowing him to commit any gruesome task asked of him with the clearest mind and the least regret. It is also the intense devotion to his trade that has replaced any other emotion that might get in his way. He has nothing else, and needs nothing else, except for the death that has always surrounded him.
This book is merely a testament to that. To his work, if not himself. Like the shadowy god he’s named himself after, when somebody finally takes his life, whoever he is will disappear into the misty night. Unimportant and unacknowledged. Only the work he has left behind will remain. Only the trail of blood stretching endlessly into the horizon.
The Reaper supposes that it is perfectly fitting. Such is the inescapable point of life, isn’t it? 
He writes out the name and time of this latest death, in a top corner, like he assumes a doctor would do. The point of his pen lingers on the grayish paper, and idly scratches out the vague outline of the kill’s broken form on the street.
~0~
Korosensei has very little experience with things like textbooks and strict curricula. So although, if asked, he would vigorously deny anything so unprofessional as winging it, that is mostly what he is doing at first. Karasuma must have his suspicions, of course, but he never says so outright, only gruffly barks him in the right direction like an irritated sheepdog.
He doesn’t think he’s ever had teammates before, any more than he’s had this many students to train. The small sea of determined young faces looking up at him is unlike anything he’s ever been faced with. They’re certainly on the other side of the universe from the eternal dissonant calm on the face of his apprentice. Where the Second Reaper is ice inside, his children are pure youthful fire: overwhelming, beautiful, and sometimes even terrifying to behold. 
So it is almost second nature to begin recording them. Some part of him mourns the loss of his old scrapbooks, but he supposes that this grade book is a perfectly worthy replacement.
He doesn’t even notice it at first when his books become more than that. More than they have ever been, even at their most thorough.
All the information in his students’ files he meticulously copies down. Personal information and opinions come next, along with lesson plans, weapons data, the tactics they choose and their results. With all of his new appendages, it’s easier and faster than ever before to take down all his thoughts before he loses them. It’s all just logs and facts and records, really, just a whir of necessary information...until it isn’t.
All of a sudden, it’s candid photos instead of yearbook and ID standards, with the bright smiles of his students’ true selves instead of the dull-eyed depression their school life has forced upon them. It’s a diagram of the makings of anti-Sensei bullets, above the top ten best shots in the class. It’s train and plane tickets from their resort trip, bordering the pages of their vacation pictures, and four whole pages of bits and pieces from their festival success. Outstanding test grades are plastered everywhere, from cover to cover. 
Also scattered around are tentacle-drawn sketches (improving with each new attempt, if he does say so himself) of the best aspects of his classroom. He thinks he’s finally captured the wryness of Karma’s smirk, the strangely familiar shape of Kayano’s face, and most intriguing of all, the bright, striking sharpness of Nagisa’s eyes, glowing with killing intent. 
Korosensei fills so many pages that sometimes he forgets that his time with them is limited. His pencil shakes over the page when it hits him that the date of his inevitable destruction is drawing near. He’ll need to wrap it up, as painful as it is...
Yes, that is exactly what he shall do, he decides, heart leaping a little. His personalized graduation albums are a work of art, but he supposes it couldn’t hurt to leave one more hidden treasure for Class E to find here, after the final bell has rung. So he gathers up all his books from the beginning of the year to now, and sets them all in orderly piles in a box, which he stores safely inside of his desk. 
He almost wants to take them all back out again, and look through them one last time. Maybe adjust some things. But no. No time for that. Besides, his raw and unedited feelings ought to mean the most to them, anyway. They are so very pure of heart and bursting with passion themselves, after all...
Korosensei straightens up and looks out the window at the ravaged moon. He hopes and prays that his children will be the ones to kill him, in the end, before he can destroy them. Those faces of theirs would make for a fine last sight. And he doesn’t want to be the one who snuffs those brilliant lights out, after all, before they’ve even reached their prime. He hopes they will always know how special they are, and how much they are worth, and how deeply his adoration of them runs even when he is gone. 
The Reaper never once told anyone “I love you.” Korosensei isn’t quite sure how to, either. But for his students, he has given it his best try. 
The name of the Reaper is gone, and the trail of blood has run just about dry. And when Korosensei disappears, it is life and love that he will leave behind, for his children to carry with them as they surge forward and thrive.
jeanshesallenberger · 7 years
Discovery on a Budget: Part I
If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.
Ye olde design cycle
We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.
However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.
In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.
Write up the problem hypothesis
An awful lot of ink (virtual or otherwise) has been spent on proclaiming we should all, “fall in love with the problem, not the solution.” And it has been ink spent well. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.
But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.
When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.
As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.
Here is a general formula you can use to write a problem hypothesis:
Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].
For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:
Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.
You can see in this example that my assumptions are:
Users feel that social media sites like Facebook are addictive.
Users don’t like to be addicted to social media.
Users would be willing to pay for a non-addictive Facebook replacement.
These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.
The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.
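If your team likes to keep artifacts in code as well as in docs, the hypothesis-plus-assumptions structure is easy to make concrete. Here is a minimal sketch (in Python, with illustrative class and field names of my own invention, not any standard tool) of the formula above, with each assumption tracked as unverified until the research affirms or refutes it:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Assumption:
    statement: str
    supported: Optional[bool] = None  # None until research affirms or refutes it

@dataclass
class ProblemHypothesis:
    cause: str          # assumptions and gut instincts about the problem
    user_state: str     # the undesirable state users are in
    solution_idea: str
    assumptions: List[Assumption] = field(default_factory=list)

    def render(self) -> str:
        # The formula: Because [cause], users are [state]. They need [idea].
        return (f"Because {self.cause}, users are {self.user_state}. "
                f"They need {self.solution_idea}.")

hypothesis = ProblemHypothesis(
    cause="social media tools are designed to hook users",
    user_state="unhappy with their addictive relationship to these tools",
    solution_idea="a paid social media service designed with mental health in mind",
    assumptions=[
        Assumption("Users feel that sites like Facebook are addictive."),
        Assumption("Users don't like being addicted to social media."),
        Assumption("Users would pay for a non-addictive replacement."),
    ],
)
print(hypothesis.render())
```

The point isn’t the code itself; it’s that every assumption gets an explicit, checkable status you can revisit after each round of research.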
Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.
A method that is useful in all phases of design: interviews
In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.
User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct an interview with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery, because it is a method that is adaptable. As you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.
To be clear, your interviewees will not tell you:
what to build;
or how to build it.
But they absolutely can tell you:
what problem they have;
how they feel about it;
and what the value of a solution would mean to them.
And if you know the problem, how users feel about it, and the value of a solution, you are well on your way to designing the right product.
The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple tips:
Tip 1: always ask the following two questions:
“What do you like about [blank]?”
“What do you dislike about [blank]?”
… where you fill “[blank]” with whatever domain your future product will improve.
Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.
For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”
Tip 2: after (nearly) every response, ask them to say more.
The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss just one thing they like and dislike; you want them to tell you all the things they like and dislike.
Here is an example of how this played out in one of the interviews I conducted:
Interviewer (Me): What do you like about using Facebook?
Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.
Interviewer (Me): What else do you like about it?
Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.
Interviewer (Me): Great. Is there anything else you like about it?
Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.
Interviewer (Me): That seems cool. What else do you like about using Facebook?
Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.
From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.
As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.
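Once interviews are coded into notes, tallies like “six out of eleven mentioned Messenger, but never first” fall out of a simple count. Here is a hedged sketch of that analysis (the interview data below is hypothetical, much smaller than my real set, and only for illustration):

```python
from collections import Counter

# Hypothetical coded interview notes: for each participant, the features
# they said they liked, in the order they came up in conversation.
interviews = [
    ["keeping up with old friends", "messenger", "events"],
    ["keeping up with old friends", "messenger"],
    ["keeping up with old friends", "events"],
    ["keeping up with old friends"],
]

# How many participants mentioned each feature at all...
mentions = Counter(feature for notes in interviews for feature in notes)
# ...versus how many mentioned it first, before any probing.
first_mentions = Counter(notes[0] for notes in interviews if notes)

for feature, count in mentions.most_common():
    print(f"{feature}: mentioned by {count} of {len(interviews)}, "
          f"first by {first_mentions.get(feature, 0)}")
```

Comparing total mentions against first mentions is exactly what surfaces those important secondary features that only emerge after you ask “what else?”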
Recruit all around you, then document the bias
There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.
My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.
For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.
Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)
Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:
mitigate bias as best we can;
and document all the biases we see.
For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.
Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:
All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.
Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.
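One way to make sure the bias actually survives until analysis is to record recruitment metadata alongside each participant, rather than in a separate note. A minimal sketch (field names and participant records are illustrative assumptions, not my real data):

```python
# Each participant record carries its recruitment source and relationship
# to the researcher, so sampling bias is documented, not remembered.
participants = [
    {"id": "P01", "source": "personal Facebook page", "relation": "friend"},
    {"id": "P02", "source": "local FreeCodeCamp page", "relation": "stranger"},
]

def bias_summary(participants):
    """Count participants per recruitment source, for the bias write-up."""
    sources = {}
    for p in participants:
        sources[p["source"]] = sources.get(p["source"], 0) + 1
    return sources

print(bias_summary(participants))
```

When it’s time to discuss results, the summary makes it impossible to forget that, say, every participant came through one or two convenience channels.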
Let’s keep this going
As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.
To be clear, your interviewees will not tell you:
what to build;
or how to build it.
But they absolutely can tell you:
what problem they have;
how they feel about it;
and what the value of a solution would mean to them.
And if you know the problem, how users feels about it, and the value of a solution, you are well on your way to designing the right product.
The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple tips:
Tip 1: always ask the following two questions:
“What do you like about [blank]?”
“What do you dislike about [blank]?”
… where you fill “[blank]” with whatever domain your future product will improve.
Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.
For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”
Tip 2: after (nearly) every response, ask them to say more.
The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss one thing they like and dislike, you want them to tell you all the things they like and dislike.
Here is an example of how this played out in one of the interviews I conducted:
Interviewer (Me): What do you like about using Facebook?
Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.
Interviewer (Me): What else do you like about it?
Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.
Interviewer (Me): Great. Is there anything else you like about it?
Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.
Interviewer (Me): That seems cool. What else do you like about using Facebook?
Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.
From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.
As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.
Recruit all around you, then document the bias
There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.
My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.
For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.
Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)
Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:
mitigate bias as best we can;
and document all the biases we see.
For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.
Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:
All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.
Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.
Let’s keep this going
As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.
http://ift.tt/2CSs5T0
0 notes
dustinwootenne · 7 years
Text
Discovery on a Budget: Part I
If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.
Ye olde design cycle
We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or poring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.
However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.
In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.
Write up the problem hypothesis
An awful lot of ink (virtual or otherwise) has been spent on proclaiming we should all, “fall in love with the problem, not the solution.” And it has been ink spent well. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.
But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.
When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.
As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.
Here is a general formula you can use to write a problem hypothesis:
Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].
For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:
Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.
You can see in this example that my assumptions are:
Users feel that social media sites like Facebook are addictive.
Users don’t like to be addicted to social media.
Users would be willing to pay for a non-addictive Facebook replacement.
These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.
The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.
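One lightweight way to keep the hypothesis "concrete to talk about, refer to, and test" is to store it as structured data, so the team can revise individual assumptions as research comes in. Here is a minimal sketch in Python; the class and field names are my own invention, not part of the article's formula:

```python
from dataclasses import dataclass, field

@dataclass
class ProblemHypothesis:
    """A problem hypothesis plus the individual assumptions baked into it."""
    cause: str              # assumptions and gut instincts about the problem
    undesirable_state: str  # the state users are stuck in
    solution_idea: str
    assumptions: list = field(default_factory=list)

    def statement(self) -> str:
        # Renders the "Because [...], users are [...]. They need [...]" formula.
        return (f"Because {self.cause}, users are {self.undesirable_state}. "
                f"They need {self.solution_idea}.")

# The Candor Network hypothesis from this article, restated:
hypothesis = ProblemHypothesis(
    cause="social media tools like Facebook are deliberately designed to hook users",
    undesirable_state="unhappy and want a healthier relationship with social media",
    solution_idea="a paid social media service designed with mental health in mind",
    assumptions=[
        "Users feel that sites like Facebook are addictive.",
        "Users don't like being addicted to social media.",
        "Users would pay for a non-addictive Facebook replacement.",
    ],
)

print(hypothesis.statement())
```

After each research round, you can edit the fields in place and keep a history of how the hypothesis evolved.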
Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.
A method that is useful in all phases of design: interviews
In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.
User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct an interview with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery, because it is a method that is adaptable. As you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.
To be clear, your interviewees will not tell you:
what to build;
or how to build it.
But they absolutely can tell you:
what problem they have;
how they feel about it;
and what the value of a solution would mean to them.
And if you know the problem, how users feel about it, and the value of a solution, you are well on your way to designing the right product.
The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple of tips:
Tip 1: always ask the following two questions:
“What do you like about [blank]?”
“What do you dislike about [blank]?”
… where you fill “[blank]” with whatever domain your future product will improve.
Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.
For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”
Tip 2: after (nearly) every response, ask them to say more.
The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don’t want your participants to discuss just one thing they like and dislike; you want them to tell you all the things they like and dislike.
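Taken together, Tips 1 and 2 amount to a small, repeatable protocol: two core questions, each followed by standing probes until the participant runs dry. A sketch of that protocol as a question generator (the function name and wording of the probes are mine):

```python
def interview_questions(domain: str):
    """Yield the two core questions for a domain, each followed by
    standing probes (Tip 1 plus Tip 2 combined)."""
    for stem in ("like", "dislike"):
        yield f"What do you {stem} about {domain}?"
        # In practice you repeat probes like these until the participant
        # starts to repeat themselves or says they have nothing else.
        yield f"What else do you {stem} about it?"
        yield f"Is there anything else you {stem} about it?"

for question in interview_questions("using Facebook"):
    print(question)
```

A script like this is only a guide; the real skill is listening and probing past the first answer.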
Here is an example of how this played out in one of the interviews I conducted:
Interviewer (Me): What do you like about using Facebook?
Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.
Interviewer (Me): What else do you like about it?
Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.
Interviewer (Me): Great. Is there anything else you like about it?
Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.
Interviewer (Me): That seems cool. What else do you like about using Facebook?
Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.
From this example you can see the first feature that popped into the interviewee’s mind was their ability to keep up with friends that they otherwise wouldn’t have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn’t pushed the interviewee to think of even more features they like, I might have never uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.
As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.
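Counts like "six out of eleven mentioned Messenger" fall out naturally if you keep a simple per-participant list of the features mentioned and tally across interviews. A sketch with made-up stand-in data (these are not the real transcripts):

```python
from collections import Counter

# Hypothetical notes: the features each participant mentioned liking.
interviews = [
    ["keeping in touch", "messenger", "events"],
    ["keeping in touch", "messenger"],
    ["keeping in touch", "events"],
    # ...one list per remaining participant...
]

# Count how many interviews mentioned each feature.
mentions = Counter(feature for notes in interviews for feature in notes)
for feature, count in mentions.most_common():
    print(f"{feature}: {count} of {len(interviews)} participants")
```

Sorting by frequency like this also surfaces the secondary features that never come up first in conversation.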
Recruit all around you, then document the bias
There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.
My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.
For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.
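With thirty volunteers and only so many interview slots, you still have to choose whom to talk to. The article doesn't say how its interviewees were selected; one defensible approach is to draw at random from the volunteer pool rather than taking the first responders, which at least avoids a "fastest responder" skew. A sketch, with placeholder names:

```python
import random

# Thirty volunteers responded: twenty-five friends and five strangers.
volunteers = ([f"friend_{i}" for i in range(25)] +
              [f"stranger_{i}" for i in range(5)])

# Draw the interviewees at random from the pool. Seeding the generator
# makes the draw reproducible, which is worth noting in your research log.
random.seed(42)
interviewees = random.sample(volunteers, k=11)
print(interviewees)
```

This doesn't remove the convenience-sampling bias, but it keeps you from compounding it with an extra layer of self-selection.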
Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)
Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:
mitigate bias as best we can;
and document all the biases we see.
For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.
Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:
All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.
Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.
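If you record where each participant came from and a few coarse attributes at recruitment time, the bias note nearly writes itself at analysis time. A sketch with hypothetical stand-in records (the field names are mine):

```python
# A participant log kept alongside the interview notes.
participants = [
    {"source": "personal Facebook page", "works_in_tech": True,  "country": "US"},
    {"source": "FreeCodeCamp page",      "works_in_tech": True,  "country": "US"},
    {"source": "personal Facebook page", "works_in_tech": False, "country": "DE"},
]

n = len(participants)
in_tech = sum(p["works_in_tech"] for p in participants)
in_us = sum(p["country"] == "US" for p in participants)
print(f"{in_tech} of {n} work in tech; {in_us} of {n} live in the US")
```

Numbers like these slot directly into the written bias note, so the caveats travel with the results.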
Let’s keep this going
As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.
http://ift.tt/2CSs5T0
0 notes
waltercostellone · 7 years
Text
Discovery on a Budget: Part I
If you crack open any design textbook, you’ll see some depiction of the design cycle: discover, ideate, create, evaluate, and repeat. Whenever we bring on a new client or start working on a new feature, we start at the top of the wheel with discover (or discovery). It is the time in the project when we define what problem we are trying to solve and what our first approach at solving it should be.
Ye olde design cycle
We commonly talk about discovery at the start of a sprint cycle at an established business, where there are things like budgets, product teams, and existing customers. The discovery process may include interviewing stakeholders or pouring over existing user data. And we always exit the discovery phase with some sort of idea to move forward with.
However, discovery is inherently different when you work at a nonprofit, startup, or fledgling small business. It may be a design team of one (you), with zero dollars to spend, and only a handful of people aware the business even exists. There are no clients to interview and no existing data to examine. This may also be the case at large businesses when they want to test the waters on a new direction without overcommitting (or overspending). Whenever you are constrained on budget, data, and stakeholders, you need to be flexible and crafty in how you conduct discovery research. But you can’t skimp on rigor and thoroughness. If the idea you exit the discovery phase with isn’t any good, your big launch could turn out to be a business-ending flop.
In this article I’ll take you through a discovery research cycle, but apply it towards a (fictitious) startup idea. I’ll introduce strategies for conducting discovery research with no budget, existing user data, or resources to speak of. And I’ll show how the research shapes the business going forward.
Write up the problem hypothesis
An awful lot of ink (virtual or otherwise) has been spent on proclaiming we should all, “fall in love with the problem, not the solution.” And it has been ink spent well. When it comes to product building, a problem-focused philosophy is the cornerstone of any user-centric business.
But how, exactly, do you know when you have a problem worth solving? If you work at a large, established business you may have user feedback and data pointing you like flashing arrows on a well-marked road towards a problem worth solving. However, if you are launching a startup, or work at a larger business venturing into new territory, it can be more like hiking through the woods and searching for the next blaze mark on the trail. Your ideas are likely based on personal experiences and gut instincts.
When your ideas are based on personal experiences, assumptions, and instincts, it’s important to realize they need a higher-than-average level of tire-kicking. You need to evaluate the question “Do I have a problem worth solving?” with a higher level of rigor than you would at a company with budget to spare and a wealth of existing data. You need to take all of your ideas and assumptions and examine them thoroughly. And the best way to examine your ideas and categorize your assumptions is with a hypothesis.
As the dictionary describes, a hypothesis is “a supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.” That also serves as a good description of why we do discovery research in the first place. We may have an idea that there is a problem worth solving, but we don’t yet know the scope or critical details. Articulating our instincts, ideas, and assumptions as a problem hypothesis lays a foundation for the research moving forward.
Here is a general formula you can use to write a problem hypothesis:
Because [assumptions and gut instincts about the problem], users are [in some undesirable state]. They need [solution idea].
For this article, I decided to “launch” a fictitious (and overly ambitious) startup as an example. Here is the problem hypothesis I wrote for my startup:
Because their business model relies on advertising, social media tools like Facebook are deliberately designed to “hook” users and make them addicted to the service. Users are unhappy with this and would rather have a healthier relationship with social media tools. They would be willing to pay for a social media service that was designed with mental health in mind.
You can see in this example that my assumptions are:
Users feel that social media sites like Facebook are addictive.
Users don’t like to be addicted to social media.
Users would be willing to pay for a non-addictive Facebook replacement.
These are the assumptions I’ll be researching and testing throughout the discovery process. If I find through my research that I cannot readily affirm these assumptions, it means I might not be ready to take on Mr. Zuckerberg just yet.
The benefit of articulating our assumptions in the form of a hypothesis is that it provides something concrete to talk about, refer to, and test. The whole product team can be involved in forming the initial problem hypothesis, and you can refer back to it throughout the discovery process. Once we’ve completed the research and analyzed the results, we can edit the hypothesis to reflect our new understanding of our users and the problems we want to solve.
Now that we’ve articulated a problem hypothesis, it is time to figure out our research plan. In the following two sections, I’ll cover the research method I recommend the most for new ventures, as well as strategies for recruiting participants on a budget.
A method that is useful in all phases of design: interviews
In my career as a user researcher, I have used all sorts of methods. I’ve done A/B testing, eye tracking, Wizard of Oz testing, think-alouds, contextual inquiries, and guerilla testing. But the one research method I utilize the most, and that I believe provides the most “bang for the buck,” is user interviews.
User interviews are relatively inexpensive to conduct. You don’t need to travel to a client site and you don’t need a fortune’s worth of equipment. If you have access to a phone, you can conduct an interview with participants all around the world. Yet interviews provide a wealth of information and can be used in every phase of research and design. Interviews are especially useful in discovery, because it is a method that is adaptable. As you learn more about the problem you are trying to solve, you can adapt your interview protocol to match.
To be clear, your interviewees will not tell you:
what to build;
or how to build it.
But they absolutely can tell you:
what problem they have;
how they feel about it;
and what the value of a solution would mean to them.
And if you know the problem, how users feels about it, and the value of a solution, you are well on your way to designing the right product.
The challenge of conducting a good user interview is making sure you ask the questions that elicit that information. Here are a couple tips:
Tip 1: always ask the following two questions:
“What do you like about [blank]?”
“What do you dislike about [blank]?”
… where you fill “[blank]” with whatever domain your future product will improve.
Your objective is to gain an understanding of all aspects of the problem your potential customers face—the bad and the good. One common mistake is to spend too much time investigating what’s wrong with the current state of affairs. Naturally, you want your product to fix all the problems your customers face. However, you also need to preserve what currently works well, what is satisfying, or what is otherwise good about how users accomplish their goals currently. So it is important to ask about both in user interviews.
For example, in my interviews I always asked, “What do you like about using Facebook?” And it wasn’t until my interview participant told me everything they enjoyed about Facebook that I would ask, “What do you dislike about using Facebook?”
Tip 2: after (nearly) every response, ask them to say more.
The goal of conducting interviews is to gain an exhaustive set of data to review and consider moving forward. That means you don't want your participants to mention just one thing they like and dislike; you want them to tell you all the things they like and dislike.
Here is an example of how this played out in one of the interviews I conducted:
Interviewer (Me): What do you like about using Facebook?
Interviewee: I like seeing people on there that I wouldn’t otherwise get a chance to see and catch up with in real life. I have moved a couple times so I have a lot of friends that I don’t see regularly. I also like seeing the people I know do well, even though I haven’t seen them since, maybe, high school. But I like seeing how their life has gone. I like seeing their kids. I like seeing their accomplishments. It’s also a little creepy because it’s a window into their life and we haven’t actually talked in forever. But I like staying connected.
Interviewer (Me): What else do you like about it?
Interviewee: Um, well it’s also sort of a convenient way of keeping contacts. There have been a few times when I was able to message people and get in touch with people even when I don’t have their address or email in my phone. I could message them through Facebook.
Interviewer (Me): Great. Is there anything else you like about it?
Interviewee: Let me think … well I also find cool stuff to do on the weekends there sometimes. They have an events feature. And businesses, or local places, will post events and there have been a couple times where I’ve gone to something cool. Like I found a cool movie festival once that way.
Interviewer (Me): That seems cool. What else do you like about using Facebook?
Interviewee: Uh … that’s all I think I really use it for. I can’t really think of anything else. Mainly I use it just to keep in touch with people that I’ve met over the years.
From this example you can see the first feature that popped into the interviewee's mind was their ability to keep up with friends that they otherwise wouldn't have much opportunity to connect with anymore. That is a feature that any Facebook replacement would have to replicate. However, if I hadn't pushed the interviewee to think of even more features they like, I might never have uncovered an important secondary feature: convenient in-app messaging. In fact, six out of the eleven people I interviewed for this project said they liked Facebook Messenger. But not a single one of them mentioned that feature first. It only came up in conversation after I probed for more.
As I continued to repeat my question, the interviewee thought of one more feature they liked: local event listings. (Five out of the eleven people I interviewed mentioned this feature.) But after that, the interviewee couldn’t think of any more features to discuss. You know you can move on to the next question in the interview when your participant starts to repeat themselves or bluntly tells you they have nothing else to say.
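Counts like these (six of eleven mentioned Messenger, but none mentioned it first) are easy to lose track of across a pile of interview notes. Here is a minimal sketch of one way to keep the tally; the interview data and feature labels are hypothetical, and real notes would of course be richer than short strings.

```python
from collections import Counter

# Hypothetical notes: for each participant, the features they said they liked,
# in the order mentioned (the first item is what they brought up unprompted).
interviews = [
    ["keeping in touch", "messaging", "events"],
    ["keeping in touch", "events"],
    ["messaging", "keeping in touch"],
]

mentions = Counter()        # participants who mentioned the feature at all
first_mentions = Counter() # participants who brought it up before any probing

for liked in interviews:
    mentions.update(set(liked))  # count each feature at most once per person
    if liked:
        first_mentions[liked[0]] += 1

for feature, n in mentions.most_common():
    print(f"{feature}: mentioned by {n}, mentioned first by {first_mentions[feature]}")
```

Sorting by total mentions while also tracking first mentions is what surfaces "important secondary features": things many people value but nobody leads with.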
Recruit all around you, then document the bias
There are all sorts of ways to recruit participants for research. You can hire an agency or use a tool like UserTesting.com. But many of those paid-for options can be quite costly, and since we are working with a shoestring budget we have roughly zero dollars to spend on recruitment. We will have to be creative.
My post on Facebook to recruit volunteers. One volunteer decided to respond with a Hunger Games “I volunteer as tribute!” gif.
For my project, I decided to rely on the kindness of friends and strangers I could reach through Facebook. I posted one request for participants on my personal Facebook page, and another on the local FreeCodeCamp page. A day after I posted my request, twenty-five friends and five strangers volunteered. This type of participant recruitment method is called convenience sampling, because I was recruiting participants that were conveniently accessible to me.
Since my project involved talking to people about social media sites like Facebook, it was appropriate for my first attempt at recruiting to start on Facebook. I could be sure that everyone who saw my request uses Facebook in some form or fashion. However, like all convenience sampling, my recruitment method was biased. (I’ll explain how in just a bit.)
Bias is something that we should try—whenever possible—to avoid. If we have access to more sophisticated recruitment methods, we should use them. However, when you have a tight budget, avoiding recruitment bias is virtually impossible. In this scenario, our goals should be to:
mitigate bias as best we can;
and document all the biases we see.
For my project, I could mitigate some of the biases by using a few more recruitment methods. I could go to various neighborhoods and try to recruit participants off the street (i.e., guerrilla testing). If I had a little bit of money to spend, I could hang out in various coffee shops and offer folks free coffee in exchange for ten-minute interviews. These recruitment methods also fall under the umbrella of convenience sampling, but by using a variety of methods I can mitigate some of the bias I would have from using just one of them.
Also, it is always important to reflect on and document how your sampling method is biased. For my project, I wrote the following in my notes:
All of the people I interviewed were connected to me in some way on Facebook. Many of them I know well enough to be “friends” with. All of them were around my age, many (but not all) worked in tech in some form or fashion, and all of them but one lived in the US.
Documenting bias ensures that we won’t forget about the bias when it comes time to analyze and discuss the results.
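One low-effort way to make that documentation concrete is to log a few facts about each participant as you recruit them, then summarize the skew mechanically rather than from memory. This is a sketch only; the fields and values below are hypothetical, and you would record whatever traits matter for your study.

```python
# Hypothetical recruitment log: one record per participant, noting the
# channel they came from and a couple of traits relevant to bias.
participants = [
    {"id": 1, "channel": "personal Facebook", "works_in_tech": True,  "country": "US"},
    {"id": 2, "channel": "FreeCodeCamp page", "works_in_tech": True,  "country": "US"},
    {"id": 3, "channel": "personal Facebook", "works_in_tech": False, "country": "DE"},
]

total = len(participants)
in_tech = sum(p["works_in_tech"] for p in participants)
in_us = sum(p["country"] == "US" for p in participants)

# These one-line summaries become the "documented bias" attached to findings.
print(f"{in_tech}/{total} work in tech; {in_us}/{total} live in the US")
```

When it comes time to analyze results, a summary like "2/3 work in tech" sits right next to the findings, so nobody over-generalizes from a skewed sample.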
Let’s keep this going
As the title suggests, this is just the first installment of a series of articles on the discovery process. In part two, I will analyze the results of my interviews, revise my problem hypothesis, and continue to work on my experimental startup. I will launch into another round of discovery research, but this time utilizing some different research methods, like A/B testing and fake-door testing. You can help me out by checking out this mock landing page for Candor Network (what I’ve named my fictitious startup) and taking the survey you see there.