#and that includes people talking about the pitfalls of this technology
Explore tagged Tumblr posts
Text
my spicy hot take regarding AI chatbots lying to people is that, no, the chatbot isn't lying. chatgpt is not lying. it's not capable of making the conscious decision to lie to you. that doesn't mean it's providing factual information, though, because that's not what it's meant to do (despite how it's being marketed and portrayed). chatgpt is a large language model simply predicting what responses are most probable based on established parameters.
it's not lying, it's providing the most statistically likely output based on its training data. and that includes making shit up.
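to make 'most statistically likely output' a little more concrete, here is a heavily simplified toy sketch (made-up numbers, not how any real model is actually implemented) of sampling a 'probable' next word:

```python
import random

# toy stand-in for a learned next-token distribution. a real large language
# model computes something like this from its training data; there is no
# notion of "true" or "false" anywhere, only "likely given the context".
next_token_probs = {
    "the capital of australia is": {"canberra": 0.6, "sydney": 0.35, "paris": 0.05},
}

def sample_next_token(context: str) -> str:
    probs = next_token_probs[context]
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token("the capital of australia is"))
# sometimes prints "sydney" (or even "paris"): not a lie, just a
# probable-looking continuation drawn from the learned distribution.
```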
#multi makes text posts#idk how much sense this makes but yeah#anti ai#one thing i get so wary of with these chatbots is people ascribing a sentience to them that isn't there#and that includes people talking about the pitfalls of this technology#specifically saying that the bots are lying... no. they're not lying#they're not even really communicating#they're algorithms trained on specific data that enables them to form convincing responses#disclaimer: i am not an expert in ai#but that doesn't make this less of a valid point#also honestly even calling these chatbots ai is a misnomer#there is no intelligence! no intent! it's an algorithm!!
24 notes
Note
does pump play animal crossing if so who are his favorite villagers
HAMMY'S POSTING. Uhhhhh, good question actually! We know that these kids play videogames because of SM1, where Skid seems to own either a GameCube or a Wii (the controllers are white yet don't have wires? Likely a WaveBird Wireless Controller like a comment below this post has pointed out, but this was Pelo's first SM animation so who knows) and a copy of Luigi's Mansion (it HAS to be the very first game on the GameCube, the sequel was a 3DS title). So at least one of them plays Nintendo games. Also Pump has a console at home too?? But we don't know what it is yet?? It's on Patreon: Pelo showed a small bit of the first scene's script, and Pump was playing a videogame and asking Susie to help him win it, so just putting it out there.

Now, in terms of timeframe, we know from SM5 (particularly the newspaper talking about Bob's escape) that the show, as of SM6, takes place in either late 2013 or sometime during 2014. Therefore, we have all the Animal Crossing games up to New Leaf (2012) as possibilities. However, if we narrow it down based on what I have said above, especially because no handhelds have ever been referenced in the show, it'd be Population Growing (2003) and/or City Folk (2008), depending on the console.

Yeah, I'm actually putting IRL context just so I can reason, and prove, whatever I'm going to say next lmao. This is @pumpkinhcad for God's sake, we do NOT give short answers here.
Considering how technologically literate Susie is, and how she's a 2000s child, I'd be surprised if the Wonders didn't own a Wii?? I feel like the Wii was that console every kid owned (me included), and City Folk was one of the best-selling games on the console; so I would not be surprised if that was a game she owned and played, especially because AC is a popular game series within her demographic. Being a demonology enthusiast aside.
Pump has likely played all the games his older sister owns one way or another, so, by that logic, he has probably wreaked havoc in his sister's City Folk town. You know what I'm talking about: hitting your villagers with nets, setting up pitfalls, pissing off Mr. Resetti, the like. He would probably find the game to be very boring otherwise, though, considering it is mostly a simple life simulation game with cute animals as your neighbors. Also, the game he is playing in SM7 seems to involve killing a witch as a knight, so yeah, not his kind of game.
Regardless, I think he'd like the Lazy villagers, funny enough. They like bugs and candy, and will often say pretty out-of-left-field stuff just like him. And who is a spooky-looking villager that has the Lazy personality? That's right, Lucky! He even looks like a mummy!
Maybe also Stitches can be included in that list, because he looks like a Frankenstein's teddy bear. Coco is an obvious option too, and I think Pump would tell people that she can suck your soul out of your body if you stare into her empty eyes for long enough, something silly like that. Gyroids are super eerie in general, I wonder if he'd be interested in looking for and collecting those.
The TL;DR is: he probably has played Animal Crossing off his sister once or twice and he likes the lazy villagers (mainly Lucky, he looks spooky).
#💝 •|| OUT OF CHARACTER.#🎃 •|| SOMEONE'S CALLING.#🎃 •|| UNKNOWN NUMBER.#🎃 •|| HEADCANON.#(Now I'm wondering why am I physically incapable of just replying to the ask without making it lengthy...)#(Also ''why no Jack mention''? Because this is about the villagers more than anything.)#(Also Pump goes OUTSIDE during Halloween; he would not be able to play the in-game Halloween stuff with Jack unless he time-traveled.)#(I have a feeling this ask will age horribly the second SM7 is out btw; a lot of that episode seems to revolve around games.)#(Editing to add that controller bit since I had legit forgotten those wireless controllers existed.)
2 notes
Text

Finding Affordable US Immigration Legal Services in India
In 2023, over 200,000 Indian nationals applied for various U.S. visas—a number that continues to rise. With this surge in demand, more people are seeking legal help to navigate the complex U.S. immigration system.
However, finding affordable and reliable legal support in India can be challenging. This article explores the common obstacles and offers practical tips to connect with trustworthy and cost-effective immigration legal services.
Why the Demand Is Rising
More Indians are applying for U.S. visas, green cards, and citizenship due to:
Career opportunities in the U.S., especially in tech
Educational goals
Family reunification
Political and economic factors
As interest grows, so does the need for proper legal guidance. U.S. immigration laws are complex and often change. A simple mistake in your application could lead to delays, denials, or even bans.
Good legal help is essential—but it must also be affordable. Many people struggle to access quality legal services due to high fees. That’s why it’s important to find experienced, transparent, and budget-friendly professionals.
How to Choose Affordable U.S. Immigration Legal Services
Check Experience and Expertise
Choose a law firm that specializes in U.S. immigration law. Look for one with a strong track record of handling cases like yours. Experienced lawyers understand the system well and can help you avoid common pitfalls.
Look for Transparent Pricing
Avoid surprises by choosing a firm that clearly lists its fees. Flat-rate packages are often easier to manage than hourly rates. During your consultation, ask for a detailed breakdown of costs—including any potential extras.
Use Initial Consultations Wisely
An initial consultation helps you understand if a firm fits your needs. Ask questions, share your situation, and assess their responsiveness. Some firms offer low-cost or free consultations, making it easier to explore your options.
Read Client Reviews
Reviews can tell you a lot about a firm’s service quality. Focus on third-party websites for unbiased feedback. Look for consistent positive comments, especially from clients with similar cases.
Tips to Get the Best Value
• Be Prepared: Before meeting your lawyer, gather all important documents—such as your passport, visa forms, and previous communications. This saves time and lets your lawyer focus directly on your case.
• Use Online Resources: Many firms provide free tools on their websites like guides and FAQs. You can also join online forums and communities to learn from others' experiences. These resources can reduce the number of consultations you need.
• Discuss Payment Plans: Don't hesitate to talk about your budget. Some attorneys offer flexible payment options or discounts. Asking about flat-rate services can also help you avoid unexpected costs.
How Technology Helps Lower Legal Costs
• Virtual Consultations: Meeting your lawyer online saves time and money. It removes the need for travel and offers flexible scheduling, especially useful when living in India.
• Automated Documents: Legal tech tools help fill out forms accurately and quickly. This reduces manual work and cuts down on errors and delays—saving you money in the long run.
• Secure Client Portals: These platforms let you track your case, share documents, and communicate with your lawyer online. They reduce the need for in-person meetings and provide better transparency throughout the process.
Avoiding Scams and Red Flags
• Watch Out for Scams: Be cautious of firms that make unrealistic promises or demand large upfront payments. Common scams include fake lawyers and guaranteed visa approvals.
• Verify Credentials: Check if the firm is registered with legal authorities in India or the U.S. Reputable firms are transparent about their licenses, experience, and contact details.
• Clarify What’s Included: Before you agree to anything, get a written breakdown of what services are covered in the fee. This will help you avoid hidden charges and manage your budget better.
Final Word
At Gehi’s Immigration and International Legal Services, we understand how overwhelming the U.S. immigration process can be—especially when you're seeking support from India. That’s why we focus on:
Clear, upfront pricing
Experienced legal guidance
Efficient, tech-enabled services
A client-first approach
While affordability is key, always prioritize quality, transparency, and a proven track record. By taking these steps, you can find the right legal partner to guide you through your immigration journey.
If you have questions, we’re here to help. Contact us today for a consultation.
🔗 Follow us on Facebook
#GehilawIndia #USImmigrationHelp #AffordableImmigrationLawyer #ImmigrationIndia #VisaApplicationHelp #LegalSupportUSA
0 notes
Text
How to Franchise a Business 101: Common Franchising Mistakes to Avoid

Franchising can be a powerful engine for business growth, allowing entrepreneurs to expand their reach and build a formidable brand. However, the path to franchising success is paved with potential pitfalls. Many aspiring franchisors make critical errors that can derail their expansion plans, damage their reputation, and even lead to the demise of their business. To avoid these common traps, it's crucial to understand the potential hazards and proactively address them. This blog explains how to franchise a business successfully in Australia without committing the most common mistakes.
1. Inadequate Due Diligence
A thorough investigation is essential before entering the franchise business. The evaluation needs to look closely at how the franchise works and performs as a business system, including its operational effectiveness and competitive strengths. Franchisors need to research how their franchise performs across all of its market areas and what matters to customers in each of them.
2. Neglecting Franchisee Training and Support
A successful franchise system offers complete training and ongoing support for all franchisees. Insufficient training leaves franchisees unable to run their businesses correctly, which creates poor results and unhappy customers. When franchisees receive only minimal support they feel abandoned, which reduces their performance and damages brand quality.
3. Overlooking Legal and Regulatory Compliance
The many rules and laws governing franchise businesses can be difficult to navigate. Franchisors need to make sure their franchise agreements follow every legal requirement on consumer protection, fair competition and intellectual property rights. Non-compliance can lead to expensive court cases and heavy fines.
4. Setting Unrealistic Expectations
When a franchisor makes unrealistic promises to potential franchisees and then fails to deliver, it destroys trust and harms the brand's reputation. Franchisors need to openly discuss what owners experience when running a franchise, including the financial risks and the dedication required.
5. Failing to Build a Strong Franchise Culture
A successful franchise network needs a solid company culture that supports all partners. To build strong franchise relationships, franchisors should bring everyone together, encourage open communication and recognise wins. Franchisee bonding events, both online and in person, let members strengthen their connections and stay true to the business vision.
6. Ignoring the Importance of Technology
Technology drives key operations in today's digital economy. Using the right technology tools helps franchises connect with customers faster, run operations more efficiently and improve the customer experience.
7. Neglecting Marketing and Brand Consistency
Every franchise location must align with the company's brand image so that customers can recognise and trust the brand. Franchisors need to create comprehensive marketing guidelines and help their franchisees run successful local promotions.
8. Failing to Adapt to Changing Market Conditions
Effective franchisors must stay ready to modify their operations as the business world changes. Keeping the franchise system updated with current documents and training, and developing fresh marketing strategies, helps franchisees succeed in the long term.
9. Underestimating the Importance of Customer Service
Franchisors need to teach their franchisees how to keep customers happy, and offer ongoing support so that customer service stays at a high standard across the network.
10. Failing to Build Strong Relationships with Franchisees
Establishing and preserving good relationships with franchisees should be a top priority. The franchisor needs to maintain an ongoing dialogue with their partners while actively helping their businesses succeed.
Final Words
You no longer need to worry about how to franchise a business successfully in Australia. This blog identifies the common mistakes and explains how to avoid them when franchising a business. By carefully considering these potential pitfalls and taking proactive steps to address them, franchisors can increase their chances of building a successful and sustainable franchise system.
Also Read: Effective Onboarding Practices To Improve Franchisee Success
0 notes
Text
Machines of Loving Grace1
How AI Could Transform the World for the Better
October 2024
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:
Maximize leverage. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
Avoid perception of propaganda. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.
Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.
The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:
Biology and physical health
Neuroscience and mental health
Economic development and poverty
Peace and governance
Work and meaning
My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions2), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.
Basic assumptions and framework
To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), as well as laying out a framework for thinking about the effects of such AI once it’s present.
What powerful AI (I dislike the term AGI)3 will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.
By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence4, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed5. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.
Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little6. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans).
I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way.
Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence7, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
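As a toy illustration of complementary factors (my own sketch with arbitrary numbers, not a model from this essay), a Leontief-style production function makes the point starkly: once intelligence exceeds the complementary factor, the marginal return to further intelligence collapses to zero.

```python
# Toy illustration with arbitrary units: output is limited by the scarcer of two
# complementary factors, in the spirit of "planes and pilots".

def output(intelligence: float, complement: float) -> float:
    # Leontief-style production: the scarcer factor is the binding one.
    return min(intelligence, complement)

def marginal_return_to_intelligence(intelligence: float, complement: float) -> float:
    # Extra output bought by one more unit of intelligence.
    return output(intelligence + 1.0, complement) - output(intelligence, complement)

complement = 100.0  # e.g. experiment throughput, data, regulatory bandwidth
for intelligence in (50.0, 99.0, 100.0, 1_000.0, 1_000_000.0):
    print(intelligence, marginal_return_to_intelligence(intelligence, complement))
# Prints 1.0 while intelligence is the bottleneck, then 0.0 once the
# complementary factor binds: more intelligence stops helping at the margin.
```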
My guess at a list of factors that limit or are complementary to intelligence includes:
Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn8. But the world only moves so fast. Cells and animals run at a fixed speed so experiments on them take a certain amount of time which may be irreducible. The same is true of hardware, materials science, anything involving communicating with people, and even our existing software infrastructure. Furthermore, in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Intrinsic complexity. Some things are inherently unpredictable or chaotic and even the most powerful AI cannot predict or untangle them substantially better than a human or a computer today. For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case,9 as compared to today’s humans and computers (a brief back-of-the-envelope sketch of this point follows this list).
Constraints from humans. Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable. It’s not possible to travel faster than light. Pudding does not unstir. Chips can only have so many transistors per square centimeter before they become unreliable. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.
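To put a rough number on the intrinsic-complexity point above (a standard back-of-the-envelope argument, not from the original essay): in a chaotic system an initial measurement error $\delta_0$ grows roughly as $\delta(t) \approx \delta_0 e^{\lambda t}$ for some Lyapunov exponent $\lambda$, so the usable prediction horizon is about

$$t_{\text{predict}} \approx \frac{1}{\lambda}\,\ln\!\frac{\Delta}{\delta_0},$$

where $\Delta$ is the largest error one can tolerate. Improving the initial precision by a factor of a million extends the horizon by only $\ln(10^6)/\lambda \approx 14/\lambda$, an additive rather than multiplicative gain, which is why even vastly better intelligence and instrumentation buy only marginally longer forecasts.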
There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute)10. The key question is how fast it all happens and in what order.
With the above framework in mind, I’ll try to answer that question for the five areas mentioned in the introduction.
1. Biology and health
Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century some of the most ancient human afflictions (such as smallpox) have finally been vanquished, but many more still remain, and defeating them would be an enormous humanitarian accomplishment. Beyond even curing disease, biological science can in principle improve the baseline quality of human health, by extending the healthy human lifespan, increasing control and freedom over our own biological processes, and addressing everyday problems that we currently think of as immutable parts of the human condition.
In the “limiting factors” language of the previous section, the main challenges with directly applying intelligence to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related to each other). Human constraints also play a role at a later stage, when clinical trials are involved. Let’s take these one by one.
Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions to occur, and this can sometimes take days or even weeks, with no obvious way to speed it up. Animal experiments can take months (or more) and human experiments often take years (or even decades for long-term outcome studies). Somewhat related to this, data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way). Even massive, quantitative molecular data, like the proteomics data that I collected while working on mass spectrometry techniques, is noisy and misses a lot (which types of cells were these proteins in? Which part of the cell? At what phase in the cell cycle?).
In part responsible for these problems with data is intrinsic complexity: if you’ve ever seen a diagram showing the biochemistry of human metabolism, you’ll know that it’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way. And finally, beyond just the intrinsic time that it takes to run an experiment on humans, actual clinical trials involve a lot of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary additional time and delay progress.
Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology. Historically, mathematicians, computer scientists, and physicists who have applied their skills to biology over the last 30 years have been quite successful, but have not had the truly transformative impact initially hoped for. Some of the skepticism has been reduced by major and revolutionary breakthroughs like AlphaFold (which has just deservedly won its creators the Nobel Prize in Chemistry) and AlphaProteo11, but there’s still a perception that AI is (and will continue to be) useful in only a limited set of circumstances. A common formulation is “AI can do a better job analyzing your data, but it can’t produce more data or improve the quality of the data. Garbage in, garbage out”.
But I think that pessimistic perspective is thinking about AI in the wrong way. If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
To get more specific on where I think acceleration is likely to come from, a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques12 that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled both the bulk of our basic scientific understanding of biology, and have driven many of the most powerful medical treatments.
Some examples include:
CRISPR: a technique that allows live editing of any gene in living organisms (replacement of any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there have been constant improvements to target specific cell types, increasing accuracy, and reducing edits of the wrong gene—all of which are needed for safe use in humans.
Various kinds of microscopy for watching what is going on at a precise level: advanced light microscopes (with various kinds of fluorescent techniques, special optics, etc), electron microscopes, atomic force microscopes, etc.
Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude in the last couple decades.
Optogenetic techniques that allow you to get a neuron to fire by shining a light on it.
mRNA vaccines that, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines of course became famous during COVID).
Cell therapies such as CAR-T that allow immune cells to be taken out of the body and “reprogrammed” to attack, in principle, anything.
Conceptual insights like the germ theory of disease or the realization of a link between the immune system and cancer13.
I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.
Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. Both these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years.14 Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it is I think there’s an unavoidable constant delay: experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that15.
What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests.
When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.
Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies16. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used.
To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.
Although predicting what powerful AI can do in a few years remains inherently difficult and speculative, there is some concreteness to asking “what could humans do unaided in the next 100 years?”. Simply looking at what we’ve accomplished in the 20th century, or extrapolating from the first 2 decades of the 21st, or asking what “10 CRISPR’s and 50 CAR-T’s” would get us, all offer practical, grounded ways to estimate the general level of progress we might expect from powerful AI.
Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect:
Reliable prevention and treatment of nearly all17 natural infectious disease. Given the enormous advances against infectious disease in the 20th century, it is not radical to imagine that we could more or less “finish the job” in a compressed 21st. mRNA vaccines and similar technology already point the way towards “vaccines for anything”. Whether infectious disease is fully eradicated from the world (as opposed to just in some places) depends on questions about poverty and inequality, which are discussed in Section 3.
Elimination of most cancer. Death rates from cancer have been dropping ~2% per year for the last few decades; thus we are on track to eliminate most cancer in the 21st century at the current pace of human science (see the short calculation after this list). Some subtypes have already been largely cured (for example some types of leukemia with CAR-T therapy), and I’m perhaps even more excited for very selective drugs that target cancer in its infancy and prevent it from ever growing. AI will also make possible treatment regimens very finely adapted to the individualized genome of the cancer—these are possible today, but hugely expensive in time and human expertise, which AI should allow us to scale. Reductions of 95% or more in both mortality and incidence seem possible. That said, cancer is extremely varied and adaptive, and is likely the hardest of these diseases to fully destroy. It would not be surprising if an assortment of rare, difficult malignancies persists.
Very effective prevention and effective cures for genetic disease. Greatly improved embryo screening will likely make it possible to prevent most genetic disease, and some safer, more reliable descendant of CRISPR may cure most genetic disease in existing people. Whole-body afflictions that affect a large fraction of cells may be the last holdouts, however.
Prevention of Alzheimer’s. We’ve had a very hard time figuring out what causes Alzheimer’s (it is somehow related to beta-amyloid protein, but the actual details seem to be very complex). It seems like exactly the type of problem that can be solved with better measurement tools that isolate biological effects; thus I am bullish about AI’s ability to solve it. There is a good chance it can eventually be prevented with relatively simple interventions, once we actually understand what is going on. That said, damage from already-existing Alzheimer’s may be very difficult to reverse.
Improved treatment of most other ailments. This is a catch-all category for other ailments including diabetes, obesity, heart disease, autoimmune diseases, and more. Most of these seem “easier” to solve than cancer and Alzheimer’s and in many cases are already in steep decline. For example, deaths from heart disease have already declined over 50%, and simple interventions like GLP-1 agonists have already made huge progress against obesity and diabetes.
Biological freedom. The last 70 years featured advances in birth control, fertility, management of weight, and much more. But I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control. We’ll refer to these under the heading of biological freedom: the idea that everyone should be empowered to choose what they want to become and live their lives in the way that most appeals to them. There will of course be important questions about global equality of access; see Section 3 for these.
Doubling of the human lifespan18. This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150. Obviously the interventions involved in slowing the actual aging process will be different from those that were needed in the last century to prevent (mostly childhood) premature deaths from disease, but the magnitude of change is not unprecedented19. Concretely, there already exist drugs that increase maximum lifespan in rats by 25-50% with limited ill-effects. And some animals (e.g. some types of turtle) already live 200 years, so humans are manifestly not at some theoretical upper limit. At a guess, the most important thing that is needed might be reliable, non-Goodhart-able biomarkers of human aging, as that will allow fast iteration on experiments and clinical trials. Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want, although there’s certainly no guarantee this is biologically possible.
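To spell out the cancer extrapolation above (my arithmetic, using the essay's ~2% per year figure): a steady 2% annual decline compounds over the remaining ~75 years of the century to

$$0.98^{75} \approx 0.22,$$

i.e. roughly an 80% drop in death rates from the existing trend alone, before any AI acceleration, which is what makes the further jump to 95%+ reductions look like a speeding-up of an existing trajectory rather than a discontinuity.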
It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope that any mention of disease will sound to them the way scurvy, smallpox, or bubonic plague sounds to us. That generation will also benefit from increased biological freedom and self-expression, and with luck may also be able to live as long as they want.
It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent, and more broadly how to keep down the cost of healthcare (which is mostly consumed by those over 70 and especially those with terminal illnesses such as cancer). The situation for these programs is likely to be radically improved if all this comes to pass20, as the ratio of working age to retired population will change drastically. No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies, but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI.
2. Neuroscience and mind
In the previous section I focused on physical diseases and biology in general, and didn’t cover neuroscience or mental health. But neuroscience is a subdiscipline of biology and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly than physical health. Hundreds of millions of people have very low quality of life due to problems like addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy21, or intellectual disabilities. Billions more struggle with everyday problems that can often be interpreted as much milder versions of one of these severe clinical disorders. And as with general biology, it may be possible to go beyond addressing problems to improving the baseline quality of human experience.
The basic framework that I laid out for biology applies equally to neuroscience. The field is propelled forward by a small number of discoveries often related to tools for measurement or precise intervention – in the list of those above, optogenetics was a neuroscience discovery, and more recently CLARITY and expansion microscopy are advances in the same vein, in addition to many of the general cell biology methods directly carrying over to neuroscience. I think the rate of these advances will be similarly accelerated by AI and therefore that the framework of “100 years of progress in 5-10 years” applies to neuroscience in the same way it does to biology and for the same reasons. As in biology, the progress in 20th century neuroscience was enormous – for example we didn’t even understand how or why neurons fired until the 1950’s. Thus, it seems reasonable to expect AI-accelerated neuroscience to produce rapid progress over a few years.
There is one thing we should add to this basic picture, which is that some of the things we’ve learned (or are learning) about AI itself in the last few years are likely to help advance neuroscience, even if it continues to be done only by humans. Interpretability is an obvious example: although biological neurons superficially operate in a completely different manner from artificial neurons (they communicate via spikes and often spike rates, so there is a time element not present in artificial neurons, and a bunch of details relating to cell physiology and neurotransmitters modifies their operation substantially), the basic question of “how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations” is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits22. As just one example of this, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice.
It is much easier to do experiments on artificial neural networks than on real ones (the latter often requires cutting into animal brains), so interpretability may well become a tool for improving our understanding of neuroscience. Furthermore, powerful AI’s will themselves probably be able to develop and apply this tool better than humans can.
Beyond just interpretability though, what we have learned from AI about how intelligent systems are trained should (though I am not sure it has yet) cause a revolution in neuroscience. When I was working in neuroscience, a lot of people focused on what I would now consider the wrong questions about learning, because the concept of the scaling hypothesis / bitter lesson didn’t exist yet. The idea that a simple objective function plus a lot of data can drive incredibly complex behaviors makes it more interesting to understand the objective functions and architectural biases and less interesting to understand the details of the emergent computations. I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. My attitude to the scaling hypothesis has always been “aha – this is an explanation, at a high level, of how intelligence works and how it so easily evolved”, but I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI.
I think that neuroscientists should be trying to combine this basic insight with the particularities of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to try to figure out some of neuroscience’s key puzzles. Some likely are, but I suspect it’s not enough yet, and that AI neuroscientists will be able to more effectively leverage this angle to accelerate progress.
I expect AI to accelerate neuroscientific progress along four distinct routes, all of which can hopefully work together to cure mental illness and improve function:
Traditional molecular biology, chemistry, and genetics. This is essentially the same story as general biology in section 1, and AI can likely speed it up via the same mechanisms. There are many drugs that modulate neurotransmitters in order to alter brain function, affect alertness or perception, change mood, etc., and AI can help us invent many more. AI can probably also accelerate research on the genetic basis of mental illness.
Fine-grained neural measurement and intervention. This is the ability to measure what a lot of individual neurons or neuronal circuits are doing, and intervene to change their behavior. Optogenetics and neural probes are technologies capable of both measurement and intervention in live organisms, and a number of very advanced methods (such as molecular ticker tapes to read out the firing patterns of large numbers of individual neurons) have also been proposed and seem possible in principle.
Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders.
Behavioral interventions. I haven’t much mentioned it given the focus on the biological side of neuroscience, but psychiatry and psychology have of course developed a wide repertoire of behavioral interventions over the 20th century; it stands to reason that AI could accelerate these as well, both the development of new methods and helping patients to adhere to existing methods. More broadly, the idea of an “AI coach” who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective, seems very promising.
It’s my guess that these four routes of progress working together would, as with physical disease, be on track to lead to the cure or prevention of most mental illness in the next 100 years even if AI was not involved – and thus might reasonably be completed in 5-10 AI-accelerated years. Concretely my guess at what will happen is something like:
Most mental illness can probably be cured. I’m not an expert in psychiatric disease (my time in neuroscience was spent building probes to study small groups of neurons) but it’s my guess that diseases like PTSD, depression, schizophrenia, addiction, etc. can be figured out and very effectively treated via some combination of the four directions above. The answer is likely to be some combination of “something went wrong biochemically” (although it could be very complex) and “something went wrong with the neural network, at a high level”. That is, it’s a systems neuroscience question—though that doesn’t gainsay the impact of the behavioral interventions discussed above. Tools for measurement and intervention, especially in live humans, seem likely to lead to rapid iteration and progress.
Conditions that are very “structural” may be more difficult, but not impossible. There’s some evidence that psychopathy is associated with obvious neuroanatomical differences – that some brain regions are simply smaller or less developed in psychopaths. Psychopaths are also believed to lack empathy from a young age; whatever is different about their brain, it was probably always that way. The same may be true of some intellectual disabilities, and perhaps other conditions. Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence. Perhaps there is some way to coax the adult brain into an earlier or more plastic state where it can be reshaped. I’m very uncertain how possible this is, but my instinct is to be optimistic about what AI can invent here.
Effective genetic prevention of mental illness seems possible. Most mental illness is partially heritable, and genome-wide association studies are starting to gain traction on identifying the relevant factors, which are often many in number. It will probably be possible to prevent most of these diseases via embryo screening, similar to the story with physical disease. One difference is that psychiatric disease is more likely to be polygenic (many genes contribute), so due to complexity there’s an increased risk of unknowingly selecting against positive traits that are correlated with disease. Oddly however, in recent years GWAS studies seem to suggest that these correlations might have been overstated. In any case, AI-accelerated neuroscience may help us to figure these things out. Of course, embryo screening for complex traits raises a number of societal issues and will be controversial, though I would guess that most people would support screening for severe or debilitating mental illness.
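To make "polygenic" a little more concrete, here is a minimal sketch of how a polygenic risk score is typically computed: a weighted sum of a person's risk-allele counts, with the weights taken from GWAS effect sizes. This is my own illustration rather than anything from the essay, and the variant IDs and effect sizes below are invented.

```python
# Minimal polygenic risk score (PRS) sketch; all numbers are invented.
# In a real pipeline the effect sizes (betas) come from a GWAS over many variants.
gwas_effect_sizes = {
    "rs0001": 0.12,   # hypothetical variant -> per-allele contribution to risk
    "rs0002": -0.05,
    "rs0003": 0.30,
}

# Genotype = number of risk alleles (0, 1, or 2) this individual carries.
genotype = {"rs0001": 2, "rs0002": 0, "rs0003": 1}

# "Many genes contribute": the score is just a weighted sum across variants.
prs = sum(beta * genotype.get(variant, 0) for variant, beta in gwas_effect_sizes.items())
print(f"Polygenic risk score: {prs:.2f}")
```

The complexity the essay worries about comes from the fact that real scores sum over thousands of variants, many of which are also correlated with traits we would not want to select against.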
Everyday problems that we don’t think of as clinical disease will also be solved. Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious, or react badly to change. Today, drugs already exist to help with e.g. alertness or focus (caffeine, modafinil, ritalin) but as with many other previous areas, much more is likely to be possible. Probably many more such drugs exist and have not been discovered, and there may also be totally new modalities of intervention, such as targeted light stimulation (see optogenetics above) or magnetic fields. Given how many drugs we’ve developed in the 20th century that tune cognitive function and emotional state, I’m very optimistic about the “compressed 21st” where everyone can get their brain to behave a bit better and have a more fulfilling day-to-day experience.
Human baseline experience can be much better. Taking one step further, many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differs greatly from person to person and within the same person at different times, and can also sometimes be triggered by various drugs (though often with side effects). All of this suggests that the “space of what is possible to experience” is very broad and that a larger fraction of people’s lives could consist of these extraordinary moments. It is probably also possible to improve various cognitive functions across the board. This is perhaps the neuroscience version of “biological freedom” or “extended lifespans”.
One topic that often comes up in sci-fi depictions of AI, but that I intentionally haven’t discussed here, is “mind uploading”, the idea of capturing the pattern and dynamics of a human brain and instantiating them in software. This topic could be the subject of an essay all by itself, but suffice it to say that while I think uploading is almost certainly possible in principle, in practice it faces significant technological and societal challenges, even with powerful AI, that likely put it outside the 5-10 year window we are discussing.
In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic.
3. Economic development and poverty
The previous two sections are about developing new technologies that cure disease and improve the quality of human life. However, an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?”
It is one thing to develop a cure for a disease; it is another thing to eradicate the disease from the world. More broadly, many existing health interventions have not yet been applied everywhere in the world, and for that matter the same is true of (non-health) technological improvements in general. Another way to say this is that living standards in many parts of the world are still desperately poor: GDP per capita is ~$2,000 in Sub-Saharan Africa as compared to ~$75,000 in the United States. If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections. Ideally, powerful AI should help the developing world catch up to the developed world, even as it revolutionizes the latter.
I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity. I am somewhat skeptical that an AI could solve the famous “socialist calculation problem”23 and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of.
The challenges facing the developing world are made even more complicated by pervasive corruption in both private and public sectors. Corruption creates a vicious cycle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.
Nevertheless, I do see significant reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks exhibit high returns to intelligence (despite human constraints and complexity). Therefore, AI can likely do them better than they are currently being done. There may also be targeted interventions that get around the human constraints and that AI could focus on. More importantly though, we have to try. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great. So in this section, I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.
Below I make some guesses about how I think things may go in the developing world over the 5-10 years after powerful AI is developed:
Distribution of health interventions. The area where I am perhaps most optimistic is distributing health interventions throughout the world. Diseases have actually been eradicated by top-down campaigns: smallpox was fully eliminated in the 1970’s, and polio and guinea worm are nearly eradicated with less than 100 cases per year. Mathematically sophisticated epidemiological modeling plays an active role in disease eradication campaigns, and it seems very likely that there is room for smarter-than-human AI systems to do a better job of it than humans are. The logistics of distribution can probably also be greatly optimized. One thing I learned as an early donor to GiveWell is that some health charities are way more effective than others; the hope is that AI-accelerated efforts would be more effective still. Additionally, some biological advances actually make the logistics of distribution much easier: for example, malaria has been difficult to eradicate because it requires treatment each time the disease is contracted; a vaccine that only needs to be administered once makes the logistics much simpler (and such vaccines for malaria are in fact currently being developed). Even simpler distribution mechanisms are possible: some diseases could in principle be eradicated by targeting their animal carriers, for example releasing mosquitoes infected with a bacterium that blocks their ability to carry a disease (who then infect all the other mosquitos) or simply using gene drives to wipe out the mosquitos. This requires one or a few centralized actions, rather than a coordinated campaign that must individually treat millions. Overall, I think 5-10 years is a reasonable timeline for a good fraction (maybe 50%) of AI-driven health benefits to propagate to even the poorest countries in the world. A good goal might be for the developing world 5-10 years after powerful AI to at least be substantially healthier than the developed world is today, even if it continues to lag behind the developed world. Accomplishing this will of course require a huge effort in global health, philanthropy, political advocacy, and many other efforts, which both AI developers and policymakers should help with.
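As a toy illustration of what even simple epidemiological modeling looks like, here is a minimal SIR (susceptible-infected-recovered) simulation. This is my own sketch with invented parameters, not a model of any particular disease; real eradication campaigns use far richer models (spatial structure, vaccination logistics, vector dynamics), but the basic exercise of projecting how an intervention changes transmission is the same.

```python
# Minimal discrete-time SIR sketch; parameters are invented for illustration.
def simulate_sir(beta, gamma, s0=0.999, i0=0.001, days=365):
    """beta = transmission rate per day, gamma = recovery rate per day."""
    s, i, r = s0, i0, 0.0
    peak_infected = i
    for _ in range(days):
        new_infections = beta * s * i
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        peak_infected = max(peak_infected, i)
    return peak_infected, r

# Compare a baseline outbreak against an intervention that halves transmission
# (e.g. a vaccine or vector-control campaign).
for label, beta in [("baseline", 0.30), ("transmission halved", 0.15)]:
    peak, total_infected = simulate_sir(beta=beta, gamma=0.10)
    print(f"{label}: peak infected {peak:.1%}, ever infected {total_infected:.1%}")
```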
Economic growth. Can the developing world quickly catch up to the developed world, not just in health, but across the board economically? There is some precedent for this: in the final decades of the 20th century, several East Asian economies achieved sustained ~10% annual real GDP growth rates, allowing them to catch up with the developed world. Human economic planners made the decisions that led to this success, not by directly controlling entire economies but by pulling a few key levers (such as an industrial policy of export-led growth, and resisting the temptation to rely on natural resource wealth); it’s plausible that “AI finance ministers and central bankers” could replicate or exceed this 10% accomplishment. An important question is how to get developing world governments to adopt them while respecting the principle of self-determination—some may be enthusiastic about it, but others are likely to be skeptical. On the optimistic side, many of the health interventions in the previous bullet point are likely to organically increase economic growth: eradicating AIDS/malaria/parasitic worms would have a transformative effect on productivity, not to mention the economic benefits that some of the neuroscience interventions (such as improved mood and focus) would have in developed and developing world alike. Finally, non-health AI-accelerated technology (such as energy technology, transport drones, improved building materials, better logistics and distribution, and so on) may simply permeate the world naturally; for example, even cell phones quickly permeated sub-Saharan Africa via market mechanisms, without needing philanthropic efforts. On the more negative side, while AI and automation have many potential benefits, they also pose challenges for economic development, particularly for countries that haven't yet industrialized. Finding ways to ensure these countries can still develop and improve their economies in an age of increasing automation is an important challenge for economists and policymakers to address. Overall, a dream scenario—perhaps a goal to aim for—would be 20% annual GDP growth rate in the developing world, with 10% each coming from AI-enabled economic decisions and the natural spread of AI-accelerated technologies, including but not limited to health. If achieved, this would bring sub-Saharan Africa to the current per-capita GDP of China in 5-10 years, while raising much of the rest of the developing world to levels higher than the current US GDP. Again, this is a dream scenario, not what happens by default: it’s something all of us must work together to make more likely.
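The arithmetic behind that dream scenario is just compound growth. Here is a quick sanity check, using the essay's rough figure of ~$2,000 per-capita GDP for sub-Saharan Africa; the comparison to China's current per-capita GDP (roughly $12,000-13,000) is my own approximate figure.

```python
# Compound-growth sanity check for the 20% "dream scenario" (rough figures).
gdp_per_capita = 2_000   # approx. sub-Saharan Africa today, USD
growth_rate = 0.20       # 20% annual growth
for years in (5, 10):
    projected = gdp_per_capita * (1 + growth_rate) ** years
    print(f"After {years} years: ~${projected:,.0f} per capita")
# After 10 years this is ~$12,400, roughly China's current per-capita GDP.
```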
Food security 24. Advances in crop technology like better fertilizers and pesticides, more automation, and more efficient land use drastically increased crop yields across the 20th Century, saving millions of people from hunger. Genetic engineering is currently improving many crops even further. Finding even more ways to do this—as well as to make agricultural supply chains even more efficient—could give us an AI-driven second Green Revolution, helping close the gap between the developing and developed world.
Mitigating climate change. Climate change will be felt much more strongly in the developing world, hampering its development. We can expect that AI will lead to improvements in technologies that slow or prevent climate change, from atmospheric carbon-removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming. Of course, as discussed above, technology isn’t the only thing restricting progress on climate change—as with all of the other issues discussed in this essay, human societal factors are important. But there’s good reason to think that AI-enhanced research will give us the means to make mitigating climate change far less costly and disruptive, rendering many of the objections moot and freeing up developing countries to make more economic progress.
Inequality within countries. I’ve mostly talked about inequality as a global phenomenon (which I do think is its most important manifestation), but of course inequality also exists within countries. With advanced health interventions and especially radical increases in lifespan or cognitive enhancement drugs, there will certainly be valid worries that these technologies are “only for the rich”. I am more optimistic about within-country inequality especially in the developed world, for two reasons. First, markets function better in the developed world, and markets are typically good at bringing down the cost of high-value technologies over time25. Second, developed world political institutions are more responsive to their citizens and have greater state capacity to execute universal access programs—and I expect citizens to demand access to technologies that so radically improve quality of life. Of course it’s not predetermined that such demands succeed—and here is another place where we collectively have to do all we can to ensure a fair society. There is a separate problem in inequality of wealth (as opposed to inequality of access to life-saving and life-enhancing technologies), which seems harder and which I discuss in Section 5.
The opt-out problem. One concern in both developed and developing world alike is people opting out of AI-enabled benefits (similar to the anti-vaccine movement, or Luddite movements more generally). There could end up being bad feedback cycles where, for example, the people who are least able to make good decisions opt out of the very technologies that improve their decision-making abilities, leading to an ever-increasing gap and even creating a dystopian underclass (some researchers have argued that this will undermine democracy, a topic I discuss further in the next section). This would, once again, place a moral blemish on AI’s positive advances. This is a difficult problem to solve as I don’t think it is ethically okay to coerce people, but we can at least try to increase people’s scientific understanding—and perhaps AI itself can help us with this. One hopeful sign is that historically anti-technology movements have been more bark than bite: railing against modern technology is popular, but most people adopt it in the end, at least when it’s a matter of individual choice. Individuals tend to adopt most health and consumer technologies, while technologies that are truly hampered, like nuclear power, tend to be collective political decisions.
Overall, I am optimistic about quickly bringing AI’s biological advances to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented economic growth rates and allow the developing world to at least surpass where the developed world is now. I am concerned about the “opt out” problem in both the developed and developing world, but suspect that it will peter out over time and that AI can help accelerate this process. It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.
4. Peace and governance
Suppose that everything in the first three sections goes well: disease, poverty, and inequality are significantly reduced and the baseline of human experience is raised substantially. It does not follow that all major causes of human suffering are solved. Humans are still a threat to each other. Although there is a trend of technological improvement and economic development leading to democracy and peace, it is a very loose trend, with frequent (and recent) backsliding. At the dawn of the 20th Century, people thought they had put war behind them; then came the two world wars. Thirty years ago Francis Fukuyama wrote about “the End of History” and a final triumph of liberal democracy; that hasn’t happened yet. Twenty years ago US policymakers believed that free trade with China would cause it to liberalize as it became richer; that very much didn’t happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc. And plausible theories suggest that internet technology may actually advantage authoritarianism, not democracy as initially believed (e.g. in the “Arab Spring” period). It seems important to try to understand how powerful AI will intersect with these issues of peace, democracy, and freedom.
Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.
I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.
My current guess at the best way to do this is via an “entente strategy”26, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an “eternal 1991”—a world where democracies have the upper hand and Fukuyama’s dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick.
Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons.
First, the increases in quality of life in Sections 1-3 should, all things equal, promote democracy: historically they have, to at least some extent. In particular I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.
Second, there is a good chance free information really does undermine authoritarianism, as long as the authoritarians can’t censor it. And uncensored AI can also bring individuals powerful tools for undermining repressive governments. Repressive governments survive by denying people a certain kind of common knowledge, keeping them from realizing that “the emperor has no clothes”. For example Srđa Popović, who helped to topple the Milošević government in Serbia, has written extensively about techniques for psychologically robbing authoritarians of their power, for breaking the spell and rallying support against a dictator. A superhumanly effective AI version of Popović (whose skills seem like they have high returns to intelligence) in everyone’s pocket, one that dictators are powerless to block or censor, could create a wind at the backs of dissidents and reformers across the world. To say it again, this will be a long and protracted fight, one where victory is not assured, but if we design and build AI in the right way, it may at least be a fight where the advocates of freedom everywhere have an advantage.
As with neuroscience and biology, we can also ask how things could be “better than normal”—not just how to avoid autocracy, but how to make democracies better than they are today. Even within democracies, injustices happen all the time. Rule-of-law societies make a promise to their citizens that everyone will be equal under the law and everyone is entitled to basic human rights, but obviously people do not always receive those rights in practice. That this promise is even partially fulfilled makes it something to be proud of, but can AI help us do better?
For example, could AI improve our legal and judicial system by making decisions and processes more impartial? Today people mostly worry in legal or judicial contexts that AI systems will be a cause of discrimination, and these worries are important and need to be defended against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone.
For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. Trying to make the law fully mechanical hasn’t worked because the real world is messy and can’t always be captured in mathematical formulas. Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. “Smart contracts” in cryptocurrencies haven’t revolutionized law because ordinary code isn’t smart enough to adjudicate all that much of interest. But AI might be smart enough for this: it is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way.
I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it: the training process for such systems could be extensively studied, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases, in a way that is simply not possible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in a judicial or police context, making constitutions more self-enforcing.
In a similar vein, AI could be used to both aggregate opinions and drive consensus among citizens, resolving conflict, finding common ground, and seeking compromise. Some early efforts in this direction have been undertaken by the Computational Democracy Project, including collaborations with Anthropic. A more informed and thoughtful citizenry would obviously strengthen democratic institutions.
There is also a clear opportunity for AI to be used to help provision government services—such as health benefits or social services—that are in principle available to everyone but in practice often severely lacking, and worse in some places than others. This includes health services, the DMV, taxes, social security, building code enforcement, and so on. Having a very thoughtful and informed AI whose job is to give you everything you’re legally entitled to by the government in a way you can understand—and who also helps you comply with often confusing government rules—would be a big deal. Increasing state capacity both helps to deliver on the promise of equality under the law, and strengthens respect for democratic governance. Poorly implemented services are currently a major driver of cynicism about government27.
All of these are somewhat vague ideas, and as I said at the beginning of this section, I am not nearly as confident in their feasibility as I am in the advances in biology, neuroscience, and poverty alleviation. They may be unrealistically utopian. But the important thing is to have an ambitious vision, to be willing to dream big and try things out. The vision of AI as a guarantor of liberty, individual rights, and equality under the law is too powerful a vision not to fight for. A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.
5. Work and meaning
Even if everything in the preceding four sections goes well—not only do we alleviate disease, poverty, and inequality, but liberal democracy becomes the dominant form of government, and existing liberal democracies become better versions of themselves—at least one important question still remains. “It’s great we live in such a technologically advanced world as well as a fair and decent one”, someone might object, “but with AI’s doing everything, how will humans have meaning? For that matter, how will they survive economically?”.
I think this question is more difficult than the others. I don’t mean that I am necessarily more pessimistic about it than I am about the other questions (although I do see challenges). I mean that it is fuzzier and harder to predict in advance, because it relates to macroscopic questions about how society is organized that tend to resolve themselves only over time and in a decentralized manner. For example, historical hunter-gatherer societies might have imagined that life is meaningless without hunting and various kinds of hunting-related religious rituals, and would have imagined that our well-fed technological society is devoid of purpose. They might also not have understood how our economy can provide for everyone, or what function people can usefully serve in a mechanized society.
Nevertheless, it’s worth saying at least a few words, while keeping in mind that the brevity of this section is not at all to be taken as a sign that I don’t take these issues seriously—on the contrary, it is a sign of a lack of clear answers.
On the question of meaning, I think it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much. Of course today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generates zero economic value. I might spend a day trying to get better at a video game, or faster at biking up a mountain, and it doesn’t really matter to me that someone somewhere is much better at those things. In any case I think meaning comes mostly from human relationships and connection, not from economic labor. People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies28. The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.
The economic piece actually seems more difficult to me than the meaning piece. By “economic” in this section I mean the possible problem that most or all humans may not be able to contribute meaningfully to a sufficiently advanced AI-driven economy. This is a more macro problem than the separate problem of inequality, especially inequality in access to the new technologies, which I discussed in Section 3.
First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AI’s are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.
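A toy numbers example (mine, not the essay's) of why comparative advantage keeps humans employed even when AI is better at everything: if AI time is scarce or expensive, total output is highest when it specializes where its edge is largest, leaving the tasks where the human disadvantage is smallest, such as work in the physical world, to people.

```python
# Toy comparative-advantage sketch; the productivity numbers are invented.
# The AI is better at BOTH tasks, yet the best allocation still employs the human.
ai    = {"research": 100, "physical_work": 12}   # units per day
human = {"research": 10,  "physical_work": 10}

allocations = {
    "AI on physical work, human on research": ai["physical_work"] + human["research"],
    "AI on research, human on physical work": ai["research"] + human["physical_work"],
}
for label, total in allocations.items():
    print(f"{label}: {total} units/day")
# The second allocation produces far more total output, so the human's labor
# remains valuable even though the AI is better at both tasks in absolute terms.
```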
However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.
While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism. I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will only be a small part of a solution. It could be a capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values). Perhaps the economy runs on Whuffie points. Or perhaps humans will continue to be economically valuable after all, in some way not anticipated by the usual economic models. All of these solutions have tons of possible problems, and it’s not possible to know whether they will make sense without lots of iteration and experimentation. And as with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and have to be prevented. Much more could be written about these questions and I hope to do so at some later time.
Taking stock
Through the varied topics above, I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.
But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it.
Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.
In Iain M. Banks’ The Player of Games29, the protagonist—a member of a society called the Culture, which is based on principles not unlike those I’ve laid out here—travels to a repressive, militaristic empire in which leadership is determined by competition in an intricate battle game. The game, however, is complex enough that a player’s strategy within it tends to reflect their own political and philosophical outlook. The protagonist manages to defeat the emperor in the game, showing that his values (the Culture’s values) represent a winning strategy even in a game designed by a society based on ruthless competition and survival of the fittest. A well-known post by Scott Alexander has the same thesis—that competition is self-defeating and tends to lead to a society based on compassion and cooperation. The “arc of the moral universe” is another similar concept.
I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. It is easy to argue that children shouldn’t die of disease if we can prevent it, and easy from there to argue that everyone’s children deserve that right equally. From there it is not hard to argue that we should all band together and apply our intellects to achieve this outcome. Few disagree that people should be punished for attacking or hurting others unnecessarily, and from there it’s not much of a leap to the idea that punishments should be consistent and systematic across people. It is similarly intuitive that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.
Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.
Thanks to Kevin Esvelt, Parag Mallick, Stuart Ritchie, Matt Yglesias, Erik Brynjolfsson, Jim McClave, Allan Dafoe, and many people at Anthropic for reviewing drafts of this essay.
To the winners of the 2024 Nobel prize in Chemistry, for showing us all the way.
Footnotes
1. https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace
2. I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence.
3. I find AGI to be an imprecise term that has gathered a lot of sci-fi baggage and hype. I prefer "powerful AI" or "Expert-Level Science and Engineering" which get at what I mean without the hype.
4. In this essay, I use "intelligence" to refer to a general problem-solving capability that can be applied across diverse domains. This includes abilities like reasoning, learning, planning, and creativity. While I use "intelligence" as a shorthand throughout this essay, I acknowledge that the nature of intelligence is a complex and debated topic in cognitive science and AI research. Some researchers argue that intelligence isn't a single, unified concept but rather a collection of separate cognitive abilities. Others contend that there's a general factor of intelligence (g factor) underlying various cognitive skills. That’s a debate for another time.
5. This is roughly the current speed of AI systems – for example they can read a page of text in a couple seconds and write a page of text in maybe 20 seconds, which is 10-100x the speed at which humans can do these things. Over time larger models tend to make this slower but more powerful chips tend to make it faster; to date the two effects have roughly canceled out.
6. This might seem like a strawman position, but careful thinkers like Tyler Cowen and Matt Yglesias have raised it as a serious concern (though I don’t think they fully hold the view), and I don’t think it is crazy.
7. The closest economics work that I’m aware of to tackling this question is work on “general purpose technologies” and “intangible investments” that serve as complements to general purpose technologies.
8. This learning can include temporary, in-context learning, or traditional training; both will be rate-limited by the physical world.
9. In a chaotic system, small errors compound exponentially over time, so that even an enormous increase in computing power leads to only a small improvement in how far ahead it is possible to predict, and in practice measurement error may degrade this further.
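A minimal numerical illustration of this point (my own toy example using the logistic map, not something from the essay): two trajectories that start almost identically diverge at an exponential rate, so each extra step of reliable prediction requires an exponential improvement in initial precision, and extra compute buys only a roughly logarithmic gain in prediction horizon.

```python
# Toy chaos demo with the logistic map (r = 4 is in the chaotic regime).
def logistic(x, r=4.0):
    return r * x * (1 - x)

x, y = 0.3, 0.3 + 1e-12   # two states differing by one part in a trillion
for step in range(1, 101):
    x, y = logistic(x), logistic(y)
    if abs(x - y) > 0.1:
        print(f"Trajectories visibly diverge after ~{step} steps,")
        print("despite an initial difference of only 1e-12.")
        break
```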
10. Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI.
11. These achievements have been an inspiration to me and perhaps the most powerful existing example of AI being used to transform biology.
12. “Progress in science depends on new techniques, new discoveries and new ideas, probably in that order.” - Sydney Brenner
13. Thanks to Parag Mallick for suggesting this point.
14. I didn't want to clog up the text with speculation about what specific future discoveries AI-enabled science could make, but here is a brainstorm of some possibilities:
— Design of better computational tools like AlphaFold and AlphaProteo — that is, a general AI system speeding up our ability to make specialized AI computational biology tools.
— More efficient and selective CRISPR.
— More advanced cell therapies.
— Materials science and miniaturization breakthroughs leading to better implanted devices.
— Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue.
— Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases.
15. AI may of course also help with being smarter about choosing what experiments to run: improving experimental design, learning more from a first round of experiments so that the second round can narrow in on key questions, and so on.
16. Thanks to Matthew Yglesias for suggesting this point.
17. Fast evolving diseases, like the multidrug resistant strains that essentially use hospitals as an evolutionary laboratory to continually improve their resistance to treatment, could be especially stubborn to deal with, and could be the kind of thing that prevents us from getting to 100%.
18. Note it may be hard to know that we have doubled the human lifespan within the 5-10 years. While we might have accomplished it, we may not know it yet within the study time-frame.
19. This is one place where I am willing, despite the obvious biological differences between curing diseases and slowing down the aging process itself, to instead look from a greater distance at the statistical trend and say “even though the details are different, I think human science would probably find a way to continue this trend; after all, smooth trends in anything complex are necessarily made by adding up very heterogeneous components.”
20. As an example, I’m told that an increase in productivity growth per year of 1% or even 0.5% would be transformative in projections related to these programs. If the ideas contemplated in this essay come to pass, productivity gains could be much larger than this.
21. The media loves to portray high status psychopaths, but the average psychopath is probably a person with poor economic prospects and poor impulse control who ends up spending significant time in prison.
22. I think this is somewhat analogous to the fact that many, though likely not all, of the results we’re learning from interpretability would continue to be relevant even if some of the architectural details of our current artificial neural nets, such as the attention mechanism, were changed or replaced in some way.
23. I suspect it is a bit like a classical chaotic system – beset by irreducible complexity that has to be managed in a mostly decentralized manner. Though as I say later in this section, more modest interventions may be possible. A counterargument, made to me by economist Erik Brynjolfsson, is that large companies (such as Walmart or Uber) are starting to have enough centralized knowledge to understand consumers better than any decentralized process could, perhaps forcing us to revise Hayek’s insights about who has the best local knowledge.
24. Thanks to Kevin Esvelt for suggesting this point.
25. For example, cell phones were initially a technology for the rich, but quickly became very cheap with year-over-year improvements happening so fast as to obviate any advantage of buying a “luxury” cell phone, and today most people have phones of similar quality.
26. This is the title of a forthcoming paper from RAND that lays out roughly the strategy I describe.
27. When the average person thinks of public institutions, they probably think of their experience with the DMV, IRS, Medicare, or similar functions. Making these experiences more positive than they currently are seems like a powerful way to combat undue cynicism.
28. Indeed, in an AI-powered world, the range of such possible challenges and projects will be much vaster than it is today.
29. I am breaking my own rule not to make this about science fiction, but I’ve found it hard not to refer to it at least a bit. The truth is that science fiction is one of our only sources of expansive thought experiments about the future; I think it says something bad that it’s entangled so heavily with a particular narrow subculture.
Text
Chapter 1 Blog Post

In Chapter 1 of “Mobile and Social Media Journalism: A Practical Guide,” social media is introduced as an ever-present and ever-growing news aggregator, reporting tool, and platform. Hallie Jackson, a news reporter for NBC, was one of the best-known broadcast journalists to begin using social media to bring her reporting to audiences. Jackson uses social media as a more direct form of communication, reaching her audiences even with behind-the-scenes footage of the reporting process. Using social media, and often just a cell phone, for some or all of the reporting process, Jackson makes her reporting feel like more of a conversation with her audience by bringing them into the story. “Conversational journalism” is a term the chapter also uses to describe the future of journalism and journalistic communication models.

Younger generations are increasingly using social media platforms to gain their knowledge about the world and consume news. They are turned off by the black-and-white nature of traditional print reporting. Data has shown that news presented in first-person formats or in digestible, conversational tones is more popular with current audiences because it opens the door to more of a dialogue. Another benefit of mobile devices, which are increasingly in everyone’s hands, is the instantaneous nature of how news is spread and reported on. Several examples are given in the chapter, but the miracle river landing of an airplane captured by a citizen journalist exemplifies the speed at which information can travel in the modern era. The author notes that the photographs taken by Janis Krums were used broadly in initial reports because staff or wire news photographers were too far away to get there first. Twitter, or X as it is now known, became a hub for this kind of instantaneous newsgathering and sharing. Now, however, that age of technology has shifted again with the downfall of Twitter, which likely isn’t included in this textbook because of how new that shift is.

While social media and journalism have had a happy marriage for audiences, factual and journalistic integrity issues have always been present. The author warns journalists of the pitfalls associated with this concern. For example, since information is spread so quickly, it is up to the journalist to do their due diligence in fact-checking information before it is shared. This might take more time, but it will prevent the overabundance of fake news and misinformation, which can also spread widely on these media platforms.
Text
And because I probably haven't been controversial enough, my attempt at a nuanced opinion regarding generative AI, which continues to be a hot topic on this and other places online:
I think it’s a fascinating and extremely cool technology that might prove revolutionary in several fields.
However, like a lot of new revolutionary technologies, there are definitely some concerning aspects (including both legal and ethical ones) regarding its implementation, although I think only a handful of those concerns are genuinely worrisome while the rest are kind of overblown.
I understand where a lot of artists and creators who feel threatened or affected by this technology are coming from, and to a certain degree, I can sympathize.
However, I think several of the arguments from artists and other creators against AI are very half-baked and might actually backfire if they keep pushing them, especially when it comes to things like copyright (believe me: The last thing you should want as a creator are more stringent copyright laws, especially in the US).
While I can sympathize with the artists and creators who are against generative AI, I can’t say the same about the rest of the people online who are so vociferously against it. Some of those people are simply expressing solidarity with the artists, which I guess is somewhat fair. The rest, however, really seem to be against AI not because they actually understand what it is and what its actual potential pitfalls even are, but rather simply because whatever online tribe they claim to be part of told them “AI is bad”. So they are simply repeating “AI is bad” to signal or reinforce their allegiance to their group without actually understanding what exactly they are against. It is honestly quite frustrating!
Ultimately, my overall position on generative AI is neutral for the most part. I think there are issues about it that must be tackled, but I'm not against it on principle. I will try to refrain from reblogging content created with it out of solidarity to artists and creators, but I reserve the right to like whatever I find that catches my eye. I'm also not going to take part in any "online debate" surrounding it. As I said, I don't think a lot of people have good arguments or even know what they are talking about regarding the topic, and I don't have the time or energy to deal with that.
Text
A Crash Course to Design Thinking: Empathy
●~•──────── Introduction ─────────•~●
Hello! Today I wanted to talk about UX design. This post was supposed to be longer, but Tumblr deleted my draft and I’m feeling (╯°□°)╯︵ ┻━┻ so here is just part one. We’ll be covering the “Empathy” step, which includes:
Exploring the problem space
Conducting User Research
Defining User Personas
I believe that taking time to do design thinking when creating a product avoids bad doorknobs and confusing app interfaces. Here’s a handful of hilariously bad UI demos for a taste: https://mattw.io/bad-ui/.
Here’s some other common pitfalls:
Too many choices for a user (overcrowded toolbars)
Not enough options for users (accessibility)
Poor feedback (“Did that form actually go through?”)
Inconsistent interfaces (“Do I push or pull on this door…It says push, but has a pull handle!”)
●~•────────What is a prototype? ─────────•~●
A prototype is an early mock-up or model of the product you want to build. We’re focusing on digital products in this case, so the product can be an app, a website, or any other application. Prototypes are useful for conceptualizing and visualizing your ideas for the product. They’re also meant to showcase the “flow” of using the app from a user’s perspective, as well as show the layout and organization of your product.
●~•───────What is the design thinking process? ────────•~●
The design thinking process is an iterative approach to designing products. It’s not necessarily linear, but we’ll walk through what you should consider at each step. In practice, you may find yourself revisiting steps to refine your problem, your ideas, and the mock-up itself after getting user feedback. Let’s talk about the first step, empathizing!
Part 1: Empathize
●~•─────── Step 1 ────────•~●
⭐ Pick a problem space.
What problem are you trying to find a potential solution for? It could be as simple as "Tumblr’s draft system sucks" or maybe your friend just said "This book tracking app could be better.." Sources of inspiration are everywhere!
Coming up with your own: Think about your own experiences as a user of different products or services. Have you encountered any frustrating issues or pain points that could be addressed with a potential solution? Maybe you struggle with finding parking in your city and wish there was a more efficient way to find available spots. Or perhaps you find it difficult to keep track of all your passwords and would like a more secure and user-friendly password manager. Consider your own needs and experiences as a starting point for identifying potential problem spaces.
Interacting with others: Talk to people in different industries or fields, or attend events or conferences related to areas you're interested in. This can give you exposure to different perspectives and potential problem spaces that you may not have considered before. For example, if you're interested in education technology, attending an education conference could help you identify common challenges and needs in that space. Or even reading through r/professors or talking to your own instructors!
📚 Resources:
https://www.uxchallenge.co/ - List of problems
https://uxtools.co/challenges/ - Walkthroughs on tackling specific problems focused on UX skills
●~•─────── Step 2 ────────•~●
⭐ Understand the users affected by the problem.
Once you have a problem space, don’t jump ahead and start thinking of solutions! First, we must understand the problem from a variety of user perspectives. Why? Because by understanding the users affected by the problem, we can gain insights into their needs, pain points, and behaviors. This understanding can help us develop effective solutions that address their needs and improve their experiences.
There’s a variety of user research methods we can use to collect user perspectives; this is just a handful of them:
Survey: If the product already exists (and it’s yours), you could add a survey in-app for feedback on a specific feature. Otherwise, you can create a survey assessing a user’s impressions on a problem they might have (“Do you encounter this..?”, “Would you be interested in a product that..”, “What kind of features are most important to you?”).
User Interviews: This involves talking to users one-on-one to gain insights into their experiences, needs, and pain points. It's important to ask open-ended questions and actively listen to their responses to understand their perspectives fully.
Online Research: Check out user impressions of products by looking up existing reviews online. This can be from Amazon, Reddit, the app store, whatever. To make this kind of data useful, you can identify patterns of what is often mentioned or common pain points users express online. It’s going to be better if you can connect more directly with users about your specific problem area, but this is something to start with.
📚 Resources: (I love nngroup…)
https://www.nngroup.com/articles/ux-research-cheat-sheet/
https://www.nngroup.com/articles/guide-ux-research-methods/
https://www.nngroup.com/articles/which-ux-research-methods/
●~•─────── Step 3 ────────•~●
⭐ Create User Personas to represent the types of users your product will be addressing the needs of.
The user persona shouldn’t represent a specific (real) person; rather, it should represent a realistic archetype of a person. I think of it like a character sheet. For example, if we’re creating an app for book tracking, our user personas might be “Reader Rhea - A college student looking to organize books from her classes” or “Bookworm Bryan - A young adult looking to get book recommendations”. The persona should be based on the research you did prior. Creating user personas will help you better understand and empathize with your users, and make design decisions that align with their needs and goals.
Here’s a quick checklist of what to include in a user persona (there's also a small code-style sketch of one right after the list):
Name: Give your persona a name that reflects their characteristics and needs.
Demographics: Include details like age, gender, occupation, and location.
Goals: What are the persona's primary goals and objectives when using your product?
Pain points: What are the main challenges or problems that the persona faces when using your product?
Behaviors: What are the typical behaviors and habits of the persona when using your product?
Motivations: What motivates the persona to use your product?
Personality: What are the persona's personality traits and characteristics?
Scenario: Describe a scenario in which the persona would use your product or service.
Quote: Include a quote that summarizes the persona's attitude or perspective.
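If it helps to think of a persona as a structured record, here's a minimal sketch of one possible way to capture the checklist above in code. It's purely illustrative: the field names mirror the checklist, and every example value is hypothetical rather than any standard format.

```python
# Minimal sketch: the persona checklist above as a simple record.
# Field names mirror the checklist; all example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class UserPersona:
    name: str
    demographics: str
    goals: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    behaviors: list[str] = field(default_factory=list)
    motivations: list[str] = field(default_factory=list)
    personality: str = ""
    scenario: str = ""
    quote: str = ""

reader_rhea = UserPersona(
    name="Reader Rhea",
    demographics="21, college student, lives on campus",
    goals=["Organize books from her classes", "Track reading deadlines"],
    pain_points=["Loses track of which chapters are due when"],
    behaviors=["Checks the app between classes on her phone"],
    motivations=["Wants to stay on top of coursework"],
    personality="Organized but easily overwhelmed",
    scenario="Adds her syllabus readings at the start of the semester",
    quote="I just want one place that tells me what to read next.",
)
print(reader_rhea.name, "-", reader_rhea.goals[0])
```

You don't have to write personas as code, of course; the point is just that every field on the checklist ends up filled in somewhere, whether that's a doc, a slide, or a record like this.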
📚 Resources:
https://about.gitlab.com/handbook/product/ux/persona-creation/
https://www.nngroup.com/articles/personas-study-guide/
https://www.justinmind.com/blog/user-persona-templates/ - lots of examples and explanations here
●~•─────── That's All! ────────•~●
Phew, ok that is all for now! In a future post, I will go over the second step in the design process. If you have anything to add to this topic, pls share! :D Thanks for reading
#ux design#user experience#ui ux course#prototyping#design thinking#tech#design#creative#user research#comp sci
66 notes
·
View notes
Note
If you were in charge of Teen Wolf, would you have portrayed the Wild Hunt differently or included them at all?
I already gave one possible alternative in my story No Better Tomorrow, and while I'm very happy with the way that story turned out, I've given the Wild Hunt in the Teen Wolf setting a lot of thought, and I want to thank you for giving me the excuse to talk about it.
To me, Teen Wolf was above all other things a bildungsroman, a story about children becoming adults, extrapolating the ubiquitous tension of assuming a new identity that each one of us goes through through lycanthropy and other supernatural manifestations. The Wild Hunt in the actual show served as a manifestation of the same fear of loss that Stiles expressed in Creatures of the Night (5x01): "How come when we graduate we're just expected to go our separate ways? If I've already found the best people in my life, why aren't I not trying to stay with them, you know?"
But in my revision, the Wild Hunt would serve as a manifestation of another pitfall on the way to adulthood: the refusal to grow up. It's tempting to try to hold on to the simplicity and safety of childhood, which is more understandable for these teenagers who were stripped of theirs. After all the things the characters had gone through, there might be a tendency to stay someplace where they wouldn't have to work through their trauma.
The Wild Hunt would be a specific fae court. They would keep the Wild West theme because their appearances change in accordance to their geographic location. In the 1943 flashback they would assume the dress and style of the early German Empire. And that's a clue to their motivation. Within the realm of the Hunt, time has no meaning nor does death, and while that has obvious advantages, it also has drawbacks. Without consequence, there is no victory. There is no tragedy, of course, but there is also no joy. So these fae have come up with a way to experience these things and thus fill their eternity: they Hunt. They track down humans with powerful experiences, abduct them, and drain them of the agony and the ecstasy until they're nothing more than husks.
What attracts them to Beacon Hills is, of course, Hauptmann Douglas. Back during World War II, he tried to control them; he tried to enslave them. To the fae, this is both offensive and highly, highly entertaining. They've been looking for him ever since, and while 70 years is a long time to humans, it's an afternoon's diversion. Once the arcane technology the Doctors were using to hide Douglas no longer functioned, they came and found things just as interesting: the True Alpha of this century, someone may be the only human to survive a nogitsune possession, a girl who spent eight years as a coyote, a Thunder kitsune, an undead werewolf, a Frankenstein monster -- it's a smorgasbord!
But above all, they find Lydia, an exceptionally powerful banshee. They would not consume her, for if she joins their number, she will provide them with a different perspective: forever! They cannot, however, just take her like they take the others. The Fae have few rules, but those they do have terrible consequences if they break them, and she is kin. She must consent to join the Hunt. That doesn't mean, of course, they can't play games with the end goal of getting her to willingly come along. And they love games.
While Lydia (and her relationship with Stiles) would be the focus of the season, all of the characters would, as they are taken by the Hunt, be forced to confront the trauma as the fae feed from their pasts. No show did terrifying hallucinations like Teen Wolf. Lydia will also be tempted to bargain with the Hunt, because the very nature is attractive. If she rides the story forever, she'll no longer have to face the possibility of screaming for the death of everyone she cares about, but she'll also lose what they bring to her. The pack will have engage in Courtly Intrigue, figuring out a way to win their freedom and the freedom of Beacon Hills from an enemy who they can't overpower.
I think it would have been very interesting, to say the least.
27 notes
·
View notes
Text
On Fanfiction and Original Fiction
I have a lot of feelings about the Tumblr debates surrounding fanfiction vs. “real writing” and am going to try to engage with them in the most productive/positive way possible, hopefully in a way that holds space for writers of all backgrounds and ability levels.
A note on my background, for context: I’m a professional published writer and writing educator. I hold an MFA from one of the top ranked MFA programs in the country. In the six years since completing my degree, I’ve been published in journals, anthologies, won literary awards and fellowships, been solicited by agents and presses for upcoming manuscripts, and have my first book coming out next year. My career has unfolded within the literary establishment, and I’m familiar with both its merits and its bullshit. I’m also a successful writer (of poetry, literary fiction, and speculative fiction) who gained many of my first, lifelong writing tools through fanfiction.
I’ve spent a lot of time processing the elitism, classism, and racism that writers (including Latinx writers like myself) face in the MFA world and in the publishing world. I’m working in a literary tradition that uplifts white male American minimalism as a style all writers should value and work towards. A literary tradition that discounts story structures that come from oral tradition, and discounts popular and genre fiction without considering why people connect with those stories. There are so many ways in which writers use their privilege and education to put each other down, and I think that this discussion engages some of these inequities even if it doesn’t come from that place.
As an educator, I’ve taught in a range of literary spaces. I’ve taught at my top-ranked university, where most of my students were financially privileged and had years of access to elite education. I’ve taught in inclusive nonprofit spaces with writers of all ages and backgrounds. I’ve taught in community spaces, writing poems and stories with homeless youth who dropped out of school, whose imaginations and ability to tell their own stories was no less than the young people who had more linguistic tools. A recent class I taught for my nonprofit was called “From Fanfiction to Fan-worthy Fiction”. In this class, I worked with teen fanfic writers to examine craft differences between fanfiction and original fiction. We talked about the tools they gained from fanfiction: writing genuine character moments, understanding character archetypes and tropes, asking “what if” questions and filling gaps in representation, writing toward an audience, developing a consistent writing practice, and learning to write toward the units of scenes and chapters. We also discussed the pitfalls they might discover as they transitioned to original fiction: original world-building, developing complex and nuanced character backstories, finding the right starting place, understanding story structure and pacing, breaking away from fandom inspiration, and editing and polishing.
Within the class, we talked about how, if we only read fanfic, our understanding of storytelling will be limited to what works in fanfic. There’s a world of story out there, and if we want to write original stuff, novels and short stories and poetry will help us gain the tools we need. This is what I think post “read real books” was getting at, but in a world where young people have their attention so divided by media and technology, I try to celebrate any reading my students are doing. If students tell me what kind of fanfics they love, what kinds of tv shows and videos games and stories they love, I recommend books they might also love. I had the privilege of growing up in a household where my love of books was fostered. This isn’t true for all writers. Some of my most successful writer friends and most talented students didn’t grow up in spaces where reading was valued or encouraged. I react against “read real books” because the phrase contains a certain privilege, as if people aren’t reading “real books” out of laziness or lack of ambition, or because they’re in a fanfiction bubble. It implies that consuming story outside of books isn’t “real”. Some of my students have felt intimidated by novels but welcomed by fanfiction. It isn’t a matter of yelling at them and telling them they’re doing something wrong—it’s a matter of helping them see that they can locate their love of story and character in books, then providing access points.
I wouldn’t be a professional writer if not for fanfiction. There are successful writers who have written fanfic and see it as separate from the development of their original work, which is great. But for me, who grew up with no writing community, with little access to creative writing education, and no place to geek out over the books I loved, fanfiction was an incredibly valuable training ground.
The heart of this argument is: who gets to call themselves a “writer”? Who gets to call themselves a “real writer”? What assumptions do we make in the process of assigning those labels? In my opinion, anyone who writes is a writer. My adult student who won literary awards and has her first book of poetry coming out with a major press. My friend who writes for Marvel. My friend who won the Yale Younger Poets Prize and a Lambda Literary Award. My retired adult student who always had a yearning to write but never actually tried it, who took her first class in her sixties. My thirteen-year-old teen student trying to find her way back into the education system, who had no grammatical tools, no education around writing, but wrote songs and raps just for herself. The sixteen-year-old fanfic writer who wrote to me seeking private coaching, who saved up all her money from her first job for those coachings, who didn’t even know what the past tense was and wrote and read only what you’d consider “smutty” anime pairings. All of these people were writing. All were doing the work of writing with the tools they had. All of them had an interest in learning more.
I like to believe that all fanfiction writers are writers, whether they pursue publication or not, whether they write original work or not, whether they develop their tools further or not, whether their writing has value for others or just for their own expression. If those writers want to improve—and we should always be improving, no matter how much we’ve published—then they can learn by reading, they can watch Youtube tutorials, and, if it’s accessible to them, they can pursue education in literary spaces. There are books that earn praise within the literary establishment that leave me cold. There are fanfics that ignite my emotion. There are lauded books that have forever changed me as a person and obscure books that have changed me equally. If you feel that writing is part of who you are, and it’s something you practice often, then you’re a writer, no matter what skill stage you’re at. I hope that claiming that title for yourself empowers you to develop your writing, using whatever tools you have available.
And if you want to take classes with me or other awesome writers from anywhere in the world, with lots of free sessions and scholarship opportunities, check out GrubStreet!
#if you've made it to the end of this post#you've won lunch#with billy boyd and dominic monaghan#be there or be hungry#extended edition ROTK cast commentary references anyone?#fanfiction#on writing#tools for writers#writers on tumblr#original fiction#publishing#writing industry#writing nonprofits#resources for writers#writing mfa#if anyone is wondering my main writing fandoms were#LOTR#Rent#and Smallville#with dips into Star Wars Harry Potter X-Men Daredevil and fandoms that writers weren't allowed to publish in#like Dragonriders of Pern#writing teacher#grubstreet#writing classes#read whatever the fuck you want#write whatever the fuck you want#and call yourself a writer if it's part of you#no one else gets to tell you if you're a writer or not#especially not randos on tumblr
364 notes
·
View notes
Link
In late July, sitting in my sister-in-law’s home in St. Louis, Missouri, I waited in the “lobby” area of Cloud Theatre for Zoom Parah to begin. Itself a creation born of the pandemic, Cloud Theatre is an online platform which strives to offer a seamless digital theatre experience to global audiences. Their “lobby” is a simple but smart artificial space: a live chat box, available to attendees as they login for a show, is positioned next to the image of a theatre stage, framed by red curtains. The waiting room attempts to replicate the experience of audience members mingling and chatting before a performance begins. Joining others in this virtual space, I was excited to see another Malaysian, also based in the United States, mention that they were from Petaling Jaya—my hometown. I excitedly typed back, “I’m from PJ, too!” The spark of recognition flashing across the chat box was akin to overhearing a conversation between strangers, and interjecting to share a mutual connection. Months into social distancing protocols, the Cloud Theatre lobby reminded me that there was something inherently sociable about joining hundreds of people from around the world to watch this production together—albeit, online.
“We had people who’d never seen theatre before experience it for the first time using Zoom.” Malaysian theatre director, actor and writer Jo Kukathas stressed this point repeatedly when discussing Zoom Parah, the online adaptation of the critically acclaimed play, Parah. This digital theatre performance, and the new viewing experiences it made possible, is just one of many examples of innovative work being produced by Southeast Asian directors, producers, and actors since the pandemic. In the early days and weeks of Covid-19, theatre makers from this region—like so many others around the world—watched in despair as stages went dark and theatres shut their doors. Despite the dire conditions, they rallied—with little to no funding and even less governmental support—to reimagine theatre in the time of COVID. They created innovative forms of theatre designed for Zoom, streamed recordings of award-winning plays that had not previously been available online, and held numerous talk-back sessions to reflect on the creative process. The digital turn in Southeast Asian theatre has provided unprecedented access to experimental and critically acclaimed work from the region. These productions have connected audiences and diasporic communities around the world, focusing often on urgent questions of race, identity, and belonging. These developments offer models not only for the professional theatre world, but also for teachers and students of the performing arts who are navigating online education.
In their articles for Offstage and The Business Times, Akanksha Raja and Helmi Yusof discuss half a dozen new Singaporean and Southeast Asian theatre projects which have embraced the digital turn. These include: Murder at Mandai Camp and The Future Stage from Sight Lines Entertainment; Long Distance Affair from Juggerknot Theatre and PopUP Theatrics; Fat Kids Are Harder to Kidnap from How Drama; and Who’s There? from The Transit Ensemble and New Ohio Theatre. While these are just a few of the productions that have emerged since the pandemic began, they are impressive in scale, quantity, and range of forms. These performances have taken advantage of every feature offered by Zoom, YouTube, Instagram, Facebook, WhatsApp and other social media platforms. They’ve incorporated chat boxes, polls, and even collaborative detective work on the part of the audience. In addition to Zoom Parah (by Instant Café Theatre), I’ve had the opportunity to watch Who’s There?, as well as a recording of WILD RICE theatre’s celebrated play, Merdeka, written by Singaporean playwrights Alfian Sa’at and Neo Hai Bin. Of these three, Zoom Parah and Who’s There? illuminate the technological and socio-political interventions of Southeast Asian digital theatre, as well as the ways in which COVID-19 has redefined performance and spectatorship.
In addition to the virtual lobby and chat function, Zoom Parah employed live English translation in a separate text box, making the production accessible to those not fluent in Malay. Who’s There? like Zoom Parah, also made the most of the chat function, along with approximately a dozen polls which punctuated the performance. Each poll gauged audience reactions to the complex issues the play addressed and reflected the responses back to the viewers. This feature required audience members to pause, reflect on a particular scene and its context, and assess the perspectives through which they were viewing the performance. In effect, the polls created a dynamic feedback loop between the cast, crew, and viewers, offering an alternative to the in-person audience response that is so crucial to live performances. Augmenting their efforts to keep audience members plugged in, the play experimented with layering lighting, sound, and mixed media to produce different visual and sound effects within the Zoom frame.
Alongside their adaptation of online technologies, both plays are also noteworthy for their socio-political interventions. Parah, the critically acclaimed play on which Zoom Parah is based, was written in 2011 by award-winning Singaporean writer and resident playwright at WILD RICE theatre, Alfian Sa’at. It follows a group of 11th-grade students of different races (Malay, Chinese, and Indian) as they navigate reading the controversial Malaysian novel, Interlok, which sparked national debates surrounding racial stereotypes. The classmates, who share a deep friendship, challenge each other’s views of the novel by reflecting on their lived experiences. Zoom Parah retained the original plot and script, bringing the play’s pressing questions into a national landscape marked by pandemic lockdowns and political upheaval, and shadowed by new iterations of Malay supremacy. At a volatile time for the country, Zoom Parah questions what it means to be Malaysian, making visible the forms of belonging and exclusion that continue to shape national identities.
Who’s There? was also invested in broaching difficult discussions of contemporary issues. A transnational collaboration between artists from the US, Singapore, and Malaysia, the play was part of the New Ohio Theatre’s summer festival, which moved online due to the pandemic. Who’s There? aimed to tackle some of the most contentious racial topics of 2020: the killing of George Floyd and the ensuing Black Lives Matter protests; the use of black and brownface in Malaysia; and the relationship between DNA testing and cultural identity. The production was structured as a series of linked vignettes, featuring different sets of characters wrestling with interconnected racial and national contexts.
Both Parah and Who’s There speak to the arts’ inherent capacity to not merely experiment with form and aesthetics in the digital realm, but to also engage the complexity of history, politics, and contemporary culture. As Kukathas recently reflected, “The act of making theatre to me is always about trying to connect to the society that I live in; that could be local, that could be global . . . people want to hear stories, and to connect through stories.” By taking on the dual challenge of experimenting with digital technologies and responding to what’s happening in the public square, Southeast Asian digital theatre joins work such as the Public Theatre’s all-Black production of Much Ado About Nothing to offer new frames through which to view race, rights, and identity—even and especially in the midst of a global pandemic.
Kukathas’ comments on the inherently social motivations of her work were shared during a Facebook Live discussion entitled “Who’s Afraid of Digital Theater?”. The conversation aired on 20 August, hosted by WILD RICE theatre and moderated by Alfian. Focusing on “the possibilities and pitfalls of digital theatre,” the discussion featured reflections from artists who have helped launch this new era of Southeast Asian theatre. The panelists included Kukathas, Kwin Bhichitkul from Thailand (director, In Own Space) and Sim Yan Ying “YY” from Singapore (co-director and actor, Who’s There?). Approximately 100 people tuned in for the discussion, and the recording has accrued over 8,000 views on Facebook. During the conversation, the theatre makers shared rationales for their creative choices, as well as strategies for navigating the challenges of developing online performances. Their insights offer potential pathways for other theatre professionals, as well as teachers and students of theatre who are continuing to work online.
Bhichitkul, Kukathas, and Sim’s approaches to digital theatre diverged significantly from one another. They each played with different technologies and were guided by distinct motivations. Bhichitkul was focussed on the isolation created by the pandemic and, responding to this fragmentation, he asked 15 artists to create short, 2-minute video performances. Bhichitkul explained that this project also had an improvisational twist: “Every artist need[ed] to be inspired by the message of the [artist’s] video before them. They couldn’t think beforehand, they needed to wait until the day [they received the video]” before creating their own. The creative process was thus limited to just a 24-hour window for each artist. The entire project spanned 15 days, with Bhichitkul stitching the videos together on the final day.
On the other hand, Kukathas felt strongly that her foray into digital theatre required a deep connection to a live, staged performance. Therefore, she chose Parah—a play she directed for six re-stagings between 2011-2013—as the production she would adapt to Zoom. Kukathas explained, “If I was going to start experimenting with doing digital theatre . . . it needed to be a play that I was very familiar with, and a play that the actors were very familiar with. I wanted the actors to really inhabit their bodies, so that the energy of the actor’s body was very present even through the screen . . . I [needed] actors who have a kinetic memory in their body of that performance being 360 degrees.” Unlike Kukathas, Sim was “interested in doing something as far away from live theatre as possible” and did not want to be “beholden” to its conventions. She views digital theatre as “a new art form in itself; not an extension of live theatre, not a replacement, but something that straddles the line between theatre and film.”
The directors’ reflections on their respective productions illustrate the range of forms, techniques, and points of view with which theatre makers are experimenting. They also suggest that digital theatre has the potential to accommodate a surprisingly wide variety of directorial visions and investments.
And while their approaches might vary, these theatre makers all agreed about the benefits and opportunities of digital theatre. They returned repeatedly to the advantages of greater accessibility and transnational reach without the costs of international travel. Kukathas and Sim cited accessibility and the pay-what-you-can model as being particular priorities for them. Kukathas was especially proud of the fact that “we could reach the play to people who would ordinarily not be able to go to the theatre. And we made our tickets really cheap: our cheapest ticket was RM5 (US $1). We did that deliberately so that people who don’t usually even go to the theatre would get a chance to watch it. So we had people who’d never seen theatre before experience it for the first time using Zoom.”
The directors also view the digital turn as one which opens up new avenues for creativity and collaboration. Sim recalls, “We still spent 3-4 hours per rehearsal, 4 times a week, on this space together. We developed a closeness and a relationship with each other even though we never met live. And we still shared a lot of cross-cultural exchanges.” Kukathas views the shift to online technologies and platforms as one which prompts us to ask big questions about theatre and to re-evaluate the rules of spectatorship. Filming theatre at home, sharing it online, and watching it at home creates, according to Kukathas, a merging of “strangeness and ordinariness” that shrinks the spaces between public and private. The ensuing disorientation poses, for Kukathas, a number of pivotal questions: “What is theatre? What are the impulses that drive us to make a piece of theatre? What is it to watch theatre? How free are you now when you’re watching? . . . I think this could be a good chance to question why we have certain rules [in theatre] and whether those rules are really necessary.”
While we are used to hearing laments about the digital as the enemy of “the real,” the digital turn in Southeast Asian theatre suggests an opening and an expansion; a chance to reimagine the performing arts, develop new forms of collaboration, and reach wider and more diverse audiences. As Akanksha Raja notes in Offstage, “performance-makers have been recognising that the way they choose to embrace technology can not only enhance but possibly birth new forms of theatre.”
However, it’s crucial not to romanticise the very real challenges of alternative forms and platforms. Alfian noted that, “In a traditional theatre, you are a captive audience . . . you’re not allowed to be distracted, not allowed to look at your phone. On the one hand, we’re seeing there’s the freedom to not be so disciplined when watching a show. But at the same time, is the freedom necessarily a good thing? You’re actually quite distracted and you’re not giving your 100 percent [attention] to the work.”
Sim and Kukathas agreed to an extent, but pointed out alternative advantages: group chats and texts in a “watch party” format build a sense of connection among audience members and provide real-time audience reactions and feedback. Kukathas recalled how attendees used the chat box (along with text messages and DMs) to alert Kukathas and her producer to a sound issue that they were not aware of. Kukathas laughingly reflected, “I really appreciated how invested people were. They were like, ‘Fix this right now!’ and then we had to rush to try to fix it. It made me feel how alive we were—the audience was shouting at us!”
The digital turn in Southeast Asian theatre is bringing a wide range of productions to global audiences. The literary and cultural traditions of this region are incredibly rich and have always been shaped by complex histories of migration, exchange, and adaptation. Digital theatre is borne of new practices of migration, exchange, and adaptation—and of necessity. While there have been controversial debates in countries like Singapore and Malaysia about the value of the arts during this pandemic, the creatives featured here are turning to the digital in order to keep art alive and to keep their companies and projects afloat. They are extending an invitation to audiences and to collaborators to embrace play and experimentation, to find opportunities in the challenges of online theatre, and to recognise that art is essential, now more than ever.
56 notes
·
View notes
Text
November 20, 2021
Forms of Government and Their Pitfalls
There are roughly three types of governments: one-party states, two-party states, and multi-party states. Each has its strengths and weaknesses, and strangely none is inherently more democratic than the others. You could have multi-party states which are less democratic than one-party states. This seems counterintuitive, but here we go…
First, a bevy of terms. Democratic states are states which are highly responsive to the populace. They tend to seek out and focus on deliberation. Authoritarian states do not listen to the people, and most of their efforts go toward thwarting the people or convincing them of a certain course of action. Both modes have their uses. A good state requires a large input of data to accurately address concerns and problems, while also requiring a great deal of persuasion and coercion once deliberation is over, to quell misinformation and those who seek to destroy the current structure. Reform is ideal, but may be impossible.
One-party states have many different forms, mostly varying in the size of the opposition. As long as the opposition is a permanent minority, the one-party state can operate as normal, so the opposition can be as large as about 25% of the legislature...
One-party states permit democracy through the party. While the range of expression might be limited, the constraint usually makes for creative solutions of the kind that might otherwise arise in the bigger search space of multi-party states. As long as the people behave inside the party, relative democracy can flourish. In an ideal process, such states can respond to people’s concerns.
Of course, this can be corrupted. In fact, one-party states have the highest chance of corruption. The constraint limits expression, which means deceit can be easily hidden along with actual concerns. Can the party also ignore the public, since it would never lose power? No: even if a party never loses power, individuals can lose power within the party, and as long as the party is sufficiently democratic, that exerts the same democratic pressure as in multi-party states.
And there is always a chance of revolution. China usually drifts toward a more autocratic state, simply because there are so many people that it takes considerable power to compete with whatever insurgent power the public might muster. But the current state of technology might open China up as power becomes unbalanced… or does it? Technology has the annoying feature of not actually affecting this balance.
Thermostatic pressure is the rule in two-party states, since both parties need to be competent in leading the country at all times. There is immense pressure not to let one party rule for too long, or the system will just devolve into a one-party state.
The problem is when one or both parties become more extreme and the swing becomes more and more pronounced. Thermostatic pressure grows bigger and bigger until the electoral process cannot be maintained and the use of force is required. Hence, two-party states, without outside intervention, inevitably lead to civil war.
The United States has gone through all variations of pressure relief, including actual civil war, the populace somehow convincing both parties, and outside intervention. The current pressure unfortunately looks to be heading for the first option, since there are effectively two opposing populaces. The United Kingdom has fared better, mostly because thermostatic pressure never existed until the 20th century, since the Monarchy induced a one-party-like stability, and since then foreign intervention has quelled civil war… until now. The United Kingdom is poised for civil war under conditions not seen since the 17th century… I talked about this in a more emotive register right after the Brexit referendum.
Multi-party states might seem like the structure best suited for democracy, since all the different public factions get directly represented in the legislature… but because the lever of power is pulled through the coalition, that is never the case. In fact, multi-party states can act like two- or one-party states in disguise. In Norway, the restrictive structure has effectively forced a two-party state of coalitions (one wonders if a similarly restrictive structure leads to the same in the United States), in which the members might change slightly, but the general consensus does not change. There are also many other examples in which the people in power don't change, merely the makeup of the coalition, meaning only the busybodies are different but the main driver is the same.
This means the same drive toward authoritarianism exists for multi-party democracies. If a nascent party seems politically toxic, more and more unstable coalitions will be formed until the nascent party overtakes the government, and the government collapses into a one-party state with a heavy disdain for democracy. That is the story of Weimar Germany. Of course, the government can swallow the pill, but it has to be strong enough not to die.
2 notes
·
View notes
Text
Some stuff, with @vortex-atom, about the (amazing) book The Great Age of Modern History, which I am reading.
So far the description of the book is not wrong, but the first chapter is very misleading. It starts out talking about how, starting around 1500, Europeans invented everything about human civilization they needed, or could be convinced to invent. This is not accurate, but can be treated as true because it's serving as a "hook." The book is about how European civilization went from being highly competitive, violently unequal and rule-following, to a high-technology, low-competition system, which ended abruptly at the beginning of the 19th century. The first chapter is basically a setup for this eventual chapter, which explains the title of the book as a warning about potential pitfalls, and includes several examples (the Reformation, the scientific revolution, and the discovery of new continents) to illustrate the change.
It's a bit like the "everything happened overnight" hypothesis. Everything really does take a bit of time; in many cases many thousands of years, depending on what you count. If you want to hear a nice, uncomplicated saga of how Christianity got started and spread across the globe, all you have to do is listen to lots of historians, read lots of primary sources, and ask a lot of hard questions, and you'll probably do fine.
But the book is still too focused on how Christianity happened and was spread. It'd be nice to read about things that just happened, and were true at the time. To that end, it's been fun so far to read stuff like "How the Vikings Were Made." This is a boring book, but it's the first I'd suggest trying, because a) it would give a feel for what it'd be like to write about Europe a thousand years ago without Christianity, and b) the Vikings are fascinating by themselves. (The book is about what is sometimes called the "Norse age," a period of roughly 130 years from around 1000 to 1130, so it has to cover the same period as Christianity and the Vikings are the central examples of that.)
Mostly irrelevant, but one of the things the book has me thinking about is that it's a reminder of just how short a time it'd really take to change how life was lived if, say, the entire world was suddenly replaced with another industrial civilization full of billions of people, and everything in it were just basically the same. The Reformation happened in a little more than a single generation. The Scientific Revolution really did happen in a matter of only a few decades. The entire "Industrial Revolution" really happened in around thirty years. That would really shake up the way people lived, and I don't think I ever want to be like "if this happened, things would be back to normal in a few decades."
4 notes
·
View notes
Text
A Conspiracy Theorist’s Take On Coronavirus Conspiracy Theories
So, I am a lifelong Conspiracy Theorist. However, I am part of the often-forgotten majority of theorists who don’t base their theories purely on internet hearsay and bigotry. Let’s talk COVID-19, without that nonsense clouding the truth.
This post is U.S. centric.
I. Origins
I think that the virology lab in Wuhan had something to do with it, just because of how highly suspicious the location of the initial outbreak is. There are a few different possibilities:
a) it was a bio weapon in the later stages of development and was accidentally released;
b) it was a bio weapon in the early stages of development and its release was accidental or meant to be a test but got wildly out of hand;
c) it wasn’t a bio weapon but scientific negligence is what led to the initial outbreak;
d) it’s part of a bigger plan. We’ll get to that theory later
COVID-19 is highly contagious and has produced several potent mutations, but the death rate is low. This supports either the non-bio-weapon or early-stage bio weapon theory: if this were a late-stage bio weapon or an intentional release of one, the death rate would be higher. I also don’t think that China would have intentionally released the virus in Wuhan, as this has garnered suspicion and negative attention. It would have made more sense to release it elsewhere, which is why I think it was either accidental or premature.
II. Masks
Three coexisting facts:
1. If the virus came from China, then the American government + corporate powers probably didn’t have a (direct) hand in it;
2. This doesn’t mean that the government + corporate powers aren’t taking advantage of the situation (they definitely are);
3. There is a historical precedent for wearing masks in public and avoiding gatherings during pandemics and epidemics (e.g., the Spanish Flu).
Masks are a conventional, reasonable strategy for avoiding infection. Whether there are organizations using mask mandates for their own purposes is an entirely separate matter, and should be treated as such.
Also, the theory that masks are step one in trying to force Islamic dress codes on us is an example of blatant misinformation used to distract from actual conspiracies + hate mongering used to divide us (the masses). I could give a lot of rebuttals to this bout of Islamophobic nonsense, but I’ll say this: If the malevolent powers that be in this country were, for some reason, interested in forcing Islamic dress codes upon us, our faces would not be their first concern. It would be our midriffs, arms, and legs, then head coverings. Face coverings are far from universal among Islamic communities.
III. Vaccines
This is where stuff gets more complicated. There are a lot of concerns and theories over the vaccine, some of which are more valid than others. There is, of course, the pre- pandemic anti- vaccination movement, which warrants its own discussion. As someone who acknowledges the science behind vaccines in general as sound, I approach this debate with the question “is there anything risky or nefarious about any of the vaccines?”
The most prevalent concerns tend to be:
A) the vaccines were rushed in production and testing and may be unsafe;
B) the vaccines contain a microchip to track us/influence our behavior;
C) the vaccines are designed to reduce or eliminate fertility (especially in women).
Unfortunately, A has no easy answer. The CDC has recently released data that suggests that the vaccine has a higher casualty rate than any other vaccine in the past 20 years, but this still only accounts for 2% of COVID-related deaths. To simplify: the COVID-19 vaccines have a high death rate for a vaccine, but a much lower death rate than actual COVID-19.
Of course, it doesn’t help that there’s no real way to verify these numbers, and many news sources either a) refuse to look in to it or b) staunchly believe that COVID was created for the sole purpose of making their lord and savior, Trump, look bad.
Regardless, waiting to get vaccinated is an understandable course of action even if, statistically, getting vaccinated reduces total risk.
Let’s track the history of B: the head of the Russian Communist party and a former Donald Trump advisor support the theory that the vaccines contain microchips to track the movement of the vaccinated, track who has been vaccinated, and possibly influence behavior.
Also feeding the theory is the fact that Bill Gates wanted to give the vaccinated “digital certificates” to identify themselves, and was at one point playing with the idea of injecting people with a “special ink” to make an “invisible tattoo” that would be used to identify the vaccinated without the use of records.
I trust none of these individuals, as each have their own agendas. What makes me skeptical of this theory is that the U.S. government + the corporate powers already track all of us through cell phones and security cameras (fun fact: the U.S. has more surveillance cameras per person than China).
When I took the vaccine out of necessity, I took meticulous notes before and afterwards, documenting my thoughts, opinions, and patterns of behavior. I have not noted any changes.
Finally, C. This one is difficult to prove or disprove, because
1) most people who get vaccinated aren’t going to immediately start actively trying to have babies;
2) fertility rates have been steadily declining for decades;
3) although changes in menstruation have been reported after vaccination, the vast majority experienced this as a short-term side effect only (i.e., their cycles went back to normal). I had no change in my cycle, nor did my mother.
I will say that I’m not interested in ever giving birth, which is one reason I’m not worried about this- I’m much more inclined to worry about current humans that theoretical future humans.
Now, one thing to note is that reducing the population so drastically is a counterintuitive move for the elites. Less workers = each individual worker is, statistically, less disposable. Less consumers = less consumption.
The only way this theory could be true and make sense is if it’s just the lead-up to something else, like lab-based, expensive reproduction to mostly avoid the above, kill off the “undesirables,” and put people in debt whenever they want kids, functionally putting them further under corporate control.
If this was the plan, then the Global Powers That Be (assuming that this was the result of collusion and not China’s attempt at a bio weapon) definitely weren’t ready yet: that kind of technology is available, but not nearly efficient or cost-effective enough for them to avoid the worker-shortage pitfall yet. Just look at how corporate America has nearly buckled under its own weight at the current shortage of minimum-wage workers!
TL;DR: the virus came from the Wuhan lab and was either a prematurely released bio weapon, the result of epidemiological negligence, or the lead-up to some grand conspiracy to control human reproduction, which would be a stupid move on the part of the government-corporate-military complex, and would have required global cooperation and coordination.
Sources/references:
#covid#covid 19#covid vaccine#vaccines#masks#fertility#surveilance#bill gates#conspiracy#conspiracy theories#long post#us#politics
0 notes
Text
Algorithmic And AI Assessment Tools — A New Frontier In Disability Discrimination
Algorithms rely on large data sets that are used to model the normative, standardized behavior of majority populations.
The use of software algorithms to assist in organizational decision-making and their potential negative impact on minority populations will be an increasingly important area for humankind to resolve as we embrace our AI future.
These critical issues were brought into even sharper focus earlier this month with the publication of a new report by the Center For Democracy & Technology entitled “Algorithm Hiring Tools: Innovative Recruitment or Expedited Disability Discrimination?”
Looking beyond just the employment sphere, a dedicated panel discussion at last week’s Sight Tech Global conference explored other important areas for people with disabilities impacted by algorithmic decision-making, such as the administration of welfare benefits, education and the criminal justice system.
The key messages emerging from both the panel discussion and the report convey a unanimously stark warning.
Disability rights risk being eroded as they become entangled within wider society’s drive to achieve greater efficiency through the automation of processes that once required careful human deliberation.
This is dangerous for disabled people due to an inescapable tension between the way algorithmic tools work and the lived experience of many people with disabilities.
By their very nature, algorithms rely on large data sets that are used to model the normative, standardized behavior of majority populations.
The lived experience of disabled people naturally sits on the margins of “Big data.” It also remains intrinsically difficult to reflect disabled people’s experiences through population-level modeling due to the individualized nature of medical conditions and prevailing socio-economic factors.
Jutta Treviranus is Director of the Inclusive Design Research Centre and contributed to a panel discussion at Sight Tech Global entitled “AI, Fairness and Bias: What technologists and advocates need to do to ensure that AI helps instead of harms people with disabilities.”
“Artificial intelligence amplifies, automates and accelerates whatever has happened before,” said Treviranus at the virtual conference.
“It’s using data from the past to optimize what was optimal in the past. The terrible flaw with artificial intelligence is that it does not deal with diversity or the complexity of the unexpected very well,” she continued.
“Disability is a perfect challenge to artificial intelligence because, if you’re living with a disability, your entire life is much more complex, much more entangled and your experiences are always diverse.”
Algorithm-driven hiring tools in recruitment
The use of algorithm-based assessment tools in recruitment is a particularly thorny pain point for the disability community. Estimates suggest the employment rate for people with disabilities in the U.S. stands at around 37%, compared to 79% for the general population.
Algorithm-hiring tools may involve several different exercises and components. These may include candidates recording videos for the assessment of facial and vocal cues, resume checking software to identify red flags such as long gaps between periods of employment and gamified tests to evaluate reaction speed and learning styles.
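To make concrete how blunt one of these components can be, here is a hypothetical sketch of the kind of resume "red flag" rule a screening tool might apply: flag any employment gap over a fixed threshold, with no awareness of why the gap exists (illness, disability, caregiving, and so on). The function, threshold, and dates are illustrative assumptions, not any specific vendor's logic.

```python
# Hypothetical sketch of a crude resume "red flag" rule: flag any gap
# between jobs longer than a fixed threshold, with no context about why.
from datetime import date

def employment_gaps(jobs: list[tuple[date, date]], threshold_days: int = 180) -> list[int]:
    """Return the lengths (in days) of gaps between consecutive jobs that exceed the threshold."""
    jobs = sorted(jobs)  # sort by start date
    gaps = []
    for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
        gap = (next_start - prev_end).days
        if gap > threshold_days:
            gaps.append(gap)
    return gaps

history = [
    (date(2015, 1, 5), date(2017, 6, 30)),
    (date(2018, 9, 1), date(2020, 12, 31)),  # roughly 14-month gap before this job
]
print(employment_gaps(history))  # [428] -- flagged, reason unknown to the tool
```

A rule like this treats every gap identically, which is exactly how disability-related history ends up flagged as a risk signal.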
Algorithm-driven software is also marketed as being able to identify less tangible but potentially desirable characteristics in candidates such as optimism, enthusiasm, personal stability, sociability and assertiveness.
Of course, straight-out platform inaccessibility is the immediate concern that springs to mind when considering interactions with disabled candidates.
It is entirely valid to wonder how a candidate with a vision impairment might access a gamified test involving graphics and images, how a candidate with motor disabilities might move a mouse to answer multiple-choice questions, or how an individual on the autism spectrum might react to an exercise in reading facial expressions from static photos.
Indeed, the Americans with Disabilities Act specifically prohibits the screening out of candidates with disabilities through inaccessible hiring processes or ones that do not measure attributes directly related to the job in question.
Employers may themselves think they are helping disabled candidates by removing traditional human bias and outsourcing the assessment to an apparently “neutral” AI.
This, however, is to set aside the fact that the tools have most likely been designed by able-bodied, white males in the first place.
Furthermore, approval criteria are often modeled off the pre-determined positive traits of an organization’s currently successful employees.
If the workforce lacks diversity, this is simply reflected back into the algorithm-based testing tool.
By developing an over-reliance on these tools without understanding the pitfalls, employers run the very real risk of sleepwalking into the promotion of discriminatory practices at an industrial scale.
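As a hedged illustration of that feedback loop, here is a minimal synthetic sketch: a screening model fit only on an organization's past hiring outcomes learns to prefer candidates who resemble the historical workforce, so a candidate outside that profile tends to be scored down regardless of ability. The feature names, numbers, and choice of scikit-learn are assumptions for demonstration only.

```python
# Synthetic sketch: a model trained on past hiring decisions reproduces
# the profile of the existing workforce. Features and numbers are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical "hired" profile: long continuous employment and fast timed-test
# scores -- proxies that can correlate with disability status.
past_hires = rng.normal(loc=[10.0, 0.9], scale=[1.0, 0.05], size=(200, 2))
past_rejects = rng.normal(loc=[4.0, 0.5], scale=[2.0, 0.2], size=(200, 2))

X = np.vstack([past_hires, past_rejects])
y = np.array([1] * 200 + [0] * 200)  # 1 = hired in the past

model = LogisticRegression().fit(X, y)

# A capable candidate with an employment gap and a slower timed-test pace,
# i.e. outside the historical "norm" the model has learned.
candidate = np.array([[6.0, 0.6]])
print(model.predict(candidate))        # likely [0]: screened out
print(model.predict_proba(candidate))  # low probability of "hire"
```

The point is not the specific model; any statistical tool optimized to reproduce "what was optimal in the past," as Treviranus puts it, will inherit whatever exclusion is already present in the data it was trained on.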
Addressing this point specifically, the report’s authors note, “In the end, the individualized analysis to which candidates are legally entitled under the ADA may be fundamentally in tension with the mass-scale approach to hiring embodied in many algorithm-based tools.”
“Employers must think seriously about not only the legal risks they may face from deploying such a tool, but the ethical, moral, and reputational risks that their use of poorly-conceived hiring tools will compound exclusion in the workforce and in broader society.”
During the Sight Tech Global panel discussion, Lydia X. Z. Brown, a Policy Counsel for the Center For Democracy & Technology’s Privacy and Data Project, was asked whether algorithm-driven assessment tools really do represent a truly modern form of disability discrimination.
“Algorithm discrimination highlights existing ableism, exacerbates and sharpens existing ableism and only shows different ways for ableism that already existed to manifest,” responded Brown.
She later continued, “When we talk about ableism in that way, it helps us understand that algorithmic discrimination doesn’t create something new, it builds on the ableism and other forms of oppression that already existed throughout society.”
Yet, it is the scale and pace at which automation can further seed and embed discrimination that must be of greatest concern.
Building a more inclusive AI future
The CDT report does make some recommendations around the creation of more accessible hiring practices.
The key leap for organizations is to first develop an understanding of the inherent limitations of these tools for assessing individuals with varied and complex disabilities.
Once this reality-check takes hold at a leadership level, organizations can begin to proactively initiate policies to offset the issues.
This may start with a deep-dive into what these tests are actually measuring. Are positive but vague qualities such as “optimism” and “high self-esteem,” as elicited by a snapshot test, truly essential for the position advertised?
Through understanding and appropriately discharging their legal responsibilities, employers should seek to educate and inform all candidates on the specific details of what algorithmic tests involve.
It is only by communicating these details that candidates will be able to make an informed choice around accessibility.
For candidates who proceed with the test, organizations should be energetic in their data collection on accessibility issues.
For candidates who fear an algorithm may unfairly screen them out, a suite of alternative testing models should readily be made available without any implied stigma.
Finally, it should be incumbent on software vendors to keep accessibility at the forefront of the initial design process.
This can be further bolstered by more stringent regulation in this area but the most useful measure vendors might adopt right now is to co-design alongside disabled people and take account of their feedback.
The simple truth is that AI isn’t just the future. It’s here already and its presence is reaching out exponentially into every facet of human existence.
The destination may be set but there is still time to modify the journey and, through best-practice, take the more direct shortcuts to inclusion, rather than the long road of having to learn from mistakes that risk leaving people behind.
2 notes
·
View notes
Text
An idea for a Poke-clone
So since Pokemon day just passed, I started to think about a kind of Poke-clone type of game/series of my own, since that seems to be an upcoming trend. I already thought up the base idea for my own Pokemon region before, and I’m not sure if I’ll eventually combine that idea with this one, but anyways…
The very broad idea is that it’s sort of the best of Pokemon, mixed with the best of Digimon, and I guess for fun’s sake we could say it’s sort of got a Bionicle flair to it a bit too.
I feel like there are some conceptual pitfalls in Pokemon that they’ve sort of tried to step across over time, that obviously aren’t too big a deal but I can probably fix with this idea. The big one that hurts them in the real world is that, by the Pokemon being presented like animals or pets, it makes battles feel like some kind of unethical dog fighting at first glance. Obviously they make it seem like Pokemon have personalities and minds like humans, in that they do genuinely want to battle, and that they genuinely like and want to fight with their trainers, which is fair, but it also calls other things into question, like why they’d want to sit on the floor and eat brown pellets out of bowls instead of on plates with actual food like the humans. Also, it makes you wonder why they even want humans to tell them what to do. In the wild, they can clearly fight on their own, so why do they instantly do what humans say when they’re caught? I guess the assumption is that humans are better with strategy, but even better than Pokemon like Alakazam or Metagross, who are supposed to have superior intelligence? Also, when a Pokemon is given to another trainer, like in the opening of the Deoxys movie, why does it not do anything and wait for its new trainer to tell it what to do, even when it’s being bombarded with attacks, and with its trainer clearly frozen in shock?
Also, back to the idea of catching, what makes the Pokemon want to obey the trainer? At first it almost seems like Pokeballs brainwash the Pokemon into liking whoever catches it, but what about Pokemon like Ash’s Charizard, who don't obey trainers? In the games it’s related to badges, but then why would any Pokemon obey a trainer without any badges? Do they just accept that it’s their lot in life to be caught by a human, and when that happens, you just obey them if you’re not good enough yourself? Obviously a decent amount of Pokemon just become friends with a human, and then they catch them just out of a formality. But, what about Go in the most recent season of the anime? He just catches anything he sees instantly, without much of a fight at all, and he has no badges, so how can he just instantly use anything he catches? Surely not every single Pokemon they come across just wants to bow down to him instantly. Obviously a lot of this lore stuff is just in the background, since the primary purpose is gameplay and whatnot, like Go just sort of representing the catching style of Pokemon Go, and with badges being a logical progression that keeps you from just using the strongest Pokemon traded from a friend and wiping the game clean. Still, even if you just accept it, it’d be nice to just not have to accept it, you know?
Then in terms of design, I kind of like the prospects of Digimon a bit more. Visually, though, I think Digimon are universally worse looking than Pokemon, but the fact that they seem more like friends than pets solves so many problems. First, they’re made out to be actual sentient (sapient if you want to be pedantic about it) beings, instead of animals. This makes it so much easier to understand why they’d want to fight and protect their less-than-capable humans, and why they’d be willing to fight at all. They’d just understand it’s basically a sparring match or a sport. Also, it makes the humanoid designs so much easier to think about. When you see that classic image of Mimey sitting on the floor picking at the “Pokemon food” from a dog bowl alongside the rest of the Pokemon, it just doesn’t feel right.
Also, what the hell are humans in this world? Why are Pokemon regarded as such special beings in the world? They always say “Welcome to the world of Pokemon” like they know of a different world full of non-Pokemon, and that Pokemon aren’t just animals. It almost made more sense in the first few seasons of the anime, where you’d just see some random fish swim alongside Magikarp or whatever. That made it clear that there are normal animals as well, showing that Pokemon are separate things entirely. But now they’ve retconned that, and I don’t think that was how the games worked in the first place at all. Then there’s the age-old question that all the Youtube game theorists try to answer: Are humans Pokemon? They sure seem resilient to Pokemon attacks, but don’t have any themselves, apart from like Tackle or whatever. It feels like animals were a thing way back when, but through natural selection the animals that developed supernatural power obviously became the dominant species. Over time the supernatural animals were called “Pokemon,” and humans, with their technology and taming abilities, managed to survive the onslaught of dangerous creatures by using them as protection from others. Then, I guess, way down the line humans can’t keep up and die out, creating the Mystery Dungeon series, since it’s strange how the Pokemon there seem to know what humans are despite humans never appearing in that series. That, or maybe Mewtwo just fuses people with Pokemon like he did in the Detective Pikachu movie.
Anyway, enough of me talking about stuff probably explained in the manga or whatever. Here’s the Poke-clone idea:
The creatures were there first, and at least most of them have human-level intelligence, if not higher. There are some supernatural animals around, alongside supernatural people and monsters, and there are 7 primary elements they can have: Earth, Fire, Water, Electricity, Air, Light, and Dark. Earth is basically Earth in Temtem, Fire/Water/Electricity/Air are all self-explanatory, Light is basically a more generalized Fairy type, and Dark is like Dark but also including Ghost. These creatures can be born with any of these types, and can naturally wield them, getting better at it as they grow. But these types can be combined using elemental essences, creating new types. Earth and Fire create Metal (because smelting), Light and Earth create Crystal (again, think Temtem, I just can’t get enough of it), Water and Dark create Ice (because ice does feel different enough to be separated from water elementally imo), Fire and Water create Air again (“steam” doesn’t feel special enough), and Electricity and Fire create Plasma (basically generic magic). Chances are I’ll think of more combinations, but whatever. There are different areas themed after these elements, and in these areas a specific element is boosted in power, so the “gym” equivalent will reside in these places, and “badges” will prove that you can defeat an element at its strongest level. Also, the areas that connect the main areas either double up types or use the secondary type which they combine into, so not all “gyms” would be super straightforward.
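Just to make the combination idea a bit more concrete, here’s a rough sketch of how those essence combos might look as plain data if I ever prototyped this. Everything here (the names, the “unknown combos just do nothing” rule) is a placeholder I’m making up on the spot, not a finished design:

```python
# Placeholder sketch of the primary elements and essence combinations.
PRIMARY = {"Earth", "Fire", "Water", "Electricity", "Air", "Light", "Dark"}

# Unordered pairs of primaries -> the secondary type they combine into.
COMBOS = {
    frozenset({"Earth", "Fire"}): "Metal",         # because smelting
    frozenset({"Light", "Earth"}): "Crystal",      # the Temtem-flavored one
    frozenset({"Water", "Dark"}): "Ice",
    frozenset({"Fire", "Water"}): "Air",           # "steam" folded back into Air
    frozenset({"Electricity", "Fire"}): "Plasma",  # basically generic magic
}

def apply_essence(current_type: str, essence: str) -> str:
    """Return the type a creature ends up with after an essence is used on it.

    If the pairing isn't a known combo, the essence just doesn't take --
    one possible rule among many (it could also fail loudly, stack, etc.).
    """
    return COMBOS.get(frozenset({current_type, essence}), current_type)

print(apply_essence("Earth", "Fire"))   # Metal
print(apply_essence("Water", "Dark"))   # Ice
print(apply_essence("Earth", "Water"))  # Earth (no combo defined yet)
```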
Here’s where humans come in, though. They’re not just “humans.” They’re actually a species of these creatures called “Humans” (capital H) and are mythical beings which did not originally exist in the world. Originally, it was just the other creatures. At that point, Humans were only a myth spread around as myths do, and they were said to be creatures with the ability to combine every primary type and use them simultaneously, making them the most powerful species of creature of them all. However, when they did magically appear in the world, they seemed awfully weak. In fact, they couldn’t wield any element naturally, but could by using elemental essence, which is just normal for creatures. Some thought they just needed to be trained and grow like the rest of them, but others just saw the myths as exaggerated. It was especially troubling to see Humans grow and die of old age without ever being able to use that mythical power. Regardless, many Humans were highly respected, and many teams of these creatures would look to them for guidance during battle, even though there are many teams that don’t even have a Human on them. Humans, of course, are expected to battle alongside their teammates, even if they’re not quite as capable, because that’s how their society is expected to work.
Over time, everyone sort of let go of the idea that Humans are somehow superior and they just became equals, although the trend of Humans advising a team stuck for the most part (partially because they can’t do much else, and partially because the other creatures wanted to be inclusive). Even with their normal social standing, though, some creatures scoffed at them, claiming they’re not even worth having on a team at all. Others tried their best to draw out the mythical Human power, sometimes by capturing and experimenting on them in less-than-ethical ways. Some of them claim it’s helping the Humans reach their full potential, but others unabashedly say that they want to harness the Human power for their own gain.
So yeah, that’s basically the lore of the idea. I’ll probably think of a specific Pokemon-Digimon-Temtem-esque name equivalent for them eventually, but for now let’s just call everything a “creature.”
The overall design prospects of the creatures are basically at the same level as Pokemon, where some of them are clearly inspired by animals but others are just general monsters/humanoids. I’m not entirely sure if I want them to all be intelligent or if some should still be animal-like in behavior, but the latter definitely makes more sense world-wise. The areas the creatures live in are built up using the elements they wield, obviously. I could imagine the general usage of the elements being more like Avatar in a way, but obviously with more than the base four elements.
As for the elements, any individual creature would start out with only one of the primary elements, and I guess if you beat a “gym” you’d get the essence for that gym’s element, essentially unlocking new types for your team. You can use each essence infinitely, and outside of battle the form of the creature you use it on would permanently change if it creates a secondary type (until you use another one). However, in a battle, you can switch them on the fly, and they will revert back to however they were before the battle. I’m also thinking that essences could only be used on a creature if they create a secondary type, and I’d just add more combinations in so it’s less limited and so not every creature has the exact same potential (if they all did, recruiting different ones would feel kind of pointless, although I can see it being useful for just choosing your favorite creatures to fight with, so no loss either way). Maybe the effects could just be timed in a battle. Also, secondary-typed creatures would be a different form entirely. So, if an Earth creature was given Fire essence, they’d go from looking like they’re made of stone to being made of steel, and so on. Think of it sort of like character customization. For the Human you’d inevitably play as, I guess you could just change their hair/eye color depending on the essence, and maybe add some special particle effects or light textures on the skin.
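And here’s an equally rough sketch of the “permanent outside of battle, temporary inside of battle” part of the essence idea. Again, every name here is made up for illustration, and the little combo table is just a stub of the one from the sketch above:

```python
from dataclasses import dataclass
from typing import Optional

# Stub of the combo table from the earlier sketch so this runs on its own.
COMBOS = {frozenset({"Earth", "Fire"}): "Metal", frozenset({"Water", "Dark"}): "Ice"}

def apply_essence(current_type: str, essence: str) -> str:
    return COMBOS.get(frozenset({current_type, essence}), current_type)

@dataclass
class Creature:
    name: str
    base_type: str                     # the form it walks around in
    battle_type: Optional[str] = None  # temporary override during a battle

    @property
    def current_type(self) -> str:
        return self.battle_type or self.base_type

    def use_essence(self, essence: str, in_battle: bool) -> None:
        new_type = apply_essence(self.current_type, essence)
        if in_battle:
            self.battle_type = new_type  # swapped on the fly, reverts later
        else:
            self.base_type = new_type    # "permanent" until another essence is used
            self.battle_type = None

    def end_battle(self) -> None:
        self.battle_type = None          # back to whatever form it had before the battle

# e.g. an Earth creature given Fire essence outside of battle stays Metal afterwards
golem = Creature("Quarrie", "Earth")
golem.use_essence("Fire", in_battle=False)
print(golem.current_type)  # Metal
```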
Thematically, this is more like a tag-team sport than a battle. In a 1v1 or 2v2 scenario, you’d tag out with your teammates, since the term “tag” makes for a good reminder that everyone involved is working at pretty much the same level as you. You could also name your team, making it even more sport-like. Also, instead of “capturing” teammates, you’d just recruit them, logically by proving yourself in a battle. Maybe you’d have to fight them with just yourself as the Human, so it’s much better proof that you’re a worthy leader. It also opens the possibility to just talk with NPCs and recruit them that way. Maybe you could even recruit different Humans with different body types, and therefore different stats. I guess the trouble would be how you’d keep them all by your side at all times. Maybe there could be some Telefang-esque communication device you’d use to call in the specific members of your team you’d want in a given battle.
Gameplay-wise, it would definitely be cool if battles were much more live, like Kindred Fates, where you’re controlling the active creature and using their moves in real time. It would also make for fun multiplayer battles, where you could even have a full team go against another all at once, in a sort of battle royale. Maybe even have a true BR mode. Even though visually I’d love for it to be like Pokken Tournament with the circular battlefield and movement (no switching, just normal movement all the time), I definitely think having super limited and easily understandable movesets is better when multiple teammates are involved.
The main story of the game is sort of set out by the lore, too. Naturally it kind of has a “chosen one” protagonist who’d inevitably bring out the mythical Human power over the course of the story, with the people trying to capture you for that power being the evil team analog. Also, for those Pokemon fans who are anal about having an asshole rival, the rival could easily be a team made up only of creatures that doubt Human abilities. The “gyms” being for each type is pretty standard, and having combo “gyms” definitely makes things more challenging. Maybe you could only enter the “league” at the end once you get all the essences, and the secondary-type “gyms” are just there for a challenge.
I’m not sure if I want the overall era of the world to be more modern, but there would definitely be certain areas that are more ancient-looking. That’s basically why I thought the idea was a little Bionicle-like, because they have super ancient-looking areas that are themed but also have a weirdly cool degree of technology in them. It’s a seriously cool aesthetic that I want more of, but I guess certain towns and cities could keep a modern structure (roads, buildings, shops, etc) but with drastically different building designs based on the relevant element. Surely with such crazy elemental powers they wouldn’t need crazy technological transportation, but maybe that could be a thing in the Electric elemental cities. Surely some Humans would want something to be proud of.
Obviously the biggest selling point for Pokemon (at least for me) is the monster designs, so I’ll probably put some stuff together in the future. Right now I can see there being some sort of tall Metal knight-like lady character who carries her Human around like a baby. That is, it’d be part of her individual character, and not a trait of the whole species. Also, I drew a cute fur seal pup recently that could easily be worked into one of these creatures. Designing monsters is too much fun as is, so having a good reason to do it is just perfect.
Of course, as is common with ideas that were literally thought up yesterday, this isn’t going to be a thing unless some millionaire game designer contacts me right after I post this, so yeah, I’m just spitballing right now. Spitballing is fun, though.
#pokemon#clone#pokeclone#idea#thoughts#rambling#monster#poole#etc#ideas#digimon#temtem#kindred fates
1 note
·
View note