Houses
Walking around my neighborhood, it strikes me that houses are a kind of magical object. On the surface, most of the ones I encounter are remarkably similar. Breaking things down into a few basic elements, there are walls arranged so as to form boxes (or occasionally cylinders), a roof (typically made of a different material, and often sloped), openings (mostly square) in the walls, filled with glass, and one or two doors for entering and exiting. If you start with these, or a similar set of basic components, and view each house with them in mind, an impressive degree of sameness begins to appear.
And yet, inside it is an entirely different story. Most houses have the remarkable quality of seeming much larger on the inside than they do from the outside, at least at a conceptual level. Inside each house is a world unto itself. Not only is there much more individual variation (though here again a certain common grammar applies), there can be a surprisingly expansive sense of possibility. One room might be confining, but a set of rooms provides the opportunity for flow, contrast, rearrangement.
Doors clearly carry tremendous power, in allowing us to transition between worlds, but windows provide a more psychological transit between the inside and the outside. From inside, a view can show all that is happening beyond the walls of the house, reminding those who dwell inside that time continues on, no matter what personal pursuit they are engaged in.
From the outside, windows offer a pedestrian a glimpse into the lives of others. So often, the interiors of houses look like miniature worlds from the outside, with people engaged in familiar yet somehow cartoonish micro pursuits. Sitting at a laptop, chopping vegetables, exercising, drinking wine, watching TV. We see people engaged in all manner of normal pursuits, and yet each one, seen from outside the house, seems small, almost at the level of comedy or tragedy.
It's odd, on some level, that the power of a home can be contained in such a simple envelope. Some internal elements, such as a fireplace, almost give the impression of having been designed specifically to enhance a certain way of being. More structural elements don't so obviously seem tailored to serve that purpose, and yet perhaps there is a power in how we tend to arrange them which transcends their basic functional nature.
Ongoing debates about architecture
I have been reflecting for a while on a series of blog posts in a certain part of the blogosphere about the state of modern architecture, and why it seems like older buildings and neighborhoods tend to be more beautiful. First there was a Scott Alexander piece on the decline of ornamentation (including in domains like fashion and furniture), along with a follow-up summary of the comments. Then there was a Bloomberg piece from Tyler Cowen about why there are so few newer neighborhoods that can compare to the beauty of historic European capitals, or even older American neighborhoods. Scott Sumner discusses both of these pieces, and helpfully makes some distinctions, specifically between residential, commercial, and non-profit architecture, and between the elite, the educated, and the uninterested public.
It turns out this is a topic on which lots of people have opinions (as revealed by answers to related questions on Quora, for example), though not a lot of definitive answers, probably in part due to the number of interrelated factors, and a lack of clarity about precisely what question is being asked. Interestingly, Cowen raised this same question almost two decades ago, in an earlier series of blog posts from 2004, making nearly identical points and raising similar theories, including cost disease, a possible over-estimation of (typical) historic beauty, changing tastes, the effect of cars, and the indirect influence of regulation and taxes. Responding to these at the time, Matt Yglesias suggested that the secret of a place like Paris is in fact due to heavy-handed regulations (combined with wise aesthetic choices), and on and on.
Most recently, a piece by Samuel Hughes in Works in Progress argued that survivorship bias (i.e., that all the ugly ones were torn down) is not the right explanation for why old buildings tend to be beautiful, and that beauty was in fact typical of older buildings. Unfortunately, the evidence here seems a bit thin. Some of it depends on looking at old photographs and drawings to get a sense of what was typical, but these don't present enough detail to evaluate properly. There are also some potential case studies where we can examine which specific buildings have survived in places like Paris, but there is a concern that such places might be special in some way, and more generally we would need to be able to actually look at these buildings in context.
Regardless, it is certainly true that many newer buildings are ugly in new and different ways from older ones. This is of course fully to be expected. Modern technology has brought a greater range of possibilities, and a modern economy has changed the incentives for creating certain types of construction.
There is far too much here to discuss concisely, but I just want to make a few brief points for possible future follow ups:
1. Tastes really do differ. Scott Alexander claims that he does not have taste, and yet he makes it quite clear in this post that he prefers ornamental styles, and says "it feels intuitively incontrovertible to me that the older stuff is more beautiful". Cowen identifies certain modern neighborhoods he finds to be exceptions to the rule, including the Ginza District and Diamond Head. Sumner states his preferences for modernist and certain traditional styles clearly, also suggesting that wealthier people have better taste, and that new buildings in conservative neighborhoods tend to be uglier than in liberal ones.
But taste is also complicated. Tastes can change with greater experience and understanding, and something that is beautiful because it is difficult is not simply more or less beautiful. It has a certain potential that will only appeal to a subset of people, who have to some extent opted into the game. Moreover, this is all complicated by elements of status and signaling. It would be nice to think we could accurately infer people's tastes based on what they say, or at least what they do, but there are all kinds of reasons why people might be intentionally misleading about what they actually like. Combine this with the fact that for houses in particular, a huge consideration is potential resale value, and we might expect people's choices to reflect more about what they think others will like than what they themselves truly value.
In addition, a factor that is hard to disentangle is that there is something about the process of aging that can increase beauty. A weathered wall or a rusty machine can often be more aesthetically pleasing than a brand new one (though not always), for reasons that are hard to define. One could imagine trying to create some sort of VR experience that would combine documentary evidence with image manipulation techniques to account for factors like age, but even then, so much might depend on how a place is inhabited, not just how it looks when empty of people.
2. Unsurprisingly, I think technology and economics are the dominant factors in all of this. Clearly the wealthiest person today could create a building that would never have been possible in the past. They could even create a near-perfect replication of an older building, though at a greatly increased cost (which is where things like cost disease really do enter the picture). That being said, because cost is such an important factor, most buildings being built (especially houses) will prioritize doing things cheaply, while ensuring that things are popular enough to sell. We could of course build more ornamental styles quite cheaply (just not the exact same ones as before), but my expectation is that this is a case where a certain type of demand is dictating style.
In particular, there might be a wide range of possible ornamental styles that individuals would prefer to the mainstream, but there is a need in certain domains like housing, furniture, and fashion to appeal to a lowest common denominator, so as to be able to do things at scale. Scale is such an important factor in determining cost that most of the effort will go into creating something that vast numbers of people will find acceptable rather than something that a few will find sublime. As such, changes in technology and economy (which are inextricably linked) mean that many more things are possible today, but anything produced at scale will be a tiny slice of that. We can see this most clearly in cookie-cutter subdivisions and the commercial junkscapes that now litter so many smaller cities.
In addition, housing is unusual in that while we want to live in a beautiful neighborhood, we have very little control over the neighborhood, other than choosing where to live. If aesthetics were the only concern, I expect we would see people making vastly different choices, but of course for most people there are any number of constraints (commute, space, safety, etc.) and cost will likely be one of the dominant factors, if not the dominant one. Cowen also suggests that efforts have shifted from making beautiful exteriors to making beautiful interiors, perhaps in part due to the effects of regulation, which place greater restrictions on the "public" part of the building.
3. It's very hard to get a sense of what is typical in such domains, or even define the question clearly. A number of the pieces above mentioned churches as a useful case study, and I expect there would be strong agreement among people that older churches were much more beautiful than contemporary ones (both in terms of the typical church and the most beautiful from each period). But the nature of what a church is has also changed over time, not to mention the power of the groups responsible for such construction. (Universities provide another interesting case study, though so few new universities are being built that it may not be a very productive one.)
There are also questions about survivorship bias, as mentioned above, but even ignoring that, it's not clear if we should think in terms of the raw distribution of what was built, or correct for the extent to which things are actually used and viewed. Do we account for the distribution of people living in each building? Do empty cities being built in China today count towards the contemporary average?
4. Despite Scott Alexander's claims about ornament, I think this depends greatly on how one defines art. He begins by suggesting that "Older art tends to have bright colors, ornate details, realistic representations, technical skill, and be instantly visually appealing to the average person. Newer art tends to be more abstract, require less obvious skill, and have less direct appeal." There is some truth here, but I think the first description is close to how a typical person would identify art, whereas a lot of what gets called art today is something which most people need to be told is art.
If we consider the category more broadly, and look at domains like video games, we will find an incredible profusion of color, detail, realism, and technical skill going into creating virtual worlds (as well as a huge amount of experimentation on the margins, much of which is likely to be more minimalist and abstract, due to both cost and aesthetic preferences). Much of modernist architecture is undeniably more cold and abstract than older styles, but at least for some people (including myself), it is just extremely effective as art, creating spaces that exhibit power and mystery, and which, when done well, create a kind of compulsion to experience.
5. All of the above notwithstanding, I still do find it odd that there has been so little dramatic innovation in widely-used architectural styles over the past 80 years or so. To me at least, a wide range of modernist homes are breathtaking, regardless of when they were built, and this still seems to be a dominant style for many elite custom homes, with only relatively small changes in the aesthetics. In part, this feels like a failure of rebellion and critique. Modernist architecture itself was obviously hugely effective as critique, but what have we seen successfully push back against it?
Historical Futures
Reading historical predictions about future technology is a delightful source of insights, both into past conditions of existence, and into what we take for granted today. The best are fascinating both for what they get wrong and for what they get right.
For example, in Technics and Civilization (1934), Lewis Mumford quotes Roger Bacon, writing in the 13th century, prognosticating about the future of transportation:
"I will now mention some of the wonderful works of art and nature in which there is nothing of magic and which magic could not perform. Instruments may be made by which the largest ships, with only one man guiding them, will be carried with greater velocity than if they were full of sailors. Chariots may be constructed that will move with incredible rapidity without the help of animals. Instruments of flying may be formed in which a man, sitting at his ease and meditating in any subject, may beat the air with his artificial wings after the manner of birds . . . as also machines which will enable men to walk at the bottom of seas or rivers without ships."
Mumford himself was relatively prescient about the future of energy, and clearly recognized its centrality for the future:
"Modern technics began in Western Civilization with an increased capacity for conversion. While society faces a fairly imminent shortage of petroleum and perhaps natural gas, and while the known coal beds of the world give no longer promise of life, at the present rates of consumption, than three thousand years, we face no serious energy problem that we cannot solve even with our present equipment, provided that we utilize to the full our scientific resources. Apart from the doubtful possibility of harnessing inter-atomic energy, there is the much nearer one of utilizing the sun's energy directly in sun-converters or of utilizing the difference in temperature between the lower depths and the surface of the tropical seas; there is likewise the possibility of applying on a wide scale new types of wind turbine, like the rotor: indeed, once an efficient storage battery was available the wind alone would be sufficient, in all probability, to supply any reasonable needs for energy"
Similarly, in Where is My Flying Car? (2018), J. Storrs Hall quotes numerous predictions from past futurists as the groundwork for his lament about what he sees as the strangulating effects of excessive regulation. On telecommunications, for example, we have Arthur C. Clarke, writing in Profiles of the Future (1962):
"We could be in instant contact with each other, wherever we may be, where we can contact our friends anywhere on earth, even if we don't know their actual physical location. It will be possible in that age, perhaps only 50 years from now, for a man to conduct his business from Tahiti or Bali just as well as he could from London... Almost any executive skill, any administrative skill, even any physical skill, could be made independent of distance."
Hall himself makes some bold claims about what nanotech might yet achieve (e.g., "replac[ing] the entire capital stock of the United States" in "about a week"), but most of the predictions we hear about today tend to be fairly limited variations on the same visions. Based on my own informal impressions, the most popular seem to be:
human level (or stronger) artificial intelligence
brain-computer interfaces
life extension
space tourism
self-driving cars
clean renewable energy
widespread use of drones
killer robots
Many of these already exist in some form of course, while others seem perpetually far off. On AI in particular, Ray Kurzweil is among the best known and most confident forecasters, making claims such as
“Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity---technological change so rapid and profound it represents a rupture in the fabric of human history. The implications include the merger of biological and nonbiological intelligence, immortal software-based humans, and ultra-high levels of intelligence that expand outward in the universe at the speed of light.”
which as far as I know he has not backed down from.
As much as we have already fulfilled many of the visions of the past, most dreams of the future remain a direct outgrowth of past hopes. Despite the advances that have been made, we can never move fast enough, go far enough, be connected enough. There is always the drive to do more, experience more, and reduce our physical exertions, discomfort, and boredom.
When future people look back on this time, the particulars of our predictions will be highly revealing of contemporary anxieties. Aside from the usual desires for greater ease and comfort, they will read into our predictions the struggle to find a meaningful role in the world, the difficulty of connecting with other humans, the worries over personal economic and physical security, and the anguish at still having to confront death, eventually, even as we progressively eliminate risk from our lives.
Rewatching The Matrix
Like many others, I expect, I recently rewatched The Matrix, as a refresher for the upcoming sequel. Remarkably, I don't think I've previously sat down to watch it since I first saw it in theaters in 1999. I expect I would have gone back to it (and certainly have seen clips here and there), except that I felt so unfavorably towards the sequels that I never returned. (Yes, Reloaded had some great set pieces, but I think the pair of films fell flat and were a big disappointment).
It's interesting to look at it now and compare my reactions to what they were over twenty years ago. I distinctly remember seeing it with a particular group of friends, and (dare I admit it), I was definitely the least enthusiastic afterwards. As usual, I think this was mostly about expectations. The beginning of the film was so strong, but I didn't really feel like it delivered, for a few reasons.
I can't recall now exactly what I knew going into it. I'm pretty sure I had a vague idea of what to expect, but I'm not sure if I was actually aware of the "What is the matrix?" marketing campaign or not. Regardless, from the opening seconds, I was immediately hooked. I'm pretty sure that I'd never seen such a striking manipulation of the Warner Bros. logo (although clearly it was not actually the first to do so), and that sent such a strong signal that this was something special. Out of the gate, the movie is truly firing on all cylinders, most obviously with the first use of the rotating camera for Trinity's attack, but also with bits of dialogue delivered in ways that are still so memorable (e.g., "No Lieutenant, your men are already dead.")
So why wasn't I more enthusiastic at the end? It's hard to remember exactly why, and I'm sure there are things that I'm forgetting, but three things come to mind. First, I'm pretty sure that I didn't find Keanu Reeves to be convincing or believable. Second, I found the idea of using humans as batteries to be preposterous, and it partially derailed my suspension of disbelief. Third, and most importantly, I felt let down by where the film went. It started with what seemed like such a cool premise, but then somehow ended up mostly just loading up characters with lots of guns.
Going back to view it again, it truly is incredible how well the best parts of the movie hold up. Obviously some of the visual effects are showing their age, but the overall construction of the action scenes, especially the helicopter sequence, is just so well done, not to mention the overall visual style, which still seems remarkably distinctive, even today (at least within the matrix; less so perhaps in the "real world").
On at least two points, I now feel completely differently. First, Keanu Reeves now seems strangely perfect for this role. I'm not sure how much of that is based on subsequent roles that he has taken on, and how much is just seeing him in a different light, but it now comes across as a kind of brilliant understated performance. I still find some of the line readings in his earliest scenes to be a bit unbelievable ("This is crazy!"), but other than that it seems pitch perfect.
It's especially interesting to watch the performances with different interpretations in mind. One reading I've heard, for example, imagines Trinity as the real main character of the story, with Neo representing a part of herself that she needs to learn to embrace. I'm not sure it quite works, but it is incredible to watch the two of them onscreen through that lens, with a beautiful kind of symmetry in their faces.
In terms of where they go with the idea of the matrix, I also feel somewhat differently now. In fact, from my current perspective, the action sequences are just so good that it seems perfectly fine if the idea of a simulation is only being used as a way of setting them up as remotely plausible. Obviously the film owes a massive amount to Hong Kong action cinema, but it's remarkably well executed here.
It's also fun to discover which parts of the action stuck with me. One scene that never left me was where Laurence Fishburne punches through the wall with his head, and it's still satisfying as hell. Do I still think they could have done more with the idea? Sure, but perhaps now that talk of simulations is so much more common, it feels like less of a missed opportunity, and more like a chance just to tell one particular story.
The one point on which my opinion has not changed is the idea of humans as batteries. I still think it's a bit too much of a dumb idea. In fact, this time around, I noticed that they say "combined with fusion" humans provide a good power source. Why not just use fusion?
However, the fact that it's so silly actually lends itself to a different interpretation. (For the sake of this, I'll simply ignore the narrative universe outside of this one film, mostly because I remember almost nothing from the other movies, but also because it feels like The Matrix was always made to stand on its own.)
The only people we really hear the battery theory from are the other humans outside the matrix, and Agent Smith. The interesting thing is that it's somewhat unclear what role Agent Smith plays in this universe. He is clearly an agent of the machines, but we don't actually know how much he knows, or how credible his knowledge is. Certainly there seems to be some disagreement, even dissension, among the three agents in this film, with hints that Smith is the most unhinged of all of them (though also the most authoritative, without necessarily being their superior). Similarly, the humans are clearly in the dark about a lot of what has happened, like who fired the first shot, or what year it is.
In other words, none of the sources of information seem completely reliable, and the film is actually quite amenable to an interpretation in which humans are being kept alive for reasons that have nothing to do with power. The most obvious answer, in my mind, is a kind of experience machine scenario, in which humans chose to create a system which would sustain them, while living inside the Matrix, either voluntarily (individually) or by having it imposed by some smaller group of people. In this reading, the attempts to prevent anyone from disrupting the system could be seen as a kind of built-in attempt at stability.
One could accept this reading, and yet still see the main characters of the film as heroes (though that is not the only interpretation), for the reasons that they themselves present. But it is much more ambiguous under this reading, and I rather like the fact that a particularly absurd justification provides an opening for a different way of reading it. To me at least, it's much more fun to think of the matrix not as the result of losing a war, but as the ultimate expression of high modernism.
In the end, I'm sad to say that I still feel a bit disappointed by the film as a whole. The best parts of it truly are exceptional, and are still eminently watchable (not merely for historical interest), but this time around I felt like much of the third act was narratively frustrating, with too much in the way of talking villains, late game coincidences, and characters coming back to life. Maybe my reasons have changed, or maybe this is the part that I'm not fully remembering from my first viewing, but I can't help but come away with the feeling that I wish it could have been something more.
A new experiment
A few months ago, a good friend of mine encouraged me to start writing more. I am fully on board with the premise, and have been trying to internalize the idea, and yet somehow actual output has been slow to manifest.
I think there are a few reasons for this. Obviously time has something to do with it; we're all busy and it can be hard to prioritize things which are not quite either work or play. I don't think that's the main reason however. I do spend plenty of time carrying out the act of writing, it's just that most of it in no way resembles something that I would want to make public.
A bigger factor then, is the feeling of needing to have something to say, especially something interesting or intelligent enough that others might find some value in reading it. Most of the time I do, in fact, feel like I have things to say that would be worth reading, and yet getting a piece of writing to be coherent and polished enough that it seems worth sharing can sometimes feel rather onerous. This may be in part due to having too high a stylistic standard for what is sufficiently high quality, but I expect it's more like a kind of perfectionism with respect to content. It always feels like one could do more reading into or dwelling on a topic, and I find myself reluctant to put out half formed thoughts, or things that have already been said, especially when there is already such amazing writing out there for people to read.
In addition, I admit to a kind of laziness and tendency towards procrastination. I love thinking about things I might want to write, and the ideas often feel almost perfectly crafted in my mind. And yet that's never quite the case when it comes down to actually putting them in some sort of order, and writing itself can feel just enough like work that it can be extremely hard to convince myself to start. Although it's far from ideal, it sometimes feels like my best strategy is to just ramble endlessly about nothing at the keyboard, and then once in a while an essay will pop out more or less fully formed. I don't mind that I have to trick myself into doing it; it's just frustrating that it feels so damned inefficient.
So why write and share this undercooked pontificating? It has something to do with aspiration. I really do place enormous value on all manner of human cultural production, and I am often especially impressed by really good critical commentary. The latter in particular typically feels very much within reach, and yet, the ability to produce it remains elusive. Perhaps by deliberately working against some of my perceived limits, and playing at the role I aspire to, there is a chance to gain some insight into how others are able to do what they do.
In particular, three strategies are floating around which feel like they might be helpful. The first is anonymity, which I do think is a kind of game changer. There is no true anonymity online of course, and I doubt that I'll write much that I wouldn't be willing to take credit for. But knowing that you can put something out into the world without it being immediately searchable under your name provides a space with a bit of much-needed freedom to experiment.
Second is community. I don't think there's anything more powerful for being creative than being part of a community of people who aspire to the same ideal, especially if there is sufficient overlap in interests and worldview. It's not exactly competitive pressure, but somehow seeing that other people you're in frequent communication with are regularly putting work out into the world, both individually and collectively, serves as a highly potent motivational elixir combining inspiration, encouragement, and an existence proof.
That being said, finding the right mix is probably critical. Some time ago I spent a fair bit of time working with some amazing friends on a screenplay that never really went anywhere. I think the biggest problem was that everything remained far too internal. We were too in love with our own ideas and writing, but there was no driving force to actually build out the connective tissue or make anything public. At this point, for me, a looser affiliation with people who are regularly writing and sharing more distantly related material feels like a more useful environment.
The third strategy is what I'm hoping will really push things over the edge, and that is something like embracing failure and deliberate experimentation. In writing this particular chunk of text (and hopefully more to come), I'm trying to engage in a kind of auto-ethnographic investigation. As I say, I am continually impressed by the amount and the quality of work that people I admire are able to produce. I don't think what I'm doing here is necessarily a recipe for creating great work. Rather, by mimicking the activity of those who are prolific, perhaps I can come to better understand how they are able to do it.
In other words, while admitting that there is always a possibility of some kind of success, I am now striving to take on a kind of deliberate practice that I fully expect to fail. I expect it to fail in terms of quality, and I expect it to fail in terms of persistence. To be clear, I don't ask or expect anyone to join me in this failure; much will not be worth reading. Nor am I making any kind of promise; such promises don't seem to work anyway. I'm just saying I'm going to attempt an experiment, and see what I can learn from it.
Information Economies 1
I've been thinking recently about the organization of information, and the mapping of culture, and a prime example that sprang to mind is all of the secondary production that goes on around video games, specifically in providing documentation and organizing in-game information.
What are the outlines of this? A person (or more likely a team of people) creates a game. We can debate what games are, and that is a whole separate conversation, but let's just think of it as some sort of interactive media experience. There is a richness of detail. People play it, mostly for fun, although the reasons can be complicated. For various reasons, there is a demand for information about the game to be available outside of the game, with the FAQ or wiki being the canonical form of this today. Obviously this is part of a broader phenomenon of people writing about and organizing information around culture, and yet video game wikis provide a particularly interesting example for a few reasons.
First, there is something about the nature of games that makes people especially interested in being able to obtain very specific pieces of information. Everything in the game is basically available to the player, as long as they are willing to put in the time and figure it out. But that can at times work against the nature of the enjoyment that people are after. For some people, in some settings, it can be fun to repeatedly fail at something, trying to git gud. Others might love exploring every nook and cranny of a world, mapping out the full space of possibilities. But most players are after something closer to the mainstream experience, the kind of enjoyment the game designers expected most players to seek.
There is a kind of circularity here -- designers are not just designing for a specific type of player, but also explicitly designing lots of edge cases -- but there is still a way in which most games (especially those with highly linear narratives) are welcoming the player into a very particular kind of experience -- one where the "fun" is often the overcoming of small obstacles (which might result in an increase in capabilities, allowing the player to overcome more challenging obstacles), combined with experiencing a traditional narrative. However, it can be quite difficult to calibrate the difficulty of such things, and so there may be moments where the player stops having fun, because they are stuck on something, and would rather get past it quickly so that they can get on to the next, hopefully better-calibrated challenge. This could be defeating a particular boss, or locating a particular item. As such, there may be frequent moments where players want that very specific type of information (how to beat the boss? where is that item?) and wikis are very much set up to provide all of this and more. (A variation on this -- the walkthrough, which provides a step by step guide to beating a game -- might be used in the same way, or it might be used by players that want an even more guided experience, experiencing the pleasure of carrying out each step, and experiencing the narrative, without having to figure things out for themselves).
Perhaps even more than helpful hints and puzzle-related information, wikis devote a lot of space to organizing the information about a game's world -- the names of all the characters and their backstories, all the pieces of lore that can be collected, references to other games or worlds, connections to source material, etc. I often wonder how much the actual role of such informational logistics is to be used as a reference, as opposed to being primarily about the pleasure of organizing the information. How many people sit down and read through a video game wiki to fill in all the pieces of the backstory? How common are such people compared to those who want to catalogue every plant species or map every world?
The latter activities obviously connect us strongly to long-standing traditions in science and administration -- the identifying and cataloging of actual plant species, the mapping of actual spaces, the organizing of information about trade routes and forest yields and everything else involved in trying to run a profitable business or administer a state. Is the amount of time people devote to organizing information about games (and other media such as movies and books) the result of a lack of demand for that sort of work within more traditional entities like business and government, or is there something about organizing information from fictional worlds that is inherently more pleasurable? It's interesting because the vast majority of games are designed so as to be enjoyable to play -- by placing well-calibrated challenges, etc. -- and so it is unsurprising that they are fun to play; but it is far less clear that they are designed to be fun to document and catalogue. It is possible that this is now in some sense a core feature of design -- perhaps seen in an otherwise unnecessary proliferation of in-game plant or animal species -- but it's ambiguous, and most likely is a relatively recent phenomenon. Think of all those in-game collectibles: are they there because players actually want to collect them, or because others want to document their locations, simplifying the tasks of the players?
As much as these activities are all in some sense games, the lines get blurry, because of course all of this is deeply connected to making money. Most obviously, there is the business of selling the game. Video games are notoriously labor intensive to produce, and big games cost enormous sums of money, meaning that there is the expectation for a certain return on investment. Interestingly, many games are in some sense quite cheap to consume on a dollar per hour basis. Whereas people used to quite willingly pay ten, twelve or more dollars to see a two hour movie, video games which sell for sixty dollars might provide many dozens or even hundreds of hours of entertainment. In-game transactions aside (although this is clearly an important part of the story which cannot be ignored; it's just too big a piece to take on here), this means that video games need to sell a lot of units.
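The dollars-per-hour point can be made concrete with a quick back-of-the-envelope calculation (the prices and play times here are just the illustrative figures above, not data):

```python
# Rough cost-per-hour comparison using the illustrative figures above.
movie_price, movie_hours = 12.0, 2.0
game_price, game_hours = 60.0, 100.0

movie_rate = movie_price / movie_hours  # dollars per hour of entertainment
game_rate = game_price / game_hours

print(f"movie: ${movie_rate:.2f}/hour")  # $6.00/hour
print(f"game:  ${game_rate:.2f}/hour")   # $0.60/hour
```

On these (generous) assumptions, the game is an order of magnitude cheaper per hour, which is exactly why the business depends so heavily on volume.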
Although I expect that many people find that most games are actually far too long (especially those who have jobs), clearly some segment of the player base wants a particularly rich, involved, and lengthy experience, with a narrative that cannot be completed in a mere ten or twelve hours. (As an aside, it would be interesting to explore the timeline of the evolving lengths of different media, like books, movies, TV series, and games). However, it is not just the primary market that matters here. Because there is so much demand for video game wikis, they can be enormously popular, and therefore frequently visited, meaning the potential for advertising revenues. As such, we can expect there to be competition for creating these indices of information. Indeed, presumably there is a particularly fierce competition with the release of every major new game, for which site will be able to provide the most popular walkthrough or wiki as soon as possible after the game's release. (The length of modern games doesn't stop some players from getting through them extremely quickly in terms of total wall-clock time from the first moment of play, which is itself a kind of competition, one taken to an extreme in spaces like speedrunning, again a whole other topic).
I don't mean to imply that all of this is just about business. Clearly people do like participating in a variety of activities related to fictional worlds -- from organizing the information, to writing stories set in those worlds, to enacting live-action versions of related narratives. In total, I would expect that the vast majority of this is undertaken without any expectation of an actual financial reward. And yet the market forces end up shaping so much of the content, because, for example, a wiki cannot succeed unless it gains a user base, and so the de facto wiki for a popular game will almost certainly be one that has been (or could be) commercialized.
Not only do people seemingly not have a problem with participating in spaces that are being commercialized by others, they are actually probably far more likely to do so than to create something from scratch. In other words, not only is much of this organizing of information undertaken on behalf of fictional worlds, it is done within an ecosystem organized by a third party.
There are numerous phenomena to be understood more deeply here -- peer production, internet commercialization, public goods, informational affinities, obsessive tendencies, virtual economies... not to mention the history of earlier manifestations of such cataloguing -- and all of this extends far beyond video games. In a certain way, I have become part of the phenomenon I am describing, by focusing on the secondary production around purely fictional creations, as opposed to secondary production around activities like scientific research. Nevertheless, precisely because the lines get so fully blurred among fun, profit, and obsession in this sphere, it provides something of an exemplary case study. More research required.
1 note
·
View note
Text
Many thanks for this reply! I think we are generally in strong agreement, so I'll just address a few small points.
First, on the Bayesian brain and heuristics, I think we only differ on what counts as a heuristic. I certainly agree that our brains work (in some sense) by having models of the world and updating those with new information as it comes in, which is certainly Bayesian in spirit. We can quite safely assume for theoretical reasons that doing exact inference is not within the realm of possibility, so there must be some kind of approximation happening. I guess my perspective here is that the fact that there are certain ways in which we can reliably get confused suggests that failures of approximation are not due to some sort of insufficiently precise approximation (e.g., a la Monte Carlo inference, where we would expect failures to be more random), but rather due to the use of some sort of more heuristic mechanism. I can't really get any more precise than that, as I don't actually know what I'm talking about, but I just wanted to clarify that I think you took me to mean a greater level of simplicity than I actually intended. My intended meaning was something more like approximate inference with some reliable failure modes.
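The contrast I have in mind can be made concrete with a minimal sketch (a coin-flip posterior, with a deliberately crude "heuristic" invented here purely for illustration): a Monte Carlo approximation errs randomly, and its error shrinks as samples grow, while a heuristic errs reliably in one direction no matter how much computation you throw at it.

```python
import random

random.seed(1)

# Exact Bayesian answer: with a uniform (Beta(1,1)) prior and
# 7 heads in 10 flips, the posterior mean of the coin's bias
# is (heads + 1) / (flips + 2).
heads, flips = 7, 10
exact = (heads + 1) / (flips + 2)  # = 2/3

# Monte Carlo approximation: sample candidate biases from the prior,
# weight each by its likelihood, and take the weighted average.
def monte_carlo(n_samples):
    num = den = 0.0
    for _ in range(n_samples):
        p = random.random()  # draw from the uniform prior
        likelihood = p**heads * (1 - p) ** (flips - heads)
        num += p * likelihood
        den += likelihood
    return num / den

# A crude heuristic (invented for this sketch): just report the raw
# frequency, ignoring the prior entirely.
heuristic = heads / flips  # 0.7 -- systematically biased, every time

print(exact)                 # 0.666...
print(monte_carlo(100_000))  # close to exact; error shrinks with samples
print(heuristic)             # reliably overshoots, regardless of effort
```

The point is only the shape of the errors: the sampler's mistakes are noise around the right answer, while the heuristic's mistake is stable and predictable, which is what our reliable confusions seem to resemble.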
Now I do agree that many of these failure modes are rather artificial. However, it is interesting to think through various ways in which we might have been fooled, even in the ancestral environment. I think there are actually numerous types of visual and auditory illusions we might have been subject to, and at least sometimes confused by, including things like the classic mirage in the desert. One could also potentially categorize the fluctuations in our own thinking as a kind of distortion (whether due to situational cues, various stimulants, or other), i.e., whether we think we can take on a lion in a given moment or not. And of course there is clear evidence of a predisposition to assume a kind of animacy or intentionality in the causes of things, and a corresponding failure to imagine a more systemic explanation. We could quibble about whether to blame this on heuristics or not, but it does seem to me like there is a predisposition (whether through prior or inference mechanism) to end up believing in the presence of some sort of God.
That being said, I do admit that it is foolhardy to assume we know what is in the mind of the deer, and leaning too hard on consciousness or intentionality can get us into trouble. I often feel like a major difficulty in communication is to sustain the utility of certain terms when they are at risk of collapse under an analysis at a lower level. Yes it is possible to read everything we do as writing, but this seems to me to wade into the waters of having writing become synonymous with the forward flow of time, i.e., the movement of atoms is itself just a kind of writing; how can we find any basis upon which to distinguish certain types of events in time from others? Whether it is "real" or not, intentionality still feels like it can be a useful frame for thinking about these things, even if it is only something we build into our models of others.
One way of trying to make a distinction could perhaps be based on the extent to which we consciously run forward simulations of possible outcomes which might result from the perceived choices. All of this is still nevertheless just us acting, and yet there certainly feels to be something special about circumstances in which we try to imagine the likely outcomes of our actions, and even briefly inhabit those mental worlds before acting, as distinct from situations in which we respond without consciously thinking about it. I often think this is clearest in cases where we are trying to decide among a fixed set of choices, especially when the evidence is insufficient for it to be obvious. It's fascinating to me how easily a kind of Kierkegaardian moment of undecidability can enter into our daily lives (such as, personally speaking, when I try to decide where to go for brunch).
Something similar happens with "manipulation". I agree that it is hard to draw a clean distinction between something like nefarious persuasion and influence more generally, but if we let "manipulation" expand into just meaning "influence", then I expect we would simply end up inventing another term to signal something closer to the nefarious end of the spectrum. At a minimum, there does seem to be a useful distinction between, say, telling someone what you are trying to do to influence them, versus deceiving or denying that you are doing what you are doing or that you are doing anything at all.
Regardless, I love your point that even connoisseurs love to be manipulated, it's just that they want it done well, not using hackneyed techniques. It's fascinating the degree to which we desire certain types of mental states, and in some ways work against ourselves in seeking them out, by gradually increasing the refinement of our tastes. There is something delightful about experiencing new ideas when in a state of ignorance that I fear gets lost as we begin to know more. Perhaps what becomes most difficult is sustaining a base level of wonder and capacity to be awed. How shall we seek out the most stimulating intellectual experiences once we have ensconced ourselves in what feels like a relatively coherent world view?
Suspended,
It feels somewhat strange to respond to a letter that was not addressed to me, especially one that is now perhaps more of a historical artefact. Nor was your letter purloined, but simply made public, published even, as part of a correspondence. In that sense, this is perhaps more of a commentary, or simply a piece of fan mail.
The main reason I felt compelled to respond was your conceptualization of reading as inference, which is very close to how I have been thinking about related issues.
In particular, the example that kept coming to mind, while I read your letter, is that of "stotting". This is a behaviour that has been observed in animals such as deer, in which they needlessly jump while running from a predator. Since this sort of jumping does not increase the speed at which they can flee from the predator (and may in fact slow them down or make them more visible), this behaviour has been interpreted by some of those who study it as a kind of signaling. In effect, those animals who stot are flaunting their ability, suggesting to the predator that it should not bother trying to chase them down, because they clearly have enough speed and stamina to outrun the danger. The predator, in turn, will read this signaling appropriately, redirecting its efforts towards an animal that is not indicating that it is capable of escaping, which is more likely to be one that it will be able to catch (perhaps because the prey is old, or injured, for example).
The key to this, and the only reason it works, as others have noted, is that this signal is hard to fake. It wouldn't work for animals to try to convey their ability to outrun a predator using some means that was easily available to all; if all individuals are equally capable of sending that signal, no matter what their ability to actually outrun a predator, then anti-inductivity would kick in, and all of them would do so (at which point, the message would lose all meaning, effectively conveying no information, so in fact none of them would). It is only because the signal really does represent something that is highly indicative of ability to run fast that it works as a signal. Those who are not fast enough to stot will get eaten.
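The logic of a hard-to-fake signal can be sketched as a toy simulation (everything here -- the uniform speed distribution, the stamina threshold for stotting, the predator's decision rule -- is an invented assumption for illustration, not a claim about real animals):

```python
import random

random.seed(0)

# Toy costly-signaling sketch. Each prey animal has a hidden speed in
# [0, 1]; the predator cannot observe speed directly, only whether an
# animal stots.
STOT_THRESHOLD = 0.7  # assumed: only animals faster than this can afford to stot

def average_chased_speed(signal_is_honest, n_herds=2000, herd_size=10):
    total = 0.0
    for _ in range(n_herds):
        herd = [random.random() for _ in range(herd_size)]
        if signal_is_honest:
            non_stotters = [s for s in herd if s <= STOT_THRESHOLD]
        else:
            non_stotters = []  # everyone stots, so the signal partitions nothing
        # The predator chases a random non-stotter if any exist,
        # otherwise a random member of the herd.
        target = random.choice(non_stotters or herd)
        total += target
    return total / n_herds

print(average_chased_speed(True))   # roughly 0.35: predator finds slow prey
print(average_chased_speed(False))  # roughly 0.5: the signal conveys nothing
```

When the signal is honest, the predator reliably ends up chasing below-threshold animals; when everyone can send it, the predator does no better than picking at random, which is exactly why the signal would stop being sent at all.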
The same sort of thing of course explains many strange behaviours among animals, as far as I understand, including things like bird plumage and mating behaviours (we can also think of obvious analogies among humans). That does not mean, of course, that all such signals retain their intrinsic meaning forever. Especially among humans, evolutionary psychologists have suggested that all kinds of preferences can be traced to things which were once signals representing various characteristics likely to be beneficial for offspring, such as the ability to have children, or overall health or freedom from disease. These are now basically evolutionary holdovers because a) many of these are no longer so hard to fake, and b) they are no longer strong indicators of the relevant outcomes. Nevertheless, there is of course a sense in which they can continue to have power, insofar as things that are desirable themselves become symbols of status, in a self-reinforcing feedback loop.
Whether or not the above explanation is correct (let's say for the sake of argument that it is), we might conventionally think of the observation and reaction by the predator as a kind of "reading", though it instinctively seems less appropriate to call the stotting itself a kind of "writing". The first explanation we might instinctively provide for this difference is that stotting is not a conscious, calculated choice by the prey ("if I leap now, then that wolf will infer that I am strong, and realize how foolish it would be to chase me"), but rather behaviour that is essentially automatic, like some part of the fight or (in this case) flight response. That seems reasonable, except that presumably the "reading" by the predator is just as similarly unconscious and automatic (if that is indeed a fair characterization of the prey's behaviour).
Even though this seems slightly inconsistent, I think it can actually be reasonably understood within your framework. If we think of reading as a kind of inference, then the predator is accurately inferring from the prey's stotting that it is a fast animal (assuming that the stotting is in fact a trustworthy signal), and adjusting its behaviour as a result. It obviously has not carried out a precise statistical inference, but rather has applied various heuristics to do a kind of approximate inference, incorporating the new information into its model of each deer's ability and its likelihood of catching each of them. The deer is writing in a sense, signaling as it leaps, but "writing" still feels inappropriate. Perhaps the key is that it does not seem to be an individualized signal crafted to the particular situation, but just part of an automatic behaviour that would be triggered by any predator, and is similar across all prey, such that we would call it something more like "instinct".
All that is to say, we can think of both reading and writing as being more or less calculated or automatic. All of these notions clearly apply to humanity as well, with the added complexity that we humans have much richer models of the world, much more cognitive capacity, and often much more flexibility to craft our signals or carry out our readings over an extended period of time.
That's not to say, however, that we don't also have more automatic forms of these. A classic example (again from evolutionary theory), might be something like blushing. Blushing seems to be one of those evolutionary puzzles for which we can come up with numerous possible explanations. Part of the thinking on this will be guided by salient features of blushing, specifically the fact that it is automatic and uncontrollable, and also (perhaps to a lesser extent), that it is hard to fake. One could perhaps learn to blush on command, or to avoid blushing in any circumstance, but I expect it would take considerable practice in either case. I won't speculate here on the "true purpose" of blushing, but rather note that it is a kind of writing that we carry out without intending it (to the extent that stotting is), and one that we similarly read in each other without needing to think about it; we see someone blush and immediately infer that they are embarrassed (or some more complex interpretation, depending on the context).
At the opposite extreme, I can finally turn to your discussion of more literal reading and writing, as with literature. Here there seems to be much more room for a more deliberate, calculated, drawn out form of intention and interpretation, though I think it is also more complicated than that. Especially on the writing side, it is no doubt the case that many times we very deliberately carry out actions that we believe will cause a reader to make particular inferences, and in some cases the calculated nature of this might sometimes lead this to be labeled manipulation.
To some extent this is what writers do. In editing a piece of work, a writer might think about how a particular sentence will be received, whether it is confusing, how it could be improved, based largely on how they think it will be read. At the same time, it seems implausible that writers begin purely with such calculated modeling. Rather, they begin. Either through practice, raw talent, or experience gained through feedback, they have some intuitive sense of what will work, and follow those intuitions, or even begin in the most non-deliberate writing they can, hoping that things will emerge, or that they will be able to shape the material into something later. One might also point to "automatic writing" as a tool used by writers to try to get past blocks or generate ideas, though ironically one seems to need to use fairly deliberate techniques in order to produce work that is adequately "automatic".
On the reading side, there is again a continuum. Even professional critics presumably have many automatic reactions while they are reading, though these of course will be shaped by their experience, and will likely be quite different from that of a more naive reader. (The critic is far less likely to be affected by simple manipulative techniques like music inserted to trigger emotional cues, and much more likely to have a negative reaction to overused tropes). Nevertheless, the interpretive part of a literal kind of reading (or listening or viewing) will likely be a more extended activity, with interpretations developed through reflection, discussion, debate, etc.
The scope for such reading is limitless of course. One very narrow type of reading is to assume that there is a single specific "meaning" intended by the writer, and that the goal is to figure out what that is. I'm not familiar with Knapp & Michaels, but based on your description, this seems to be the only type of meaning they are interested in, which seems to me to be an extremely impoverished view.
Certainly there are some settings in which that view is relevant. For example, in creating a crossword puzzle, the creator has an intended answer for each clue, such that the correct answers will cohere. The goal of the puzzle solver is largely to infer what the writer intended for each clue, although in rare circumstances, there could be multiple solutions which would fit equally well, in which case one might fail to infer the intended meaning, and yet still "solve" the puzzle. (Ironically, puzzle solving can be turned into puzzle creation, with enough creativity in interpreting the clues).
Another example would be something like detecting what seem to be typos, and trying to infer what the author intended to write, if they did in fact intend to write something different. In reading your letter, I noticed you say "in a class narrow-and-conquer method", which doesn't quite scan. I thought at first that you meant to write "crass", but upon reflection, you probably intended "classic". Perhaps it was both!
As for the "meaning" of most art, however, it seems overly simplistic to suggest that there is a single "intended" meaning by the creator, for the reasons discussed above. There might be ambiguities in the text for which the writer has a personal interpretation, but in most cases the larger "meaning" will not be a single intention by the author (except perhaps in their intention to create a particular emotional or affective response to a particular passage, etc.).
Moreover, there is simply much more than can be inferred. Indeed, much of the elaboration of criticism is in thinking about art as information which can be used to make inferences about the world, either about the world in which the work was created, or about aspects of the author which they themselves may not even be aware of, at least not consciously.
One subtlety I might suggest is worth drawing out a bit more is your suggestion that "inference is performed statistically". This is clearly true in the sense that we are dealing with uncertainties. We are not making syllogistic deductions, but rather assembling evidence into models. The reason I find this interesting is that we do now have very good theories and even systems for performing "correct" statistical inference. In fact, I was particularly struck (although I didn't notice it on my first read through) by your mention of a quest for a "so-called universal method of statistical inference which can be uniformly, automatically applied to any problem". This should be the subject for another letter, but there is a sense in which this is something that has in fact now been theorized and created by the statistical community.
The conclusion from such statistical theorizing and building, however, seems to be that this is not something that humans could plausibly be carrying out (because anything exact would be intractable). Hence my reference above to heuristics, and perhaps the reason that we can still be so easily fooled by crude signals. This also connects to your characterization of the link between color and capacity for action (within the context of traffic signals) as statistical, though I won't try to develop that here.
Regardless, reading will of course be dependent on our models of the world, and can be manipulated by misunderstanding. You mention GPT-3, which provides many such examples. In the absence of any prompting to that effect, many people would likely assume that various texts by GPT-3 were written by a person, and might proceed to interpret based on that assumption. Moreover, such interpretations might be "correct" within the confines of their assumptions. Upon learning that such text was in fact produced by an anthropomorphized computer program, many would similarly assume that the machine must in some sense be conscious or otherwise capable of various feats. Again, such an inference is not necessarily wrong within the confines of its assumption, though that is only because most people would be wrong about what they believe is possible or impossible to do with systems trained on large amounts of text.
In that sense, the ability to write coherently is something that, up until now, has been quite hard to fake. Because it must be learned, writing was a clear signal of a certain level of cognitive ability, and other abilities that could reliably be expected to come along with that. Now that GPT-3 exists, of course, people will need to recalibrate, though it seems likely that most people will remain perpetually behind in making truly accurate inferences, given the pace at which things are changing. Nevertheless, even if we do successfully adapt our conscious machinery in interpreting texts, it seems highly likely that we will continue to be "fooled" to some extent by convincing text. Just as we can't help but blush, and can't help but read blushing, we likely can't help but read text as meaningful, in the sense of having had a consciousness behind it, even if we then follow that with a more rigorous or creative reading.
PM
October 9, 2021
3 notes
·
View notes
Text
Suspended,
It feels somewhat strange to respond to a letter that was not addressed to me, especially that is now perhaps more of a historical artefact. Nor was your letter purloined, but simply made public, published even, as part of a correspondence. In that sense, this is perhaps more of a commentary, or simply a piece of fan mail.
The main reason I felt compelled to respond was your conceptualization of reading as inference, which is very close to how I have been thinking about related issues.
In particular, the example that kept coming to mind, while I read your letter, is that of "stotting". This is a behavior that has been observed in animals such as deer, in which they needlessly jump while running from a predator. Since this sort of jumping does not increase the speed at which they can flee from the predator (and may in fact slow them down or make them more visible), this behaviour has been interpreted by some of those who study it as a kind of signaling. In effect, those animals who stot are flaunting their ability, suggesting to the predator that it should not bother trying to chase them down, because they clearly have enough speed and stamina to outrun the danger. The predator, in turn, will read this signaling appropriately, redirecting its efforts towards an animal that is not indicating that it is capable of escaping, which is more likely to be one that it will be able to catch (perhaps because the prey is old, or injured, for example).
The key to this, and the only reason it works, as others have noted, is that this signal is hard to fake. It wouldn't work for animals to try to convey their ability to outrun a predator using some means that was easily available to all; if all individuals are equally capable of sending that signal, no matter what their ability to actually outrun a predator, then anti-inductivity would kick in, and all of them would do so (at which point, the message would lose all meaning, effectively conveying no information, so in fact none of them would). It is only because the signal really does represent something that is highly indicative of ability to run fast that it works as a signal. Those who are not fast enough to stot will get eaten.
The same sort of thing of course explains many strange behaviours among animals, as far as I understand, including things like bird plumage and mating behaviours (we can also think of obvious analogies among humans). That does not mean, of course, that all such signals retain their intrinsic meaning forever. Especially among humans, evolutionary psychologists have suggested that all kinds of preferences can be traced to things which were once signals representing various characteristics likely to be beneficial for offspring, such as the ability to have children, or overall health or freedom from disease. These are now basically evolutionary holdovers because a) many of these are no longer so hard to fake, and b) they are no longer strong indicators of the relevant outcomes. Nevertheless, there is of course in which they can continue to have power, in so far things that are desirable themselves become symbols of status, in a self-reinforcing feedback loop.
Whether or not the above explanation is correct (let's say for the sake of argument that it is), we might conventionally think of the observation and reaction by the predator as a kind of "reading", though it instinctively seems less appropriate to call the stotting itself a kind of "writing". The first explanation we might instinctively provide for this difference is that stotting is not a conscious, calculated choice by the prey ("if I leap now, then that wolf will infer that I am strong, and realize how foolish it would be to chase me"), but rather behaviour that is essentially automatic, like some part of the fight or (in this case) flight response. That seems reasonable, except that presumably the "reading" by the predator is just as similarly unconscious and automatic (if that is indeed a fair characterization of the prey's behaviour).
Even though this seems slightly inconsistent, I think it can actually be reasonably understood within your framework. If we think of reading as a kind of inference, then the predator is accurately inferring from the prey's stotting that it is a fast animal (assuming that the stotting is in fact a trustworthy signal), and adjusting its behaviour as a result. It obviously has not carried out a precise statistical inference, but rather has applied various heuristics to do a kind of approximate inference, incorporating the new information into its model of each deer's ability and it's likelihood of catching each of them. The deer is writing in a sense, signaling as it leaps, but "writing" still feels inappropriate. Perhaps the key is that it does not seem to be an individualized signal crafted to the particular situation, but just part of an automatic behaviour that would be triggered by any predator, and is similar across all prey, such that we would call it something more like "instinct".
All that is to say, we can think of both reading and writing as being more or less calculated or automatic. All of these notions clearly apply to humanity as well, with the added complexity that we humans have much richer models of the world, much more cognitive capacity, and often much more flexibility to craft our signals or carry out our readings over an extended period of time.
That's not to say, however, that we don't also have more automatic forms of these. A classic example (again from evolutionary theory), might be something like blushing. Blushing seems to be one of those evolutionary puzzles for which we can come up with numerous possible explanations. Part of the thinking on this will be guided by salient features of blushing, specifically the fact that it is automatic and uncontrollable, and also (perhaps to a lesser extent), that it is hard to fake. One could perhaps learn to blush on command, or to avoid blushing in any circumstance, but I expect it would take considerable practice in either case. I won't speculate here on the "true purpose" of blushing, but rather note that it is a kind of writing that we carry out without intending it (to the extent that stotting is), and one that we similarly read in each other without needing to think about it; we see someone blush and immediately infer that they are embarrassed (or some more complex interpretation, depending on the context).
At the opposite extreme, I can finally turn to your discussion of more literal reading and writing, as with literature. Here there seems to be much more room for a more deliberate, calculated, drawn out form of intention and interpretation, though I think it is also more complicated than that. Especially on the writing side, it is no doubt the case that many times we very deliberately carry out actions that we believe will cause a reader to make particular inferences, and in some cases the calculated nature of this might sometimes lead this to be labeled manipulation.
To some extent this is what writers do. In editing a piece of work, a writer might think about how a particular sentence will be received, whether it is confusing, how it could be improved, based largely on how they think it will be read. At the same time, it seems implausible that writers begin purely with such calculated modeling. Rather, they begin. Either through practice, raw talent, or experience gained through feedback, they have some intuitive sense of what will work, and follow those intuitions, or even begin in the most non-deliberate writing they can, hoping that things will emerge, or that they will be able to shape the material into something later. One might also point to "automatic writing" as a tool used by writers to try to get past blocks or generate ideas, though ironically one seems to need to use fairly deliberate techniques in order to produce work that is adequately "automatic".
On the reading side, there is again a continuum. Even professional critics presumably have many automatic reactions while they are reading, though these of course will be shaped by their experience, and will likely be quite different from that of a more naive reader. (The critic is far less likely to be affected by simple manipulative techniques like music inserted to trigger emotional cues, and much more likely to have a negative reaction to overused tropes). Nevertheless, the interpretive part of a literal kind of reading (or listening or viewing) will likely be a more extended activity, with interpretations developed through reflection, discussion, debate, etc.
The scope for such reading is limitless of course. One very narrow type of reading is to assume that there is a single specific "meaning" intended by the writer, and that the goal is to figure out what that is. I'm not familiar with Knapp & Michaels, but based on your description, this seems to be the only type of meaning they are interested in, which seems to me to be an extremely impoverished view.
Certainly there are some settings in which that view is relevant. For example, in creating a crossword puzzle, the creator has an intended answer for each clue, such that the correct answers will cohere. The goal of the puzzle solver is largely to infer what the writer intended for each clue, although in rare circumstances, there could be multiple solutions which would fit equally well, in which case one might fail to infer the intended meaning, and yet still "solve" the puzzle. (Ironically, puzzle solving can be turned into puzzle creation, with enough creativity in interpreting the clues).
Another example would be something like detecting what seem to be typos, and trying to infer what the author intended to write, if they did in fact intend to write something different. In reading your letter, I noticed you say "in a class narrow-and-conquer method", which doesn't quite scan. I thought at first that you meant to write "crass", but upon reflection, you probably intended "classic". Perhaps it was both!
As for the "meaning" of most art, however, it seems overly simplistic to suggest that there is a single "intended" meaning by the creator, for the reasons discussed above. There might be ambiguities in the text for which the writer has a personal interpretation, but in most cases the larger "meaning" will not be a single intention by the author (except perhaps in their intention to create a particular emotional or affective response to a particular passage, etc.).
Moreover, there is simply much more that can be inferred. Indeed, much of the elaboration of criticism is in thinking about art as information which can be used to make inferences about the world, either about the world in which the work was created, or about aspects of the author which they themselves may not even be aware of, at least not consciously.
One subtlety I might suggest is worth drawing out a bit more is your suggestion that "inference is performed statistically". This is clearly true in the sense that we are dealing with uncertainties. We are not making syllogistic deductions, but rather assembling evidence into models. The reason I find this interesting is that we do now have very good theories and even systems for performing "correct" statistical inference. In fact, I was particularly struck (although I didn't notice it on my first read through) by your mention of a quest for a "so-called universal method of statistical inference which can be uniformly, automatically applied to any problem". This should be the subject for another letter, but there is a sense in which this is something that has in fact now been theorized and created by the statistical community.
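I won't presume to pin down what that "universal method" is here, but Bayesian updating is one natural candidate for a recipe that can be applied uniformly: the same mechanical rule works for any discrete hypothesis space. A sketch, with an invented coin example (the hypotheses and probabilities are mine, chosen only for illustration):

```python
# A generic Bayesian update: the same rule applies to any discrete set of
# hypotheses, given a prior and a likelihood function. (A sketch of the
# idea, not a claim about which framework statisticians have settled on.)

def bayes_update(prior, likelihood, observation):
    """prior: {hypothesis: prob}; likelihood(obs, h) -> P(obs | h)."""
    unnormalized = {h: p * likelihood(observation, h) for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: u / total for h, u in unnormalized.items()}

# Invented example: after seeing one "heads", a two-headed coin becomes
# more likely than a fair one.
prior = {"fair": 0.5, "two-headed": 0.5}

def likelihood(obs, h):
    p_heads = 0.5 if h == "fair" else 1.0
    return p_heads if obs == "H" else 1 - p_heads

posterior = bayes_update(prior, likelihood, "H")
print(round(posterior["two-headed"], 3))  # 0.667
```

The uniformity is the interesting part: nothing in `bayes_update` knows anything about coins.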
The conclusion from such statistical theorizing and building, however, seems to be that this is not something that humans could plausibly be carrying out (because anything exact would be intractable). Hence my reference above to heuristics, and perhaps the reason that we can still be so easily fooled by crude signals. This also connects to your characterization of the link between color and capacity for action (within the context of traffic signals) as statistical, though I won't try to develop that here.
Regardless, reading will of course be dependent on our models of the world, and can be manipulated by misunderstanding. You mention GPT-3, which provides many such examples. In the absence of any prompting to that effect, many people would likely assume that various texts by GPT-3 were written by a person, and might proceed to interpret based on that assumption. Moreover, such interpretations might be "correct" within the confines of their assumptions. Upon learning that such text was in fact produced by an anthropomorphized computer program, many would similarly assume that the machine must in some sense be conscious or otherwise capable of various feats. Again, such an inference is not necessarily wrong within the confines of its assumption, though that is only because most people would be wrong about what they believe is possible or impossible to do with systems trained on large amounts of text.
In that sense, the ability to write coherently is something that, up until now, has been quite hard to fake. Because it must be learned, writing was a clear signal of a certain level of cognitive ability, and other abilities that could reliably be expected to come along with that. Now that GPT-3 exists, of course, people will need to recalibrate, though it seems likely that most people will remain perpetually behind in making truly accurate inferences, given the pace at which things are changing. Nevertheless, even if we do successfully adapt our conscious machinery in interpreting texts, it seems highly likely that we will continue to be "fooled" to some extent by convincing text. Just as we can't help but blush, and can't help but read blushing, we likely can't help but read text as meaningful, in the sense of having had a consciousness behind it, even if we then follow that with a more rigorous or creative reading.
PM
October 9, 2021
Text
This is incredibly important territory, not least because almost all realistic games involve more than two players (even if one of the additional players might sometimes be a different version of oneself).
You primarily highlight the situation where a naive player does something foolish, and another player is able to exploit it, and this is clearly something that does occur; winning may come down to being the one who happens to go next, and such a win may end up feeling (at least to the player who does not win) undeserved in a way that would not be the case when a player loses a two player game in a similar way.
However, I think this also arises in a way that has more to do with the fact that the goal in a multiplayer game is fundamentally undefined in a way that it is not for a two player game. In the latter, clearly both players are just trying to win (at least, let's make that assumption for the sake of simplicity). The same is true in a multiplayer game, at least initially, but it's far from obvious that that should remain every player's goal as the game progresses.
From a game theoretical perspective, that is the right answer. Everyone should basically be trying to get the highest score at all times, no matter what their positioning. In practice, however, coming second in terms of score may mean relatively little (horseshoes and hand grenades, as they say). The winner gets to win. Everyone else is basically a loser (although not coming last may still be worth something).
There are three very particular ways in which competing objectives arise. The first is vengeance. Especially for a player who gets screwed early on, getting revenge on the player that screwed them over may become the primary objective. Coming last may be a small price to pay for the chance to deny victory to one's adversary.
Second is simple collusion for rebalancing. At least in my experience, it is a common situation for there to be collaborative opposition to whoever happens to lead early on. Especially in three player games, it is very hard to avoid the potential for king-making scenarios, in which the player who will ultimately lose, will nevertheless have the ability to dictate who will win. Although not every player will react this way, it seems fairly common to want to punish the winner (and so of course remaining in second place until the very end becomes a common strategy).
Finally there is the metagame, as I commented briefly on earlier. Winning the most games over an extended series of play may become more important than winning any one game, and so using a lost game to gain favour with another player may become valuable in itself.
All of these dynamics clearly spill over to other aspects of life, and it could be worth exploring how they might apply to writing in particular.
Stupid Leverage
Today I want to talk to you about a concept I’ve been developing called “stupid leverage.” First, let me say where it doesn’t apply, because I think those cases are clear and put us in the right game-theoretic mindset.
So, in a player-versus-player (PvP) game, like chess, let’s say (chess doesn’t have any randomness), playing optimally is always the best decision. Now, we don’t know what optimal play is for chess, and in fact we don’t even know if White is actually at an advantage; we can’t prove it, though I’d be willing to put down a lot of money to say that it is—of course it could be mental. But chess is kind of analogous to checkers, and checkers we actually do know: it has been solved, and optimal play by both sides leads to a draw. And if you play sub-optimally against someone who’s playing optimally, you’re just not going to do that well; you just can’t beat them, because they’re playing inexploitably. And when I say inexploitably, I mean they’re not, basically, leaving money on the ground; they’re not doing something where, by reacting to it, you could get a leg up on them in the game.
Now, if you start scoring things—so, for instance, you start scoring games instead of just by winning, also by how many pieces you took—then there are ways that if someone else is playing sub-optimally, you could end up winning “harder” by playing sub-optimally, i.e., in an exploitable way that you think your opponent won’t exploit. But as long as we’re talking about binary winning, this is never the case, because there is always an optimal method to play a game in any of the games that we can think of, basically, to be very rigorous, in games where there are a discrete set of moves, and where every move determines the next game state, and there is allowed to be randomness that is part of that sampling process, but basically in games that work like state machines, the way we think about them. There are always optimal strategies, as long as these games are finite. (And I’m not talking about “Finite and Infinite Games” by James Carse, I’m talking about games where we have an assurance that they are going to end.) Because if you don’t, then you get into this undecidability territory, but basically if you can define the game as a state machine, you’re going to end up in a place where there’s an optimal strategy to winning.
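The claim that finite state-machine games always admit optimal strategies can be made concrete by backward induction: solve the terminal states, then work backwards. A minimal sketch, using a simple subtraction game rather than chess or checkers (the game and its rules are chosen purely for illustration):

```python
from functools import lru_cache

# Backward induction on a finite state-machine game: take 1-3 stones per
# turn; whoever takes the last stone wins. Memoization makes the full
# game tree tractable, and every state gets a definite win/lose label,
# which is exactly the "there is always an optimal strategy" claim.

@lru_cache(maxsize=None)
def current_player_wins(stones):
    """True if the player to move can force a win from this state."""
    # A state is winning iff some legal move leads to a losing state
    # for the opponent; a state with no moves (0 stones) is a loss.
    return any(not current_player_wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

# With takes of 1-3, the player to move loses exactly on multiples of 4.
print([n for n in range(1, 13) if not current_player_wins(n)])  # [4, 8, 12]
```

The same recursion, with a chance node added for dice, covers the games with randomness mentioned above; only the state space size changes.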
And the interesting thing though is that in games where there are more than two players, there’s a situation you can end up in where playing optimally doesn’t ensure a win anymore. The easiest place to see this is, let’s say I’m Player One and Player Two is doing something incredibly stupid. And then Player Three rushes in to exploit it. And as it turns out in this game, whoever scores the most points against their opponents wins. Well, playing optimally is no longer going to be good enough, because I’m playing in a way where I’m assuming someone is going to punish me, if I’m punishable. But optimal, mathematically, the way we tend to think about it in game theory, is against any possible strategy, right, so my assumption is that other people could suddenly shift their strategy and start exploiting the crap out of me, and I need to always be careful to guard against that. But in this case, you can design a scenario where I’m playing defensively, but other people are playing incredibly offensively, leaving themselves open. Maybe because one of them doesn’t “get it,” and the other one gets that the other person doesn’t get it, and they’re both kind of ignoring me.
And if that happens, I’m not going to get the highest score, because Player Three will take advantage of Player Two to the maximum ability they can and will rack up all the points. Now, the clincher about this is that the losing strategy in this individual case may still be “optimal”. The reason why optimal strategies could consistently lose is because we’re in a certain part of the strategy space that humans occupy. And this would be true for any other biased part of the strategy space, but the main bias we care about is the way humans think, because those are the kinds of games we probably care about most.
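A deliberately silly simulation of this scenario (the mini-game and its point values are invented for illustration): a “safe” player who cannot be exploited still loses the race for highest score once an exploiter starts feeding on a naive player’s blunders.

```python
# Toy three-player points race. "safe" banks a guaranteed point per round
# and cannot be exploited; "naive" blunders its opportunities away; the
# "exploiter" feeds on the naive player's blunders. Inexploitable play
# is not enough when someone else racks up points off a third party.

def play(rounds=10):
    scores = {"safe": 0, "naive": 0, "exploiter": 0}
    for _ in range(rounds):
        scores["safe"] += 1       # defensive: small, guaranteed gain
        scores["naive"] += 0      # leaves the opening, gains nothing
        scores["exploiter"] += 3  # harvests the naive player's mistakes
    return max(scores, key=scores.get)

print(play())  # exploiter
```

The numbers are arbitrary; what matters is only that the exploiter’s per-round harvest exceeds the safe player’s guaranteed income, which is the structure of the scenario described above.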
And the kicker here is that we expect opponents to play consistently, so it would be crazy to some extent to imagine that Player Two would do something really dumb consistently to set us up, unless there is some way they could eventually scoop it back, turn it all around, and win, and theoretically that strategy has been cashed in; that’s been priced into the optimal strategy that I’m using, and I’m guarding against the possibility that they could flip and change. But there are plenty of strategies that people try to pull off that don’t work, that you literally just could not recover from, because they don’t understand the rules or because they had an idea that they aren’t capable of executing: there are lots of reasons why people do dumb things that aren’t optimal.
And once one person is suboptimal and other people are taking advantage of it, you’re dragged into the sub-optimal space if you want to win. And here’s where the game becomes interesting: it becomes interesting because you’re trying to find the specific subspace of strategies, all of which are suboptimal, that you’re playing in, because if you know the subspace of strategies that people are messing around in that is dragging you into this sub-optimal style of gameplay, then you can redefine the game to only occur in that subspace of strategies, and then once again there’s an optimal strategy, and you can win using that optimal strategy. The problem is, and the reason why this doesn’t just end up with people finding a small subspace of strategies and game theorizing it themselves into a new optimal solution, is that people tend to saturate a space of strategies… Well, let’s say a strategy space is small enough to be saturatable; then people tend to break the kinds of strategies they use into further sub-optimal strategies that other people aren’t familiar enough with, that break their rhythm, that even though they’re suboptimal other people don’t know how to exploit properly. And this kind of creates a [garbled] treadmill of sub-optimal strategies, and we call this property of these kinds of subspaces anti-inductivity.
Another situation you can end up in, is that the space of suboptimal strategies is so ridiculously large, that you can’t really know the kind of strategy people are using ahead of time. So instead, you try to see patterns that are gonna give you some kind of understanding of the behavior that you’re going to get. And then you try to work with that. And in those cases, often, I mean you could still call this anti-inductivity, to some extent, because I’m still going to try to exploit the sub-optimal patterns in my opponent’s strategy, but it is in some sense a very different kind of anti-inductivity, because in a vast space which has only been sparsely populated by real attempts, people have only tried a very few of the possible strategies, and there are an incredible number of strategies that you could draw from, so you end up not being able to actually create a proper taxonomy. And so people end up focusing on strategies that are “bucketable.” And then, focusing on finding opponents that fall into a certain bucket and exploiting them. And this is a very common thing, right. Predators don’t really care about exactly the species of animals they catch, but they’re looking for species of animals that have certain kinds of weaknesses that they can exploit in order to catch them. Now the kinds of animals that exist—there’s a lot of them, but it’s still a pretty damn finite space. Evolution is slow, so plenty of predators aren’t super specified to a certain kind of ecological niche, because they’re really in the middle of being specified and everyone’s changing at the same time, Red Queen problem kind of stuff, but there are plenty of things where this is the case.
I would argue that this is more or less the case with writing, where I think there are an incredible number of ways to write an essay. I think we tend to be forced to write essays for people who are reading essays, and to try to thread the needle of uniqueness and understandability, and in this case the predator is actually the reader, looking out for new essays to read, and you’re trying to produce something for the predator to catch. And you have to produce something that they’ll recognize as prey. Your competition, as a creator, is other people who are also trying to produce stuff that these readers are going to end up selecting—right, this is the classic selection game. And the problem is, we end up not filling out the space of possible essays, because it’s often much, much easier to fight over a recognizable niche than to get people interested in trying to explore hard-to-recognize niches.
My point about stupid leverage is that in human affairs, optimal play is selected against because most of the time, most games are about exploiting something that has already put everyone into sub-optimal play (almost any way you could think about the game). And that’s basically because usually, the sub-set of strategies that other people are using for a contested resource is such a small part of the possibility space. Almost all of the free energy you could end up exploiting—all of the places where inventing a new strategy could end up giving you an advantage—has to do with the fact that your enemies are leaving something on the table.
Text
Eggs
A lovely example of market inefficiency from Nature's Metropolis, William Cronon's great book about the economic development of Chicago and the surrounding region during the nineteenth century:
Discussing the lack of readily available information about demand and supply, Cronon quotes a plaintive letter sent by a storekeeper in Illinois to a potential seller in Iowa.
"We have a great demand here for Eggs, and hear that there are plenty of them in your place, and request you, to send us 5 or 6 Barrels of them imediately... But [noting their fragility] you must pack them in plenty of oats, for which you may charge us."
We're so used to thinking of the challenge of business as trying to find adequate demand for what you are producing that it's comical and sad to think of an eggless community writing around in search of someone to sell them breakfast.
Three modern parallels come to mind, though all are quite different: 1) Trying to obtain a sought-after scarce commodity, such as a PS5. 2) Various experiences during the pandemic of trying to find a store that still had a supply of toilet paper or hand sanitizer or masks or some other product that temporarily became hard to find. 3) Also during the pandemic, the challenge of hiring labor, as was apparently experienced by many businesses as things started to recover.
A key difference however, in all these cases, is that the desired commodity was presumably readily available, just not for the "usual" price. It's a whole different matter to think of an isolated community having a dearth of an everyday commodity like eggs, and not knowing where they might be obtained.
The kicker, Cronon continues, is that "there was good money to be made in such a letter, but only if one could get the eggs to their would-be buyer before anyone else. All too often, a merchant went to the great expense to send goods in the direction of a recent rumor, only to find the market glutted by the time they arrived."
When information travels slowly, it can be hard to find what we need, but also risky to send a costly response.
Text
After much delay, I thought I would pick up on the point you made here about multiple levels of optimizers. As a caveat, I feel like I'm not sufficiently conversant in some of the terminology that is being deployed (ironically, since the original post is about clearing up terminology), and I'm especially unconvinced it makes sense to speak here in terms of base and mesa optimizers. Nevertheless, I'll just try to build things up starting from where I see the light.
For me, the cleanest example of a reference system for working with many of these concepts is a multiplayer strategy game (think Risk, Catan, all that has grown from that). One of course could substitute more stripped down, precise examples (e.g., Chess, Poker), but I like the fullness of the example provided by games with a fair bit of flavor.
Games such as these provide particularly nice examples, if we take them at face value, because there is a well defined objective for each player (basically, get the highest score), and a well defined set of rules. In practice of course things are somewhat more complicated than that. One relevant question that arises is, how does one player manage to win at such a game?
In many cases, luck may be one component. Sometimes the dice just happen to land a certain way. There is also some component of what we might call cleverness or skill -- basically the ability to see more moves into the future, or to better anticipate what others will do, or to work the social dynamics to encourage players to take certain actions, etc. Finally, there can also be tremendous value in knowing the rules. This might seem trivial, but is actually quite important in practice, in part because the rules for such games can often be quite baroque. (For a simpler example, one can imagine someone losing a game of chess because they didn't know that capturing en passant was a legal move).
There are two things which make such settings fundamentally unusual. First is the fact that such games (again, almost but not quite) exist as closed worlds, where the objectives are perfectly well defined and don't bleed over beyond the start and end of the game. Second is the fact that there actually is (usually, hopefully) a well-defined set of rules. In principle, one can simply consult the rule book to determine the proper ruling for any edge case, though the expertise of a rules lawyer may be required to parse them in some instances.
There are, of course, multiple ways in which this idealization tends to break down in practice. The first is that the in-game objective isn't necessarily a player's only true objective. For many (most?) people, there is likely a higher order objective, which is something like have fun (though some people really just do want to win), or perhaps to create a positive experience for all, such that there will be additional games in the future.
Second is the existence of the metagame. This can be made quite formal, and I think traces its roots to trying to make sense of iterated prisoner's dilemma games, but has a much more interesting texture in the board game arena. Assuming you're going to play the same game (or even a variety of games) with the same people over an extended period of time, doing well in general may become more important than winning any particular game, and this may influence one's decisions with respect to things like trust and even apparent competence. (Of course for the true aficionado, it only adds to the challenge when everyone else already expects from the beginning that you are going to inevitably win).
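The iterated prisoner's dilemma traced here can be sketched in a few lines, using the textbook payoff matrix (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector against 0 for the exploited cooperator); the strategies shown are the classic ones, not anything specific to this letter.

```python
# Minimal iterated prisoner's dilemma: the formal root of the "metagame"
# intuition. Trust-minded play (tit-for-tat) sustains the cooperative
# payoff against itself, while mutual defection grinds out far less.

PAYOFF = {("C", "C"): (3, 3), ("D", "D"): (1, 1),
          ("C", "D"): (0, 5), ("D", "C"): (5, 0)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def series(strat_a, strat_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(series(tit_for_tat, tit_for_tat))      # (30, 30)
print(series(always_defect, always_defect))  # (10, 10)
```

The board game version has far richer texture, as noted above, but the underlying logic is the same: when the series matters, reputation within it becomes a resource worth more than any single round.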
Third, for any reasonably complex game, it is naturally impossible (or at least very hard) to write down the rules with sufficient clarity and detail such that no ambiguous cases remain. It is a delight to find a rulebook that is sufficiently clear and comprehensive, but the more typical experience is finding gaps and confusion. Community-driven solutions will often emerge (FAQs, etc.), but in the moment, there may be no obvious way to resolve a discrepancy in the rules, and such a discrepancy is probably most likely to be noticed when at least two parties have conflicting investments in the outcome. A sensible strategy might be to have a house rule in place for how to resolve such disagreements (e.g., voting), but already we are slipping beyond the rules as defined on paper. Turning to the best accredited rules lawyer provides another alternative (perhaps they could say what would usually be the case in a game by this designer, for example) but of course they must be careful not to overextend themselves, lest they risk their credibility.
I'm getting somewhat more bogged down in this idealized scenario than I intended, but I do think it is useful to have in the back of one's mind. The main thing I want to suggest here is that the key differences between such games and everything else is that outside of such constructed scenarios, a) objectives are less singular and more varied; b) the rules, such as they are, are typically not well defined; and c) everything is in game.
It is definitely useful to consider a system like the stock market, which has many similarities to a more traditional game, in so far as there are clearly many players, most of whom are pursuing more or less the same objective (making as much money as possible, over some particular time horizon, within some acceptable level of risk, or maybe just doing better than the next person), and what seems like a set of rules which governs everything. Unfortunately that is very much a simplification because a) the lack of a clear end state makes any evaluation transitory and contingent; b) the prize in this case has much more significance than an in-game score (to the point where we can meaningfully talk about the overall effect of people getting richer); and c) the rules are not only not well defined, they are not even fully established, and are subject to revision, clarification, and the filling in of empty spaces, for a cost. This is somewhat important in the stock market case, but critical in many more typical scenarios.
I'm finally getting closer to the thought which originally motivated me to write about this, and which now feels like a sidebar, but one thing which I think is interesting is that the notions of efficiency / adequacy (to the extent that I understand them) seem to depend on something like a kind of utilitarian premise, in which the overall goodness of a configuration can be simply computed as a sum of individual value functions. If we accept such a premise, then one way of interpreting efficiency / adequacy is whether there is another possible configuration of people's choices in which everyone would be better off. If so, then it would seem obviously preferable to move towards that, though there may nevertheless be things which prevent such movement (making it an inadequate equilibrium?), as you and Crispy have discussed above. The reason I think this requires some sort of big-U thinking, is that it seems computationally intractable to evaluate possible states if the overall value depends on individual values which in turn depend on the value of all others. At a minimum, it would seem to be far less likely that one would be able to find any state in which literally everyone would be better off than wherever one presently is.
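The intractability worry can be made concrete: checking whether any other configuration leaves literally everyone better off means, in the worst case, scanning every configuration. A toy sketch (the mini coordination game and its payoffs are invented purely for illustration):

```python
from itertools import product

# Brute-force search for Pareto improvements. With n players and k
# choices each, there are k**n configurations to check, which is what
# makes the exact version of this computation blow up as n grows,
# especially when each player's value depends on everyone's choices.

def pareto_improvements(current, choices, value):
    """All configurations strictly better for every player than `current`."""
    n = len(current)
    base = [value(i, current) for i in range(n)]
    return [cfg for cfg in product(choices, repeat=n)
            if all(value(i, cfg) > base[i] for i in range(n))]

# Invented interdependent values: each player scores their own choice,
# plus a bonus when everyone coordinates on the same choice.
def value(i, cfg):
    return cfg[i] + (5 if len(set(cfg)) == 1 else 0)

print(pareto_improvements((1, 1, 1), (1, 2), value))  # [(2, 2, 2)]
```

Even in this three-player toy, only one of the eight configurations clears the "everyone strictly better off" bar, which suggests how rarely such a state will exist, let alone be findable, in anything realistic.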
That of course is a high bar, and perhaps too high. In some sense the "magic" of markets is Adam Smith's basic insight that, under the right conditions, everyone pursuing one's self interest can make everyone better off on average (even if not actually better for everyone), and perhaps that is all we want from efficiency. The giant caveat here is the "under the right conditions" clause, which finally brings us to the question of where such systems come from.
In the case of board games, some individuals have taken it upon themselves to create systems of rules, and various groups of players have chosen to devote their time to playing within those rules. In the larger world, it is more the case that we are all operating within systems which we did not opt into, and which no one person created, often governed by systems that predate anyone currently living.
It is sometimes convenient to factor this into a set of rules (the game), and the people within it (the players), but I think we lose a lot in that simplification. Rather, we have many individuals pursuing their objectives (trying to push the world towards a state that they would prefer, at least according to their preferences in that moment). Rules, for lack of a better term, shape what actions will be taken, but they are quite different from the rules of a game in the more limited sense. Yes, external forces (instantiated by rules followers) will eventually push, restrain, and even murder you should you push too hard against these rules, but by more subtle tactics, almost any one of them could be changed. Most importantly, the changing of the rules is fully within the game!
This has not been as clear or coherent as I would like, but I want to try to bring it to a close with one final extended point. What I find remarkable is that we have such systems in place at all, and that in many cases they are productive of well-being, even though they are no doubt flawed. More remarkable still, most of these systems came about through the same mechanisms as any other actions (people pursuing a particular vision of the world), and yet the most important effects of the most important actions of those particular people extend far beyond any specific benefits they could personally experience.
As a simple example, we could imagine a very wealthy person conspiring to change a rule so as to enable them to get slightly wealthier, and yet such a rule change might have vastly greater consequences for the wealth or poverty of others, both now and in the future. In fact, it's entirely possible that such a person would view that broader intended consequence as their primary purpose in wanting to bring about such a change, though this can always be viewed as a pursuit of their preferences, and of course in most settings, they would have very little reason to know with certainty what the broader effects would be.
To conclude, those who do strive to modify the rules in such a fashion (your base optimizers, perhaps, though again I don't love the terminology) are presumably aiming at a set of outcomes that are assumed to be an appropriate target based at least partly on extracting a utilitarian ideal from the behaviour of individuals operating within existing systems. This is not even necessarily inappropriate; it just feels like somewhat of an unexamined assumption. We believe we know what people want based on how they behave within certain complicated systems, and may try to shift the terrain such that they will get more of it, even as they (as we) continue to exploit opportunities in the evolving system in pursuit of a particular set of objectives.
Inadequate Definientia
I'd like to attempt to clear up the bundle of "anti-inductive", "inadequate equilibria", and the implied "adequate equilibria" all at once. I think they're all part of the same confusing knot, for which specific kinds of teleological blindness are to blame.
"Inadequate Equilibria" is a term coined by Eliezer Yudkowsky in the book of the same name. It describes situations in which a certain part of a system has reached an "equilibrium"—used casually to specify a stationary state that is difficult to escape from under certain assumptions, similar to a stable equilibrium—that EY feels is "inadequate". I think the definition of inadequate here is very vague, and digging hard to pretend that it's coherent seems a bit silly. However, what I think EY is pointing at is that the games certain people claim to be playing, e.g. the Bank of Japan claiming to be regulating Japan's economy for maximum profit and health, are often not the games they're actually playing. This is an extremely broad set of phenomena, but EY restricts himself to games where the players are kinda sorta playing the game you think they're playing, but with some caveats/extra rules/unexpected incentives.
I enjoyed the book and I think it's a decent intro to spotting "the games people are actually playing and when you should trust their claims", but the main issue is that EY doesn't seem to have any criteria for what makes the equilibria inadequate vs. the people telling you the rules just straight-up lying. Clearly both happen.
To make this worse, EY conflates the fact that certain games don't have the incentives you would want them to have (if you really wanted them to complete their stated goal) with the fact that certain games don't have the right conditions to make solution finding highly efficient, e.g. because one party has a monopoly.
Let's clear this up.
The latter case, when a game is currently blocked by something within the game that could be changed without breaking the rules or changing the game, is the definition of market inefficiency. Let's just call it "inefficiency" because people get fussy about what counts as a market.
The former case, where you could redistribute or inject resources into the game, but the game isn't set up to incentivize what you—the protagonist of Reality—want from it, is the definition of surrogation, i.e. "you're optimizing one thing and dreaming that it correlates with another".
Easy peasy.
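Surrogation is easy to sketch in code (every number here is made up for illustration): a proxy score rewards word count as a stand-in for essay quality, and an optimizer that only sees the proxy happily pumps the score while the thing we actually wanted gets worse.

```python
# Hypothetical game: word count is the scored proxy; quality is the
# real (unscored) goal. Filler words raise the proxy but dilute quality.

def proxy_score(substance_words, filler_words):
    # What the game rewards: total word count.
    return substance_words + filler_words

def true_quality(substance_words, filler_words):
    # What we actually wanted: substance, diluted by filler.
    return substance_words - 0.5 * filler_words

substance, filler = 10, 0
before = (proxy_score(substance, filler), true_quality(substance, filler))

# A proxy-maximizer's cheapest move is always "add filler".
for _ in range(10):
    filler += 1

after = (proxy_score(substance, filler), true_quality(substance, filler))
print("proxy score:", before[0], "->", after[0])   # 10 -> 20
print("true quality:", before[1], "->", after[1])  # 10.0 -> 5.0
```

The proxy doubles while the goal halves — "optimizing one thing and dreaming that it correlates with another", reduced to two functions.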
If you think "surrogation" is often subjective, because not everyone will agree on what a game is "for", you're right! But if you think "efficiency" is objective, you're wrong: real systems are way more complex than most toy examples of efficient markets, and we never really know "how good this game is at finding solutions" in comparison to the best possible game. Thus, "adequate equilibria" are judgement calls.
Sidenote: Despite my immense respect for LessWrong and the Rationalist community more broadly for incentivizing open, critical discourse and the construction of new and useful vocabulary, a lot of this vocabulary ends up secretly being about "what I think system X should be/claims to be/is about vs. what it really is". There's often lots of interrogation of "what it really is" but relatively little of "maybe it's complicated how to define what something is for, and most names/descriptions are just shorthand or for convenience?" I call this "the teleological lens". Everybody tends to fall into it, because communication is naturally about what you want from people, but literalists tend to think themselves immune because they define things so clearly. Most rationalists have at least a literalist streak, because it is what starting from first principles demands. Literalists are always in danger of taking other people's words too much at face value and completely missing the point, a complementary error to most people's inability to see literal meaning.
Finally, we reach "anti-inductive" which, in the article people tend to point to, seems to be something along the lines of "games where common knowledge is priced in". While this is a perfectly reasonable definition to inspire discussion, I think it's difficult to assess for many reasons, two of which are: (i) how do we agree what's common knowledge at time T? (ii) how the hell do we know it's priced in?
I propose the following definition for anti-inductive play:
anti-inductive (adj) — describing a property of certain kinds of game play where moves, on average, (i) leak information about your strategy and (ii) knowing such leaked information allows for the creation of counter-strategies.
In other words, anti-inductive plays are "scoopable", not just because effective counter-strategies exist (ii) but because their construction is a function of observing other strategies at play (i). Without both stipulations, pure randomness might be the winning strategy, in order to ensure eventually hitting on an effective counter-strategy.
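Both stipulations can be seen in a toy simulation (strategies and weights invented for illustration): a biased rock-paper-scissors player leaks information through its move frequencies (i), and an observer constructs a counter-strategy from exactly that leak (ii), beating the 1/3 random baseline.

```python
import random
from collections import Counter

random.seed(42)

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def leaky_player():
    # (i) This strategy leaks information: it overplays rock.
    return random.choices(["rock", "paper", "scissors"],
                          weights=[6, 2, 2])[0]

def counter_strategy(observed):
    # (ii) Build a counter from the leaked information: play whatever
    # beats the opponent's most frequently observed move.
    if not observed:
        return random.choice(list(BEATS))
    most_common = Counter(observed).most_common(1)[0][0]
    return BEATS[most_common]

observed, wins = [], 0
ROUNDS = 1000
for _ in range(ROUNDS):
    them = leaky_player()
    us = counter_strategy(observed)
    if BEATS[them] == us:
        wins += 1
    observed.append(them)

print(f"win rate against the leaky strategy: {wins / ROUNDS:.2f}")
```

The counter-player converges on "always paper" and wins roughly as often as the opponent plays rock — well above the 1/3 a random player would manage. Remove either stipulation (no leak, or no exploitable counter) and observation buys you nothing, which is why randomness becomes the fallback.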
Usually equilibria are sustained by anti-inductive play, in which different parties are constantly developing counter-strategies and counter-counter-strategies, keeping the system locked onto some other fundamental factor, e.g. the actual overhead of a service during a price war. Adequate equilibria are ones where the byproduct of the game matches the in-game score well enough, and is optimized quickly enough, that the person saying "adequate" is happy with it. This is usually a result of anti-inductive play (e.g. web browser features that are useful spreading to all browsers) but not always (e.g. playing the cooperative game Hanabi when everyone is invested in doing well). Inadequate equilibria simply describe situations in which the system is "stuck" from the point of view of the speaker.
This might seem like "ruining the magic" of the term: "What do you mean it's all just about whether people like stuff?" I remember being struck by the "aha!" feeling of EY's description of inefficient and misaligned systems, and I don't think this at all changes his guide to spotting inefficiencies and misalignments from one's own point of view. It does require us to ask "inadequate to what?" and to answer with how we imagine the current system could be (i) more efficient or (ii) better aligned. This seems like a low bar, and it's one EY certainly passes, but which a lot of people referencing the concept eschew. Real systems are complex and we can almost only make relative judgements; efficiency and alignment give us axes along which to make those relative judgements comparable to each other. Anti-inductivity gives a sketch of the most common mechanism for keeping efficiency and alignment stable.
The video game industry is at an equilibrium that is inadequate, by my lights, to create as many story-rich games as I would like, since investing more in narrative design doesn't yield linear (or even predictable) returns. Anti-inductive play forces us to ask: why hasn't someone scooped up the free energy from my willingness to buy such games?