Working As Intended: The Politics of War Shooter Games
Hi and welcome to Working As Intended, a series which delves into how game design interacts with players to create our experiences, intended or not.
Where previous WAI entries examined specific mechanics, this time we look at the more macro-scale design decisions of war shooter games--using Battlefield V as an example--and how they shape the inherent politics of these games in particular.
Politics, in MY Videogames?
A Little Bit of Them
Before we move any further, we need to establish how politics works its way into videogames, and not just in the obvious way that some clearly political games push certain messages.
In other media like books, movies, or photographs, it’s already well established that a work’s creator will inevitably leave a bit of themselves in their work, intentionally or not. Each medium has different aspects which can betray its creator: what is included in the frame of a photograph, how certain perspectives are handled in a book, the editing style a film adheres to, and so on.
A developer’s own perspectives likewise color their games, and games can show it in ways unique to the medium. In this post, we’re going to focus on three things in particular: inclusion, gameplay mechanics, and--to get a bit meta--the very fact that the game exists at all.
So what’s in here, anyway?
Inclusion in videogames is a bit different from other media. Whereas inclusion in books, photos, or films changes what an audience might see, inclusion in videogames is more pressing in that it can fundamentally change what an audience must be. For example, it is one thing to have a minority character in a film go through discrimination, but it is another thing entirely to have a player play as said minority character, and have their actions in-game directly affected by the discrimination.
More significantly, it can mean giving a minority player--who in some genres of videogames never saw themselves in the characters they were asked to play--a way to finally immerse themselves with a character they could identify with.
Inclusion can extend beyond race and sex, of course: in war games in particular, another key way inclusion is expressed is in which war-fronts, battlefields, factions, and perspectives they choose to portray and let the player experience. Either way, inclusion is particularly powerful in videogames because the player is made to undergo these experiences themselves.
Meaning in interaction
Even if two developers choose the same things to include in their games, how the player is made to experience them can fundamentally change the implicit message carried by the game.
After all, game mechanics fundamentally control (and limit) how the player can play: One horror game may use clunky, difficult controls to mechanically disempower the player, creating a more desperate atmosphere. Another may use smooth and responsive controls, creating a more fast-paced, action horror where player action is more powerful.
Game mechanics can also be important when considered in the larger context of their genre--deviation from genre norms in mechanics may indicate a conscious developer intent for the player to similarly experience something deviating from that genre’s normal experiences. This will be especially important in our discussion, as the action war shooter is a relatively well-developed genre, with a long history of mechanical development.
A Little Bit of Us
Beyond the minutiae of what’s part of the game, even the existence of the game--and the fact that it chose to include what it did--can carry a political message. This is especially true for AAA game development, which produces games explicitly for the purpose of being sold to a mass market.
This matters because, as much as the developers are putting a little bit of themselves in the game, their creations are still meant to be consumed...meaning that the game was designed to have a little bit of US in it as well.
Sometimes, as much as we like to point fingers at creations we dislike, this also means they are an uncomfortable mirror of what others perceive we want. And sometimes, that’s an uncomfortable truth we don’t like to admit. Especially for war videogames.
A Political Battlefield
The creation of this post right after the launch of Battlefield V is no coincidence. Intentional or not, Battlefield V became the straw which broke the proverbial camel’s back on the discussion of politics in videogames. The gaming industry had long been simmering under the perceived encroachment of "pushed” political messages into what many would like to believe is their beloved, totally apolitical hobby. But for a while, the AAA (large studio) development scene was largely considered “safe” from political pandering due to its corporate backing, with most of the ire being directed towards smaller “SJW” developers with the creative freedom--and lack of shareholders to please--to push their politics.
But Battlefield V changed all that. The reveal trailer overtly focused on a prosthetic-limbed female Allied soldier in battle fatigues, fighting alongside male soldiers on the battlefield.
[Embedded YouTube video: the Battlefield V reveal trailer]
This female soldier also serves as the game’s chief box art protagonist, a role filled in all previous Battlefield titles by men. The trailer and box art were the first clear declaration from a huge AAA developer of first-person shooters that it would break corporate industry norms and be openly progressive. Though Call of Duty: WW2 was technically the first major AAA shooter to offer playable female soldiers, Battlefield V was the first to openly put them in its promotional and main box art material.
And people were not happy. Below are some of the most popular Youtube videos on the topic (those view counts are fairly large for gaming channels), whose thumbnails alone make clear what they’re displeased about:
I swear politics has nothing to do with it! (uploader names removed for privacy)
Upon release, the game would draw further ire for a campaign mission which replaced the historically Norwegian male commandos with a single female soldier--leading to further criticisms of “inclusive revisionism” alongside the playable female multiplayer soldiers. Keep this example in mind for later.
However, most of these criticisms, content to merely yell about “social justice” and “inclusion,” fail to adequately unpack the whole of the politics underlying Battlefield V. There’s much more than just female soldiers to discuss.
Inclusivity Wars
Going in the same order as discussed in the first section, let’s start with Inclusion, and how Battlefield V fits into WW2 and the greater genre of war shooter games.
Taking Credit
War games are particularly difficult when it comes to inclusivity. Unlike sports titles like FIFA or Madden, where teams have strict, finite, and well-documented rosters that games can represent, war is large and messy. It is particularly difficult for games portraying multinational wars, where combatants hailed from dozens of nations and men, women, and children from every walk of life became casualties. Such games must then make a conscious decision on what they will and will not portray.
A popular criterion for who deserves inclusion in a war game seems to be one of credit, whether through their accomplishments (such as the overwhelmingly American defeat of Japan) or the price they paid (such as the Soviet sacrifices to victory). Such is the argument given against the inclusion of playable female soldiers in a WW2 game: that female soldiers were a negligible percentage of the military combatants in the war. Further criticism is leveled against a campaign mission called “Nordlys” featuring a lone female commando on a mission to destroy German heavy water facilities...when the actual Operation Gunnerside used 8 male Norwegian commandos and is considered one of the most successful Allied commando actions. In sum, Battlefield V is accused of “inclusive revisionism,” wherein women are not only afforded battlefield credit where none is due, but are actively taking credit for male soldiers’ actions.
Nordlys’ female protagonist and the actual Operation Gunnerside crew (credits 1, 2)
Going off this criterion for inclusion in war games, looking only at total numbers in the military, it’s easy to have such an opinion. Though hard numbers are hard to come by, here are some estimates of the peak strength of some example countries, followed by the statistics I could find on the total number of female members who served. Note that this is NOT a 1:1 comparison: peak strength is a single time point, whereas the female totals are summed across all years--so the true percentage probably changed over time, and this chart likely inflates the share of female soldiers.
Very, VERY rough numbers
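To make that mismatch concrete, here is a minimal Python sketch using made-up placeholder figures (not historical data) that shows how dividing a cumulative total by a single-timepoint denominator inflates the resulting percentage:

```python
# Illustrative sketch only: the figures below are made-up placeholders, NOT
# historical data, chosen to show why the chart's comparison is skewed.

peak_strength = 10_000_000       # snapshot of the military at ONE moment in time
total_women_served = 500_000     # cumulative total across the ENTIRE war

# The "naive" percentage divides a cumulative total by a snapshot denominator:
naive_pct = total_women_served / peak_strength * 100

# A fairer comparison would use the cumulative total of everyone who served.
# Suppose, say, 2.5x the peak strength cycled through the military over the war:
total_everyone_served = int(2.5 * peak_strength)
fairer_pct = total_women_served / total_everyone_served * 100

print(f"Naive  (snapshot denominator):   {naive_pct:.1f}%")   # 5.0%
print(f"Fairer (cumulative denominator): {fairer_pct:.1f}%")  # 2.0%
```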
However, percentages of total participation can be deceiving: despite being only 3% of total military personnel at any given time, Soviet women made up 40% of the paramedics, 43% of the surgeons, 46% of the doctors, 57% of the medical assistants, and 100% of the nurses employed by the Soviet Union in WW2. Though totals are hard to come by for arguably combat roles, about 300,000 Soviet women crewed and fired anti-aircraft batteries. Similar numbers for other countries couldn’t be found, but at least for the Soviet Union it is clear that women made up a significant proportion of specific roles, particularly the medical ones.
This also ignores the undeniably important participation of women on the homefront: in Great Britain alone, 90% of single and 80% of married women were working in war-critical roles such as agriculture, industry, or the military. In the USSR, the sex ratio of men to women fell to 0.6 in the 20-29 age group due to massive casualties of the male soldiers, leaving women to pick up much of the home industrial front. Even in the US, which suffered far fewer casualties, 5 million women would join the workforce from 1940-1945 to fill vacancies left by soldiers, of which 2 million would work in war industries, with about half in aerospace alone--accounting for 65% of its workforce. For reference, just as many American women worked in war industries as American soldiers would fight on the European front. One could keep going, but it’s obvious that without women, many countries’ industries and militaries would have been crippled.
If women were, then, significant participants in WW2, we can better understand the popular criterion for war game inclusion to be based on direct front-line combatants. But even here, the argument runs into a few flaws. If female soldiers are to be excluded based on frontline contribution on the battlefield, then where is the furor over the lack of the Soviet Union as a playable faction (or campaign mission) in Battlefield V? According to a 2000 study officially endorsed by the German Armed Forces Military History Research Office, the Eastern Front alone accounted for about 80-90% of all German military casualties--so the USSR arguably did the vast majority of the “work” for the Allies in Europe--yet they are not included. Or the anger over the lack of Chinese armed forces, which, combining Nationalist and Communist forces, numbered around 5 million at peak strength, rivaling the peak strengths of France and Great Britain? This is not even considering the fact that China suffered the third-highest military casualties of WW2. Where was the outrage over the lack of Japanese forces among the “Axis” faction’s playable genders/ethnicities, when their peak military strength of 6 million was the largest in the Asian theater? Or, even if one were to focus on the American experience, what excuses the lack of the Pacific Campaign, where twice as many American soldiers fought as in Europe?
And we must always remember that focusing on the active battlefields of World War 2 itself could be considered an overrepresentation (something critics love to hate when diversity is brought up): after all, TWICE (!) AS MANY CIVILIANS DIED IN THE WAR AS MILITARY COMBATANTS. Even within the US military (the only one I could find numbers for), 38.8% of enlisted personnel served “rear echelon” duties (remember that women comprised only about 3% of the US’ peak strength, meaning at least a third of all male enlisted personnel)--further diluting how much of a given military was actually involved in the combat portrayed in videogames. Behind every marketable heroic front-line soldier were legions of logistics personnel, industry workers, farmers, and dead civilians.
LOOK AT THOSE ORANGE BARS (credit)
More baffling still is the developer’s decision to rewrite history with the undeniably disrespectful replacement of Operation Gunnerside’s commandos with a lone female one. They could have portrayed historically accurate (and NOT revisionist) campaigns of female Soviet bomber squadrons, British intelligence officers, Ukrainian partisans, or the unsung roles widowed and single women played in defending and providing for their now-men-less homes in the face of approaching occupying forces. These are all important stories of the war, and their exclusion is just as revisionist of women’s participation as the use of a female commando.
And most crucially: where were these critics of revisionism when Call of Duty: Modern Warfare 1 & 2, Battlefield 2, Battlefield 3, and Battlefield 4 (all set in modern times and focused on the American military) excluded female soldiers, despite women making up 10-25% of enlisted personnel across the U.S. military’s branches?
If female soldiers in a WW2 game are inclusively revisionist based on the reasoning of participation, then surely Battlefield V and war games as a genre must also be guilty of exclusive revisionism on multiple occasions, if only for ignoring the civilian tragedy of modern wars. Inclusion, at worst, over-represents, but exclusion, at its least, completely erases. It’s a fine line war games must walk--and hopefully with more consistency to their own internal logic in the future.
Authenticity
Another common criterion for inclusion in war games is “authenticity”--i.e., that any period-accurate aspect is fair game for inclusion. This is commonly given as a defense for the inclusion of rare or exotic weapons, settings, or units in war games. Sturmtigers, exotic wonder-weapons, and experimental prototypes all make for fantastic marketing material. However, authenticity is inherently a criterion contradictory to statistics: authenticity allows for the smallest minority to be represented, whereas that same element would not pass muster by the numbers. This contradiction is not adequately addressed anywhere in discussions of war games, but I think it’s worth exploring.
For example, in Battlefield V alone, there are 49 non-gadget (grenades/launchers) weapons available for the infantry. Of these, 9 lack any source verifying their use in WW2. Using the 300,000 Soviet women in anti-aircraft batteries alone (not counting tanks, pilots, or snipers) as a benchmark, a full 28 of the 49 weapons (57%) were produced in smaller numbers than there were Soviet female soldiers facing combat.
Angry about women, but not about a gun which didn’t ever leave prototype AND wasn’t in WW2?
Though it’s easy to reflexively react to female soldiers in a WW2 game, closer inspection shows glaring logical contradictions that regularly show up in war games. The desire to showcase exotic elements to differentiate a game is just as understandable as the desire to showcase the largest players in a war, but it’s a paradox that developers--especially as they begin to navigate increasing political scrutiny--will have to learn to balance better.
How to Fight a War
Fighting Different Wars
Even if war shooters mostly overlap in the included content, they tend to vary the most in their mechanics. While certainly some core elements that define the shooter genre are shared, all other mechanics as a whole are typically gauged in the industry along a spectrum from “casual” to “hardcore/tactical.” While such terms are colloquially meant to describe the types of player they cater to, they also describe the manner in which war is portrayed in a particular game.
The “Heads-Up Display” of Battlefield (credit)
For example, the Battlefield series portrays a rather “casual” war: body parts do not have individual health points; instead one number dictates whether a player is alive or dead, with no meaningful penalties for lower health. Unlimited medical kits automatically replenish health points without any interaction required. Upon death, players can quickly “respawn,” magically popping back into existence right next to their teammates, having lost no personal resources beyond time. Crosshairs float in the player’s vision, indicating their gun’s firing spread.
Intentional or not, these mechanics portray war as not only completely inconsequential, but also as a power fantasy in which the player is granted magical abilities augmenting their experience.
This contrasts with a game like Escape from Tarkov, which takes a much grittier approach to war: every body part has separate health points, with distinct penalties should it become damaged. Healing supplies are strictly limited, and require lengthy interactions to use. There are even hydration and hunger meters that one must manage in longer play sessions. There are no crosshairs, and weapons have myriad modifications that affect their handling characteristics. Upon death, the player loses everything and cannot “respawn”—they must instead wait ten minutes to deploy into another map completely anew. Gear must be earned or bought, and careless play can lead to permanent losses.
Battlefield doesn’t even have such a health/inventory screen (credit)
Tarkov’s mechanics thus clearly embody a much more high-stakes approach to combat: Choosing to play a game of Tarkov is a careful, conscious decision which requires planning. Quitting early means losing all of your gear. In Battlefield, however, players can easily drop-in and drop-out of games with no consequence.
While it is unlikely that either game is trying to make any overt political statement about war through its mechanics, it is important to stress that, by mediating the manner in which players are allowed to make war, the mechanics clearly portray very different wars.
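Neither studio’s code is public, of course, but a hypothetical sketch can make the contrast concrete: the same event--the player getting shot--encoded under a “casual” health model versus a “hardcore” one. All class names and numbers below are invented for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not either game's actual code): two health models that
# mechanically encode very different portrayals of being shot.

@dataclass
class CasualSoldier:
    """One pooled health number, free auto-healing, no lasting penalties."""
    health: int = 100

    def take_hit(self, damage: int) -> None:
        self.health = max(0, self.health - damage)

    def auto_heal(self) -> None:
        self.health = 100  # health quietly refills; dying costs only respawn time

@dataclass
class HardcoreSoldier:
    """Per-body-part health, finite supplies, and penalties that linger."""
    parts: dict = field(default_factory=lambda: {
        "head": 35, "thorax": 85, "left_leg": 65, "right_leg": 65})
    medkits: int = 2
    movement_penalty: bool = False

    def take_hit(self, part: str, damage: int) -> None:
        self.parts[part] = max(0, self.parts[part] - damage)
        if self.parts[part] == 0 and part.endswith("leg"):
            self.movement_penalty = True  # a ruined leg slows the player from here on

    def heal(self, part: str, amount: int = 50) -> bool:
        if self.medkits == 0:
            return False  # out of supplies: the wound (and its penalty) stays
        self.medkits -= 1
        self.parts[part] = min(self.parts[part] + amount, 65)
        return True

# The same hit means very different things under each model:
casual, hardcore = CasualSoldier(), HardcoreSoldier()
casual.take_hit(65); casual.auto_heal()   # back to full health, nothing lost
hardcore.take_hit("left_leg", 65)         # leg destroyed, movement penalized
print(casual.health, hardcore.parts["left_leg"], hardcore.movement_penalty)
```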
Forced Choice
Though not initially obvious as a mechanic, the fact that Battlefield V even lets the player choose their soldier’s ethnicity and gender has political ramifications of its own.
It is important to first contextualize the mechanic of choice itself—Battlefield V is the first of the series’ 11 games to grant players customization options beyond their soldier’s camouflage. This is significant given the previously noted lack of female soldiers in any capacity in the entries set in modern wars. It follows very closely behind Call of Duty: WW2 (2017), which marked its own series’ first time offering the choice of a soldier’s sex and ethnicity.
Mechanics of choice are also further contextualized by their presence in a multiplayer game—in Single-player games, forced perspectives are to be expected to enforce artistic direction, much in the same way a movie can force characters on audiences (imagine if each moviegoer could choose if they would see Harry Potter as female or male!). However, multiplayer games have increasingly focused on player expression, whether through the weapons they use, or the cosmetic details of their soldiers. As multiplayer games inherently rely on emergent player behavior to create meaningful narratives, freedom of player expression is consequently key to enabling as much emergent gameplay as possible.
In fact, in Battlefield games, such moments of zany emergent gameplay deriving from player freedom have their own community-made name: a “Battlefield Moment.”
Thus, beyond any consideration of “proper” inclusion, simply offering players the choice of representative characters makes such multiplayer games more inclusive (and immersive) for those players and potentially expands the possibilities for self-driven narratives. Obvious non-Battlefield V evidence of this can be seen in the creativity of media based on the ARMA games, which allow players to customize even their faces, leading to greater narrative possibilities from that choice alone.
(Picture of funny ARMA moment with face textures)
However, sometimes even offering a choice can contradict a war game’s mission to be historically accurate or authentic. While one can debate the merits of including female soldiers at all, it is undeniably historically INaccurate to allow players to choose a female soldier for multiplayer maps on the Western Front, where both Allied and German forces forbade female frontline soldiers. Or to allow players to pick African American Allied soldiers in squads with Caucasian soldiers, when the American military enforced segregation and France fielded separate colonial troop divisions. Neither is a particularly pleasant artifact of the time period, but it would nonetheless be a mistake to ignore certain ugly facets of the past in the name of present-day comfort.
That Battlefield V chose to allow such contradictions when it is logistically simple to restrict customization per map (something which Battlefront 1, set in Star Wars, could do back in 2004) shows that DICE cared more about letting players be themselves than about making players play a character.
In other words, Battlefield V’s mechanic of forcing inaccurate options means that DICE didn’t intend for players to play AS a soldier in WW2, but instead intended players to PLAY in WW2-SETTINGS. Keep this important distinction in mind.
A Dirty Mirror
Consumer Feedback
In all this talk of inclusiveness and mechanical authenticity, it’s easy to point fingers at the developers and shake our heads. “Why couldn’t they just do more research? Some random idiot on the internet did it!”
However, the uncomfortable truth is that AAA games are designed to be sold. Sold to us. Based on what we like and expect to see. This much is painfully clear from the lack of controversy over the exclusion of certain key factions in Battlefield V--in a Western-dominated market for the game, the developers literally didn’t think we cared enough about the Soviet or Chinese war efforts to pay for them, let alone complain about their absence. And they were RIGHT.
Award for killing 80-90% of the Nazi military: Not being invited to the party.
This conclusion is further corroborated by much of the Western gaming media’s instant, reflexive outrage over female soldiers--understandable given the decades of exclusively male war media we’ve been fed--and its silence over the inclusion of weapons which saw nearly no use in WW2 (or weren’t in WW2 at all).
The reality is that we--in focus groups, in our reviews, and in our feedback--have told the developers that we don’t care about including the civilian perspective of war, no matter if it was literally TWICE the size of the military perspective of the war in terms of casualties. That we don’t care about including the aftermath horrors of war. That we don’t care about including doctors and nurses tending to the millions of casualties behind the lines. That we don’t care about including the inventors, engineers, and accountants that pushed technology lightyears forward. But there ARE more female gamers than ever before--a new market to sell to. The truth is that they DID do their research. And they are delivering.
Zooming out from the game to the market level, other flaws of Battlefield V gain new context: from the market perspective, historically inaccurate cosmetic alterations are merely another manifestation of modern gaming’s embrace of individual expression (PUBG and Fortnite have complete character customization, and even Call of Duty--Battlefield’s largest competitor--has steadily increased customizability in its games). Shooter games’ inherent reliance on their firearms has led to a continual expansion of arsenals marketed to players over the years, leading to Battlefield V dipping into historically inaccurate weapons to expand its otherwise limited list of widely-used WW2 weaponry. Again, we must face our own part in these flaws through our own market demands.
If Battlefield V has flaws, they’re not unique to Battlefield V. They have been in development since the first war shooter came to market decades ago.
Playing Soldier
Returning full circle, a common criticism of Battlefield V’s portrayal of minority soldiers in the multiplayer mode was that it was “disrespectful” to the veterans who fought the war. This argument is sometimes compounded by added claims that its “casual” depiction of war further makes light of a war which deserves the utmost respect and veneration.
However, it is with such sentiments that the dirtiest secret of Battlefield V and ourselves is revealed: ALL war games, shooters or not, no matter where they fall on the “casual-hardcore” spectrum, are inherently disrespectful. And so are we for playing them.
To reiterate, all games are designed to be sold. But a sale of a mass-marketed game is more restrictive than, say, the sale of other media such as books, films, or even a WW2 museum ticket: While books, films, and museums are permitted to be distressing--and sometimes, even sell extra well for it!--large-developer AAA games are inherently made to be enjoyed. To be beaten. To do so, such games are simply not allowed to be overwhelmingly difficult, or give the player much suffering. Such games are inherently limited by their necessity to be fun.
That is: there are books, films, and museums about the Holocaust, but the Holocaust can never be an AAA game.
Unlike videogames, other media are not beholden to “fun” (credit)
What remains, then, are war games which--even at their most “realistic”--sell corrupt power fantasies of what it would be like to play in a war-themed theme park. Arguing “realism” for such games is the equivalent of arguing over how much pedantry said theme park is willing to invest in its costumes. It’s the equivalent of useless bickering over the rules in children’s make-believe games of soldiers.
But MOOOOOM, he didn’t tap his arm 3 times with a bandage to heal!
Neither manages to escape the fact that they are merely playing soldier, reveling in the glory of a past they never had to earn.
Pushing Forward
Unfortunately, the last point about the very nature of war games is rather unavoidable. No amount of meta-level argument about entertainment and respect will convince the studios behind such a dominant genre of the gaming market to suddenly stop making these games.
However, if such games must continue being made, there are some ways they can do better with the topics presented here.
Inclusion Done Better
Inclusion need not necessarily get in the way of historical accuracy. If anything, the merging of both has the potential to teach players, while still retaining immersion and inclusive benefits.
First, games could add small unit emblems for the different ethnicities/nationalities/genders players can choose. For example, upon choosing an African male soldier, players could then choose which emblem shows on their sleeve. From French and British colonial units to American segregated units, each emblem could come with a little flavortext which explains the real-life unit’s history and participation in the battle at hand.
The 761st Tank Battalion (“Black Panthers”), a segregated American unit (credit)
Such an effort would require very few resources (it’s a tiny image with accompanying text) and have little impact on gameplay, yet it would legitimize said demographic’s presence in the game and help deflect criticism.
In addition, individual maps could each permit different player customizations which accurately reflect the forces present in that battle. Should a player swap to a map where their customizations are no longer valid, they could be randomly assigned a new set of cosmetics and emblems--a rough sketch of how such a per-map allow-list might work follows below.
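Here is that sketch: a hypothetical per-map allow-list with emblem flavortext. The map names, data layout, and validation flow are invented for illustration, not taken from any actual Battlefield system.

```python
import random

# Hypothetical sketch of per-map cosmetic restrictions plus emblem flavortext.
# Map names and structure are illustrative placeholders, not real game data.

MAP_RULES = {
    "western_front_1944": {
        "genders": {"male"},
        "emblems": {
            "761st_tank_battalion": "The 'Black Panthers,' a segregated African "
                                    "American armored unit that fought in Europe from 1944.",
            "french_colonial_infantry": "Colonial troops drawn from France's "
                                        "African and Indochinese territories.",
        },
    },
    "eastern_front_1942": {
        "genders": {"male", "female"},
        "emblems": {
            "588th_night_bombers": "The all-female Soviet 'Night Witches' "
                                   "night bomber regiment.",
        },
    },
}

def validate_loadout(map_name: str, gender: str, emblem: str):
    """Keep the player's cosmetics if valid on this map; otherwise reroll them."""
    rules = MAP_RULES[map_name]
    if gender not in rules["genders"] or emblem not in rules["emblems"]:
        gender = random.choice(sorted(rules["genders"]))
        emblem = random.choice(sorted(rules["emblems"]))
    return gender, emblem, rules["emblems"][emblem]  # flavortext for the loading screen

print(validate_loadout("western_front_1944", "female", "588th_night_bombers"))
```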
Second, for historical wars, games can tap into the unused space which is the Team Spawn Zone (TSZ)--the very rear of any team where all players spawn at the beginning of the game. For WW2 in particular, the TSZ could feature female mechanics, truck drivers, nurses, anti-aircraft gunners, radio operators, and intelligence officers. Though players cannot interact with them, they would nonetheless still accurately portray the non-homefront participation of women in the war. Furthermore, an explicit inclusion of behind-the-lines medical services would make the magical pop-in of respawning players less magical, boosting immersion.
American Army Nurse Corps, serving in Sicily (credit)
Third, war games should strive for better internal consistency no matter where they fall on the accuracy-versus-authenticity line. If a war game chooses to ignore smaller minority participants, then its choice of weapons and vehicles should be held to the same standard.
Mechanical Accuracy
Though the mechanic of choice was covered above, there are still more things war games can do to better respect the consequences of the wars they depict. The following recommendations are made so as to respect a game’s “casual” or “hardcore” leaning, since changing that leaning may entirely change the direction of the game.
First, there are mechanics which could emphasize the consequences of the player’s actions. Dead player bodies could be left on the map (perhaps becoming static objects to save computing resources). Player profile screens could list the total number of times they died and killed, represented not only as a number but also perhaps graphically, with a dot for each count.
The most intriguing possibility for this aim was actually realized in Battlefield 1’s first campaign mission, where, upon death, the game displayed the birth and death years of the player’s character. The next character the player assumed control of had an entirely new name, birth year, and, presumably, death year. While the names and years could be randomly generated, such a mechanic would greatly reinforce the personal, individual costs of war.
[Embedded YouTube video]
4:21 shows the first instance of this death mechanic
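As a rough sketch of how such a mechanic could be implemented (the name pools and year ranges are invented for illustration, not taken from Battlefield 1’s data):

```python
import random

# A sketch of how such a mechanic could work; the name pools and year ranges
# below are invented for illustration and are not taken from Battlefield 1.

FIRST_NAMES = ["Thomas", "Henry", "Walter", "Arthur", "Albert", "George"]
LAST_NAMES  = ["Clark", "Turner", "Wright", "Baker", "Harris", "Morgan"]

def new_soldier() -> dict:
    """Hand the player an entirely new identity each time they take over a soldier."""
    return {
        "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
        "birth_year": random.randint(1880, 1900),  # of fighting age by WW1
    }

def on_player_death(soldier: dict, death_year: int = 1918) -> dict:
    # Show the fallen soldier's lifespan, then move the player on to a new person.
    print(f"{soldier['name']}   {soldier['birth_year']} - {death_year}")
    return new_soldier()

current = new_soldier()
current = on_player_death(current)  # e.g. "Walter Baker   1893 - 1918"
```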
Second, there are mechanics that focus on the consequences of a particular match--Battlefield: Bad Company 2 did this to a degree, where after every match ended, a short vignette would play showing the effect of the winning team’s victory. For example, a dam might be destroyed, or the dam could stand tall as helicopters flew by.
[Embedded YouTube video]
One such match end video
Currently, Battlefield V and its confusingly-named predecessor, Battlefield 1, both do this to a degree with a narrator explaining what is at stake, but never really showing it. Especially with historical battles, perhaps narration and visuals could be combined to provide better context for the player’s match.
Having Fun
So war games inherently will always be problematic depending on their criteria of inclusion, their mechanics, and by their very nature of existing as a commodity designed to be sold. Somebody will always be left out. Something will always not quite work as it does in life. Some message will always have to be restricted for the audience it’s sold to.
So what’s left for us to do?
Earlier, I mentioned that war games, especially the high-production-value “AAA” ones, will inherently be restricted by their necessity to be “fun”--thus dooming them to be essentially theme parks, no more or less “respectful” than make-believe roleplaying.
However, perhaps “respect” is too heavy a responsibility to foist on something designed to be fun. Perhaps the word we should use to judge them instead is “celebrate.” These games can only ever do a poor job of “respecting” the tragedy of war, but re-contextualized within the age-old tradition of celebrating war, their design and purpose make much more sense.
Think of Homer’s works on Greek wars. Beowulf. The Three Kingdoms. The general Western obsession with Roman military conquest. Just in the US, we hold two separate holidays to celebrate our soldiers.
Washington D.C. Memorial Day parade (credit)
All of these works, parades, and gaudy displays do little to actually respect war (note: NOT the same as respecting soldiers), and they share several common elements: the fetishization of the exertion of martial power; the vicarious feeling of said power by celebrants who never actually participated, yet take some credit through national or cultural affiliation; and the paradoxical martyring of soldiers for bravely suffering the horrors of war, while actively suppressing the idea that maybe we should avoid war for those very same horrors.
All, coincidentally, shared with war games.
All problematic when put that way, yet we simply nod and go along with the flag-waving and fireworks. For all of its issues, it’s a celebration: an activity in which partakers subconsciously accept the terrible, ugly truth of things, but simultaneously and mutually agree to put those worries aside and focus on the things we like--birthdays, weddings, sports victory parades, and even holidays are no different. All are timelessly chased by legitimate criticisms like Adam Ruins Everything, yet remain obstinately impervious to them, because celebrations inherently ignore criticism.
[Embedded YouTube video]
That is, after all, what makes celebrations so fun: They can only be so joyous because they are designed to ignore all things negative.
And that’s what theme parks like Disneyland and Disney World are--places where consumers celebrate something by inhabiting a facade of it. And hence, that is what war games truly are: a celebration. Re-contextualized as such, all of the above discussions--even my proposed suggestions--become nothing more than discussions about how to better that theme park, to perhaps create a less disrespectful celebration.
However, celebrations (defined by an explicit refusal to see the whole picture) cannot ever be truly respectful. We understand that implicitly about all other celebrations in our lives, and do not hold them to the standard of respectfulness.
So why expect games to be any more pious? Perhaps such games should be judged for the big, problematic celebrations that they are, instead of expecting them to be the tour-de-force documentary that they cannot be.
[Embedded YouTube video]
Return of the Analog: The Reality Crisis of the Digital Era
Fake news, faked videos, faked images--in the era of digitally shared media, it’s becoming increasingly difficult to tell the real from the forged. From hacks to Twitter bots, “digital” is increasingly becoming synonymous with “fake.” Let’s explore how we got to this point, and attempt to explain why, when the digital portion of our lives is ever-expanding, the analog may be more important than ever.
New Symptoms, Old Problems
Below is what is now called a “DeepFake”: the use of machine learning to simulate a subject’s facial movements with eerie realism.
[Embedded YouTube video: a DeepFake demonstration]
DeepFakes have recently garnered media scorn as the opening of a new Pandora’s box, where politicians, celebrities, and just about anyone with a good amount of reference data (photos and videos) can be puppeteered into saying disastrous things or performing sexual acts.
Though DeepFakes are being treated as a separate news story, the truth is that they are just a new manifestation of a long-developing crisis in the digital medium: one more symptom which, along with doctored photos, spliced videos, and leaked credit account details, is indicative of a larger, more ominous trend.
The roots of this problem are more extensive than the hand-waving excuse of the human impulse to lie and forge. They are tied deeply into a greater pattern of tool development and our problematic blind faith in new technologies.
1) The Egalitarian Treachery
While the full brunt of this problem includes faked content, faked users, and hacking, let’s focus on faked content (specifically images and videos) for the time being, as they are the best at showing the origin and ongoing development of the corrosion of the digital world.
Niche to Everyday
Just a few generations ago, editing images or a video meant physically altering the delicate film. It required highly trained, experienced specialists working with equally specialized and expensive equipment. Both the equipment and its users were unlikely to be seen outside of high-budget movie studios or the experimental workshops where they were developed.
Rotoscoping and its use for lightsabers in Star Wars (1977) (credits 1, 2)
Seemingly far removed from this cloistered niche of film editors, the viral spread of personal computers in the 1980′s would change everything.
Before the personal computer revolution, computer hardware and software were closely guarded, prohibitively expensive specialist tools, accessible only to well-connected academics or professionals. To give you a sense of just how huge an explosion the personal computer revolution was, below is an estimate of units sold per year, from the 1960′s through today. Note that the smartphone figure covers sales from the last quarter of 2017 alone; over the full year of 2017, 1.5 billion smartphones were sold.
Units of personal computers sold per year (credits 1, 2, 3)
Now, the general populace was being given access to hardware capable of supporting similar software from the comfort of their home, without needing security clearances to MIT or the experience to sit in a film studio rich enough to afford computer-assisted film editing tools. Software like Microsoft Paint.
Microsoft Paint (1985) in all its glory (credit)
One of the first image editing programs, Microsoft Paint, shipped with Windows 1.0 in 1985. Though rudimentary and a far cry from a professional tool, Microsoft Paint heralded the tidal wave to come. A tidal wave it helped trigger.
The thing about progress in any technological field is that, understandably, it goes much faster with more people involved. More people means more viewpoints, approaches, and prototypes that can be developed and tested. For this reason, highly exclusive fields tend to progress (relatively) very slowly--as was the case with image and movie editing, where across untold thousands of years of physical painting, the last 200 years of film photography, and the last 100 years of film video, the furthest editing had progressed was “manually have somebody paint or cut stuff over it.”
With computers, this explosion meant more people able to code and involved in improving both the hardware and the software. This explains the exponential progress in all things digital: though the first digital computer was arguably created in the 1940′s, it improved significantly only in size and power over the 40 years leading up to the 1980′s. From the personal computer revolution of the 1980′s to now, a similar 40-year period, computers have gone from command prompt screens to the near-magical, automated, slick, and smooth wonders we carry in our pockets. Painstaking image editing which once required rooms and rooms of computers to accomplish, we can now comfortably carry around in our hands. 1.5 billion of them in 2017 alone, apparently.

Computers in the 40′s, 70′s...versus now (credits 1, 2, 3)
And part of this exponential progress was image editing. What previously required an expert’s hand for tens of thousands of years is now mostly automated. For example, consider what is now the most basic of edits: changing the brightness. For anyone today who has ever used Instagram or Photoshop, this is as simple as sliding a bar and voila, every pixel of the image has been made brighter or darker.
Only 50 years ago, this couldn’t be done on physical film after it was shot. On a physical medium like a painting, it meant painstakingly taking every single unique color and doing your best to imagine what it would look like if it were brighter or darker.
In other words, what most teenagers do in a second today with the simplest of Instagram color/brightness filters would have been considered a masterpiece work to be displayed in the Louvre only 50 years ago.
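To make concrete just how trivial that once-impossible edit has become, here is a minimal sketch of what a brightness slider does under the hood: add a constant to every pixel and clamp the result to the valid range.

```python
import numpy as np

# A minimal sketch of what a brightness slider actually does: add a constant
# to every pixel and clamp the result to the valid 0-255 range.

def adjust_brightness(image: np.ndarray, amount: int) -> np.ndarray:
    """image: HxWx3 array of 8-bit RGB pixels; amount: roughly -255..255."""
    widened = image.astype(np.int16) + amount        # widen to avoid overflow
    return np.clip(widened, 0, 255).astype(np.uint8)

# Brighten a tiny 2x2 dummy "photo" by 40:
dummy = np.array([[[10, 20, 30], [200, 210, 220]],
                  [[0, 0, 0],    [255, 255, 255]]], dtype=np.uint8)
print(adjust_brightness(dummy, 40))
```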
Another seemingly unrelated offshoot of this revolution came in the form of machine learning, which--as technology tends to do--would eventually collide with image editing to help automate even more complex image editing procedures.
Procedures like actively editing thousands of frames of a video of a face to do what you want.
Procedures like DeepFakes.
Fall from Grace
This revolution undeniably made the technology of image editing more egalitarian, accessible to all, usable by all. But it’s precisely this egalitarian wide-spread distribution of image editing power that has led to images being held in suspicion.
For somebody recently born into a world where every person in an ad is assumed to be doctored and every tabloid or political image is called out for being edited, faked, or taken out of context, it’s hard to imagine that photographic images once were the bastion of truth.
Unlike the spoken or written word, which could easily be dismissed as hearsay, a photographic image used to be held as a bastion of truth due to the difficulty of editing it and the obviousness of any edits. But now, due to the widespread accessibility of powerful (and convincing) image editing tools, images draw the same skepticism their predecessors used to draw.

Unrealistic expectations, from the 17th century to the 21st (credits 1, 2)
As a further example, take self-portraits. Not too long ago, paintings were discarded in favor of photography as presenting a more realistic depiction of their subjects--paintings were (and still are) seen as notoriously misleading due to painters being able to take artistic liberties. This is why what historical figures actually looked like is still heavily debated to this day. However, it is now considered common sense to severely doubt a flattering picture on Instagram or Tinder for exactly the same reason.
When images fell from grace, videos took up the mantle--just as images were once held to a higher standard for their difficulty of editing, the relative difficulty of editing videos made them more trustworthy than a single image. Unlike an image, which only had to be edited once, a video--as a rapid series of images--required consistent editing of every single frame. This was an intensive task, and especially difficult to do with human faces, in which the human brain is exceptionally adept at noticing small changes.
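Extending the brightness sketch from earlier, the difference is easy to see in code: an image is one edit, while a video is that same edit repeated consistently over every frame--the tedium that once kept convincing video forgeries rare.

```python
import numpy as np

# Extending the brightness sketch from earlier: an image is one edit, while a
# video is that same edit repeated consistently over every single frame.

def adjust_brightness(image: np.ndarray, amount: int) -> np.ndarray:
    widened = image.astype(np.int16) + amount
    return np.clip(widened, 0, 255).astype(np.uint8)

def adjust_video(frames: np.ndarray, amount: int) -> np.ndarray:
    """frames: (num_frames, H, W, 3) array -- the edit must hold N times over."""
    return np.stack([adjust_brightness(frame, amount) for frame in frames])

# Even a 2-second clip at 30 fps is already 60 frames to keep consistent.
clip = np.zeros((60, 72, 128, 3), dtype=np.uint8)
print(adjust_video(clip, 40).shape)  # (60, 72, 128, 3)
```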
That is, until rapid developments in machine learning were recently applied to automate this tedium with frightening accuracy. As with Photoshop, DeepFakes are already accessible to anyone with a decent computer. Though still in its prototype stages, it is no unrealistic prediction that DeepFake tooling will begin to be included in professional video editing software for even more widespread distribution.
As with image editing, this egalitarian spread will also lead to an exponential advance in the technology of DeepFakes. As with image editing, DeepFakes will only get better, and at an increasing rate.
And like image editing, this egalitarian spread will begin to undermine the trust of the general public on videos.
Except that unlike with image editing, there is currently nothing left to take up the mantle after videos, too, fall from grace.
This Pandora’s Box wasn’t opened by any one Pandora. It was opened by all of us, and it cannot be closed.
With every passing day, it opens even further at an ever-increasing rate.
2) “New and Improved”
Marketing
Of course, nobody asked us to believe in these suggestive images and videos to begin with. Or did they?
Every day, we are bombarded with news of technological advances in every field, from cosmetics to smartwatches. Always, the refrain: “New and Improved!”
Over the last few centuries, arguably beginning with the scientific revolution of the Enlightenment which set the common conception of science as inherently progressive, this mantra has been drilled into our collective consciousness. Through advertising campaigns, presentations, and the backs of the boxes of items we buy, we are told of what is supposed to be a logical link between the New, and the Improved. After all, in the secular post-Enlightenment era of scientific progress, why would anyone make something new if it had nothing better to offer?

They don’t even have to say much beyond “New”. The “Improved” is implied.
To give the march of progress its due, the claim often is true: all the more reason we have stopped questioning it the next time we hear it. So when advances in digital images, videos, and their sharing came around, we were told “New and Improved” and we went along with it. After 40 years of unbelievable leaps of progress in other things digital, it could only be true, right?
When Facebook rolled out its Trending section and began showing users what its algorithms determined were images and videos we might be interested in, we believed in it. It was new, and improved.
When Twitter unveiled its “verified users” program in 2009 and assured users it was only given to real human users of decent character, we believed in it. It was new, and improved.
You know how this actually turned out.
Humans vs “Humans”
When it all came crashing down during the 2016 election cycle, we were caught off guard. Facebook’s Trends would be removed in 2018, and Twitter’s verified blue check is now ignored as a meaningless stamp. The former would appear before the Senate to try to explain how so many faked images, videos, and posts could end up in so many potential voters’ feeds and be shared by automated accounts. The latter is still attempting to reassure investors that its users are human and not automated accounts, and it is now a common suspicion that many followers aren’t human at all.
Oof. (Credit)
Setting aside each system’s bugs and exploits, the largest and most avoidable problem by far was their inexcusable lack of oversight. Oversight which nobody thought necessary because we inherently believed in what was supposed to be new and improved. Oversight which developers didn’t implement because they were designing the new, and surely it was improved. Oversight which we failed to demand, because we too believed in their promises.
In the case of Facebook Trends, it was actually designed to eliminate oversight by using algorithms to replace human editors. If there was an exemplar of human hubris and the danger of blind faith in the new, there is no better one than this aspect of Facebook Trends’ design.
While we were initially startled, the truth was that the systems were rigged from the start. They were new, yes. But we were so blinded by their newness that we could not see what we retrospectively recognize as horrendously obvious, stupid, and utterly contemptible structural flaws. New, but not improved. If anything, new and even worse--an unthinkable combination for many in this technological boom.
And though we would like to think that we’ve learned from our mistakes, the truth is that we have only just begun to truly appreciate the extent to which our digital world is not only compromised, but has consequences in our physical world. Though security measures across the board seem to have increased--from hospitals to corporate businesses, USB-locked laptops, approved phones, and restricted network access are becoming the norm--hack after hack, leak after leak, and bot campaign after bot campaign continue to be discovered. This paragraph had to be inserted after the entire post had been written because Reddit had just discovered yet another third-party fake user campaign.
Who even is considered a “real” user anymore? (credit)
Appropriating a common saying, one might be tempted to claim that “You can lead a user to a faked image or video, but you cannot make the user consume it.” The truth is that our blind faith in the new and improved has managed to do both.
3) Hacking
Up till now, we’ve only really handled faked images and videos, because the process and the concept are easily grasped. But this post would not be complete without discussing the elephant in the room: hacking.
The word “Hack” has come to mean a lot of things over the years--startups even use the term to describe the act of creatively disrupting some market. “Lifehacks” refer to nifty tricks in life. It’s also a term for a fraudulent, or unqualified person posing as a professional.
But in this context, hacking is simply the illicit gaining of access to data. If breaking and entering doesn’t exactly sound like a forgery threatening our perception of truth, you’re right. That’s because hacking isn’t so much physically forcing access as it is something more akin to exploiting tax loopholes. Let me explain:
Computers and their coded environments are dictated by countless logical rules. Every process goes through some preset protocol which allows the computer to understand and execute it, and computers are quite literally incapable of running anything which breaks the rules written into that protocol. Hacking, though illicit, still follows these protocols. It just finds technically valid ways to do something the system was never, in spirit, meant to do.
In a spy movie, hacking isn’t the use of explosives to breach a wall; it’s using a forged ID to get into the vault. Oddly enough, “lifehack” is actually a good analogy: the objects being used don’t break reality, but are instead still performing their normal function to accomplish some unintended goal.
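To ground the analogy, here is a deliberately naive toy example (not any real system’s code) in which the database engine follows its rules to the letter, yet a crafted input makes it do something the developer never intended--the classic SQL injection:

```python
import sqlite3

# A deliberately naive toy example: the database follows its rules perfectly,
# but a crafted input makes the query say more than the developer ever intended.
# (Never build queries by pasting strings together in real code.)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 'alice-secret'), ('bob', 'bob-secret')")

def naive_lookup(username: str):
    # Intended meaning: "return the secret belonging to this one user."
    query = f"SELECT secret FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

print(naive_lookup("alice"))          # intended use:  [('alice-secret',)]
print(naive_lookup("x' OR '1'='1"))   # rule-abiding, unintended result: every secret
```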

Hacking: More subterfuge than violence (Mission Impossible III and Ghost in the Shell)
However, just like images and videos, hacking’s widespread threat can trace its origin to the personal computer revolution mentioned above.
Previously, only a select, well-connected few were privy to coding and intimate knowledge of computer systems. After the personal computer revolution, the general populace was not only exposed to coding and hardware, but encouraged to tinker with both to better use their new devices. Today, many students take up hacking before ever being formally trained in college--you don’t even need a powerful computer to practice the fundamentals of coding.
The tools of coding have improved, too: IDE’s, or Integrated Development Environments, are like the Photoshops of coding, making it much more accessible and transparent than the mysterious black command prompts of yesteryear. Like Photoshop, the capabilities of IDE’s and coding languages are ever-expanding. As the number of people engaged with coding expands, so too does the number of modules for every language which grant new and more efficient powers to code.
In an accelerating arms race, this increased potential is being used both to create stronger security systems and stronger hacking tools. As is the nature of any one-sided arms race, however, all security systems can do is defend against the hacking tools that came before. Hacking will forever remain an active threat precisely because it is impossible to defend against incursions security teams don’t yet know about.
Though the casualties of hacking tend to get buried in media under the World Cup, school shootings, and the latest political slander, we should never forget just how pernicious and dangerous this aspect of the digital reality crisis can get. In the last 5 years alone, we’ve seen a troubling pattern of large-scale data breaches:

Boring. Dangerous. (Credit)
Yahoo, 2013: 3 billion accounts compromised.
eBay, 2014: 145 million accounts compromised.
Anthem, 2015: 80 million medical records compromised.
Equifax, 2017: 143 million Americans’ records compromised, including Social Security numbers and credit details.
In each situation, the security vulnerability was only patched up after they were exploited.
No data breaches of this scale (even relative to the populations involved) had ever been recorded previously in history. Even our oft-romanticized bank robberies in media are a peasant’s foolery compared to the potential havoc of stealing personal identification records in numbers rivaling the population of entire countries. For comparison, note that the most populous European country, Germany, sits at 82 million.
In the hiatus I took while writing this piece, 50 million Facebook users were hacked, potentially further compromising linked accounts on apps like Snapchat and Instagram.
Let Ocean’s 11 (or 8, with the newest movie) pat themselves on the back at stealing a few million dollars. The Equifax hack affected almost half of the entire population of the United States, and almost all credit holding adults. The Stuxnet virus almost singlehandedly crippled the Iranian nuclear program.
Of course, just like images or videos, nobody told us that digital data storage was perfectly secure. But they didn’t need to, because like images or videos, we told that to ourselves. We threw all our eggs into a convenient and speedy basket, and we reaped (and continue to reap) the consequences.
4) Analog Lessons
So there’s clearly a problem in the digital realm, a problem created by a combination of accelerated technological progress, widespread distribution of tools, and a persistent futurist faith in new technologies.
But what can we do about it?
Interestingly enough, some parts of this question may have already been answered in the analog world we are striving so desperately to leave behind.
Centralize, Legitimize: Tackling Fake Content
The exponential acceleration of digital development and the sharing of its tools have let us achieve amazing feats of image and video synthesis we could only dream about a few decades ago. While an unbelievable boon for the entertainment industry, it’s time that content platforms stepped up their self-moderation to battle fake content.
Journalism had to learn to do the same thing around the 1900′s.
Recent rancor over mainstream journalistic misdeeds (coverage of the Afghanistan War, ignorance of non-European tragedies, etc.) and “Fake News!” aside, we forget too easily that the front pages of papers used to look a little too much like the front pages of internet forums like Reddit today.
“Yellow journalism,” describing the accelerating war over readers waged by American newspapers in the 1890′s and 1900′s, was the original and quite literal “Fake News.” Papers often made use of drawings to outrage readers, some of which unfairly misrepresented real-world events, while others simply made up lies wholesale, such as the image below--meant to enrage Americans against the Spanish. Sound familiar?

Fake content in the original era of Fake News (credit)
Over time, newspaper organizations began to rein in their antics, setting much stricter standards and repairing their reputations. Misrepresentative photographs taken out of context were still used to sway opinions, but the photos were at least required to be real, with very strict standards on the verification and accreditation of both content and its providers. Larger news organizations began to establish oversight committees to verify and carefully vet even the most scandalous rumors and leaks.
Oversight committees which, to this day, remain heavily human-dependent, built on a foundation of trust and the understanding that their jobs as a profession depend on their reputation.
With this historical context in mind, Facebook’s story now becomes an eerie echo of our past. We can only hope that Facebook and other content platforms crack open a history book sometime.
To some degree, the Internet already has--the formalized editors and fact-checkers of yesteryear are better known as “mods” (moderators) on sites today, arbiters of the rules their communities set for themselves. Though their responsibilities far exceed those of editors--frequently acting as law enforcement, peacekeepers, and spokespeople--their importance in maintaining the legitimacy of their domains cannot be overstated. Human moderators have their own problems and scandals, but their value as humans (and NOT as failures of automation, as Facebook treated them) should inspire other sites to also consider community-led moderation.
Granted, a large part of the joy of the Internet is dependent on the constant exploration of the limits of our ability to create digital content. Not a week goes by without some clever video editing, 3D rendering, or computer graphic achieving viral status. While sites seeking legitimacy centralize and legitimize their positions, it is important to maintain the experimental playground that makes the Internet so creatively rich. It’s just that we need to get better at drawing lines.
However, to avoid confusing “real” content with content made just for fun, it may be time for the internet to draw those lines more firmly.
No Shirt, No Service: Tackling Fake Users
Compared to the chaotic wild west of the Internet’s origins, platforms have (largely) become stabilized institutions. Youtube, Twitch, Tumblr, etc. all offer highly polished experiences for user and creator alike.
However, access to these sites remains problematically uncontrolled. Even as platforms are slowly realizing that “Hey, maybe there’s a reason why physical companies take such pains to create oversight organizations,” we must face the fact that oversight can only go so far. A filter, no matter how large, can’t hope to work properly if the floodgates are not just open, but completely nonexistent.
Due to the ever-advancing nature of internet bots and bought accounts, even human moderators will have a tough time removing suspicious accounts faster than they can be created. And by the nature of post sharing, even if such accounts only survive long enough to post and mass-share a single piece of faked content in the span of a minute, it’s too late--the content has probably already been seen (and shared) by real humans.
In other words, we need to do a better job at controlling who even gets to make accounts in the first place. We need something more than just an email address and a Captcha.
“Security”
The kicker is that the analog world had already figured this out long ago.
For anything important, like a job or a bank account, you need to show up, in person, to an interview, with government-issued photo ID and a government-issued unique identifier (a Social Security number for US citizens). This not only proves your identity, but prevents duplicate identities and false personas. Not that fake IDs or stolen SSNs don’t exist, but they require resources, time, and risk on a scale not feasible for anything short of a highly organized government directive like the Witness Protection Program. And that’s before additional requirements such as personal recommendation letters from other verified humans, or government background checks.
Even for the most casual of human social groups--say, an a cappella group--you still need to physically be there, which alone prevents simultaneous duplicate presence.
For online presence, nothing so drastic as recommendations, resumes, and background checks may be necessary, but simply requiring some form of government-issued identification can already prevent much misuse of the system.
Take South Korea’s system: due to the country’s severe hacking problem, many online games require accounts to be linked to a Resident Registration Number (their equivalent of a Social Security number), such that banning an account actually bans the person. Furthermore, an imposter would also need the person’s “i-PIN,” a government-verified PIN issued in case an RRN is leaked. And even with those two, they would still need the person’s phone number.

A wee bit harder to fake than a checkbox
Of course, there are workarounds, such as buying RRNs from other citizens who couldn’t care less about online games (and thus don’t care about the consequences of being banned from them), but again: this exponentially increases the hassle and cost of any mass botting or fake-account operation.
Whichever method is chosen, we need something stronger than a stupid checkbox or identifying images of stop signs to make sure humans are not only who they say they are, but that they are only who they say they are. Compared to others, the West’s approach to online identity verification is lax at best, and laughably nonexistent at worst. We need to catch up.
However, we also need to respect the freedom which gives the Internet its strength. In countries with authoritarian repression of information, the anonymity of the Internet has been a salvation for dissidents. Without anonymity, potentially shattering leaks on corrupt governments would not be possible. Such verification methods should therefore only be adopted by sites which actively seek to be validated.
Much in the same way one can choose to enter a trusted, but controlling shop (say, an Apple store), or a chaotic and free, but untrustworthy street marketplace, a verified, “humans-only” part of the internet and the wild, untamed side can coexist and offer their unique strengths to consumers. However, the sites that want to be trusted have to make themselves trustworthy.
More Baskets, Fewer Eggs: Tackling Risk
Even with heavy moderation and verification of human users, all of this will mean little without another very important step: we need to make ourselves less dependent on the conveniences of centralized online databases.
Much of the threat hacking poses can be attributed to two problems: 1) we store too much sensitive information online, and 2) “linking” accounts lets a single breach spill over into more of your life. Fortunately, there are already better alternatives. Not surefire solutions--nothing can stop a dedicated effort--but certainly better approaches.
On the first, consider how many sites have your payment information stored. Target, Walmart, Amazon, Expedia, Travelocity, and a myriad of other shopping sites likely all keep your payment information for no other reason than expedience, each one creating another opportunity for that information to be stolen.
How about nah?
This isn’t even a problem of having all your eggs in one basket--this is having all your eggs in multiple baskets, each of which can catastrophically compromise all your eggs.
The solution: One really, really, really sturdy basket.
We can’t exactly buy things online without, you know, paying for them. However, the likes of PayPal and Amazon Pay can at least limit our baskets to a single, (hopefully) more secure one. For the uninitiated, these services act as middlemen to transactions: instead of Target keeping your card information, Target authorizes payment through PayPal, which alone holds your details and sends the money along without ever passing your bank or card information to the store.
And while database security probably isn’t the top priority of a storefront like Target, PayPal’s whole purpose in life is to provide security. Again, while a dedicated hack is nigh unstoppable, we might as well leave our eggs with people who genuinely care about them.
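To make the middleman idea concrete, here’s a minimal sketch of the flow. To be clear, the names and interfaces below are entirely made up--this is not PayPal’s or Amazon Pay’s actual API--but the structure is the point: the shop only ever stores an opaque token, so a breach of the shop leaks tokens, not card numbers.

```python
import secrets

class PaymentProvider:
    """Hypothetical stand-in for a middleman service like PayPal or Amazon Pay."""

    def __init__(self):
        self._vault = {}  # token -> card number; lives only inside the provider

    def register_card(self, card_number: str) -> str:
        token = secrets.token_hex(16)
        self._vault[token] = card_number
        return token  # the only thing a storefront is ever given

    def charge(self, token: str, amount_cents: int) -> bool:
        if token not in self._vault:
            return False
        card = self._vault[token]
        print(f"Charging {amount_cents} cents to card ending in {card[-4:]}")
        return True


class Storefront:
    """A shop that stores opaque tokens, never raw card numbers."""

    def __init__(self, provider: PaymentProvider):
        self.provider = provider
        self.saved_tokens = {}  # customer -> token

    def checkout(self, customer: str, amount_cents: int) -> bool:
        token = self.saved_tokens.get(customer)
        return token is not None and self.provider.charge(token, amount_cents)


provider = PaymentProvider()
shop = Storefront(provider)
shop.saved_tokens["alice"] = provider.register_card("4111111111111111")
shop.checkout("alice", 2599)  # a breach of `shop` leaks tokens, not card numbers
```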
Even Google acknowledges this concept of delegating security to larger entities:
Google does a good (credit)
Actually, humans learned to stop doing this thousands of years ago. Why carry all your wealth on your person, making it vulnerable to every person you come across, when you can entrust a secured location with hired guards to be a middle-man? When Banks are doing something more advanced than we are, maybe we should reassess our lives a little.
...
On the second, recall the Facebook situation where linked Snapchat and Instagram accounts were also on the line. There are even more sites which can be directly linked with Google accounts. Again, convenient, but it also creates a bigger liability. This is not even considering the fact that these companies will happily sell your information to others--no hacking needed!

Seriously, why do we keep linking accounts to this guy?
Google calls this method of linked logins “federated login,” claiming higher security by the same logic as PayPal and Amazon Pay: trust one bigger company with all your security at once.
But there is a critical difference: you don’t need to make that compromise here. Unless you’re incredibly rich and have a bejillion overseas bank accounts to hide away your ill-gotten gains, you probably have one central bank account. Giving multiple sites access to that central account is handing out multiple keys to your only castle. Having one bank account per shopping site is prohibitively inconvenient (unless you can afford somebody to manage that for you), so in that case securing the one central account is supremely important. With site logins, however, there’s nothing stopping you from creating new emails and usernames. Opening a bank account comes with a mountain of legal shenanigans; making a new site account takes a little imagination and a few minutes.
How about nah? Once more, with feeling (credit)
See those buttons up there? STOP CLICKING ON THEM. As much of a pain in the ass as it can be to set up entirely new accounts with entirely different passwords, the security can be worth every hassle. For the particularly neurotic, different email addresses can be used as well, further compartmentalizing the risk.
You shouldn’t do either on reflex, however. Rule of Thumb? If the company is as large as a linked service, and you think their security is about on par, then there’s no need to link. Security being equal, you might as well create a new account. If it’s a small, niche website--say, for a fandom--you might be better off actually linking your account.
I’ll be honest, there isn’t really an analog-world equivalent to compare this to, but we still really need to stop doing it.
A Digital Masquerade: Street-smarts on the Web
In 15th-18th century Europe, Masquerades--balls/festivals noted for the tradition of donning costumes and face-covering masks--were notorious for the unseemly behaviors people would engage in under the anonymity of their costumes and pseudonyms. Doesn’t that sound familiar? Nowadays, we have our user icons and our usernames, but not much has changed about the indulgent human behavior the internet’s anonymity has fostered.
The difference is, even so many centuries ago, people had the common sense to stick to alternate personas, pseudonyms, and go-betweens to hide who they were. Sometimes insidiously, to distance themselves from their scandalous indulgences. But sometimes legitimately: to avoid getting mugged for being identified as a noble, or to avoid getting killed by a rival, an enemy, or--you know--because you were identified as a noble.

Gustav III of Sweden should’ve gone by “BananFish2099″ instead (credit)
So why do we continue to use our real identities and contact information on the web in situations where we have no obligation to? Much of the reasoning hasn’t changed in the centuries since. Even if we don’t do anything scandalous on the internet (like this blog), it is still important to protect ourselves in the public sphere wherever we’re not legally obligated to identify ourselves (spoiler: my Tumblr’s information isn’t actually my real information).
People have had SWAT teams called to their home for the heinous crime of daring to stream videogames. “Doxxing” has threatened the livelihoods of online personas over little more than petty online drama. Not everyone at a masquerade is your friend, and not everyone on the internet is either.
To maximize security while maintaining the above-proposed “humans-only” regions of the internet, we could set up a system wherein a website cross-checks with some third party (perhaps the government, as in Korea’s case) to see whether a given identification has already been used. After verification, however, the site would not retain the identifying information, much in the same way a site shouldn’t store your payment information after use (unless you saved it, which you shouldn’t). To avoid fraud, a two-factor step could be added, wherein the third party notifies the person that a new account was set up under their identification. Furthermore, the third party would never learn the resulting username--only that an account was indeed created using that person’s identification--removing the risk of a leak in the other direction.
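Just to make the proposal concrete, here’s a rough sketch of that flow in code. Everything here is hypothetical--the registry, the notification step, the ID format--but it captures the two properties that matter: the verifier never learns the username, and the site never keeps the ID.

```python
class IdentityVerifier:
    """Hypothetical trusted third party, e.g. a government registry."""

    def __init__(self, known_ids):
        self.known_ids = set(known_ids)
        self.used_ids = set()

    def verify_and_mark(self, national_id: str) -> bool:
        if national_id not in self.known_ids or national_id in self.used_ids:
            return False
        self.used_ids.add(national_id)
        self.notify_owner(national_id)  # second factor: tell the real person
        return True

    def notify_owner(self, national_id: str):
        print(f"Notice to owner of ID ending in {national_id[-3:]}: "
              "a new account was just created with your identification.")


class Website:
    """The site keeps usernames only; the raw ID is discarded after the check."""

    def __init__(self, verifier: IdentityVerifier):
        self.verifier = verifier
        self.usernames = set()

    def register(self, username: str, national_id: str) -> bool:
        if username in self.usernames:
            return False
        # The verifier only ever sees the ID, never the username.
        if not self.verifier.verify_and_mark(national_id):
            return False
        self.usernames.add(username)  # identifying information is not retained
        return True


verifier = IdentityVerifier(known_ids={"800101-1234567"})
site = Website(verifier)
print(site.register("BananFish2099", "800101-1234567"))     # True: first use
print(site.register("SecondSockPuppet", "800101-1234567"))  # False: ID already used
```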
But hey, that’s just an idea.
Epilogue: To Regulate or Not to Regulate?
Alas, poor Yorick. I knew him, Horatio. He wouldn’t stop sharing fake news though. (credit)
This post was too long. But I swear you’re finally at the end. So stick with me here.
Information technology as a whole--from the mundane spreadsheets that run the world to the images and videos that entertain us daily--is quickly facing a crisis of faith, one it helped create.
As scandals and catastrophes only mount, its audience--its users, its consumers--is similarly beginning to turn against it.
And as the tensions grow and relationships sour, the word regulate rolls a bit easier off the tongue.
In April 2018, Mark Zuckerberg was called before the US Senate as the US legislature began toying with the idea of enacting new laws.
In July 2018, the UK Parliament’s Digital, Culture, Media and Sport Committee recommended that social media sites be liable for heavy fines over content deemed “harmful and illegal.”
In that same month, the French Senate rejected proposed bills which would force social media to enable users to flag content as “fake,” which would be reported to the authorities.
In October, the UK humorously banned the phrase “Fake News” outright--because the phrase itself has become so misleading as to be Fake News itself.
And the legal heat is only rising further.
However, I’m going to hop out of my storytelling mode a little and make a passionate, personal plea against our desire to use our legal pitchforks to finally give these companies what they’ve got coming for them.
Because in this one last conundrum, we really can’t afford to ignore lessons taught to us by other media.
We need to take a step back and realize what we’re calling for here: We’re asking state authorities to begin policing what can be counted as “truth.” We’re asking people in power to begin enforcing their definitions of what is “harmful and illegal.”
And when those who decide what’s illegal are allowed to decide what’s “harmful,” we WILL have a problem.
And countries such as North Korea, China, and Russia already have that problem. Millions of people--if not BILLIONS, in the case of China--already live under a system which is allowed to decide what is “harmful” and therefore illegal. These systems cannot easily be uprooted. How can they be, when they control the truth? Not every slippery slope is irrecoverable, but this is a one-way slope where climbing back out requires blood, tears, and human lives.
It is not a journey we want to take.
Even when it comes to database security, authorities have only proven themselves selfishly interested in furthering their own strength.
In the name of “security,” major world governments keep demanding backdoor access to encrypted devices--something widely understood across the tech field as inherently dangerous, because it builds a vulnerability into the very system it claims to protect. It is like demanding that all bank vaults share a master key, or that every house have a universal access point. In theory it lets the police get to you faster. It also lets anyone who obtains that key do untold harm at inconceivable scale.
It is incredibly easy to become furious, and rightly so, at corporations and their seeming unwillingness to take the necessary steps. But sweeping laws are not, and CANNOT be, the solution.
...Because the inconvenient truth is that the line between Fake News and subjective content is much blurrier than we’re comfortable admitting. Of course, while wholesale lies are easy to point to as justification, actually sitting down and trying to differentiate “Fake News” and “misleading content” from simply subjective reporting, or opinions, becomes problematically difficult. A problem this nuanced, this difficult, cannot be allowed to be solved by any one entity, such as a government.
The solutions which are left may be less palatable, and they won’t fully sate our desire for revenge. Our current alternatives--heavier community moderation, greater transparency about the types of content hosted, and greater pressure on companies to enact stricter security--are not perfect. Quite likely, those in the future will look back on these recommendations as folly, long since outdated by newer, better approaches.
But fighting through these gray areas, teetering on the edge of uncertainty, is still an infinitely better option than the seeming simplicity and security of demanding that the authorities handle it. We will figure out better solutions. We will become more secure and adapt more fully to the still-nascent online environment. But we must put serious effort into getting there.
Future generations will thank us for it.
Even when there are billions in this world for whom it is too late.
Text
Working as Intended: Battle Royale from a Game Design Perspective
Hi and welcome to Working As Intended, a series which delves into how game design interacts with players to create our experiences, intended or not. In this post, we dive into Battle Royale, the undeniably wildly popular game mode of recent years.
But what makes Battle Royale, from a game design perspective, tick? What can it learn from other game modes before it? Join us as we dissect its double-edged approach to multiplayer combat, and see in what directions Battle Royale can evolve into the future.
Battle Royale, A Brief History
Battle Royale has a lot of small variations between its implementations in PUBG, Fortnite, H1Z1, Minecraft, and soon Call of Duty: Black Ops 4 and Battlefield 5. Yet in all of them, there is an alluringly simple, easily understood core definition:
A large number of players start. The last one standing wins.
Outside of gaming, this is probably most popularly remembered as the basis for the original Japanese film Battle Royale (2000) and its spiritual Western successor, The Hunger Games (2012-2015, film).
In the films, the winning squad is less than happy to win chicken dinners
However, this central idea of “Battle Royale” precedes even the Japanese film it’s named after. It’s more generally known as the “Last Man Standing” game mode, and has been implemented in limited form in older games, especially custom Quake, Unreal Tournament, and Doom Free-for-All modes. CounterStrike: Global Offensive and Rainbow Six: Siege, two popular first-person shooters today, are in fact built around team-based Last Man Standing rounds.
DOOM multiplayer, the grandfather of first-person shooter deathmatch (credit)
So what makes Battle Royale distinct from Last Man Standing? Two key elements:
First, Battle Royale involves an unprecedented number of players per game. While older games could only manage a handful of players at a time, Battle Royale is implicitly understood to involve up to 100 players at once--a scale that limits on processing power and network optimization made impossible until now.
Second, whereas many Last Man Standing game modes are a stricter ruleset applied on top of existing DeathMatch rules regarding weapons and equipment, Battle Royale starts every player with nothing and randomly distributes its equipment. For a clearer example, a custom Quake III Last Man Standing game would take place on a normal DeathMatch map, where weapon spawns were as stationary as ever. In a Battle Royale game, the same building will contain different equipment every match.
In other words, Battle Royale is the logical conclusion of increasingly powerful technology married to the idea of Last Man Standing.
And it’s popular beyond belief. Love it or hate it, its rapid success over other, older, more established game modes such as Team DeathMatch (Call of Duty, Battlefield), or Multiplayer Online Battle Arenas (MOBA’s, like League of Legends or DOTA 2) is undeniable:
Just look at that viewer count lead!
After its early access launch in March 2017, PUBG broke Steam’s all-time record for concurrent players with 3.2 million players on December 29th, 2017. Fortnite’s Battle Royale mode launched on September 26, 2017, and has recently overtaken PUBG in concurrent players with its own 3 million mark, while PUBG still maintains a healthy 1.7 million concurrent players. Checking Twitch.tv’s videogame livestreaming section shows Fortnite reliably holding first place, with PUBG not too far behind.
Battle Royale--befitting its name--is King. Long live the King.
The Royale Recipe
To understand how Battle Royale got here, let’s look at 3 key pillars of its game design: Chaos, Reward, and Player Density, and see how they affect player engagement, for better or for worse.
The Beauty in Chaos
On its surface, Battle Royale is a modified ruleset of the standard Free-for-All Deathmatch gamemode found as early as the original DOOM multiplayer days. Every player for themselves, murdering each other.
But the inherent draw of Deathmatch combat cannot be the reason why the Battle Royale game mode has become popular. If so, Battle Royale games would not have surpassed Call of Duty, Battlefield, and Halo, which have honed their mechanical craft of Deathmatch over decades, and none of those would have surpassed Quake or Unreal Tournament, which are the purest distillation of mechanically complex, skill-based Deathmatch combat.
Quake Live, a surviving remnant of purist arena deathmatches (credit)
Instead, Battle Royale’s biggest draw lies in its inherent randomness, and the chaos which ensues due to it.
In traditional Deathmatch games, there is a structure the player can reliably expect: in Call of Duty and Battlefield, players know they will fight over objectives using pre-set loadouts they choose beforehand. In Halo, Quake, and Unreal Tournament, every spawn is guaranteed a specific base loadout, and power weapon spawns are predetermined. The chaos in these games lies in the combat, but the macrostructure underlying that combat is fixed, or at least predictable.
In contrast, Battle Royale expands the chaos to beyond just combat. Like a lethal game of 100-player poker, every player is dealt a vastly varying hand: Where you spawn, and who spawns with you. What weapons and equipment have spawned in that location, and who gets to them first. Loot package drop locations, vehicle spawns, and the location of the ever-shrinking “safe” zone are also randomly determined.

An airdrop in PUBG, which is, you guessed it, random (credit)
In a Battle Royale game, this randomness creates a focus on adaptation and ensures that every game can be vastly different. In one game, you might still be using a cruddy pistol and barely any items for half the match. In another, you will come across a stash of high-level equipment, luckily claim a vehicle before anyone else, and try to protect yourself from other players who notice your loot.
This game-to-game variance is key to extended player engagement, and in single-player games is called “replayability,” though this term is curiously not used for multiplayer games. A predictable game is boring, and can lose one’s interest quickly. A chaotic game, therefore, commands attention by promising some crazy new adventure with every iteration.
Without this chaos, Battle Royale condenses into just another Free-for-All Deathmatch, and it is this chaos which makes Battle Royale distinct.
It’s OK, Try Again!
Beyond the rules of the game mode itself, its implementation in the most popular Battle Royale games like Fortnite and PUBG has a lot to do with its popularity: namely, mechanics which seek to minimize punishment while maximizing reward.

Tfw your squad gets wiped but you remember you can re-queue (credit 1, 2)
Any game, single-player or multiplayer, is inherently a reward-punishment machine. The player does something, and the game rewards them for successfully completing an action, or punishes them for failing to do so. In the simplest games like Pong, you either score, or you get scored on. PacMan gets more points, or you lose a life. In other words, you win (reward) or you lose (punishment). Though not always appreciated by the player, how a game distributes both has a huge impact on player engagement.
According to Behavioral Psychology, training an individual to perform an action well involves both high reward for success and harsh punishment for failure. This can be seen in formal ranked systems in sports and competitive games, where losses gain nothing and winner takes all. But for arcade-y games like Fortnite or PUBG, the goal isn’t to train, but to be fun for a long time--and this means trying to minimize punishment.
Fortnite and PUBG succeed particularly because when you die and lose, the punishment is made as short as possible. Unlike CounterStrike: Global Offensive or Rainbow Six: Siege, the player is not held hostage and forced to face their failure for the remaining duration of the match. When players are punished by in-game death in Fortnite and PUBG, they are free to leave immediately and find a new game. As we often wish we could in real life after a social blunder, Fortnite and PUBG let you quickly move past your failure and try again, with no judgment or penalty. Comparatively, leaving a losing game in CounterStrike: Global Offensive or Rainbow Six: Siege will cost you ranking and incur additional penalties such as temporary bans.
This principle of a rapid reset is why so many difficult platformers, notably Super Meat Boy, implement a near-instantaneous retry, minimizing frustration by immediately giving the player another chance.

You would NOT try this with long respawn timers (credit)
Meanwhile, both Fortnite and PUBG always give you some sort of reward, even when you aren’t the last player alive (and thus the “winner”). Winning obviously grants the most in-game currency, but losing still rewards a smaller amount, scaled by how well you did.
Hence, currently successful Battle Royale games always give you some reward, with a near-negligible punishment, making player engagement always feel worthwhile.
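As a toy illustration of that “always reward, scale by placement” idea, a payout curve might look something like the function below. The numbers are invented; neither Fortnite nor PUBG publishes its exact formula.

```python
def match_reward(placement: int, total_players: int = 100) -> int:
    """Toy payout curve: everyone earns something, winners earn clearly more."""
    base = 20                                   # showing up is never worth zero
    survival = (total_players - placement) / (total_players - 1)
    reward = base + int(survival * 200)         # scales with how far you got
    if placement == 1:
        reward += 500                           # the chicken dinner bonus
    return reward


for place in (1, 10, 50, 100):
    print(f"Finishing #{place}: {match_reward(place)} currency")
```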
Map Sizes and Player Density
The problem with permanent-death Deathmatch game modes is that the map is typically made for a certain ideal number of players. Once that number drops, the map becomes too large for the players left, and meaningful player-versus-player interaction like combat decreases to the point where the match is more boring than stimulating. Anyone who has tried to play the gigantic Conquest Large maps in Battlefield titles on a half-full server knows this feeling.

Awkward... (credit)
To maintain an ideal player density, all modern Battle Royale games implement a “safe” zone mechanic, where, as the game progresses, the valid playable portion of the map shrinks. Be left outside the safe zone when it shrinks, and you will take a constant drain on your health. This not only preserves player density by forcing a smaller number of people closer together on a smaller map, but also guarantees that if more people are alive than expected, the shrinking map will produce more combat and more deaths.
From a developer’s perspective, this also means controlling resources being used per match. By better controlling the number of players left over the course of the game, the developers can better predict and allocate server resources. For the player, this translates to some structure in the otherwise rampant chaos of Battle Royale games, by guaranteeing players that dwindling players don’t mean dwindling action.
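Mechanically, the whole system boils down to a circle that contracts on a timer plus a health drain for anyone caught outside it. Here’s a bare-bones sketch--every number is a placeholder rather than a value from any actual game:

```python
import math
from dataclasses import dataclass


@dataclass
class Player:
    x: float
    y: float
    health: float = 100.0


@dataclass
class SafeZone:
    cx: float
    cy: float
    radius: float
    shrink_per_tick: float = 5.0   # metres lost each tick
    damage_per_tick: float = 1.0   # health drained while outside the circle

    def tick(self, players):
        self.radius = max(0.0, self.radius - self.shrink_per_tick)
        for p in players:
            if math.hypot(p.x - self.cx, p.y - self.cy) > self.radius:
                p.health -= self.damage_per_tick  # forces convergence inward


zone = SafeZone(cx=0, cy=0, radius=100)
players = [Player(10, 10), Player(95, 0)]
for _ in range(5):
    zone.tick(players)
print([p.health for p in players])  # the straggler at the edge is bleeding out
```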
LESS than the Sum of its Parts
While the above pillars of game design have, individually, made the Battle Royale game mode great, they also work against each other’s purposes, resulting in games which, unfortunately, end up being less than the sum of their parts.
Chaos is Unfair
Chaos is what makes Battle Royale, Battle Royale. But chaos, in its current implementation, inherently undermines any reward and punishment structure which is so critical in any game. Though most games depart from traditional Behavioral Psychology by opting for minimized punishment and near-constant states of reward, it is a universal wisdom that to be effective, any punishment or reward must be consistent. Getting punished sometimes and getting rewarded sometimes only leads to confusion, frustration, and--most deadly for games--disengagement.
Technically, Battle Royale games consistently match actions with reward and punishment: your health hits 0, you die and lose; you survive longer, you earn more in-game currency. But the conditions under which you reach those outcomes are anything but consistent, and a large part of them is problematically outside of player control.
MarioKart’s similar dilemma with item boxes (credit)
A good thought experiment for considering the balance of player skill in any game is to ask: “What would happen if two clones ever fought each other?” Assuming both players are at full health and of equal skill, if the answer is anything BUT a 50%-50% outcome, there’s a problem. And in Battle Royale games, it’s the luckier clone that wins. If you cannot find good starting gear to compete with players with better guns, armor, and healing supplies, you will most likely lose. If other players happen to spawn in the same area as you, your chances of securing any loot at all are very low. If you cannot find a vehicle, your chances of dying en route to a safe zone are very high, your slow foot speed making you an easy target at range. If you cannot find good optical scopes and the end-match safe zone lands on a wide-open grassy hill, your odds of surviving the long-range sniping match for first place are low. If the safe zone instead lands in an urban area and you have no good close-range weaponry, you are similarly screwed. Fortnite doesn’t have vehicles and has a smaller map, but all the other randomized determinants of success remain.
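To put a rough number on the thought experiment, here’s a toy Monte Carlo version of it. The loot tiers and the tie-breaking rule are invented purely for illustration: the clones are equally skilled by construction, so any duel where their gear rolls differ is decided before a shot is fired.

```python
import random


def duel(loot_tiers=(0, 1, 2, 3)):
    """Two equally skilled clones; only their random starting loot differs."""
    a, b = random.choice(loot_tiers), random.choice(loot_tiers)
    if a == b:
        return random.choice("AB"), False  # equal gear: a genuine coin flip
    return ("A" if a > b else "B"), True   # unequal gear: luck decides outright


trials = 100_000
results = [duel() for _ in range(trials)]
wins_a = sum(1 for winner, _ in results if winner == "A")
loot_decided = sum(1 for _, by_loot in results if by_loot)

print(f"Clone A win rate: {wins_a / trials:.1%}")                    # ~50%, as expected
print(f"Duels decided purely by loot: {loot_decided / trials:.1%}")  # ~75%
```

Across enough games it does average out to 50/50, which is exactly the problem: you need an enormous sample before skill, rather than the deal, shows through.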
To be fair, as the match progresses, the playing field does even out: by scavenging corpses, surviving players naturally accrue better gear, like a food chain. However, the early culling period one must survive to reach that fairer playing field is far too chaotic.
TSM’s vsnz took 22 player deaths to find his first pistol. 24 for a shotgun.
In short, the very chaos which makes each game so dynamic is also what makes them inherently and undeniably unfair. This is why tournament-grade skill-based games such as Quake have such well-considered map design (as opposed to the “take random assets and scatter them over a landscape” approach of Battle Royales) and stationary equipment spawn locations--to hold all external conditions to a strict standard, so that player skill is the only determinant of victory.
And this is why Battle Royale games, in their current state, cannot be viably competitive. With so many random factors, traditional ranking systems that punish losses aren’t consistently fair. Determining who is “better” at the meta-game (and not just individual shootouts) requires an inordinately large number of games. Even a best-of-five structure means little when everything--from your damage output to your ability to absorb damage to your mobility to your effective combat range--is largely out of your control.
This is why no Battle Royale game implements ranked matchmaking systems: It’s mechanically impossible.
For the players, this means that current Battle Royale games can feel more like a slot machine than a fair fight. Sure, by the mid-game, with loot more evened out, they have a say in their fate, but getting there isn’t guaranteed. The short reset and the hop into another game is just another pull of the lever, another roll of the dice. Victory feels cheaper when you realize that most of those 100 players died to poor luck rather than to your superior skill.
This isn’t to say, however, that any chaos in game design is inherently bad--some level of unpredictability and chaos is necessary for replayability. But there is such a thing as poorly designed chaos. Chaos, to be most effective, demands the same kind of consideration and purposeful focus as stationary game mechanics.
Chaos might be fun, but unfair game design isn’t.
Chaos, sometimes. Maybe.
The safe zone mechanic is an ingenious solution for preserving player density on a large map. But just like Chaos, its current implementations don’t quite solve the problem they were applied to.
When the game starts, the map has to be large enough to hold 100 players and all the shenanigans they might get into. And yet, this humongous map still has to stay viable for consistent play for the last 2 people, just as it was for 100. With the safe zone system, this means that, on average, players have a tremendous distance to cover, because they are expected to move roughly as far as the safe zone shrinks.
For example, PUBG’s first map, Erangel, is 8x8 kilometers. Player movement speed has been clocked at about 6.3 m/s. Assuming a player dropped on the southern or northern shore of the largest island, only 2 km from the center, reaching that center without a vehicle would take over five full minutes of nothing but running.

PUBG’s Erangel (credit)
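The arithmetic checks out, and it gets worse quickly for longer treks:

```python
run_speed = 6.3  # m/s on foot, as quoted above

for distance_m in (500, 1000, 2000, 4000):
    minutes = distance_m / run_speed / 60
    print(f"{distance_m / 1000:.1f} km of running: ~{minutes:.1f} minutes")
```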
Anyone who’s played Fortnite or PUBG knows just how much of each match is spent running. And running. And running. And often, you simply get sniped from a distance by somebody you couldn’t see, who had set up before you arrived.
Granted, there is some argument for the heightened tension of running to the new safe zone. It’s a welcome bit of downtime between the adrenaline spikes of direct combat. But again, as with Chaos, you have little control over it. Since safe zones are random, even spawning in the center of the map will involve lots and lots of running. Downtime is good, but too much downtime becomes a chore standing between you and the actually entertaining gameplay loop of combat.
This wisdom is something that Battlefield in particular has been aware of for some time. Battlefield, for its large 64-player combined-arms matches, requires a sufficiently large map to house the chaos, but it makes sure to inundate each team’s spawn with rapidly respawning vehicles to ensure that players won’t spend long before careening into their next firefight.
Because the irony about chaos is that as fun as it is, you still want chaos to be consistent.
Making Battle Truly Royale
Despite the above critique, Battle Royale is still an incredibly enjoyable game mode--just one with many bumps left to be ironed out. This shouldn’t be surprising: any new medium in any entertainment industry has growing pains. First-person shooters took over a decade to arrive at the standardized, ergonomic controls we take for granted today. Fighting games went through countless iterations to work out their exact science of combos, interrupts, animation cancels, and balance.
And Battle Royale can learn from others’ mistakes to fix its existing problems, and avoid future ones. Why replicate somebody else’s mistake, when you can reap benefits from their solutions?
Chaos, structured
Battle Royale needs chaos, but currently there’s a bit too much of it--the mid to late game is far fairer and more interesting, but surviving the first few minutes feels more like luck than skill when somebody else grabs the shotgun before you do and kills you before you’ve taken a few steps. You immediately re-queue for another game, again and again, until you’re the one who picks up that first gun and survives. To minimize frustration even further, Battle Royale needs to temper this early chaos, but not so much that it loses its identity.
A potential solution is to give every player a terrible--but still lethal--starting weapon, perhaps a low-capacity pistol with only two magazines. Something to kill one or two other players with and take their weapon, but no more, kind of like the core concept of the Liberator pistols from World War 2. Perhaps they could be Liberator pistols in implementation.
The Liberator and its spiritual link, Quake 3′s machine gun (credits 1, 2)
Quake 3 and Unreal Tournament implement something like this with their weak starting guns. They won’t win against higher-level weapons, but they’re JUST enough to defend yourself until you can find a weapon spawn. The proposed pistol’s low capacity and two-magazine limit would punish wasting shots on players who are still landing, instead driving players to use it on each other on the ground.
Another potential solution is to make the starting melee option a little more deadly. Currently, in PUBG and Fortnite, the starting melee weapons--fists and the pickaxe, respectively--barely do any damage and require repeated headshots to be lethal. Again, they could learn from Quake and Unreal Tournament, where the base melee weapon is incredibly deadly but obviously limited by its range. In range-focused games like Battle Royale, this would give freshly landed players real opportunities to fight each other, and reward them for creatively ambushing the player who got to the gun first.
Either way, Battle Royale’s problematic early game can be remedied with some form of early self-defense: weak enough to be quickly outmoded, but capable enough to give players a more skill-expressive way to survive the opening minutes.
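Purely as a hypothetical tuning sketch--none of these numbers come from any shipped game--here’s how a Liberator-style starter pistol could be capped at “win one or two scraps, then go find a real gun”:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class StarterPistol:
    damage_per_hit: int = 25      # four hits to down a 100 HP player
    magazine_size: int = 6
    spare_magazines: int = 1      # two magazines total, as proposed above
    effective_range_m: int = 30   # worthless at typical sniping distances

    def total_shots(self) -> int:
        return self.magazine_size * (1 + self.spare_magazines)

    def max_possible_kills(self, target_hp: int = 100) -> int:
        hits_per_kill = -(-target_hp // self.damage_per_hit)  # ceiling division
        return self.total_shots() // hits_per_kill


pistol = StarterPistol()
print(pistol.total_shots())         # 12 shots before the well runs dry
print(pistol.max_possible_kills())  # 3, and only with perfect accuracy
```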
Game Maps vs Literal Maps
Currently, Battle Royale games feature maps which read like actual deserted, sprawling wastelands. While these maps can immerse you in the fantasy of survival, Battle Royale should consider why other games have avoided real-world logic when designing their maps.
The problem with “realistic” maps is that they’re great at looking realistic. They are NOT great at being conducive to play. Even Battlefield’s larger Conquest maps have clear thought put into them, designed to funnel action and keep the map itself balanced. Each capture point is designed with consideration for its vulnerability to ranged bombardment and for the approach paths of infantry, vehicles, and aircraft. Interiors of bunkers are arranged to give defenders options, as well as ample flanking paths for attackers. Contours of the terrain and vehicle-friendly roadways are arranged to offer both main and side paths between every objective. There are always options for players who want to stick to close quarters, and for those who prefer holding longer sightlines. A smart player can reach the top of the leaderboard in Battlefield with just a shotgun--most Battle Royale players recognize this is currently impossible on the unfairly range-favoring maps of current Battle Royale games.

Left: Nevada’s Tonopah mining town. Right: PUBG’s Miramar. (credits 1, 2)
Likewise, Battle Royale would do well to really give its map more consideration. Not that there hasn’t already been work put into it, but the maps shouldn’t get any less attention because “pssh it’s a Battle Royale, let people just fight it out.” Especially as we talk about reworking Safe Zones later, map design should also consider where safe zones spawn, and the approach paths to them. Currently, Battle Royale games universally give unfair advantage to players who luck out and get powerful sniper rifles or high-powered optics for assault rifles. A more focused map design which gives more options would relieve this imbalance, and give less fortunate players a fairer chance versus those god-forsaken snipers behind that one tree you never could’ve seen (you know what I’m talking about). Map design is an incredibly subtle but critical aspect of any action-driven PvP game, large and small. The real world is captivating, but not made for play.
Running a Little Less
For Battle Royale games, the large map is a defining characteristic. Making it smaller by default (though Fortnite IS much, much smaller than PUBG) would only make people question why they aren’t just playing a Free-for-All DeathMatch, where the action is more fair and directed.
Yet, as discussed before, current Battle Royales still have players running for far too long, and unfairly punish them based on vehicle spawns and safe zone spawns.
Any solution should give players a more reliable way to traverse long distances, but it should also be skill-dependent, so that it doesn’t feel unfair and still creates opportunities for counter-play.
Fortnite already has a pseudo-solution, where launch pads can be used to trigger the glider, letting players traverse faster than running at the cost of some building materials and the vulnerability of being in midair. For a more vertical game than PUBG, this also offers valuable vertical movement on-the-go.
Still, it isn’t very effective for moving a large distance. A possible solution could be like the light-cycles from TRON, where players must take time to summon an unarmored, fast, single-rider motorcycle. It could also possibly consume some limited resource, so that it forces players to consider when they would activate it. Taking damage while summoning could interrupt it, to make sure it can’t be used for a rapid getaway. The single-rider aspect would prevent a squad member from riding on the back as a mobile self-defense platform, making this mode of transport extremely vulnerable.
Obviously, this would work better in a more cartoonish game like Fortnite, but surely a more creative person could come up with a similarly reliable method for players to traverse long distances.
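For what it’s worth, the rule set being floated here is small enough to sketch out; the timings and costs below are placeholders, not a balance proposal:

```python
class BikeSummon:
    """Channelled vehicle summon: costs a resource, cancelled by taking damage."""

    CHANNEL_TIME_S = 4.0
    RESOURCE_COST = 50

    def __init__(self, resources: int):
        self.resources = resources
        self.channel_remaining = None  # None means no summon in progress

    def start(self) -> bool:
        if self.resources < self.RESOURCE_COST or self.channel_remaining is not None:
            return False
        self.resources -= self.RESOURCE_COST   # commit the cost up front
        self.channel_remaining = self.CHANNEL_TIME_S
        return True

    def on_damaged(self):
        self.channel_remaining = None  # getting shot cancels the panic getaway

    def tick(self, dt: float) -> bool:
        """Advance the channel; returns True the moment the bike arrives."""
        if self.channel_remaining is None:
            return False
        self.channel_remaining -= dt
        if self.channel_remaining <= 0:
            self.channel_remaining = None
            return True  # spawn the fast, unarmored, single-seat bike here
        return False


summon = BikeSummon(resources=120)
summon.start()
summon.on_damaged()       # interrupted mid-channel: resource spent, no bike
print(summon.tick(1.0))   # False
```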
Approaching Safe Zones
There is a more controversial solution to both the map design and movement problems: making the safe zone always spawn in the same central location. This “ultimate” safe zone would make map design much easier, since it gives developers a predictable play pattern to work around.
For example, a stationary safe zone could have more valuable weapons placed around it, so that players could plan their strategies from the very beginning: do you drop near the center for good loot, where you can stay put for a while and avoid exposing yourself by moving--knowing it also makes you an easier target, since more people might have the same idea? Or do you drop further from the center, where you might be safer for longer, but face a longer trek inward?

Hunger Games’ horn, the OG stationary safe zone (credit)
This central safe zone would also allow map designers to apply more traditional map design, making use of chokepoints, flanking routes, alternative approach routes, sightlines, and giving players more options given their playstyle. Close-range favoring players could move through small rooms towards the center. Longer-range players could sit at the top and center, overseeing incoming threats.
This safe zone design, however, takes away much of the chaos inherent in modern Battle Royales, so whether to adopt it ultimately comes down to each designer’s vision for their game.
A different approach to the safe zone is to make its penalty less arbitrary. Currently, Battle Royales use a mysterious, arbitrary wall which damages players outside its bounds. A more thematic penalty--in-game hazards which players can actually fight against--would create a more skill-driven penalty system.
For example, Fortnite Battle Royale is built on top of a co-op zombie survival game. Why not have the safe zone spawn zombies of increasing strength outside its bounds? That way, players are at least given the chance to fight their way to safety, giving skilled players more leeway than simply being forced to slowly bleed health. It also forces players to weigh staying a little longer to loot, while expending resources to fight off zombies, against speeding straight to the safe zone.
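A sketch of what that pressure curve could look like--again with invented numbers--where the hazard scales with both how late the circle is and how long you loiter outside it:

```python
def zombies_to_spawn(seconds_outside: float, zone_phase: int) -> int:
    """More pressure the later the circle and the longer you loiter outside it."""
    base = 1 + zone_phase               # later phases start harsher
    ramp = int(seconds_outside // 10)   # one extra zombie every 10 seconds
    return base + ramp


def zombie_health(zone_phase: int) -> int:
    return 50 + 25 * zone_phase         # individual zombies toughen up too


for phase in (1, 3, 5):
    print(f"Phase {phase}: {zombies_to_spawn(35, phase)} zombies "
          f"at {zombie_health(phase)} HP each after 35s outside")
```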
Battle Royale, Moving Forward
Battle Royale, as a fresh new game mode in the otherwise stagnant shooter genre, has been experiencing unprecedented success.
And with unprecedented success comes the inevitable spread, as Call of Duty: Black Ops 4 and Paladins announce their own Battle Royale game modes. Beyond the officially licensed PUBG and Fortnite mobile ports, there has been a rash of mobile developers releasing clones. There is still active speculation around Battlefield 5, and there is no doubt more developers will try their hand at the game mode before the craze dies down.
Though many Fortnite and PUBG players scoff at these “clones,” and Call of Duty and Paladins fans bemoan their established franchises hopping on the bandwagon, gaming history can give us insight into what this will mean for Battle Royale:
Not much, really.
Like Capture the Flag, King of the Hill, Horde Mode, or Zombies Mode--all game modes which have pushed the shooter genre forward over the years--Battle Royale is a new and innovative remix of existing rulesets: a dash of Free-for-All, a pinch of Last Man Standing, a little King of the Hill, all built upon a foundation of DeathMatch fundamentals. And as with those other game modes, we can expect Battle Royale to spread into existing shooter games which already feature some of them.

At some point, these modes, too, were new (credit)
And just like those game modes, Battle Royale will continue to evolve on a per-game basis. Battlefield found itself ideal for the Conquest game mode, whereas Call of Duty found itself better suited to Team DeathMatch and Search and Destroy. Whether or not Fortnite and PUBG remain the top Battle Royale games, there will also be franchises for which Battle Royale simply fits more naturally.
Scoff though we might at what we see as bandwagoners, never forget similar waves in the past, when every first-person shooter was called a DOOM clone and every streamlined multiplayer arena shooter an Unreal Tournament clone. More recently, any horde mode was a Gears of War clone, and any MOBA was a DOTA clone. And yet, today we see those game modes in newer games without the industry batting an eye.
Battle Royale will continue to improve and innovate too, the same way military shooters learned to diverge and specialize. We’ve already begun to see this with Fortnite’s more jazzed-up, family-friendly approach versus the more hardcore PUBG, with H1Z1 somewhere in the middle.
And who knows, maybe just as Battle Royale is the innovative cocktail of game modes before it, Battle Royale itself might be used as the foundational basis for more new innovative game modes.
Either way, players will always shoot each other in the time-tested tradition established since the original DOOM.
We’ll just keep finding new and creative modes to do it.
Text
An Early Eulogy for Text-Based Social Media
Chatting
Social Media, as a method of sharing our lives with others, has almost always been text-based. Before the internet, we shared our lives with distant friends and relatives through written letters. We would declare major life updates, crises, and events through local bulletin boards at the center of town, or through the local newspapers. Sure, we would sometimes share our albums with friends, but those pictures and home videos were never the core message.
Arguably, it was text which created the modern distinction between “direct” social media (sharing your life directly with someone) and “indirect” social media, where we post in a public space for others to view on their own time. Before text, sharing one’s life had to happen in the moment of speech--sound couldn’t just stay in the air and wait until someone came along to hear it. Major announcements required town criers and scheduled gatherings where everyone could hear them at once. England still has them, for royal births.
Imagine Tweeting, except this man would scream it at your neighborhood
Text, though, offered a way to hold onto information through time, separating direct from indirect social media. This lineage remains in our jargon, even as we leave pen and pencil behind: You don’t “post” a text or SnapChat to a friend, but you do “post” something to your Tumblr or Facebook in the same way that hundreds to thousands of years ago, one would quite literally post an announcement or update on the town notice board.
Even as the digital revolution came, we shared our lives through text: the direct social media of letters transitioned into digital text with the first email, sent in 1971. This became more instantaneous and portable when the first mobile text message was sent in 1992. And, since we’re on Tumblr, we should acknowledge the digital transition of indirect social media with the first blog’s creation in 1994.
Blogging as a form of social media--the almost diary-like blog posts we used to demeaningly attribute to middle aged adults who weren’t cool, hip or young enough to transition to MySpace or Facebook--would hit its peak in the early-to-mid 2000′s, prompting a response from Facebook in the form of Facebook Notes in 2006. Tumblr itself would launch in 2007, presumably to ride the popularity of the medium.
But really, though our generation made fun of the quaint, long-form, text-based social media of the blog post, we weren’t too far off ourselves. MySpace and Facebook, which launched in 2003 and 2004, respectively, were a distillation and centralization of the fundamentals of text-based social media. If we begin the evolution of (digital) text-based indirect social media with blogs, then Facebook was its next logical step: a place for us to post shortened text and an accompanying picture to notify others of what we thought was important in our lives. A vacation. A new dog. A birthday. The only real difference between the blog of the stay-at-home parent and our own cool-kids Facebook post was that ours was shorter, and confined to our friends instead of the general internet.
In an odd coincidence, 2006 was also the year that Twitter launched. If Facebook was a distillation of the blog, then Twitter was the bare essence of Facebook. The purest form of text-based social media. Before its implementation of picture-embedding in 2011, Twitter was, in essence, a 140-character blog. It caught on like wildfire, much as Facebook had before it.
Pundits, alarmed at this trend and increasingly bemoaning the perceived impatience of newer generations, only really saw the decrease in the length of the text involved in social interactions: Kids aren’t reading books! They’re too stuck on short Facebook posts, Tweets, and text messages!
The fact remained, however, that the cornerstone of social interaction in the 2000’s was still text: the legendary all-consuming texting sprees between youths were a long-running joke for our parents and a reality for those of us involved. Apart from the occasional photo, the way we chose to share our lives with others--whether more indirectly in a Facebook post, or more directly in an IM, Facebook chat, or text message--was built on written text. A whole new sub-language formed around this use of text, with ASCII emoticons like :) or >:O and abbreviations like lol, lmao, wtf. To anyone who doubts the text-heavy focus of 2000’s social media, one only needs to look at the massive phone industry push for full QWERTY keyboards to support our voracious appetite for typing. QWERTY keyboards on phones far predated easy internet surfing on phones--the massive texting marathons of our childhood could be the only explanation.
We had jumped into a digital age, but we still weren’t all that far removed from the handwritten letters of our parents. Now it was just faster. More streamlined. Perhaps with more profanity and memes. But still, in the end, text. Back in those days of furiously typing texts across multiple friend groups, long-winded Facebook posts and comment threads, and near-daily status updates from friends, social media didn’t kill text--social media was the peak of text. At no previous point in history was the collective populace generating, exchanging, or consuming so much text. Perhaps it wasn’t Shakespeare, but our grade-school dramas were getting pretty close.
SnapChatting
Time skip to the present day. A mind-boggling 1.4 billion people still use Facebook daily. Twitter and Tumblr don’t release daily active user data, but Twitter still outputs 300 million tweets daily while Tumblr outputs 30 million blogposts a day.
And yet, open your Facebook, Tumblr, and Twitter. Try to remember what they looked like in the 2000′s, if you had an account then. How much of your feed is now pictures, videos, and GIFs? How many posts or messages do you see nowadays that are actually only text? And how many have a picture or video attached to them?
...
Since popularity is usually measured by user count rather than by what users actually share, it is hard to track the exact decline of text-based social media. However, we can track the rise of audiovisual social media as a proxy: even if one contests that people are using text less as a medium of communication, we can’t deny the invasion of videos and images on the feed, and we can most definitely see ourselves pull out our smartphones not to open the Facebook app, but Snapchat or Instagram instead.
And by lining up site and app launch dates, we can construct a timeline of the rise of text-based social media and the rise of the audiovisual social media which followed. Text-based social media launched in its own cluster: MySpace in 2003, followed by Facebook in 2004, Twitter in 2006, and Tumblr in 2007.
Then 4 years passed, and a new cluster: Kik launched in 2010, Instagram in 2010, Snapchat in 2011, and Twitch.tv (more on that later) in 2011.
And then, in response--perhaps alarmed at the rapid rise of Snapchat and Instagram--another cluster from 2013 to 2016: this time of Twitter, Facebook, and Tumblr adapting to be more audiovisual-focused, introducing many of the features that distinguished the audiovisual social media launched in 2010-2011.
This is why, earlier, I referred to the 2000’s as the peak of text: by the turn of the decade, the horsemen of the apocalypse had arrived...and stayed. And grew. Even the old champions of text--Facebook, Tumblr, and Twitter--have changed since. The millionth embedded video of cute dogs and memes is testament to the horsemen’s coming.
...
A few years ago, when I told my little sister, only 3 years younger than me, that I didn’t have Snapchat or Instagram, she said “What?” in the way that I used to when my parents asked what Facebook was. She barely uses Facebook anymore. When I asked a few of my friends with younger siblings, they also reported that they either didn’t use Facebook at all, or mostly used Snapchat or Instagram. I’m sure that you’ve heard similar stories.
It’s not just anecdotal. ComScore, a giant digital media analytics company, releases annual reports analyzing the latest trends in consumption of everything from TV to social media. They collaborate with other companies on a large panel study of exactly this kind of information, which you can find here. Their 2018 report breaks down the age demographics of various social media sites:
The picture doesn’t scale down very well, but the numbers tell a clear story: only 25% of Facebook’s users are in the 16-24 age group, and about 55% are between 16 and 34. Meanwhile, Snapchat sits at 57% of users in the 16-24 range, and what looks like nearly 80% of users between 16 and 34. 80%. Below 34. Sheesh. Kik, another app anecdotally popular with newer generations, shares remarkably similar user age demographics with Snapchat. The fact that it’s a messaging app doesn’t necessarily mean text is saved, either: Kik’s youth usage (50% under 34) is notably higher than that of other messaging apps like Messenger (35% under 34) or Whatsapp (31% under 34), probably due to Kik’s combination of anonymity with its built-in integration of pictures, videos, and embedded visual media, compared to the more text-focused messaging apps. Even a quick browse through Tumblr, which boasts user age demographics very similar to Instagram’s, will reveal that the majority of posts on the “blog” site are more audiovisual than text.
We can only wonder what the user age demographics for Facebook, Tumblr, and Twitter would have become had they not implemented audiovisual media features from 2013-2016.
We shouldn’t be surprised. If the purpose of social media is to share your life with others, isn’t the audiovisual medium, therefore, the pinnacle of efficient social media? Instead of writing an essay about how your day looked, how it felt, and what you did, a 5-second Snapchat video conveys all that and more. A text Facebook status, at its best, leaves 99% of the work of recreating context to your audience’s imagination. An image or video, meanwhile, is your life, no recreation required. In this way, SnapChat has risen to the occasion for direct social media, and Instagram for indirect social media.
The youth may not always be right, but maybe without intending to, they are simply flocking to more efficient modes of social media communication, the same way our generation moved from the physical letters of our parents to text messaging and to Facebook.
A picture is worth a thousand words. And a thousand words is inefficient.
Stream Chatting
An IRL livestream on Twitch.tv
Video as a digital medium is nothing new: Youtube launched in 2005, and it has seen explosive growth, with the combined number of views for the top 5 videos of each year going from 400 million in 2005 to 1.53 billion in 2006 to an unfathomable 12.88 billion in 2017--officially surpassing the total human population of the world.
However, the videos responsible for Youtube’s early growth could mostly be categorized as amateur recreations of pre-existing video genres: lectures, critiques, reviews, original theatrical content. But underneath the unparalleled popularity of music videos (which account for almost all of the most-viewed videos each year), video blogging (“vlogging”) has been becoming more and more popular.
One of the most successful (and controversial) vloggers, Jake Paul of the Paul brothers, easily boasts 2+ million views per vlog on Youtube, with his more popular videos breaking 9 million views. Even as Tumblr and Wordpress continue to grow, the sudden and rapid rise of vlogging cannot be ignored.
And then vlogging evolved once again.
In 2011, Twitch.tv launched and made live-streaming as efficient and accessible as Youtube had made standard video in 2005.
Not streaming in the sense of streaming a Netflix movie, but livestreams in which thousands, sometimes millions, of concurrent viewers tune in to watch and interact with their favorite streamer as that streamer does everything from playing games to going about their daily life (the latter now called “IRL” streams). Twitch counted 15 million daily active viewers as of 2018, a number that only keeps growing as livestreaming, like any other new medium, grows, expands, and matures in producing unique content. Twitch.tv’s pressure is evident in just how quickly other platforms implemented live-streaming features, even the home of vlogging itself: Youtube.
It is much more difficult to get demographic data for Youtube or Twitch streaming (that information is only available to the content creator or streamer), but it is widely understood in both communities that the most popular streams are fueled by the young, mostly under the age of 20. If you are so inclined, a Youtube search turns up videos aplenty of streamers and content creators bemoaning how younger viewers are somehow unfairly boosting the popularity of other streams. Whether it is fair that the youth have so much say in the streaming world is debatable, but the widespread debate itself rests on the assumption that the youth are largely backing the latest evolution of audiovisual social media.
If this word-of-mouth assumption is true, it should be no more surprising than Snapchat’s user age demographics. In many ways, live-streaming video is the pinnacle of social media, its core intent made manifest: it’s not just a direct, real-time sharing of one’s life with others; Twitch and now Youtube both have live chat for interacting with the audience. Snapchat lets you share seconds of your life. Live-streaming shares entire hours of yourself.
Live-streaming’s greater demand on a content creator’s time and resources (compared to the seconds it takes to generate a Snap) may keep the number of creators from exploding the way Youtube channels did. Even so, as the number of viewers grows, it might not matter.
Over My Dead Body Paragraph
It should be made clear that text in social media is not dead, nor will it die. Kik still has text messaging, and the vast majority of Snaps still have some text on them. Even that 10,000th dog video on your feed probably has a burgeoning, meme-filled comment section. Twitch.tv’s chat is still text-based. With 1.4 billion daily active users, Facebook still outpaces Snapchat’s 186 million and Instagram’s 500 million daily active users. Even memes, GIFs, and video loops would not be nearly so effective without text.
And yet, the numbers don’t lie. Even as Facebook continues to grow, the meteoric rise of audiovisual social media like Snapchat among younger generations should be heeded as the swell before the wave.
Facebook and Twitter certainly seem to have heeded it, adding the audiovisual-heavy features mentioned before. Though the text-based social media giants have survived in the literal sense, their focus has changed, as is evident in how they present themselves to the world:
Facebook, from 2007 to 2018, has shifted from pitching the sharing of raw information to marketing itself around photos first and foremost, with no explicit mention of “information” any longer.
Even Tumblr, a blogging platform and the successor to the direct progenitor of text-based social media, shows just how far blogs have come from the text-filled, diary-like blogs of yesteryear. These are its login screens, also from 2007 and 2018:

And what, exactly, does a blog even look like anymore, according to Tumblr?
Only Twitter stays aloof and vague, hewing closer to its text-based roots than the others.
And our very expectations of what social media is have changed, too. Like the chicken and the egg, maybe Tumblr and Facebook rebranded themselves so drastically to match what consumers today expect social media sites to be able to do. When Facebook added GIF support in 2015, it was met with exasperated “FINALLY”s from tech commentators instead of “cool!”s. Twitter’s photo limit increase was met with impatience, not excitement. And we betray this paradigm shift in ourselves by how we use social media today.
The wave is already here. We are it.
A Full Circle. Almost.
“GATHER AROUND AND LISTEN YE TO THE TALES OF MY SHOPPING TRIPPE”
As tempting as it is to scowl as the social media torch passes from text to the audiovisual, we should remember that the history of social media goes back much further. Text isn’t where social media started. Far from it.
As noted in the very beginning, all text really did was allow information to pass more indirectly; direct social media had been audiovisual long before humans wrote. Even after written language was developed, sharing one’s life with others remained an oral tradition well past the BCEs, until widespread literacy was achieved only very recently. And still, through the rise of books, letters, text messages, emails, Facebook, Twitter, Tumblr, Snapchat, Instagram, Youtube, and Twitch, the deeply personal and direct oral foundation of social media never died: it remains today at the family dinner table, where close people convene to share their lives over food and drink. It lives on around the cubicle corner, where pregnancies are announced, pipe leaks are complained about, and sports are discussed in person.
It revives every time you make the conscious decision to hold off telling your friend something until you next see them in person, or whenever you wait an extra week or two so you can watch a movie with a friend in the same theater.
After all, even the most eloquently worded text status or text message to your friend cannot match actually seeing and hearing that friend.
In this context, perhaps text was never so high and mighty after all; by one interpretation, it was a devolution. Remember your childhood friends, now separated from you by long distances. Text and post as you may, it will never compare to being together in the same space, directly sharing an experience. In many ways, text is a cruel and limited distillation of the human act of social interaction, leading to misunderstandings and frustration at having to write at length what could be conveyed with a few words and a gesture. It is undeniably useful when the alternative is nothing at all, but in retrospect, it’s odd we ever thought it sufficient for social media.
The shift to the audiovisual alleviates this somewhat: instead of trying to imagine how a friend is doing, we can at least see their face and see how their surroundings have changed. As a social medium, and from a humanistic perspective, I think it’s hard to cling to text as some bastion of civilized interaction. Keep text for academia, but human socializing shouldn’t be confined to words alone.
...and yet, memes about millennials’ inability to socialize in public aside, never forget that modern social media itself is a crutch, only a half-decent, bastardized replacement for actually being with somebody else. Maybe instead of pining for the old glory days of text, or worrying about how good the lighting looks in our latest Snap, we should remember the reason we’re on it in the first place and take a moment to hug the ones we care about while we still physically can.
Don’t just share your life to others. Focus on sharing your life with others.