#his back could be described by a logarithm function
componentplanet · 5 years ago
Text
Fujitsu Has an Employee Who Keeps a 1959 Computer Running
If you’ve ever worked for a company, you’re probably aware that they tend to keep computers running after they should’ve been replaced with something newer, faster, and/or less buggy. Fujitsu Tokki Systems Ltd, however, takes that concept farther than most. The company still has a fully functional computer it installed back in 1959, the FACOM128B. Even more impressive, it still has an employee on staff whose job is to keep the machine in working order.
The FACOM128B is derived from the FACOM100, described as “Japan’s first practical relay-based automatic computer.” The 100, an intermediate predecessor known as the 128A, and the 128B were classified as electromechanical computers based on the same kind of relays that were typically used in telephone switches. Technologically, the FACOM 128B wasn’t particularly cutting-edge even when constructed; vacuum tube designs were already becoming popular by the mid-1950s. Most of the computers that used electromechanical relays were early efforts, like the Harvard Mark I (built in 1944), or one-off machines rather than commercialized designs.
Relay computers did have advantages, however, even in the mid-to-late 1950s. They were not as fast as vacuum-tube-powered machines, but they were significantly more reliable. Performance also appears to have continued to improve in these designs, though finding exact comparison figures for performance on early computers can be difficult. Software, as we understand the term today, barely existed in the 1950s. Not all computers were capable of storing programs, and computers were often custom-built for specific purposes as unique designs, with significant differences in basic parameters.
Wikipedia notes, however, that the Harvard Mark I was capable of “3 additions or subtractions in a second. A multiplication took 6 seconds, a division took 15.3 seconds, and a logarithm or a trigonometric function took over one minute.” The FACOM128B was faster than this, with 5-10 additions or subtractions per second. Division and multiplication were also significantly faster.
Image and data from the IPSJ Computer Museum
The man responsible for maintaining the FACOM128B, Tadao Hamada, believes that the work he does to keep the system running is a vital part of protecting Japan’s computing heritage and making sure future students can see functional examples of where we came from, not just collections of parts in a box. Hamada has pledged to maintain the system forever. A year ago, the FACOM128B was registered as “Essential Historical Materials for Science and Technology” by the Japanese National Museum of Nature and Science. The goal of the museum, according to Fujitsu, is “to select and preserve materials representing essential results in the development of science and technology, that are important to pass on to future generations, and that have had a remarkable impact on the shape of the Japanese economy, society, culture, and the lifestyles of its citizens.”
A video of the FACOM128B in action can be seen below:
[YouTube video embed]
The FACOM128B was used to design camera lenses and the YS-11, the first and only post-war airliner to be wholly developed and manufactured in Japan until the Mitsubishi SpaceJet. While the YS-11 aircraft was not commercially successful, this wasn’t the result of poor computer modeling; the FACOM128B was considered to be a highly reliable computer. Fujitsu’s decision to keep the machine in working order was itself part of a larger program, begun in 2006. The company writes:
The Fujitsu Relay-type Computer Technology Inheritance Project began activities in October 2006, with the goal of conveying the thoughts and feelings of the technical personnel involved in its development and production to the next generation by continuing to operate the relay-type computer. In this project, the technical personnel involved in the design, production, maintenance, and operation of the computer worked with current technical personnel to keep both the FACOM128B, which is fast approaching its 60th anniversary, and its sister machine, the FACOM138A, in an operational state.
Credit: Fujitsu
Hamada has been working on the electromechanical computer since the beginning of this program. He notes that in the beginning, he had to learn how to translate the diagrams the machine’s original operators had used. Asked why he believes maintaining the machine is so important, Hamada said: “If the computer does not work, it will become a mere ornament. What people feel and what they see are different among different individuals. The difference cannot be identified unless it is kept operational.”
It’s always interesting to revisit what’s been done with older hardware or off-the-wall computer projects, and I can actually see Hamada’s point. Sometimes, looking at older or different technology is a window into how a device functions. Other times, it gives you insight into the minds of the people that built the machine and the problems they were attempting to solve.
One of my favorite off-the-wall projects was the Megaprocessor back in 2016, a giant CPU you could actually see, with each individual block implemented in free-standing panels. Being able to see data being passed across a physical bus is an excellent way to visualize what’s happening inside a CPU core. While maintaining the FACOM128B doesn’t offer that kind of access, it does illustrate how computers worked when we were building them from very different materials, with very different strategies, than we use today.
Update (5/18/2020): Since we first ran this story, YouTuber CuriousMarc arranged for a visit to Fujitsu and an extensive discussion of the machine. You can see his full video below. It’s a bit lengthy, but it dives into the history of the system and Hamada himself.
[YouTube video embed]
Now Read:
Meet the Megaprocessor: A 20kHz behemoth CPU you can actually see in action
Apollo Guidance Computer Restored, Used to Mine Bitcoin
Inside IBM’s $67 billion SAGE, the largest computer ever built
from ExtremeTech: https://www.extremetech.com/computing/296022-fujitsu-has-an-employee-dedicated-to-keeping-a-1959-computer-up-and-running (via Blogger: http://componentplanet.blogspot.com/2020/05/fujitsu-has-employee-who-keeps-1959.html)
filiplig · 7 years ago
Text
Gleick, James - The Information: A History, a Theory, a Flood
p. 23
In the name of speed, Morse and Vail had realized that they could save strokes by reserving the shorter sequences of dots and dashes for the most common letters. But which letters would be used most often? Little was known about the alphabet’s statistics. In search of data on the letters’ relative frequencies, Vail was inspired to visit the local newspaper office in Morristown, New Jersey, and look over the type cases. He found a stock of twelve thousand E’s, nine thousand T’s, and only two hundred Z’s. He and Morse rearranged the alphabet accordingly. They had originally used dash-dash-dot to represent T, the second most common letter; now they promoted T to a single dash, thus saving telegraph operators uncountable billions of key taps in the world to come. Long afterward, information theorists calculated that they had come within 15 percent of an optimal arrangement for telegraphing English text.
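As a rough back-of-the-envelope illustration of the savings described above (this sketch is mine, not the book's), the snippet below assumes the modern Morse timing convention of a one-unit dot, a three-unit dash, and one-unit gaps between elements, and uses the type-case counts Vail found:

```python
# Illustration only: modern Morse timing units are an assumption here
# (dot = 1 unit, dash = 3 units, 1-unit gap between elements of a letter).

def duration(code: str) -> int:
    """Transmission time of one character, in dot-units."""
    element = {".": 1, "-": 3}
    units = sum(element[c] for c in code)   # time spent keying elements
    units += len(code) - 1                  # gaps between elements
    return units

# Letter counts Vail found in the Morristown type cases (from the excerpt).
type_case = {"E": 12_000, "T": 9_000, "Z": 200}

old_t = duration("--.")   # T before the revision (dash-dash-dot)
new_t = duration("-")     # T after being promoted to a single dash

print(f"old T: {old_t} units, new T: {new_t} units")
print(f"savings over {type_case['T']} T's: {(old_t - new_t) * type_case['T']} units")
```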
p. 27
Neither Kele nor English yet had words to say, allocate extra bits for disambiguation and error correction. Yet this is what the drum language did. Redundancy—inefficient by definition—serves as the antidote to confusion. It provides second chances. Every natural language has redundancy built in; this is why people can understand text riddled with errors and why they can understand conversation in a noisy room.
p. 27
After publishing his book, John Carrington came across a mathematical way to understand this point. A paper by a Bell Labs telephone engineer, Ralph Hartley, even had a relevant-looking formula: H = n log s, where H is the amount of information, n is the number of symbols in the message, and s is the number of symbols available in the language. [...] The formula quantified a simple enough phenomenon (simple, anyway, once it was noticed): the fewer symbols available, the more of them must be transmitted to get across a given amount of information. For the African drummers, messages need to be about eight times as long as their spoken equivalents.
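Hartley's formula is easy to play with directly. The sketch below uses invented message sizes, and it ignores the redundancy discussed above, which is what pushes the drummers' real-world factor to roughly eight rather than the bare formula's prediction:

```python
import math

def hartley_information(n: int, s: int) -> float:
    """Hartley's H = n log s: n symbols drawn from an alphabet of size s.
    Using base-2 logs, H comes out in bits."""
    return n * math.log2(s)

def length_needed(H: float, s: int) -> float:
    """Invert the formula: how many symbols of an s-symbol alphabet
    are needed to carry H bits of information?"""
    return H / math.log2(s)

# Hypothetical numbers: a 20-symbol message from a 64-symbol alphabet,
# re-expressed with only 2 symbols (say, two drum tones).
H = hartley_information(20, 64)   # 120 bits
print(length_needed(H, 2))        # 120 two-tone beats, i.e. 6x longer
```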
p. 74
In Napier’s mind was an analogy: differences are to ratios as addition is to multiplication. His thinking crossed over from one plane to another, from spatial relationships to pure numbers. Aligning these scales side by side, he gave a calculator a practical means of converting multiplication into addition—downshifting, in effect, from the difficult task to the easier one. In a way, the method is a kind of translation, or encoding. The natural numbers are encoded as logarithms. The calculator looks them up in a table, the code book. In this new language, calculation is easy: addition instead of multiplication, or multiplication instead of exponentiation. When the work is done, the result is translated back into the language of natural numbers. Napier, of course, could not think in terms of encoding.
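A minimal sketch of the encoding the passage describes, in Python rather than Napier's tables: the numbers are translated into logarithms, the hard operation becomes an easy one, and the result is translated back.

```python
import math

def multiply_via_logs(a: float, b: float) -> float:
    # "Encode" the natural numbers into the language of logarithms...
    log_a, log_b = math.log10(a), math.log10(b)
    # ...where multiplication becomes addition...
    log_product = log_a + log_b
    # ...then "decode" the result back into a natural number.
    return 10 ** log_product

print(multiply_via_logs(347, 29))   # ~10063.0, i.e. 347 * 29
```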
p. 136
Signs and symbols were not just placeholders; they were operators, like the gears and levers in a machine. Language, after all, is an instrument. It was seen distinctly now as an instrument with two separate functions: expression and thought. Thinking came first, or so people assumed. To Boole, logic was thought—polished and purified.
p. 148
To eliminate Russell’s paradox Russell took drastic measures. The enabling factor seemed to be the peculiar recursion within the offending statement: the idea of sets belonging to sets. Recursion was the oxygen feeding the flame. In the same way, the liar paradox relies on statements about statements. “This statement is false” is meta-language: language about language. Russell’s paradoxical set relies on a meta-set: a set of sets. So the problem was a crossing of levels, or, as Russell termed it, a mixing of types. His solution: declare it illegal, taboo, out of bounds. No mixing different levels of abstraction. No self-reference; no self-containment. The rules of symbolism in Principia Mathematica would not allow the reaching-back-around, snake-eating-its-tail feedback loop that seemed to turn on the possibility of self-contradiction. This was his firewall.
p. 163
It seemed intuitively clear that the amount of information should be proportional to the number of symbols: twice as many symbols, twice as much information. But a dot or dash—a symbol in a set with just two members—carries less information than a letter of the alphabet and much less information than a word chosen from a thousand-word dictionary. The more possible symbols, the more information each selection carries.
p. 170
Turing was programming his machine, though he did not yet use that word. From the primitive actions—moving, printing, erasing, changing state, and stopping—larger processes were built up, and these were used again and again: “copying down sequences of symbols, comparing sequences, erasing all symbols of a given form, etc.” The machine can see just one symbol at a time, but can in effect use parts of the tape to store information temporarily. As Turing put it, “Some of the symbols written down … are just rough notes ‘to assist the memory.’” The tape, unfurling to the horizon and beyond, provides an unbounded record. In this way all arithmetic lies within the machine’s grasp. Turing showed how to add a pair of numbers—that is, he wrote out the necessary table of states. He showed how to make the machine print out (endlessly) the binary representation of π. He spent considerable time working out what the machine could do and how it would accomplish particular tasks. He demonstrated that this short list covers everything a person does in computing a number. No other knowledge or intuition is necessary. Anything computable can be computed by this machine. Then came the final flourish. Turing’s machines, stripped down to a finite table of states and a finite set of input, could themselves be represented as numbers. Every possible state table, combined with its initial tape, represents a different machine. Each machine itself, then, can be described by a particular number—a certain state table combined with its initial tape. Turing was encoding his machines just as Gödel had encoded the language of symbolic logic. This obliterated the distinction between data and instructions: in the end they were all numbers. For every computable number, there must be a corresponding machine number.
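A toy, table-driven sketch of such a machine (my own construction, not Turing's original tables): the state table maps a (state, scanned symbol) pair to what to write, which way to move the head, and which state to enter next. This particular table prints the endless sequence 0 1 0 1 …, so the simulation is capped at a fixed number of steps.

```python
from collections import defaultdict

# The "table of states": (state, scanned symbol) -> (symbol to write, move, next state)
table = {
    ("a", " "): ("0", +1, "b"),   # state a on a blank square: print 0, move right
    ("b", " "): ("1", +1, "a"),   # state b on a blank square: print 1, move right
}

def run(table, steps=8):
    tape = defaultdict(lambda: " ")   # unbounded tape, blank by default
    head, state = 0, "a"
    for _ in range(steps):
        write, move, next_state = table[(state, tape[head])]
        tape[head] = write
        head += move
        state = next_state
    return "".join(tape[i] for i in range(min(tape), max(tape) + 1))

print(run(table))   # -> "01010101"
```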
p. 172
So Turing’s computer—a fanciful, abstract, wholly imaginary machine—led him to a proof parallel to Gödel’s. Turing went further than Gödel by defining the general concept of a formal system. Any mechanical procedure for generating formulas is essentially a Turing machine. Any formal system, therefore, must have undecidable propositions. Mathematics is not decidable. Incompleteness follows from uncomputability.
p. 178
Information is uncertainty, surprise, difficulty, and entropy:  “Information is closely associated with uncertainty.” Uncertainty, in turn, can be measured by counting the number of possible messages. If only one message is possible, there is no uncertainty and thus no information.
Some messages may be likelier than others, and information implies surprise. Surprise is a way of talking about probabilities. If the letter following t (in English) is h, not so much information is conveyed, because the probability of h was relatively high.
“What is significant is the difficulty in transmitting the message from one point to another.” Perhaps this seemed backward, or tautological, like defining mass in terms of the force needed to move an object. But then, mass can be defined that way.
Information is entropy. This was the strangest and most powerful notion of all. Entropy—already a difficult and poorly understood concept—is a measure of disorder in thermodynamics, the science of heat and energy.
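Shannon's measure of surprise is just the negative logarithm of probability. The probabilities below are invented for illustration, not measured English statistics:

```python
import math

def surprise_bits(p: float) -> float:
    """Self-information of an event with probability p, in bits."""
    return -math.log2(p)

print(surprise_bits(0.30))   # an 'h' after 't': likely, so only ~1.7 bits
print(surprise_bits(0.001))  # a rare letter after 't': ~10 bits of surprise
```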
p. 185
This is where the statistical structure of natural languages reenters the picture. If the thousand-character message is known to be English text, the number of possible messages is smaller—much smaller. Looking at correlations extending over eight letters, Shannon estimated that English has a built-in redundancy of about 50 percent: that each new character of a message conveys not 5 bits but only about 2.3. Considering longer-range statistical effects, at the level of sentences and paragraphs, he raised that estimate to 75 percent [...]
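To see roughly where those figures come from (the arithmetic here is approximate and mine, not Shannon's own worked numbers, and the exact per-character values depend on whether you start from a rounded 5 bits or from the log of a 27-character alphabet):

```python
import math

# Treat English as 27 symbols (26 letters plus the space).
raw = math.log2(27)                  # ~4.75 bits per character if all were equally likely
per_char_8letter = raw * (1 - 0.50)  # ~2.4 bits once 8-letter correlations are counted
per_char_long    = raw * (1 - 0.75)  # ~1.2 bits with sentence/paragraph-level structure

print(f"raw: {raw:.2f} bits/char")
print(f"~50% redundant: {per_char_8letter:.2f} bits/char")
print(f"~75% redundant: {per_char_long:.2f} bits/char")
print(f"1000 characters of English (75% redundant): ~{1000 * per_char_long:.0f} bits")
```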
p. 186
Quantifying predictability and redundancy in this way is a backward way of measuring information content. If a letter can be guessed from what comes before, it is redundant; to the extent that it is redundant, it provides no new information. If English is 75 percent redundant, then a thousand-letter message in English carries only 25 percent as much information as one thousand letters chosen at random. Paradoxical though it sounded, random messages carry more information. The implication was that natural-language text could be encoded more efficiently for transmission or storage.
p. 199
There was a difference in emphasis between Shannon and Wiener. For Wiener, entropy was a measure of disorder; for Shannon, of uncertainty. Fundamentally, as they were realizing, these were the same. The more inherent order exists in a sample of English text—order in the form of statistical patterns, known consciously or unconsciously to speakers of the language—the more predictability there is, and in Shannon’s terms, the less information is conveyed by each subsequent letter. When the subject guesses the next letter with confidence, it is redundant, and the arrival of the letter contributes no new information. Information is surprise.
p. 216
A hot stone plunged into cold water can generate work—for example, by creating steam that drives a turbine—but the total heat in the system (stone plus water) remains constant. Eventually, the stone and the water reach the same temperature. No matter how much energy a closed system contains, when everything is the same temperature, no work can be done. It is the unavailability of this energy—its uselessness for work—that Clausius wanted to measure. He came up with the word entropy, formed from Greek to mean “transformation content.”
p. 217
It became a totemic concept. With entropy, the “laws” of thermodynamics could be neatly expressed: First law: The energy of the universe is constant. Second law: The entropy of the universe always increases. There are many other formulations of these laws, from the mathematical to the whimsical, e.g., “1. You can’t win; 2. You can’t break even either.”
p. 218
Order is subjective—in the eye of the beholder. Order and confusion are not the sorts of things a mathematician would try to define or measure. Or are they? If disorder corresponded to entropy, maybe it was ready for scientific treatment after all.
p. 222
The demon sees what we cannot—because we are so gross and slow—namely, that the second law is statistical, not mechanical. At the level of molecules, it is violated all the time, here and there, purely by chance. The demon replaces chance with purpose. It uses information to reduce entropy.
p. 224
But information is physical. Maxwell’s demon makes the link. The demon performs a conversion between information and energy, one particle at a time. Szilárd—who did not yet use the word information—found that, if he accounted exactly for each measurement and memory, then the conversion could be computed exactly. So he computed it. He calculated that each unit of information brings a corresponding increase in entropy—specifically, by k log 2 units. Every time the demon makes a choice between one particle and another, it costs one bit of information. The payback comes at the end of the cycle, when it has to clear its memory (Szilárd did not specify this last detail in words, but in mathematics). Accounting for this properly is the only way to eliminate the paradox of perpetual motion, to bring the universe back into harmony, to “restore concordance with the Second Law.”
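Attaching units to "k log 2" (a standard calculation, though not spelled out in the quote): with k the Boltzmann constant and the logarithm taken as natural, the entropy per bit is about 9.6e-24 J/K, and at an assumed room temperature of 300 K the corresponding energy, the figure later associated with the Landauer limit, is about 2.9e-21 J.

```python
import math

k = 1.380649e-23          # Boltzmann constant, J/K
T = 300                   # assumed room temperature, kelvin

entropy_per_bit = k * math.log(2)       # Szilard's k log 2: ~9.57e-24 J/K
energy_per_bit  = k * T * math.log(2)   # minimum energy per bit at T: ~2.9e-21 J

print(f"entropy per bit: {entropy_per_bit:.3e} J/K")
print(f"energy per bit at {T} K: {energy_per_bit:.3e} J")
```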
p. 229
The earth is not a closed system, and life feeds upon energy and negative entropy leaking into the earth system.… The cycle reads: first, creation of unstable equilibriums (fuels, food, waterfalls, etc.); then use of these reserves by all living creatures. 
Living creatures confound the usual computation of entropy. More generally, so does information. “Take an issue of The New York Times, the book on cybernetics, and an equal weight of scrap paper,” suggested Brillouin. “Do they have the same entropy?” If you are feeding the furnace, yes. But not if you are a reader. There is entropy in the arrangement of the ink spots. For that matter, physicists themselves go around transforming negative entropy into information, said Brillouin. From observations and measurements, the physicist derives scientific laws; with these laws, people create machines never seen in nature, with the most improbable structures.
p. 236
By now the word code was so deeply embedded in the conversation that people seldom paused to notice how extraordinary it was to find such a thing—abstract symbols representing arbitrarily different abstract symbols—at work in chemistry, at the level of molecules. The genetic code performed a function with uncanny similarities to the metamathematical code invented by Gödel for his philosophical purposes. Gödel’s code substitutes plain numbers for mathematical expressions and operations; the genetic code uses triplets of nucleotides to represent amino acids. Douglas Hofstadter was the first to make this connection explicitly, in the 1980s: “between the complex machinery in a living cell that enables a DNA molecule to replicate itself and the clever machinery in a mathematical system that enables a formula to say things about itself.”
p. 256
“Memes have not yet found their Watson and Crick,” said Dawkins; “they even lack their Mendel.”
p. 259
Wheeler said this much, at least: “Probability, like time, is a concept invented by humans, and humans have to bear the responsibility for the obscurities that attend it.”
p. 267
“At each given moment there is only a fine layer between the ‘trivial’ and the impossible,” Kolmogorov mused in his diary.
p. 268
The three are fundamentally equivalent: information, randomness, and complexity—three powerful abstractions, bound all along like secret lovers.
p. 271
It is another recursive, self-looping twist. This was Chaitin’s version of Gödel’s incompleteness. Complexity, defined in terms of program size, is generally uncomputable. Given an arbitrary string of a million digits, a mathematician knows that it is almost certainly random, complex, and patternless—but cannot be absolutely sure.
p. 272
As Chaitin put it, “God not only plays dice in quantum mechanics and nonlinear dynamics, but even in elementary number theory.”
p. 272
Kolmogorov-Chaitin (KC) complexity is to mathematics what entropy is to thermodynamics: the antidote to perfection. Just as we can have no perpetual-motion machines, there can be no complete formal axiomatic systems.
p. 280
According to this measure, a million zeroes and a million coin tosses lie at opposite ends of the spectrum. The empty string is as simple as can be; the random string is maximally complex. The zeroes convey no information; coin tosses produce the most information possible. Yet these extremes have something in common. They are dull. They have no value. If either one were a message from another galaxy, we would attribute no intelligence to the sender. If they were music, they would be equally worthless. Everything we care about lies somewhere in the middle, where pattern and randomness interlace.
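A crude empirical echo of that point: compressed size is only a loose stand-in for Kolmogorov-Chaitin complexity (which is uncomputable), but it separates the two extremes clearly.

```python
import os
import zlib

n = 1_000_000
zeroes = b"0" * n            # a million identical characters: maximal pattern
random_bytes = os.urandom(n)  # a million random bytes: no pattern to exploit

print(len(zlib.compress(zeroes)))        # on the order of a kilobyte
print(len(zlib.compress(random_bytes)))  # roughly n -- essentially incompressible
```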
p. 282
The more energy, the faster the bits flip. Earth, air, fire, and water in the end are all made of energy, but the different forms they take are determined by information. To do anything requires energy. To specify what is done requires information. —Seth Lloyd (2006)
componentplanet · 5 years ago
Text
How an Article on Game Difficulty Explained My Own Modding, 18 Years Later
As the pandemic leaves an awful lot of people at home with not much to do, we’re resurfacing some older coverage on topics and news that aren’t particularly time-sensitive. Last fall, I read an article that literally explained to me why I got into modding two decades ago. If you’ve been a PC modder yourself or simply enjoy using mods, you might find it an interesting discussion of the topic.
A game’s difficulty level can make or break the title. Games that are perceived as too difficult become boring, depressing grinds, while games that are too easy become boring and tedious, with little challenge. One of the most profound differences between World of Warcraft Classic and Retail is the difference in difficulty. Of course, every player has their own ideas about how hard a game should be, but there’s no arguing that the difficulty of a title is important.
But according to game developer Jennifer Scheurle, game developers think about game difficulty very differently than players do, which may be part of why conversations on this topic sometimes seem to break down. Her piece resonated with me, partly because it reminded me of the reasons why I became a game modder, once upon a time. According to Scheurle, difficulty is all about trust.
“At the core of the difference between how game designers and players speak about difficulty,” she writes, “is the fact that we discuss it in terms of skill progression. All difficulty design is essentially that: crafting how players will learn, apply skills, and progress through challenges.”
Graphic by Jennifer Scheurle for Polygon
She then walks through examples of how this plays out in games, using the Dark Souls series as an example. DS games ask you to accept that you will die (frequently) as part of learning how encounters function. You aren’t simply being killed by mechanics you can’t master, beat, or counter, you’re learning how the game functions and how to counter incoming attacks. The game, in turn, obeys its own internal rules. Players often become angry at a game if they feel it isn’t holding up its end of the bargain in some particular, whether that refers to drop rates, spawn rates, boss difficulty, or the damage you take versus the damage you deal. She also discusses the importance of how a game teaches players to play it, and the various in-game ways that developers communicate game difficulty and associated rules. It’s a very different view of the topic than simply boiling it down into whether a game is “hard” or “easy,” and it leads to a much more nuanced view of how and why different titles may put difficulty in different places.
The article resonated with me in part because it describes part of why I became a Diablo II modder and taught me something about my own motivation. I don’t want to seem as if I’m hijacking Scheurle’s excellent discussion of game difficulty because it’s worth a read in its own right, but I’m going to switch gears a bit and talk about my own experience. To put it simply: I was pissed.
Diablo II’s Trust Fail
This was in the early days of Diablo II, before the Lord of Destruction expansion had even come out. Patch 1.03 dropped not long before I started modding, to put a date on things. On Normal difficulty, Diablo II worked pretty well, but as you progressed into Nightmare and Hell difficulty modes, deficiencies became apparent.
Back then, Diablo II used a linear leveling curve in which the amount of XP you needed to gain for each additional level increased by a flat amount — the amount you needed for your previous level, plus a flat modifier. This was exacerbated by a leveling penalty, introduced in Nightmare, in which you lost XP gained towards your next level if your character died. You couldn’t drop a level due to this XP loss, but you could theoretically be 99 percent of the way to Lvl 50 and fall back to 0 percent through repeated deaths. The net result of this was that the amount of time required for each additional level increased sharply, and this became increasingly noticeable as you moved into the later game.
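To make the shape of that curve concrete, here is a sketch with made-up numbers (the article doesn’t give vanilla Diablo II’s actual XP constants): each level costs what the previous one did plus a flat increment, so the per-level cost grows linearly and the cumulative XP needed to reach level N grows roughly with N squared.

```python
def linear_curve(levels: int, base: int = 500, increment: int = 500):
    """Cumulative XP needed to reach each level under a linear leveling curve."""
    cost, total, table = base, 0, []
    for _ in range(levels):
        total += cost
        table.append(total)
        cost += increment   # "the amount you needed for your previous level,
                            #  plus a flat modifier"
    return table

curve = linear_curve(50)
print(curve[9], curve[24], curve[49])   # cumulative XP to hit levels 10, 25, 50
```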
Now for the coup de grace: The game was poorly balanced outside of Normal difficulty. I became a game modder specifically because my Barbarian character with maximum Fire Resist was being one-shotted by mini-bosses with Fire Aura even when he used abilities that temporarily increased his HP. These mini-bosses and bosses could one-shot a character virtually as soon as you saw them. Death meant losing a portion of gold and dropping equipped items. Attempting to retrieve those items (using whatever alternate gear you had access to) was virtually guaranteed to get you killed at least once more because you’d have to drag monsters away from your corpse in order to try and retrieve what you originally had. Mini-bosses could also spawn with these modifiers in critical areas, where it was exceptionally difficult to move them away from a critical spawn point. There was no way to see the exact location of the fire aura on the ground; you knew you’d touched it when you died.
It was cheap. That’s what I called it. I didn’t consider it any kind of legitimate difficulty spike. It just felt like a way for Blizzard to make the game harder by killing players in a manner they couldn’t even fight. I became a modder because I was angry about the way that these imbalances had changed the game. I felt betrayed.
Looking back (and using Scheurle’s article for reference), I’ve realized that I was angry because Diablo II had broken trust with me. Some of these flaws existed in Normal as well, but they weren’t as apparent due to the influence of how other scaling factors impacted the title. Some of the changes between Normal and later difficulties that impacted how poorly the game scaled included the much-slower pace of leveling and the fact that there were no unique items in-game for the Nightmare and Hell difficulty modes. This made it pointless to spend gold on gambling (since gambling, at the time, only produced normal weapons). The slow speed of leveling meant that one of a player’s primary means of gaining power was substantially curtailed. There were also notable power imbalances created by the use of percentages for some metrics (like life steal). In original vanilla D2, life steal was absurdly overpowered — and absolutely essential to surviving the late game. Certain classes were locked into endgame strategies as a result of bad math and poorly balanced game mechanics. It grated on me.
The changes to Diablo II from Normal to later difficulties weren’t just the result of Blizzard trying to be jerks. It’s common for RPGs to have poorly balanced endgames because most people do not play them for long enough to actually experience the endgame. This was a topic of discussion around Skyrim when that game was new, and it explains much of what happened with Diablo II way back then.
I developed the Fusion 2 mod for Diablo II, followed by a much larger overhaul, Cold Fusion. I and a team of three other people — Justin Gash, John Stanford, and Matt Wesson — cumulatively poured several thousand man-hours of development time into Cold Fusion. I led the effort, which was a core part of my best friend’s senior project in computer science and consumed no small chunk of my own senior year in college. I’m not sure the game files exist on the internet any longer, but you can see the original website archived by the Wayback Machine. Fair warning: I was not a web designer. Still, it gives some idea of the scope of the project, if you’re familiar with Diablo II.
While I don’t expect anyone reading this to have ever played the mod — I never released an LoD-compatible version of the project — it was a pretty major part of my life for the time I worked on it. We overhauled the entire title, tweaking drop rates, fixing bugs, and implementing a new leveling curve, a new difficulty curve, new monsters, and new unique items intended for both Nightmare and Hell difficulty levels. We developed new audio effects, visuals, and skills, using pieces of code that the developers had left in place in the engine as well as audio effects another friend created. We pulled certain unique items over from Diablo I (with Diablo I art) and reworked the skill trees to better balance the game. Our goal, in every scenario, was to build a more consistent Diablo II that didn’t just funnel characters into a single endgame build but allowed other skills to compete as well. I was quite proud of the fact that when Lord of Destruction came out, it adjusted Diablo II in some of the same ways we had, and even introduced new spells that were similar to some of the ones we built. I’m absolutely not claiming that Blizzard took inspiration from our work — it was just neat to see that we’d been thinking along the same lines as people at the company.
For example: We implemented a logarithmic curve for CF’s level scaling — one that was designed to allow a player to run the game once at each difficulty level and finish “Hell” near maximum level. Blizzard wanted a game that would require many, many, many runs through maximum difficulty to reach Lvl 99, and used a differently shaped curve to do it — but they still moved away from the linear curve they had used in the early phases of the title when they launched the expansion, Lord of Destruction.
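For a hypothetical side-by-side of the two shapes (the numbers are invented; neither Cold Fusion’s nor Blizzard’s real tables appear in this article): with a linear curve the per-level XP cost keeps climbing, while a logarithm-shaped curve flattens out, so late levels stop taking dramatically longer than early ones.

```python
import math

def linear_cost(level: int, base: float = 500, increment: float = 500) -> float:
    """Per-level XP cost under a linear curve: keeps growing by a flat amount."""
    return base + increment * (level - 1)

def log_cost(level: int, base: float = 500, scale: float = 2000) -> float:
    """Per-level XP cost under a logarithm-shaped curve: flattens out."""
    return base + scale * math.log(level)

for level in (2, 10, 25, 50):
    print(f"level {level:>2}: linear {linear_cost(level):>8.0f}  "
          f"log-shaped {log_cost(level):>8.0f}")
```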
Until now, I never really understood why I was so unhappy with the base game in the first place. Now I do. I felt as though the collective changes to Diablo II that happened after Normal weren’t just the result of making the game harder — they made the game different, in ways that felt like they’d broken the trust Blizzard had established in building the game.
It’s not often that you discover the explanation for why you spent a few thousand hours rebuilding someone else’s project in an article written 18 years after the fact. I suppose Cold Fusion has always felt a bit like a road-not-taken path for me. It had its fans, but it was one reasonably popular mod among many, not a DOTA or a Counter-Strike. Either way, I appreciate Scheurle’s discussion of difficulty and how developers think about the topic. It shed some light on an episode of my own life.
Now Read:
Meet the PiS2: A PS2 Portable Built with a Raspberry Pi 2 Server
World of Warcraft Classic vs. Retail, Part 1: Which Early Game Plays Better?
PC Gamers Who Didn’t Play Classic Console Games Missed Out on Great Experiences
from ExtremeTech: https://www.extremetech.com/gaming/299138-how-an-article-on-game-difficulty-explained-my-own-modding-18-years-later (via Blogger: http://componentplanet.blogspot.com/2020/04/how-article-on-game-difficulty.html)