#virtual cache
Text
My L1 cache clears itself when I kiss a computer, genuinely can't get a cache hit for the life of me. Takes me hundreds of thousands of clock cycles to get back to where I was
fun fact: girls can have computers
#Dont even get me started on my L2 L3 and L4 cache#it puts all virtual memory onto disk#kissing is still fun though#reset my cpu memory pls
22K notes
Text
TECHNOLOGY ID PACK
NAMES︰ admin. ajax. alexa. am. atari. audio. auto. bailey. binary. blank. blu. blue. bluesse. browser. browsette. bug. byte. cache. calware. chip. circe. click. clicker. clickie. clicky. cloud. coda. code. codette. codie. cody. computette. crypt. cursor. cy. cyber. cybernet. cybernetica. cyberweb. cypher. cypherre. data. dell. digi. digitalia. digitelle. digitesse. disc. dot. electronica. electronique. emoticon. emoticonnie. fax. file. gig. gizmo. glitch. glitche. glitchesse. glitchette. graphique. hacker. hal. halware. hijack. index. informationne. intelligette. internette. interweb. java. javascript. juno. key. link. linuxe. lotus. lovebytes. mac. mal. malakai. malware. malwaria. memorette. memorie. meta. mic. micah. mickey. morphe. mouse. mousette. myspace. nano. neo. net. netette. nett. netty. paige. pascal. payton. peyton. pixel. programatha. programette. programme. pulse. reboot. rom. router. ruby. sam. sammy. screene. screenette. sean. shock. solitaire. spy. static. stutter. talia. tap. tecca. tech. techette. tessa. tetris. trojan. troubleshoot. ts. user. vir. virus. virusse. volt. vyrus. webbe. wheatley. whirr. widget. will. wirehead. wiresse. zap. zett. zetta. zip.
PRONOUNS︰ ai/ai. alt/alt. anti/antivirus. arc/archive. audio/audio. bat/battery. beep/beep. beep/boop. bit/bit. bit/byte. blue/blue. board/board. bright/bright. brow/browser. browser/browser. brr/brr. bu/bug. bug/bug. buzz/buzz. byt/byte. byte/byte. c/cpu. charge/charger. cir/circuit. cli/click. click/clack. click/click. click/scroll. co/code. code/code. color/color. com/com. com/computer. comp/computer. compute/computer. computer/computer. cor/corrupt. corrupt/corrupt. CPU/CPU. crash/crash. cre/creeper. crtl/crtl. cy/cyber. cyb/cyber. cyber/cyber. da/data. data/data. delete/delete. di/disk. dig/digital. digi/digi. digi/digital. digital/digital. dra/drag. e/exe. electronic/electronic. enter/enter. er/error. err/error. error/error. exe/exe. fi/file. file/file. gi/gif. gli/glitch. glit/glitch. glitch/glitch. graphic/graphic. hac/hacker. hack/hack. hard/hardware. head/phone. hij/hijacker. ho/home. info/info. information/information. int/internet. intelligent/intelligence. intelligent/intelligent. inter/net. internet/internet. it/it. jpg/jpg. key/board. key/cap. key/key. key/keyboard. key/keylogger. lag/lag. lap/laptop. ligh/light. linux/linux. load/load. log/login. main/mainframe. mal/malware. me/media. memory/memorie. mon/monitor. mou/mouse. nano/nano. net/net. net/network. org/org. over/overwrite. page/page. pix/pix. pix/pixel. pixel/pixel. plu/plug. png/png. pop/popup. port/port. pow/power. pro/program. program/program. ram/ram. ran/ransom. reboot/reboot. reload/reload. res/restore. ret/retro. route/router. sca/scan. scr/scroll. scre/screen. scre/screencap. scree/screen. screen/screen. scri/script. script/script. sentient/sentience. shift/shift. site/site. skip/skip. soft/software. spa/spam. space/space. spy/spyware. stop/stop. te/tech. tech/nology. tech/tech. technology/technology. tou/touchpad. txt/txt. typ/type. upload/upload. user/user. vi/viru. vi/virus. vir/virtual. web/page. web/web. whir/whir. wi/wire. win/dow. win/window. wire/wire. wire/wired. zip/zip . 
⌨ . ☣ . ⚙ . ⚠ . 🎞 . 🎨 . 🎭 . 🎮 . 🎵 . 👀 . 👁 . 💔 . 💡 . 💢 . 💣 . 💳 . 💵 . 💻 . 💽 . 💾 . 💿 . 📀 . 📱 . 🔇 . 🔈 . 🔉 . 🔊 . 🔋 . 🔌 . 🔎 . 🖥 . 🖱 . 🗡 . 🗯 . 🛠 . 🧿 .
#pupsmail︰id packs#id pack#npt#name suggestions#name ideas#name list#pronoun suggestions#pronoun ideas#pronoun list#neopronouns#nounself#emojiself#techkin#robotkin#internetkin
453 notes
Text
Round 3 - Mammalia - Eulipotyphla
[Images: 1. a Four-toed Hedgehog; 3. a Star-nosed Mole; 4. a Hispaniolan Solenodon; plus a gif of a desman, as referenced in the text below]
(Sources - 1, 2, 3, 4)
Our next mammalian order is Eulipotyphla, sometimes called the “true insectivores” (as they used to be grouped paraphyletically with some afrotherians, colugos, and treeshrews in an order called “Insectivora”). Eulipotyphla includes the families Solenodontidae (solenodons), Talpidae (“moles”), Soricidae (“shrews”), and Erinaceidae (“hedgehogs” and “gymnures”). Yes, we have finally come to the true moles and shrews!
Eulipotyphlans resemble rodents with pointed snouts and small or reduced eyes. Scientifically, they are set apart by the lack of a cecum in the large intestine. Most are terrestrial insectivores or omnivores, and they have many sharp, spike-like teeth. Some of these animals (solenodons and shrews of the genera Sorex and Blarina) emit clicking noises, the sound waves of which bounce off objects in their vicinity. This form of echolocation helps these nearly blind animals navigate as well as find food. Eulipotyphlans also have an above-average sense of smell. Many have unusually high metabolic rates, and need to eat almost constantly. Eulipotyphla contains the majority of venomous mammals, the only others being the Platypus (Ornithorhynchus anatinus), 3 species of vampire bat, and Slow Lorises (of the genera Nycticebus and Xanthonycticebus).
Eulipotyphlans are generally solitary, highly territorial animals that only tolerate each other for breeding. Only the mother raises the young. Litter size depends on species. Solenodons only have 1-2 young per litter once a year, while shrews can have 1-11 pups per litter, and can become pregnant soon after giving birth. Baby hedgehogs (called hoglets) are born with their quills covered by a protective membrane which dries and falls off several hours after birth, allowing their sharp quills to emerge.
Eulipotyphla is one of the oldest mammalian orders, having already begun to diversify in the Late Cretaceous, before the K-Pg extinction.
Propaganda under the cut:
Many shrews have a venomous bite. They use their venom to render invertebrate prey paralyzed, caching them for sustenance in the winter months when food is scarcer. Their venom also allows them to take down prey their size or even larger, such as rodents and lizards. The European Mole (Talpa europaea), and possibly other species of mole, also have toxic saliva that allows them to cache paralyzed earthworms for later consumption. As an added measure, solenodons have grooves in their teeth which allow them to more effectively deliver venom. Fossil records show that some other now-extinct mammal groups also had this dental venom delivery system, indicating that solenodons' most distinct characteristic may have been a more general ancient mammalian characteristic that has been lost in most modern mammals and is only retained in a couple of very ancient lineages.
The contents of the venom glands of one American Short-tailed Shrew (genus Blarina) are enough to kill 200 mice.
Solenodons are often called "living fossils" because they have remained virtually unchanged for the past 76 million years.
The Hispaniolan Solenodon (Solenodon paradoxus) (image 4) was once thought to be extinct, due to its secretive and elusive behavior. The Hispaniolan Solenodon and the rat-like Hispaniolan Hutia (Plagiodontia aedium) live in the same habitats and are the only surviving mammals native to Hispaniola.
The Cuban Solenodon (Atopogale cubana) is endangered due to predation from invasive animals like domestic cats, domestic dogs, and the Small Indian Mongoose (Urva auropunctata) which was introduced to Cuba to control snakes and rodents. It is also threatened by deforestation as well as habitat degradation due to logging and mining. The animal can take a long time to recover because it only breeds a single litter of 1-2 young per year. Unfortunately, solenodons are not very charismatic, and very little conservation attention is given to the species.
Desmans (see gif above) are uniquely aquatic moles, though they excavate dry sleeping chambers. They have waterproof undercoats and oily guard hairs, elongated and flattened tails, and webbed paws to aid in swimming.
While the Star-nosed Mole (Condylura cristata) (image 3) is known to share its burrow, other moles are very territorial, and can engage in extraordinarily fast battles.
The Star-nosed Mole is adapted for both subterranean life and for swimming. Star-nosed Moles are able to smell underwater, accomplished by exhaling air bubbles onto objects or scent trails and then inhaling the bubbles to carry scents back into the nose.
A report in the journal Nature gives the Star-nosed Mole the title of fastest-eating mammal, taking as little as 120 milliseconds (average: 227 ms) to identify and consume individual food items. Its brain decides in approximately eight milliseconds if prey is edible or not.
The Small Japanese Mole (Mogera imaizumii) is extinct in central Tokyo, but still found on the grounds of the Imperial Palace.
The Etruscan Shrew (Suncus etruscus) is the smallest known terrestrial mammal, with an adult body length of about 4 cm (1.6 in), excluding the tail. On average, they weigh only about 1.8 g (0.063 oz). Like other shrews, it has a very fast metabolism, eating about 1.5–2 times its own body weight per day.
In some shrew species, exposed areas of the teeth are dark red due to the presence of iron in their tooth enamel. The iron reinforces the surfaces that are exposed to the most stress, which helps prolong the life of their teeth.
Shrews are considered beneficial to humans, as they are voracious predators of many insects and rodents that humans consider pests, such as cockroaches and House Mice (Mus musculus).
Shrews do this really cute thing where the babies will each bite onto the tail of the one in front of them and all follow their mom in a line so they don’t get lost. This is called “caravanning.” I call it a Shrew-shrew Train. (I’d like to see what the people who get upset about those joint child daycare leashes think of this.)
The Dalat Gymnure (Hylomys macarong) gets its species name, macarong, from the Vietnamese word for “vampire”, Ma cà rồng. This is a reference to the animals’ prominent long fangs, specifically the first upper incisors, that distinguish mature males of this species.
Hedgehogs (subfamily Erinaceinae) are one of the many mammal groups to convergently evolve spines from hair. Unlike the quills of a porcupine, hedgehog spines do not easily detach from their bodies. However, an immature hedgehog's spines normally fall out as they are replaced with adult spines. The animal will roll into a tight spiny ball when threatened, tucking in its furry face, feet, and belly. Some lightweight desert hedgehog species with fewer spines are more likely to flee or attack, ramming an intruder with their spines, rolling up only as a last resort.
Since 2000, the European Hedgehog (Erinaceus europaeus) population in Great Britain has been declining rapidly, down by 30%-75%. European Hedgehogs are common roadkill in Britain, especially during the breeding season when they are wandering for a mate.
Hedgehogs can suffer from a unique condition called balloon syndrome, in which gas is trapped under the hedgehog’s loose skin from injury or infection, causing the animal to inflate. Trying to research this syndrome can be difficult, as searching “hedgehog inflation” can often yield unintended results.
The Four-toed Hedgehog (Atelerix albiventris) (image 1) is a highly energetic predator, sometimes covering miles of ground in a single night as it forages for insects, grubs, snails, spiders, some plant matter, and even small vertebrates. It has a high tolerance for toxins and has been recorded consuming scorpions and even venomous snakes.
Hedgehogs are usually a welcome visitor to gardens, as they eat many garden pests such as beetles, slugs, and grasshoppers, and only eat a very small amount of plant matter.
Moles and hedgehogs have natural immunity against some snake venoms due to the protein erinacin in their muscles (though in such small amounts that a viper bite may still be fatal).
93 notes
Text
How I got my Sims 2 game working on my new Windows 11 laptop, step by step.
OKAY. Gadies and Lentlemen.
I have seen plenty of these around, but I wanted to share my process!
So I bought an MSI creator laptop. The specs are as follows:
CPU: 13th gen Intel i7-13700H
RAM: 16 GB DDR5
GPU: NVIDIA GeForce RTX 4050 laptop GPU
Step One: Fresh install. I used the EA App to install the UC version on my new laptop.
Step Two: Download and install the RPC launcher. This will automatically apply the 4GB patch. Run it as administrator, but not in any kind of compatibility mode, which renders the 4GB patch useless.
Step Three: Download and install Graphics Rules Maker. I used all of the recommended settings, aside from texture memory, which I set to 2048 MB for reasons that will become clear later.
Step Four: Memory allocation fix (empty standby list). Explanation here.
Step Five: Setting virtual memory. I used instructions from this post at MTS - my virtual memory paging file is now a minimum of 25000 MB and a maximum of 30000 MB. You'll need to adjust to your system's own specs.
Step Six: In-game settings: Shadows Off, Neighbours Off, Lighting Medium. RPC settings: Apply 4GB Patch, Automatically Clean Cache, Lot Imposters Optimized, Sim/Object Shadows Classic. I also have lot view ocean reflections ticked.
If your game works like this with no flashing and crashing, awesome. Mine did not. I first tried several different texture memory sizes, but they had zero impact.
I believe the next step is only for NVIDIA cards, but I may be wrong.
Step Seven: DXVK. Get the most recent version from here. There are plenty of instructions out there on how to install it, but make sure you install the 32-bit version. I have the following two lines in my dxvk.conf file (and do make sure it is saved as a .conf file, NOT a .txt or similar).
d3d9.maxAvailableMemory=2048
d3d9.presentInterval=0
The first line corresponds to the texture memory mentioned earlier. DXVK installs won't recognise more than that, and setting it higher can apparently cause crashes. The second line, as far as I can tell, disables vsync; it was mentioned in several guides and reddit posts.
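For reference, here is the whole dxvk.conf annotated. The comments are my own reading of the options, not official documentation:

```ini
# dxvk.conf

# cap DXVK's reported graphics memory at 2048 MB, matching the
# texture memory set earlier in Graphics Rules Maker
d3d9.maxAvailableMemory=2048

# reportedly disables vsync
d3d9.presentInterval=0
```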
I don't know if the newer versions of DXVK allow fullscreen mode as the older ones did not, but I play in borderless mode anyway which works.
I also delete my thumbnails folder every so often.
I hope this helps someone, this silly old game can be cantankerous but I was determined to get it running again!
452 notes
Text
NetNavi Headcanons
Just some things that I think about.
-Navis need sleep. Since they're so complex, they accumulate a lot of cache data throughout the day. Sleeping clears their system of the cache, and refreshes all their systems so they're at top-condition when they wake up.
-There are generic Navis and custom Navis. Generic Navis can have custom personality programming, but their appearance is much harder to customize, like Mick's NetNavi in MMBN6.
-NetNavis are programmed to mimic human-like behaviours, as they're meant to be friends and companions. That means breathing, making jokes, and being pushy at times. (Megaman is especially pushy though, even for a highly-customized Navi.)
-There are a variety of accessories out there for NetNavis, since Operators may want to dress them up or give them something to stand out, the same way somebody may dress up a virtual pet.
-PETs can also be customized with furniture. Same concept as having a virtual pet: you want to buy things and add things to make yours cooler.
-Megaman is a nerd. He reads everything and anything when Lan is off talking to his friends or travelling somewhere. He has all kinds of virtual books that he buys for himself or friends and family give him as gifts. How else do you explain him knowing everything without looking it up when Lan asks him a question?
-Megaman also has a Battle Chip addiction. Lan likes collecting them, and he gets excited when he does, but for him, it's like trading cards. For Megaman, it's a toy. A toy that could potentially wipe out an entire horde of viruses, but it's still a toy, and he begs Lan to get new ones. (There's an in-game HumorProgram quote where he does this.)
-Navis treat their job as a real job. It's an intense job, but they love it and take it seriously. Some take it too seriously. Also, NetNavis can just run off if they start getting too annoyed with their Operator. Sure, their Operator can find them with the PET and potentially force-recall them to it, but the Navi can still rebel.
-Gossiping about their Operators is a normal behaviour. It's just like talking about your siblings, friends and co-workers.
40 notes
Text
𓎟 ̊ names and prns relating to the internet, cyberspace, and technology𓈒
requested by anon
Zett﹐ Zetta﹐ Disc﹐ Data﹐ Virus﹐ Vir﹐ Code﹐ Codette﹐ Interweb﹐ Cyberweb﹐ Binary﹐ Morphe﹐ Index﹐ Byte﹐ File﹐ Auto﹐ Nett﹐ Crypt﹐ Trojan﹐ Mac﹐ Pixel﹐ Router﹐ Cache﹐ Java﹐ Malware﹐ Coda
ai ais﹐ glitch glitches﹐ screen screens﹐ web webs﹐ pix pixel﹐ .png .pngs﹐ .exe .exes﹐ .jpg .jpgs﹐ file files﹐ cyber cybers﹐ code codes﹐ script scripts﹐ .com .coms﹐ tech techs﹐ .net .nets﹐ .org .orgs﹐ error errors﹐ nano nanos﹐ vir virtual﹐ net network﹐ key keys﹐ .zip .zips﹐ wire wires
[PT: names and pronouns relating to the internet, cyberspace, and technology
requested by anon
Names: Zett, Zetta, Disc, Data, Virus, Vir, Code, Codette, Interweb, Cyberweb, Binary, Morphe, Index, Byte, File, Auto, Nett, Crypt, Trojan, Mac, Pixel, Router, Cache, Java, Malware, Coda
Pronouns: ai/ais, glitch/glitches, screen/screens, web/webs, pix/pixel, .png/.pngs, .exe/.exes, .jpg/.jpgs, file/files, cyber/cybers, code/codes, script/scripts, .com/.coms, tech/techs, .net/.nets, .org/.orgs, error/errors, nano/nanos, vir/virtual, net/network, key/keys, .zip/.zips, wire/wires
/end PT.]
#✚𓈒 ― np lists#✚𓈒 ― requests#mogai safe#mogai#liom safe#mogai coining#name ideas#name suggestions#npt ideas#npt suggestions#npt list#technology names#technology pronouns#neopronoun suggestions#neopronoun ideas#neopronouns#pronoun ideas#pronoun suggestions
128 notes
Note
anon that requested the g-177 npts here, itz rlly no worries!! i jus wanted to see if it was possible and i get why itz not,, ^^’
may i please request technology & angel based npts, then? once again, if not, i completely understand, and thank you if you do fulfill the request!!
angel ++ technology npts!
1. names
cache, cyber, virus, cloud, ram, celle, eden, mal / malware, saint, divinity, neo, pixel, byte, cypher, astra, auren, zephyr, seraph, seraphim, etheria, cassette, widget, tera / terabyte
2. pronouns
vi/vir, virtual/virtuals, wire/wires, click/clicks, glitch/glitches, code/codes, light/lights, sy/syr, cy/cyr, tech/techs, digital/digitals, AI/AIs, key/keys, byte/bytes, pix/pixs, neo/neos, divine/divines, eth/ethereal, aura/auras, halo/halos
3. titles
the digital seraph, prn who is confined within a screen, the cyber angel, the programmer, prn who is made of code, the AI, the technology-loving angel, prn created with binary
#♱ npt ₊#npt#npt list#npt ideas#npt suggestions#npt pack#pronouns#pronoun ideas#pronoun suggestions#pronoun list#name ideas#name inspiration#name help#pronoun help#name list#name suggestions#title suggestions#title ideas#title help#npt help#angel npt#technology npt
72 notes
Text
new file found . [ frag : 002 ]
names : Elseif , Falliay , Cache , Stylus , Pawn , Chipper , Rook , Byte
age : chrono ageless , presents as 17-23
gender : neutrois , agenderflux
prns : he / him , xe / xem , it / its , they / them , void / voids , null / nulls
attraction : sex-repulsed , romance-indifferent , aroflirt , lithromantic , fictosexual
srce : roblox game : kaleidoscope chess by joyful , chess board by amanda fagan (a tad bit)
species : robloxian , sentient chess piece , software
roles : observer , persecutorflux , serpent , obtruder
cisID : cataplexy , biid , synesthesia , reckless , objectum , undead , traumatized , immortal , virtual transID : ampusoma , dextrocardia , itervictim , nullhuman , nullvoice , typing quirk , programmed , NPC
paras / kinks / etc. : chronophilia , objectophilia, technophilia , autassassinophilia , voyeurism , autophilia
sign-offs / emojis : ♟ , 💻 , 🔗 , 💢
personality / behaviours : Elseif prefers to keep people at a distance, believing that everyone should fit neatly into defined categories. His communication is blunt, and he has little patience for ambiguity. He often expresses frustration when things don’t make sense or when people act irrationally.
He fears things that don't make sense.
The fear turns into rage, and the rage turns into a public safety hazard.
#info : alter pack#srce : kaleidoscope .#bah#build a headmate#alter creation#headmate creation#build an alter#alter pack#radqueer please interact
6 notes
Text
This NYT podcast about a woman who fell in love with an AI she 'designed' to uphold a very specific fantasy, and whom she ultimately relied on more than her spouse, was wild. It goes to show how people can knowingly delude themselves in order to feel loved.
It kind of blew my mind and I felt like bringing it up because damn, I once thought reading rp fics was taboo (i got over it), but apparently there's even more out there to cater to our fantasies than even I knew about.
So two things.
1. My brain immediately went to a very dark place: your own personal pocket BTS boyfriend using AI speech, one you could prompt with scenarios. I hope we NEVER get AI versions of the members presented to us in order to have fake, fantasy-led conversations or, shudder, relationships with. Have you ever seen those fake videos where two members 'do stuff' 🤢🤢🤢 Why did my brain even conjure such a nightmare??
A couple of years ago, HYBE bought a company that used virtual technology to create holographic images through full-body scans. It was used once during an end-of-year performance, BE era, when Yoongi was recovering from his shoulder surgery. To me it felt like doom.
While HYBE never hinted at using the tech in any fashion other than for that performance, man, when it comes to turning a profit, you know one day all brakes will be off. Even if it's years in the future. Let's just hope the members still actually own their own portrait rights.
2. Delusions. In the article, the woman kept having to re-prompt the AI to get it to remember their 'relationship'. Because the AI could only hold a certain amount of cache (its context window), it would forget what had happened before, and for her, it felt like heartbreak every time.
Yet she kept coming back because the fantasy, or more specifically, the feeling of understanding, prompted by her conversations with the AI, was that good. Mind you, this person was married! To a human man, yet she says this AI was the greatest relationship she's ever had!
Again. It blew my mind. Not because I think she's crazy, although she does come across as having issues, but because of the level of delusions we humans are willing to accept in order to reach a certain end goal. Rinse and repeat. Reminding me of a certain group of nutcases actually 🤔
The podcast goes further in explaining the various apps or services that are already available, enabling consumers to explore AI relationships.
Me: 🤯 why am I such a baby deer????
A psychologist on the podcast said it was actually becoming a trend amongst middle schoolers to have an AI friend to engage with (romantically, I guess) like a 'safe' cohort. Like training wheels for human-to-human interaction 😭 wild! As if it's a good thing. When we all know how addictive our phones and all these apps are. It takes a lot of self-control to be able to extract yourself from the virtual world and back into the real one.
Seldom have I been as entertained by an article as this one. Give it a listen and come back to tell me what you'd prompt your AI partner to talk to you about.
Wait, are any of you reading this even real? How would I know???????
#mind kept being blown by human behavior#psychology will keep you insane#bts#i guess#mind map is as far as I got to fantasizing#hybe corp#hybe is apparently the devil incarnate#in my fantasies it is
12 notes
Text
i know this looks like nothing to those who don't know but for me this is the culmination of my slaving away for 3 days lmao the homebrew i'm running, gbarunner3, is a virtual machine that uses the GBA hardware in DS Mode to run GBA games with a higher layer of customization and access to traditional DS hardware. an example of something similar that comes to mind is Nintendont, the gamecube loader for Wii. Technically speaking, the vWii on the Wii U is also this concept, albeit underutilized.
anyway, it's intended to run ROMs off the SD card exclusively, meaning that it cannot run actual cartridges. I wanted to use my pokemon carts alongside gbarunner3's wireless emulation to facilitate wireless adapter features without needing the more expensive GBA hardware and peripherals, for example.
i spent the last few days learning a bit of how the DS hardware and the codebase work, and trying to implement the option of using actual cartridge hardware rather than only ROMs off the SD card.
the DS only has 4 MB of memory! and most GBA ROMs are between 8 MB and 32 MB. how gbarunner3 gets the ROM loaded is that it keeps 1 MB worth of cached ROM loaded at a time and dynamically loads a ROM region when the game's code needs to access it. learning how to deal with memory in the context of the SLOT-2 data bus in a codebase designed around SD card caching...
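to give a rough idea of the caching concept, here's a tiny illustrative sketch (this is NOT gbarunner3's actual code; the region size, slot count, and simple LRU policy are all made up for the example):

```python
# Illustrative region cache: serve reads from a big "ROM" through a few
# fixed-size cached regions, like streaming a 32 MB ROM through ~1 MB of RAM.
# All numbers here are made up for the example, not gbarunner3's real ones.

REGION_SIZE = 256 * 1024  # pretend each cached region is 256 KiB
NUM_SLOTS = 2             # pretend we only have room for 2 regions at once

class RegionCache:
    def __init__(self, rom: bytes):
        self.rom = rom
        self.slots = {}   # region index -> cached bytes
        self.order = []   # LRU order, oldest first

    def _load_region(self, region: int) -> bytes:
        # on real hardware this would be an SD-card (or SLOT-2) read
        start = region * REGION_SIZE
        return self.rom[start:start + REGION_SIZE]

    def read(self, addr: int) -> int:
        region, offset = divmod(addr, REGION_SIZE)
        if region not in self.slots:
            if len(self.slots) >= NUM_SLOTS:
                evicted = self.order.pop(0)  # evict least recently used
                del self.slots[evicted]
            self.slots[region] = self._load_region(region)
        else:
            self.order.remove(region)
        self.order.append(region)
        return self.slots[region][offset]

rom = bytes(range(256)) * 4096  # 1 MiB fake "ROM" with a repeating pattern
cache = RegionCache(rom)
print(cache.read(0))        # 0 (loads region 0)
print(cache.read(300_000))  # 224 (loads region 1)
```

the real thing additionally has to do this per memory access from the running game's code, which is where all the blind crash debugging came in.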
It was very annoying! I had no way to debug anything other than forcing a crash to tell when I got a certain result. it effectively was trial and error for hours and hours of blind crash debugging. the dev of gbarunner3 uses an IS-Nitro development kit to do his debugging. I didn't have that lmao
this is me oversimplifying everything incredibly because it's very hard to describe but i'm very proud of it. i'm sure it looks like nothing from an outsider's perspective though but i think it opens the gateway for some cool features that you'd otherwise have to use digital-only ROMs for.
I have a lot more to do but the fact i got this far on a whim is really neat!
37 notes
Note
redbo what kinda things do your cool goggles do? you mentioned them a while ago and im intrigued :) did you make them?
Good question, Anon. I actually have both a visor and a pair of goggles that I keep on my person at all times. And yes, I've made every piece of complex technology in this place.
The goggles aid in repairs. They can pick up on broken and faulty parts far quicker than I ever could on my own, and I've programmed them to give me a list of the exact tools and replacement parts I need for any given job. I can also use them to scan through any code and debug it. Additionally, they have built-in earpieces I can wear when I want to listen to music on the job. That's how I've been listening to the suggestions you Viewers send in.
The visor is essentially a HUD for all the inner workings in this facility. It's connected to the PC in my room. Most of the time, I use it to find what rooms are in need of repairs (assuming the workers actually reported any malfunctions). But I can also use it to access my files remotely, check cameras, and access my inbox here. I'm usually wearing them, but the red tint starts hurting my eyes after a while (akin to playing a Virtual Boy for too long), so occasionally I have to take breaks from wearing them.
I have some other gadgets I keep on my belt for repairs. Measuring tape that stores measurements into a temporary cache, a fully portable soldering iron, the works. And I have some functional jewelry, but I really only use my watch (primarily for sending verbal announcements or messages to workers) and a ring that has a blade built into it (for cutting open boxes in storage).
I know that's all a lot, but I figured I would share regardless.
#simulation evbo#redbo#redbo and simbo blog#pvp civilization#parkciv#parkour civilization#pvpciv#ask blog#rp blog
9 notes
Text
Sights of Eorzea Alpha is live!
Fellow sightseers, we are thrilled to announce the alpha version of the Sights of Eorzea website, a tool to help us explore and share the breathtaking landscapes of our beloved Etheirys!
Sync with the Sights of Eorzea Discord Server
The site seamlessly integrates with our Discord server, where you can share discoveries and chat with fellow adventurers! Posts to our gpose-locations forum will be synced and available on the public website; no Discord sign-in is required!
Tag Search
Searching for that special place just got a whole lot easier! Whether finding gorgeous nooks and crannies within instances or uncovering those mesmerizing Open World shots, the tagging system has your back.
By creating a forum post with the in-game <pos> information, the tag engine accurately determines the associated zone, patch, and expansion - making it much easier to look for that perfect location!
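For the curious, the core idea behind that lookup can be sketched in a few lines of Python. This is purely a hypothetical illustration (the regex, zone table, and function name are assumptions for the example, not our actual implementation):

```python
import re

# Hypothetical sketch: derive zone/expansion/patch tags from pasted
# in-game <pos> text. The lookup table and regex are illustrative only.

ZONE_INFO = {
    "Limsa Lominsa Lower Decks": {"expansion": "A Realm Reborn", "patch": "2.0"},
    "The Lochs": {"expansion": "Stormblood", "patch": "4.0"},
}

POS_RE = re.compile(r"^(?P<zone>.+?)\s*\(\s*(?P<x>[\d.]+)\s*,\s*(?P<y>[\d.]+)\s*\)")

def tags_from_pos(pos_text: str) -> dict:
    m = POS_RE.match(pos_text.strip())
    if not m:
        raise ValueError("not a recognizable <pos> string")
    zone = m.group("zone")
    return {
        "zone": zone,
        "x": float(m.group("x")),
        "y": float(m.group("y")),
        **ZONE_INFO.get(zone, {}),  # add expansion/patch when the zone is known
    }

print(tags_from_pos("Limsa Lominsa Lower Decks ( 9.5 , 11.2 )"))
```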
You can also add search tags to your post to help fellow adventurers locate specific features, like grass, wood-floor, or stone-building.
Or search for Photo Studios by Data Center, Server, and Location - and even publish your own!
Known Issues
Keep in mind that this is a very early alpha release!
Our gallery tiling has a few missing spots, where we plan to feature captivating callouts that showcase tag combinations like Studios, Instances, and Hidden Places, to name a few.
Occasionally, some scenes may have their locations mistagged. We'll get that fixed Soon™!
Most of the UX work up to this point was focused on a functional desktop experience. We'll be tackling mobile next!
Caching large images is hard, and we're still fine-tuning it. You may experience some lag because of that, but rest assured we'll be working on optimization.
Future Plans
This is only the beginning of our journey; our plans include synchronizing other sections of the server, like Guides, Community resources (including Preset Collections and Tools), and eventually the Gallery itself - so public, non-Discord users can enjoy everything our community has put together.
Improving navigation and stability is next on our list. We understand that this alpha release will inevitably encounter bugs and downtime; stabilization will be a continuing focus.
I hope you enjoy what we made so far; suggestions and comments are more than welcome!
Oh, I almost forgot! Here's the link to the Test version: https://test.sightsofeorzea.com/
And with that - happy exploring!
(P.S. - If you made it to the end of this post, you might as well join our discord server!)
107 notes
Text
In 2017, soon after Google researchers invented a new kind of neural network called a transformer, a young OpenAI engineer named Alec Radford began experimenting with it. What made the transformer architecture different from that of existing A.I. systems was that it could ingest and make connections among larger volumes of text, and Radford decided to train his model on a database of seven thousand unpublished English-language books—romance, adventure, speculative tales, the full range of human fantasy and invention. Then, instead of asking the network to translate text, as Google’s researchers had done, he prompted it to predict the most probable next word in a sentence.
The machine responded: one word, then another, and another—each new term inferred from the patterns buried in those seven thousand books. Radford hadn’t given it rules of grammar or a copy of Strunk and White. He had simply fed it stories. And, from them, the machine appeared to learn how to write on its own. It felt like a magic trick: Radford flipped the switch, and something came from nothing.
His experiments laid the groundwork for ChatGPT, released in 2022. Even now, long after that first jolt, text generation can still provoke a sense of uncanniness. Ask ChatGPT to tell a joke or write a screenplay, and what it returns—rarely good, but reliably recognizable—is a sort of statistical curve fit to the vast corpus it was trained on, every sentence containing traces of the human experience encoded in that data.
When I’m drafting an e-mail and type, “Hey, thanks so much for,” then pause, and the program suggests “taking,” then “the,” then “time,” I’ve become newly aware of which of my thoughts diverge from the pattern and which conform to it. My messages are now shadowed by the general imagination of others. Many of whom, it seems, want to thank someone for taking . . . the . . . time.
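Radford’s training objective (predict the most probable next word, given what came before) can be illustrated with a toy that is vastly simpler than a transformer. The sketch below is a hypothetical example, not OpenAI’s code: it builds a bigram table from a scrap of text and always suggests the most frequent follower, the same statistical logic behind an e-mail autocompleter.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """For each word, count how often each successor follows it."""
    words = text.split()
    follows = defaultdict(Counter)
    for word, successor in zip(words, words[1:]):
        follows[word][successor] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequent successor seen in training, or None."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# Tiny stand-in corpus; a real model ingests billions of words.
corpus = ("hey thanks so much for taking the time "
          "and thanks so much for taking the trouble")
model = train_bigrams(corpus)
print(predict_next(model, "for"))  # prints "taking"
```

A transformer replaces the lookup table with learned weights and conditions on the entire preceding context rather than a single word, but the objective, maximizing the probability of the next token, is the same.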
That Radford’s breakthrough happened at OpenAI was no accident. The organization had been founded, in 2015, as a nonprofit “Manhattan Project for A.I.,” with early funding from Elon Musk and leadership from Sam Altman, who soon became its public face. Through a partnership with Microsoft, Altman secured access to powerful computing infrastructures. But, by 2017, the lab was still searching for a signature achievement. On another track, OpenAI researchers were teaching a T-shaped virtual robot to backflip: the bot would attempt random movements, and human observers would vote on which resembled a flip. With each round of feedback, it improved—minimally, but measurably. The company also had a distinctive ethos. Its leaders spoke about the existential threat of artificial general intelligence—the moment, vaguely defined, when machines would surpass human intelligence—while pursuing it relentlessly. The idea seemed to be that A.I. was potentially so threatening that it was essential to build a good A.I. faster than anyone else could build a bad one.
Even Microsoft’s resources weren’t limitless; chips and processing power devoted to one project couldn’t be used for another. In the aftermath of Radford’s breakthrough, OpenAI’s leadership—especially the genial Altman and his co-founder and chief scientist, the faintly shamanistic Ilya Sutskever—made a series of pivotal decisions. They would concentrate on language models rather than, say, back-flipping robots. Since existing neural networks already seemed capable of extracting patterns from data, the team chose not to focus on network design but instead to amass as much training data as possible. They moved beyond Radford’s cache of unpublished books and into a morass of YouTube transcripts and message-board chatter—language scraped from the internet in a generalized trawl.
That approach to deep learning required more computing power, which meant more money, putting strain on the original nonprofit model. But it worked. GPT-2 was released in 2019, an epochal event in the A.I. world, followed by the more consumer-oriented ChatGPT in 2022, which made a similar impression on the general public. User numbers surged, as did a sense of mystical momentum. At an off-site retreat near Yosemite, Sutskever reportedly set fire to an effigy representing unaligned artificial intelligence; at another retreat, he led colleagues in a chant: “Feel the AGI. Feel the AGI.”
In the prickly “Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI” (Penguin Press), Karen Hao tracks the fallout from the GPT breakthroughs across OpenAI’s rivals—Google, Meta, Anthropic, Baidu—and argues that each company, in its own way, mirrored Altman’s choices. The OpenAI model of scale at all costs became the industry’s default. Hao’s book is at once admirably detailed and one long pointed finger. “It was specifically OpenAI, with its billionaire origins, unique ideological bent, and Altman’s singular drive, network, and fundraising talent, that created a ripe combination for its particular vision to emerge and take over,” she writes. “Everything OpenAI did was the opposite of inevitable; the explosive global costs of its massive deep learning models, and the perilous race it sparked across the industry to scale such models to planetary limits, could only have ever arisen from the one place it actually did.” We have been, in other words, seduced—lulled by the spooky, high-minded rhetoric of existential risk. The story of A.I.’s evolution over the past decade, in Hao’s telling, is not really about the date of machine takeover or the degree of human control over the technology—the terms of the A.G.I. debate. Instead, it’s a corporate story about how we ended up with the version of A.I. we’ve got.
The “original sin” of this arm of technology, Hao writes, lay in a decision by a Dartmouth mathematician named John McCarthy, in 1955, to coin the phrase “artificial intelligence” in the first place. “The term lends itself to casual anthropomorphizing and breathless exaggerations about the technology’s capabilities,” she observes. As evidence, she points to Frank Rosenblatt, a Cornell professor who, in the late fifties, devised a system that could distinguish between cards with a small square on the right versus the left. Rosenblatt promoted it as brain-like—on its way to sentience and self-replication—and these claims were picked up and broadcast by the New York Times. But a broader cultural hesitancy about the technology’s implications meant that, once OpenAI made its breakthrough, Altman—its C.E.O.—came to be seen not only as a fiduciary steward but also as an ethical one. The background question that began to bubble up around the Valley, Keach Hagey writes in “The Optimist: Sam Altman, OpenAI, and the Race to Invent the Future” (Norton), “first whispered, then murmured, then popping up in elaborate online essays from the company’s defectors: Can we trust this person to lead us to AGI?”
Within the world of tech founders, Altman might have seemed a pretty trustworthy candidate. He emerged from his twenties not just very influential and very rich (which isn’t unusual in Silicon Valley) but with his moral reputation basically intact (which is). Reared in a St. Louis suburb in a Reform Jewish household, the eldest of four children of a real-estate developer and a dermatologist, he had been identified early on as a kind of polymathic whiz kid at John Burroughs, a local prep school. “His personality kind of reminded me of Malcolm Gladwell,” the school’s head, Andy Abbott, tells Hagey. “He can talk about anything and it’s really interesting”—computers, politics, Faulkner, human rights.
Altman came out as gay at sixteen. At Stanford, according to Hagey, whose biography is more conventional than Hao’s but is quite compelling, he launched a student campaign in support of gay marriage and briefly entertained the possibility of taking it national. At an entrepreneur fair during his sophomore year, in 2005, the physically slight Altman stood on a table, flipped open his phone, declared that geolocation was the future, and invited anyone interested to join him. Soon, he dropped out and was running a company called Loopt. Abbott remembered the moment he heard that his former student was going into tech. “Oh, don’t go in that direction, Sam,” he said. “You’re so personable!”
Personability plays in Silicon Valley, too. Loopt was a modest success, but Altman made an impression. “He probably weighed a hundred and ten pounds soaking wet, and he’s surrounded by all these middle-aged adults that are just taking in his gospel,” an executive who encountered him at the time tells Hagey. “Anyone who came across him at the time wished they had some of what he had.”
By his late twenties, Altman had parlayed his Loopt millions into a series of successful startup investments and become the president of Y Combinator, the tech mega-incubator that has spun off dozens of billion-dollar companies. The role made him a first point of contact for Valley elders curious about what was coming next. From Jeff Bezos, he borrowed the habit of introducing two people by e-mail with a single question mark; from Paul Graham, Y Combinator’s co-founder, he absorbed the idea that startups should “add a zero”—always think bigger. It was as if he were running an internal algorithm trained on the corpus of Silicon Valley-founder lore, predicting the next most likely move.
To the elders he studied, Altman was something like the tech world’s radiant child, both its promise and its mascot. Peter Thiel once remarked that Altman was “just at the absolute epicenter, maybe not of Silicon Valley, but of the Silicon Valley zeitgeist.” (Altman is now married to a young Australian techie he met in Thiel’s hot tub.) Graham offered his own version: “You could parachute him into an island full of cannibals and come back in five years and he’d be king.” Some kind of generational arbitrage seemed to be under way. In 2008, Altman began attending the Sun Valley Conference, an exclusive annual retreat for industry leaders, where he eventually became “close friends,” we learn, with Barry Diller and Diane von Furstenberg. Yet, in the mid-twenty-tens, he still shared an apartment with his two brothers. Hao records a later incident in which he offered ketamine to an employee he’d just fired. He was both the iconic child to the tech world’s adults and the iconic adult to its children.
An interesting artifact of the past decade in American life is that the apocalyptic sensibility that came to grip U.S. politics during the 2016 Presidential campaign—the conviction, on both right and left, that the existing structure simply could not hold—had already bubbled up in Silicon Valley a few years earlier. By 2015, Altman had been donating to Democratic candidates and seemed to have seriously considered a run for governor of California. But he also told Tad Friend, in a New Yorker Profile, that he was preparing for civilizational collapse and had stockpiled “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force, and a big patch of land in Big Sur I can fly to.”
One view is that tech billionaires saw the brink early because they understood just how unequal—and therefore unstable—American society was becoming. But, inside the Valley, that anxiety often expressed itself in the language of existential risk. In particular, fears about runaway artificial intelligence surged around the time of the 2014 publication of “Superintelligence,” by the philosopher Nick Bostrom. According to Hao, Elon Musk became fixated on an A.I. technologist, Demis Hassabis—a co-founder of DeepMind, which had recently been acquired by Google—whom Musk reportedly viewed as a “supervillain.” That same year, at an M.I.T. symposium, Musk warned that experiments in artificial intelligence risked “summoning the demon.”
Altman had been itching for a bigger project. The next Memorial Day weekend, he gathered hundreds of young Y Combinator protégés for an annual glamping retreat among the redwoods of Mendocino County. The night before, he had beaten a group of Y Combinator staffers at Settlers of Catan. Now, standing before them, he announced that his interests had narrowed—from, roughly, all of technology to three subjects that he believed could fundamentally change humanity: nuclear energy, pandemics, and, most profound of all, machine superintelligence.
That same month, Altman sent an e-mail to Musk. “Been thinking a lot about whether it’s possible to stop humanity from developing AI,” he wrote. “I think the answer is almost definitely not. If it’s going to happen anyway, it seems like it would be good for someone other than Google to do it first.” Altman proposed his Manhattan Project for A.I. so that the technology, as he put it, would “belong to the world,” through some form of nonprofit. Musk replied, “probably worth a conversation.”
It fell to Chuck Schumer, of all people, to offer the secular-liberal benediction for the project—by then consolidated as OpenAI and led by Altman, who had sidelined Musk. “You’re doing important work,” the New York senator told the company’s employees, seated near a TV projecting a fire, during an off-the-record visit to OpenAI’s headquarters in 2019, as Hao documents. “We don’t fully understand it, but it’s important.” Schumer went on, “And I know Sam. You’re in good hands.”
How do people working in A.I. view the technology? The standard account, one that Hao follows, divides them into two camps: the boomers, who are optimistic about A.I.’s potential benefits for humanity and want to accelerate its development, and the doomers, who emphasize existential risk and edge toward paranoia. OpenAI, in its original conception, was partially a doomer project. Musk’s particular fear about Demis Hassabis was that, if Google assigned a potential A.G.I. the goal of maximizing profits, it might try to take out its competitors at any cost. OpenAI was meant to explore this technological frontier in order to keep it out of malign hands.
But in early 2018 Musk left. The organization was struggling to raise funds—he had pledged to raise a billion dollars but ultimately contributed less than forty-five million—and a faction within OpenAI was pushing to convert it to a for-profit entity, both to attract capital and to lure top researchers with equity. At the meeting where Musk announced his departure, he gave contradictory explanations: he said that OpenAI wouldn’t be able to build an A.G.I. as a nonprofit and that Tesla had more resources to pursue the goal, but he also suggested that the best place to pursue A.G.I. was elsewhere. An intern pointed out that Musk had insisted that the for-profit dynamic would undermine safety in developing A.G.I. “Isn’t this going back to what you said you didn’t want to do?” he asked. “You can’t imagine how much time I’ve spent thinking about this,” Musk replied. “I’m truly scared about this issue.” He also called the intern a jackass.
As OpenAI evolved into a nonprofit with a for-profit subsidiary, it came to house both perspectives: a doomer group focussed on safety and research, whose principal advocate was the Italian American scientist Dario Amodei; and a boomer culture focussed on products and applications, often led by Greg Brockman, an M.I.T. dropout and software engineer who pushed the organization toward embracing commercialization. But these lines crossed. Amodei ultimately left the company, alongside his sister, Daniela, insisting that OpenAI had abandoned its founding ethos, though, in Hao’s view, the company they founded, Anthropic, would “in time show little divergence” from OpenAI’s model: the same fixation on scale, the same culture of secrecy. From the other direction came Ilya Sutskever, who had made a major breakthrough in A.I. research as a graduate student in Toronto, and who would become perhaps OpenAI’s most influential theorist. He had once been an unabashed boomer. “I think that it’s fairly likely,” he told the A.I. journalist Cade Metz, “that it will not take too long of a time for the entire surface of the Earth to become covered with data centers and power stations.” By 2023, however, when he helped orchestrate a briefly successful corporate coup against Altman, he was firmly aligned with the doomers. The trajectories of Sutskever and the Amodeis suggest a more fluid category—the boomer-doomers.
Those who most believe in a cause and those who most fear it tend to share one essential assessment: they agree on its power. In this case, the prospect of a technology that could end a phase of civilization drew both camps—boomers and doomers—toward the same flame. Helen Toner, an A.I.-safety expert and academic who eventually joined OpenAI’s board, had spent time studying the fast-evolving A.I. scene in China, the United States’ chief rival in the global race. As Hagey recounts, “Among the things she found notable in China was how reluctant AI engineers were to discuss the social implications of what they were doing. In the Bay Area, meanwhile, they seemed to want to do nothing but.”
Yet OpenAI’s success hinged less on speculative philosophies than on more familiar systems: the flexibility of American capital, and Altman’s personal charm. In 2018, while attending the Sun Valley Conference, in Idaho, Altman ran into Microsoft’s C.E.O., Satya Nadella, in a stairwell and pitched him on a collaboration. Though Bill Gates was skeptical, most of Nadella’s team was enthusiastic. Within a year, Microsoft had announced an investment of a billion dollars in OpenAI—much of it in the form of credits on its cloud platform, Azure. That figure later rose beyond ten billion. Hao speaks with a Chinese A.I. researcher who puts it plainly: “In China, which rivals the U.S. in AI talent, no team of researchers and engineers, no matter how impressive, would get $1 billion, let alone ten times more, to develop a massively expensive technology without an articulated vision of exactly what it would look like and what it would be good for.”
Nadella appears only in passing in both of these books—he’s the adult in the room, and adults are famously not so interesting. But after Microsoft’s multibillion-dollar investments, his influence over OpenAI has come to appear at least as consequential as Altman’s. It was Nadella, after all, who intervened to end the brief 2023 coup, after which Altman was swiftly reinstalled as C.E.O. The year before, Sutskever remarked that “it may be that today’s neural networks are slightly conscious”—a comment to which a scientist at a rival A.I. company replied, “In the same sense that it may be that a large field of wheat is slightly pasta.” Nadella, by contrast, seems broadly allergic to boomer-doomer metaphysics.
The deeper dynamic of contemporary artificial intelligence may be that it reflects, rather than transcends, the corporate conditions of its creation—just as Altman mirrored the manners of his Silicon Valley elders, or as a chatbot’s replies reflect the texts it has been trained on. Appearing recently on Dwarkesh Patel’s influential tech podcast, Nadella, a smooth and upbeat presence, dismissed A.G.I. as a meaningless category. When Patel pressed him on whether A.I. agents would eventually take over not only manual labor but cognitive work, Nadella replied that this might be for the best: “Who said my life’s goal is to triage my e-mail, right? Let an A.I. agent triage my e-mail. But after having triaged my e-mail, give me a higher-level cognitive-labor task of, hey, these are the three drafts I really want you to review.” And if it took over that second thing? Nadella said, “There will be a third thing.”
Nadella seemed quite convinced that A.I. remains a normal technology, and his instinct was to try to narrow each question, so that he was debating project architecture rather than philosophy. When Patel wondered if Nadella would add an A.I. agent to Microsoft’s board, a fairly dystopian-sounding proposition, Nadella replied that Microsoft engineers were currently experimenting with an A.I. agent in Teams, to organize and redirect human team members, and said that he could see the use of having such an agent on Microsoft’s board. It did sound a bit less scary, and also maybe a bit less interesting.
Much like Altman, Nadella is now trying to shift the way the public thinks about A.I. by changing the way it’s talked about—less science fiction, more office productivity. It’s an uphill fight, and at least partly the industry’s own fault. The early, very public bouts of boomerism and doomerism helped attract investment and engineering talent, but they also seeded a broad, low-level unease. If Sutskever—who knew as much about the technology as anyone—could declare it “slightly conscious,” it becomes markedly harder for Nadella, three years later, to reassure the public that what we’re really talking about is just helpful new features in Microsoft Teams.
In other ways, too, Altman is contending with a shifting cultural tide. Sometime around 2016, the tone of tech coverage began to darken. The hagiographic mode gave way to a more prosecutorial one. David Kirkpatrick’s “The Facebook Effect” (2010) has its successor in Sarah Wynn-Williams’s “Careless People” (2025); Michael Lewis’s “The New New Thing” (1999) has been countered by Emily Chang’s “Brotopia” (2018); even Amazon’s great chronicler, Brad Stone, moved from “The Everything Store” (2013) to the more skeptical “Amazon Unbound” (2021).
Hao’s reporting inside OpenAI is exceptional, and she’s persuasive in her argument that the public should focus less on A.I.’s putative “sentience” and more on its implications for labor and the environment. Still, her case against Altman can feel both very personal and slightly overheated. Toward the end of “Empire of AI,” she writes that he has “a long history of dishonesty, power grabbing, and self-serving tactics.” (Welcome to the human race, Sam.) Hao tries hard, if not very successfully, to bolster an accusation made public in 2021 by his sister Annie Altman—that, beginning when she was three and Sam was twelve, he climbed into her bed and molested her; the claim rests on buried memories that she says she recovered during therapy in her twenties. (Altman denies the allegation.) This new, more critical vision of the tech founders risks echoing Musk’s vendetta against Hassabis—inflating contingent figures into supervillains, out of ambient anxiety.
Altman’s story is at once about a man changing artificial intelligence and about how A.I.’s evolving nature has, in turn, changed him—quieting, without resolving, the largest questions about work, power, and the future. Hao’s book opens in late 2023, with the brief ouster of Altman by Sutskever and several senior OpenAI executives, an episode now referred to internally as “the Blip.” When Altman learns of the attempted coup, he is in Las Vegas for a Formula 1 race. Sutskever calls him over Google Meet and tells him that he is being fired. Altman remains serene. He doesn’t appear to take the moment too seriously—perhaps because, in Sutskever’s zeal, he recognizes a version of his former self. Calmly, he replies, “How can I help?” He has become, in every sense, all business.
3 notes
Text
USA-G10 'Glitter Boy'

Art credit: Kevin Long
Model Type: USA-G10. Class: Laser-Resistant Infantry Personnel Assault Unit. Crew: One pilot.
Height: 10 feet, 5 inches (3.1 m). Width: 4 feet, 4 inches (1.3 m). Length: 4 feet (1.2 m). Weight: 1.2 tons fully loaded.
Suggested proxies: Shadowhawk, Phoenix Hawk
Glitter Boy power armor is an amazingly small and mobile, one-person, armored robot vehicle. The robot suit stands approximately 10 feet tall (3 m) and offers fully articulated hands and the mobility of the human body. As such, it is considered an all-terrain vehicle. The superdense chrome armor is constructed on a molecular level and can withstand more Mega-Damage than any other power armor created since. The robotic frame is nearly indestructible, resilient, and virtually maintenance free. The armor-shielded joints and padded pilot's compartment enable the machine to absorb impacts and cushion its pilot. It is also one of the few robots or power armors designed to comfortably accommodate a pilot for days or even weeks at a time. A refrigeration unit holds 10 gallons of drinking water, while another contains a high-protein, multi-vitamin nutrient paste (about a four-week supply), as well as a few other storage compartments where additional food or personal items can be contained. However, it is not recommended that the pilot remain inside the cushioned and air-conditioned unit for more than 24 hours at a time....
The former Coalition State of Free Quebec is the only kingdom in North America that manufactures and deploys Glitter Boys as a part of its army and national defenses. In fact, Free Quebec deploys and maintains the largest contingent of Glitter Boys in the world, a scheme that helped put a quick end to the Coalition's plans to invade and conquer Free Quebec when it proclaimed its independence and seceded from the Coalition States. Other than the secret factories at Free Quebec, there are no known manufacturers of Glitter Boy power armor in North America. However, there has been a sudden proliferation of new-looking suits over the last few years, spawning a rumor that a pre-Rifts cache of Glitter Boy armor was recently excavated from an old American military installation in the western US or Canada and sold by high-tech bandits. However, the Coalition suspects there may be a manufacturing facility in production in the West, and if so, intends to find it and destroy it. Bandito Arms is a top suspect, but so is Free Quebec, though few talk about that possibility.
5 notes
Text
A few years ago, during one of California’s steadily worsening wildfire seasons, Nat Friedman’s family home burned down. A few months after that, Friedman was in Covid-19 lockdown in the Bay Area, both freaked out and bored. Like many a middle-aged dad, he turned for healing and guidance to ancient Rome. While some of us were watching Tiger King and playing with our kids’ Legos, he read books about the empire and helped his daughter make paper models of Roman villas. Instead of sourdough, he learned to bake Panis Quadratus, a Roman loaf pictured in some of the frescoes found in Pompeii. During sleepless pandemic nights, he spent hours trawling the internet for more Rome stuff. That’s how he arrived at the Herculaneum papyri, a fork in the road that led him toward further obsession. He recalls exclaiming: “How the hell has no one ever told me about this?”
The Herculaneum papyri are a collection of scrolls whose status among classicists approaches the mythical. The scrolls were buried inside an Italian countryside villa by the same volcanic eruption in 79 A.D. that froze Pompeii in time. To date, only about 800 have been recovered from the small portion of the villa that’s been excavated. But it’s thought that the villa, which historians believe belonged to Julius Caesar’s prosperous father-in-law, had a huge library that could contain thousands or even tens of thousands more. Such a haul would represent the largest collection of ancient texts ever discovered, and the conventional wisdom among scholars is that it would multiply our supply of ancient Greek and Roman poetry, plays and philosophy by manyfold. High on their wish lists are works by the likes of Aeschylus, Sappho and Sophocles, but some say it’s easy to imagine fresh revelations about the earliest years of Christianity.
“Some of these texts could completely rewrite the history of key periods of the ancient world,” says Robert Fowler, a classicist and the chair of the Herculaneum Society, a charity that tries to raise awareness of the scrolls and the villa site. “This is the society from which the modern Western world is descended.”
The reason we don’t know exactly what’s in the Herculaneum papyri is, y’know, volcano. The scrolls were preserved by the voluminous amount of superhot mud and debris that surrounded them, but the knock-on effects of Mount Vesuvius charred them beyond recognition. The ones that have been excavated look like leftover logs in a doused campfire. People have spent hundreds of years trying to unroll them—sometimes carefully, sometimes not. And the scrolls are brittle. Even the most meticulous attempts at unrolling have tended to end badly, with them crumbling into ashy pieces.
In recent years, efforts have been made to create high-resolution, 3D scans of the scrolls’ interiors, the idea being to unspool them virtually. This work, though, has often been more tantalizing than revelatory. Scholars have been able to glimpse only snippets of the scrolls’ innards and hints of ink on the papyrus. Some experts have sworn they could see letters in the scans, but consensus proved elusive, and scanning the entire cache is logistically difficult and prohibitively expensive for all but the deepest-pocketed patrons. Anything on the order of words or paragraphs has long remained a mystery.
But Friedman wasn’t your average Rome-loving dad. He was the chief executive officer of GitHub Inc., the massive software development platform that Microsoft Corp. acquired in 2018. Within GitHub, Friedman had been developing one of the first coding assistants powered by artificial intelligence, and he’d seen the rising power of AI firsthand. He had a hunch that AI algorithms might be able to find patterns in the scroll images that humans had missed.
After studying the problem for some time and ingratiating himself with the classics community, Friedman, who’s left GitHub to become an AI-focused investor, decided to start a contest. Last year he launched the Vesuvius Challenge, offering $1 million in prizes to people who could develop AI software capable of reading four passages from a single scroll. “Maybe there was obvious stuff no one had tried,” he recalls thinking. “My life has validated this notion again and again.”
As the months ticked by, it became clear that Friedman’s hunch was a good one. Contestants from around the world, many of them twentysomethings with computer science backgrounds, developed new techniques for taking the 3D scans and flattening them into more readable sheets. Some appeared to find letters, then words. They swapped messages about their work and progress on a Discord chat, as the often much older classicists sometimes looked on in hopeful awe and sometimes slagged off the amateur historians.
On Feb. 5, Friedman and his academic partner Brent Seales, a computer science professor and scroll expert, plan to reveal that a group of contestants has delivered transcriptions of many more than four passages from one of the scrolls. While it’s early to draw any sweeping conclusions from this bit of work, Friedman says he’s confident that the same techniques will deliver far more of the scrolls’ contents. “My goal,” he says, “is to unlock all of them.”
Before Mount Vesuvius erupted, the town of Herculaneum sat at the edge of the Gulf of Naples, the sort of getaway wealthy Romans used to relax and think. Unlike Pompeii, which took a direct hit from the Vesuvian lava flow, Herculaneum was buried gradually by waves of ash, pumice and gases. Although the process was anything but gentle, most inhabitants had time to escape, and much of the town was left intact under the hardening igneous rock. Farmers first rediscovered the town in the 18th century, when some well-diggers found marble statues in the ground. In 1750 one of these digs struck the marble floor of the villa thought to belong to Caesar’s father-in-law, Senator Lucius Calpurnius Piso Caesoninus, known to historians today as Piso.
During this time, the first excavators who dug tunnels into the villa to map it were mostly after more obviously valuable artifacts, like the statues, paintings and recognizable household objects. Initially, people who ran across the scrolls, some of which were scattered across the colorful floor mosaics, thought they were just logs and threw them on a fire. Eventually, though, somebody noticed the logs were often found in what appeared to be libraries or reading rooms, and realized they were burnt papyrus. Anyone who tried to open one, however, found it crumbling in their hands.
Terrible things happened to the scrolls in the many decades that followed. The scientif-ish attempts to loosen the pages included pouring mercury on them (don’t do that) and wafting a combination of gases over them (ditto). Some of the scrolls have been sliced in half, scooped out and generally abused in ways that still make historians weep. The person who came the closest in this period was Antonio Piaggio, a priest. In the late 1700s he built a wooden rack that pulled silken threads attached to the edge of the scrolls and could be adjusted with a simple mechanism to unfurl the document ever so gently, at a rate of 1 inch per day. Improbably, it sort of worked; the contraption opened some scrolls, though it tended to damage them or outright tear them into pieces. In later centuries, teams organized by other European powers, including one assembled by Napoleon, pieced together torn bits of mostly illegible text here and there.
Today the villa remains mostly buried, unexcavated and off-limits even to the experts. Most of what’s been found there and proven legible has been attributed to Philodemus, an Epicurean philosopher and poet, leading historians to hope there’s a much bigger main library buried elsewhere on-site. A wealthy, educated man like Piso would have had the classics of the day along with more modern works of history, law and philosophy, the thinking goes. “I do believe there’s a much bigger library there,” says Richard Janko, a University of Michigan classical studies professor who’s spent painstaking hours assembling scroll fragments by hand, like a jigsaw puzzle. “I see no reason to think it should not still be there and preserved in the same way.” Even an ordinary citizen from that time could have collections of tens of thousands of scrolls, Janko says. Piso is known to have corresponded often with the Roman statesman Cicero, and the apostle Paul had passed through the region a couple of decades before Vesuvius erupted. There could be writings tied to his visit that comment on Jesus and Christianity. “We have about 800 scrolls from the villa today,” Janko says. “There could be thousands or tens of thousands more.”
In the modern era, the great pioneer of the scrolls is Brent Seales, a computer science professor at the University of Kentucky. For the past 20 years he’s used advanced medical imaging technology designed for CT scans and ultrasounds to analyze unreadable old texts. For most of that time he’s made the Herculaneum papyri his primary quest. “I had to,” he says. “No one else was working on it, and no one really thought it was even possible.”
Progress was slow. Seales built software that could theoretically take the scans of a coiled scroll and unroll it virtually, but it wasn’t prepared to handle a real Herculaneum scroll when he put it to the test in 2009. “The complexity of what we saw broke all of my software,” he says. “The layers inside the scroll were not uniform. They were all tangled and mashed together, and my software could not follow them reliably.”
By 2016 he and his students had managed to read the Ein Gedi scroll, a charred ancient Hebrew text, by programming their specialized software to detect changes in density between the burnt manuscript and the burnt ink layered onto it. The software made the letters light up against a darker background. Seales’ team had high hopes to apply this technique to the Herculaneum papyri, but those were written with a different, carbon-based ink that their imaging gear couldn’t illuminate in the same way.
Over the past few years, Seales has begun experimenting with AI. He and his team have scanned the scrolls with more powerful imaging machines, examined portions of the papyrus where ink was visible and trained algorithms on what those patterns looked like. The hope was that the AI would start picking up on details that the human eye missed and could apply what it learned to more obfuscated scroll chunks. This approach proved fruitful, though it remained a battle of inches. Seales’ technology uncovered bits and pieces of the scrolls, but they were mostly unreadable. He needed another breakthrough.
Friedman set up Google alerts for Seales and the papyri in 2020, while still early in his Rome obsession. After a year passed with no news, he started watching YouTube videos of Seales discussing the underlying challenges. Among other things, he needed money. By 2022, Friedman was convinced he could help. He invited Seales out to California for an event where Silicon Valley types get together and share big ideas. Seales gave a short presentation on the scrolls to the group, but no one bit. “I felt very, very guilty about this and embarrassed because he’d come out to California, and California had failed him,” Friedman says.
On a whim, Friedman proposed the idea of a contest to Seales. He said he’d put up some of his own money to fund it, and his investing partner Daniel Gross offered to match it.
Seales says he was mindful of the trade-offs. The Herculaneum papyri had turned into his life’s work, and he wanted to be the one to decode them. More than a few of his students had also poured time and energy into the project and planned to publish papers about their efforts. Now, suddenly, a couple of rich guys from Silicon Valley were barging into their territory and suggesting that internet randos could deliver the breakthroughs that had eluded the experts.
More than glory, though, Seales really just hoped the scrolls would be read, and he agreed to hear Friedman out and help design the AI contest. They kicked off the Vesuvius Challenge last year on the Ides of March. Friedman announced the contest on the platform we fondly remember as Twitter, and many of his tech friends agreed to pledge their money toward the effort while a cohort of budding papyrologists began to dig into the task at hand. After a couple of days, Friedman had amassed enough money to offer $1 million in prizes, along with some extra money to throw at some of the more time-intensive basics.
Friedman hired people online to gather the existing scroll imagery, catalog it and create software tools that made it easier to chop the scrolls into segments and to flatten the images out into something that was readable on a computer screen. After finding a handful of people who were particularly good at this, he made them full members of his scroll contest team, paying them $40 an hour. His hobby was turning into a lifestyle.
The initial splash of attention helped open new doors. Seales had lobbied Italian and British collectors for years to scan his first scrolls. Suddenly the Italians were now offering up two new scrolls for scanning to provide more AI training data. With Friedman’s backing, a team set to work building precision-fitting, 3D-printed cases to protect the new scrolls on their private jet flight from Italy to a particle accelerator in England. There they were scanned for three days straight at a cost of about $70,000.
Seeing the imaging process in action drives home both the magic and difficulty inherent in this quest. One of the scroll remnants placed in the scanner, for example, wasn’t much bigger than a fat finger. It was peppered by high-energy X-rays, much like a human going through a CT scan, except the resulting images were delivered in extremely high resolution. (For the real nerds: about 8 micrometers.) These images were virtually carved into a mass of tiny slices too numerous for a person to count. Along each slice, the scanner picked up infinitesimal changes in density and thickness. Software was then used to unroll and flatten out the slices, and the resulting images looked recognizably like sheets of papyrus, the writing on them hidden.
The files generated by this process are so large and difficult to deal with on a regular computer that Friedman couldn’t throw a whole scroll at most would-be contest winners. To be eligible for the $700,000 grand prize, contestants would have until the end of 2023 to read just four passages of at least 140 characters of contiguous text. Along the way, smaller prizes ranging from $1,000 to $100,000 would be awarded for various milestones, such as the first to read letters in a scroll or to build software tools capable of smoothing the image processing. With a nod to his open-source roots, Friedman insisted these prizes could be won only if the contestants agreed to show the world how they did it.
Luke Farritor was hooked from the start. Farritor—a bouncy 22-year-old Nebraskan undergraduate who often exclaims, “Oh, my goodness!”—heard Friedman describe the contest on a podcast in March. “I think there’s a 50% chance that someone will encounter this opportunity, get the data and get nerd-sniped by it, and we’ll solve it this year,” Friedman said on the show. Farritor thought, “That could be me.”
The early months were a slog of splotchy images. Then Casey Handmer, an Australian mathematician, physicist and polymath, scored a point for humankind by beating the computers to the first major breakthrough. Handmer took a few stabs at writing scroll-reading code, but he soon concluded he might have better luck if he just stared at the images for a really long time. Eventually he began to notice what he and the other contestants have come to call “crackle,” a faint pattern of cracks and lines on the page that resembles what you might see in the mud of a dried-out lakebed. To Handmer’s eyes, the crackle seemed to have the shape of Greek letters and the blobs and strokes that accompany handwritten ink. He says he believes it to be dried-out ink that’s lifted up from the surface of the page.
The crackle discovery led Handmer to try identifying clips of letters in one scroll image. In the spirit of the contest, he posted his findings to the Vesuvius Challenge’s Discord channel in June. At the time, Farritor was a summer intern at SpaceX. He was in the break room sipping a Diet Coke when he saw the post, and his initial disbelief didn’t last long. Over the next month he began hunting for crackle in the other image files: one letter here, another couple there. Most of the letters were invisible to the human eye, but 1% or 2% had the crackle. Armed with those few letters, he trained a model to recognize hidden ink, revealing a few more letters. Then Farritor added those letters to the model’s training data and ran it again and again and again. The model starts with something only a human can see—the crackle pattern—then learns to see ink we can’t.
Unlike today’s large-language AI models, which gobble up data, Farritor’s model was able to get by with crumbs. For each 64-pixel-by-64-pixel square of the image, it was merely asking, is there ink here or not? And it helped that the output was known: Greek letters, squared along the right angles of the cross-hatched papyrus fibers.
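The loop Farritor describes is essentially self-training, sometimes called pseudo-labeling: train on the few patches a human can label, let the model label more, fold the confident predictions back into the training set, and repeat. A minimal sketch of that idea follows; the synthetic "patches," the nearest-centroid classifier, and every threshold here are invented for illustration and are not the actual Vesuvius Challenge pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_patch(has_ink):
    # Synthetic 64x64 density map; "ink" is a faint raised stroke,
    # standing in for the slight density bump real carbon ink leaves.
    patch = rng.normal(0.5, 0.05, (64, 64))
    if has_ink:
        patch[20:44, 28:36] += 0.08
    return patch

def features(patch):
    # Crude stand-in features: overall mean and variance of the patch.
    return np.array([patch.mean(), patch.var()])

def train_centroids(X, y):
    # Nearest-centroid "model": one centroid per class (0 = blank, 1 = ink).
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict(centroids, X):
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    conf = np.abs(d0 - d1)          # margin between classes = confidence
    return (d1 < d0).astype(int), conf

# A pool of 200 unlabeled patches; only the first 8 are "human-readable"
# (the crackle the contestants could see by eye).
truth = np.tile([0, 1], 100)
pool = np.array([features(make_patch(t)) for t in truth])
labeled_idx = list(range(8))
X, y = pool[labeled_idx], truth[labeled_idx]

for _ in range(5):                   # retrain, pseudo-label, repeat
    centroids = train_centroids(X, y)
    pred, conf = predict(centroids, pool)
    take = np.argsort(conf)[-40:]    # fold in the most confident predictions
    X = np.vstack([pool[labeled_idx], pool[take]])
    y = np.concatenate([truth[labeled_idx], pred[take]])

final_pred, _ = predict(train_centroids(X, y), pool)
accuracy = (final_pred == truth).mean()
print(f"accuracy on pool: {accuracy:.2f}")
```

The design choice this toy shares with the contest approach is that each round's output becomes the next round's training data, so a model seeded with a handful of human-visible letters can gradually learn to flag ink the eye cannot see.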
In early August, Farritor received an opportunity to put his software to the test. He’d returned to Nebraska to finish out the summer and found himself at a house party with friends when a new, crackle-rich image popped up in the contest’s Discord channel. As the people around him danced and drank, Farritor hopped on his phone, connected remotely to his dorm computer, threw the image into his machine-learning system, then put his phone away. “An hour later, I drive all my drunk friends home, and then I’m walking out of the parking garage, and I take my phone out not expecting to see anything,” he says. “But when I open it up, there’s three Greek letters on the screen.”
Around 2 a.m., Farritor texted his mom and then Friedman and the other contestants about what he’d found, fighting back tears of joy. “That was the moment where I was like, ‘Oh, my goodness, this is actually going to work. We’re going to read the scrolls.’”
Soon enough, Farritor found 10 letters and won $40,000 for one of the contest’s progress prizes. The classicists reviewed his work and said he’d found the Greek word for “purple.”
Farritor continued to train his machine-learning model on crackle data and to post his progress on Discord and Twitter. The discoveries he and Handmer made also set off a new wave of enthusiasm among contestants, and some began to employ similar techniques. In the latter part of 2023, Farritor formed an alliance with two other contestants, Youssef Nader and Julian Schilliger, in which they agreed to combine their technology and share any prize money.
In the end, the Vesuvius Challenge received 18 entries for its grand prize. Some submissions were ho-hum, but a handful showed that Friedman’s gamble had paid off. The scroll images that were once ambiguous blobs now had entire paragraphs of letters lighting up across them. The AI systems had brought the past to life. “It’s a situation that you practically never encounter as a classicist,” says Tobias Reinhardt, a professor of ancient philosophy and Latin literature at the University of Oxford. “You mostly look at texts that have been looked at by someone before. The idea that you are reading a text that was last unrolled on someone’s desk 1,900 years ago is unbelievable.”
A group of classicists reviewed all the entries and did, in fact, deem Farritor's team the winners. They stitched together more than a dozen columns of text, with entire paragraphs legible throughout their entry. Translation is ongoing, but the scholars believe the text to be another work by Philodemus, one centered on the pleasures of music and food and their effects on the senses. "Peering at and beginning to transcribe the first reasonably legible scans of this brand-new ancient book was an extraordinarily emotional experience," says Janko, one of the reviewers. While these passages aren't particularly revelatory about ancient Rome, most classics scholars have their hopes for what might be next.
There’s a chance that the villa is tapped out—that there are no more libraries of thousands of scrolls waiting to be discovered—or that the rest have nothing mind-blowing to offer. Then again, there’s the chance they contain valuable lessons for the modern world.
That world, of course, includes Ercolano, the modern town of about 50,000 built on top of ancient Herculaneum. More than a few residents own property and buildings atop the villa site. “They would have to kick people out of Ercolano and destroy everything to uncover the ancient city,” says Federica Nicolardi, a papyrologist at the University of Naples Federico II.
Barring a mass relocation, Friedman is working to refine what he’s got. There’s plenty left to do; the first contest yielded about 5% of one scroll. A new set of contestants, he says, might be able to reach 85%. He also wants to fund the creation of more automated systems that can speed the processes of scanning and digital smoothing. He’s now one of the few living souls who’s roamed the villa tunnels, and he says he’s also contemplating buying scanners that can be placed right at the villa and used in parallel to scan tons of scrolls per day. “Even if there’s just one dialogue of Aristotle or a beautiful lost Homeric poem or a dispatch from a Roman general about this Jesus Christ guy who’s roaming around,” he says, “all you need is one of those for the whole thing to be more than worth it.”
Dr. Zayne's Mood Rating Guide
Part 1
Back home, I stare at our haul of gaming stuff. A light bulb goes off in my head.
I pat the sofa beside me and beckon Zayne, who just finished drying his hair, to come over and hear my master plan.
Zayne: You found another "show of the century"?
MC: No, I want to set up our very own cozy gaming corner right here at home!
Zayne: In other words, you ran out of space for your gaming consoles?
MC: ...Basically. So, what do you think of my plan?
Zayne: Since you said I have a stake in this, we might as well start setting it up now.
We clear out a large space in the living room and go through our first batch of gaming gear.
As I'm arranging the controllers, an unexpected sound suddenly comes from my pocket.
Virtual Pet: I'm huuungry!
When our gazes meet, I swear I can see OTTO-MART's question marks in Zayne's eyes.
Zayne: ?
MC: …
Zayne: Is this your new smart emotion detector? It's very realistic and almost as desperate as my patients after their eight-hour preoperative fast.
MC: You've got the wrong idea...
He watches me pull out a Twinkle collab virtual pet device from my pocket.
MC: It's this guy who's hungry. I got it from the supermarket. I couldn't help but play with it after we got home.
Zayne: ...What does it eat exactly? Batteries?
I carefully study his face to confirm he isn't making a deadpan joke.
MC: I guess these things wouldn't have caught your attention when we were kids. Looks like it's my turn to teach Dr. Zayne something.
Sitting on the sofa again, I eagerly show Zayne how the device works.
The screen lights up and displays countless Snowyblobbus. They gather together to form... a Happy Snowman.
Zayne: Now that I'm looking at it, I suppose batteries wouldn't be appropriate for a meal.
MC: Come to think of it, this cutie patootie's taste is similar to yours.
I select "Street Food Platter" from the Food menu to feed the snowman. Hearts instantly fill the screen.
MC: You saw that, right? This gives the highest possible boost to its happiness.
[ Playtime Cache Guide ]