Do you have any hcs for any fallout characters :p
Oh fuck I have a lot. Let me think of a few. I’m just gonna choose random ones across any of the games.
Paladin Danse is more squeamish than most would take him for. His self-control extends to control over visceral reactions, so in the field this doesn’t really come up; ergo he can see someone sustain a lethal injury in combat and appear unfazed. However, Danse is the kind of person who can’t look at a needle going in while receiving an injection. When there isn’t the obligation of duty keeping him together, blood and viscera really bother him.
Raul Tejada is excellent with animals and has a special fondness for horses, as his family had them on the ranch where he grew up. I like to think Raul at some point had a mule named Sleipnir. Maybe he still does and they were separated when Raul got stuck on Black Mountain.
The Stealth Suit Mk. II is capable of full sentience, but has restraints written into their code to bar them from full awareness. The suit is aware they’re being held back from independent thought. Most of the things they say are automated, but they are capable of limited problem solving and critical thinking. The suit asking “Do you like me?” is an example of this.
Butch DeLoria has a manageable but noticeable fear of the dark. He insists that he didn’t develop that fear until after leaving the vault.
As much as I complain about the (at its core, justified) backlash to corporate AI going in counterproductive-at-best directions here, I would like to take a moment to talk about what I would like to see done about the problem of corporate swarming all over AI as a moneymaking fad.
First, I must address the true root of the problem: as we all know, a lot of the types of people known derogatorily as techbros jumped ship from cryptocurrency and NFTs to AI after crypto crashed...multiple times. Why AI? Why was it the next big thing?
Well, why was crypto the previous? Because it was novel and unregulated. Why did it crash? Because of the threat of regulation.
It is worth mentioning, at this point, that the threat of regulation ended up doing massive harm to people who used crypto for reasons OTHER than speculative investment scams. This included a lot of people who engaged in business that is illegal but lifesaving (e.g., gray-market pharmaceuticals), and people who engaged in business that is technically legal but de facto illegal due to payment processors hating it (e.g., porn and other online sex work) - i.e., a lot of extremely vulnerable people. Stick a pin in this, it will be important.
AI is a novel and largely unregulated field. This makes it EXTREMELY appealing to venture capitalists and speculative investors - they can fuck around and do basically whatever they want with little to no oversight, and jump ship the moment someone says "all right, this is ridiculous, you CANNOT just keep pretending it's a rare fluke when your beefed-up autocomplete chatbot makes up garbage information, and the next clown who decides that a probability function trained on the ableism- and pop-psych-poisoned broader internet is a viable substitute for trained mental health counselors is losing any licenses they have and/or getting fined into bankruptcy." They've always been like this - when technology is too new for us to even know how we SHOULD regulate it, the greedy capitalists flock to it, hoping to cash out quick before an ounce of responsibility catches up to them, doubly so when it's in a broader field that's already notoriously underregulated, such as the tech sector in the US right now.
That tendency is bad for literally everyone else in the process.
Remember what I said about how the crypto crackdown hurt a lot of very vulnerable people? Well, developers aren't lying when they say that AI can have extremely valuable, pro-human applications, from AAC (which it's already serving as; this is, imo, THE most valuable function of ChatGPT), to health and safety - while we absolutely should not entrust things like reading medical images and safety inspections to AI without oversight, with oversight it's already helping us find cancers faster, because while computers are fallible, so are humans, and we're fallible in different ways. When AI is developed with human-focused applications in mind over profit-focused ones, it can very easily become another slice of Swiss cheese to add to one of our most useful safety models.
It can also be used for automation...for better, and for worse. Of course, CEOs and investors are currently making a hard push for "worse".
That's why I find it very important to come up with a comprehensive plan to regulate AI and tech in general against false advertisement/scams and outright endangerment, without cutting too deep into the potential it has for being genuinely good.
My proposals are as follows:
PRI. VA. CY. LAW. PRIVACY LAW. PRIVACY LAW. As it stands now, US law regarding online privacy and data security - which is extremely pertinent because most of the most unscrupulous developers are US-based - is at best a vicious free-for-all that operates entirely on manufactured "consent", and at worst actively hostile to everyone but corporate interests. We need to change that ASAP. As it stands, robots.txt instructions (and other similar things, such as Do Not Track flags) are legally... a polite request that developers are 100% allowed to just ignore if they feel like it. The entire mainstream internet is spyware. This needs to change. We need to impose penalties for bypassing others' privacy preferences and bring the US up to speed with the EU when it comes to privacy and data security. This would solve the problem that many are counterproductively trying to solve by tightening copyright law, with more side benefits and none of the drawbacks.
Health and safety audits and false advertising crackdowns. Penalties must be imposed on entities who knowingly use AI in inappropriate and unsafe applications, and on AI developers who misrepresent the utility of their tools or downplay their potential for inaccuracy. Companies using AI in products with obvious potential hazards, from robotics to counseling, should be subject to safety audits to make absolutely sure they're not cutting corners or understating risks. Developers who are found to be understating the limitations of their software or cutting safety features should be subject to fines and loss of licenses.
Robust union protections, automation taxes, and beefing up unemployment/layoff protection. Where automation can and cannot be used in the professional sector should never be a matter of law beyond the safety aspect, but automation rollouts do always come with drawbacks - both in the form of layoffs, and in the form of complicating the workflow in the name of saving a buck. The government cannot make sweeping judgments about how this will work, because it's literally impossible for them to account for every possibility, but they CAN back unions who can. Workers know their workflow best, and thus need the power to say, for instance, "no, I need to be able to communicate with whoever does this step, we will not abide by it being automated without oversight or only overseen by someone we can't communicate with adequately, that pushes the rest of our jobs WAY beyond our pay grade" or "no, we're already operating on a skeleton crew, we will accept this tool ONLY if there are no layoffs or pay cuts; it should be about getting our workload to a SUSTAINABLE level, not overworking even fewer of us". Automation taxes can also both serve as an incentive for bosses to take more time considering what they do and do not want to automate, and contribute to unemployment/layoff protection (and eventually UBI). This will ensure that workers will be protected, even when they're not in fields as visible and publicly appreciated as arts.
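To make the robots.txt point from the privacy proposal concrete: the file only declares a site owner's preferences, and nothing in the protocol enforces them. Here's a minimal sketch using Python's standard-library parser - the crawler name "ExampleBot" and the URLs are made up for illustration:

```python
from urllib.robotparser import RobotFileParser

# A site owner publishes this at example.com/robots.txt,
# asking all crawlers to stay out of /private/.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching:
print(parser.can_fetch("ExampleBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleBot", "https://example.com/public/page"))   # True

# A scraper that simply never runs this check fetches the "disallowed"
# URL anyway. Compliance is entirely voluntary - which is exactly why
# it currently amounts to a polite request rather than a rule.
```

The whole enforcement mechanism is the crawler choosing to call `can_fetch` at all; that's the gap a legal penalty would have to fill.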
In conclusion, the AI situation is a complicated one that needs nuance, and it needs to be approached and regulated in a pro-human, pro-privacy way.