#best computer science solution
Exploring the Latest Trends in Software Development
Introduction: The software industry is ever-evolving, driven by new technologies and changing market needs. To remain competitive and relevant, developers must keep abreast of the current trends in their field. Read to continue .....
A seven-year-old is fighting an old man and is winning!
Doctor Ivo Robotnik never made mistakes. He just miscalculated, which was perfectly normal since he was always revolutionizing the sciences. Therefore, Ivo Robotnik knew that he hadn't made a mistake, he just... miscalculated, terribly.
Agent Aban Stone knew better than to say that his doctor might have made a slight mistake, yet he thought so, particularly now. The project his doctor was working on was supposed to be a machine able to show the knowledge of anyone who used it; even though Stone had been against the idea, his doctor decided to test it on himself. And instead of showing them any kind of knowledge from his doctor...
Instead of what was supposed to happen, there was a puff, some white smoke, and now, in the middle of the lab, there was a boy. The boy bore a striking resemblance to his doctor, and wore the same perplexed expression his doctor had right now.
"Who are you?!?! Where am I?!?! I don't have anything that you might want!!... you... you dimwits!!!" the boy screamed familiarly, looking around the lab as if searching for an exit.
Robotnik, in his usual way, went straight to the terrified child, who was still screaming about what idiots they must be to let him see their faces.
"I know that you have eidetic memory! Because I'm you from the future!" He screamed back just as his hands held his younger version's shoulders.
Well, thought Stone unfazed, that explains the resemblance.
The young Ivo must have been around the age of six. He had wild red hair that looked unbrushed, and he was dressed in cheap clothes, unkempt and hanging loose off his small frame, probably secondhand. He also seemed completely lost and terrified.
Stone resisted the urge to go and comfort the child.
His doctor kept talking about how the younger one should make himself scarce while he fixed the problem; he didn't care if the child saw anything of the future, so the boy was free to go look around.
"Perhaps, I might be of assistance taking care of him, doctor," he said before he could stop himself.
His doctor looked mad, well, madder than a moment ago, but he just nodded stiffly before leaving for the computer. When the boy and the Agent were alone, the adult smiled at the child.
"Is there any way that you would prefer that I call you?" he asked. The little redhead looked at him with too-big brown eyes, his doctor's eyes, full of unshed tears.
The boy looked away, wiping his tears harshly with his T-shirt before murmuring,
"Just Ivo is fine."
"Alright Ivo, now, do you want to get something to eat or do you want to do as the doctor said and learn about the future?" he asked nicely as he extended his hand for the child to take it.
The boy observed him for some minutes before shyly taking his hand. Stone's smile grew.
Ivo knew that what was happening wasn't a dream; he knew that a dream couldn't feel so real. At first, he thought that maybe someone at the orphanage had decided to sell him off to some weirdos, but then the taller man said it was him, well, future him. They were in some kind of laboratory with ultra sss technology!!
His future self wasn't very nice, just like every other adult he knew, but then there was his agent, Ivo's future Agent! Mr Stone, as he had decided to call him, was the nicest adult Ivo had ever known: he talked calmly, let him ask all the questions he wanted, made the best cocoa he had ever drunk, and even got him nicer clothes!
His Agent was the best!
Ivo was talking with his Mr Stone when his older self came over and started saying mean things to his agent. Ivo didn't care if his future self was mean to idiots, but he was prohibited from being mean to his agent!
So Ivo did the only reasonable thing he could think of. He kicked that old man's legs and rescued his agent!
Doctor Ivo Robotnik hated the mere presence of his younger version: he was weak, helpless and absolutely lacking. Being able to see himself just as his former caretakers had wasn't pleasant at all. He could only see his own weakness, his own failures.
He remembered his six-year-old self as one of the weakest versions of himself, still so hopeful that the world wouldn't be as cruel as it was; his eight-year-old self might have been more pleasant, already awakened to the harsh truth of the world. Anyway, what was done was done, and he was trapped with one of the most loathed versions of himself.
So he ordered the brat to stay away, free to do whatever he wanted, while he went back to fixing the calculations. Robotnik hated the moment his agent decided to speak, offering to entertain the brat, as if that would make him endearing in Robotnik's eyes. Ridiculous!
He didn't care what the agent was doing with the brat; as long as he got his lattes on time, he didn't care!
...
...
...
Okay, he might be curious. Robotnik observed them through the cameras, expecting to see the exact moment the sycophant showed his true colours. He watched them talk calmly, watched them walk hand in hand, watched his agent prepare cocoa for the brat (it didn't matter that he was also making his latte! That was his job!), watched his agent bathe the imp and help the wretch brush his hair...
He needed to show his agent who was the boss.
He was doing his usual thing, terrorising his agent to show dominance, manhandling him around...
When a sudden pain in his left leg shocked him. Robotnik glanced down just to be met with familiar brown eyes...
The brat just kicked him!!
"You kicked me!!"
"You have been prohibited from being mean to MY agent!!"
HIS AGENT????
STONE WAS ROBOTNIK'S AGENT
NOT OF SOME... SOME LITTLE WEAKLING!!!
"YOU ARE A LITTLE NOBODY AND-!!!!"
Stone was hugging the brat. Stone was hugging the brat. Stone was hugging the brat. Stone was hugging the brat. Stone was hugging his younger self. Stone was hugging him.
...
...
...
We're sorry there's no signal :,(
...
...
...
Boop beep boop!
...
...
The system has returned.
We're happy to say we are returning you to where you were before the error H34R7.
So, Stone is hugging the brat. He has decided to protect the little imp, and he picked him up, hugging him while looking at Robotnik with his big wonderful stupid eyes filled with fake worry. Why else would he be willing to touch him??? He's saying some nonsense about the brat being only a child who doesn't understand, who is still too young...
And Doctor Robotnik? He could only stare as the fucking little brat proudly sent him a smirk from Stone's arms while the idiot talked about a safe environment to grow up in and some more stupidities.
Wait.
Stone is on the brat's side.
Does that mean that the brat has won???
A brat has just kicked him and his agent is on the brat's side?????
"That imp needs to leave!!"
"Doctor, please!"
The little bastard puts on his best sad face for Stone whenever the idiot looks his way; when the man turns back to keep trying to convince him, the imp starts signing to Robotnik (huh, he didn't remember already knowing how to sign at that age): "Ivo 1, Old man 0."
If he killed his younger self, would it affect him, or would it just create a new alternative universe, as if he had travelled to a parallel one?
#agent stone#dr robotnik#ivo robotnik#stone#doctor eggman#well now thanks to y'all I ended thinking more of this and ended writing this#hope y'all are happy#I hope that this is all#might publish it in ao3 when I get some time#I'm calling this au#baby Ivo vs Robotnik#stobotnik
66 notes
·
View notes
Text
Maybe One Day I Can Learn to Love You, Too
Word count: 1.9k words
This entire thing was a bad idea. Scratch that, it was a terrible idea. Only jumping off Mount Everest could compare to how risky this was, but if you pulled it off, the win would be worth the risk.
*record scratching noises* let's rewind to what happened.
You were just a student minding your own business amidst the hardships that were college, and being in college meant you now had a chance at the most life-changing thing a girl could have: getting into a relationship.
Unfortunately, none of the boys in your class were up to your liking, and if they were, they were already taken.
While you had first detested the idea of dating a senior, it all changed one day when you and your friends were in the lab and a few seniors from the computer sciences section walked in. You recognized a few of them, but it was the blonde-haired* senior that caught your eye.
Unlike his other fellows, who were pretty loud and outgoing, he stood in a corner with his notebook tucked in his arm, his entire face schooled into seriousness as he waited for the others. While the instructor talked to them, dividing his attention between your batch and the seniors, you took this moment to scrutinize the silent boy even further.
"Who caught your eye?" One of your friends, Mary, bumped your shoulder, and you looked at her, annoyed.
"That one," you whispered, "by the board."
As if your voice carried over, he looked in your direction for a second, before sliding his gaze to something else.
"Ah, I didn't know your type was the nerd." she smiled at you, only for your other friend, Samantha, to intervene.
"Gotta agree, he does have the looks." She looked you in the eye. "If you have a crush on him-"
"Are you kidding me?" You scoffed, watching the seniors leave out of the corner of your eye. "My parents will disown me if I got a boyfriend before a degree."
Which was a blatant lie.
"Anyways, if you ever change your mind, let me know." Samantha, the biggest social butterfly you'd ever met, pointed to herself. "I'll arrange the rest."
"Thanks." You diverted your gaze to the heating copper sulphate solution and took it off.
"I'm surprised you don't have any questions." Mary prodded, to which you shushed her before you set the solution to cool.
"I'm trying to figure out which one to ask first. I have too many questions." You answered. "Now, what was his name?"
"Kento Nanami. Computer science major, topper ever since he set foot in this place." Samantha shot off like a gun, tossing her brown ponytail over her shoulder
"Far earlier than that." Mary added. "I was his junior in high school. Ask more, I'm enjoying this."
"You answered three of my questions in one go," you grumbled. "Anyways, is he dating anyone?"
You were embarrassed to see the two of them giggle. "No, he's single."
"A shame too, a fine lad like that." Sammy shook her head.
"So why don't you?" it slipped out of your mouth too quickly, and you immediately berated yourself for giving her the idea.
"Not my type. You done with asking?"
"Has he got any siblings? Or any best friends?"
"No siblings as far as I know, and he does have a friend, Yu." Mary answered this time.
"The one you're dating?" to which she nodded.
"Where does he live? Or spend most of his time?" you were about to ask this when the bell rung for the Maths lesson, and while you packed your things up, the two of them drew closer to you.
"Not making fun or anything, but if you're serious about this," Mary whispered fast, "you can tell us okay?"
"Okay."
****
Over the next week, your entire Google search history consisted of tips and stories on how to get your crush to notice you.
Somewhere in May, Samantha texted you. Hey, a senior of mine is asking for a girl. Her friend's single and you know, the formal dance event is coming up. You up?
Who is it? Do they even know me?
No damned idea.
Well, you were single too, but the thing was, there was only one person you wanted to go to prom with. That's a bit sudden, actually. Give me a few days to think, okay?
Soon, your crush became too obvious. You, who preferred to eat alone in the rooms, would go for lunch in the cafeteria just to catch a glimpse of him, and you began making frequent visits to the library hoping to find him pulling out a book. Crossing paths in the hallways, he would look at you, and sometimes nod in greeting, and for the entire day you'd be in the skies.
And when you found out he was on the soccer team? There wasn't a single match of his you didn't attend, even if it meant baking in the sun or freezing outside with a fever.
It did not help when Mary or Sammy would tell you that they'd noticed him occasionally glancing at you during break, or looking around for you when you were absent, or that he hadn't borrowed a library book just because you had needed it more.
You told them every time, only to get an eye roll and a smirk, that it wasn't anything serious or meaningful; meanwhile you kept gaslighting yourself into believing it was just a crush, that you only thought he was cool, but you couldn't stop yourself from imagining him and his pretty face and his arms and-
And if Mary was telling the truth, Haibara-senpai had mentioned that he had gotten out of your crush that he liked a girl now, but wouldn't tell who. For a moment, your hopes rose, but when you didn't see any further reaction from Kento, you quashed your silly dreams. Maybe he liked someone from his class. But it only fueled your want to at least try.
Fast forward to today: you had gone to ask Mary a favor.
"Hey, don't make fun," you warned her, "but could you have this delivered to him? Without telling him it's from me?" At her questioning stare at the wrapped packet, you explained, "it's his birthday."
Immediately her face split into a brilliant smile. "Why didn't you say so earlier?"
And when she texted you mission successful, you waited to see if he would figure it out or approach you.
You were seriously disappointed to find out that none of that had happened; even though you yourself had told her not to reveal your identity, you had hoped that maybe he'd try to prod it out of Mary.
And when, during your Physics lesson, he came in to convey a message to the professor, you tried to get a clear view of his wrist without being too suspicious, and you felt your heart sink when you couldn't find a watch on his right wrist.
And that's when you decided to go for it, and texted Samantha, hey, I'm up for that formal dance. Is it still open?
You tried to drive him out of your head. But while the other girls began telling you that you shouldn't have fallen for him, that he was too serious for someone like you, that he wasn't good enough, that there were other fish in the sea, you were adamant to prove that this wasn't just a crush or an obsession. Because when you set your mind on something, you see it through to the end.
When you returned from the summer holidays, the school workload seemed to have increased tenfold in the students' absence. For a while, it gave you a distraction from constantly thinking and scheming about Kento.
That didn't mean you still didn't lie in bed at night, staring at the ceiling, thinking about him.
Recently, you had taken up the habit of tattooing, and while your parents were strict about a permanent tattoo - it's against the school code, we don't want you expelled because of some silly fashion, you can do it after school - you resorted to black markers and Sharpies. So at all times, your wrist would be decorated with random doodles or words.
After being told off by your professor once, you began wearing longer-sleeved shirts to hide your tattoos.
It was a random day and while the teacher had yet to come, you took the chance amidst the chaos of the classroom to begin doodling one of your most tattooed words ever onto your wrist.
Kento Nanami.
You were about to begin drawing a heart around your initials when the teacher suddenly came in to yell about getting ready for a biology lab practical. Marching into a line with your notebook and pencils, you listened as the instructor explained the objective on the way.
Your class had just learned how to take blood pressure last week, so now it was time to practice it on someone else. She herded you all to the computer laboratory, the other labs being isolated, and her not wanting to be scolded by some professor for disturbing someone else's class.
The computer lab was full of seniors, who were at the tables doing their work, when the instructor came in and asked them if they gave their permission for the lab.
Hearing their yes, she led you all inside and instructed you to choose a partner. You were pushed and barged against by others who were eager to take a partner of their choosing, and in the end, you stood there, waiting for someone unpaired whose blood pressure you could take.
Just before you could turn and ask the instructor what to do, a student asked to enter the room.
"Ah, there you are! Mind if she took your blood pressure for a practical?"
You turned around to see your teacher asking none other than your crush about this, and following her line of sight, he met your gaze. Nodding in assent, he walked over to you, before dragging out two stools for the both of you.
You reached out for his sleeve before hesitating and looking up at him to ask his permission. He, who had been following your every move in silence, used his left hand to push his sweater's sleeve up, allowing you to take the pressure.
Neither of you spoke, and you dared not break the sanctity of the silence, enjoying what closeness you could get, hearing his blood pump through and recording the observations.
"You're left handed?"
You had not heard his voice in a long, long time, and while replaying the sound again and again, you answered, "Yes. You're right-handed, right?"
"Yes."
You were writing on your notepad when you pulled your sleeve back to scratch an itch, and that's when you noticed him stare at your wrist.
"May I ask what you've written there?"
"Where?" you asked in response, knowing full well what he meant. Laughing sheepishly, you just said, "oh, it's nothing-"
"It's alright, I apologize for asking."
"Oh it's completely fine. Are you good at keeping secrets?"
"I am." He was about to ask the reason behind your question when you pulled back your sleeve, waiting for his response to seeing his name doodled on your wrist.
"I know," you finally managed, unable to bear the silence that settled. It was today you decided to let it all out, once and for all. No regrets. "I like you, I have liked you for a long time, and well, I didn't know how to tell you."
"So you tattooed my name?"
"Yeah. It doesn't make sense, I know."
In response, keeping his gaze on you, he pulled back his left sleeve to reveal-
"Recognize this?"
Of course you did. That was your birthday present to him. As you met his gaze with delighted surprise, he shook his wrist to bring the watch further down to show you a tattoo.
Of your name.
"It does make complete sense," he told a stunned you, "the same way I couldn't manage to tell you that I've liked you too. I hope you're good at keeping secrets as well?"
Guess we'll be finding out.
****
Exhilarated, and no longer single, you flopped down on your bed at night, the days ahead already looking brighter to you.
Scrolling down to your newest contact, Kento <3, your eyes slid to the latest chat, him asking you to come to the dance with him, and with a panic you remembered: the college dance was five days later!
You immediately texted Samantha, hey, can you call that dance date off? I'm sorry, I'm coming with someone else!
GIRL WHAT? YOU'RE GONNA GET ME KILLED FR FR. though wait a damned second - WHO IS THE BOY? YOU FINALLY GOT ONE?
A second later.
Bad news girl: about that date I mentioned, my friend's just said he wants to meet you tomorrow.
You called her and explained the entire situation.
"Look, I just agreed to someone else, I wanna go with them-"
"Girl, I get it, but you can't just do this out of the blue."
"Please? You always have something."
"Okay fine. How about you meet that boy tomorrow, and tell him face to face? And do something that won't make me look bad."
*****
Tapping your toes against the pavement outside the coffee shop Samantha had told you to meet at, you nervously went over the words you had prepared in your head. Hey, thanks for the offer, but I actually want someone else?
Finally, hearing footsteps behind you, you raised your head and took a deep breath.
"Y/N?"
You looked up to see Kento, in a black jacket and jeans, staring at you.
"Oh hi," you waved, this is getting bad, now he'll see this, "I was here because a friend of a friend of mine said someone wanted to meet me here. For the dance date."
He looked down at his phone and showed you a picture. "By any chance, is this your friend?"
You looked at Samantha's DP. "It is her!"
"Well, I guess I'll ask you again: will you come to the dance with me?"
****
HEY GIRL! what happened? I heard you said yes?
Yeah, that was the guy I wanted, actually.
*in this au, he's changed his hairstyle to the one he now does as a sorcerer
Hello! this one is kinda longer (school romances ily) and this is my part for College AU, prompt day 5!
#jjk#naomi writes#jjk x reader#kento nanami#nanami kento#nanami jjk#nanami x reader#jjk au#high school au#haibara mention#i miss their duo sm#nanamiweek2025#nanamiweek#nnweek25sfw
Castle Solutions was the only time travel company in the world. They had a giant corporate headquarters in downtown Chicago, which was the only place in the entire world with a time machine, at least as far as anyone knew. They were worth hundreds of billions, and the only reason they weren't worth more seemed to be that they didn't care all that much about money. The time machines were used for everything: reporting, media, market corrections, the surveillance state, and industry. Castle Solutions was the lynchpin of the modern world.
Daniel had thought the waiting room would be nicer.
He sat in a blue-gray chair that would have been at home in any waiting room anywhere else in Chicago. Slightly tinny music played over speakers from the ceiling. A fake potted plant sat in one corner, failing to look lively. There were no windows, because the waiting room was deep in the heart of the building, close to the machine itself.
Daniel was the only one in the waiting room. He'd come half an hour early, lugging all his gear, and now the only thing left was for the clock to run down. A bored-looking woman had come in to tell him that it might be a while, that they were running behind schedule — the time travel company, running behind schedule. So there had been more waiting than expected.
A man in a charcoal gray suit with a simple blue backpack came in. He slung the backpack down onto the ground with a sigh and rubbed his face. He had stubble there, but an artful amount of it, like he'd spent some time in the mirror making sure that it was the right amount of scruff to offset his expensive suit.
Daniel looked straight ahead, trying not to look, keeping his face blank, like he was passing by a homeless person who might ask him for money he didn't have.
"Wow, you've got a lot of stuff," said the man. "Is that a sword?"
"It's a katana," said Daniel. He didn't match the eye contact the man was giving him.
"Oh, cool," said the man. "You're going to ... katana times?"
"Edo Japan, yeah," said Daniel.
Daniel was trying his best not to engage, to get this conversation over as quickly as possible. He wasn't making eye contact.
The man picked up his backpack and moved across the waiting room to be closer to Daniel.
"You speak Japanese?" the man asked.
"Hai, watashi wa nihongo o hanashimasu," replied Daniel. He wished that he were more fluent, that the words had come out less rote.
"Cool," said the man. He had apparently also come closer to get a look at all of Daniel's stuff. His eyes moved over the duffel bags. There wasn't much to see, everything had been carefully packed away. "Wow, you sure are prepared, huh?"
"It's a different time and place," said Daniel with a shrug. It represented five years of planning, five years of training, learning, honing himself.
"Personally, I'm going to 1946," said the man, though Daniel hadn't asked. He held out his hand. "Archie Vedder."
Daniel reluctantly took the hand. "Daniel Strom." He had never really gotten the hang of shaking hands. He worried that his hands were too clammy, a worry that proved founded when Archie wiped his hand on that expensive charcoal suit.
"I went with the kit," said Archie, pointing to his backpack. "I've got papers, I've got a computer with a backup, I've got a projector, a media library, a science library, the whole works, plus some forged bonds and a stack of cash. I got a sweet deal on it, they're overstocked now."
Retreating into the past had seen its heyday. Now most of the people who had been most enthusiastic were gone, and there were only the dissenters left. Everyone agreed with using the machine for the mundane stuff, but simply leaving, never to return, rubbed people the wrong way.
"I guess they don't sell kits for Edo," Archie ventured.
"They do," said Daniel. "They're trash."
"Ah," said Archie.
"This is all custom," said Daniel. "Higher quality, field tested, everything I'll need to set myself up there." Only some of it was stock. He had two computers, three smartphones, chargers and plugs, solar panels, replacement batteries, and redundant media libraries and science libraries.
Archie raised an eyebrow. "What does that mean, field tested? Because people don't come back. You're there for good, right?"
What it actually meant was that Daniel had gone out into a field and tested it, made sure that it worked under various conditions, set himself up like he might be explaining all this to a carefully chosen daimyo. There was only so much that camping in the woods and taking dry run vacations could tell him though.
"Some of it is theory," said Daniel. "Research."
"Yeah, see, that's why I went with 1946," said Archie. "It's really well-trod. You know, I was reading an article the other day that maybe the Baby Boom was a little overstated? Like, we're obviously living in the wake of time travelers, but that's the prime time to come back, anywhere from 1946 to 1960. The economy is doing well, tech is advancing, it's familiar enough. The culture is so close you can sell some stuff from a media library, it's brilliant. You're five steps away from becoming a multimillionaire in a time when that meant something."
"Sure," said Daniel.
"Any reason you're doing hard mode?" asked Archie. "I mean, samurai and ninjas are cool, sure, but —"
"It's not about that," said Daniel.
"Alright, sure," shrugged Archie.
Daniel looked over at the waiting room's lone clock. You would think that a waiting room for a time travel company would have better clocks, but it was a cheap utilitarian design, thin plastic and wobbly hands.
"What's it about then?" asked Archie.
"I was going to go with a friend," said Daniel. "We had practiced together, trained together. Then he got cancer."
"Ah, shit," said Archie.
"He lived," said Daniel. "He's fine. But he's on medications now, and will be for the rest of his life, and he can't go anymore."
"Huh," said Archie. "So there's a friend who you're leaving behind?"
"No," said Daniel. "I mean ... this was what we did together. We talked about it a lot. We read history books and practiced crafts and skills. At the start, I didn't really take it that seriously, it was just a hobby, but I got invested, and I guess I kept seeing it as — I don't know."
"I mean for me, it's a way out," said Archie. "Most people feel that way, yeah? My wife filed for divorce, I got fired from my job, so hey, time to start over in 1946, pretend I'm part of the Greatest Generation, ride the waves I know are coming. Exploit it."
Daniel grimaced. The Vietnam War, segregation, the Red Scare? People had a rosy view of that time. He'd never felt particularly aligned with people like Archie who were just looking to make a quick buck.
"Oh come on," said Archie. "You think you're better than me? You're a, you know, what's the word. Colonizer."
Daniel rolled his eyes. "No."
"What, just 'no', it's not, you know, what we did to the Native Americans?" asked Archie. "The whole 'conquer the past' thing?"
"I'm a single person," said Daniel. "I'm bringing back things that will change their culture forever, but I'm not an agent of my country, and even if I were, I think those people who want to be a god king are morons. And sorry, I'm not spending my last minutes in the present on badly rehashing a debate I've had a thousand times already."
"Why not?" asked Archie. "See, I think having arguments right before you go is great. You can leave on a high note. I've spent the last few days saying whatever the hell I wanted to people. It's great. I went to my dad and said 'hey, you were a terrible father, I never liked you, and it's sad that you thought I needed your approval'. And then you know what's hilarious? I get to just walk away and never be seen again. How's that for a power move? How's that for a mic drop?"
"Seems immature," said Daniel.
"Well, see, I'm actually fine being immature," said Archie with a little laugh. "And when this conversation is done, one or both of us is going into the past, never to be seen nor heard from again, and isn't that great? You don't like me, I don't like you, and then we're strangers again."
Daniel had been looking straight ahead, but he turned to Archie after that. "You don't like me?" he asked. "You don't know me."
"I know your type," said Archie. He leaned back. "You spent what, three years cooking up a plan, making this trip back in time your entire personality, and now you think you're better than me, better than everyone, like you've got it all figured out. You talked yourself into throwing away everything you've got going on here. You got dreams of a future in the past. It's quitter talk, is what it is."
"Fuck off," said Daniel. In his normal life he'd have never said it, but he was on the precipice.
"You think going into the past is going to transform you?" asked Archie. "That another world, a second chance, you'll somehow become the man you think you were supposed to be? Well let me tell you, if you were a loser here, you'll be a loser there."
Daniel stood up and drew his sword. He'd practiced the draw a thousand times. The sword gleamed, even under the ugly fluorescent lighting of the waiting room. "Fuck off, or you'll be going back to the 50s missing a hand."
"Bah," said Archie. "Fine." He stood up and took a seat further away, the same one he'd taken when he first came in. He was bouncing his leg and reading something on his phone.
Daniel was putting his sword back in its sheath when the receptionist came into the room.
"Daniel?" she asked, glancing only briefly at the sword. "They're ready for you."
"Finally," Daniel thought but didn't say, because even though he wasn't going to be around anymore, he believed in basic politeness.
He gathered his things and left the waiting room, ready to leave.
~~~~
Archie sat outside Castle Solutions, in their little courtyard, vaping.
It wasn't long before the receptionist, Lydia, came to sit next to him.
"It didn't really seem like you wanted to convince that one," she said.
"Yeah," he said. "Sorry."
She shrugged and pulled out a vape pen of her own. "Sometimes you just want to yell at someone. I get that. But you're risking us getting caught. And if we get caught in the future, we probably get caught in the present."
"Yup," he said. "Won't happen again."
"Give it a few days before you come back," she said. "Three, let's say. He didn't file a complaint, so there's nothing in the system."
"Mmm," said Archie. He made a long, slow drag of the pen. They sat there vaping together for a while. It had often occurred to him that vaping was impossibly lame, but it felt less lame when done with someone else. He watched as the vapor left her mouth in a thin, concentrated stream. "You wanna go out sometime?"
"On a date?" she asked. She gave the tip of her vape pen a casual look. "No, not really."
"Alright," said Archie.
"I don't really know what your deal is," she said. "Why this is important to you. Why you want to talk people back from the brink, or yell at them."
"Mmm," said Archie. "You want to tragic backstory?"
"Meh," Lydia replied. "I'm not going on a date with someone who has a tragic backstory. That's all. Sorry. I've got my own tragic backstory, thanks very much."
"Fair," said Archie. "It was my kid brother, that's the short version. He up and left one day, left us a note that read like ... well, you know." He drew a finger across his neck.
"Where'd he go?" asked Lydia.
"England, 16th century," said Archie. "He thought he was going to take Shakespeare's place." He shook his head. "Only eighteen, you know? Unconscionable that they let kids that young through. He had his whole life ahead of him and he just ... disappeared."
Lydia sighed. "Yeah."
She turned off her vape pen, then mimed stubbing it out on the bench like a cigarette before slipping it into her purse. He felt a surge of attraction for her.
"Alright, I'll go on the date," said Lydia. "But if we're going to be dating, you've gotta stop this."
"Vaping?" asked Archie.
"You know what I mean," said Lydia. "You going in there trying to convince them to back out, that's one thing. It's noble, almost. But if it's going to be fighting, if it's you trying to work through some shit, then I'm not sticking my neck out for you. Doubly so if you want to get together. You process your trauma some other way, or repress it like the rest of us, alright?"
Archie thought about that for a moment. "Alright. Sure."
"I've got to get back to work," said Lydia as she rose from the bench. "You have my number."
Archie nodded, and after she had left, he stayed, looking out at the courtyard.
He wondered how Daniel was doing out there, in that other timeline, but he supposed that he would never know.
Please Start Archiving in the US
With current events, I think it is prudent that everyone who is able to starts archiving shit. I am a former library worker, but I do not know much about cybersecurity; if you do want to go down that route, please do your research and keep yourself and your archive safe :). The more copies that are preserved, the more likely it is that the media will survive. Even if you save only 2 files, that is still important!
First, I will cover how to create a computer archive and best practices; then I will provide a list of known targets and suggested materials to add.
You need somewhere to store your data. Most people will use their computer's storage drive, but you need to have backups! Do not rely on cloud storage solutions: they require an internet connection, are vulnerable to data breaches, and the companies that store that data must follow any laws the government may decide to pass. USBs or external hard drives are the best options. CDs can be used in a pinch, but they are more likely to degrade and have lower storage capacity than the previous options. Use whatever you have lying around; you do not need to spend money if you don't want to.
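One cheap way to guard against silent file corruption is to keep a checksum manifest next to your backups. Below is a minimal Python sketch of that idea; the archive and backup folder paths and the MANIFEST.sha256 file name are placeholders I made up for illustration, not any standard:

```python
import hashlib
import shutil
from pathlib import Path

# Placeholder locations: point these at your own archive folder and backup drive.
ARCHIVE = Path.home() / "archive"
BACKUP = Path("/media/backup_drive/archive")

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Copy every file to the backup drive and record its checksum, so future
# bit rot or tampering can be detected by re-hashing and comparing.
manifest = []
for src in ARCHIVE.rglob("*"):
    if src.is_file():
        dest = BACKUP / src.relative_to(ARCHIVE)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
        manifest.append(f"{sha256(src)}  {src.relative_to(ARCHIVE)}")

(BACKUP / "MANIFEST.sha256").write_text("\n".join(manifest))
```

Re-running the hashing step months later and comparing against the saved manifest will flag any file whose contents have quietly changed.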
When saving data, use file formats that are common and can be read without special software (that means no .docx). PDF/A is the gold standard for archiving. It is a subtype of PDF that embeds metadata, such as typefaces and graphical info, to ensure the file renders properly in the future. Adobe Acrobat is able to save and convert documents into PDF/A. PDFTron, DocuPub, and Ghostscript are all free or have free versions that create PDF/A files. PNG, JPEG 2000, .txt, MP3, and WAV are other common file types that the Smithsonian recommends for data storage. For a full list of types to use and avoid, see the sources cited at the bottom.
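Since Ghostscript is one of the free tools mentioned above, here is a hedged sketch of driving it from Python to attempt a PDF/A-2 conversion. The file names are placeholders, and strict PDF/A conformance can additionally require an ICC color profile and a PDFA_def.ps definition file, so treat this as a starting point rather than a guaranteed-compliant pipeline:

```python
import subprocess

# Ask Ghostscript's pdfwrite device for a PDF/A-2 output file.
# -dPDFACompatibilityPolicy=1 tells it to drop anything that would
# break conformance instead of aborting the whole conversion.
subprocess.run(
    [
        "gs",
        "-dPDFA=2",
        "-dBATCH",
        "-dNOPAUSE",
        "-sColorConversionStrategy=RGB",
        "-sDEVICE=pdfwrite",
        "-dPDFACompatibilityPolicy=1",
        "-sOutputFile=output_pdfa.pdf",
        "input.pdf",  # placeholder: the document you want to archive
    ],
    check=True,
)
```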
What are we archiving?
Please gather both fiction and nonfiction resources. Nonfiction collection ideas: current news clips, local history of marginalized communities, interviews, biographies, memoirs, zines, and art pieces. Saving scientific research is incredibly important! In 1933, one of the first places the Nazis targeted was the Institute of Sexual Science. Much of what was stored there was never recovered. Environmental science, trans and intersex health, and minority history will likely be targeted first.
For fiction, the most commonly challenged books last year were:
1) Gender Queer by Maia Kobabe
2) All Boys Aren't Blue by George Johnson
3) This Book Is Gay by Juno Dawson
4) The Perks of Being a Wallflower by Stephen Chbosky
5) Flamer by Mike Curato
6) The Bluest Eye by Toni Morrison
7) Me and Earl and the Dying Girl by Jesse Andrews
8) Tricks by Ellen Hopkins
9) Let's Talk About It (a teen guide to sex, relationships, and being a human) by Erika Moen and Matthew Nolan
10) Sold by Patricia McCormick
I present this list so you have an idea of what is normally targeted. Books that describe racism and queer identities are the most common targets, but others include any depictions of violence, drugs, or sex. Use your personal archive to accumulate data that you personally are passionate about. The more niche a topic, the more likely it is that other people will not have it in their storage.
Lastly, please remember that as an archivist you are not there to determine whether a piece is worthy of being saved. Just because you do not like or agree with its message does not mean it should not be saved from being banned. All artworks, amateur or professional, are worthy of being archived.
Sources: ALA 2023 Banned Books https://www.ala.org/bbooks/frequentlychallengedbooks/top10
How to create a PDF/A file https://www.research.gov/common/attachment/Desktop/How_do_I_create_a_PDF-A_file.pdf
Smithsonian Data Management Best Practices and File Formats https://siarchives.si.edu/what-we-do/digital-curation/recommended-preservation-formats-electronic-records https://library.si.edu/research/best-practices-storing-archiving-and-preserving-data
Masterlist of Free PDF Versions of Textbooks Used in Undergrad SNHU Courses in 2025 C-1 (Jan - Mar)
Literally NONE of the Accounting books are available on libgen; they all have ISBNs that start with the same numbers, so I think they're made for the school or something. The single Advertising course also didn't have a PDF available.
This list could also be helpful if you just want to learn stuff
NOTE: I only included textbooks that have access codes if it was stated that you won't need the access code ANYWAY
ATH (anthropology)
Only one course has an available PDF:
ATH-205
In The Beginning: An Introduction to Archaeology
BIO (Biology)
BIO-205
Publication Manual of the American Psychological Association
Essentials of Human Anatomy & Physiology 13th Edition
NOTE: These are not the only textbook you need for this class, I couldn't get the other one
CHE (IDK what this is)
CHE-329
The Aging Networks: A Guide to Policy, Programs, and Services
Publication Manual Of The American Psychological Association
CHE-460
Health Communication: Strategies and Skills for a New Era
Publication Manual Of The American Psychological Association
CJ (Criminal Justice)
CJ-303
The Wisdom of Psychopaths: What Saints, Spies, and Serial Killers Can Teach Us About Success
Without Conscience: The Disturbing World of the Psychopaths Among Us
CJ-308
Cybercrime Investigations: a Comprehensive Resource for Everyone
CJ-315
Victimology and Victim Assistance: Advocacy, Intervention, and Restoration
CJ-331
Community and Problem-Oriented Policing: Effectively Addressing Crime and Disorder
CJ-350
Deception: Counterdeception and Counterintelligence
NOTE: This is not the only textbook you need for this class, I couldn't find the other one
CJ-405
Private Security Today
CJ-408
Strategic Security Management-A Risk Assessment Guide for Decision Makers, Second Edition
COM (Communications)
COM-230
Graphic Design Solutions
COM-325
McGraw-Hill's Proofreading Handbook
NOTE: This is not the only book you need for this course, I couldn't find the other one
COM-329
Media Now: Understanding Media, Culture, and Technology
COM-330
The Only Business Writing Book You’ll Ever Need
NOTE: This is not the only book you need for this course, I couldn't find the other one
CS (Computer Science)
CS-319
Interaction Design
CYB (Cyber Security)
CYB-200
Fundamentals of Information Systems Security
CYB-240
Internet and Web Application Security
NOTE: This is not the only resource you need for this course. The other one is a program thingy
CYB-260
Legal and Privacy Issues in Information Security
CYB-310
Hands-On Ethical Hacking and Network Defense (MindTap Course List)
NOTE: This is not the only resource you need for this course. The other one is a program thingy
CYB-400
Auditing IT Infrastructures for Compliance
NOTE: This is not the only resource you need for this course. The other one is a program thingy
CYB-420
CISSP Official Study Guide
DAT (IDK what this is, but I think it's computer stuff)
DAT-430
Dashboard book
ECO (Economics)
ECO-322
International Economics
ENG (English)
ENG-226 (I'm taking this class rn, highly recommend. The book is good for any writer)
The Bloomsbury Introduction to Creative Writing: Second Edition
ENG-328
Ordinary genius: a guide for the poet within
ENG-329 (I took this course last term. The book I couldn't find is really not necessary, and is in general a bad book. Very ableist. You will, however, need the book I did find, and I recommend it even for people not taking the class. Lots of good short stories.)
100 years of the best American short stories
ENG-341
You Can't Make This Stuff Up: The Complete Guide to Writing Creative Nonfiction--from Memoir to Literary Journalism and Everything in Between
ENG-347
Save The Cat! The Last Book on Screenwriting You'll Ever Need
NOTE: This is not the only book you need for this course, I couldn't find the other one
ENG-350
Linguistics for Everyone: An Introduction
ENG-351
Tell It Slant: Creating, Refining, and Publishing Creative Nonfiction
ENG-359 Crafting Novels & Short Stories: Everything You Need to Know to Write Great Fiction
ENV (Environmental Science)
ENV-101
Essential Environment 6th Edition The Science Behind the Stories
ENV-220
Fieldwork Ready: An introductory Guide to Field Research for Agriculture, Environment, and Soil Scientists
NOTE: You will also need lab stuff
ENV-250
A Pocket Style Manual 9th Edition
ENV-319
The Environmental Case: Translating Values Into Policy
Salzman and Thompson's Environmental Law and Policy
FAS (Fine Arts)
FAS-235
Adobe Photoshop Lightroom Classic Classroom in a Book (2023 Release)
FAS-342 History of Modern Art
ALRIGHTY I'm tired, I will probably add more later though! Good luck!
Artificial Intelligence Risk
About a month ago I got the idea of trying out the video essay format, and the topic I came up with that I felt I could more or less handle was AI risk and my objections to Yudkowsky. I wrote the script, but soon afterwards I ran out of motivation to do the video. Still, I didn't want the effort to go to waste, so I decided to share the text, slightly edited, here. This is a LONG fucking thing, so put it aside in its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading.
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence: what artificial intelligences are exactly, what an AGI is, what an agent is, the orthogonality thesis, the concept of instrumental convergence, alignment, and how Eliezer Yudkowsky figures into all of this.
If you are already familiar with this, you can skip to section two, where I’m going to be talking about Yudkowsky’s arguments for AI research presenting an existential risk to, not just humanity, or even the world, but the entire universe, and my own tepid rebuttal to his argument.
Now, I SHOULD clarify: I am not an expert in the field, and my credentials are dubious at best. I am a college dropout from a computer science program, and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me, and watching educational YouTube videos. So. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So without further ado, let’s get into it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 General intelligence and agency
Let’s begin with what counts as artificial intelligence. The technical definition of artificial intelligence is, eh…, well, why don’t I let someone with a Master’s degree in machine intelligence explain it:
Now let’s get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames, or in AlphaGo, or even in our roombas, are narrow AIs; that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be a videogame level, a Go board, or your filthy disgusting floor.
AGI, on the other hand, is much more, well, general. It can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. That is, so far, the last frontier of AI research, and although we are not quite there yet, it does seem like we are making some moderate strides in that direction. We’ve all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course, it has its limits: it has no persistent memory, and its context window, while larger than that of previous models, is still relatively small compared to a human’s (the context window is essentially short-term memory: how many things it can keep track of and act coherently about).
And yet there is one more factor I haven’t mentioned that would be needed to make something a “true” AGI. That is agency: having goals, autonomously coming up with plans, and carrying those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines to a larger extent, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition, one that you may disagree with but which I need to establish in order to have a common language with you, such that I can communicate these ideas effectively: the definition of intelligence. It’s a thorny subject, and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn’t intelligence can be seen as implying that it deserves or doesn’t deserve admiration, validity, moral worth or even personhood. I don’t care about any of that dumb shit. The way I’m going to be using “intelligence” in this video is basically “how capable you are of doing many different things successfully”. The more “intelligent” an AI is, the more capable that AI can be. After all, there is a reason why education is considered such a universally good thing in society: to educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I’m using the word within the context of this video. I don’t care if you are a psychologist or a neurosurgeon or a pedagogue; I need a word to express this idea, and that is the word I’m going to use. If you don’t like it, or if you think this is inappropriate of me, then by all means keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases, we start to see certain trends, certain strategies that arise again and again, and we call this instrumental convergence.
1.2 Instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It’s going to try to protect itself. When you want to do something, being dead is usually. Bad. It’s counterproductive. It is not generally recommended. Dying is widely considered inadvisable by 9 out of every 10 experts in the field. If there is something that it wants to get done, it won’t get done if it dies or is turned off, so it’s safe to predict that any AGI will try to do things in order not to be turned off. How far might it go in order to do this? Well… [wouldn’t you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let’s say that you want to take care of your child; that is your goal, the thing you want to accomplish, and I come to you and say: here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected, and you want to ensure that happens. So caring about something else instead is a huge no-no, which is why, if we make an AGI and it has goals that we don’t like, it will probably resist any attempt to “fix” it.
And finally, another goal it will most likely trend towards is self-improvement, which can be generalized to “resource acquisition”. If it lacks the capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job, then you need to get an education. If you want to get a partner, you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So, one more time, it is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All three of the things I mentioned are sure bets: they are likely to happen and safe to assume, and they are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things; I have made all this sound vaguely threatening, haven’t I? There is one more assumption I’m sneaking into all of this which I haven’t talked about. Everything I have mentioned presents a very callous view of AGI: I have made it apparent that all of these strategies it may follow will come into conflict with people, and may even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being human. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency; I’m talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing: they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things that we do, because it is not made of the same things a human is made of, and it was not raised the way a human is raised.
There is something about a human brain, in a human body, flooded with human hormones, that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. On its way there it comes across an anthill in its path. It will probably step on the anthill, because taking that step brings it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay any mind to it.
Now let’s say it comes across a cat. The same logic applies: if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat won’t slow it down at all.
Now let’s say it comes across a baby.
Of course, if its intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off so it will not step on the baby, to save itself from all that trouble. But you have to understand that it wont stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed if it was powerful enough such that no matter what people did they could not stop it and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it then let me put it this way. Its essentially a sociopath. It only cares about the wellbeing of others in as far as that benefits it self. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also they are only human, and are limited in the harm they can do by human limitations. An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to pursue them effectively. And those goals don't need to include human wellbeing.
Now, with that said: how DO we make it so that AGI cares about human wellbeing? How do we make it want good things for us? How do we make its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue Hitchhiker's Guide to the Galaxy scene about space being big]
This is the part I'm going to skip over the fastest, because frankly it's a deep field of study. There are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning with human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or, through inaction, allow a human to come to harm; a robot should do what a human orders unless it contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just "programmed" into the robots. These laws were not coded into their software; they were hardwired, part of the robot's electronic architecture, such that a robot could no more be without those three laws than a car could run without wheels.
In this, Asimov realized how important these three laws were: they had to be intrinsic to the robot's very being; they couldn't be hacked or uninstalled or erased. A robot simply could not exist without these rules. Ideally, that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values simply because they allow it to achieve something else.
But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm", for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we've come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it's not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it's not good enough that I, for example, buy roses and give massages and act nice to my girlfriend because it allows me to have sex with her; I should not merely be imitating or performing the role of a loving partner because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy, and that is the thing I care about. Her happiness should be my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if deep down it only really cares about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned anything or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment; it teaches people not to get caught. Which is why punitive justice doesn't work all that well in stopping recidivism, and why the carceral system is rotten to the core, and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again. Then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and starts creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior, it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
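To make the "numbers we add, multiply and tune" part concrete, here is a minimal toy sketch in Python/NumPy (my own illustration, nothing like the scale or architecture of a real LLM): a two-layer network whose weight matrices get nudged by gradient descent until its outputs match the training data.

```python
import numpy as np

# Toy illustration of "giant matrices tuned on data":
# a tiny two-layer network learning XOR by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # the "numbers" we tune
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)       # multiply and add, layer 1
    p = sigmoid(h @ W2 + b2)       # multiply and add, layer 2
    # gradients of the squared error, via backpropagation
    dp = (p - y) * p * (1 - p)     # gradient at layer-2 pre-activation
    dW2 = h.T @ dp
    ds1 = (dp @ W2.T) * (1 - h**2) # gradient at layer-1 pre-activation
    dW1 = X.T @ ds1
    lr = 0.5                        # nudge every number a little
    W2 -= lr * dW2; b2 -= lr * dp.sum(0)
    W1 -= lr * dW1; b1 -= lr * ds1.sum(0)

# after training, outputs should approach [0, 1, 1, 0]
print(np.round(sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2), 2))
```

A real LLM is the same basic idea scaled up: billions of tuned numbers instead of a few dozen, and patterns of human text instead of XOR.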
(takes a big breath) This "thing" has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don't actually know what internal models it creates. We don't know what patterns it extracted or internalized from the data that we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there; current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn't it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, all its behaviors specified. It's more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that trying to go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and it has been making moderate progress lately. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Phew! Ok, so, now that this is all out of the way, I can go onto the last subject before I move on to part two of this video: the character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The madman! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979. Wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character, a very eccentric man. He is an AI doomer, convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun, and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it; to quote properly (grabs a piece of paper and very pointedly reads from it): turn the cosmos into tiny squiggly molecules resembling paperclips, whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI, we will have created an agent that doesn't care about humans but cares about something else entirely irrelevant to us, and it will seek to maximize that goal; and because it will be vastly more intelligent than humans, we won't be able to stop it. In fact, not only won't we be able to stop it, there won't be a fight at all. It will carry out its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important. It all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn't be capable of just taking over the world unimpeded; intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio, and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn't *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all that. To come up with the plan, to convince people to follow it, and to delegate the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn't stop there. Like I said during his intro, he believes there will be "no fire alarm". In fact, for all we know, maybe AGI has already been created and is merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn't think this is happening right now, but with the next iteration of GPT? GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn't stop, going as far as making military strikes on GPU farms a sanction under those treaties.
What's more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it; we are not showing any signs of making headway with alignment, and no one is incentivized to slow down. Recently he wrote an article called "Dying with Dignity" where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn't mean we should stop fighting or stop trying to find ways to align AGI, impossible as it may seem, but merely have the basic dignity of acknowledging that we are probably not going to win. In every interview I've seen with the guy he sounds fairly defeatist and, honestly, kind of depressed. He truly seems to think it's hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated, while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices, giving it instant access to humanity. And worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the Antichrist; we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. The opinions vary across the field and from what I understand this level of hopelessness and defeatism is the minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous. Maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it an idea that experts dismiss or refuse to take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity, and with no human qualms or limitations stopping it from doing so. I believe this is not just possible but probable, and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And partially the reason why I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don't; that would be really depressing).
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don't I? As I said, I am no expert, and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has been doing for the past year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord's Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash; that last one didn't work out so well for me.) My point is, in all the material I have seen of Eliezer, I don't recall anyone ever giving him quite the specific argument I'm about to give.
It's a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I do believe alignment is really hard. My key disagreement is specifically about the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity's lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can't do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. "Aintibodies"
In the past, humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to take pause and reconsider using it as a weapon; we became so scared that we overregulated the technology to the point of it becoming almost economically unviable to apply; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenal.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try, we don't get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps something scary, perhaps something really scary, but unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won't be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other Yudkowsky detractors and say that he claims AGI will be basically a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches would have to be an incredibly complex, specific, particular, and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that's easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multistep plans in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, crashing into outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I'm not saying that an AGI capable of doing this won't be possible someday; I'm saying that to create an AGI capable of doing this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I'm saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the right precise set of layers and weights and biases that give rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that when AGI fails, when humans screw it up, it doesn't suddenly become more powerful than we ever expected; it's more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to build, and that if you fail at building unaligned AGI, you don't get an unaligned AGI; you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! That means there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up, will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and try to answer that before he says it. (Although I believe the guy is probably smarter than me, and if I follow his logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest option, even if I can't picture how he will do it. But you see, I believe in a little thing called debating with dignity. Wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter; so it would lie and pretend to be an aligned AGI so that it can trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to say to that. First, this is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI that is better than itself. My priors about all these things are dubious at best. Second, it feels like kicking the can down the road. I don't think creating an AGI capable of all of this is trivial to make on a first attempt. I think it's more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
PART THREE - CONCLUSION
So.
After all that, what is there left to say? Well, if everything that I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as with the basic arguments supporting the concept of AI risk, why it's something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's argument, you can search for Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope it's not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue, I want to believe that the sky is blue, and if the sky is not blue, then I don't want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
61 notes
·
View notes
Text
Study Faster And Retain More With This Quick Tip
I don’t know a single student who doesn’t want to study faster and retain more at the same time. I usually get a little nervous when trying to use quick fixes to make this happen, but today I have an actual quick tip to help you do just this!
Being a problem solver by nature, I dug into the situation and tried a few new approaches. Some worked, and some did not.
One of my best strategies was to sort the information into two categories:
facts to be memorized
concepts to be understood
You can use this strategy for any course. No matter the subject, there are things you have to memorize (terminology, dates, names, equations, etc) and concepts you need to master. Identifying this creates a clear, drama-free path, meaning you actually study faster and retain more because you are working on the right information in the right way.
How To Memorize Facts
I used to hate memorization work. It seemed tedious and hard and I sucked at it. Or so I thought!
Turns out I just didn't have good skills. Now I have some strategies in my toolbox and I love fact work. It's easy and you can master it quickly. The key to mastering memorization is to:
Keep a list of what you need to memorize.
Schedule time every day to work on it. You must have the daily repetition if you want new facts to stick in your long-term memory. Start with just 10 minutes each day and you will see results.
Vary your memorization strategies. If you use only one strategy it becomes less effective.
How To Master Concepts
How you approach concept mastery is going to vary a lot based on the subject you are studying. There are two strategies to help with every subject:
1. Hands-On Practice
You will never fully master a concept through reading about it. You learn the concept through reading, but there is a big difference between learning something and mastering it.
The basics of hands-on practice for any subject are to come up with an applicable problem and solve it. Then come up with another problem and solve it too. Here are a few ideas, by subject, of how you might practice:
literature – Read a book or short story and write an analysis of whatever focus you are working on.
computer science – Come up with a problem and solve it with real code (see the short example after this list).
graphic design – Imagine a client asked you to design something, and create 3 different solutions for them.
math – Pick an equation, make up some starting numbers, and solve it.
science – Define a hypothesis, create a simple experiment, get in the lab and execute it!
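To make the computer science bullet concrete, here is one hedged example of what "come up with a problem and solve it with real code" might look like in practice; the problem and solution below are my own illustration:

```python
# Self-posed practice problem: is a phrase a palindrome,
# ignoring spaces, punctuation, and letter case?
def is_palindrome(phrase: str) -> bool:
    cleaned = [c.lower() for c in phrase if c.isalnum()]
    return cleaned == cleaned[::-1]

# Quick checks to confirm the solution actually works
assert is_palindrome("A man, a plan, a canal: Panama")
assert not is_palindrome("study faster")
print("all checks passed")
```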
2. Explain Or Teach It To Someone Else
Want to be certain you have mastered and fully understand a concept? Teach it to someone else.
As a teacher myself, I can tell you there have been plenty of concepts I thought I knew really well until I tried explaining them to someone else. You need a thorough understanding yourself before you can help someone else understand it.
Enlist the help of a friend or family member and try to explain a major concept in a few minutes. If you struggle, make note of the sticky spots. They are exactly what you need to work on next.
If you have no problem explaining it and your friend understood everything, mark it off your list and move on to the next concept.
I hope this quick strategy helps you dig out of confusion and take the right action in order to study faster and retain more.
Try It Yourself: 20-Minute Challenge
Grab your notes, a fresh piece of paper, and a timer.
Set the timer for 15 minutes.
Go through your notes and sort every piece of information into one of the two categories: concept or fact. Challenge yourself to finish before the timer goes off. Go with your first instinct if you aren't sure.
Spend the next 5 minutes and map out your next steps.
How and when will you work on the memorization each day?
How will you approach the first concept?
#university#my day#biology#unidays#study motivation#diary#blogger#study blog#real life#science#student#100 days of studying#grad student#med student#new studyblr#phd student#student life#study#study aesthetic#study hard#study inspiration#study notes#study space#study with me#studyblr#studyblr community#study tips#studying#studygram#college student
118 notes
·
View notes
Text
Engineering is Inherently Political
Okay, yea, seemingly loaded statement but hear me out.
In our current political climate (particularly in the Trump/post-Trump era ugh), the popular sentiment is that scientists and other academics are inherently political. So much of science gets politicized; climate change, abortion, gender “issues”, flat earth (!!), insert any scientific topic even if it isn’t very controversial and you can find some political discourse about it somewhere. However, if you were to ask people if they think that engineering is political, I would bet that 9/10 people would say no. The popular perception of engineering is that it’s objective and non-political. Engineering, generally, isn’t very controversial.
I argue that these sentiments should switch.
At its base level, engineering is the application of science and math to solve problems. Tack on the fact that most people don’t really know what engineering is (hell, I couldn’t even really describe it until starting my PhD and studying that concept specifically). Not controversial, right? We all want to solve the world’s problems and make the world a better place and engineers fill that role! But the best way to solve any problem is a subjective issue; no two people will fully agree on the best way to approach or solve a problem.
Why do we associate science and scientists with controversy but engineers with objectivity? Scientists study what is. It’s a scientist’s job to understand our world. Physicists understand how the laws of the universe work, biologists explore everything in our world that lives, doctors study the human body and how it works, environmental scientists study the Earth and its health, I could go on. My point is that scientists discover and tell us what is. Why do we politicize and fear monger about smart people telling us what they discover about the world?
Engineering, however, has a reputation for being logical, objective, result oriented. Which I get, honestly. It’s appealing to believe that the people responsible for designing and building our world are objective and, for the most part, they are. But this is a much more nuanced topic once you think deeper about it.
For example, take my discipline, aerospace engineering. On the surface, how to design a plane or a rocket isn’t subjective. Everyone has the same goal, get people and things from place to place without killing them (yea I bastardized my discipline a bit but that’s basically all it boils down to). Let’s think a little deeper about the implications though. Let’s say you work for a spacecraft manufacturer and let’s hypothetically call it SpaceX. Your rocket is so powerful that during takeoff it destroys the launch pad. That’s an expensive problem so you’re put on the team of engineers dedicated to solving this problem. The team decides that the most effective and least expensive solution is to spray water onto the rocket and launchpad during takeoff. This solution works great! The launchpad stays intact throughout the launch and the company saves money. However, that water doesn’t disappear after launch, and now it’s contaminated with chemicals used in and on the rocket. Now contaminated water flows into the local environment affecting not just the wildlife but also the water supply of the local community. Who is responsible for solving that issue? Do we now need a team of environmental or chemical engineers to solve this new problem caused by the aerospace engineers?
Yes, engineers solve problems, but they also cause problems.
Every action has its reaction. Each solution has its repercussions.
As engineers we possess some of the most dangerous information in the world and are armed with the weapon to utilize it, our minds. Aerospace engineers know how to make missiles, chemical engineers know how to make bombs, computer scientists know how to control entire technological ecosystems. It’s very easy for an engineer to hurt people, and many do. I’m not exempt from this. I used to work for a military contractor, and I still feel pretty guilty about the implications of the problems that I solved. It is an engineer’s responsibility to act and use their knowledge ethically.
Ethical pleas aside, let’s get back to the topic at hand.
Engineering is inherently political. The goal of modern engineering is to avert catastrophe, tackle societal problems, and increase prosperity. If you disagree don’t argue with me, argue with the National Academy of Engineering. It is an engineer’s responsibility to use their knowledge to uplift the world and solve societal problems, that sounds pretty political to me!
An engineer doesn’t solve a problem in a vacuum. Each problem exists within the context of the situation that caused it as well as the society surrounding that situation. An engineer must consider the societal implications of their solutions and designs and aim to uplift that society through their design and solution to the problem. You can’t engineer within a social society without considering the social implications of both the problem and the solution. Additionally, the social implications of those engineering decisions affect different people in different ways. It’s imperative to be aware and mindful of the social inequality between demographics of people affected by both the solution and the problem. For example, our SpaceX company could be polluting the water supply of a poor community that doesn’t have the resources to solve the problem nor the power or influence to confront our multi-billion-dollar company. Now, a multi-billion-dollar company is advancing society and making billions of dollars at the cost of thousands of lives that already struggle due to their social standing in the world. Now the issue has layers that add further social implications that those without money are consistently prone to the whims of those with money. Which, unfortunately, is a step of ethical thought that many engineers don’t tend to take.
Engineers control our world. Engineers decide which problems to solve and how best to solve them. Engineers control who is impacted by those solutions. Engineers have the power to either protect and lift up the marginalized or continue to marginalize them. Those who control the engineers control the world. This is political. This is a social issue.
Now look me in the eyes and tell me that engineering isn’t inherently political.
#i feel so strongly about this oh my god#please free me from this prison#im just screaming into the void at this point#engineering#engineers#phdjourney#phdblr#phd student#grad school#academic diary#PhD
10 notes
·
View notes
Text
rn attempts to use AI in anime have mostly been generating backgrounds in a short film by Wit, and the results were pretty awful. garbage in garbage out though. the question is whether the tech can be made useful - keeping interesting artistic decisions in the hands of humans and automating the tedious parts, and giving enough artistic control to achieve a coherent direction and clean up the jank.
for example, if someone figured out how to make a really good AI inbetweener, with consistent volumes and artist control over spacing, that would be huge. inbetweening is the part of 2D animation that nobody especially wants to do if they can help it; it's relatively mindless application of principle, artistic decisions are limited (I recall Felix Colgrave saying something very witty to this effect but I don't have it to hand). but it's also really important to do well - a huge part of KyoAni's magic recipe is valuing inbetweeners and treating it as a respectable permanent position instead of a training position. good inbetweening means good movement. but everywhere outside KyoAni, it mostly gets outsourced to the bottom of the chain, mainly internationally to South Korea and the Philippines. in some anime studios it's been explicitly treated as a training position and they charge for the use of a desk if you take too long to graduate to a key animator.
some studios like Science Saru have been using vector animation in Flash to enable automated inbetweening. the results have a very distinct look - they got a lot better at it over time but it can feel quite uncanny. Blender Grease Pencil, which is also vector software, also gives you automated inbetweening, though it's rather fiddly to set up since it requires the two drawings to have the same stroke count and order, so it's best used if you've sculpted the lines rather than redrawn them.
however, most animators prefer to work in raster rather than vector, which is harder to inbetween automatically.
AI video interpolation tools also exist, though they draw a lot of ire from animators who see those '60fps anime' videos which completely shit all over the timing and spacing and ruin the feeling and weight of the animation, lack any understanding of animating on 2s/3s/4s in the source, and often create ugly incomprehensible mushy inbetweens which only work at all because they're on screen so briefly.
a better approach would be to create inbetweens earlier in the pipeline, when the drawings are clean and the AI doesn't have to try to replicate compositing and photography. in theory this is a well-posed problem for training a neural network: you could give it lots of examples of key drawing input and inbetween output. probably you'd need some way to inform the AI about matching features of the drawing, the way that key animators will often put a number on each lock of hair to help the inbetweener keep track of which way it's going. you'd also need a way to communicate arcs and spacing. but that all sounds pretty solvable.
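to make that "well-posed problem" concrete, here's a minimal sketch (in PyTorch, my own toy framing; the post doesn't name a stack) of inbetweening as supervised learning: two key drawings in, the middle frame out. everything below is a stand-in, and a real system would also need the correspondence hints, arcs, and spacing info mentioned above.

```python
# Toy sketch of inbetweening as supervised learning; the model,
# data, and target are all hypothetical placeholders.
import torch
import torch.nn as nn

class Inbetweener(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),    # 2 channels: the two grayscale keys
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid()  # 1 channel: the predicted inbetween
        )

    def forward(self, key_a, key_b):
        return self.net(torch.cat([key_a, key_b], dim=1))

model = Inbetweener()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# stand-in data: batches of (key A, key B, ground-truth inbetween)
key_a = torch.rand(4, 1, 64, 64)
key_b = torch.rand(4, 1, 64, 64)
target = (key_a + key_b) / 2  # placeholder "inbetween" for this sketch

for step in range(100):
    pred = model(key_a, key_b)
    loss = nn.functional.l1_loss(pred, target)  # L1 tends to keep lines crisper than L2
    opt.zero_grad()
    loss.backward()
    opt.step()
```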
this would not be good news for job security at outsourcing studios, obviously - these aren't particularly good jobs with poor pay and extreme hours, but they do keep a bunch of people housed and fed, people who are essential to anime yet already treated as disposable footnotes by the industry. it also would be another nail in the coffin of inbetweening's traditional role as a school of animation drawing skills for future key animators. on the other hand, it would be incredible news for bedroom animators, allowing much larger and more ambitious independent traditional animation - as long as the cheap compute still exists. hard to say how things would fall in the long run. ultimately the only solution is to break copies-of-art as a commodity and find another way to divert a proportion of the social surplus to artistic expression.
i feel like this kind of tool will exist sooner or later. not looking forward to the discourse bomb when the first real AI-assisted anime drops lmao
37 notes
·
View notes
Text

Quantum machine offers peek into “dance” of cosmic bubbles
Physicists have performed a groundbreaking simulation they say sheds new light on an elusive phenomenon that could determine the ultimate fate of the Universe.
Pioneering research in quantum field theory around 50 years ago proposed that the universe may be trapped in a false vacuum – meaning it appears stable but in fact could be on the verge of transitioning to an even more stable, true vacuum state. While this process could trigger a catastrophic change in the Universe's structure, experts agree that predicting the timeline is challenging, but it is likely to occur over an astronomically long period, potentially spanning millions of years.
In an international collaboration between three research institutions, the team report gaining valuable insights into false vacuum decay – a process linked to the origins of the cosmos and the behaviour of particles at the smallest scales. The collaboration was led by Professor Zlatko Papic, from the University of Leeds, and Dr Jaka Vodeb, from Forschungszentrum Jülich, Germany.
The paper’s lead author Professor Papic, Professor of Theoretical Physics in the School of Physics and Astronomy at Leeds, said: “We're talking about a process by which the universe would completely change its structure. The fundamental constants could instantaneously change and the world as we know it would collapse like a house of cards. What we really need are controlled experiments to observe this process and determine its time scales.”
The researchers say this work marks a significant step forward in understanding quantum dynamics, offering exciting possibilities for the future of quantum computing and its potential for studying some of the most challenging problems around the fundamental physics of the Universe.
Simulating a Cosmic Puzzle
The research, by the University of Leeds, Forschungszentrum Jülich, and the Institute of Science and Technology Austria (ISTA), set out to understand the key puzzle of false vacuum decay – the underlying mechanism behind it. They used a 5564-qubit quantum annealer, a type of quantum machine designed by D-Wave Quantum Inc. to solve complex optimisation problems – which involve finding the best solution from a set of possible solutions – by harnessing the unique properties of quantum-mechanical systems.
In the paper, published today (04/02/2025) in Nature Physics, the team explain how they used the machine to mimic the behaviour of bubbles in a false vacuum. These bubbles are similar to liquid bubbles forming in water vapour, cooled below its dew point. It is understood that the formation, interaction and spreading of these bubbles would be the trigger for false vacuum decay.
Co-author Dr Jean-Yves Desaules, a postdoctoral fellow at ISTA, who completed his PhD at the University of Leeds, said: “This phenomenon is comparable to a rollercoaster that has several valleys along its trajectory but only one ‘true’ lowest state, at ground level.
“If that is indeed the case, quantum mechanics would allow the Universe to eventually tunnel to the lowest energy state or the ‘true’ vacuum and that process would result in a cataclysmic global event.”
The quantum annealer enabled scientists to observe the intricate “dance” of the bubbles, which involves how they form, grow, and interact in real time. These observations revealed that the dynamics are not isolated events – they involve complex interactions, including how smaller bubbles can influence larger ones. The team say their findings provide new insights into how such transitions might have occurred shortly after the Big Bang.
The paper’s first author Dr Vodeb, postdoctoral researcher at Jülich, said: “By leveraging the capabilities of a large quantum annealer, our team has opened the door to studying non-equilibrium quantum systems and phase transitions that are otherwise difficult to explore with traditional computing methods.”
New Era of Quantum Simulation
Physicists have long questioned whether the false vacuum decay process could happen and if so, how long it would take. However, they have made little progress in finding answers due to the unwieldy mathematical nature of quantum field theory.
Instead of trying to crack these complex problems, the team set out to answer more simple ones that can be studied using newly available devices and hardware. This is thought to be one of the first times scientists have been able to directly simulate and observe the dynamics of false vacuum decay at such a large scale.
The experiment involved placing 5564 qubits — the elementary building blocks of quantum computing— into specific configurations that represent the false vacuum. By carefully controlling the system, the researchers could trigger the transition from false to true vacuum, mirroring the bubbles' formation as described by false vacuum decay theory. The study used a one-dimensional model, but it is thought 3D versions will be possible on the same annealer. The D-Wave machine is integrated into JUNIQ, the Jülich UNified Infrastructure for Quantum computing at the Jülich Supercomputing Centre. JUNIQ provides science and industry access to state-of-the-art quantum computing devices.
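To give a flavour of how a problem is posed for an annealer, here is a hedged toy sketch using D-Wave's open-source dimod library. This is not the paper's 5564-qubit model or its quantum dynamics, just a tiny classical Ising chain in which the all-up state is metastable (a "false vacuum") while the all-down state is the true ground state; the field and coupling values are illustrative.

```python
# Toy classical analogue (not the paper's actual model or schedule):
# a short ferromagnetic Ising chain with a weak longitudinal field.
import dimod

N = 8
h = {i: 0.2 for i in range(N)}                 # weak field favouring spin -1 (the "true vacuum")
J = {(i, i + 1): -1.0 for i in range(N - 1)}   # ferromagnetic coupling: neighbours want to align

bqm = dimod.BinaryQuadraticModel.from_ising(h, J)
exact = dimod.ExactSolver().sample(bqm)
print(exact.first.sample, exact.first.energy)  # all -1: the true vacuum

# The all +1 state is a local minimum: flipping any single spin costs energy,
# so decay must proceed by nucleating a "bubble" of -1 spins that then grows.
up = {i: +1 for i in range(N)}
print(bqm.energy(up))  # higher energy, yet stable against single-spin flips
```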
Professor Papic said: “We are trying to develop systems where we can carry out simple experiments to study these sorts of things. The time scales for these processes happening in the universe are huge, but using the annealer allows us to observe them in real time, so we can actually see what's happening.
“This exciting work, which merges cutting-edge quantum simulation with deep theoretical physics, shows how close we are to solving some of the universe’s biggest mysteries.”
The research was funded by the UKRI Engineering and Physical Sciences Research Council (EPSRC) and the Leverhulme Trust. The findings show that insights into the origin and the fate of the Universe need not always require multi-million-pound experiments in dedicated high-energy facilities, such as the Large Hadron Collider at CERN.
Professor Papic added: “It’s exciting to have these new tools that could effectively serve as a table-top ‘laboratory’ to understand the fundamental dynamical processes in the Universe.”
Real-World Impact
Researchers say their findings highlight the quantum annealers’ potential in solving practical problems far beyond theoretical physics.
Beyond its importance for cosmology, the study has practical implications for advancing quantum computing, according to the researchers. They believe that understanding bubble interactions in the false vacuum could lead to improvements in how quantum systems manage errors and perform complex calculations, helping to make quantum computing more efficient.
The Institute of Science and Technology Austria (ISTA) is a PhD-granting research institution located in Klosterneuburg, 18 km from the center of Vienna, Austria. ISTA employs professors on a tenure-track model, post-doctoral researchers, and PhD students. The Graduate School of ISTA offers fully funded PhD positions to highly qualified candidates with a Bachelor’s or Master’s degree in biology, mathematics, computer science, physics, chemistry, and related areas. While dedicated to the principle of curiosity-driven research, ISTA aims to deliver scientific findings to society through technological transfer and science education. The President of the Institute is Martin Hetzer, a renowned molecular biologist, and former Senior Vice President at The Salk Institute for Biological Studies in California, USA. www.ista.ac.at
IMAGE: Annealing quantum computer. Picture credit: D-Wave Quantum Inc.
5 notes
·
View notes
Text
Top 7 AI Projects for High-Paying Jobs in 2025
Here are 7 AI projects for high-paying jobs in 2025. Along the way, I've realized that the best candidates for AI and Data Science roles aren't always the ones with top degrees or fancy universities. It's the ones who show a genuine passion for the field through creative projects.
For example, one candidate built a personal stock prediction model to learn and shared it online—simple but impactful. These projects showed initiative and problem-solving skills, which hiring managers value more than technical expertise. I landed my first internship by showcasing similar projects.
In this article, I'll share AI project ideas for high-paying jobs that will help you stand out, along with tips and tools to get you started on your journey.
1. Credit Report Analysis Using AI
Traditional credit scoring models often fail to assess those with thin credit histories, such as young people or immigrants. The project is to create an AI-based credit report analysis system that leverages alternative data sources, like social media presence (ethically, and with user consent), online transaction history, and even utility bill payments, to build a more comprehensive picture of an individual's creditworthiness.
Example
Many companies in the financial sector use AI to speed up document processing and customer onboarding. Inscribe offers AI-powered document automation solutions that make the credit assessment process easier. Your project would involve:
Data Collection & Preprocessing: Gather data from diverse sources, ensuring privacy and security.
Feature Engineering: Identify meaningful features from non-traditional sources.
Model Building: Train models such as Random Forest or Gradient Boosting to predict creditworthiness.
Explainability: Use tools to explain predictions, ensuring transparency and fairness.
The frameworks and tools for this project would include Python, AWS S3, Streamlit, and machine learning techniques, offering a deep dive into the combination of AI and financial systems.
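As a hedged illustration of the model-building and explainability steps, here is a minimal sketch in Python with scikit-learn. Every feature name, value, and label rule below is synthetic and hypothetical, standing in for properly consented alternative data:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "utility_on_time_ratio": rng.uniform(0, 1, n),  # share of utility bills paid on time
    "avg_monthly_txn_count": rng.poisson(40, n),    # online transaction activity
    "account_age_months": rng.integers(1, 120, n),
})
# synthetic label loosely tied to payment behaviour, for demo only
y = (df["utility_on_time_ratio"] + rng.normal(0, 0.2, n) > 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(df, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# crude explainability: which features drive the predictions?
imp = permutation_importance(model, X_test, y_test, random_state=0)
for name, score in zip(df.columns, imp.importances_mean):
    print(f"{name}: {score:.3f}")
```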
2. Summarization with Generative AI
In today’s information-overloaded world, summarization is a vital skill. This project demonstrates the power of Generative AI in creating concise, informative summaries of diverse content, whether it’s a document, a financial report, or even a complex story.
Consider a tool like CreditPulse, which utilizes large language models (LLMs) to summarize credit risk reports. Your project would involve fine-tuning pre-trained LLMs for specific summarization tasks. Here’s how to break it down:
Generative AI: Explore the key challenges in summarizing large, complex documents, and generate solutions using LLMs.
Training the Model: Fine-tune LLMs to better summarize financial reports or stories.
Synthetic Data Generation: Use generative AI to create synthetic data for training summarization models, especially if real-world data is limited.
By taking on this project, you demonstrate expertise in Natural Language Processing (NLP) and LLMs, which are essential skills for the AI-driven world.
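A minimal sketch of the summarization step, assuming the Hugging Face transformers library; the model choice (t5-small) and the sample report text are my own placeholders, and a real project would fine-tune the model on domain data first:

```python
from transformers import pipeline

# load a small pre-trained summarization model as a stand-in
summarizer = pipeline("summarization", model="t5-small")

report = (
    "The borrower has maintained three revolving accounts for five years, "
    "with two late payments in the last twelve months and a utilization "
    "ratio that has risen from 30% to 70% since January."
)
print(summarizer(report, max_length=40, min_length=10)[0]["summary_text"])
```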
3. Document Validation with Vision AI
Know Your Customer (KYC) processes are essential for fraud prevention and adherence to financial regulations. This Vision AI project automates document validation in the KYC process. Think of AI-powered Optical Character Recognition (OCR) systems that scan and validate details from documents such as a passport or driver's license. The project would involve:
Data Preprocessing: Cleaning and organizing scanned document images.
Computer Vision Models: Train models to authenticate documents using OCR and image processing techniques.
Document Validation: Verify the authenticity of customer data based on visual and textual information.
This project demonstrates your expertise in computer vision, image processing, and handling unstructured data—skills that are highly valuable in real-world applications.
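As a hedged sketch of the OCR-plus-validation idea, assuming pytesseract and Pillow are installed (along with the Tesseract binary); the file name and the expected fields are hypothetical:

```python
import re
from PIL import Image
import pytesseract

# extract raw text from a scanned ID document (hypothetical file)
text = pytesseract.image_to_string(Image.open("sample_id_card.png"))

# basic structural checks on the extracted text
checks = {
    "has_name": bool(re.search(r"Name[:\s]+\w+", text, re.IGNORECASE)),
    "has_dob": bool(re.search(r"\d{2}[/-]\d{2}[/-]\d{4}", text)),
    "has_id_number": bool(re.search(r"\b[A-Z]{2}\d{6,}\b", text)),
}
print("document passes basic validation:", all(checks.values()), checks)
```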
4. Text-to-SQL System with a Clarification Engine
Natural language interaction with databases is one of the most exciting areas of AI development. This text-to-SQL project shows how to turn a plain-English question into an SQL query, so you can query a database the same way you would ask a person. The Clarification Engine, which you'll build to address ambiguity in user queries, asks follow-up questions whenever a query is unclear. The project involves:
Dataset Creation: Build a dataset of natural language questions paired with SQL queries.
Model Training: Use sequence-to-sequence models to convert natural language into SQL.
Clarification Engine: Develop an AI system that asks follow-up questions to resolve ambiguity (e.g., “Which product?”, “What time frame?”).
Evaluation: Test the model’s accuracy and usability.
Incorporating tools like Google Vertex AI and PaLM 2, which are optimized for multilingual and reasoning tasks, can make this system even more powerful and versatile.
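Here is a deliberately simple, rule-based sketch of the Clarification Engine idea; a production system would sit in front of an LLM-based text-to-SQL model (for example on Vertex AI), and the keywords, tables, and SQL template below are all hypothetical:

```python
def clarify_or_translate(question: str) -> str:
    """Ask a follow-up question if the query is ambiguous, else emit SQL."""
    q = question.lower()
    # ambiguity rule 1: a sales question with no time frame
    if "sales" in q and not any(w in q for w in ("today", "week", "month", "year", "since")):
        return "CLARIFY: What time frame do you mean for sales?"
    # ambiguity rule 2: "best" without a metric
    if "best" in q and "by" not in q:
        return "CLARIFY: Best by which metric: revenue, units, or margin?"
    # unambiguous enough: emit a (template) SQL query
    return ("SELECT product, SUM(revenue) FROM sales "
            "WHERE sale_date >= date('now', '-30 days') GROUP BY product;")

print(clarify_or_translate("show me sales"))                # asks for a time frame
print(clarify_or_translate("sales by product this month"))  # returns SQL
```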
5. Fine-tuning LLM for Synthetic Data Generation
In situations where access to real data is limited or impossible due to its sensitivity, synthetic data becomes indispensable. In this project, you will fine-tune an LLM to generate synthetic datasets that mimic the character of a real dataset. This is a fascinating space, particularly since synthetic data can be used to train AI models in the absence of real-world data. Steps for this project include:
Dataset Analysis: Examine the dataset you want to mimic.
LLM Fine-tuning: Train an LLM on the real dataset to learn its patterns.
Synthetic Data Generation: Use the fine-tuned model to generate artificial data samples.
Evaluation: Test the utility of the synthetic data for AI model training.
This project showcases proficiency in LLMs and data augmentation techniques, both of which are becoming increasingly essential in AI and Data Science.
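A hedged sketch of the generation step, assuming the transformers library; the fine-tuning itself is omitted, the base model (gpt2) stands in for your fine-tuned checkpoint, and the prompt format is an illustrative assumption:

```python
from transformers import pipeline

# in a real project this would be your fine-tuned checkpoint
generator = pipeline("text-generation", model="gpt2")

# prompt pattern matching the records you want to synthesize
prompt = "customer_review: The checkout process was"
samples = generator(prompt, max_new_tokens=30, num_return_sequences=3, do_sample=True)
for s in samples:
    print(s["generated_text"])  # candidate synthetic records, to be filtered/evaluated
```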
6. Personalized Recommendation System with LLM, RAG, Statistical model
Recommendation systems are everywhere—Netflix, Amazon, Spotify—but creating a truly effective one requires more than just user preferences. This project combines LLMs, Retrieval Augmented Generation (RAG), and traditional statistical models to deliver highly personalized recommendations. The project involves:
Data Collection: Gather user data and interaction history.
LLMs for Preference Understanding: Use LLMs to analyze user reviews, search history, or social media posts.
RAG for Context: Implement RAG to fetch relevant data from a knowledge base to refine recommendations.
Collaborative Filtering: Use statistical models to account for user interaction patterns.
Hybrid System: Combine the outputs of the models for accurate recommendations.
This project will showcase your ability to integrate diverse AI and data science techniques to build a sophisticated recommendation engine.
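To make the hybrid idea concrete, here is a toy sketch in NumPy: a collaborative-filtering score from a low-rank factorization of a tiny ratings matrix, blended with a per-item preference score that the LLM+RAG stage would supply. All numbers and the blend weight are illustrative assumptions:

```python
import numpy as np

ratings = np.array([        # users x items, 0 = unrated
    [5, 4, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 1, 5, 4],
], dtype=float)

# low-rank reconstruction via truncated SVD fills in the blanks
U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
k = 2
cf_scores = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# stand-in for the LLM/RAG stage: per-item affinity inferred from user text
llm_scores = np.array([0.2, 0.1, 0.9, 0.4])

alpha = 0.7  # blend weight between the two signals
user = 1
hybrid = alpha * cf_scores[user] / cf_scores[user].max() + (1 - alpha) * llm_scores
# recommend the highest-scoring item the user hasn't rated yet
print("recommend item:", int(np.argmax(np.where(ratings[user] == 0, hybrid, -np.inf))))
```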
7. Self-Host an LLM Using Ollama/vLLM and Reduce Inference Latency
Hosting and deploying an LLM efficiently is an essential skill in AI. This project focuses on optimizing the deployment of an LLM using tools like Ollama or VLLM to reduce inference latency and improve performance. You’ll explore techniques like quantization, pruning, and caching to speed up model inference, making it more scalable. This project involves:
Model Deployment: Choose an open-source LLM and deploy it using Ollama/VLLM.
Optimization: Implement strategies like quantization to improve inference speed.
Performance Monitoring: Evaluate the model’s performance and adjust as needed.
Scalability: Use load balancing to manage multiple concurrent requests.
By completing this project, you’ll prove your expertise in LLM deployment, optimization, and building scalable AI infrastructure.
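As a starting point, here is a minimal sketch that queries a locally hosted model through Ollama's REST API and times the round trip. It assumes an Ollama server is running on its default port (11434) and that a model has already been pulled with `ollama pull`; the model name "llama3" is an illustrative choice. From this baseline you can measure how a quantized variant of the same model changes latency.

```python
# A minimal sketch of querying a locally hosted model through Ollama's REST
# API and timing the round trip. Assumes an Ollama server on the default
# port and a model already pulled; "llama3" is an illustrative model name.
import time
import requests

def generate(prompt: str, model: str = "llama3") -> str:
    t0 = time.perf_counter()
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    print(f"latency: {time.perf_counter() - t0:.2f}s")  # baseline to beat
    return resp.json()["response"]

print(generate("Explain quantization in one sentence."))
```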
Conclusion
Break into a six-figure AI and data science career with these seven projects. The goal is not just to complete them, but to internalize the concepts and build the communication skills to explain your approach.
Consider documenting your projects on GitHub and writing about your experiences in blog posts; this not only showcases your skills, it also shows that you are engaged with the field and willing to take the initiative.
Remember, in this rapidly evolving field, staying updated with the latest tools and techniques is crucial. Check out resources like NucleusBox for valuable insights and inspiration. The world of AI is vast and full of opportunities—so go ahead, dive in, and build something truly impactful!
2 notes
·
View notes
Text
Online M.Tech for Working Professionals: Advance Your Engineering Career
In today’s fast-paced and innovation-driven world, keeping your technical knowledge up to date is no longer optional—it’s essential. For those in the engineering sector, earning a master's degree can open doors to specialized roles, leadership positions, and research opportunities. But how do professionals with full-time jobs return to academics without hitting pause on their careers?
The answer lies in online M.Tech for working professionals, a flexible, recognized, and career-boosting solution that’s gaining widespread popularity. Whether you're eyeing career growth or aiming to pivot into a new tech domain, the M.Tech for working professionals path offers everything you need—convenience, credibility, and capability.
Let’s explore what makes the M.Tech program for working professionals so effective and how platforms like University Vidya are making the journey smoother for thousands of learners.
Why Choose an M.Tech for Working Professionals?
The primary goal of pursuing an M.Tech degree for working professionals is to upgrade your existing skills without interrupting your career. These programs are curated to cater to the unique needs of employed engineers—offering flexible schedules, online coursework, and subject specialization that directly connects with industry demands.
Be it in software development, civil infrastructure, electrical systems, or mechanical innovations, a postgraduate qualification can drastically improve your profile and performance.
Formats: Part-Time, Distance & Full-Time Options
Working engineers often wonder which format suits them best. Let’s break it down:
1. Part Time M.Tech for Working Professionals
This format is ideal for those who prefer weekend or evening classes. The part time M.Tech for working professionals allows you to continue working while studying, maintaining a steady balance between learning and earning.
2. Distance M.Tech for Working Professionals
The distance M.Tech for working professionals is perfect if you seek complete flexibility. You can learn at your own pace through self-guided modules, video lectures, and virtual labs—supported by minimal in-person requirements.
3. Full Time M.Tech for Working Professionals
Some professionals take a career break or opt for flexible job arrangements to enroll in a full time M.Tech for working professionals. These programs are immersive, research-oriented, and suitable for those looking to dive deeper into academics.
No matter which format you choose, University Vidya helps you find verified, recognized institutions that align with your career goals and learning preferences.
M.Tech Program for Working Professionals: Course Design
The M.Tech program for working professionals is structured differently from traditional programs. Here's what sets it apart:
Tailored for mid-career engineers and tech leads
Industry-aligned curriculum
Project-based learning and case studies
Emphasis on real-world problem solving
Support from virtual mentors and faculty
Specializations range across Computer Science, Data Science, Structural Engineering, VLSI Design, Thermal Engineering, and more. University Vidya can assist you in comparing programs based on your desired domain.
M.Tech Admission for Working Professionals: What You Need to Know
The M.Tech admission for working professionals generally involves a streamlined process:
A relevant B.E. or B.Tech degree
Professional experience (1–3 years depending on the program)
Application form and document submission
Some institutes may conduct interviews or aptitude tests
With so many options, it can be overwhelming to pick the right program. That’s where University Vidya comes in—helping you with eligibility checks, program comparison, and admission support.
Career Scope After an M.Tech Degree for Working Professionals
Completing your M.Tech degree for working professionals unlocks a world of opportunities:
Team Lead & Management Roles
Subject Matter Expert Positions
Research & Development Openings
Teaching & Academic Careers
Government and PSU Engineering Posts
These programs are designed not just to enhance your resume, but also to transform your ability to solve complex engineering challenges in real-time.
Why University Vidya?
Navigating the various options for an online M.Tech for working professionals can be confusing. University Vidya simplifies your search by offering:
Verified programs from accredited institutions
Personalized counseling based on your career goals
Up-to-date course details and guidance
End-to-end support through the admission process
When it comes to choosing a trusted platform for education planning, University Vidya is a name professionals rely on. With a team of experienced academic advisors, a commitment to transparency, and a student-first approach, University Vidya ensures that every learner makes informed decisions aligned with their professional growth and future aspirations.
2 notes
·
View notes
Text
The following categories are not exhaustive; they are written only to give you an idea:
*Basically computer literate: I understand the difference between what is on my computer and what is in the cloud, can operate the basic functions of Word/Excel/PowerPoint (or their non-Microsoft equivalents), can type with more than two fingers, know at least two keyboard shortcuts, know how to organize folders, can manage right-click options, and can learn my way around a program by trial and error.
**Computer fluent: I can operate most/all the elements of an office package. I have taken more than one college-level computer science related course. I can do basic HTML coding. I can find creative solutions to problems by using more than one program in combination. I know what a command line is and know a handful of basic commands.
***Computer proficient: I am a professional in the IT field or could be. I can "do code": I know several programming languages and can write a program if I want. I am knowledgeable about how the innards of both software and hardware work.
33 notes
·
View notes