#AI courses for developers
Text
AI Courses for Developers | GenAI Training for Tech Teams
Advance your tech team's capabilities with AI courses for developers tailored to software engineers and data practitioners. upGrad’s hands-on program in Generative AI covers real-world use cases, tools, and techniques to build, deploy, and optimize AI applications.
#AI courses for developers#AI for software engineers#GenAI for tech teams#AI training for developers#data practitioner AI training
0 notes
Text
Giant hug for everyone who has been onto the game's Steam discussion board and replied with classy snark to people grumbling on there.
I'm reluctant to do that as a developer but when I see you guys doing it, just know that I'm cheering for you.
#I don't count asking legit questions or highlighting bugs to be grumbling of course!#But complaining about the price of the game vs play time#or that I haven't used generative AI in the game development process (???)#Or that the game is Woke Garbage#Is grumbling
92 notes
Text
I keep thinking about how absolutely godawful Megalopolis was, so here are some of my favourite tweets from the day I saw it:
[tweet screenshots]
It really was just 2.5 hours of Coppola saying "look how stupid I am and how I misunderstood the entire philosophy of stoicism and how the lessons of Rome apply to the modern day". Not to mention the rampant misogyny, Islamophobia, and orientalism. I've never regretted paying to see a movie in my entire 30 years of life before this - and I once saw an experimental movie that featured six minutes of sitting in pitch black and, later on, someone getting fisted out of nowhere.
the one highlight: Wow Platinum. i love a "crazy bitch" with all my heart. truly unhinged. i think Aubrey Plaza knew exactly what movie she was in.
#misc: photo#do i tag this?#shure fuck it why not#no one is going to have undeveloped and deranged opinions on tumblr right??#also it's so funny but a former developer from larian who now works in AI said she loved it#lol. lmao even. of course you'd like it - you develop a product that makes dead authors write for you.#megalopolis
58 notes
Text
I kinda wish that the DetCo canon would do more with the fact that the relationship (I don't mean this in the shipping sense) between Conan and Haibara has been, or at least logically should have been, really strained for a long time.
Originally, they had this development where Haibara was really messed up: overly cautious, trying to force her maladaptive survival strategies onto other people, and generally not good at positive interactions. Then, slowly, Conan and the others started gaining her trust (not entirely, though), and her general mental health improved (never completely, though).
But then it turned around, and started getting significantly worse. Haibara isn't really in a healing arc anymore. If Aoyama still took her seriously as a character (which, to be fair, I don't think is the case), she would be in a retraumatization arc. Conan and his allies are limiting Haibara's agency, invading her privacy, dismissing her concerns about all this, and pretending that this isn't happening while she can obviously tell that it is happening.
I think the really obvious turning point was the Mystery Train arc. Haibara even called Conan and Agasa out on it and stated that she wouldn't forgive them if she were treated like that again. Instead of taking her hurt seriously, they just told her she should "be thankful" and dismissed her hurt as "tsundere", refusing to take her trauma seriously after exploiting it and using her as a mere chess piece in their plans (with Akai even triggering her further by gloating about it).
Since then, Haibara has been trapped under the constant, violating supervision of these people who have demonstrated that they don't respect her, and who also refuse to acknowledge that the problem even exists. It's not a situation where her recovery arc could realistically continue. By all logic, she should be spiraling, getting worse again.
And maybe this is on purpose; Aoyama definitely didn't plan the manga to get this long when he introduced Haibara, and originally the slow-burn of her recovery was a good way to postpone a scenario where Haibara could actually trust Conan enough to give him the kind of information that would lead to the finale arc. But... eventually, even with the extreme slow burn of DetCo, Haibara's recovery arc and relationship development with Conan would have gotten to the point where her continued withholding of crucial information would no longer make sense... and, I guess, rather than start concluding the story at a humanly reasonable rate, Aoyama just opted to nuke Haibara's whole recovery arc and character and relationship development (not only with Conan but also Agasa).
Which could have been tragic but realistic (albeit kind of a major downer out of tune with the manga's usual tone, and upsetting to anyone who had been invested in the slow-burn mental health improvement arc), but then Aoyama can't even be arsed to take it seriously, and is now just pretending that the situation with Haibara and her relationships with Conan and Agasa are still "normal" instead of FUBAR.
It's regrettable and really shows how much everyone dragging out this franchise to milk it for more money just... doesn't actually care about the story anymore, hasn't in a long time. They'll eventually just kill central, fan-favorite, long-running story arcs rather than actually letting them conclude or evolve in a satisfying manner if that would mean risking their precious status quo (which has warped beyond all recognition anyway, so I'm not sure why they bother).
#dcmk criticism#detective conan criticism#haibara ai#edogawa conan#professor agasa#dcmk writing criticism#eternity series#haibara#conan#agasa#by the way the same refusal to let story arcs develop in a satisfying way if it risks status quo#is also warping the shinran relationship development and secret identity issues into something not... good#Aoyama & co. want it to be a positive relationship but they're taking away the capacity for it to be.#the secret identity drama has honestly run its course and they should have updated the situation to Ran finding out#and the characters dealing with the fallout and the development that would have followed#a WHILE ago.#instead it has become a rehash of a rehash of a rehash that makes Conan/Shinichi more and more unsympathetic & less justified each round.#do Aoyama & co. not realise that letting these plotlines evolve would actually be HELPFUL by giving them NEW scenarios to work with?!#I give up#old fan yells at cloud
67 notes
Text
i just had to take this ai personality test to submit a job application (to be a bartender).
[screenshots of the test]
#the end of our society is upon us yall#i think it’s so funny that so much of imaging our future with technology (sci-fi) branches off into two subsection#A. technology gets so advanced that it becomes the governing (tyrannical) power#or B. technology aids humanity in developing a star trek esque utopia of convenience and luxury#but i don’t think anyone predicted this#this weird dependency on technology (especially AI and other ‘smart’ tech) thats actually just shit#like yeah tech is replacing human jobs and doing it worse and less convenient#it wasn’t that long ago when you could call up any store and a real human being would answer#like… 5-10 years ago??#do you remember when you could walk into a store and get a job application and fill it out by hand#in order to get to this ‘personality test’ stage i had to chat with an AI virtual assistant#and then make an account and (after verifying my email of course) filled out my online application (again…)#and then i had to take this personality test#all so i can continue serving ppl highballs and beer??#its sad to see how normalized this is now#anyway as frustrated as i am by the state of the world#i’m choosing to laugh at how fkn dumb this ai test is#i’m gonna post more pics in a sec
19 notes
Text
The Importance of Investing in Soft Skills in the Age of AI
New Post has been published on https://thedigitalinsider.com/the-importance-of-investing-in-soft-skills-in-the-age-of-ai/
I’ll set out my stall and let you know I am still an AI skeptic. Heck, I still wrap “AI” in quotes a lot of the time when I talk about it. I am, however, skeptical of the present rather than the future. I wouldn’t say I’m positive or even excited about where AI is going, but there’s an inevitability that, in development circles, it will become further ingrained in our work.
We joke in the industry that the suggestions AI gives us are, more often than not, terrible, but that will only improve with time. A good basis for that theory is how fast generative AI has improved at image and video generation. Sure, generated images still have that “shrink-wrapped” look about them, and generated images of people have extra… um… limbs, but consider how much AI-generated images have improved, even in the last 12 months.
There’s also the fact that VC money is seemingly being invested almost exclusively in AI, industry-wide. Pair that with a continuously turbulent tech recruitment situation and endless major layoffs, and even a skeptic like myself can see the writing on the wall for how our jobs as developers are going to be affected.
The biggest risk factor I can foresee is that if your sole responsibility is to write code, your job is almost certainly at risk. I don’t think this is an imminent risk in a lot of cases, but as generative AI improves its code output — just like it has for images and video — it’s only a matter of time before it becomes a redundancy risk for actual human developers.
Do I think this is right? Absolutely not. Do I think it’s time to panic? Not yet, but I do see a lot of value in evolving your skillset beyond writing code. I especially see the value in improving your soft skills.
What are soft skills?
A good way to think of soft skills is that they are life skills. Soft skills include:
communicating with others,
organizing yourself and others,
making decisions, and
adapting to difficult situations.
I believe so much in soft skills that I call them core skills, and I’ll refer to them that way for the rest of this article to underline their importance.
The path to becoming a truly great developer comes down to more than just coding. It’s about how you approach everything else, like communication, giving and receiving feedback, finding a pragmatic solution, planning — and even thinking like a web developer.
I’ve been working with CSS for over 15 years at this point and a lot has changed in its capabilities. What hasn’t changed, though, is the core skills — often called “soft skills” — that are required to push you to the next level. I’ve spent a large chunk of those 15 years as a consultant, helping organizations — both global corporations and small startups — write better CSS. In almost every single case, an improvement in the organization’s core skills was the overarching difference.
The main reason for this is that, a lot of the time, the organizations I worked with had coded themselves into a corner. They’d done that because they just plowed through — Jira ticket after Jira ticket — rather than stepping back to question, “is our approach actually working?” By focusing on their team’s core skills, we were often — and very quickly — able to identify problem areas and come up with pragmatic solutions that were almost never development solutions. These solutions were instead:
Improving communication and collaboration between design and development teams
Reducing design “hand-off” and instead making the web-based output the source of truth
Moving slowly and methodically to move fast
Putting a sharp focus on planning and collaboration between developers and designers, way in advance of production work being started
Changing the mindset from “plow on” to taking a step back, thoroughly evaluating the problem, and then developing a collaborative and, by proxy, much simpler solution
Will improving my core skills actually help?
One thing AI cannot do — and (hopefully) never will be able to do — is be human. Core skills — especially communication skills — are very difficult for AI to recreate well because the way we communicate is uniquely human.
I’ve been doing this job a long time and something that’s certainly propelled my career is the fact I’ve always been versatile. Having a multifaceted skillset — like in my case, learning CSS and HTML to improve my design work — will only benefit you. It opens up other opportunities for you too, which is especially important with the way the tech industry currently is.
If you’re wondering how to get started on improving your core skills, I’ve got you. I produced a course called Complete CSS this year but it’s a slight rug-pull because it’s actually a core skills course that uses CSS as a context. You get to learn some iron-clad CSS skills alongside those core skills too, as a bonus. It’s definitely worth checking out if you are interested in developing your core skills, especially so if you receive a training budget from your employer.
Wrapping up
The main message I want to get across is that developing your core skills is as important as — if not more important than — keeping up to date with the latest CSS or JavaScript thing. It might be uncomfortable to do that, but trust me, being able to stand out against AI is only going to be a good thing, and improving your core skills is a sure-fire way to do exactly that.
#ai#approach#Article#Articles#Artificial Intelligence#career#circles#code#coding#Collaboration#collaborative#communication#course#CSS#Design#designers#Developer#developers#development#factor#focus#Future#generative#generative ai#Giving#Global#hand#how#how to#HTML
4 notes
Text
low key i do wanna try to do OC-tober this year maybe with my dragon age OCs.......
#some of my poor dead fereldan wardens......margrethe and grigory.......la'ara's clan......the cousland fam......elodie's mama....#and then of course tali and ais and el and la'ara and badhbh#and also sukhwinder and dubheasa and seighin and callum and cathal#AND a chance to develop my second rook perhaps......she doesn't have a name but she is inspired by saima's character in heeramandi
10 notes
Text
god, maybe: here’s a 33k character count completed oneshot philsho/narusho fic where isaka stabbed a regression type of memory on shotaro to try and develop time travel powers and shotaro managed to prevent a lot of tragedies in this new timeline but in actuality he has to deal with letting go of both the chief’s and philip’s deaths so that the memory doesn’t finalize and isaka can’t use it for nefarious purposes to bring back the Museum but of course shotaro CAN’T help it bc it’s in his nature to not let go of a future where nobody gets hurt
me: ……what’s the twist
god: it’s in mandarin
#silly thoughts#posts to send to the nether#HOHHH BOY this is just like the time i found that 20k oneshot where only sento survived in the new world and he tried to develop AI that is#similar to banjo (even soliciting the cooperation of the katsuragi in that new world) and when that failed bc he kept noticing differences#bc of course ‘banjo’ is an ai even if mid-fic you start to question whether that is the case#sento started opening portals to other worlds and eventually#met a banjo who survived the new world while his sento is gone#so one would assume it was a perfect fit#and even that banjo said the same#but sento has a deep pragmatic realization that he can never replace ‘his’ banjo. his banjo is dead#so he goes back to his og world to stop messing with the timeline#and allows gentoku to mass produce the ai he made (he was developing banjo under that proposal pitch)#and in the last pov (ai banjo’s pov) he can see tear tracks on sento’s face bc he’s an android#AND IT ENDS THERE#THAT FIC BROKE ME#AND IT WAS IN MANDARIN TOO#FUCKKKKKKKKKKK#/yap over
3 notes
Text
A pseudonymous coder has created and released an open source “tar pit” that indefinitely traps AI training web crawlers in an infinite series of randomly generated pages, wasting their time and computing power. The program, called Nepenthes after the genus of carnivorous pitcher plants which trap and consume their prey, can be deployed by webpage owners to protect their own content from being scraped, or deployed “offensively” as a honeypot trap to waste AI companies’ resources.
“Of course, these crawlers are massively scaled, and are downloading links from large swathes of the internet at any given time,” [Aaron B, Nepenthes' creator, said]. “But they are still consuming resources, spinning around doing nothing helpful, unless they find a way to detect that they are stuck in this loop.”
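The mechanism behind a tar pit like this is simple enough to sketch. Below is a minimal, hypothetical Python version of the idea (this is not Nepenthes' actual code; all names and structure here are illustrative): an HTTP server that answers every path with a page of nonsense text linking to yet more random paths, so a link-following crawler never runs out of URLs.

```python
import random
import string
from http.server import BaseHTTPRequestHandler, HTTPServer

# A fixed pool of nonsense "words" used to build both page text and URLs.
WORDS = ["".join(random.choices(string.ascii_lowercase, k=random.randint(3, 9)))
         for _ in range(500)]

def random_page(n_links=10, n_words=80):
    """Build an HTML page of nonsense text that links to further random paths."""
    body = " ".join(random.choices(WORDS, k=n_words))
    links = "".join(
        f'<a href="/{random.choice(WORDS)}/{random.choice(WORDS)}">{random.choice(WORDS)}</a> '
        for _ in range(n_links)
    )
    return f"<html><body><p>{body}</p><p>{links}</p></body></html>"

class TarPit(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every path resolves to a fresh page, and every page links to more
        # paths, so a crawler that follows links never runs out of URLs.
        page = random_page().encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(page)))
        self.end_headers()
        self.wfile.write(page)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port=8080):
    """Start the tar pit; blocks forever."""
    HTTPServer(("127.0.0.1", port), TarPit).serve_forever()
```

Nepenthes reportedly also drip-feeds its responses slowly to tie crawlers up for longer; a `time.sleep()` at the top of `do_GET` would mimic that.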
#i love 404 media. if you get emails you should sign up for theirs. worker-owned quality journalism#posting this because i see a lot of stuff about 'AI' (vague) which is either very angry about how bad AI is environmentally & culturally#or is like 'AI is no worse than any other major networked service for the environment & also luddism does not work'#i tend to sympathize with the second stance (luddism demonstrably does not work; internet infrastructure is expensive)#but i do think 'AI' development is frustrating; it is not the most efficient way to do a lot of tasks it is currently being trialled in#& of course it does functionally threaten people's jobs & telling them to unionize doesn't really help in any immediate way#anyway here is an example of someone 'stopping AI' in a way which is deliberately resource consumptive. how do we feel about it.
6 notes
Text
Your Guide to B.Tech in Computer Science & Engineering Colleges

In today's technology-driven world, pursuing a B.Tech in Computer Science and Engineering (CSE) has become a popular choice among students aspiring for a bright future. The demand for skilled professionals in areas like Artificial Intelligence, Machine Learning, Data Science, and Cloud Computing has made computer science engineering colleges crucial in shaping tomorrow's innovators. Saraswati College of Engineering (SCOE), a leader in engineering education, provides students with a perfect platform to build a successful career in this evolving field.
Whether you're passionate about coding, software development, or the latest advancements in AI, pursuing a B.Tech in Computer Science and Engineering at SCOE can open doors to endless opportunities.
Why Choose B.Tech in Computer Science and Engineering?
Choosing a B.Tech in Computer Science and Engineering isn't just about learning to code; it's about mastering problem-solving, logical thinking, and the ability to work with cutting-edge technologies. The course offers a robust foundation that combines theoretical knowledge with practical skills, enabling students to excel in the tech industry.
At SCOE, the computer science engineering courses are designed to meet industry standards and keep up with the rapidly evolving tech landscape. With its AICTE approval and NAAC "A+" accreditation, the college provides quality education in a nurturing environment. SCOE's curriculum goes beyond textbooks, focusing on hands-on learning through projects, labs, workshops, and internships. This approach ensures that students graduate not only with a degree but with the skills needed to thrive in their careers.
The Role of Computer Science Engineering Colleges in Career Development
The role of computer science engineering colleges like SCOE is not limited to classroom teaching. These institutions play a crucial role in shaping students' futures by providing the necessary infrastructure, faculty expertise, and placement opportunities. SCOE, established in 2004, is recognized as one of the top engineering colleges in Navi Mumbai. It boasts a strong placement record, with companies like Goldman Sachs, Cisco, and Microsoft offering lucrative job opportunities to its graduates.
The computer science engineering courses at SCOE are structured to provide a blend of technical and soft skills. From the basics of computer programming to advanced topics like Artificial Intelligence and Data Science, students at SCOE are trained to be industry-ready. The faculty at SCOE comprises experienced professionals who not only impart theoretical knowledge but also mentor students for real-world challenges.
Highlights of the B.Tech in Computer Science and Engineering Program at SCOE
Comprehensive Curriculum: The B.Tech in Computer Science and Engineering program at SCOE covers all major areas, including programming languages, algorithms, data structures, computer networks, operating systems, AI, and Machine Learning. This ensures that students receive a well-rounded education, preparing them for various roles in the tech industry.
Industry-Relevant Learning: SCOE’s focus is on creating professionals who can immediately contribute to the tech industry. The college regularly collaborates with industry leaders to update its curriculum, ensuring students learn the latest technologies and trends in computer science engineering.
State-of-the-Art Infrastructure: SCOE is equipped with modern laboratories, computer centers, and research facilities, providing students with the tools they need to gain practical experience. The institution’s infrastructure fosters innovation, helping students work on cutting-edge projects and ideas during their B.Tech in Computer Science and Engineering.
Practical Exposure: One of the key benefits of studying at SCOE is the emphasis on practical learning. Students participate in hands-on projects, internships, and industry visits, giving them real-world exposure to how technology is applied in various sectors.
Placement Support: SCOE has a dedicated placement cell that works tirelessly to ensure students secure internships and job offers from top companies. The B.Tech in Computer Science and Engineering program boasts a strong placement record, with top tech companies visiting the campus every year. The highest on-campus placement offer for the academic year 2022-23 was an impressive 22 LPA from Goldman Sachs, reflecting the college’s commitment to student success.
Personal Growth: Beyond academics, SCOE encourages students to participate in extracurricular activities, coding competitions, and tech fests. These activities enhance their learning experience, promote teamwork, and help students build a well-rounded personality that is essential in today’s competitive job market.
What Makes SCOE Stand Out?
With so many computer science engineering colleges to choose from, why should you consider SCOE for your B.Tech in Computer Science and Engineering? Here are a few factors that make SCOE a top choice for students:
Experienced Faculty: SCOE prides itself on having a team of highly qualified and experienced faculty members. The faculty’s approach to teaching is both theoretical and practical, ensuring students are equipped to tackle real-world challenges.
Strong Industry Connections: The college maintains strong relationships with leading tech companies, ensuring that students have access to internship opportunities and campus recruitment drives. This gives SCOE graduates a competitive edge in the job market.
Holistic Development: SCOE believes in the holistic development of students. In addition to academic learning, the college offers opportunities for personal growth through various student clubs, sports activities, and cultural events.
Supportive Learning Environment: SCOE provides a nurturing environment where students can focus on their academic and personal growth. The campus is equipped with modern facilities, including spacious classrooms, labs, a library, and a recreation center.
Career Opportunities After B.Tech in Computer Science and Engineering from SCOE
Graduates with a B.Tech in Computer Science and Engineering from SCOE are well-prepared to take on various roles in the tech industry. Some of the most common career paths for CSE graduates include:
Software Engineer: Developing software applications, web development, and mobile app development are some of the key responsibilities of software engineers. This role requires strong programming skills and a deep understanding of software design.
Data Scientist: With the rise of big data, data scientists are in high demand. CSE graduates with knowledge of data science can work on data analysis, machine learning models, and predictive analytics.
AI Engineer: Artificial Intelligence is revolutionizing various industries, and AI engineers are at the forefront of this change. SCOE’s curriculum includes AI and Machine Learning, preparing students for roles in this cutting-edge field.
System Administrator: Maintaining and managing computer systems and networks is a crucial role in any organization. CSE graduates can work as system administrators, ensuring the smooth functioning of IT infrastructure.
Cybersecurity Specialist: With the growing threat of cyberattacks, cybersecurity specialists are essential in protecting an organization’s digital assets. CSE graduates can pursue careers in cybersecurity, safeguarding sensitive information from hackers.
Conclusion: Why B.Tech in Computer Science and Engineering at SCOE is the Right Choice
Choosing the right college is crucial for a successful career in B.Tech in Computer Science and Engineering. Saraswati College of Engineering (SCOE) stands out as one of the best computer science engineering colleges in Navi Mumbai. With its industry-aligned curriculum, state-of-the-art infrastructure, and excellent placement record, SCOE offers students the perfect environment to build a successful career in computer science.
Whether you're interested in AI, data science, software development, or any other field in computer science, SCOE provides the knowledge, skills, and opportunities you need to succeed. With a strong focus on hands-on learning and personal growth, SCOE ensures that students graduate not only as engineers but as professionals ready to take on the challenges of the tech world.
If you're ready to embark on an exciting journey in the world of technology, consider pursuing your B.Tech in Computer Science and Engineering at SCOE—a college where your future takes shape.
2 notes
Note
he’s on the right path for the wrong reason?
more like the wrong path for the right logic
#liek i guess i agree with what he says re: the path of ai but i do not think that the way ai is being utilized right now is the way it#should follow but that's not like A Truth but like obviously a very biased opinion#based in how i feel about art and creation. like of course jobs will continue being automated and his point about#basic income necessary as those jobs become automated is something i agree with#but there's more to just advancement of science behind ai because it is part of a market. what people dont understand#is that most of the time the point of ai is to improve comfort because comfort is the best thing to sell#in the neoliberalism system The first people that are fucked are the people without that financial support#that are getting taken away creative rights. so its not considering how corporations work#because they have a very positive view of how science develops#star anons
3 notes
Text
This especially:
i can't actually stop you. if you wanna use ChatGPT to slide through your classes, that's on you. it's your money and it's your time. you will spend none of it thinking, you will learn nothing, and, in college, you will piss away hundreds of thousands of dollars. you will stand at the podium having done nothing, accomplished nothing. a cold and bitter pyrrhic victory.
And also. AI is being trained on stolen intellectual property. This is known. The receipts are out there. The people who run the companies that make AI and are pushing AI will full-on steal your ideas and your words and your art, but heaven help you if you steal their ideas or words or art. 
i have chronic pain. i am neurodivergent. i understand - deeply - the allure of a "quick fix" like AI. i also just grew up in a different time. we have been warned about this.
15 entire years ago i heard about this. in my forensics class in high school, we watched a documentary about how AI-based "crime solving" software was inevitably biased against people of color.
my teacher stressed that AI is like a book: when someone writes it, some part of the author will remain within the result. the internet existed but not as loudly at that point - we didn't know that AI would be able to teach itself off already-biased Reddit threads. i googled it: yes, this bias is still happening. yes, it's just as bad if not worse.
i can't actually stop you. if you wanna use ChatGPT to slide through your classes, that's on you. it's your money and it's your time. you will spend none of it thinking, you will learn nothing, and, in college, you will piss away hundreds of thousands of dollars. you will stand at the podium having done nothing, accomplished nothing. a cold and bitter pyrrhic victory.
i'm not even sure students actually read the essays or summaries or emails they have ChatGPT pump out. i think it just flows over them and they use the first answer they get. my brother teaches engineering - he recently got fifty-three copies of almost-the-exact-same lab reports. no one had even changed the wording.
and yes: AI itself (as a concept and practice) isn't always evil. there's AI that can help detect cancer, for example. and yet: when i ask my students if they'd be okay with a doctor that learned from AI, many of them balk. it is one thing if they don't read their engineering textbook or if they don't write the critical-thinking essay. it's another when it starts to affect them. they know it's wrong for AI to broad-spectrum deny insurance claims, but they swear their use of AI is different.
there's a strange desire to sort of divorce real-world AI malpractice from "personal use". for example, is it moral to use AI to write your cover letters? cover letters are essentially just templates, and besides: AI is going to be reading your job app anyway, so isn't it kind of fair?
i recently found out that people use AI as a romantic or sexual partner. it seems like teenagers particularly enjoy this connection, and this is one of those "sticky" moments as a teacher. honestly - you can roast me for this - but if it was an actually-safe AI, i think teenagers exploring their sexuality with a fake partner is amazing. it prevents them from making permanent mistakes, it can teach them about their bodies and their desires, and it can help their confidence. but the problem is that it's not safe. there isn't a well-educated, sensitive AI specifically designed to help teens explore their hormones. it's just an internet-fed cycle. who knows what they're learning. who knows what misinformation they're getting.
the most common pushback i get involves therapy. none of us have access to the therapist of our dreams - it's expensive, elusive, and involves an annoying amount of insurance claims. someone once asked me: are you going to be mad when AI saves someone's life?
therapists are not just trained on the book; they're trained in patient management and in helping you see things you don't see yourself. part of it will involve discomfort. i don't know that AI is ever going to be able to analyze the words you feed it and answer with a mind towards the "whole person" writing those words. but also - if it keeps/kept you alive, i'm not a purist. i've done terrible things to myself when i was at rock bottom. in an emergency, we kind of forgive the seatbelt for leaving bruises. it's just that chat shouldn't be your only form of self-care and recovery.
and i worry that the influence chat has is expanding. more and more i see people use chat for the smallest, most easily-navigated situations. and i can't like, make you worry about that in your own life. i often think about how easy it was for social media to take over all my time - how i can't have a tiktok because i spend hours on it. i don't want that to happen with chat. i want to enjoy thinking. i want to enjoy writing. i want to be here. i've already really been struggling to put the phone down. this feels like another way to get you to pick the phone up.
the other day, i was frustrated by a book i was reading. it's far in the series and is about a character i resent. i googled if i had to read it, or if it was one of those "in between" books that don't actually affect the plot (you know, one of those ".5" books). someone said something that really stuck with me - theoretically you're reading this series for enjoyment, so while you don't actually have to read it, one would assume you want to read it.
i am watching a generation of people learn they don't have to read the thing in their hand. and it is kind of a strange sort of doom that comes over me: i read because it's genuinely fun. i learn because even though it's hard, it feels good. i try because it makes me happy to try. and i'm watching a generation of people all lay down and say: but i don't want to try.
#inkskinned#anti ai#because of the way it’s being developed and used#of course there are legit uses of ai#but so much of what it is being used for#is the absolute opposite of legit
4K notes
·
View notes
Text
ed zitron, a tech beat reporter, wrote an article about a recent paper from goldman-sachs calling AI, in nicer terms, a grift. it is a really interesting article; hearing criticism from people who are not ignorant of the tech and have no reason to mince words is refreshing. it also brings up points and asks the right questions:
if AI is going to be a trillion dollar investment, what trillion dollar problem is it solving?
what does it mean when people say that AI will "get better"? what does that look like and how would it even be achieved? the article debunks the talking point that all tech is misunderstood at first by pointing out that the technologies AI gets compared to most, the internet and smartphones, were both created over the course of decades with roadmaps and clear goals. AI does not have this.
the american power grid straight up cannot handle the load required to run AI because it has not been meaningfully developed in decades. how are they going to overcome this hurdle (they aren't)?
people who are losing their jobs to this tech aren't being "replaced". they're just getting a taste of how little their managers care about their craft and how little they think of their consumer base. ai is not capable of replacing humans and there's no indication it ever will be, because...
all of these models use the same training data, so now they're all giving the same wrong answers in the same voice. without massive and i mean EXPONENTIALLY MASSIVE troves of data to work with, they are pretty much at a standstill for any innovation they're imagining in their heads
76K notes
·
View notes
Text
Starting reading the AI Snake Oil book online today
New Post has been published on https://thedigitalinsider.com/starting-reading-the-ai-snake-oil-book-online-today/
The first chapter of the AI snake oil book is now available online. It is 30 pages long and summarizes the book’s main arguments. If you start reading now, you won’t have to wait long for the rest of the book — it will be published on the 24th of September. If you haven’t pre-ordered it yet, we hope that reading the introductory chapter will convince you to get yourself a copy.
We were fortunate to receive positive early reviews from The New Yorker, Publishers’ Weekly (featured in the Top 10 science books for Fall 2024), and many other outlets. We’re hosting virtual book events (City Lights, Princeton Public Library, Princeton alumni events), and have appeared on many podcasts to talk about the book (including Machine Learning Street Talk, 20VC, Scaling Theory).
Our book is about demystifying AI, so right out of the gate we address what we think is the single most confusing thing about it:
AI is an umbrella term for a set of loosely related technologies
Because AI is an umbrella term, we treat each type of AI differently. We have chapters on predictive AI, generative AI, as well as AI used for social media content moderation. We also have a chapter on whether AI is an existential risk. We conclude with a discussion of why AI snake oil persists and what the future might hold. By AI snake oil we mean AI applications that do not (and perhaps cannot) work. Our book is a guide to identifying AI snake oil and AI hype. We also look at AI that is harmful even if it works well — such as face recognition used for mass surveillance.
While the book is meant for a broad audience, it does not simply rehash the arguments we have made in our papers or on this newsletter. We make scholarly contributions and we wrote the book to be suitable for adoption in courses. We will soon release exercises and class discussion questions to accompany the book.
Chapter 1: Introduction. We begin with a summary of our main arguments in the book. We discuss the definition of AI (and more importantly, why it is hard to come up with one), how AI is an umbrella term, what we mean by AI Snake Oil, and who the book is for.
Generative AI has made huge strides in the last decade. On the other hand, predictive AI is used for predicting outcomes to make consequential decisions in hiring, banking, insurance, education, and more. While predictive AI can find broad statistical patterns in data, it is marketed as far more than that, leading to major real-world misfires. Finally, we discuss the benefits and limitations of AI for content moderation on social media.
We also tell the story of what led the two of us to write the book. The entire first chapter is now available online.
Chapter 2: How predictive AI goes wrong. Predictive AI is used to make predictions about people—will a defendant fail to show up for trial? Is a patient at high risk of negative health outcomes? Will a student drop out of college? These predictions are then used to make consequential decisions. Developers claim predictive AI is groundbreaking, but in reality it suffers from a number of shortcomings that are hard to fix.
We have discussed the failures of predictive AI in this blog. But in the book, we go much deeper through case studies to show how predictive AI fails to live up to the promises made by its developers.
Chapter 3: Can AI predict the future? Are the shortcomings of predictive AI inherent, or can they be resolved? In this chapter, we look at why predicting the future is hard — with or without AI. While we have made consistent progress in some domains such as weather prediction, we argue that this progress cannot translate to other settings, such as individuals’ life outcomes, the success of cultural products like books and movies, or pandemics.
Since much of our newsletter is focused on topics of current interest, this is a topic that we have never written about here. Yet, it is foundational knowledge that can help you build intuition around when we should expect predictions to be accurate.
Chapter 4: The long road to generative AI. Recent advances in generative AI can seem sudden, but they build on a series of improvements over seven decades. In this chapter, we retrace the history of computing advances that led to generative AI. While we have written a lot about current trends in generative AI, in the book, we look at its past. This is crucial for understanding what to expect in the future.
Chapter 5: Is advanced AI an existential threat? Claims about AI wiping out humanity are common. Here, we critically evaluate claims about AI’s existential risk and find several shortcomings and fallacies in popular discussion of x-risk. We discuss approaches to defending against AI risks that improve societal resilience regardless of the threat of advanced AI.
Chapter 6: Why can’t AI fix social media? One area where AI is heavily used is content moderation on social media platforms. We discuss the current state of AI use on social media, and highlight seven reasons why improvements in AI alone are unlikely to solve platforms’ content moderation woes. We haven’t written about content moderation in this newsletter.
Chapter 7: Why do myths about AI persist? Companies, researchers, and journalists all contribute to AI hype. We discuss how myths about AI are created and how they persist. In the process, we hope to give you the tools to read AI news with the appropriate skepticism and identify attempts to sell you snake oil.
Chapter 8: Where do we go from here? While the previous chapter focuses on the supply of snake oil, in the last chapter, we look at where the demand for AI snake oil comes from. We also look at the impact of AI on the future of work, the role and limitations of regulation, and conclude with vignettes of the many possible futures ahead of us. We have the agency to determine which path we end up on, and each of us can play a role.
We hope you will find the book useful and look forward to hearing what you think.
The New Yorker: “In AI Snake Oil, Arvind Narayanan and Sayash Kapoor urge skepticism and argue that the blanket term AI can serve as a smokescreen for underperforming technologies.”
Kirkus: “Highly useful advice for those who work with or are affected by AI—i.e., nearly everyone.”
Publishers’ Weekly: Featured in the Fall 2024 list of top science books.
Jean Gazis: “The authors admirably differentiate fact from opinion, draw from personal experience, give sensible reasons for their views (including copious references), and don’t hesitate to call for action. . . . If you’re curious about AI or deciding how to implement it, AI Snake Oil offers clear writing and level-headed thinking.”
Elizabeth Quill: “A worthwhile read whether you make policy decisions, use AI in the workplace or just spend time searching online. It’s a powerful reminder of how AI has already infiltrated our lives — and a convincing plea to take care in how we interact with it.”
We’ve been on many other podcasts that will air around the time of the book’s release, and we will keep this list updated.
The book is available to preorder internationally on Amazon.
#2024#adoption#Advice#ai#ai news#air#Amazon#applications#banking#Blog#book#Books#college#Companies#computing#content#content moderation#courses#data#developers#domains#education#Events#face recognition#Featured#Future#future of work#GATE#generative#generative ai
3 notes
·
View notes
Text
Something I don't think we talk enough about in discussions surrounding AI is the loss of perseverance.
I have a friend who works in education and he told me about how he was working with a small group of HS students to develop a new school sports chant. This was a very daunting task for the group, in large part because many had learning disabilities related to reading and writing, so coming up with a catchy, hard-hitting, probably rhyming, poetry-esque piece of collaborative writing felt like something outside of their skill range. But it wasn't! I knew that, he knew that, and he worked damn hard to convince the kids of that too. Even if the end result was terrible (by someone else's standards), we knew they had it in them to complete the piece and feel super proud of their creation.
Fast-forward a few days and he reports back that yes they have a chant now... but it's 99% AI. It was made by Chat-GPT. Once the kids realized they could just ask the bot to do the hard thing for them - and do it "better" than they (supposedly) ever could - that's the only route they were willing to take. It was either use Chat-GPT or don't do it at all. And I was just so devastated to hear this because Jesus Christ, struggling is important. Of course most 14-18 year olds aren't going to see the merit of that, let alone understand why that process (attempting something new and challenging) is more valuable than the end result (a "good" chant), but as adults we all have a responsibility to coach them through that messy process. Except that's become damn near impossible with an Instantly Do The Thing app in everyone's pocket. Yes, AI is fucking awful because of plagiarism and misinformation and the environmental impact, but it's also keeping people - particularly young people - from developing perseverance. It's not just important that you learn to write your own stuff because of intellectual agency, but because writing is hard and it's crucial that you learn how to persevere through doing hard things.
Write a shitty poem. Write an essay where half the textual 'evidence' doesn't track. Write an awkward as fuck email with an equally embarrassing typo. Every time you do you're not just developing that particular skill, you're also learning that you did something badly and the world didn't end. You can get through things! You can get through challenging things! Not everything in life has to be perfect but you know what? You'll only improve at the challenging stuff if you do a whole lot of it badly first. The ability to say, "I didn't think I could do that but I did it anyway. It's not great, but I did it," is SO IMPORTANT for developing confidence across the board, not just in these specific tasks.
Idk I'm just really worried about kids having to grow up in a world where (for a variety of reasons beyond just AI) they're not given the chance to struggle through new and challenging things like we used to.
38K notes
·
View notes
Text
THIS. DEAR LORD THIS. THIS IS WHAT HAS BEEN DRIVING ME INSANE ABOUT AI.
Look I have contemplated writing an AI Analysis post coming from an actual artist's perspective SEVERAL times with the knowledge I've accumulated but rarely have the spoons to do it but I'll just do a short bit of it now.
So when something really upsets me that is happening and I have little control, I habitually do this thing where I will actively go out there and research the shit out of it. Because I've spent enough time in therapy to know the thing that scares us the most is the unknown. Make the unknown known? It becomes significantly less scary.
And I am backing them up when they say 'AI is a buzzword'. It 120% is. What the AI labelling is hiding under the world's biggest and perhaps most obfuscated umbrella-term is machine learning.
So it would probably shock you to know that, by that metric, we have been using AI for YEARS. Your autocomplete keyboard on your phone that remembers your words according to usage? Machine learning. Facial recognition on mobile phone cameras and Facebook? Machine learning. The ALGORITHMS that have been driving a lot of my most beloathed social medias for years? MACHINE. LEARNING. Auto-generated captions on videos, reverse image searching, targeted advertising, analysis of weather systems, handwriting recognition, DNA sequencing, search engines, and of course your dynamic enemy 'AI' in videogames that has to react to your actions as a player - these are ALL products of machine learning, and by that metric? You have technically been using AI for years; we just didn't call it that yet.
In my great search to understand all things AI, the best summary I heard came from an Australian tech commentator: we're basically calling anything 'new' in machine learning that we don't quite understand yet, collectively, 'AI'. And I agree 100%. The reality is AI has been with us since about the 1960s.
Hang on Chimera/Kery I hear you say, on the Wikipedia page of machine learning it says machine learning is a result of trying to build AI, not AI! Yes, but you literally cannot have the 'Intelligence' part without the machine learning part. You take out the learning and you've just got a brick of data that you can't do shit with. The intelligence part comes in when, based on the data it's been fed and the responses it has gotten back from its environment - whether that is a researcher saying yes or no, or literal environmental feedback in a robot that is learning optimal locomotion through a space - it executes actions. So again, by that metric, when you whip out your phone to take a selfie and your phone starts to track where your face is? It is executing an action based on its data-set of 'what is a face'. That. Is. AI.
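To make that mechanism concrete, here's a completely toy Python sketch of "executing actions based on data and feedback" - every name and number in it is invented for illustration, and no real system is anywhere near this simple. One number gets nudged up or down by yes/no feedback until the program's guesses match the feedback it's been given:

```python
# toy "machine learning": a program adjusts one number based on
# yes/no feedback - the same basic shape as the fancy stuff, minus
# a few billion parameters. all data here is made up for illustration.

def train(examples, rounds=100, lr=0.1):
    """examples: list of (value, is_yes) pairs. learns a threshold."""
    threshold = 0.0
    for _ in range(rounds):
        for value, is_yes in examples:
            guess = value > threshold
            if guess and not is_yes:    # guessed yes, feedback said no
                threshold += lr         # raise the bar
            elif not guess and is_yes:  # guessed no, feedback said yes
                threshold -= lr         # lower the bar
    return threshold

# tiny invented dataset: values above ~5 should be "yes"
data = [(2.0, False), (3.0, False), (6.0, True), (8.0, True)]
t = train(data)
# after training, the learned threshold separates the two groups
```

That's it: the "learning" is the feedback loop, the "intelligence" is acting on what it learned. Scale the one number up to billions of them, and the yes/no feedback up to the whole scraped internet, and you've got the stuff currently being sold as AI.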
So everything is AI now? Yeah it's an umbrella term, that's what I said. The disparity between knowing what machine learning and AI is to the point we call specific things AI (image generation, large language models, voice generation) and other things 'not AI' (see my long list again) is down to MARKETING.
Let me take you back to the tail 'end' of the pandemic. You're OpenAI and through scraping a lot of publicly available data of just people chatting or writing various things - with dubious consent - you have made a really good chat-bot. Yeah you heard me, CHAT-BOT. If you're old like me, you remember chat-bots - they're those goofy things you played with as a teenager and laughed at because they'd say silly things, and it'd be funny to put two together trying to talk to each other because they'd begin spouting nonsense and getting stuck in a loop. Or they're the widely hated artificial help systems on government websites embedded in a chatbox that does jack shit. Or the annoying pop-up on some website you're just trying to buy shit from, with stock-image-sandra here in a text box 'ready to help you'. Chat-bots have an image problem. You can't release ChatGPT, your fancy chat-bot, as a 'chat-bot' - how the hell are you supposed to get investors? You've got some really good projects on the go (with dubiously sourced data) but you're running out of money. You need to do something fast.
So you take out the AI umbrella term, and right before everyone is just about ready to leave their hermit-chronically-online-pandemic-induced lifestyles - you drop the metaphorical bomb. You hand over your tech, now with the shiny new AI label, to the public. The AI label hides the fact from the public that you're basically rebranding shit we've had forever and by keeping it purposefully murky you can (hopefully) get people to ignore the fact that you've basically pulled vast swathes of data with dubious consent because - but it's AI! It's such a superior piece of technology! We can't un-invent the wheel because the ends didn't justify the means! It could change the world!
Despite the fact it's been 'changing the world' since 1960, the only difference here is that you linked enough computers together to make it better than what was previously available. But now you have to pay electricity costs for all that tech, so - out into the wild it goes!
And now you've triggered a technological arms race and the use of AI (and your bottom line) is skyrocketing! AI that was previously the domain of government and massive corporate use is now in the hands of people to play with - their personal tech literacy be damned (no, literally be damned: the less they understand the better). And they won't want to have it taken off them - in fact they'll fight each other over the value of your chat-bot and image generator in spite of the fact you stole data to train it. So your profits keep rolling in and next minute, despite your ethos being 'open source to all' - you're getting approached by Microsoft for a partial buy-in and now you're 'semi-private', whatever the hell that means. Who cares! Money!
I have so, so much more to say on all this but I'll leave it for a proper post. But the lesson of this very tl;dr history of OpenAI is this: AI is machine learning. Machine learning is a TOOL. AI is a TOOL.
And a tool is only as ethical as the hand that chooses to wield it. Artificial intelligence is neutral. It is not good. It is not bad. It is just like the knife on your kitchen bench, with all the potential for good and useful things like helping you make dinner, and also for horrendous, horrible things like committing a violent crime. And who made the knife in your kitchen? Is it artisan? Handcrafted by someone well paid in their profession? Or was it mass produced in third world conditions? Now, is your knife itself bad? Should we ban all kitchen knives?
AI is a marketing buzzword for shit we've had for years - this is just the shiny version that went public to get money and we all fell for it hook, line and sinker.
So I challenge you, the next time something wizz-bang-shiny-tech-whatever is placed in front of you, and maybe it's a bit scary - to do what I do. Instead of filing it into a box of good or bad, or starting arguments online with someone with only limited information over whether a person is 'good' or 'bad' for participating or not participating in this technology because it's now emotionally loaded for you - do what I do. RESEARCH IT. Understand it, deeply. Listen to commentary on it from both sides, learn about the intent behind why it was handed to you, and for the love of god USE SOME CRITICAL THINKING SKILLS.
Because I guarantee you once you do that? Stuff will quickly become a lot less murky. You'll be able to see where your own blindspots are, and prevent them from being exploited - in this case, being taken advantage of by big corporations who are trying to pull an 'oopsie-woopsie' on unethical datasets for profit. You'll be able to hold them accountable. You'll also be less likely to get pulled into stupid arguments online about shit because you know it is way more nuanced than tech-bro putting out his big titty waifu image soup - he's small game here. Who cares about him. Go for the people at the top of this who are hoping to keep sliding on by with their rolling profits because you're too busy having fights among yourselves. Go for them and go for the fucking throat.
Any technology can be used for weal or woe, and it is entirely about the hand who wields it. Or in this case, the hand who programmed it.
If we want to continue to use AI or Machine Learning in an ethical, revolutionary manner we need to stop falling for the marketing, and hold each other accountable to uses that will continue to benefit humanity. Not pull it apart.
So yes. AI is a buzzword. Stop falling for it.
#kerytalk#here we go again#artificial intelligence#mic drop and I am off#to walk dog#of course I come out of my hiatus to write a text wall#I am once again begging people to develop critical thinking skills deeper than a saucer#my commentary#ai art
71K notes
·
View notes