#expertise in AI and machine learning
in-sightpublishing · 11 months ago
Text
Matthew Scillitani on Machine Learning and Family
Author(s): Scott Douglas Jacobsen
Publication (Outlet/Website): The Good Men Project
Publication Date (yyyy/mm/dd): 2024/07/23
Matthew Scillitani, member of the Glia Society and Giga Society, is a software engineer living in Cary, North Carolina. He is of Italian and British lineage, and is fluent in English and Dutch (reading and writing). He holds a B.S. in Computer Science and a B.A. in…
0 notes
bigleapblog · 9 months ago
Text
Your Guide to B.Tech in Computer Science & Engineering Colleges
Tumblr media
In today's technology-driven world, pursuing a B.Tech in Computer Science and Engineering (CSE) has become a popular choice among students aspiring for a bright future. The demand for skilled professionals in areas like Artificial Intelligence, Machine Learning, Data Science, and Cloud Computing has made computer science engineering colleges crucial in shaping tomorrow's innovators. Saraswati College of Engineering (SCOE), a leader in engineering education, provides students with a perfect platform to build a successful career in this evolving field.
Whether you're passionate about coding, software development, or the latest advancements in AI, pursuing a B.Tech in Computer Science and Engineering at SCOE can open doors to endless opportunities.
Why Choose B.Tech in Computer Science and Engineering?
Choosing a B.Tech in Computer Science and Engineering isn't just about learning to code; it's about mastering problem-solving, logical thinking, and the ability to work with cutting-edge technologies. The course offers a robust foundation that combines theoretical knowledge with practical skills, enabling students to excel in the tech industry.
At SCOE, the computer science engineering courses are designed to meet industry standards and keep up with the rapidly evolving tech landscape. AICTE-approved and NAAC-accredited with an "A+" grade, the college provides quality education in a nurturing environment. SCOE's curriculum goes beyond textbooks, focusing on hands-on learning through projects, labs, workshops, and internships. This approach ensures that students graduate not only with a degree but with the skills needed to thrive in their careers.
The Role of Computer Science Engineering Colleges in Career Development
The role of computer science engineering colleges like SCOE is not limited to classroom teaching. These institutions play a crucial role in shaping students' futures by providing the necessary infrastructure, faculty expertise, and placement opportunities. SCOE, established in 2004, is recognized as one of the top engineering colleges in Navi Mumbai. It boasts a strong placement record, with companies like Goldman Sachs, Cisco, and Microsoft offering lucrative job opportunities to its graduates.
The computer science engineering courses at SCOE are structured to provide a blend of technical and soft skills. From the basics of computer programming to advanced topics like Artificial Intelligence and Data Science, students at SCOE are trained to be industry-ready. The faculty at SCOE comprises experienced professionals who not only impart theoretical knowledge but also mentor students for real-world challenges.
Highlights of the B.Tech in Computer Science and Engineering Program at SCOE
Comprehensive Curriculum: The B.Tech in Computer Science and Engineering program at SCOE covers all major areas, including programming languages, algorithms, data structures, computer networks, operating systems, AI, and Machine Learning. This ensures that students receive a well-rounded education, preparing them for various roles in the tech industry.
Industry-Relevant Learning: SCOE’s focus is on creating professionals who can immediately contribute to the tech industry. The college regularly collaborates with industry leaders to update its curriculum, ensuring students learn the latest technologies and trends in computer science engineering.
State-of-the-Art Infrastructure: SCOE is equipped with modern laboratories, computer centers, and research facilities, providing students with the tools they need to gain practical experience. The institution’s infrastructure fosters innovation, helping students work on cutting-edge projects and ideas during their B.Tech in Computer Science and Engineering.
Practical Exposure: One of the key benefits of studying at SCOE is the emphasis on practical learning. Students participate in hands-on projects, internships, and industry visits, giving them real-world exposure to how technology is applied in various sectors.
Placement Support: SCOE has a dedicated placement cell that works tirelessly to ensure students secure internships and job offers from top companies. The B.Tech in Computer Science and Engineering program boasts a strong placement record, with top tech companies visiting the campus every year. The highest on-campus placement offer for the academic year 2022-23 was an impressive 22 LPA from Goldman Sachs, reflecting the college’s commitment to student success.
Personal Growth: Beyond academics, SCOE encourages students to participate in extracurricular activities, coding competitions, and tech fests. These activities enhance their learning experience, promote teamwork, and help students build a well-rounded personality that is essential in today’s competitive job market.
What Makes SCOE Stand Out?
With so many computer science engineering colleges to choose from, why should you consider SCOE for your B.Tech in Computer Science and Engineering? Here are a few factors that make SCOE a top choice for students:
Experienced Faculty: SCOE prides itself on having a team of highly qualified and experienced faculty members. The faculty’s approach to teaching is both theoretical and practical, ensuring students are equipped to tackle real-world challenges.
Strong Industry Connections: The college maintains strong relationships with leading tech companies, ensuring that students have access to internship opportunities and campus recruitment drives. This gives SCOE graduates a competitive edge in the job market.
Holistic Development: SCOE believes in the holistic development of students. In addition to academic learning, the college offers opportunities for personal growth through various student clubs, sports activities, and cultural events.
Supportive Learning Environment: SCOE provides a nurturing environment where students can focus on their academic and personal growth. The campus is equipped with modern facilities, including spacious classrooms, labs, a library, and a recreation center.
Career Opportunities After B.Tech in Computer Science and Engineering from SCOE
Graduates with a B.Tech in Computer Science and Engineering from SCOE are well-prepared to take on various roles in the tech industry. Some of the most common career paths for CSE graduates include:
Software Engineer: Developing software applications, web development, and mobile app development are some of the key responsibilities of software engineers. This role requires strong programming skills and a deep understanding of software design.
Data Scientist: With the rise of big data, data scientists are in high demand. CSE graduates with knowledge of data science can work on data analysis, machine learning models, and predictive analytics.
AI Engineer: Artificial Intelligence is revolutionizing various industries, and AI engineers are at the forefront of this change. SCOE’s curriculum includes AI and Machine Learning, preparing students for roles in this cutting-edge field.
System Administrator: Maintaining and managing computer systems and networks is a crucial role in any organization. CSE graduates can work as system administrators, ensuring the smooth functioning of IT infrastructure.
Cybersecurity Specialist: With the growing threat of cyberattacks, cybersecurity specialists are essential in protecting an organization’s digital assets. CSE graduates can pursue careers in cybersecurity, safeguarding sensitive information from hackers.
Conclusion: Why B.Tech in Computer Science and Engineering at SCOE is the Right Choice
Choosing the right college is crucial for a successful career in B.Tech in Computer Science and Engineering. Saraswati College of Engineering (SCOE) stands out as one of the best computer science engineering colleges in Navi Mumbai. With its industry-aligned curriculum, state-of-the-art infrastructure, and excellent placement record, SCOE offers students the perfect environment to build a successful career in computer science.
Whether you're interested in AI, data science, software development, or any other field in computer science, SCOE provides the knowledge, skills, and opportunities you need to succeed. With a strong focus on hands-on learning and personal growth, SCOE ensures that students graduate not only as engineers but as professionals ready to take on the challenges of the tech world.
If you're ready to embark on an exciting journey in the world of technology, consider pursuing your B.Tech in Computer Science and Engineering at SCOE—a college where your future takes shape.
2 notes · View notes
audliminal · 1 year ago
Text
....hang on, do people nowadays just not know that the term AI didn't originate from anything to do with machine learning? It means artificial intelligence, and the thing is, our contextual definition of 'intelligence' is constantly changing. Before the (very recent) big boom of machine learning, the term AI referred to pretty much any program that used logic or algorithms to perform a task. Like playing against bots in a video game, except in this situation the bots are playing against each other.
Tumblr media
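For anyone who hasn't seen that older sense of "AI" in code: here's a hypothetical game bot built purely from hand-written rules, no machine learning or training data anywhere (the names and numbers are invented for illustration).

```python
# A toy, hand-written game bot: no training data, no statistics, no "learning".
# This is the kind of program the term "AI" routinely described for decades.

def bot_choose_action(bot_hp: int, enemy_hp: int, enemy_distance: float) -> str:
    """Pick an action from fixed if/else rules (a decision tree written by a person)."""
    if bot_hp < 20:
        return "retreat"          # survival rule takes priority
    if enemy_distance < 2.0:
        return "melee_attack"     # close range: swing
    if enemy_hp < 15:
        return "charge"           # finish off a weak enemy
    return "advance"              # default behaviour

# Two bots "playing against each other" is just calling rules like these in a loop.
print(bot_choose_action(bot_hp=80, enemy_hp=10, enemy_distance=5.0))  # -> "charge"
```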
131K notes · View notes
zapperrr · 1 year ago
Text
Tumblr media
Harnessing the Power of Artificial Intelligence in Web Development
0 notes
rubylogan15 · 1 year ago
Text
Explore the inner workings of LlamaIndex, enhancing LLMs for streamlined natural language processing, boosting performance and efficiency.
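As a rough sketch of what that looks like in practice (assuming the llama_index package's standard quickstart API and a placeholder data/ folder, neither of which comes from the original post): documents are indexed into vectors, relevant chunks are retrieved, and the LLM answers with that context.

```python
# Minimal retrieval-augmented query with LlamaIndex (a sketch, not from the post).
# Assumes an LLM/embedding backend is already configured for llama_index.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()   # load local files (placeholder path)
index = VectorStoreIndex.from_documents(documents)      # embed and index the chunks
query_engine = index.as_query_engine()                  # retriever + LLM wrapper

response = query_engine.query("Summarize the key points in these documents.")
print(response)
```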
0 notes
solutionmindfire · 1 year ago
Text
0 notes
Text
Mitch Cornell: The Undisputed Best Law Firm SEO Expert in Denver
Tumblr media
In the competitive world of legal marketing, standing out online is more challenging than ever. Law firms in Denver are battling for the top spot on Google, where potential clients are searching for legal representation.
Tumblr media
But with Mitch Cornell, law firms don’t just compete—they dominate. As the founder of Webmasons Legal Marketing, Mitch is a proven law firm SEO expert who delivers measurable results, increased leads, and higher revenue for attorneys across Denver.
Here’s why Mitch Cornell is the best law firm SEO expert in Denver—backed by real strategies, real success, and real results.
What Makes Mitch Cornell the #1 Law Firm SEO Consultant?
Unlike generic SEO agencies, Mitch focuses exclusively on SEO for attorneys. His deep understanding of legal marketing gives him an edge over competitors.
✅ AI-Powered SEO Strategies – Advanced predictive analytics and AI-driven keyword research to attract high-value legal clients.
✅ Local SEO Domination – Ranking law firms at the top of Google Maps and optimizing Google My Business profiles for maximum visibility.
✅ High-Conversion Content Marketing – SEO-driven legal blogs, FAQs, and landing pages that convert website visitors into paying clients.
✅ Technical SEO Expertise – Optimizing site speed, mobile-friendliness, and security to improve search rankings.
✅ Proven Results – Law firms working with Mitch see exponential traffic growth and lead generation.
Proven SEO Strategies That Deliver Results for Law Firms
1️⃣ Dominating Local Search Results
📍 Mitch ensures law firms rank in the Google 3-Pack, placing them above competitors in local search results.
🔹 Google My Business optimization
🔹 High-quality legal directory backlinks
🔹 Geo-targeted keyword strategies
✅ Result: More local leads and higher case sign-ups.
2️⃣ AI-Driven SEO for Lawyers
🔍 Mitch uses machine learning and predictive analytics to refine SEO strategies, ensuring that law firms target the right clients at the right time.
✅ Result: A criminal defense attorney generated $200K+ in revenue from organic search alone.
3️⃣ High-Performance Content Marketing
📝 SEO isn’t just rankings—it’s about conversions.
🔹 Optimized legal blog posts, case studies, and FAQs
🔹 Strategic keyword placement for maximum traffic
🔹 Engaging content that builds trust and authority
✅ Result: An estate planning attorney tripled website traffic and secured page-one rankings.
Real Success Stories. Real Results.
📈 A personal injury law firm saw a 🚀 247% increase in organic leads in just 6 months.
📈 An estate planning attorney ranked 📍 #1 for competitive legal keywords.
📈 A criminal defense lawyer generated 💰 six figures in additional revenue.
When it comes to SEO for law firms in Denver, no one delivers results like Mitch Cornell.
Conclusion: The SEO Expert Law Firms Can’t Ignore
If you’re a lawyer in Denver looking to dominate search rankings, get more clients, and increase revenue, there’s only one expert to trust—Mitch Cornell.
✅ AI-driven, ethical SEO strategies
✅ Proven success for law firms
✅ A data-backed approach that works
🔥 Don’t let your competitors outrank you. Contact Mitch today!
29 notes · View notes
anythingforstories · 1 month ago
Text
I might flip if I see one more little baby writer talk about how they find AI to be super helpful to assist them in raising the quality of their writing.
Because peel back the layers, and really what's going on is that they have no self-confidence. They don't think their writing is very good. And instead of pushing through the ugly phase, they've been told again and again by giant corporations with huge ad budgets that their shiny AI is what's going to help writers make their work good.
They're told that AI is the solution to make their descriptions better, or to analyze their writing for flaws.
So these little baby writers (we're talking 13, 14 years old) are turning to AI, because they recognize the flaws in their writing and want a solution.
But they don't have the expertise to understand why their writing has flaws. Because it does! When you start doing something, you start as a novice and need to learn. But the AI gives quick "fixes" that make it look better at a glance but lack consistency, finesse, or intentionality. It makes their writing a generic imitation of an amalgamation of training data.
And the baby writers think it's good! Because they don't have the fundamental skills to understand why it's not, so they look at the fancier words or the more concise sentence and think it's better.
(Not to mention the bias so many people have to think computers are more reliable than people... no wonder the baby writers shut off their brains and just accept what AI tells them!)
I'm not mad at the baby writers; I'm sad for them. AI is being pushed on them by corporations who want to shove AI into every imaginable use case, whether or not it's actually desirable or useful. They're being marketed to by soulless entities that care about the bottom line and jumping on the AI hype, not about human creativity.
They're being sold lies.
And they don't even have the foundation to see that. They're just scared that people will see their messy writing and have the mindset that they should be cranking out professional-level stories early on. Kids are scared to look like they're not experts in everything they try.
So to all the little baby writers (whether you're 13, 23, or 113)...
It's okay to write something cringey.
It's okay to have flaws.
It's okay that you're not an expert.
It's okay if it takes years to become good at a skill.
Everything you write yourself—intentionally and carefully—will still be better than whatever AI tells you you should be writing.
Focus on your fundamentals; that will take you much further than AI's quick "fixes."
Remember why you love writing. Even when it's hard. Even when it takes you a long time.
And then consider why on earth you would want to outsource that joy to a machine.
17 notes · View notes
blubberquark · 2 months ago
Text
The Future
It's always grating to read or listen to random members of the public talk about AI in the media, and it is much more grating to listen to "futurists" or politicians or so-called experts who have absolutely no domain expertise nor background in machine learning talk about things "AI" will be able to do in the future. A lot of the time, they will predict that AI (which means conversational agents based on large language models trained with transformers and attention) will do things in the future that can already be done by humans, and by computers without any AI, machine learning, or large text corpora, back in the 90s. Politicians on the other hand sometimes use "AI" to deflect criticisms of infeasible ideas. How will this work, exactly? AI!
Sometimes using AI as a buzzword is the point. Nobody wants to hear "we will develop another app".
It usually doesn't take extreme forms like "In the future, AI will allow us to transplant human hearts", but I have seen weaker forms like "In the future, technologies like ChatGPT will make genome-wide association studies and automatic drug discovery possible". You don't need large language models for GWAS or drug discovery. The data sets for this are very different, and I doubt a system like ChatGPT could just absorb a large CSV file of medical data if you pasted it into the conversation.
If you look at claims about "the future" from the recent past, you see the same thing said about blockchain, web 2.0 mash-ups and tagging, the semantic web/ontologies, smart homes, and so on. "In the future, we will all have smart fridges" – "In the future you will begin your day by asking Siri what your appointments are and what you should eat for breakfast" – "In the future your PC will print your newspaper at home." – "In the future you will pay for groceries out of your Bitcoin Wallet."
If you push back, and you point out that this new claim sounds like a bullshit claim about blockchain, smart fridges, and the semantic web, you usually hear "That's what they said about cars. That's what they said about television." Never mind who "they" are. Never mind that they didn't say that about cars, they said that about Bitcoin. Cars are just a massive outlier. Cars were immensely successful, and they were largely unchanged for 120 years, with four wheels and an internal combustion engine that runs on petrol. Cars are noisy, smelly, and dangerous to pedestrians and occupants. For decades, leaded petrol used in cars distributed lead into the air and into the food supply. Cars depend on an infrastructure of asphalt roads and petrol stations. This is different from what they said about CDs or monorail or QR codes or pneumatic tubes. As for TV, it is usually invoked to say "People thought TV would rot our brains, yet here we are". There is no denying that TV has profoundly changed how people spend their time, changed politics, changed how fast the news cycle is, and so on, often for the worse.
It's so easy to refute "that's what they said about cars" that I could probably fill 50 A4 pages with the history of technologies that failed in some way, purely from memory, and then find old newspaper quotes from optimists and futurists that compared the naysayers (correct in hindsight) with car skeptics, and I could fill another 50 pages with ways inventions like cars and TV and the Internet profoundly changed society, and then find quotes from futurists that explain that the Internet is really just a better fax machine, and the car is like a faster horse, so we have nothing to worry about.
There's another way to dismiss skeptics of new technology, and it's harder to refute, even though it operates on the same kind of hindsight bias:
Imagine the year is 1995. What couldn't you achieve if only you knew that computers and the Internet would be big? Imagine you can send a letter to yourself in 1995. Wouldn't you want to tell your former self that the Internet will be the Next Big Thing? Wouldn't you want to tell your former self that by 2015, everybody will have an Internet-connected computer in their pockets?
It's easy to refute the hindsight bias of "that's what they said about cars" with example after example of technologies that didn't catch on for 100 years like cars did.
Where's the error here? If you say something like "Language-model AI is the future! Wouldn't you rather get on the bandwagon sooner than later?" you risk investing your money into a scam just to get in on the ground floor.
But really think it through: Imagine the year is 1985. A time traveller tells you that computers are going to be big. Everybody is going to have one. What do you do? Do you quit your job and work in the computer industry? If not, do you buy a computer? Which one? A C64? An IBM PC XT? Atari ST?
I don't know how much you could really do with this information. Should you invest your savings into Atari? Should you learn to program?
Imagine the year is 1985. A time traveller tells you that the CD is going to replace vinyl and cassette tapes, then there will be mp3 players, but nothing will really replace mp3 players, and then streaming music from centralised servers will replace mp3 players. Nothing will really replace the CD, but the music industry will be completely different. Nobody will sell music on SD cards, mini discs are better than CDs in terms of technology, but they solve the wrong problem. All the cool indie bands that released free promo mp3s in the 2000s will split up or sell out. "What's an mp3?", you ask.
Imagine the year is 2005. Every pseudo-intellectual Internet commenter seems to think VHS won against BetaMax because of pornography. They are going to produce pornography for HD-DVD. You think Blu-Ray is dead in the water. A time traveller appears, and he tells you that actually, VHS won against BetaMax because the tapes are longer, and it allows you to VCR a long television program. Yes, they are going to produce pornography for the HD-DVD first, but it doesn't matter. Ever since Internet pornography, nobody goes to the sex shop anyway, just to risk coming out of the door with a shopping bag full of HD-DVDs, just as his neighbour's wife is coming out of the liquor store across the street. Still the Blu-ray won't replace DVDs like DVDs replaced VHS, because you can still play a DVD in a Blu-ray player, and it will all be streaming in a couple of years anyway.
What will you do with this information, other than buy a Blu-ray player?
Imagine the year is 1923. A time traveller tells you that cars are going to be big. Really big. Everybody will own one, and a garage. Petrol stations are everywhere already, but soon there will be traffic jams. Cities will be planned for cars, not people.
Should you buy a car now? Should you wait for the technology to mature?
The year is 2025. Somebody tells you that LLMs are going to be big. Bigger than they are. Bigger than ever. Bigger than Jesus. He tells you you're a sucker if you don't use ChatGPT. You think he's right, but you don't work in a job that can be done by ChatGPT. You work at a bakery. Maybe just not yet?
What should you do?
I think the idea that you should get in now, and you will "miss the boat" if you don't learn to use GenAI and conversational agents, that idea is just stupid. It's half special pleading, half Pascal's Wager, and a lot of hindsight bias. You couldn't really "get into" other technologies before they matured. Futurists confidently predicted in 2022 that "prompt engineer" was going to be a job, when obviously companies like Google, Anthropic, and OpenAI had every reason and every incentive to work on making their systems better understand users, to make prompt engineering obsolete. At some point owning a car meant learning to be a car mechanic or having a chauffeur who was your personal car mechanic, and then the technology matured. Cars are more complex now, and harder to repair when something breaks, but they are also more reliable and have diagnostic lights.
So should you use ChatGPT or Claude now, just to get ready for "The Future"? I don't know. All I know is that AI won't be a faster horse.
13 notes · View notes
creaturefeaster · 1 year ago
Note
Your opinion about AI arts?
It is an unimpressive and uninspired method of creating art at best. It is also a method of undermining creativity in humanity for the sake of quick & cheap content. It diminishes the value of the working artist. It tells anyone that can identify a work as artificial that the person who generated it has no respect for art, and would rather pay nothing or expend no energy for this work than pay a real person who has the expertise to make it right, or take the time to learn the craft themselves. So of course all money and energy is spent on making artificial art less and less identifiable, because inspiration and imagination come second in life to mass production.
It's just another tool to dampen the soul of humanity, and like every other tool of that nature, it's just one more thing that contributes to harming the planet. The power this type of machine learning & generation needs to function is not minuscule; it takes a lot of real-world resources and produces a lot of real-world waste as a result.
It relies on thievery and loopholes in the creative commons, scraping others' hard work, often with no consent. It encourages the idea that art is not hard work nor a skill, and that people are for labor and machines are for creativity. Don't you think it should be the other way around?
I have never had respect for the method.
39 notes · View notes
fipindustries · 1 year ago
Text
Artificial Intelligence Risk
About a month ago I got the idea of trying the video essay format, and the topic I came up with that I felt I could more or less handle was AI risk and my objections to Yudkowsky. I wrote the script, but soon afterwards I ran out of motivation to do the video. Still, I didn't want the effort to go to waste, so I decided to share the text, slightly edited, here. This is a LONG fucking thing, so put it aside in its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading.
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
If you are already familiar with this you can skip to section two, where I'm going to be talking about Yudkowsky's argument that AI research presents an existential risk to not just humanity, or even the world, but the entire universe, along with my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert in the field; my credentials are dubious at best. I am a college dropout from a computer science program, and I have a three-year graduate degree in video game design and a three-year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos. So. You know. Not an authority on the matter from any considerable point of view, and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
Let's begin with what counts as artificial intelligence. The technical definition of artificial intelligence is, eh…, well, why don't I let someone with a Master's degree in machine intelligence explain it:
Tumblr media
 Now let’s get a bit more precise here and include the definition of AGI, Artificial General intelligence. It is understood that classic ai’s such as the ones we have in our videogames or in alpha GO or even our roombas, are narrow Ais, that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise whether that be within a videogame level, within a GO board or within you filthy disgusting floor.
AGI on the other hand is much more, well, general. It can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not there quite yet, it does seem like we are making some moderate strides in that direction. We've all seen the impressive conversational and coding skills that GPT-4 has, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its context window, while larger than previous models', is still relatively small compared to a human's (the context window is essentially short-term memory, how many things it can keep track of and act coherently about).
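As a toy illustration of that limit (the window size and the word-level "tokens" here are made up purely for the example): once a conversation outgrows the window, the oldest material simply falls outside what the model can attend to.

```python
# Toy sketch of a context window as short-term memory: the model only ever
# "sees" the most recent N tokens, so anything older effectively stops existing.
MAX_CONTEXT_TOKENS = 8  # absurdly small, purely for illustration

def fit_to_window(conversation: list[str], max_tokens: int = MAX_CONTEXT_TOKENS) -> list[str]:
    """Keep only the newest tokens so the conversation fits in the window."""
    tokens = " ".join(conversation).split()   # crude stand-in for real tokenization
    return tokens[-max_tokens:]               # only the most recent tokens survive

history = ["my name is Ada", "I live in Lima", "remind me what my name is"]
print(fit_to_window(history))
# The oldest words are gone; if "Ada" falls outside the window, the model can't recall it.
```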
And yet there is one more factor I haven't mentioned that would be needed to make something a "true" AGI. That is agency: to have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines more broadly, don't have. Volition.
So, now that we have established that, allow me to introduce yet one more definition here, one that you may disagree with but which I need to establish in order to have a common language with you such that I can communicate these ideas effectively. The definition of intelligence. It's a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or lacks intelligence can be seen as implying that it deserves or doesn't deserve admiration, validity, moral worth or even personhood. I don't care about any of that dumb shit. The way I'm going to be using intelligence in this video is basically "how capable you are to do many different things successfully". The more "intelligent" an AI is, the more capable that AI is of doing things. After all, there is a reason why education is considered such a universally good thing in society. To educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I'm using the word within the context of this video. I don't care if you are a psychologist or a neurosurgeon, or a pedagogue; I need a word to express this idea and that is the word I'm going to use. If you don't like it or if you think this is inappropriate of me then by all means, keep on thinking that, go on and comment about it below the video, and then go on to suck my dick.
Anyway. Now, we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases we start to see certain trends, certain strategies start to arise again and again, and we call this Instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It's going to try to protect itself. When you want to do something, being dead is usually. Bad. It's counterproductive. Is not generally recommended. Dying is widely considered unadvisable by 9 out of every ten experts in the field. If there is something that it wants to get done, it won't get done if it dies or is turned off, so it's safe to predict that any AGI will try to do things in order not to be turned off. How far it may go in order to do this? Well… [wouldn't you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to try and change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let's say that you want to take care of your child; that is your goal, that is the thing you want to accomplish, and I come to you and say, here, let me change you on the inside so that you don't care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn't be cared for or protected. And you want to ensure that happens, so caring about something else instead is a huge no-no. Which is why, if we make AGI and it has goals that we don't like, it will probably resist any attempt to "fix" it.
And finally, another goal that it will most likely trend towards is self-improvement, which can be generalized to "resource acquisition". If it lacks the capacities to carry out a plan, then step one of that plan will always be to increase capacities. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job then you need to get an education; if you want to get a partner you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So, one more time, it is not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All three of these things are sure bets; they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven't I? There is one more assumption I'm sneaking into all of this which I haven't talked about. All that I have mentioned presents a very callous view of AGI; I have made it apparent that all of these strategies it may follow will come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency to not like suffering (please keep in mind I said a tendency; I'm talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands. (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing, they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things as we do, because it is not made of the same things a human is made of and it was not raised the way a human was raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. On its way there it comes across an anthill in its path; it will probably step on the anthill, because taking that step takes it closer to the corner store, and why wouldn't it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay any mind to it.
Now let's say it comes across a cat. The same logic applies: if it wasn't programmed with an inherent tendency to value animals, stepping on the cat won't slow it down at all.
Now let’s say it comes across a baby.
Of course, if it's intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won't stop because it will feel bad about harming a baby or because it understands that to harm a baby is wrong. And indeed, if it was powerful enough such that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it would have probably killed the baby.
If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way: it's essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care nominally about having human comforts and companionship, albeit in a very instrumental way, which will involve some manner of stable society and civilization around them. Also, they are only human, and are limited in the harm they can do by human limitations. An AGI doesn't need any of that and is not limited by any of that.
So ultimately, much like a car's goal is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry out those goals effectively. And those goals don't need to include human wellbeing.
Now, with that said: how DO we make it so that AGI cares about human wellbeing, how do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard. [cue the Hitchhiker's Guide to the Galaxy bit about space being big]
This is the part I'm going to skip over the fastest, because frankly it's a deep field of study. There are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning with human feedback (RLHF), to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or allow by inaction a human to come to harm, a robot should do what a human orders unless it contradicts the first law, and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just "programmed" into the robots. These laws were not coded into their software; they were hardwired, part of the robot's electronic architecture, such that a robot could not ever be without those three laws, much like a car couldn't run without wheels.
In this, Asimov realized how important these three laws were: they had to be intrinsic to the robot's very being; they couldn't be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values only because they allow it to achieve something else.
But how do we even begin to do that? How do we codify "human values" into a robot? How do we define "harm", for example? How do we even define "human"??? How do we define "happiness"? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don't have satisfying answers.
Well, the best sort of hack solution we've come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it's not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it is not good enough that I, for example, buy roses and give massages to act nice to my girlfriend because it allows me to have sex with her, merely imitating or performing the role of a loving partner, with her happiness as an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy and that is the thing I care about. Her happiness is my fundamental value. Likewise, to an AGI, human fulfilment should be its fundamental value, not something that it learns to do because it allows it to achieve a certain reward that we give during training. Because if it only really cares, deep down, about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily be divorced from human happiness.
It's Goodhart's law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment; it teaches people not to get caught. Which is why punitive justice doesn't work all that well in stopping recidivism, and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let's do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at them, such that the network starts forming internal mathematical models based on that data and starts creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
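To make "a bunch of numbers that we add and multiply and then tune" concrete, here is a deliberately tiny sketch (the sizes and the crude single-layer update are made up for illustration; a real LLM does this with billions of weights and transformer layers, not two small matrices):

```python
import numpy as np

# A deliberately tiny "neural network": two matrices of numbers and a nonlinearity.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))   # layer 1 weights (in a real LLM: billions of these)
W2 = rng.normal(size=(8, 2))   # layer 2 weights

def forward(x):
    """Multiply and add the numbers together: that is the whole 'thinking' step."""
    hidden = np.maximum(0, x @ W1)   # matrix multiply + ReLU
    return hidden @ W2

def train_step(x, target, lr=0.01):
    """'Training' is nudging the numbers so the output better matches the data."""
    global W2
    pred = forward(x)
    error = pred - target
    hidden = np.maximum(0, x @ W1)
    W2 -= lr * np.outer(hidden, error)   # crude gradient step on the last layer only

x = rng.normal(size=4)
train_step(x, target=np.array([1.0, 0.0]))
# Nobody wrote down the rules the tuned numbers now encode; they emerge from the data.
```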
(takes a big breath) This "thing" has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is, we don't actually know what internal models it creates, we don't know what patterns it extracted or internalized from the data that we fed it, we don't know what internal rules decide its behavior, we don't know what is going on inside there; current LLMs are a black box. We don't know what it learned, we don't know what its fundamental values are, we don't know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn't it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer will sit down and build the thing line by line, with all its behaviors specified. It is more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don't know exactly what it generates or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that to try and go inside and decipher what they are doing is almost intractable.
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and it has been making some moderate progress lately. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Puff! Ok so, now that this is all out of the way, I can go on to the last subject before I move on to part two of this video. The character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The madman! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979, wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that's terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, he is an AI doomer, convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it. To properly quote, (grabs a piece of paper and very pointedly reads from it) turn the cosmos into tiny squiggly molecules resembling paperclips whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now. Not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI, we will have created an agent which doesn't care about humans, which will care about something else entirely irrelevant to us, and which will seek to maximize that goal; and because it will be vastly more intelligent than humans, we won't be able to stop it. In fact, not only will we not be able to stop it, there won't be a fight at all. It will carry out its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important. It all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as "how to kill all humans without being detected or stopped". And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn't be capable of just taking over the world unimpeded, intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn't *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of that. Coming up with the plan, convincing people to follow it, delegating the tasks to the appropriate subagents: it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this, I am merely explaining his ideas.
But remember, it doesn’t stop there, like I said during his intro, he believes there will be “no fire alarm”. In fact for all we know, maybe AGI has already been created and its merely bidding its time and plotting in the background, trying to get more compute, trying to get smarter. (to be fair, he doesn’t think this is right now, but with the next iteration of gpt? Gpt 5 or 6? Well who knows). He thinks that the entire world should halt AI research and punish with multilateral international treaties any group or nation that doesn’t stop. going as far as putting military attacks on GPU farms as sanctions of those treaties.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it, we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “dying with dignity” where he essentially says all this, AGI will destroy us, there is no point in planning for the future or having children and that we should act as if we are already dead. This doesn’t mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but to merely have the basic dignity of acknowledging that we are probably not going to win. In every interview ive seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think its hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices giving it instant access to humanity. and  worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, wildly available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist; we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are on that specific level of alarm. The opinions vary across the field and from what I understand this level of hopelessness and defeatism is the minority opinion.
I WILL say, however, that what is NOT the minority opinion is that AGI IS actually dangerous, maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and I would not consider it something to be dismissed as an idea that experts don't take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations to keep it from doing so. I believe this is not just possible but probable and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said. I do have one key disagreement with yudkowsky. And partially the reason why I made this video was so that I could present this counterargument and maybe he, or someone that thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t, that would be really depressing.)
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don't I? As I said, I am no expert and this dude has probably spent far more time than me thinking about this. But I have seen most interviews that guy has been doing for a year, I have seen most of his debates and I have followed him on twitter for years now. (Also, to be clear, I AM a fan of the guy, I have read hpmor, three worlds collide, the dark lords answer, a girl intercorrupted, the sequences, and I TRIED to read planecrash; that last one didn't work out so well for me.) My point is, in all the material I have seen of Eliezer, I don't recall anyone ever giving him quite this specific argument I'm about to give.
It's a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I DO believe alignment is really hard. My key disagreement is specifically with the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, with humanity's supposed lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can't do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. "Aintibodies."
In the past humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium for them to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power, some would argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to take pause and reconsider using it as a weapon; we became so scared that we overregulated the technology to the point of it becoming almost economically unviable to deploy; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenal.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, and do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don't get a second chance. Here is where I think he is wrong: I think that if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously from diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won't be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other Yudkowsky detractors and claim that he says AGI will be basically a god. The AGI Yudkowsky proposes is not a god; just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, the dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet Earth. It would be humanity's greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, creating a powerful superintelligent AGI without flaws, without bugs, without glitches, would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that's easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multi-step plans in which it will not be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, colliding with outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I'm not saying an AGI capable of doing this won't be possible someday; I'm saying that to create an AGI capable of doing this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I'm saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers, weights and biases that gives rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I'm saying that AGI, when it fails, when humans screw it up, doesn't suddenly become more powerful than we ever expected; it's more likely that it just fails and collapses. To turn one of Eliezer's examples against him: when you screw up a rocket, it doesn't accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don't get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to build, and that if you fail at building an unaligned AGI, you don't get an unaligned AGI, you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I'd say! It means there is SOME safety margin, some space to screw up before we need to really start worrying. Furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up, will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won't be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I'm not stupid and I can try to anticipate what Yudkowsky might argue back and answer it before he says it. (Although I believe the guy is probably smarter than me, and if I follow his logic, I probably can't actually anticipate what he would argue to prove me wrong, much like I can't predict what moves Magnus Carlsen would make in a game of chess against me. I SHOULD predict that him proving me wrong is the likeliest option, even if I can't picture how he will do it. But you see, I believe in a little thing called debating with dignity. Wink.)
What I anticipate he would argue is that AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI in order to trick us into giving it access to more compute, or simply to bide its time and create an AGI smarter than itself. So even if we don't create a perfect unaligned AGI, this imperfect AGI would try to create one and succeed, and then THAT new AGI would be the world-ender to worry about.
Two things to that. First, this is filled with a lot of assumptions whose likelihood I don't know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI that is better than itself. My priors on all of these are dubious at best. Second, it feels like kicking the can down the road. I don't think creating an AGI capable of all of this is trivial to pull off on a first attempt. I think it's more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe even something genuinely threatening, but it won't be smart enough to pull it off effortlessly and flawlessly, because we humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn't argue that; maybe he would come up with some better, more insightful response I can't anticipate. If so, I'm waiting eagerly (although not TOO eagerly) for it.
PART THREE- CONCLUSION
So.
After all that, what is there left to say? Well, if everything I said checks out, then there is hope to be had. My first objective here was to give people who are not familiar with the subject a starting point, along with the basic arguments supporting the concept of AI risk and why it's something to be taken seriously rather than just the ravings of highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles' AI risk series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky's arguments, you can look up Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject with Eliezer.
My second purpose here was to provide an argument against Yudkowsky's brand of doomerism, both so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope it's not proven wrong; it would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn't make it any worse. If the sky is blue I want to believe that the sky is blue, and if the sky is not blue then I don't want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.
61 notes
zapperrr · 1 year ago
Text
Harnessing the Power of Artificial Intelligence in Web Development
Artificial Intelligence (AI) has emerged as a transformative force in various industries, and web development is no exception. From enhancing user experience to optimizing content, AI offers a plethora of benefits for web developers and businesses alike.
Introduction to Artificial Intelligence (AI) in Web Development
Understanding the basics of AI
AI involves the development of computer systems that can perform tasks that typically require human intelligence. These tasks include learning, reasoning, problem-solving, perception, and language understanding.
Evolution of AI in web development
In recent years, AI has revolutionized web development by enabling developers to create more dynamic and personalized websites and applications.
➦ Benefits of Integrating AI in Web Development
Enhanced user experience
By analyzing user data and behavior, AI algorithms can personalize content and recommend products or services tailored to individual preferences, thereby enhancing the overall user experience.
Personalization and customization
AI-powered algorithms can analyze user data in real-time to provide personalized recommendations, such as product suggestions, content recommendations, and targeted advertisements.
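To make this concrete, below is a minimal sketch of one way such a recommendation step might look, using item-to-item cosine similarity over a toy user-product interaction matrix. The products, ratings, and library choice (scikit-learn) are illustrative assumptions rather than a prescription for any particular stack.

```python
# A minimal sketch of item-based recommendations from a user-item interaction
# matrix, using cosine similarity. The ratings below are made-up toy data;
# a production system would use real behavioral data and a proper datastore.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# Rows = users, columns = products; values = interaction strength (e.g. ratings).
interactions = np.array([
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [0, 0, 5, 4],
])

# Similarity between products, based on which users interacted with them.
item_similarity = cosine_similarity(interactions.T)

def recommend(user_index: int, top_n: int = 2) -> list[int]:
    """Score unseen products by similarity to products the user already liked."""
    user_vector = interactions[user_index]
    scores = item_similarity @ user_vector
    scores[user_vector > 0] = -np.inf  # don't re-recommend products already seen
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(user_index=1))  # indices of the two strongest candidate products
```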
Improved efficiency and productivity
AI automation tools can streamline various web development tasks, such as code generation, testing, and debugging, leading to increased efficiency and productivity for developers.
➦ AI-Powered Web Design
Responsive design and adaptation
AI algorithms can analyze user devices and behavior to dynamically adjust website layouts and designs for optimal viewing experiences across various platforms and screen sizes.
Automated layout generation
AI-powered design tools can generate website layouts and templates based on user preferences, content requirements, and design trends, saving developers time and effort in the design process.
➦ AI in Content Creation and Optimization
Natural language processing (NLP) for content generation
AI-driven NLP algorithms can generate high-quality content, such as articles, blog posts, and product descriptions, based on user input, keywords, and topic relevance.
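As a rough illustration, the snippet below drafts content with the open-source Hugging Face `transformers` library. The model name ("gpt2") and the prompt are placeholder assumptions; a real workflow would choose a stronger model and keep a human editor in the loop.

```python
# A minimal sketch of AI-assisted content drafting. "gpt2" is a small demo
# model used here only as a placeholder; output quality will be limited.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Top benefits of responsive web design for small businesses:"
draft = generator(prompt, max_new_tokens=80, num_return_sequences=1)

# The generated draft still needs human review before publishing.
print(draft[0]["generated_text"])
```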
SEO optimization through AI tools
AI-powered SEO tools can analyze website content, keywords, and search engine algorithms to optimize website rankings and improve visibility in search engine results pages (SERPs).
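One building block behind such tools is term weighting. The hedged sketch below scores which terms best characterize each page using TF-IDF; the page texts are invented examples, and a real SEO tool would also factor in search volume, backlinks, and competitor content.

```python
# A rough sketch of keyword analysis with TF-IDF: find the terms that most
# distinguish each page. Page texts are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer

pages = [
    "responsive web design services for small business websites",
    "ecommerce web development with secure checkout and payment integration",
    "search engine optimization audits and keyword research for local shops",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(pages)
terms = vectorizer.get_feature_names_out()

for i in range(len(pages)):
    row = tfidf[i].toarray().ravel()
    top_terms = [terms[j] for j in row.argsort()[::-1][:3]]
    print(f"Page {i}: {top_terms}")
```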
➦ AI-Driven User Interaction
Chatbots and virtual assistants
AI-powered chatbots and virtual assistants can engage with website visitors in real-time, answering questions, providing assistance, and guiding users through various processes, such as product selection and checkout.
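For illustration, here is a toy intent-matching bot in plain Python. Production chatbots rely on trained language models or dialogue platforms; the intents and canned replies below are made-up assumptions that only show the basic request-to-response flow.

```python
# A toy chatbot: match a visitor's message to an intent by keyword, then
# return a canned reply. Intents and answers are invented placeholders.
INTENTS = {
    "shipping": (["shipping", "delivery", "arrive"], "Orders usually ship within 2 business days."),
    "returns": (["return", "refund", "exchange"], "You can return any item within 30 days."),
    "greeting": (["hello", "hi", "hey"], "Hi! How can I help you today?"),
}

def reply(message: str) -> str:
    text = message.lower()
    for keywords, answer in INTENTS.values():
        if any(keyword in text for keyword in keywords):
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("When will my order arrive?"))  # -> shipping answer
print(reply("hey there"))                   # -> greeting
```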
Predictive analytics for user behavior
AI algorithms can analyze user data and behavior to predict future actions and preferences, enabling businesses to anticipate user needs and tailor their offerings accordingly.
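A simple sketch of that idea: train a classifier on session features to estimate how likely a visitor is to convert. The features, labels, and thresholds below are synthetic assumptions, not real analytics data.

```python
# A hedged sketch of behavioral prediction: a random forest estimating whether
# a visitor will convert, trained on synthetic session data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.random(500) * 10,       # pages_viewed
    rng.random(500) * 600,      # seconds_on_site
    rng.integers(0, 2, 500),    # returning_visitor (0 or 1)
])
# Synthetic rule: engaged returning visitors convert more often.
y = ((X[:, 0] > 5) & (X[:, 2] == 1)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("conversion probability:", model.predict_proba([[8, 420, 1]])[0, 1])
```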
➦ Security Enhancement with AI
Fraud detection and prevention
AI algorithms can analyze user behavior and transaction data to detect and prevent fraudulent activities, such as unauthorized access, identity theft, and payment fraud.
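One common approach is unsupervised anomaly detection: flag transactions that look unlike historical behavior. The sketch below uses scikit-learn's IsolationForest; the amounts, hours, and contamination rate are invented for illustration, and a real fraud system would combine many more signals with human review.

```python
# A minimal anomaly-detection sketch for transaction screening.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# Features per transaction: [amount_usd, hour_of_day]; normal daytime purchases.
normal = np.column_stack([rng.normal(40, 15, 1000), rng.integers(8, 22, 1000)])
suspicious = np.array([[2500, 3], [1800, 4]])  # large amounts at odd hours

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

for tx in suspicious:
    flag = detector.predict([tx])[0]  # -1 means anomaly, 1 means normal
    print(tx, "-> FLAGGED" if flag == -1 else "-> ok")
```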
Cybersecurity measures powered by AI algorithms
AI-driven cybersecurity tools can identify and mitigate potential security threats, such as malware, phishing attacks, and data breaches, by analyzing network traffic and patterns of suspicious behavior.
➦ Challenges and Considerations
Ethical implications of AI in web development
The use of AI in web development raises ethical concerns regarding privacy, bias, and the potential for misuse or abuse of AI technologies.
Data privacy concerns
AI algorithms rely on vast amounts of user data to function effectively, raising concerns about data privacy, consent, and compliance with regulations such as the General Data Protection Regulation (GDPR).
➦ Future Trends in AI and Web Development
Advancements in machine learning algorithms
Continued advancements in machine learning algorithms, such as deep learning and reinforcement learning, are expected to further enhance AI capabilities in web development.
Integration of AI with IoT and blockchain
The integration of AI with the Internet of Things (IoT) and blockchain technologies holds the potential to create more intelligent and secure web applications and services.
➦ Conclusion
In conclusion, harnessing the power of artificial intelligence in web development offers numerous benefits, including enhanced user experience, improved efficiency, and personalized interactions. However, it is essential to address ethical considerations and data privacy concerns to ensure responsible and ethical use of AI technologies in web development.
➦ FAQs
1. How does AI enhance user experience in web development?
2. What are some examples of AI-powered web design tools?
3. How can AI algorithms optimize content for search engines?
4. What are the main challenges associated with integrating AI into web development?
5. What are some future trends in AI and web development?
At Zapperr, our AI and Machine Learning Mastery services open up a universe of possibilities in the digital realm. We understand that in the age of data, harnessing the power of artificial intelligence and machine learning can be a game-changer. Our dedicated team of experts is equipped to take your business to the next level by leveraging data-driven insights, intelligent algorithms, and cutting-edge technologies.
0 notes
darkmaga-returns · 3 months ago
Text
ECMWF’s AI Forecasting System (AIFS) is now fully operational, outperforming traditional methods by up to 20% in accuracy.
AIFS uses machine learning to predict weather faster, including cyclone paths 12 hours earlier than conventional models.
The system aids renewable energy planning with forecasts for solar radiation and wind speeds at 100 meters.
ECMWF collaborates globally, sharing open-source tools like Anemoi to advance AI weather modeling.
Experts stress AI complements—not replaces—human expertise, with potential to extend forecast limits beyond 15 days.
10 notes
lime-bloods · 11 months ago
Text
some less cohesive thoughts wrt something touched on in that last post.
i have assumed in some previous posts that by killing Calliope, Caliborn carves out some essential part of himself, and is thus cursed to an eternal Lack no matter the effort he might put into self-improvement. and while this is at least one interpretation proffered by the text, there is a fair argument i think to be made that it's a cruel or even ableist one. the idea that Caliborn is "stunted" is ultimately a bit of poking fun (possibly even at the author's own neuroses). while Caliborn makes it easy for us to come away with the impression that he's stupid, he's clearly not: his original plan to pass on the assassination of Calliope's dream self to Jack Noir is, yes, an evil plan, but it's a clever one! Calliope's underestimation of Caliborn's twisted genius is clearly a part of what allows the plan to work in the first place, and in retrospect this subterfuge clearly prefigures the kind of underhanded dealings that allowed Caliborn to take such complete control of Alternia.
I think it's very tempting to see Scratch's knack for manipulation as something Caliborn "stole" wholesale from Dirk-as-AR: the "Land of Someone's Handicrafts I Took" certainly comes across as a suggestion that Caliborn is incapable of truly creating anything for himself. but this too is just making fun of Homestuck's long-standing love affair with the Google image result photobash, and in the end the copy+paste only serves as one small step in Caliborn's creative journey. Lord English is a dark mirror of Hussie, after all, and to accuse Caliborn of being creatively bankrupt is to suggest Homestuck itself of lacking originality... but of course that's all part of the point. we can't necessarily assume Homestuck's default position is one of self-confidence; while it's never been particularly shy about the bits and pieces it aped from the works of fiction that came before it,* the comic crucially does set out to question the ethics of reusing ideas, or even of telling a story in the first place. that Hussie didn't even "create" his own characters - that they originate as some kind of timeless Platonic ideas that one merely plucks from the void when they're needed to tell a story - is essential to the comic's mythology; hell, how much of Homestuck even is there that isn't just a remix of Hussie's own previous work?
I asserted a couple years ago now that Homestuck is "only superficially" about creation and reproduction... but one particular rebuttal, that Homestuck actually very much is about reproduction in the sense that it is about the reproduction of images and ideas, has stuck with me since i first heard it. and though conversations about the difference between stealing / copying / learning / coming up with an original thought are obviously a LOT older than modern machine learning, given that Caliborn very literally goes on to become a Terminator-esque AI singularity (in a setting where all AI is just direct copies of living people's essences, no less!) and even played with early examples of tech bro grift a couple years before the debate really took off, I find it a fun thought exercise to ponder the ways in which Caliborn's contribution to Homestuck preempted the current discourse on algorithmically-generated art... which I suppose grows not just out of the more pedigreed argument about digital art as a medium, but probably stems all the way back to the dawn of comics as a medium, in all their entanglement with the burgeoning pop art movement. but that's about where my area of expertise ends.
*it's probably meaningful that the name of the planet Lord English "stole" his ideas for Alternia from, "befor-us", is so widely reinterpretable as referring to pretty much anything that came "before us".
28 notes
beardedmrbean · 2 months ago
Note
Did you hear that Chanel is giving grant money to CalArts to fund some kind of LLM/AI art initiative.
I had not until just now. I thought they were smart; how they managed to spell LLAMA wrong like that is the big question.
Let's go with the CalArts story on their gift.
[April 24, 2025 – Valencia, Calif.] California Institute of the Arts (CalArts) and the CHANEL Culture Fund together announce the CHANEL Center for Artists and Technology at CalArts, a visionary initiative that positions artists at the forefront of shaping the evolving technologies that define our world. The Center will provide students, faculty, and visiting fellows across the creative disciplines access to leading-edge equipment and software, allowing artists to explore and use new technologies as tools for their work. Creating opportunities for collaboration and driving innovation across disciplines, the initiative creates the conditions for artists to play an active role in developing the use and application of these emergent technologies.
The Center builds on CalArts’ legacy as a cross-disciplinary school of the arts, where experimentation in visual arts, music, film, performing arts, and dance has been nurtured since the institution’s founding. In this unprecedented initiative, artists will be empowered to use technology to shape creativity across disciplines—and, ultimately, to envision a better world.
Funded by a five-year, transformative gift from the CHANEL Culture Fund, the CHANEL Center for Artists and Technology establishes CalArts as the hub of a new ecosystem of arts and technology. The CHANEL Center will foster research, experimentation, mentorship, and the creation of new knowledge by connecting students, faculty, artists, and technologists—the thinkers and creators whose expertise and vision will define the future—with new technology and its applications. It will also activate a network of institutions throughout Southern California and beyond, linking museums, universities, and technology companies to share resources and knowledge.
The CHANEL Center at CalArts will also serve as a hub for the exchange of knowledge among artists and experts from CHANEL Culture Fund’s signature programs—including more than 50 initiatives and partnerships established since 2020 that support cultural innovators in advancing new ideas. Visiting fellows and artists will be drawn both from CalArts’ sphere and from the agile network of visionary creators, thinkers, and multidisciplinary artists whom CHANEL has supported over the past five years—a network that includes such luminaries as Cao Fei, Arthur Jafa, William Kentridge, and Jacolby Satterwhite. The CHANEL Center will also host an annual forum addressing artists’ engagement with emerging technologies, ensuring that knowledge gained is knowledge shared.
The Center’s funding provides foundational resources for equipment; visiting experts, artists, and technologists-in-residence; graduate fellowships; and faculty and staff with specific expertise in future-focused research and creation. With the foundation of the CHANEL Center, CalArts empowers its students, faculty, and visiting artists to shape the future through transformative technology and new modes of thinking.
The first initiative of its kind at an independent arts school, the CHANEL Center consists of two areas of focus: one concentrating on Artificial Intelligence (AI) and Machine Learning, and the other on Digital Imaging. The project cultivates a multidisciplinary ecosystem—encompassing visual art, music, performance, and still, moving, projected, and immersive imagery—connecting CalArts and a global network of artists and technologists, other colleges and universities, arts institutions, and industry partners from technology, the arts, and beyond. ____________________________________-
I wish they'd write this kind of stuff in English.
Legendary art school California Institute of the Arts (CalArts) will soon be home to a major high-tech initiative funded by luxury brand Chanel’s Culture Fund. Billed as the first initiative of its kind at an independent art school, the Chanel Center for Artists and Technology will focus on artificial intelligence and machine learning as well as digital imaging. While they aren’t disclosing the dollar amount of the grant, the project will fund dozens of new roles as well as fellowships for artists and technologists-in-residence and graduate students along with cutting-edge equipment and software. 
That's easier to understand I think.
Interesting.
4 notes