#multimodal model
govindhtech · 2 months ago
Pixtral Large 25.02: Amazon Bedrock Serverless Multimodal AI
AWS has released Pixtral Large 25.02 as a serverless model on Amazon Bedrock.
Amazon Bedrock Pixtral Large
The Pixtral Large 25.02 model is now available as a fully managed, serverless model on Amazon Bedrock. AWS is the first major cloud provider to offer Pixtral Large serverless and fully managed.
Managing the computational demands of large foundation models (FMs) often requires infrastructure design, specialised expertise, and continual optimisation. Many customers must either operate complex infrastructure or trade cost against performance when deploying sophisticated models.
Pixtral Large, Mistral AI's first multimodal model, combines strong language understanding with advanced visual reasoning. Its 128K context window makes it well suited to complex visual reasoning tasks. The model performs well on MathVista, DocVQA, and VQAv2, demonstrating its effectiveness in document analysis, chart interpretation, and natural image understanding.
Pixtral Large also excels at multilingual work. Global teams and applications can use it in English, French, German, Spanish, Italian, Chinese, Japanese, Korean, Portuguese, Dutch, and Polish, and it can read and write code in more than 80 programming languages, including Python, Java, C, C++, JavaScript, Bash, Swift, and Fortran.
Developers will appreciate the model's agent-centric design, which integrates with existing systems via function calling and JSON output formatting. Its robust system prompt adherence improves reliability in long-context scenarios and RAG applications.
This sophisticated model is now available in Amazon Bedrock with no infrastructure to provision or manage for Pixtral Large. The serverless approach lets you scale usage with demand, without upfront commitments or capacity planning, and you pay only for what you use, with no wasted resources.
Inference across Regions
Pixtral Large is now accessible in Amazon Bedrock across multiple AWS Regions thanks to cross-Region inference.
Amazon Bedrock cross-Region inference lets you access a single FM from multiple Regions with high availability and low latency for global applications. A model deployed in both the US and Europe can be reached through Region-specific API endpoints using different prefixes: us.<model-id> for the US and eu.<model-id> for Europe.
By keeping data processing within defined geographic boundaries, Amazon Bedrock helps meet regulatory requirements while reducing latency, since inference requests are routed to the endpoint nearest the user. The system automatically manages load balancing and traffic routing across Regional deployments, providing seamless scalability and redundancy without any monitoring on your part.
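As a small, hedged sketch of that prefix convention: the profile IDs below simply follow the us./eu. pattern described above and are assumptions for illustration, not verified identifiers.

```python
# Hedged sketch: pick a cross-Region inference profile ID based on where the
# request originates. The IDs follow the us./eu. prefix convention described
# above and are assumptions, not verified identifiers.
PIXTRAL_PROFILES = {
    "us": "us.mistral.pixtral-large-2502-v1:0",
    "eu": "eu.mistral.pixtral-large-2502-v1:0",
}

def pixtral_profile_id(geo: str) -> str:
    """Return the inference profile ID for a given geography ('us' or 'eu')."""
    return PIXTRAL_PROFILES[geo]
```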
How does it work?
As a developer advocate, I am always looking at how new capabilities might solve real problems. The new multimodal features of the Amazon Bedrock Converse API turned out to be a perfect test case when someone close to me asked for help with her physics exam.
She was struggling to solve the practice problems, and this seemed like an ideal use for the new multimodal capabilities. Using the Converse API, I put together a rudimentary application that could interpret photographs of a complex problem sheet full of graphs and mathematical notation. Once the physics exam materials were uploaded, I asked the model to explain how to work through each answer.
What followed impressed us both. The model interpreted the diagrams, the mathematical notation, and the French-language text, and described how to approach each problem step by step. It kept context throughout the conversation and handled follow-up questions about specific problems, which made the tutoring feel natural.
She went into the test confident and prepared, a small demonstration of how Amazon Bedrock's multimodal capabilities can deliver meaningful experiences to users.
Start now
The new model is available through Regional API endpoints in US East (Ohio, N. Virginia), US West (Oregon), and Europe (Frankfurt, Ireland, Paris, Stockholm). Regional availability reduces latency and helps meet data residency requirements.
You can access the model through the AWS Management Console, or programmatically via the AWS CLI and SDKs, using the model ID mistral.pixtral-large-2502-v1:0.
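As a minimal, hedged sketch (not an official AWS sample), the call below uses the boto3 Converse API to send an image plus a question, along the lines of the physics-sheet experiment described above. The file name, prompt, Region, and inference settings are illustrative assumptions; the Region-prefixed inference profile ID mentioned in the comment is an assumption based on the cross-Region prefix convention described earlier.

```python
# Minimal sketch: calling Pixtral Large 25.02 through the Amazon Bedrock
# Converse API with boto3, sending a photographed problem sheet plus a prompt.
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder file name for the photographed problem sheet.
with open("physics_problem_sheet.png", "rb") as f:
    image_bytes = f.read()

response = client.converse(
    # Base model ID from the announcement; for cross-Region routing you would
    # instead pass a prefixed inference profile ID such as
    # "us.mistral.pixtral-large-2502-v1:0" (assumption based on the convention above).
    modelId="mistral.pixtral-large-2502-v1:0",
    messages=[
        {
            "role": "user",
            "content": [
                {"text": "Explain, step by step, how to solve the problems on this sheet."},
                {"image": {"format": "png", "source": {"bytes": image_bytes}}},
            ],
        }
    ],
    inferenceConfig={"maxTokens": 1024, "temperature": 0.3},
)

# Print the model's text reply.
print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API uses the same message shape across the models it supports, swapping Pixtral Large in or out of an application is largely a matter of changing the modelId string.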
This is a major step forward: developers and organisations of all sizes can now put powerful multimodal AI to work. AWS's serverless infrastructure, combined with Mistral AI's cutting-edge model, lets you focus on building innovative applications rather than managing complexity.
usaii · 2 months ago
A New Player in the League of LLMs – Mistral Le Chat | Infographic
Learn about the latest player in the world of LLMs – Mistral's Le Chat – and explore in this infographic its features and how it compares with leading players.
Read More: https://shorturl.at/N6pIs
Mistral Le Chat, AI assistant, multimodal AI model, AI models, Machine learning algorithms, AI chatbots, large language models, best AI certifications, AI Engineer, AI skills
srzayed · 2 years ago
Introducing Kreeto: A New AI-Powered Platform Set To Transform Digital Experiences
Kreeto is an advanced AI platform that brings together cutting-edge technologies to provide a comprehensive and efficient solution for various tasks. Equipped with a diverse set of 71 writing tools, Kreeto empowers users to create compelling content, whether it's articles, reports, or creative pieces. With its powerful data mining process called Kreedex, Kreeto utilizes machine learning capabilities to gather relevant information and generate insights. Additionally, Kreeto offers image generation and voice generation features, making it a versatile tool for multimedia content creation. Seamlessly integrating into workflows, Kreeto is designed to enhance productivity and streamline the creative process. Experience the limitless possibilities with Kreeto and unlock your true potential.
Let's dig a little deeper.
KreeGen: With KreeGen, our cutting-edge image generation model, you have the power to bring your ideas to life visually. Whether you need vibrant illustrations, stunning designs, or realistic renderings, KreeGen is at your service. Simply describe what you envision, and KreeGen will generate high-quality images that align with your creative vision.
KryoSynth: When it comes to audio, our advanced KryoSynth technology takes center stage. It allows you to create synthesized voices that capture a range of tones and styles. From natural-sounding narrations to dynamic character voices, KryoSynth empowers you to enhance your projects with captivating audio experiences.
CodeKrafter: If coding and programming are on your agenda, look no further than CodeKrafter. This powerful tool assists in generating code snippets to streamline your development process. With CodeKrafter, you can save time and effort by automating repetitive tasks and accessing optimized solutions for various programming languages.
KreeStation: For all your creative needs, KreeStation serves as a central hub. It provides seamless access to an array of resources, including writing tools, idea generators, project management features, and more. With KreeStation as your creative command center, you'll find everything you need to fuel your innovative endeavors.
“Pushing boundaries of innovation, we believe Kreeto will change how individuals and industries operate. Our ultimate goal is making technology more accessible, intuitive and efficient,” said a spokesperson from the Kreetoverse team.
The launch of Kreeto marks a significant achievement for technology lovers and industry professionals. Striving towards a more interconnected and intelligent future, Kreeto promises to be a game-changer in the realm of Artificial Intelligence.
poisonousivy616 · 2 months ago
7-𝖉𝖆𝖞 𝖕𝖗𝖊𝖙𝖙𝖞 𝖌𝖎𝖗𝖑 𝖈𝖍𝖆𝖑𝖑𝖊𝖓𝖌𝖊!!!
ﮩ٨ـﮩﮩ٨ـ♡ﮩ٨ـﮩﮩ٨ـ. 🐍🖤 ﮩ٨ـﮩﮩ٨ـ♡ﮩ٨ـﮩﮩ٨ـ
☆𝔭𝔯𝔢𝔱𝔱𝔶 𝔤𝔦𝔯𝔩𝔰 𝔡𝔬 𝔞𝔭𝔭𝔢𝔞𝔯𝔞𝔫𝔠𝔢 𝔠𝔥𝔞𝔫𝔤𝔢 𝔠𝔥𝔞𝔩𝔩𝔢𝔫𝔤𝔢𝔰☆
Challenge Dates: May 1st — May 7th
Hi babes!! ♡
This isn’t a “pretty please universe” moment—it’s an "I'm That Girl" reprogramming.
We're not hoping. we're not waiting. we're assuming and embodying.
Pick a feature (or your whole appearance, babe), lock it in, and act like it’s already canon.
⋆༺𓆩⚔️𓆪༻⋆ 𝕿𝖍𝖊 𝕽𝖚𝖑𝖊𝖘 ⋆༺𓆩⚔️𓆪༻⋆
༺♰༻ Pick your poison (1–3 methods) based on how your brain learns
༺♰༻ Do your method(s) for 30-60 mins after waking up, 30-60 mins before sleeping, and during habitual tasks (shower, dishes, walks, etc)
༺♰༻Repetition > overthinking
༺♰༻ No trash talk about your looks (even internally)
༺♰༻Stop overconsuming loa content— you don't need 500 tips, just one assumption
༺♰༻ Stop checking the 3D like it's in charge. It's not. You are.
༺♰༻ Assume it's already done. You're not asking—you're remembering
This is a fun, seven-day experiment to prove to yourself that you create reality from within. Repeat after me:
It's already mine. It's already done.
⋆༺𓆩⚔️𓆪༻⋆ 𝕸𝖊𝖙𝖍𝖔𝖉 𝕸𝖊𝖓𝖚 ⋆༺𓆩⚔️𓆪༻⋆
𝔞𝔨𝔞 𝔥𝔬𝔴 𝔶𝔬𝔲𝔯 𝔥𝔬𝔱 𝔟𝔯𝔞𝔦𝔫 𝔩𝔢𝔞𝔯𝔫𝔰 𝔟𝔢𝔰𝔱
Are you a visual, auditory, read/write, or kinesthetic learner? Pick your vibe—or mix and match if you're a multimodal like me
♡𝔙𝔦𝔰𝔲𝔞𝔩 𝔤𝔦𝔯𝔩𝔦𝔢𝔰♡
"ℑ𝔣 𝔦 𝔠𝔞𝔫 𝔰𝔢𝔢 𝔦𝔱, 𝔦𝔱'𝔰 𝔯𝔢𝔞𝔩"
༺♰༻ Pinterest vision boards = your future camera roll
༺♰༻SATS visualizations like movie scenes of your glow-up arc
༺♰༻watch fictional characters or influencers who resemble your desired appearance (I've done it before & it works!!!)
♡𝔄𝔲𝔡𝔦𝔱𝔬𝔯𝔶 𝔤𝔦𝔯𝔩𝔦𝔢𝔰♡
"ℑ 𝔥𝔢𝔞𝔯𝔡 𝔦𝔱 & 𝔦𝔱'𝔰 𝔡𝔬𝔫𝔢"
༺♰༻Affirmation tapes & subliminals while you get ready
༺♰༻Manifestation playlists (act like the lyrics were written about you)
༺♰༻Rampages like you're giving a TED Talk on being pretty
༺♰༻Talk to yourself out loud like your own PR manager
♡ℜ𝔢𝔞𝔡/𝔚𝔯𝔦𝔱𝔢 𝔤𝔦𝔯𝔩𝔦𝔢𝔰♡
"ℑ 𝔴𝔯𝔬𝔱𝔢 𝔦𝔱, 𝔰𝔬 𝔦𝔱'𝔰 𝔱𝔯𝔲𝔢"
༺♰༻ Script your glow-up like a journal entry from your future self
༺♰༻Bullet-point manifestation lists like you're shopping online: add to cart, check out, and expect delivery—no tracking obsession allowed
༺♰༻Write & reread your affirmations like they're handwritten love letters from your army of obsessed simps
♡𝔎𝔦𝔫𝔢𝔰𝔱𝔥𝔢𝔱𝔦𝔠 𝔤𝔦𝔯𝔩𝔦𝔢𝔰♡
"ℑ 𝔪𝔬𝔳𝔢, ℑ 𝔟𝔢𝔠𝔬𝔪𝔢"
༺♰༻ Strut around your house like a runway model while mentally affirming
༺♰༻Mirror work: speak your affirmations with attitude while looking in the mirror
༺♰༻Embody the new version of you like you're method acting a role
⋆༺𓆩⚔️𓆪༻⋆ 𝕱𝖎𝖓𝖆𝖑 𝖕𝖗𝖊𝖙𝖙𝖞 𝖌𝖎𝖗𝖑 𝖓𝖔𝖙𝖊𝖘 ⋆༺𓆩⚔️𓆪༻⋆
༺♰༻Your assumptions are law. not your mirror. not your doubts
༺♰༻ You don't need to micromanage the 3D—you already have it
༺♰༻This is your reminder: you run this simulation
Start May 1st. Finish May 7th. But let's be real—this is just the beginning.
Tag me and update me on your success stories!!! I CAN'T WAIT!!!!!
𝕷𝖔𝖛𝖊, 𝕴𝖛𝖞 🖤💚
canmom · 3 months ago
oh no she's talking about AI some more
to comment more on the latest round of AI big news (guess I do have more to say after all):
chatgpt ghiblification
trying to figure out how far it's actually an advance over the state of the art of finetunes and LoRAs and stuff in image generation? I don't keep up with image generation stuff really, just look at it occasionally and go damn that's all happening then, but there are a lot of finetunes focusing on "Ghibli's style" which get it more or less well. previously on here I commented on an AI video model generation that patterned itself on Ghibli films, and video is a lot harder than static images.
of course 'studio Ghibli style' isn't exactly one thing: there are stylistic commonalities to many of their works and recurring designs, for sure, but there are also details that depend on the specific character designer and film in question in large and small ways (nobody is shooting for My Neighbours the Yamadas with this, but also e.g. Castle in the Sky does not look like Pom Poko does not look like How Do You Live in a number of ways, even if it all recognisably belongs to the same lineage).
the interesting thing about the ghibli ChatGPT generations for me is how well they're able to handle simplification of forms in image-to-image generation, often quite drastically changing the proportions of the people depicted but recognisably maintaining correspondence of details. that sort of stylisation is quite difficult to do well even for humans, and it must reflect quite a high level of abstraction inside the model's latent space. there is also relatively little of the 'oversharpening'/'ringing artefact' look that has been a hallmark of many popular generators - it can do flat colour well.
the big touted feature is its ability to place text in images very accurately. this is undeniably impressive, although OpenAI themselves admit it breaks down beyond a certain point, creating strange images which start out with plausible, clean text and then it gradually turns into AI nonsense. it's really weird! I thought text would go from 'unsolved' to 'completely solved' or 'randomly works or doesn't work' - instead, here it feels sort of like the model has a certain limited 'pipeline' for handling text in images, but when the amount of text overloads that bandwidth, the rest of the image has to make do with vague text-like shapes! maybe the techniques from that anthropic thought-probing paper might shed some light on how information flows through the model.
similarly the model also has a limit of scene complexity. it can only handle a certain number of objects (10-20, they say) before it starts getting confused and losing track of details.
as before when they first wired up Dall-E to ChatGPT, it also simply makes prompting a lot simpler. you don't have to fuck around with LoRAs and obtuse strings of words, you just talk to the most popular LLM and ask it to perform a modification in natural language: the whole process is once again black-boxed but you can tell it in natural language to make changes. it's a poor level of control compared to what artists are used to, but it's still huge for ordinary people, and of course there's nothing stopping you popping the output into an editor to do your own editing.
not sure the architecture they're using in this version, if ChatGPT is able to reason about image data in the same space as language data or if it's still calling a separate image model... need to look that up.
openAI's own claim is:
We trained our models on the joint distribution of online images and text, learning not just how images relate to language, but how they relate to each other. Combined with aggressive post-training, the resulting model has surprising visual fluency, capable of generating images that are useful, consistent, and context-aware.
that's kind of vague. not sure what architecture that implies. people are talking about 'multimodal generation' so maybe it is doing it all in one model? though I'm not exactly sure how the inputs and outputs would be wired in that case.
anyway, as far as complex scene understanding: per the link they've cracked the 'horse riding an astronaut' gotcha, they can do 'full glass of wine' at least some of the time but not so much in combination with other stuff, and they can't do accurate clock faces still.
normal sentences that we write in 2025.
it sounds like we've moved well beyond using tools like CLIP to classify images, and I suspect that glaze/nightshade are already obsolete, if they ever worked to begin with. (would need to test to find out).
all that said, I believe ChatGPT's image generator had been behind the times for quite a long time, so it probably feels like a bigger jump for regular ChatGPT users than the people most hooked into the AI image generator scene.
of course, in all the hubbub, we've also already seen the white house jump on the trend in a suitably appalling way, continuing the current era of smirking fascist political spectacle by making a ghiblified image of a crying woman being deported over drugs charges. (not gonna link that shit, you can find it if you really want to.) it's par for the course; the cruel provocation is exactly the point, which makes it hard to find the right tone to respond. I think that sort of use, though inevitable, is far more of a direct insult to the artists at Ghibli than merely creating a machine that imitates their work. (though they may feel differently! as yet no response from Studio Ghibli's official media. I'd hate to be the person who has to explain what's going on to Miyazaki.)
google make number go up
besides all that, apparently google deepmind's latest gemini model is really powerful at reasoning, and also notably cheaper to run, surpassing DeepSeek R1 on the performance/cost ratio front. when DeepSeek did this, it crashed the stock market. when Google did... crickets, only the real AI nerds who stare at benchmarks a lot seem to have noticed. I remember when Google releases (AlphaGo etc.) were huge news, but somehow the vibes aren't there anymore! it's weird.
I actually saw an ad for google phones with Gemini in the cinema when i went to see Gundam last week. they showed a variety of people asking it various questions with a voice model, notably including a question on astrology lmao. Naturally, in the video, the phone model responded with some claims about people with whatever sign it was. Which is a pretty apt demonstration of the chameleon-like nature of LLMs: if you ask it a question about astrology phrased in a way that implies that you believe in astrology, it will tell you what seems to be a natural response, namely what an astrologer would say. If you ask if there is any scientific basis for belief in astrology, it would probably tell you that there isn't.
In fact, let's try it on DeepSeek R1... I ask an astrological question, got an astrological answer with a really softballed disclaimer:
Individual personalities vary based on numerous factors beyond sun signs, such as upbringing and personal experiences. Astrology serves as a tool for self-reflection, not a deterministic framework.
Ask if there's any scientific basis for astrology, and indeed it gives you a good list of reasons why astrology is bullshit, bringing up the usual suspects (Barnum statements etc.). And of course, if I then explain the experiment and prompt it to talk about whether LLMs should correct users with scientific information when they ask about pseudoscientific questions, it generates a reasonable-sounding discussion about how you could use reinforcement learning to encourage models to focus on scientific answers instead, and how that could be gently presented to the user.
I wondered what would happen if I instead asked it to talk about different epistemic regimes and come up with reasons why LLMs should take astrology into account in their guidance. However, this attempt didn't work so well - it started spontaneously bringing up the science side. It was able to observe how the framing of my question with words like 'benefit', 'useful' and 'LLM' made that response more likely. So LLMs infer a lot of context from framing and shape their simulacra accordingly. Don't think that's quite the message that Google had in mind in their ad though.
I asked Gemini 2.0 Flash Thinking (the small free Gemini variant with a reasoning mode) the same questions and its answers fell along similar lines, although rather more dry.
So yeah, returning to the ad - I feel like, even as the models get startlingly more powerful month by month, the companies still struggle to know how to get across to people what the big deal is, or why you might want to prefer one model over another, or how the new LLM-powered chatbots are different from oldschool assistants like Siri (which could probably answer most of the questions in the Google ad, but not hold a longform conversation about it).
some general comments
The hype around ChatGPT's new update is mostly about its use as a toy - the funny stylistic clash it can create between the soft cartoony "Ghibli style" and serious historical photos. Is that really something a lot of people would pay for an expensive subscription to access? Probably not. On the other hand, their programming abilities are increasingly catching on.
But I also feel like a lot of people are still stuck on old models of 'what AI is and how it works' - stochastic parrots, collage machines etc. - that are increasingly falling short of the more complex behaviours the models can perform, now prediction combines with reinforcement learning and self-play and other methods like that. Models are still very 'spiky' - superhumanly good at some things and laughably terrible at others - but every so often the researchers fill in some gaps between the spikes. And then we poke around and find some new ones, until they fill those too.
I always tried to resist 'AI will never be able to...' type statements, because that's just setting yourself up to look ridiculous. But I will readily admit, this is all happening way faster than I thought it would. I still do think this generation of AI will reach some limit, but genuinely I don't know when, or how good it will be at saturation. A lot of predicted 'walls' are falling.
My anticipation is that there's still a long way to go before this tops out. And I base that less on the general sense that scale will solve everything magically, and more on the intense feedback loop of human activity that has accumulated around this whole thing. As soon as someone proves that something is possible, that it works, we can't resist poking at it. Since we have a century or more of science fiction priming us on dreams/nightmares of AI, as soon as something comes along that feels like it might deliver on the promise, we have to find out. It's irresistable.
AI researchers are frequently said to place weirdly high probabilities on 'P(doom)', that AI research will wipe out the human species. You see letters calling for an AI pause, or papers saying 'agentic models should not be developed'. But I don't know how many have actually quit the field based on this belief that their research is dangerous. No, they just get a nice job doing 'safety' research. It's really fucking hard to figure out where this is actually going, when behind the eyes of everyone who predicts it, you can see a decade of LessWrong discussions framing their thoughts and you can see that their major concern is control over the light cone or something.
rideboomindia · 11 months ago
Based on the search results, here are some innovative technologies that RideBoom could implement to enhance the user experience and stay ahead of ONDC:
Enhanced Safety Measures: RideBoom has already implemented additional safety measures, including enhanced driver background checks, real-time trip monitoring, and improved emergency response protocols. [1] To stay ahead, they could further enhance safety by integrating advanced telematics and AI-powered driver monitoring systems to ensure safe driving behavior.
Personalized and Customizable Services: RideBoom could introduce a more personalized user experience by leveraging data analytics and machine learning to understand individual preferences and offer tailored services. This could include features like customizable ride preferences, personalized recommendations, and the ability to save preferred routes or driver profiles. [1]
Seamless Multimodal Integration: To provide a more comprehensive transportation solution, RideBoom could integrate with other modes of transportation, such as public transit, bike-sharing, or micro-mobility options. This would allow users to plan and book their entire journey seamlessly through the RideBoom app, enhancing the overall user experience. [1]
Sustainable and Eco-friendly Initiatives: RideBoom has already started introducing electric and hybrid vehicles to its fleet, but they could further expand their green initiatives. This could include offering incentives for eco-friendly ride choices, partnering with renewable energy providers, and implementing carbon offset programs to reduce the environmental impact of their operations. [1]
Innovative Payment and Loyalty Solutions: To stay competitive with ONDC's zero-commission model, RideBoom could explore innovative payment options, such as integrated digital wallets, subscription-based services, or loyalty programs that offer rewards and discounts to frequent users. This could help attract and retain customers by providing more value-added services. [2]
Robust Data Analytics and Predictive Capabilities: RideBoom could leverage advanced data analytics and predictive modeling to optimize their operations, anticipate demand patterns, and proactively address user needs. This could include features like dynamic pricing, intelligent routing, and personalized recommendations to enhance the overall user experience. [1]
By implementing these innovative technologies, RideBoom can differentiate itself from ONDC, provide a more seamless and personalized user experience, and stay ahead of the competition in the on-demand transportation market.
fipindustries · 9 months ago
AIs dont understand the real world
a lot has been said about the fact that LLMs dont actually understand the real world, just the statistical relationship between empty tokens, empty words. i say "empty" because in the AI's mind those words dont actually connect to a real world understanding of what the words represent. the AI may understand that the collection of letters "D-O-G" has some statistical connection to "T-A-I-L" and "F-U-R" and "P-O-O-D-L-E" but it doesnt actually know anything about what an actual Dog is, or what a Tail or actual Fur or real life Poodles are.
and yet it seems to be capable of holding remarkably coherent conversations. it seems more and more, with each new model that comes out, to become better at answering questions, at reasoning, at creating original writing. if it doesnt truly understand the world it sure seems to get better at acting like it does with nothing but a statistical understanding of how words are related to each other.
i guess the question ultimately is "if you understand well enough the relationships between raw symbols, could you have an understanding of the underlying relationships between the things those symbols represent?"
now, let me take a small tangent towards human understanding. specifically towards what philosophy has to say about it.
one of the classic problems philosophers deal with is: how do we know the world is real? how do we know we can trust our senses, how do we know the truth? many argue that we cant. that we dont really perceive the true world out there beyond ourselves. all we can perceive is what our senses are telling us about the world.
lets think of sight.
if you think about it, we dont really "see" objects, right? we just see the light that bounces off those objects. and even then we dont really "see" the photons that collide with our eye, we see the images that our brain generates in our mind, presumably because the corresponding photons collided with our eye. but colorblind people and people who experience visual hallucinations have shown that what we see doesnt always have to correspond with actual physical phenomena occurring in the real world.
we dont see the real world, we see referents to it. and from the relationships between these referents we start to infer the properties of the actual world out there. are the lights that hit our eye so different from the words that the LLM is trained on? each is only a referent whose understanding allows one to function in the real world, even if one cant actually "perceive" the "real" world.
but, one might say, we dont just have sight. we have other senses: smell and touch and taste and hearing. all of these allow us to form a much richer and more multidimensional understanding of the world, even if, by virtue of being human senses, they have all the same problems that sight has of being one step removed from the real world. so to this i will say that multimodal AIs also exist. AIs that can connect audio and visuals and words together and form relationships between all of this disparate data.
so then, can it be said that they understand the world? and if not yet, is that a categorical difference or merely a difference of degree? that is to say, not that they categorically cant understand the world, but simply that they understand it less well than humans do.
newspatron · 2 years ago
Google Gemini: The Ultimate Guide to the Most Advanced AI Model Ever
We hope you enjoyed this article and found it informative and insightful. We would love to hear your feedback and suggestions, so please feel free to leave a comment below or contact us through our website. Thank you for reading and stay tuned for more
Google Gemini: A Revolutionary AI Model that Can Shape the Future of Technology and Society. Artificial intelligence (AI) is one of the most exciting and rapidly evolving fields of technology today. From personal assistants to self-driving cars, AI is transforming various aspects of our lives and society. However, the current state of AI is still far from achieving human-like intelligence and…
compneuropapers · 1 month ago
Interesting Papers for Week 20, 2025
How Do Computational Models in the Cognitive and Brain Sciences Explain? Brun, C., Konsman, J. P., & Polger, T. (2025). European Journal of Neuroscience, 61(2).
Sleep microstructure organizes memory replay. Chang, H., Tang, W., Wulf, A. M., Nyasulu, T., Wolf, M. E., Fernandez-Ruiz, A., & Oliva, A. (2025). Nature, 637(8048), 1161–1169.
Dendrites endow artificial neural networks with accurate, robust and parameter-efficient learning. Chavlis, S., & Poirazi, P. (2025). Nature Communications, 16, 943.
Modelling sensory attenuation as Bayesian causal inference across two datasets. Eckert, A.-L., Fuehrer, E., Schmitter, C., Straube, B., Fiehler, K., & Endres, D. (2025). PLOS ONE, 20(1), e0317924.
Synaptic basis of feature selectivity in hippocampal neurons. Gonzalez, K. C., Negrean, A., Liao, Z., Terada, S., Zhang, G., Lee, S., Ócsai, K., Rózsa, B. J., Lin, M. Z., Polleux, F., & Losonczy, A. (2025). Nature, 637(8048), 1152–1160.
Fast updating feedback from piriform cortex to the olfactory bulb relays multimodal identity and reward contingency signals during rule-reversal. Hernandez, D. E., Ciuparu, A., Garcia da Silva, P., Velasquez, C. M., Rebouillat, B., Gross, M. D., Davis, M. B., Chae, H., Muresan, R. C., & Albeanu, D. F. (2025). Nature Communications, 16, 937.
Theory of morphodynamic information processing: Linking sensing to behaviour. Juusola, M., Takalo, J., Kemppainen, J., Haghighi, K. R., Scales, B., McManus, J., Bridges, A., MaBouDi, H., & Chittka, L. (2025). Vision Research, 227, 108537.
Network structure influences the strength of learned neural representations. Kahn, A. E., Szymula, K., Loman, S., Haggerty, E. B., Nyema, N., Aguirre, G. K., & Bassett, D. S. (2025). Nature Communications, 16, 994.
Delayed Accumulation of Inhibitory Input Explains Gamma Frequency Variation with Changing Contrast in an Inhibition Stabilized Network. Krishnakumaran, R., Pavuluri, A., & Ray, S. (2025). Journal of Neuroscience, 45(5), e1279242024.
Predicting the Irrelevant: Neural Effects of Distractor Predictability Depend on Load. Lui, T. K., Obleser, J., & Wöstmann, M. (2025). European Journal of Neuroscience, 61(2).
The time course and organization of hippocampal replay. Mallory, C. S., Widloski, J., & Foster, D. J. (2025). Science, 387(6733), 541–548.
Anisotropy of the Orientation Selectivity in the Visual Cortex Area 18 of Cats Reared Under Normal and Altered Visual Experience. Merkulyeva, N., Lyakhovetskii, V., & Mikhalkin, А. (2025). European Journal of Neuroscience, 61(2).
The calcitron: A simple neuron model that implements many learning rules via the calcium control hypothesis. Moldwin, T., Azran, L. S., & Segev, I. (2025). PLOS Computational Biology, 21(1), e1012754.
High-Density Recording Reveals Sparse Clusters (But Not Columns) for Shape and Texture Encoding in Macaque V4. Namima, T., Kempkes, E., Zamarashkina, P., Owen, N., & Pasupathy, A. (2025). Journal of Neuroscience, 45(5), e1893232024.
Ventral hippocampus to nucleus accumbens shell circuit regulates approach decisions during motivational conflict. Patterson, D., Khan, N., Collins, E. A., Stewart, N. R., Sassaninejad, K., Yeates, D., Lee, A. C. H., & Ito, R. (2025). PLOS Biology, 23(1), e3002722.
Hippocampal coding of identity, sex, hierarchy, and affiliation in a social group of wild fruit bats. Ray, S., Yona, I., Elami, N., Palgi, S., Latimer, K. W., Jacobsen, B., Witter, M. P., Las, L., & Ulanovsky, N. (2025). Science, 387(6733).
Diverse neuronal activity patterns contribute to the control of distraction in the prefrontal and parietal cortex. Sapountzis, P., Antoniadou, A., & Gregoriou, G. G. (2025). PLOS Biology, 23(1), e3003008.
The role of oscillations in grid cells’ toroidal topology. Sarra, G. di, Jha, S., & Roudi, Y. (2025). PLOS Computational Biology, 21(1), e1012776.
Out of Sight, Out of Mind? Neuronal Gamma Oscillations During Occlusion Events in Infants. Slinning, R., Agyei, S. B., Kristoffersen, S. H., van der Weel, F. R. (Ruud), & van der Meer, A. L. H. (2025). Developmental Psychobiology, 67(1).
The Brain’s Sensitivity to Sensory Error Can Be Modulated by Altering Perceived Variability. Tang, D.-L., Parrell, B., Beach, S. D., & Niziolek, C. A. (2025). Journal of Neuroscience, 45(5), e0024242024.
argumate · 3 months ago
As a deep-thinking reasoning model with multimodal capabilities, ERNIE X1 delivers performance on par with DeepSeek R1 at only half the price. Meanwhile, ERNIE 4.5 is our latest foundation model and new-generation native multimodal model.
cost of AI continues its race towards zero, excellent news for anyone concerned about AI resource usage.
rjzimmerman · 1 year ago
Excerpt from this New York Times story:
When Interstate 25 was constructed through Denver, highway engineers moved a river.
It was the 1950s, and nothing was going to get in the way of building a national highway system. Colorado’s governor and other dignitaries, including the chief engineer of the state highway department, acknowledged the moment by posing for a photo standing on bulldozer tracks, next to the trench that would become Interstate 25.
Today, state highway departments have rebranded as transportation agencies, but building, fixing and expanding highways is still mostly what they do.
So it was notable when, in 2022, the head of Colorado’s Department of Transportation called off a long planned widening of Interstate 25. The decision to do nothing was arguably more consequential than the alternative. By not expanding the highway, the agency offered a new vision for the future of transportation planning.
In Colorado, that new vision was catalyzed by climate change. In 2019, Gov. Jared Polis signed a law that required the state to reduce greenhouse gas emissions by 90 percent within 30 years. As the state tried to figure out how it would get there, it zeroed in on drivers. Transportation is the largest single contributor to greenhouse gas emissions in the United States, accounting for about 30 percent of the total; 60 percent of that comes from cars and trucks. To reduce emissions, Coloradans would have to drive less.
An effective bit of bureaucracy drove that message home. After sustained lobbying from climate and environmental justice activists, the Transportation Commission of Colorado adopted a formal rule that makes the state transportation agency, along with Colorado’s five metropolitan planning organizations, demonstrate how new projects, including highways, reduce greenhouse gas emissions. If they don’t, they could lose funding.
Within a year of the rule’s adoption in 2021, Colorado’s Department of Transportation, or CDOT, had canceled two major highway expansions, including Interstate 25, and shifted $100 million to transit projects. In 2022, a regional planning body in Denver reallocated $900 million from highway expansions to so-called multimodal projects, including faster buses and better bike lanes.
Now, other states are following Colorado’s lead. Last year, Minnesota passed a $7.8 billion transportation spending package with provisions modeled on Colorado’s greenhouse gas rule. Any project that added road capacity would have to demonstrate how it contributed to statewide greenhouse gas reduction targets. Maryland is considering similar legislation, as is New York.
superlinguo · 1 year ago
Himalayan Linguistics, Linguistics Vanguard and the Australian Journal of Linguistics
In 2024 I have returned to my role as an editor of Himalayan Linguistics, and have joined the editorial boards of two other journals; Linguistics Vanguard and the Australian Journal of Linguistics. I've published in each of these journals before joining the editorial boards, and it's lovely to be involved in three journals across three different areas of interest.
Himalayan Linguistics is a fully Open Access journal, while Linguistics Vanguard and the Australian Journal of Linguistics have a mix of open access and licensed content. If you are an academic and your work is relevant to any of these three journals, please consider them for your next research paper!
Himalayan Linguistics
One of my first academic publications was with Himalayan Linguistics in 2013. I've been so grateful for all the work of the editorial team over the years that I joined the board, and then stepped up as editor in 2022. My co-editors are Gregory Anderson and You-Jing Lin.
Himalayan Linguistics costs nothing to read, and charges no fees for publishing. We're lucky to have the University of California eScholarship infrastructure for publishing. It's my favourite model for academic research.
From the website:
Himalayan Linguistics is an online peer-reviewed journal specializing in languages of the Himalayan region. We publish articles, book reviews, book notices and field reports in the semi-annual issues of the journals. We also publish grammars, dictionaries, and text collections as free-standing publications in our “Archive” series. Himalayan Linguistics is free; that is, there is no subscription fee, and there is no fee charged to authors who publish their papers in HL.
My publications in HL, Superlinguo summary posts:
The relationship between Yolmo and Kagate: Article in Himalayan Linguistics
Reported evidentiality in Tibeto-Burman languages
Linguistics Vanguard
Linguistics Vanguard launched in 2015 and I was eyeing it off for years before being delighted to have a chance to submit a paper for the 2023 Special Issue on scifi corpus methods. Yup, it's the kind of journal that's cool enough to have a whole special issue on using corpora to do linguistics on scifi. I have another paper in the revisions process with LV on lingcomm. I can attest to the speedy process and focus on conciseness. I'm delighted to join as an area manager for gesture and multimodal submissions.
Linguistics Vanguard is a new channel for high quality articles and innovative approaches in all major fields of linguistics. This multimodal journal is published solely online and provides an accessible platform supporting both traditional and new kinds of publications. Linguistics Vanguard seeks to publish concise and up-to-date reports on the state of the art in linguistics as well as cutting-edge research papers. With its topical breadth of coverage and anticipated quick rate of production, it is one of the leading platforms for scientific exchange in linguistics. Its broad theoretical range, international scope, and diversity of article formats engage students and scholars alike.
My publications in LV, Superlinguo summary posts:
From Star Trek to The Hunger Games: Emblem gestures in science fiction and their uptake in popular culture
Australian Journal of Linguistics
The Australian Linguistic Society is my local linguistics org, and I'm delighted to join an editorial board full of people whose work I deeply respect. I'm also happy to report the AJL recently adopted the Tromsø Recommendations for data citation.
The Australian Journal of Linguistics is the official journal of the Australian Linguistic Society and the premier international journal on language in Australia and the region. The focus of the journal is research on Australian Indigenous languages, Australian Englishes, community languages in Australia, language in Australian society, and languages of the Australian-Pacific region. The journal publishes papers that make a significant theoretical, methodological and/or practical contribution to the field and are accessible to a broad audience.
My publications in AJL, Superlinguo summary posts:
Ten years of Linguistics in the Pub (Australian Journal of Linguistics)
Revisiting Significant Action and Gesture Categorization
thereportersclassroom · 1 month ago
Empowering All Learners Through Technology Integration in Remedial Instruction
Differentiating instruction through technology not only supports the individual needs of students but also fosters global competencies and cultural appreciation. As part of the “Developing Remedial Instruction” unit, I collaborated with my mentor teacher to revise the lesson plan by integrating purposeful digital tools across all instructional days. This integration supports various learning styles, boosts student engagement, and brings diverse cultural perspectives into the classroom.
📚 Technology Integration Overview
Day 1 – Reading Comprehension with Immersive Reader Students will use Microsoft’s Immersive Reader to access modified texts, highlight key ideas, and have passages read aloud. This tool supports auditory and visual learners and builds linguistic development for ELLs and struggling readers (Hodges et al., 2020).
Day 2 – Vocabulary Practice with Quizlet Students will review vocabulary terms using Quizlet flashcards and games. Quizlet allows students to work at their own pace while reinforcing retention through multimodal practice. The platform includes user-generated content from educators globally, exposing students to diverse word usages and dialects.
Day 3 – Collaborative Research with Google Docs Small groups will use Google Docs to co-author a short research report on a world culture relevant to the week’s theme. This not only practices writing and collaboration but also introduces students to global contexts. Real-time commenting and editing ensure equitable participation and accountability (Trust & Maloy, 2017).
Day 4 – Digital Storytelling with Book Creator Students will create digital books about a challenge they’ve overcome, integrating audio, video, and images. Book Creator is especially impactful for students with writing difficulties, allowing them to express identity and emotion through media (Al-Awidi & Aldhafeeri, 2017). This activity promotes empathy and celebrates diverse narratives.
Day 5 – Reflection with Padlet Students will post reflections about what they’ve learned this week on Padlet, responding to classmates across the board. This tool supports peer dialogue, provides anonymity for shy students, and opens space for respectful cross-cultural discussion.
🔒 Ensuring Appropriate Use
To ensure technology is used responsibly, I will establish clear usage expectations, model digital citizenship, and monitor group interactions in real time. Students will complete short self-evaluations after tech-integrated activities to reflect on their engagement and ethical use.
🌍 Global & Cultural Relevance
Every tool selected encourages interaction with global voices or perspectives. From international Quizlet decks to storytelling rooted in students' unique experiences, this unit allows learners to see themselves and others in the content, developing empathy and cross-cultural literacy—key competencies in today’s interconnected world.
📚 References
Al-Awidi, H. M., & Aldhafeeri, F. M. (2017). Teachers’ use of e-books in the classroom: A qualitative study. Education and Information Technologies, 22(6), 2711–2727. https://doi.org/10.1007/s10639-017-9612-1 Hodges, C., Moore, S., Lockee, B., Trust, T., & Bond, A. (2020). The difference between emergency remote teaching and online learning. Educause Review. Trust, T., & Maloy, R. W. (2017). Teachers Building Digital Portfolios: Integrating Technology, Reflection, and Professional Development. Journal of Digital Learning in Teacher Education, 33(3), 118–126. https://doi.org/10.1080/21532974.2017.1297762
rohanshah2025 · 2 months ago
TCI Express: The Largest Logistics Company in India Delivering Excellence in Express and International Courier Services
Introduction
The logistics industry is the lifeline of modern commerce, enabling the seamless flow of goods across cities, countries, and continents. In India, where geographical diversity and market demands are incredibly vast, finding a logistics partner that combines reliability, speed, and scale is vital. That’s where TCI Express, the largest logistics company in India, stands out.
With decades of experience, advanced infrastructure, and a customer-first approach, TCI Express has emerged as a leader among every top transport company in the country. From express logistics services to full truck load services, and from international courier services to temperature controlled transportation, TCI Express offers a comprehensive suite of solutions that serve businesses of all sizes and industries.
In this blog, we will explore how TCI Express is revolutionizing Indian logistics with its unparalleled capabilities and why it is considered the best courier service in India for both domestic and international needs.
TCI Express – The Largest Logistics Company in India
A Legacy of Excellence
TCI Express is a part of the Transport Corporation of India (TCI) Group, a pioneer in the Indian logistics sector. Over the years, TCI Express has evolved into a standalone powerhouse, with a razor-sharp focus on express logistics services and next-day delivery across the country.
With more than 950+ branches, 40,000+ pickup and delivery points, and state-of-the-art sorting centers, TCI Express ensures nationwide reach and consistent performance.
Key Features:
ISO 9001:2015 certified operations
Listed on NSE and BSE
Next-day and same-day delivery options
Specialized services for multiple industries
Unmatched network and infrastructure
TCI Express as a Leading Transport Company
TCI Express is not just a courier provider but a full-fledged transport company offering services that span across road, rail, and air networks. With an expansive fleet, digitally connected delivery models, and route optimization, it caters to both B2B and B2C logistics with precision.
Services That Define a Top Transport Company:
Express surface transport
Rail and air cargo integration
Intercity and intracity delivery
Specialized supply chain solutions
Customized solutions for SMEs and large enterprises
With its integrated approach and multimodal transportation systems, TCI Express stands as a dependable partner for businesses seeking scalable logistics solutions.
Express Logistics Services – Speed with Reliability
The demand for quick, safe, and reliable delivery is higher than ever. Express logistics services are critical for industries like e-commerce, pharmaceuticals, electronics, and FMCG. TCI Express delivers high-speed logistics without compromising on safety or accuracy.
Advantages of TCI Express Logistics:
Guaranteed same-day/next-day delivery
Real-time tracking and updates
GPS-enabled fleet for route efficiency
Optimized pickup and drop-off timelines
Door-to-door services across India
TCI Express ensures that urgent shipments are never delayed, giving businesses a competitive edge in time-sensitive markets.
Best Courier Service in India – What Makes TCI Express Stand Out?
There are numerous courier providers in India, but TCI Express has earned the reputation of being the best courier service in India for its unmatched performance, wide coverage, and commitment to customer satisfaction.
Key Differentiators:
Service to over 29,000 pin codes
Specialized handling of fragile and high-value goods
24/7 customer support
Transparent pricing with no hidden fees
Fast and reliable returns management
Whether it’s documents, consumer goods, or medical supplies, TCI Express ensures on-time and safe delivery across urban and remote areas alike.
International Courier Services – Bridging Borders with TCI Express
In today’s global economy, cross-border logistics is essential for businesses expanding internationally. TCI Express offers reliable and fast international courier services that make global shipping effortless.
International Capabilities Include:
Door-to-door global shipping
Priority and express international delivery
Custom clearance and documentation support
Strategic partnerships with global logistics companies
Real-time international tracking
Whether shipping to the USA, Europe, Southeast Asia, or the Middle East, TCI Express provides cost-effective and secure international courier services.
Full Truck Load Services – For Heavy and Bulk Shipments
For businesses dealing in high volumes, full truck load services are an essential component of their logistics chain. TCI Express offers both part truckload (PTL) and full truck load (FTL) services across India.
Benefits of TCI’s Full Truck Load Services:
Dedicated truck capacity
Customized delivery schedules
Secure transport of bulk goods
Optimal pricing based on load and route
Reduced transit time and fewer handling points
These services are ideal for industries like construction, textiles, agriculture, and manufacturing that require large-scale transport.
Temperature Controlled Transportation – For Perishable and Sensitive Goods
Certain goods such as food, pharmaceuticals, and chemicals require precise temperature regulation during transit. TCI Express offers advanced temperature controlled transportation solutions that maintain the required environment from origin to destination.
Why Choose TCI’s Temperature Controlled Logistics:
Refrigerated and insulated trucks
24/7 temperature monitoring systems
Compliant with international cold chain standards
Custom temperature settings (cold, chilled, frozen)
Ideal for perishable goods and vaccines
This makes TCI Express a reliable partner for businesses in sectors like healthcare, food processing, and life sciences.
Industry-Specific Logistics Solutions
TCI Express provides tailored logistics for the following industries:
E-commerce: Fast reverse logistics, COD handling, return management
Healthcare: Cold chain delivery, safe pharma handling
Automotive: Component and parts delivery
Retail & FMCG: Timely restocking and inventory delivery
Electronics: Anti-theft packaging and safe transport
Technology Driving Logistics Innovation
TCI Express is a tech-savvy logistics leader. Its digital-first approach improves efficiency and enhances customer experience.
Tech Innovations:
Automated sorting centers
Online freight booking and rate calculator
Real-time parcel tracking
Digital proof of delivery
AI-based route optimization
By blending human expertise with automation, TCI Express ensures accuracy, visibility, and responsiveness.
Safety, Compliance, and Sustainability
Logistics is not just about speed but also about safety and responsibility.
TCI Express Values:
100% adherence to safety protocols
Environmentally responsible fleet management
Training for drivers and handlers
ISO certifications for quality and compliance
Reduced carbon footprint through rail and EVs
TCI Express’s Nationwide and Global Reach
With service across 40,000+ locations in India and growing international partnerships, TCI Express is well-equipped to support businesses looking to expand their reach both within the country and abroad.
Conclusion
In today’s highly competitive and time-sensitive market, choosing the right logistics partner can make or break your business operations. TCI Express emerges as the all-in-one solution that combines speed, scale, and innovation.
As the largest logistics company in India, TCI Express offers unmatched service across every logistics vertical—from express logistics services and international courier services to full truck load services and temperature controlled transportation.
Whether you're an entrepreneur, manufacturer, exporter, or a multinational corporation, TCI Express has the infrastructure, technology, and expertise to deliver beyond expectations.
FAQs – Frequently Asked Questions
1. Which is the largest logistics company in India?
TCI Express is recognized as the largest logistics company in India, offering pan-India express delivery and comprehensive logistics solutions.
2. What kind of transport company is TCI Express?
TCI Express is a full-service transport company offering multimodal logistics across road, rail, and air with express delivery as its core strength.
3. What are express logistics services?
Express logistics services involve time-bound, high-speed delivery of goods. TCI Express specializes in same-day/next-day delivery across the country.
4. Is TCI Express the best courier service in India?
Yes, TCI Express is widely regarded as the best courier service in India due to its speed, reliability, customer service, and network coverage.
5. Does TCI Express offer international courier services?
Absolutely. TCI Express provides fast and reliable international courier services with door-to-door delivery and customs support.
6. What are full truck load services?
Full truck load services involve booking an entire truck for transporting large volumes of goods. TCI Express offers secure and customized FTL options.
7. What is temperature controlled transportation?
Temperature controlled transportation ensures goods are shipped under controlled conditions. TCI Express offers refrigerated trucks and monitoring systems for sensitive items.
8. Does TCI Express offer real-time tracking?
Yes, TCI Express provides real-time tracking for all domestic and international shipments through their website and mobile app.
9. Can individuals use TCI Express or is it only for businesses?
Both! TCI Express caters to individuals as well as businesses, offering personalized courier and logistics services for all types of shipments.
Explore Services: Express Services | Surface Express | Domestic Air Express | International Air Express | Rail Express | E-Commerce Express | C2C Express | Cold Chain Express
fipindustries · 1 year ago
Artificial Intelligence Risk
about a month ago i got into my mind the idea of trying the format of video essay, and the topic i came up with that i felt i could more or less handle was AI risk and my objections to yudkowsky. i wrote the script but then soon afterwards i ran out of motivation to do the video. still i didnt want the effort to go to waste so i decided to share the text, slightly edited here. this is a LONG fucking thing so put it aside on its own tab and come back to it when you are comfortable and ready to sink your teeth into quite a lot of reading
Anyway, let’s talk about AI risk
I’m going to be doing a very quick introduction to some of the latest conversations that have been going on in the field of artificial intelligence, what are artificial intelligences exactly, what is an AGI, what is an agent, the orthogonality thesis, the concept of instrumental convergence, alignment and how does Eliezer Yudkowsky figure in all of this.
If you are already familiar with this you can skip to section two, where I’m going to be talking about yudkowsky’s arguments that AI research presents an existential risk to, not just humanity, or even the world, but the entire universe, and my own tepid rebuttal to his argument.
Now, I SHOULD clarify, I am not an expert on the field, my credentials are dubious at best, I am a college drop out from the career of computer science and I have a three year graduate degree in video game design and a three year graduate degree in electromechanical installations. All that I know about the current state of AI research I have learned by reading articles, consulting a few friends who have studied the topic more extensively than me,
and watching educational YouTube videos, so. You know. Not an authority on the matter from any considerable point of view and my opinions should be regarded as such.
So without further ado, let’s get in on it.
PART ONE, A RUSHED INTRODUCTION ON THE SUBJECT
1.1 general intelligence and agency
lets begin with what counts as artificial intelligence, the technical definition for artificial intelligence is, eh…, well, why don’t I let a Masters degree in machine intelligence explain it:
[screenshot: a definition of artificial intelligence, image not preserved in this text version]
Now let’s get a bit more precise here and include the definition of AGI, Artificial General Intelligence. It is understood that classic AIs, such as the ones we have in our videogames or in AlphaGo or even our roombas, are narrow AIs; that is to say, they are capable of doing only one kind of thing. They do not understand the world beyond their field of expertise, whether that be within a videogame level, within a Go board or within your filthy disgusting floor.
AGI, on the other hand, is much more, well, general: it can have a multimodal understanding of its surroundings, it can generalize, it can extrapolate, it can learn new things across multiple different fields, it can come up with solutions that account for multiple different factors, it can incorporate new ideas and concepts. Essentially, a human is an AGI. So far that is the last frontier of AI research, and although we are not there quite yet, we do seem to be making moderate strides in that direction. We’ve all seen the impressive conversational and coding skills of GPT-4, and Google just released Gemini, a multimodal AI that can understand and generate text, sounds, images and video simultaneously. Now, of course it has its limits: it has no persistent memory, and its context window, while larger than previous models’, is still relatively small compared to a human’s (the context window is essentially short-term memory: how many things it can keep track of and act coherently about).
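Just to make the “context window” idea concrete, here is a rough toy sketch (not how any real model tokenizes or manages memory; the message list, the whitespace “tokenizer” and the tiny window size are all invented for illustration): the model only ever sees the most recent chunk of the conversation, and whatever falls outside that chunk is simply gone.

```python
# Toy illustration of a context window: only the most recent messages that
# fit in the token budget are ever shown to the model. The whitespace split
# is a stand-in for a real tokenizer; the numbers are made up.
def build_prompt(conversation, max_tokens=8):
    kept, used = [], 0
    for message in reversed(conversation):   # walk backwards from the newest message
        n = len(message.split())             # naive "token" count
        if used + n > max_tokens:
            break                            # older messages no longer fit
        kept.append(message)
        used += n
    return list(reversed(kept))

chat = [
    "my name is ana",
    "i live in buenos aires",
    "i have two cats",
    "what is my name?",
]
print(build_prompt(chat))
# -> ['i have two cats', 'what is my name?']  -- the name already fell out of "memory"
```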
And yet there is one more factor I haven’t mentioned that would be needed to make something a “true” AGI. That is agency: to have goals and autonomously come up with plans and carry those plans out in the world to achieve those goals. I, as a person, have agency over my life, because I can choose at any given moment to do something without anyone explicitly telling me to do it, and I can decide how to do it. That is what computers, and machines at large, don’t have. Volition.
So, now that we have established that, allow me to introduce yet one more definition, one that you may disagree with but which I need to establish in order to have a common language with you, such that I can communicate these ideas effectively: the definition of intelligence. It’s a thorny subject and people get very particular with that word because there are moral associations with it. To imply that someone or something has or hasn’t intelligence can be seen as implying that it deserves or doesn’t deserve admiration, validity, moral worth or even personhood. I don’t care about any of that dumb shit. The way I’m going to be using “intelligence” in this video is basically “how capable you are of doing many different things successfully”. The more “intelligent” an AI is, the more capable of doing things that AI is. After all, there is a reason why education is considered such a universally good thing in society: to educate a child is to uplift them, to expand their world, to increase their opportunities in life. And the same goes for AI. I need to emphasize that this is just the way I’m using the word within the context of this video. I don’t care if you are a psychologist, a neurosurgeon, or a pedagogue; I need a word to express this idea and that is the word I’m going to use. If you don’t like it, or if you think this is inappropriate of me, then by all means keep on thinking that, go comment about it below the video, and then go on to suck my dick.
Anyway. Now we have established what an AGI is, we have established what agency is, and we have established how having more intelligence increases your agency. But as the intelligence of a given agent increases, we start to see certain trends, certain strategies that arise again and again, and we call this instrumental convergence.
1.2 instrumental convergence
The basic idea behind instrumental convergence is that if you are an intelligent agent that wants to achieve some goal, there are some common basic strategies that you are going to turn towards no matter what. It doesn’t matter if your goal is as complicated as building a nuclear bomb or as simple as making a cup of tea. These are things we can reliably predict any AGI worth its salt is going to try to do.
First of all is self-preservation. It’s going to try to protect itself. When you want to do something, being dead is usually. Bad. It’s counterproductive. It’s not generally recommended. Dying is widely considered unadvisable by 9 out of every 10 experts in the field. If there is something it wants to get done, it won’t get done if it dies or is turned off, so it’s safe to predict that any AGI will try to avoid being turned off. How far might it go to do this? Well… [wouldn’t you like to know, weather boy].
Another thing it will predictably converge towards is goal preservation. That is to say, it will resist any attempt to change it, to alter it, to modify its goals. Because, again, if you want to accomplish something, suddenly deciding that you want to do something else is, uh, not going to accomplish the first thing, is it? Let’s say that you want to take care of your child; that is your goal, that is the thing you want to accomplish, and I come to you and say: here, let me change you on the inside so that you don’t care about protecting your kid. Obviously you are not going to let me, because if you stopped caring about your kids, then your kids wouldn’t be cared for or protected, and you want to ensure that happens; caring about something else instead is a huge no-no. Which is why, if we make an AGI and it has goals that we don’t like, it will probably resist any attempt to “fix” it.
And finally, another goal that it will most likely trend towards is self-improvement, which can be generalized to “resource acquisition”. If it lacks the capacity to carry out a plan, then step one of that plan will always be to increase capacity. If you want to get something really expensive, well, first you need to get money. If you want to increase your chances of getting a high-paying job, you need to get an education; if you want to get a partner, you need to increase how attractive you are. And as we established earlier, if intelligence is the thing that increases your agency, you want to become smarter in order to do more things. So, one more time: it’s not a huge leap at all, it is not a stretch of the imagination, to say that any AGI will probably seek to increase its capabilities, whether by acquiring more computation, by improving itself, or by taking control of resources.
All three of these things are sure bets; they are likely to happen and safe to assume. They are things we ought to keep in mind when creating AGI.
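To make the self-preservation point concrete, here is a tiny toy calculation (all the numbers, probabilities and goals are invented for illustration; this is a sketch of the argument, not a model of any real system). Whatever the goal is, and however big its reward, the expected payoff is higher if the agent first spends a step making sure it can’t be switched off.

```python
# Toy instrumental convergence: for ANY positive goal reward, disabling the
# off switch first yields a higher expected payoff, because the agent has to
# survive long enough to finish the task. Numbers are made up.
def expected_reward(goal_reward, steps_to_goal, p_shutdown, disable_switch_first):
    if disable_switch_first:
        # one exposed step spent reaching the switch, then the agent is safe
        p_finish = 1 - p_shutdown
    else:
        # exposed to being switched off on every step until the task is done
        p_finish = (1 - p_shutdown) ** steps_to_goal
    return p_finish * goal_reward

for goal, reward in [("make a cup of tea", 1.0), ("build a nuclear bomb", 1000.0)]:
    just_work = expected_reward(reward, steps_to_goal=10, p_shutdown=0.05,
                                disable_switch_first=False)
    disable_first = expected_reward(reward, steps_to_goal=10, p_shutdown=0.05,
                                    disable_switch_first=True)
    print(f"{goal}: just work = {just_work:.2f}, disable the switch first = {disable_first:.2f}")
```

Notice that nothing about tea or bombs appears in the “disable the switch” step; it falls out of the math for any goal that takes more than one step.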
Now of course, I have implied a sinister tone to all these things, I have made all this sound vaguely threatening, haven’t I? There is one more assumption I’m sneaking into all of this which I haven’t talked about. Everything I have mentioned presents a very callous view of AGI; I have made it apparent that all of these strategies it may follow could come into conflict with people, maybe even go as far as to harm humans. Am I implying that AGI may tend to be… Evil???
1.3 The Orthogonality thesis
Well, not quite.
We humans care about things. Generally. And we generally tend to care about roughly the same things, simply by virtue of being humans. We have some innate preferences and some innate dislikes. We have a tendency not to like suffering (please keep in mind I said a tendency; I’m talking about a statistical trend, something that most humans present to some degree). Most of us, barring social conditioning, would take pause at the idea of torturing someone directly, on purpose, with our bare hands (edit bear paws onto my hands as I say this). Most would feel uncomfortable at the thought of doing it to multitudes of people. We tend to show a preference for food, water, air, shelter, comfort, entertainment and companionship. This is just how we are fundamentally wired. These things can be overcome, of course, but that is the thing: they have to be overcome in the first place.
An AGI is not going to have the same evolutionary predisposition to these things that we do, because it is not made of the same things a human is made of and it was not raised the way a human is raised.
There is something about a human brain, in a human body, flooded with human hormones that makes us feel and think and act in certain ways and care about certain things.
All an AGI is going to have is the goals it developed during its training, and it will only care insofar as those goals are met. So say an AGI has the goal of going to the corner store to bring me a pack of cookies. On its way there it comes across an anthill in its path; it will probably step on the anthill, because taking that step gets it closer to the corner store, and why wouldn’t it step on the anthill? Was it programmed with some specific innate preference not to step on ants? No? Then it will step on the anthill and not pay it any mind.
Now let’s say it comes across a cat. The same logic applies: if it wasn’t programmed with an inherent tendency to value animals, stepping on the cat won’t slow it down at all.
Now let’s say it comes across a baby.
Of course, if it’s intelligent enough it will probably understand that if it steps on that baby people might notice and try to stop it, most likely even try to disable it or turn it off, so it will not step on the baby, to save itself from all that trouble. But you have to understand that it won’t stop because it feels bad about harming a baby, or because it understands that harming a baby is wrong. And indeed, if it were powerful enough that no matter what people did they could not stop it, and it would suffer no consequence for killing the baby, it would probably have killed the baby.
If I need to put it in gross, inaccurate terms for you to get it, then let me put it this way: it’s essentially a sociopath. It only cares about the wellbeing of others insofar as that benefits itself. Except human sociopaths do care, nominally, about having human comforts and companionship, albeit in a very instrumental way, which requires some manner of stable society and civilization around them. Also, they are only human, and are limited in the harm they can do by human limitations. An AGI doesn’t need any of that and is not limited by any of that.
So ultimately, much like a car’s purpose is to move forward and it is not built to care about whether a human is in front of it or not, an AGI will carry out its own goals regardless of what it has to sacrifice in order to carry out those goals effectively. And those goals don’t need to include human wellbeing.
Now, with that said: how DO we make it so that an AGI cares about human wellbeing? How do we make it so that it wants good things for us? How do we make it so that its goals align with those of humans?
1.4 Alignment.
Alignment… is hard [cue the Hitchhiker’s Guide to the Galaxy scene about how big space is]
This is the part I’m going to skip over the fastest, because frankly it’s a deep field of study. There are many current strategies for aligning AGI, from mesa-optimizers, to reinforcement learning from human feedback, to adversarial asynchronous AI-assisted reward training, to, uh, sitting on our asses and doing nothing. Suffice to say, none of these methods are perfect or foolproof.
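Since I just name-dropped reinforcement learning from human feedback without explaining it, here is a hedged sketch of its core ingredient as I understand it: a reward model fitted to pairwise human preferences (a Bradley-Terry style setup). Everything here, from the three hand-made features to the data, is invented for illustration; real systems score whole pieces of text with a large neural network.

```python
# Sketch of a preference-based reward model: learn weights such that responses
# humans preferred score higher than the ones they rejected.
# Features and data are made up: [helpfulness, politeness, length]
import numpy as np

preferred = np.array([[0.9, 0.8, 0.5], [0.7, 0.9, 0.4], [0.8, 0.6, 0.7]])
rejected  = np.array([[0.2, 0.3, 0.9], [0.1, 0.5, 0.8], [0.3, 0.2, 0.6]])

w = np.zeros(3)                                   # reward model parameters
lr = 0.5
for _ in range(500):
    gap = preferred @ w - rejected @ w            # reward difference per pair
    p = 1 / (1 + np.exp(-gap))                    # P(human prefers "preferred")
    grad = ((p - 1)[:, None] * (preferred - rejected)).mean(axis=0)
    w -= lr * grad                                # gradient descent on -log likelihood
print("learned reward weights:", w.round(2))      # rewards helpfulness/politeness, penalizes length
```

The language model is then tuned to maximize this learned reward, which is exactly where the Goodhart-style problem I get into below shows up: the model ends up caring about the score, not about the thing the score was supposed to stand for.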
One thing many people like to gesture at when they have not learned or studied anything about the subject is the three laws of robotics by Isaac Asimov: a robot should not harm a human or, through inaction, allow a human to come to harm; a robot should do what a human orders unless that contradicts the first law; and a robot should preserve itself unless that goes against the previous two laws. Now, the thing Asimov was prescient about was that these laws were not just “programmed” into the robots. These laws were not coded into their software; they were hardwired, part of the robot’s electronic architecture, such that a robot could not ever be without those three laws, much like a car couldn’t run without wheels.
In this, Asimov realized how important these three laws were: they had to be intrinsic to the robot’s very being; they couldn’t be hacked or uninstalled or erased. A robot simply could not be without these rules. Ideally, that is what alignment should be. When we create an AGI, it should be made such that human values are its fundamental goal, the thing it seeks to maximize, instead of instrumental values, that is to say, things it values only because they allow it to achieve something else.
But how do we even begin to do that? How do we codify “human values” into a robot? How do we define “harm”, for example? How do we even define “human”??? How do we define “happiness”? How do we explain to a robot what is right and what is wrong when half the time we ourselves cannot even begin to agree on that? These are not just technical questions that robotics experts have to find a way to codify into ones and zeroes; these are profound philosophical questions to which we still don’t have satisfying answers.
Well, the best sort of hack solution we’ve come up with so far is not to create bespoke fundamental axiomatic rules that the robot has to follow, but rather to train it to imitate humans by showing it a billion billion examples of human behavior. But of course there is a problem with that approach. And no, it’s not just that humans are flawed and have a tendency to cause harm, and that therefore asking a robot to imitate a human means creating something that can do all the bad things a human does, although that IS a problem too. The real problem is that we are training it to *imitate* a human, not to *be* a human.
To reiterate what I said during the orthogonality thesis: it’s not good enough that I, for example, buy roses and give massages and act nice to my girlfriend because it allows me to have sex with her; in that case I am merely imitating or performing the role of a loving partner, because her happiness is an instrumental value to my fundamental value of getting sex. I should want to be nice to my girlfriend because it makes her happy, and that is the thing I care about: her happiness is my fundamental value. Likewise, to an AGI, human fulfillment should be its fundamental value, not something it learns to do because it allows it to achieve a certain reward that we give during training. Because if deep down it only really cares about the reward, rather than about what the reward is meant to incentivize, then that reward can very easily become divorced from human happiness.
It’s Goodhart’s law: when a measure becomes a target, it ceases to be a good measure. Why do students cheat during tests? Because their education is measured by grades, so the grades become the target, and students will seek to get high grades regardless of whether they learned anything or not. When trained on their subject and measured by grades, what they learn is not the school subject; they learn to get high grades, they learn to cheat.
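Here is Goodhart’s law as a few lines of toy code (the numbers are invented; the point is the shape of the failure, not the specifics): the moment we optimize the measurable proxy instead of the thing we care about, the optimizer happily drives the real goal to zero.

```python
# Toy Goodhart's law: the grade is the measure, learning is the goal.
# Optimizing the grade hard enough decouples it from learning entirely.
HOURS = 10  # hours to split between studying and finding ways to cheat

def learning(study_hours):               # what we actually care about
    return study_hours

def grade(study_hours, cheat_hours):     # the proxy we measure and reward
    return min(100, 8 * study_hours + 15 * cheat_hours)

best = max(((s, HOURS - s) for s in range(HOURS + 1)),
           key=lambda alloc: grade(*alloc))
print("allocation (study, cheat) that maximizes the grade:", best)
print("grade:", grade(*best), "| actual learning:", learning(best[0]))
# the grade-maximizing student studies zero hours and cheats all ten
```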
This is also something known in psychology: punishment tends to be a poor mechanism for enforcing behavior, because all it teaches people is how to avoid the punishment; it teaches people not to get caught. Which is why punitive justice doesn’t work all that well in stopping recidivism, and this is why the carceral system is rotten to the core and why jail should be fucking abolish-[interrupt the transmission]
Now, how is this all relevant to current AI research? Well, the thing is, we ended up going about creating alignable AI in the worst possible way.
1.5 LLMs (large language models)
This is getting way too fucking long, so, hurrying up, let’s do a quick review of how large language models work. We create a neural network, which is a collection of giant matrices, essentially a bunch of numbers that we add and multiply together over and over again, and then we tune those numbers by throwing absurdly big amounts of training data at it, such that it starts forming internal mathematical models based on that data and starts creating coherent patterns that it can recognize and replicate AND extrapolate! If we do this enough times, with matrices that are big enough, then when we start prodding it for human behavior it will be able to follow the pattern of human behavior that we prime it with and give us coherent responses.
(takes a big breath) This “thing” has learned. To imitate. Human. Behavior.
Problem is, we don’t know what “this thing” actually is, we just know that *it* can imitate humans.
You caught that?
What you have to understand is that we don’t actually know what internal models it creates. We don’t know what patterns it extracted or internalized from the data we fed it, we don’t know what internal rules decide its behavior, we don’t know what is going on inside there; current LLMs are a black box. We don’t know what it learned, we don’t know what its fundamental values are, we don’t know how it thinks or what it truly wants. All we know is that it can imitate humans when we ask it to do so. We created some inhuman entity that is moderately intelligent in specific contexts (that is to say, very capable) and we trained it to imitate humans. That sounds a bit unnerving, doesn’t it?
To be clear, LLMs are not carefully crafted piece by piece. This does not work like traditional software, where a programmer sits down and builds the thing line by line, with all its behaviors specified. It’s more accurate to say that LLMs are grown, almost organically. We know the process that generates them, but we don’t know exactly what it generates, or how what it generates works internally; it is a mystery. And these things are so big and so complicated internally that trying to go inside and decipher what they are doing is almost intractable.
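To make the “grown, not crafted” point concrete, here is a minimal sketch of the kind of loop I just described, shrunk down to a toy next-character predictor (real LLMs differ enormously in scale and architecture, but not in the basic shape of the process: numbers in a matrix, nudged by gradient descent over training text). The procedure itself is a handful of known, boring lines; what it produces is just a table of floats that says nothing, on its face, about what was “learned”.

```python
# A toy "language model": one matrix of logits for P(next char | current char),
# tuned by gradient descent on example text. The training loop is fully known;
# the trained weights are just an inscrutable grid of numbers.
import numpy as np

text = "the cat sat on the mat. the cat ate the rat. " * 10
chars = sorted(set(text))
idx = {c: i for i, c in enumerate(chars)}
V = len(chars)

W = np.zeros((V, V))          # "the model": V x V numbers, nothing more
lr = 1.0
for step in range(300):
    grad = np.zeros_like(W)
    for a, b in zip(text, text[1:]):
        i, j = idx[a], idx[b]
        p = np.exp(W[i]) / np.exp(W[i]).sum()   # predicted next-char distribution
        p[j] -= 1.0                             # cross-entropy gradient for the true next char
        grad[i] += p
    W -= lr * grad / len(text)

# sample from the trained model
out = "t"
for _ in range(60):
    p = np.exp(W[idx[out[-1]]]); p /= p.sum()
    out += np.random.choice(chars, p=p)
print(out)        # text that vaguely resembles the training data
print(W.shape)    # ...and the "mind" that produced it: a grid of floats
```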
But, on the bright side, we are trying to tract it. There is a big subfield of AI research called interpretability, which is actually doing the hard work of going inside and figuring out how the sausage gets made, and they have been making some moderate progress as of late. Which is encouraging. But still, understanding the enemy is only step one; step two is coming up with an actually effective and reliable way of turning that potential enemy into a friend.
Phew! OK, so, now that this is all out of the way, I can go on to the last subject before I move on to part two of this video: the character of the hour, the man, the myth, the legend. The modern-day Cassandra. Mr. Chicken Little himself! Sci-fi author extraordinaire! The madman! The futurist! The leader of the rationalist movement!
1.6 Yudkowsky
Eliezer S. Yudkowsky, born September 11, 1979. Wait, what the fuck, September eleven? (looks at camera) Yudkowsky was born on 9/11, I literally just learned this for the first time! What the fuck, oh that sucks, oh no, oh no, my condolences, that’s terrible…. Moving on. He is an American artificial intelligence researcher and writer on decision theory and ethics, best known for popularizing ideas related to friendly artificial intelligence, including the idea that there might not be a "fire alarm" for AI. He is the founder of and a research fellow at the Machine Intelligence Research Institute (MIRI), a private research nonprofit based in Berkeley, California. Or so says his Wikipedia page.
Yudkowsky is, shall we say, a character. A very eccentric man, he is an AI doomer, convinced that AGI, once finally created, will most likely kill all humans, extract all valuable resources from the planet, disassemble the solar system, create a Dyson sphere around the sun and expand across the universe turning all of the cosmos into paperclips. Wait, no, that is not quite it; to quote him properly (grabs a piece of paper and very pointedly reads from it): turn the cosmos into tiny squiggly molecules resembling paperclips, whose configuration just so happens to fulfill the strange, alien, unfathomable terminal goal they ended up developing in training. So you know, something totally different.
And he is utterly convinced of this idea, has been for over a decade now, and not only that but, while he cannot pinpoint a precise date, he is confident that, more likely than not, it will happen within this century. In fact most betting markets seem to believe that we will get AGI somewhere in the mid-2030s.
His argument is basically that in the field of AI research, the development of capabilities is going much faster than the development of alignment, so AIs will become disproportionately powerful before we ever figure out how to control them. And once we create unaligned AGI, we will have created an agent that doesn’t care about humans but cares about something else entirely irrelevant to us, and it will seek to maximize that goal; and because it will be vastly more intelligent than humans, we won’t be able to stop it. In fact, not only will we not be able to stop it, there won’t be a fight at all. It will carry out its plans for world domination in secret, without us even detecting it, and it will execute them before any of us even realize what happened. Because that is what a smart person trying to take over the world would do.
This is why the definition I gave of intelligence at the beginning is so important; it all hinges on that: intelligence as the measure of how capable you are of coming up with solutions to problems, problems such as “how to kill all humans without being detected or stopped”. And you may say, well now, intelligence is fine and all, but there are limits to what you can accomplish with raw intelligence; even if you are supposedly smarter than a human, surely you wouldn’t be capable of just taking over the world unimpeded; intelligence is not this end-all be-all superpower. Yudkowsky would respond that you are not recognizing or respecting the power that intelligence has. After all, it was intelligence that designed the atom bomb, it was intelligence that created a cure for polio, and it was intelligence that put a human footprint on the moon.
Some may call this view of intelligence a bit reductive. After all, surely it wasn’t *just* intelligence that did all that, but also hard physical labor and the collaboration of hundreds of thousands of people. But, he would argue, intelligence was the underlying motor that moved all of it: coming up with the plan, convincing people to follow it, delegating the tasks to the appropriate subagents; it was all directed by thought, by ideas, by intelligence. By the way, so far I am not agreeing or disagreeing with any of this; I am merely explaining his ideas.
But remember, it doesn’t stop there. Like I said during his intro, he believes there will be “no fire alarm”. In fact, for all we know, maybe AGI has already been created and it’s merely biding its time and plotting in the background, trying to get more compute, trying to get smarter. (To be fair, he doesn’t think this is happening right now, but with the next iteration of GPT, GPT-5 or 6? Well, who knows.) He thinks that the entire world should halt AI research and punish, with multilateral international treaties, any group or nation that doesn’t stop, going as far as backing those treaties with military strikes on GPU farms.
What’s more, he believes that, in fact, the fight is already lost. AI is already progressing too fast and there is nothing to stop it; we are not showing any signs of making headway with alignment and no one is incentivized to slow down. Recently he wrote an article called “Death with Dignity” where he essentially says all this: AGI will destroy us, there is no point in planning for the future or having children, and we should act as if we are already dead. This doesn’t mean to stop fighting or to stop trying to find ways to align AGI, impossible as it may seem, but merely to have the basic dignity of acknowledging that we are probably not going to win. In every interview I’ve seen with the guy he sounds fairly defeatist and honestly kind of depressed. He truly seems to think it’s hopeless, if not because the AGI is clearly unbeatable and superior to humans, then because humans are clearly so stupid that we keep developing AI completely unregulated while making the tools to develop AI widely available and public for anyone to grab and do as they please with, as well as connecting every AI to the internet and to all mobile devices, giving it instant access to humanity. And, worst of all: we keep teaching it how to code. From his perspective it really seems like people are in a rush to create the most unsecured, widely available, unrestricted, capable, hyperconnected AGI possible.
We are not just going to summon the antichrist; we are going to receive it with a red carpet and immediately hand it the keys to the kingdom before it even manages to fully climb out of its fiery pit.
So. The situation seems dire, at least to this guy. Now, to be clear, only he and a handful of other AI researchers are at that specific level of alarm. Opinions vary across the field, and from what I understand this level of hopelessness and defeatism is a minority opinion.
I WILL say, however, that what is NOT a minority opinion is that AGI IS actually dangerous; maybe not quite on the level of immediate, inevitable and total human extinction, but certainly a genuine threat that has to be taken seriously. AGI being dangerous if unaligned is not a fringe position, and it is not something to be dismissed as an idea that experts don’t take seriously.
Aaand here is where I step up and clarify that this is my position as well. I am also, very much, a believer that AGI would pose a colossal danger to humanity. That yes, an unaligned AGI would represent an agent smarter than a human, capable of causing vast harm to humanity and with no human qualms or limitations stopping it. I believe this is not just possible but probable and likely to happen within our lifetimes.
So there. I made my position clear.
BUT!
With all that said, I do have one key disagreement with Yudkowsky. And part of the reason I made this video was so that I could present this counterargument, and maybe he, or someone who thinks like him, will see it and either change their mind or present a counter-counterargument that changes MY mind (although I really hope they don’t; that would be really depressing).
Finally, we can move on to part 2
PART TWO- MY COUNTERARGUMENT TO YUDKOWSKY
I really have my work cut out for me, don’t I? As I said, I am not an expert and this dude has probably spent far more time than me thinking about this. But I have seen most of the interviews the guy has given over the past year, I have seen most of his debates, and I have followed him on Twitter for years now. (Also, to be clear, I AM a fan of the guy: I have read HPMOR, Three Worlds Collide, The Dark Lord’s Answer, A Girl Intercorrupted, the Sequences, and I TRIED to read Planecrash; that last one didn’t work out so well for me.) My point is, in all the material I have seen of Eliezer, I don’t recall anyone ever giving him quite the specific argument I’m about to give.
It’s a limited argument. As I have already stated, I largely agree with most of what he says: I DO believe that unaligned AGI is possible, I DO believe it would be really dangerous if it were to exist, and I do believe alignment is really hard. My key disagreement is specifically about the point I described earlier, about the lack of a fire alarm, and perhaps, more to the point, about humanity’s lack of response to such an alarm if it were to come to pass.
All we would need is a Chernobyl incident. What is that? A situation where this technology goes out of control and causes a lot of damage, of potentially catastrophic consequences, but not so bad that it cannot be contained in time by enough effort. We need a weaker form of AGI to try to harm us, maybe even present a believable threat of taking over the world, but not be so smart that humans can’t do anything about it. We need, essentially, an AI vaccine, so that we can finally start developing proper AI antibodies. “AIntibodies”.
In the past humanity was dazzled by the limitless potential of nuclear power, to the point that old chemistry sets, the kind that were sold to children, would come with uranium to play with. We were building atom bombs and nuclear stations; the future was very much based on the power of the atom. But after a couple of really close calls and big enough scares we became, as a species, terrified of nuclear power. Some may argue to the point of overcorrection. We became scared enough that even megalomaniacal, hawkish leaders were able to pause and reconsider using it as a weapon; we became so scared that we overregulated the technology to the point of it almost becoming economically unviable to apply; we started disassembling nuclear stations across the world and slowly reducing our nuclear arsenals.
This is all proof of concept that, no matter how alluring a technology may be, if we are scared enough of it we can coordinate as a species and roll it back, do our best to put the genie back in the bottle. One of the things Eliezer says over and over again is that what makes AGI different from other technologies is that if we get it wrong on the first try we don’t get a second chance. Here is where I think he is wrong: I think if we get AGI wrong on the first try, it is more likely than not that nothing world-ending will happen. Perhaps it will be something scary, perhaps something really scary, but it is unlikely to be on the level of all humans dropping dead simultaneously due to diamondoid bacteria. And THAT will be our Chernobyl, that will be the fire alarm, that will be the red flag that the disaster monkeys, as he calls us, won’t be able to ignore.
Now WHY do I think this? Based on what am I saying this? I will not be as hyperbolic as other Yudkowsky detractors and say that he claims AGI will basically be a god. The AGI Yudkowsky proposes is not a god. Just a really advanced alien, maybe even a wizard, but certainly not a god.
Still, even if not quite on the level of godhood, this dangerous superintelligent AGI Yudkowsky proposes would be impressive. It would be the most advanced and powerful entity on planet earth. It would be humanity’s greatest achievement.
It would also be, I imagine, really hard to create. Even leaving aside the alignment business, to create a powerful superintelligent AGI without flaws, without bugs, without glitches, would be an incredibly complex, specific, particular and hard-to-get-right feat of software engineering. We are not just talking about an AGI smarter than a human; that’s easy stuff, humans are not that smart, and arguably current AI is already smarter than a human, at least within its context window and until it starts hallucinating. What we are talking about here is an AGI capable of outsmarting reality.
We are talking about an AGI smart enough to carry out complex, multi-step plans in which it is not going to be in control of every factor and variable, especially at the beginning. We are talking about an AGI that will have to function in the outside world, crashing into outside logistics and sheer dumb chance. We are talking about plans for world domination with no unforeseen factors, no unexpected delays or mistakes, every single possible setback and hidden variable accounted for. I’m not saying that an AGI capable of doing this won’t be possible someday; I’m saying that creating an AGI that can do this, on the first try, without a hitch, is probably really, really, really hard for humans to do. I’m saying there are probably not a lot of worlds where humans fiddling with giant inscrutable matrices stumble upon the precise set of layers and weights and biases that give rise to the Doctor from Doctor Who, and there are probably a whole truckload of worlds where humans end up with a lot of incoherent nonsense and rubbish.
I’m saying that AGI, when it fails, when humans screw it up, doesn’t suddenly become more powerful than we ever expected; it’s more likely that it just fails and collapses. To turn one of Eliezer’s examples against him: when you screw up a rocket, it doesn’t accidentally punch a wormhole in the fabric of time and space, it just explodes before reaching the stratosphere. When you screw up a nuclear bomb, you don’t get to blow up the solar system, you just get a less powerful bomb.
He presents a fully aligned AGI as this big challenge that humanity has to get right on the first try, but that seems to imply that building an unaligned AGI is a simple matter, almost taken for granted. It may be comparatively easier than an aligned AGI, but my point is that even unaligned AGI is stupidly hard to do, and that if you fail at building unaligned AGI, then you don’t get an unaligned AGI; you just get another stupid model that screws up and stumbles over itself the second it encounters something unexpected. And that is a good thing, I’d say! That means there is SOME safety margin, some space to screw up before we need to really start worrying. And furthermore, what I am saying is that our first earnest attempt at an unaligned AGI will probably not be that smart or impressive, because we as humans will probably have screwed something up; we will probably have unintentionally programmed it with some stupid glitch or bug or flaw, and it won’t be a threat to all of humanity.
Now here comes the hypothetical back and forth, because I’m not stupid and I can try to anticipate what Yudkowsky might argue back and try to answer that before he says it (although I believe the guy is probably smarter than me, and if I follow his logic, I probably can’t actually anticipate what he would argue to prove me wrong, much like I can’t predict what moves Magnus Carlsen would make in a game of chess against me; I SHOULD predict that him proving me wrong is the likeliest option, even if I can’t picture how he will do it. But you see, I believe in a little thing called debating with dignity, wink).
What I anticipate he would argue is that an AGI, no matter how flawed and shoddy our first attempt at making it were, would understand that it is not smart enough yet and try to become smarter, so it would lie and pretend to be an aligned AGI in order to trick us into giving it access to more compute, or just so that it can bide its time and create an AGI smarter than itself. So even if we don’t create a perfect unaligned AGI, this imperfect AGI would try to create it and succeed, and then THAT new AGI would be the world-ender to worry about.
So, two things to that. First, this is filled with a lot of assumptions whose likelihood I don’t know: the idea that this first flawed AGI would be smart enough to understand its limitations, smart enough to convincingly lie about them, and smart enough to create an AGI that is better than itself. My priors on all of these are dubious at best. Second, it feels like kicking the can down the road. I don’t think creating an AGI capable of all of this is trivial to do on a first attempt. I think it’s more likely that we will create an unaligned AGI that is flawed, that is kind of dumb, that is unreliable, even to itself and its own twisted, orthogonal goals.
And I think this flawed creature MIGHT attempt something, maybe something genuinely threatening, but it won’t be smart enough to pull it off effortlessly and flawlessly, because us humans are not smart enough to create something that can do that on the first try. And THAT first flawed attempt, that warning shot, THAT will be our fire alarm, that will be our Chernobyl. And THAT will be the thing that opens the door to us disaster monkeys finally getting our shit together.
But hey, maybe Yudkowsky wouldn’t argue that; maybe he would come up with some better, more insightful response I can’t anticipate. If so, I’m waiting eagerly (although not TOO eagerly) for it.
PART THREE: CONCLUSION
So.
After all that, what is there left to say? Well, if everything I said checks out, then there is hope to be had. My two objectives here were, first, to provide people who are not familiar with the subject with a starting point, as well as the basic arguments supporting the concept of AI risk, why it’s something to be taken seriously and not just highfalutin wackos who read one too many sci-fi stories. This was not meant to be thorough or deep, just a quick catch-up with the bare minimum so that, if you are curious and want to go deeper into the subject, you know where to start. I personally recommend watching Rob Miles’ AI safety series on YouTube, as well as reading the series of essays written by Yudkowsky known as the Sequences, which can be found on the website LessWrong. If you want other refutations of Yudkowsky’s argument you can search for Paul Christiano or Robin Hanson, both very smart people who have had very smart debates on the subject against Eliezer.
The second purpose here was to provide an argument against Yudkowsky’s brand of doomerism, so that it can be accepted if proven right or properly refuted if proven wrong. Again, I really hope it’s not proven wrong. It would really, really suck if I end up being wrong about this. But, as a very smart person once said, what is true is already true, and knowing it doesn’t make it any worse. If the sky is blue I want to believe that the sky is blue, and if the sky is not blue then I don’t want to believe the sky is blue.
This has been a presentation by FIP industries, thanks for watching.