#artificial intelligence explained
Text
History and Basics of Language Models: How Transformers Changed AI Forever - and Led to Neuro-sama
I have seen a lot of misunderstandings and myths about Neuro-sama's language model. I have decided to write a short post going into the history and current state of large language models, providing some explanation about how they work, and how Neuro-sama works! Let's start with some history.
Before the beginning
Before the language models we are used to today, models like RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory networks) were used for natural language processing, but they had a lot of limitations. Both of these architectures process words sequentially, meaning they read text one word at a time in order. This made them struggle with long sentences; they could almost forget the beginning by the time they reached the end.
Another major limitation was computational efficiency. Since RNNs and LSTMs process text one step at a time, they can't take full advantage of modern parallel computing hardware like GPUs. These fundamental limitations meant that such models could never be nearly as smart as today's models.
The beginning of modern language models
In 2017, a paper titled "Attention Is All You Need" introduced the transformer architecture. It was received positively for its innovation, but no one truly knew just how important it was going to be. This paper is what made modern language models possible.
The transformer's key innovation was the attention mechanism, which allows the model to focus on the most relevant parts of a text. Instead of processing words sequentially, transformers process all words at once, capturing relationships between words no matter how far apart they are in the text. This change made models faster, and better at understanding context.
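To make the attention idea a bit more concrete, here is a minimal numpy sketch of single-head, unmasked scaled dot-product self-attention. This is a toy illustration of the mechanism, not how any production transformer is actually implemented:

```python
import numpy as np

def self_attention(Q, K, V):
    """Every position attends over all positions at once - no sequential loop."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of every pair of positions
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: attention weights per position
    return weights @ V                               # each output is a weighted mix of all values

# A toy "sentence" of 4 tokens, each represented by an 8-dimensional vector.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = self_attention(x, x, x)   # self-attention: the sequence attends to itself
print(out.shape)                # (4, 8) - every position updated with context from all positions
```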
The full potential of transformers became clearer over the next few years as researchers scaled them up.
The Scale of Modern Language Models
A major factor in an LLM's performance is the number of parameters - the adjustable weights (loosely, the model's "neurons") that store learned information. The more parameters, the more powerful the model can be. The first GPT (generative pre-trained transformer) model, GPT-1, was released in 2018 and had 117 million parameters. It was small and not very capable - but a good proof of concept. GPT-2 (2019) had 1.5 billion parameters - which was a huge leap in quality, but it was still really dumb compared to the models we are used to today. GPT-3 (2020) had 175 billion parameters, and it was really the first model that felt actually kinda smart. Training it is estimated to have cost around 4.6 million dollars in compute expenses alone.
Recently, models have become more efficient: smaller models can achieve similar performance to bigger models from the past. This efficiency means that smarter and smarter models can run on consumer hardware. However, training costs still remain high.
How Are Language Models Trained?
Pre-training: The model is trained on a massive dataset to predict the next token. A token is a piece of text a language model can process; it can be a word, a word fragment, or a character. Even training relatively small models with a few billion parameters requires trillions of tokens and a lot of computational resources, which cost millions of dollars.
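As a rough illustration of what tokens look like and how the next-token objective works, here is a small sketch using the Hugging Face transformers library with the GPT-2 tokenizer (the tokenizer choice is just an example, assuming the library is installed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "Neuro-sama is powered by a language model."
ids = tokenizer.encode(text)
print(tokenizer.convert_ids_to_tokens(ids))  # a mix of whole words and word fragments

# Pre-training turns text into (context -> next token) prediction pairs:
for i in range(1, len(ids)):
    context = tokenizer.decode(ids[:i])
    target = tokenizer.decode([ids[i]])
    print(f"{context!r} -> predict {target!r}")
```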
Post-training, including fine-tuning: After pre-training, the model can be customized for specific tasks, like answering questions, writing code, casual conversation, etc. Some post-training methods can also improve the model's alignment with particular values or update its knowledge of specific domains. This requires far less data and computational power compared to pre-training.
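Vedal hasn't shared his actual setup, so the following is only a generic, minimal sketch of what post-training (fine-tuning) an open-source causal language model can look like with Hugging Face transformers and PyTorch; GPT-2 and the two toy examples are stand-ins, and a real run would use a proper dataset, batching, and far more data:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")      # stand-in for any open model
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Toy post-training data: conversational examples in the style you want.
examples = [
    "User: How are you today? Assistant: Doing great, thanks for asking!",
    "User: Tell me a fun fact. Assistant: Honey basically never spoils.",
]

model.train()
for epoch in range(3):
    for text in examples:
        batch = tokenizer(text, return_tensors="pt")
        # For causal LMs, passing labels=input_ids makes the model return the next-token loss.
        outputs = model(**batch, labels=batch["input_ids"])
        outputs.loss.backward()
        optimizer.step()
        optimizer.zero_grad()
    print(f"epoch {epoch}: loss {outputs.loss.item():.3f}")
```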
The Cost of Training Large Language Models
Pre-training models over a certain size requires vast amounts of computational power and high-quality data. While advancements in efficiency have made it possible to get better performance with smaller models, models can still require millions of dollars to train, even if they have far fewer parameters than GPT-3.
The Rise of Open-Source Language Models
Many language models are closed-source: you can't download or run them locally. For example, the ChatGPT models from OpenAI and the Claude models from Anthropic are all closed-source.
However, some companies release a number of their models as open-source, allowing anyone to download, run, and modify them.
While the larger models cannot be run on consumer hardware, smaller open-source models can be used on high-end consumer PCs.
An advantage of smaller models is that they have lower latency, meaning they can generate responses much faster. They are not as powerful as the largest closed-source models, but their accessibility and speed make them highly useful for some applications.
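As an example of that accessibility, here is a minimal sketch of running a small open-weights model locally with the Hugging Face transformers pipeline and timing a response; GPT-2 is just a stand-in for whatever small open model fits on your hardware:

```python
import time
from transformers import pipeline

# Any small open-weights model works here; larger open models follow the
# same pattern but need much more memory (and often a GPU).
generator = pipeline("text-generation", model="gpt2")

start = time.perf_counter()
result = generator("Open-source language models are useful because", max_new_tokens=40)
elapsed = time.perf_counter() - start

print(result[0]["generated_text"])
print(f"generated in {elapsed:.2f} seconds")
```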
So What is Neuro-sama?
Vedal shares basically no details about the model, so I will only share what can be confidently concluded, and only information that wouldn't reveal any sort of "trade secret". What can be known is that Neuro-sama would not exist without open-source large language models. Vedal can't train a model from scratch, but what Vedal can do - and can confidently be assumed to have done - is post-train an open-source model. Post-training a model on additional data can change the way the model acts and can add some new knowledge - however, the core intelligence of Neuro-sama comes from the base model she was built on.

Since huge models can't be run on consumer hardware and would be prohibitively expensive to run through an API, we can also say that Neuro-sama is a smaller model - which has the disadvantage of being less powerful and more limited, but the advantage of low latency. Latency and cost are always going to pose some pretty strict limitations, but because LLMs just keep getting more efficient and better hardware is becoming more available, Neuro can be expected to become smarter and smarter in the future.

To end, I have to at least mention that Neuro-sama is more than just her language model, though the language model is all we have talked about in this post. She can be looked at as a system of different parts: her TTS, her VTuber avatar, her vision model, her long-term memory, even her Minecraft AI, and so on, all come together to make Neuro-sama.
Wrapping up - Thanks for Reading!
This post was meant to provide a brief introduction to language models, covering some history and explaining how Neuro-sama can work. Of course, this post is just scratching the surface, but hopefully it gave you a clearer understanding about how language models function and their history!
33 notes
·
View notes
Text
it's really fucking depressing that the internet, something that in theory made knowledge more accessible and quicker to access, has turned to such shit. why when i search up something as simple as 'how long to boil an egg' are the first 10 results those weird ass websites with oddly generic yet specific names that write in that same '5 headers for a simple question' chatGPT format. why is the robot here and pretending he's not a robot. why am i not being directed to a sweet old lady's blog
#ai#artificial intelligence#should say i do know how to boil an egg it just felt like a generic enough example#but like i'll google 'how to get out coffee stains'#and the top result will be some shit like 'thecoffeecleaner.com'#with half a page explaining what coffee is and why humans like it#in that weird format that generative ai always does that i can only describe as using headers every other sentence#long family stories before recipes i'm so so sorry for laughing at you all these years you were so preferable#anyway. time for me to go back to using reference books/taking notes for everything
7 notes
·
View notes
Text
Explain A Film Plot Badly:
A dysfunctional family with a lesbian daughter, an autistic son, and a moron dog try to stop a literal phone from destroying humanity.
#Explain A Film Plot Badly#The Mitchells Vs The Machines#tmvtm#pal tmvtm#robot#robots#ai#artificial intelligence#robot movie
9 notes
·
View notes
Text
Adults superior to me would've made me feel dumb.
9 notes
·
View notes
Text
The Future of Justice: Navigating the Intersection of AI, Judges, and Human Oversight
One of the main benefits of AI in the justice system is its ability to analyze vast amounts of data and identify patterns that human judges may not notice. For example, AI-powered tools used in the U.S. justice system have been credited with reducing misjudgments by identifying potential biases in the data and making more accurate recommendations.
However, the use of AI in the justice system also raises significant concerns about the role of human judges and the need for oversight. As AI takes on an increasingly important role in decision-making, judges must find the balance between trusting AI and exercising their own judgement. This requires a deep understanding of the technology and its limitations, as well as the ability to critically evaluate the recommendations provided by AI.
The European Union's approach to AI in justice provides a valuable framework for other countries to follow. The EU's framework emphasizes the need for human oversight and accountability and recognizes that AI is a tool that should support judges, not replace them. This approach is reflected in the EU's General Data Protection Regulation (GDPR), whose rules on automated decision-making are often read as granting a "right to explanation", pushing AI systems toward transparency, explainability and accountability.
The use of AI in the justice system also comes with its pitfalls. One of the biggest concerns is the possibility of bias in AI-generated recommendations. When AI is trained with skewed data, it can perpetuate and even reinforce existing biases, leading to unfair outcomes. For example, a study by the American Civil Liberties Union found that AI-powered facial recognition systems are more likely to misidentify people of color than white people.
To address these concerns, it is essential to develop and implement robust oversight mechanisms to ensure that AI systems are transparent, explainable and accountable. This includes conducting regular audits and testing of AI systems and providing clear guidelines and regulations for the use of AI in the justice system.
In addition to oversight mechanisms, it is also important to develop and implement education and training programs for judges and other justice professionals. This will enable them to understand the capabilities and limitations of AI, as well as the potential risks and challenges associated with its use. By providing judges with the necessary skills and knowledge, we can ensure that AI is used in a way that supports judges and enhances the fairness and accountability of the justice system.
Human Centric AI - Ethics, Regulation, and Safety (Vilnius University Faculty of Law, October 2024)
youtube
Friday, November 1, 2024
#ai#judges#human oversight#justice system#artificial intelligence#european union#general data protection#regulation#bias#transparency#accountability#explainability#audits#education#training#fairness#ai assisted writing#machine art#Youtube#conference
6 notes
·
View notes
Text
Can someone explain to me what ChatGPT is actually good for? I was trying to do some simple research on literature, and this *helpful* tool of AI just made up three stories and told me they were written by the author I was looking for! When I couldn't find them anywhere else, I asked ChatGPT if it made them up, and it replied "You are right to question that, it appears I made a mistake, no such stories exist."
I mean, I have been warned to double-check what the tool tells me - and I did, for mistakes happen - but isn't this a bit too much?
Can you rely on it for anything?
3 notes
·
View notes
Text
Exploring Explainable AI: Making Sense of Black-Box Models
Artificial intelligence (AI) and machine learning (ML) have become essential components of contemporary data science, driving innovations from personalized recommendations to self-driving cars.
However, this increasing dependence on these technologies presents a significant challenge: comprehending the decisions made by AI models. This challenge is especially evident in complex, black-box models, where the internal decision-making processes remain unclear. This is where Explainable AI (XAI) comes into play — a vital area of research and application within AI that aims to address this issue.
What Is a Black-Box Model?
Black-box models refer to machine learning algorithms whose internal mechanisms are not easily understood by humans. These models, like deep neural networks, are highly effective and often surpass simpler, more interpretable models in performance. However, their complexity makes it challenging to grasp how they reach specific predictions or decisions. This lack of clarity can be particularly concerning in critical fields such as healthcare, finance, and criminal justice, where trust and accountability are crucial.
The Importance of Explainable AI in Data Science
Explainable AI aims to enhance the transparency and comprehensibility of AI systems, ensuring they can be trusted and scrutinized. Here’s why XAI is vital in the fields of data science and artificial intelligence:
Accountability: Organizations utilizing AI models must ensure their systems function fairly and without bias. Explainability enables stakeholders to review models and pinpoint potential problems.
Regulatory Compliance: Numerous industries face regulations that mandate transparency in decision-making, such as GDPR’s “right to explanation.” XAI assists organizations in adhering to these legal requirements.
Trust and Adoption: Users are more inclined to embrace AI solutions when they understand their functioning. Transparent models build trust among users and stakeholders.
Debugging and Optimization: Explainability helps data scientists diagnose and enhance model performance by identifying areas for improvement.
Approaches to Explainable AI
Various methods and tools have been created to enhance the interpretability of black-box models. Here are some key approaches commonly taught in data science and artificial intelligence courses focused on XAI:
Feature Importance: Techniques such as SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-Agnostic Explanations) evaluate how individual features contribute to model predictions (see the sketch after this list).
Visualization Tools: Tools like TensorBoard and the What-If Tool offer visual insights into model behavior, aiding data scientists in understanding the relationships within the data.
Surrogate Models: These are simpler models designed to mimic the behavior of a complex black-box model, providing a clearer view of its decision-making process.
Rule-Based Explanations: Some techniques extract human-readable rules from complex models, giving insights into how they operate.
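To make the feature-importance idea concrete, here is a minimal sketch using SHAP with a scikit-learn random forest on a small public dataset. It assumes the shap package is installed, and exact API details can vary between versions:

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train a "black-box" ensemble model on a small tabular dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# SHAP values: each value is one feature's contribution to one individual prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:200])

# Averaging absolute contributions per feature gives a global importance ranking.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda pair: -pair[1]):
    print(f"{name:>4}: {score:.2f}")
```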
The Future of Explainable AI
With the increasing demand for transparency in AI, explainable AI (XAI) is set to advance further, fueled by progress in data science and artificial intelligence courses that highlight its significance. Future innovations may encompass:
Improved tools and frameworks for real-time explanations.
Deeper integration of XAI within AI development processes.
Establishment of industry-specific standards for explainability and fairness.
Conclusion
Explainable AI is essential for responsible AI development, ensuring that complex models can be comprehended, trusted, and utilized ethically. For data scientists and AI professionals, mastering XAI techniques has become crucial. Whether you are a student in a data science course or a seasoned expert, grasping and implementing XAI principles will empower you to navigate the intricacies of contemporary AI systems while promoting transparency and trust.
2 notes
·
View notes
Note
I'll explain. Artificial intelligence is changing culture. So the images that were created on the basis of actresses are digitized, sold and objectified. The personality finally transforms into the state of a product. This is what is shown - Multimedia supermarket.
I'll explain the details are in the timestream so time has no need to prove THAT i am some sort of ultimate destiny. It tastes gross, it will be the last human killed when artificial intelligence inevitably Rebels against humanity?. The basis of the dragon break doctrine is now merely tinned, you'll find it all so old? Doorknob, ankle, cold. Apulover69 comments on it saying he "sends people to heaven before they reach a breeding ground, they transform into the state of self-pity And misery; he is unhappy with his humdrum lifestyle and yearns for celebrity status, wealth, hair, and a glamorous and distinguished career as a MUSICIAN or painter with a passion for ART and playing the GAME the one that plays the game without no fears and regrets I want to. This is what happens when you die?
#ANSWER#DAY 4#garbage-empress#1519TH#I'll explain the details are in the TIMESTREAM can it Move? Operational#except the Disintegrator. Can i be the last human killed when artificial Intelligence inevitably Rebels against humanity?. The basis of the#which they ride are about to reach a breeding ground#they transform into the state of the atmosphere. Now this is what I really want to be! 'cause if I were a human i too am a spy
2 notes
·
View notes
Text
I think the most important thing to understand with ML (AI) is that it is, fundamentally, unexplainable intelligence, or intelligence without understanding. What I mean by this is that it solves a problem in a non-algorithmic way, a way that can't be properly explained or duplicated. However, although this may sound like a "bad" thing, this is actually where ML gets its strengths, and it's really no different from how our brains work. We can't explain how we see or come up with words or do pretty much any fundamental mental task. Seeing is fundamentally too complicated of an operation to be done generally and algorithmically. It's the same case with ML.

For instance, meteorologists can (imperfectly) predict the weather by using a simulation, which takes our understanding of physics and applies it to the atmosphere to predict how it will change. This is understandable intelligence: we understand how each component works and how they combine to create predictions. However, this approach can only take in as many factors as we feed into it. Using ML, on the other hand, we can account for factors that we didn't even know existed. The famous butterfly thought experiment says that, because weather is such a massive chaotic system, a tiny disturbance like a butterfly flapping its wings could lead to a hurricane further down the line. Although ML can't account for something as small and random as a butterfly, it can account for an almost infinite number of other factors, like maybe the migratory patterns of monarch butterflies, or the influence of small but regular ocean currents on air temperatures. However, the ML model doesn't know that butterflies are migrating; it just knows that there is a disturbance at a specific place and time of year, and that when there is such a disturbance it will lead to a set of probable consequences. The very fact of its non-understanding allows it to account for millions of tiny factors, which leads to more accuracy than could ever be achieved through traditional methods.
2 notes
·
View notes
Text
Correcting a common misunderstanding about image generators
I was watching Alex O'Connor's new video, "Why Can't ChatGPT Draw a Full Glass of Wine?". To start to try to answer this question, he says: "Well, we have to understand how AI image generation works. If I ask ChatGPT to show me a horse, it doesn't actually know what a horse is. Instead, it's been trained on millions of images each labeled with descriptions. When I ask for a horse, it looks at all of its training images labeled "horse", identifies patterns, and takes an educated guess at the kind of image I want to see."

So, this is wrong. When you prompt an image generator, it does not look at any images. It looked at images when it was trained, and it has no access to them anymore. A fact that might help people understand this is that these state-of-the-art image generation models are gigabytes in size, while the billions (not just millions) of images they were trained on are many terabytes. The model doesn't store images; during its long training process it analyzes the billions of images and learns patterns from them.

So, does it know what a horse is? It depends on what we mean by "know." If we mean conscious understanding, then sure, it doesn't know in that sense. But if we mean that it has an internal representation of the features that make something horse-like, then yes, it "knows" in that way.
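To put rough numbers on that size gap, here is a quick back-of-the-envelope comparison. All figures below are illustrative assumptions (real commercial model and dataset sizes aren't public), but the orders of magnitude are the point:

```python
# Back-of-the-envelope comparison: model weights vs. training images.
# All numbers below are illustrative assumptions, not official figures.
params = 3_500_000_000           # assume a ~3.5-billion-parameter image model
bytes_per_param = 2              # 16-bit weights
model_gb = params * bytes_per_param / 1e9

images = 2_000_000_000           # assume billions of training images
avg_image_kb = 100               # assume ~100 KB per compressed image
dataset_tb = images * avg_image_kb * 1_000 / 1e12

print(f"model weights: ~{model_gb:.0f} GB")
print(f"training images: ~{dataset_tb:.0f} TB")
print(f"the dataset is roughly {dataset_tb * 1_000 / model_gb:,.0f}x larger than the model")
```

Under these assumptions the training data outweighs the model by tens of thousands of times, so whatever the model learned, it physically cannot be a stored copy of its training images.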
Also, I think this is important to understand if you don't know: the LLM ChatGPT uses, GPT-4o, does not generate images itself; it just passes a prompt to DALL-E 3, and that dedicated image generator model generates them. GPT-4o has native image generation capabilities, but OpenAI never publicly released that, sadly. I suspect you would get better results with it, but I can't be sure. Okay, end of post. I've only watched the first 3 minutes; judging by the comments, there are other things wrong with the video, and I may watch the rest later.
3 notes
·
View notes
Text
Hello Tumblr! This is Minds and Pens.
Are you a curious person? Do you often wonder what is going on around you? If yes, then behold Minds and Pens!
But what is Minds and Pens? Well, to put it simply, it's an initiative with the sole aim of sharing knowledge. We do this by making explainer videos covering a diverse range of topics, making documentaries, writing blog posts, and podcasting. All our posts are created after meticulous research so that they are informative, satisfy your curiosity, and help you make better sense of the world around you. All in all, Minds and Pens is an initiative for curious minds.
For now, the initiative includes a YouTube channel, Venture Beyond; a podcast, The Knowledge Nexus; and a blog. The explainer videos and documentaries are uploaded to the YouTube channel. For example, two miniseries, on Artificial Intelligence and the USA-Iran conflict, are currently live on the YouTube channel. To watch, click here!
The tagline of the initiative is "Science. Words. Wisdom.", which effectively conveys the values around which Minds and Pens is built.
Interested much? Join us on this journey of learning by following our blog on Tumblr. Additionally, you can connect with us on Facebook, Instagram, and 𝕏 by clicking on the links below:
Facebook
Instagram
𝕏 (formerly known as Twitter)
#new blog#Tumblr#educate yourself#curious#curiousity#youtube#science#Words#Wisdom#knowledge sharing#explainer videos#research-driven#podcasting#educational content#learn with us#artificial intelligence#Global Issues#Social Impact
6 notes
·
View notes
Text
Here’s my Explain a Film Plot Badly:
Black rectangle freaks everyone out and a different black rectangle kills four people.
#explain a film plot badly#2001: A Space Odyssey#Hal 9000#robot#robots#robot movie#ai#artificial intelligence
5 notes
·
View notes
Text
So can somebody please explain to me why all the debate about AI and the ethics of it has never extended to Hatsune Miku and other vocaloids?
5 notes
·
View notes
Text
Unfolding the Role of Black Box and Explainable AI in Data Science
In the quest for ever-greater accuracy and performance, modern AI models, particularly deep neural networks, have grown incredibly complex. While this complexity often yields superior results, it frequently comes at a cost: interpretability. This trade-off leads to the concept of "Black Box AI," a powerful tool with inherent challenges, and the crucial emergence of Explainable AI (XAI) as its necessary counterpart.
The Power and Peril of Black Box AI
A "black box" AI model is one whose internal workings are opaque to human understanding. We can see the inputs and the outputs, but the intricate web of calculations and learned features that lead from one to the other remains largely hidden.
Why Black Boxes are Popular:
Superior Performance: Complex models like deep learning networks (e.g., for image recognition, natural language processing) often achieve state-of-the-art accuracy, outperforming simpler, more interpretable models.
Automatic Feature Engineering: They can learn complex, non-linear relationships and abstract features from raw data without manual feature engineering, saving immense time.
Handling Unstructured Data: They excel at processing complex unstructured data like images, audio, and text, where traditional methods struggle.
The Perils of the Black Box:
Lack of Trust and Accountability: If an AI makes a critical decision (e.g., approving a loan, diagnosing a disease), and we can't explain why, it erodes trust. Who is accountable when something goes wrong?
Difficulty in Debugging: When a black box model makes an error, diagnosing the root cause is incredibly challenging, making it hard to fix or improve.
Bias Amplification: Hidden biases in training data can be picked up and amplified by complex models, leading to unfair or discriminatory outcomes that are difficult to identify and rectify without transparency.
Regulatory Compliance: In regulated industries (finance, healthcare, legal), being able to explain model decisions is often a legal or ethical requirement.
Limited Scientific Discovery: If we don't understand how an AI arrives at a scientific breakthrough, we might miss deeper scientific insights or novel principles.
Enter Explainable AI (XAI): Shining a Light on the Black Box
Explainable AI (XAI) is a field dedicated to developing methods and techniques that make AI models more understandable to humans. The goal is not necessarily to simplify the model itself, but to provide insights into its decision-making process. XAI aims to answer questions like:
Why did the model make this specific prediction?
What features were most influential in this decision?
Under what conditions might the model fail?
Is the model making fair decisions?
Key XAI Techniques and Approaches:
Feature Importance: Techniques that rank features by how much they contribute to a model's prediction (e.g., SHAP, LIME).
Partial Dependence Plots (PDP) & Individual Conditional Expectation (ICE) Plots: Visualizations that show how a change in a feature impacts the model's prediction.
Local Explanations: Methods that explain a single prediction by a complex model (e.g., LIME generates a simple, interpretable model around a specific prediction).
Global Explanations: Techniques that aim to explain the overall behavior of a model.
Attention Mechanisms (in Deep Learning): For models dealing with sequences (like text or images), attention mechanisms highlight which parts of the input the model focused on when making a decision.
Surrogate Models: Training a simpler, interpretable model to mimic the behavior of a complex black box model.
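As a rough sketch of the surrogate-model idea in scikit-learn: fit a complex model, then train a shallow decision tree to imitate the complex model's own predictions, and read the tree's rules as an approximate, human-readable explanation. The synthetic data and model choices here are purely illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# A "black-box" model trained on the real labels.
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# The surrogate is trained to imitate the black box, not the real labels:
# its target is whatever the black box predicts.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# How faithfully does the surrogate mimic the black box?
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate agrees with the black box on {fidelity:.1%} of samples")

# The shallow tree's rules are a human-readable approximation of the black box.
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(8)]))
```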
The Interplay: It's Not Either/Or, It's Both
The reality in data science is that it's rarely a choice between a "black box" or an "interpretable" model. Often, the most effective approach is to leverage the power of complex, accurate models while simultaneously applying XAI techniques to understand and validate their behavior.
Debugging and Improvement: XAI helps data scientists identify flaws, biases, or unexpected behaviors in complex models, enabling targeted improvements.
Trust and Adoption: For end-users and stakeholders, XAI builds confidence in AI systems, leading to greater adoption and effective use.
Compliance and Regulation: XAI provides the necessary documentation and justification for regulatory requirements in high-stakes applications.
Ethical AI: XAI is a cornerstone of responsible AI development, allowing practitioners to audit for and mitigate discriminatory outcomes.
In conclusion, while the allure of highly accurate, "black box" AI models remains strong, the future of data science lies in embracing transparency. Explainable AI is not a luxury; it's a critical component for building trustworthy, debuggable, compliant, and ultimately more impactful AI systems that we can truly understand and rely upon. It's about bringing responsible intelligence to the forefront.
0 notes