#AI learning program
Explore tagged Tumblr posts
tapp-ai · 3 months ago
Text
Tumblr media
Jobs you can get after completing Machine Learning and AI courses
AI and ML courses open doors to exciting career paths across industries. Discover the job opportunities available after an AI/ML course.
Enroll today! Follow the link below: AI ML Online Learning Program
0 notes
prokopetz · 2 years ago
Note
I just watched a video where someone is using ChatGPT to generate comments on their code. Even as a layman I feel like I should be screaming at him, but on a scale from 1 to apocalypse, how bad is this?
Machine-generated comments could not possibly be more useless, nonsensical or maliciously misleading than most of the human-generated comments I've seen.
1K notes · View notes
cardi-c · 5 months ago
Text
I wonder if gen ai users think the process isn't fun? Like I can't draw that well and maybe an ai can generate "better" drawings than me but the act of drawing is the good part?? and getting better is awesome?? and you feel like you accomplished something? the i made this myself feeling?
Like for example I don't have to crochet something myself, i could just buy the things. but where's the fun in that?
37 notes · View notes
lepidjester · 2 years ago
Text
Tumblr media Tumblr media Tumblr media
i wasn’t super sold on royalteeth/kingleader/kingmaster until i discovered my partner kins kinger and i kin caine. therefore, i had to draw them being dumb gay old men
Tumblr media
they r holding handz <3
264 notes · View notes
nixcraft · 2 years ago
Text
Tumblr media
496 notes · View notes
gronktonkbabonk · 2 months ago
Text
What the fuck is vibe coding
11 notes · View notes
torchiiko · 3 months ago
Text
anyone by chance know any videos on enemy ai in games, stuff like good ai vs bad ai & ideally scary ones in horror games? all im finding are Lists (i want Videos that can go into more depth abt how it works) & i fear if i type ai in youtube ill just get garbage. i wanna know how the code Thinks but i want it explained to me as an outsider & not someone actually trying to code smth
8 notes · View notes
crow-musings · 3 months ago
Text
I desperately need someone to talk to about this
I've been working on a system that lets a genetic algorithm create DNA code which can produce self-organising organisms. Someone I know has created a very effective genetic algorithm which, in my opinion, blows NEAT out of the water. The algorithm is very good at using food values to determine which organisms to breed and how to breed them, and it supports a multitude of biologically inspired mutation mechanisms that allow for things like meta genes and meta-meta genes, among a whole slew of other features. I am building a translation system on top of it, basically a compiler, and designing an instruction set and genetic repair mechanisms that allow it to convert ANY hexadecimal string into a valid, operable program. I'm doing this by giving each organism, so far, 5 planned chromosomes. The first and second chromosomes are the INITIAL STATE of a neural network: the number and configuration of input nodes, the number and configuration of output nodes, whatever code it needs for a fitness function, and the configuration and weights of the layers. This neural network is not used at all in the fitness evaluation of the organism; it is purely something the organism itself can manage, train, and utilize however it sees fit.
The third chromosome is the complete code of the program which runs the organism. It's basically a list of ASM opcodes and arguments written in hexadecimal, comprised of codons which represent the different hexadecimal characters, plus a start and a stop codon. This program will be compiled into executable machine code using LLVM IR and a custom instruction set I've designed, giving the organisms a Turing-complete programming language and some helper functions to make certain processes simpler to evolve. This includes messages between organisms, reproduction methods, and all the methods necessary for the organisms to develop sight and hearing, receive various other inputs, and produce audio, video, and outputs like mouse, keyboard, or gamepad events.
The fourth chromosome is a blank slate onto which the organism can evolve whatever data it wants. The first half will be the complete contents of the organism's ROM after the important information, and the second half will be the initial state of the organism's memory. This will likely be stored as base64 of its hash and unfolded into binary on compilation.
The 5th chromosome is one I just came up with, and I am very excited about it: a translation dictionary. It will be exactly 512 individual codons, with each codon pair mapped to a value between 00 and FF hex. When evaluating the hex of the other chromosomes, this dictionary will be used to determine the equivalent instruction for any given hex pair. When evolving, each hex pair in the 5th chromosome will be guaranteed to be a valid opcode in the instruction set by using modulus to constrain each pair to the 55 instructions currently available. This will allow an organism to evolve its own instruction distribution: random instructions which might be harmful or inefficient should spring up less often, while efficient or safer instructions get selected for more often.
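A minimal sketch of how that 5th-chromosome translation step could look, assuming Python; the 55-instruction count comes from the post, but every name and the random initialization here are illustrative, not the actual system:

```python
import random

N_OPCODES = 55  # size of the instruction set mentioned above

def random_dictionary_chromosome() -> list[int]:
    """512 codons = 256 codon pairs: one entry per byte value 00-FF."""
    return [random.randrange(256) for _ in range(256)]

def constrain(entry: int) -> int:
    """Modulus guarantees every dictionary entry is a valid opcode."""
    return entry % N_OPCODES

def translate(program: bytes, dictionary: list[int]) -> list[int]:
    """Map each hex pair of a program chromosome through the evolved
    dictionary, yielding a sequence of valid opcode indices."""
    return [constrain(dictionary[byte]) for byte in program]

# Any hexadecimal string becomes a valid, operable opcode sequence.
dna = bytes.fromhex("deadbeef0042")
print(translate(dna, random_dictionary_chromosome()))
```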
8 notes · View notes
lookslikebloodprobablywine · 7 months ago
Text
they're shaking hands honest
(ai/machine learning generated animations)
14 notes · View notes
d0nutzgg · 2 years ago
Text
Tumblr media
This is part of a new project I am doing for a Facebook app that can alert someone when there is suspicious activity on their account and block people who post rude comments and hate speech, using a BERT model I am training on a hate-speech dataset. It automatically blocks people who are really rude / mean and keeps your feed clean of spam. I am developing it right now for work and for @emoryvalentine14 to test out, and maybe in the future I will make it public.
I love NLP :D Also, I plan to host this server, probably on Heroku or something, after it is done.
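For anyone curious what the screening step could look like, here is a rough sketch using the Hugging Face pipeline API. The model name is a generic stand-in (a sentiment model, not the hate-speech BERT being trained above) and the threshold is arbitrary:

```python
from transformers import pipeline

# Stand-in classifier; the real project would load its fine-tuned
# hate-speech BERT model instead of this generic sentiment model.
classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def should_block(comment: str, threshold: float = 0.9) -> bool:
    """Flag a comment when the classifier is confident it is toxic
    (approximated here by a strongly negative sentiment score)."""
    result = classifier(comment)[0]
    return result["label"] == "NEGATIVE" and result["score"] >= threshold

print(should_block("You are a wonderful person!"))  # False
```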
74 notes · View notes
soapstore · 9 months ago
Text
why is everything called AI now. boy thats an algorithm
8 notes · View notes
zarafey · 13 days ago
Text
Truly not allowed to have fun at my university anymore bc every group project is just done by ai now so why am I even here
2 notes · View notes
beigetiger · 4 months ago
Text
I love scrolling through my old procreate art because wow, I used to suck. But also WOW, I suck less now. And most wowable of all, I’ll continue to improve in the future!
5 notes · View notes
mysocial8onetech · 9 months ago
Text
Learn how Mistral-NeMo-Minitron 8B, a collaboration between NVIDIA and Mistral AI, is advancing Large Language Models (LLMs). This open-source model uses pruning and distillation techniques to achieve top accuracy on nine benchmarks while remaining highly efficient.
5 notes · View notes
subrage · 11 months ago
Text
Truth speaking on the corporate obsession with AI
Hilarious. Something tells me this person's on the hellsite(affectionate)
8 notes · View notes
frank-olivier · 6 months ago
Text
Tumblr media
The Mathematical Foundations of Machine Learning
In the world of artificial intelligence, machine learning is a crucial component that enables computers to learn from data and improve their performance over time. However, the math behind machine learning is often shrouded in mystery, even for those who work with it every day. Anil Ananthaswamy, author of the book "Why Machines Learn," sheds light on the elegant mathematics that underlies modern AI, and his journey is a fascinating one.
Ananthaswamy's interest in machine learning began when he started writing about it as a science journalist. His software engineering background sparked a desire to understand the technology from the ground up, leading him to teach himself coding and build simple machine learning systems. This exploration eventually led him to appreciate the mathematical principles that underlie modern AI. As Ananthaswamy notes, "I was amazed by the beauty and elegance of the math behind machine learning."
Ananthaswamy highlights the elegance of machine learning mathematics, which goes beyond the commonly known subfields of calculus, linear algebra, probability, and statistics. He points to specific theorems and proofs, such as a 1959 proof related to artificial neural networks, as examples of that beauty. For instance, gradient descent, a fundamental algorithm used in machine learning, is a powerful example of how math can be used to optimize model parameters.
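As a concrete illustration of gradient descent (a standard toy sketch, not an example from the book), here is a few lines of Python minimizing f(w) = (w − 3)² by repeatedly stepping against the derivative:

```python
def grad(w: float) -> float:
    return 2.0 * (w - 3.0)  # derivative of f(w) = (w - 3)^2

w, lr = 0.0, 0.1            # initial parameter and learning rate
for _ in range(100):
    w -= lr * grad(w)       # update rule: w <- w - lr * f'(w)
print(round(w, 4))          # approaches the minimum at w = 3.0
```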
Ananthaswamy emphasizes the need for a broader understanding of machine learning among non-experts, including science communicators, journalists, policymakers, and users of the technology. He believes that only when we understand the math behind machine learning can we critically evaluate its capabilities and limitations. This is crucial in today's world, where AI is increasingly used in applications from healthcare to finance.
A deeper understanding of machine learning mathematics has significant implications for society. It can help us evaluate AI systems more effectively, develop more transparent and explainable AI systems, and address AI bias to ensure fairness in decision-making. As Ananthaswamy notes, "The math behind machine learning is not just a tool, but a way of thinking that can help us create more intelligent and more human-like machines."
The Elegant Math Behind Machine Learning (Machine Learning Street Talk, November 2024)
youtube
Matrices are used to organize and process complex data, such as images, text, and user interactions, making them a cornerstone in applications like Deep Learning (e.g., neural networks), Computer Vision (e.g., image recognition), Natural Language Processing (e.g., language translation), and Recommendation Systems (e.g., personalized suggestions). To leverage matrices effectively, AI relies on key mathematical concepts like Matrix Factorization (for dimension reduction), Eigendecomposition (for stability analysis), Orthogonality (for efficient transformations), and Sparse Matrices (for optimized computation).
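Two of the tools named above are easy to see in a few lines of NumPy (a generic illustration, not tied to any particular system):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigendecomposition: A = V diag(w) V^T for this symmetric matrix.
w, V = np.linalg.eig(A)
print(np.sort(w))  # eigenvalues [1. 3.]

# SVD, the factorization behind many matrix-factorization methods.
U, s, Vt = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True: the factors rebuild A
```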
The Applications of Matrices - What I wish my teachers told me way earlier (Zach Star, October 2019)
youtube
Transformers are a type of neural network architecture introduced in 2017 by Vaswani et al. in the paper "Attention Is All You Need". They revolutionized the field of NLP by outperforming traditional recurrent neural network (RNN) and convolutional neural network (CNN) architectures in sequence-to-sequence tasks. The primary innovation of transformers is the self-attention mechanism, which allows the model to weigh the importance of different words in the input irrespective of their positions in the sentence. This is particularly useful for capturing long-range dependencies in text, which was a challenge for RNNs due to vanishing gradients.
Transformers have become the standard for machine translation, offering state-of-the-art results in translating between languages. They are used for both abstractive and extractive summarization, generating concise summaries of long documents. In question answering, they help understand the context of a question and identify relevant answers in a given text, and by analyzing the context and nuances of language they can accurately determine the sentiment behind text. While initially designed for sequential data, variants such as the Vision Transformer (ViT) have been successfully applied to image recognition, treating images as sequences of patches. Transformers also improve the accuracy of speech-to-text systems by better modeling the sequential nature of audio, and the self-attention mechanism can help capture patterns in time series data, leading to more accurate forecasts.
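The self-attention mechanism at the heart of all of this fits in a few lines; here is a minimal scaled dot-product attention sketch in NumPy (single head, no learned projections):

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # how much each position attends to every other
    return softmax(scores) @ V       # weighted mix of value vectors

x = np.random.randn(4, 8)             # 4 tokens, embedding dimension 8
print(self_attention(x, x, x).shape)  # (4, 8)
```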
Attention is all you need (Umar Jamil, May 2023)
youtube
Geometric deep learning is a subfield of deep learning that extends neural networks to non-Euclidean domains such as graphs and manifolds, exploiting the geometric structure and symmetries of the data. The field has gained significant attention in recent years.
Michael Bronstein: Geometric Deep Learning (MLSS Kraków, December 2023)
youtube
Traditional geometric deep learning, while powerful, often relies on the assumption of smooth geometric structures. However, real-world data frequently resides in non-manifold spaces where such assumptions are violated. Topology, with its focus on preserving proximity and connectivity, offers a more robust framework for analyzing these complex spaces, and the fact that topological properties are inherently stable under noise further strengthens the case for integrating topology into deep learning.
Cristian Bodnar: Topological Message Passing (Michael Bronstein, August 2022)
youtube
Sunday, November 3, 2024
4 notes · View notes