#a Neural Network How-To
vlruso · 2 years ago
Text
Prompt Engineering Tips, a Neural Network How-To, and Other Recent Must-Reads
📢 Prompt Engineering Tips, a Neural Network How-To, and Other Recent Must-Reads

Feeling that autumn energy? Our authors have been busy learning, experimenting, and launching exciting new projects. 🍂📚 Check out these ten standout articles from Towards Data Science – Medium that have been creating a buzz in our community. From program simulation techniques to building neural networks from scratch, these must-reads cover cutting-edge topics you don't want to miss.

💡 Read more here: [Link to the blog post](https://ift.tt/io0tgy4)

And don't worry, we've got the action items covered too! We've identified tasks for each article's author to further explore and dive deeper into these fascinating subjects. See the full list of action items in the blog post.

Keep up with the latest insights and join the conversation. Let's keep learning and growing together!

#TowardsDataScience #MustReads #NeuralNetworks #PromptEngineering #DataScience #AI #MachineLearning

List of Useful Links: AI Scrum Bot - ask about AI scrum and agile | Our Telegram @itinai | Twitter @itinaicom
0 notes
cakesandsnouts · 7 months ago
Text
"HAL I won't argue with you anymore. Open the doors."
77 notes · View notes
dogstomp · 1 year ago
Text
Dogstomp #3239 - November 21st
Patreon / Discord Server / Itaku / Bluesky
61 notes · View notes
cryptid-shark · 6 months ago
Text
YES, AI IS TERRIBLE, ESPECIALLY FOR GENERATING ART AND WRITING AND STUFF (please keep spreading that, I would like to be able to do art for a living in the near future)
but can we talk about how COOL NEURAL NETWORKS ARE??
LIKE WE MADE THIS CODE/MACHINE THAT CAN LEARN LIKE HUMANS DO!! Early AI reminds me of a small child's drawing that, yes, is terrible, but it's also SO COOL THAT THEY DID THAT!!
I have an intense love for computer science because it's so cool to see what machines can really do! It's so unfortunate that people took advantage of these really awesome things.
10 notes · View notes
blood-orange-juice · 9 months ago
Text
Another day in this insane lab.
"N. expressed hope that we can reduce brain dynamics to Ising model." "We will then have to explain why does consciousness arise in the brain but not in ferromagnetics." "How do you know it doesn't?" "True, we can switch to animism, I DON'T MIND."
(note: you can't really do this because of the presence of higher-order correlations in the brain, please don't try it. ferromagnets don't show any signs of consciousness either)
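(for the curious, the objection is about the shape of the model itself. here is a sketch of the standard Ising energy, nothing specific to this lab: it only has field terms and pairwise couplings, so third-and-higher-order correlations in neural data have no term to live in)

```latex
% Standard Ising model energy: external fields h_i plus pairwise
% couplings J_{ij} between binary spins s_i; no higher-order terms.
E(\mathbf{s}) = -\sum_i h_i s_i - \sum_{i<j} J_{ij}\, s_i s_j,
\qquad s_i \in \{-1, +1\}
```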
10 notes · View notes
echthr0s · 2 months ago
Text
if you ask me, gen-AI exists for one clear and noble purpose: to generate machine-made horrors beyond our comprehension 🖤
5 notes · View notes
aldieb · 2 months ago
Text
current object of fascination is researchers who are concerned about the resources and processing power consumed by, for example, llms and are therefore exploring the solution of “run ai on their own collections of actual neurons.” moving gameplay to a different ethical field i suppose
3 notes · View notes
llitchilitchi · 3 months ago
Text
someone should've told me how funny plato's symposium is years ago. this man takes a half-page tangent to do top/bottom discourse on patrochilles for absolutely no reason other than to say fuck you to a playwright
2 notes · View notes
thegreateyeofsauron · 1 year ago
Text
putting random cdi zelda dialog into bing image creator
6 notes · View notes
faaun · 1 year ago
Text
3 2 1 let's go
15 notes · View notes
gammadoppler · 11 months ago
Text
remember years ago when everything was neural networks being taught to play tetris and shit. well now she's all grown up and she's calling herself artificial intelligence and you're being a hater?? downvote for you
2 notes · View notes
fishyartist · 1 year ago
Text
Idk I just think maybe if u assume every person disagreeing with you is an npc or a robot u might just be stupid or something. Learn more ok?
4 notes · View notes
frosteee-variation · 2 years ago
Text
you ever get the uncontrollable urge to pursue a creative project because you got one (1) piece of information that you haven't been able to stop thinking about
9 notes · View notes
jcmarchi · 1 year ago
Text
From Recurrent Networks to GPT-4: Measuring Algorithmic Progress in Language Models - Technology Org
In 2012, the best language models were small recurrent networks that struggled to form coherent sentences. Fast forward to today, and large language models like GPT-4 outperform most students on the SAT. How has this rapid progress been possible? 
Image credit: MIT CSAIL
In a new paper, researchers from Epoch, MIT FutureTech, and Northeastern University set out to shed light on this question. Their research breaks down the drivers of progress in language models into two factors: scaling up the amount of compute used to train language models, and algorithmic innovations. In doing so, they perform the most extensive analysis of algorithmic progress in language models to date.
Their findings show that due to algorithmic improvements, the compute required to train a language model to a certain level of performance has been halving roughly every 8 months. “This result is crucial for understanding both historical and future progress in language models,” says Anson Ho, one of the two lead authors of the paper. “While scaling compute has been crucial, it’s only part of the puzzle. To get the full picture you need to consider algorithmic progress as well.”
The paper’s methodology is inspired by “neural scaling laws”: mathematical relationships that predict language model performance given certain quantities of compute, training data, or language model parameters. By compiling a dataset of over 200 language models since 2012, the authors fit a modified neural scaling law that accounts for algorithmic improvements over time. 
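To make that concrete, here is a minimal sketch of what fitting such a modified scaling law could look like. The functional form, the data values, and the fitting setup below are illustrative assumptions for this post, not the paper's actual specification:

```python
# Minimal sketch: fit a scaling law in which "effective compute" grows
# exponentially over time due to algorithmic progress. All data values
# here are made up for illustration.
import numpy as np
from scipy.optimize import curve_fit

compute = np.array([1e18, 3e19, 5e20, 1e22, 8e23])   # training FLOP (hypothetical)
year = np.array([2013.0, 2016.0, 2019.0, 2021.0, 2023.0])
loss = np.array([4.2, 3.6, 3.1, 2.6, 2.2])           # benchmark loss (hypothetical)

def modified_scaling_law(X, a, b, g):
    """Power law in effective compute, where algorithmic progress
    multiplies raw compute by exp(g * years since 2012)."""
    C, t = X
    effective_compute = C * np.exp(g * (t - 2012.0))
    return a * effective_compute ** (-b)

params, _ = curve_fit(modified_scaling_law, (compute, year), loss,
                      p0=[30.0, 0.05, 0.5], maxfev=20000)
a, b, g = params

# Holding performance fixed, required raw compute shrinks like exp(-g * t),
# so the implied halving time of compute requirements, in months, is:
halving_months = 12.0 * np.log(2) / g
print(f"implied compute-requirement halving time: {halving_months:.1f} months")
```

Under this toy parameterization, the paper's headline 8-month halving time would correspond to a fitted g of about 12 ln(2) / 8 ≈ 1.04 per year.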
Based on this fitted model, the authors do a performance attribution analysis, finding that scaling compute has been more important than algorithmic innovations for improved performance in language modeling. In fact, they find that the relative importance of algorithmic improvements has decreased over time. “This doesn’t necessarily imply that algorithmic innovations have been slowing down,” says Tamay Besiroglu, who also co-led the paper.
“Our preferred explanation is that algorithmic progress has remained at a roughly constant rate, but compute has been scaled up substantially, making the former seem relatively less important.” The authors’ calculations support this framing, where they find an acceleration in compute growth, but no evidence of a speedup or slowdown in algorithmic improvements.
By modifying the model slightly, they also quantified the significance of a key innovation in the history of machine learning: the Transformer, which has become the dominant language model architecture since its introduction in 2017. The authors find that the efficiency gains offered by the Transformer correspond to almost two years of algorithmic progress in the field (at the estimated 8-month halving rate, roughly a factor-of-eight compute-equivalent gain), underscoring the significance of its invention.
While extensive, the study has several limitations. “One recurring issue we had was the lack of quality data, which can make the model hard to fit,” says Ho. “Our approach also doesn’t measure algorithmic progress on downstream tasks like coding and math problems, which language models can be tuned to perform.”
Despite these shortcomings, their work is a major step forward in understanding the drivers of progress in AI. Their results help shed light on how future developments in AI might play out, with important implications for AI policy. “This work, led by Anson and Tamay, has important implications for the democratization of AI,” said Neil Thompson, a coauthor and Director of MIT FutureTech. “These efficiency improvements mean that each year, levels of AI performance that were out of reach become accessible to more users.”
“LLMs have been improving at a breakneck pace in recent years. This paper presents the most thorough analysis to date of the relative contributions of hardware and algorithmic innovations to the progress in LLM performance,” says Open Philanthropy Research Fellow Lukas Finnveden, who was not involved in the paper.
“This is a question that I care about a great deal, since it directly informs what pace of further progress we should expect in the future, which will help society prepare for these advancements. The authors fit a number of statistical models to a large dataset of historical LLM evaluations and use extensive cross-validation to select a model with strong predictive performance. They also provide a good sense of how the results would vary under different reasonable assumptions, by doing many robustness checks. Overall, the results suggest that increases in compute have been and will keep being responsible for the majority of LLM progress as long as compute budgets keep rising by ≥4x per year. However, algorithmic progress is significant and could make up the majority of progress if the pace of increasing investments slows down.”
Written by Rachel Gordon
Source: Massachusetts Institute of Technology
4 notes · View notes
pisboy · 1 year ago
Note
how are you getting infinite craft to mash words together like that? it's only giving me things that are known words
honestly just keep throwing spaghetti at the wall
3 notes · View notes
generic-waffle · 1 year ago
Text
Hey, instead of asking ai “how could humans face extinction in the next few decades”, what if we asked “how could humans solve the possible causes of our own extinction in the next few decades?”
3 notes · View notes