#allen institute for artificial intelligence
thoughtlessarse · 21 days ago
Text
Tech companies are hellbent on pushing out ever more advanced artificial intelligence models — but there appears to be a grim cost to that progress.

In a new study in the science journal Frontiers in Communication, German researchers found that large language models (LLMs) that provide more accurate answers use exponentially more energy — and hence produce more carbon — than their simpler, lower-performing peers. In other words, the findings are a grim sign of things to come for the environmental impacts of the AI industry: the more accurate a model is, the higher its toll on the climate.

"Everyone knows that as you increase model size, typically models become more capable, use more electricity and have more emissions," Allen Institute for AI researcher Jesse Dodge, who didn't work on the German research but has conducted similar analysis of his own, told the New York Times.

The team examined 14 open-source LLMs of various sizes — they were unable to access the inner workings of commercial offerings like OpenAI's ChatGPT or Anthropic's Claude — and fed them 500 multiple-choice questions plus 500 free-response questions. Crunching the numbers, the researchers found that large, more accurate models such as DeepSeek produced the most carbon, while chatbots with smaller digital brains produced less. So-called "reasoning" chatbots, which break problems down into steps in their attempts to solve them, also produced markedly more emissions than their simpler brethren.
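The scaling relationship described above can be sketched with a back-of-envelope calculation. All numbers below are illustrative assumptions, not figures from the study: emissions per answer scale as energy per token, times tokens generated, times the carbon intensity of the grid — which is why verbose "reasoning" models fare worse even at equal per-token cost.

```python
# Illustrative back-of-envelope estimate (NOT figures from the study):
# emissions scale with the energy a model draws per generated token.

GRID_INTENSITY_G_PER_KWH = 480  # assumed global-average grid carbon intensity

def query_emissions_g(tokens_generated: int, wh_per_token: float,
                      grid_g_per_kwh: float = GRID_INTENSITY_G_PER_KWH) -> float:
    """Grams of CO2 for one answer: energy (kWh) times grid intensity (g/kWh)."""
    kwh = tokens_generated * wh_per_token / 1000.0
    return kwh * grid_g_per_kwh

# A "reasoning" model that emits long chains of intermediate steps burns
# energy on many more tokens than a terse model, even at the same assumed
# per-token cost -- the effect the study attributes to reasoning chatbots.
terse = query_emissions_g(tokens_generated=200, wh_per_token=0.01)
reasoning = query_emissions_g(tokens_generated=3000, wh_per_token=0.01)
print(round(terse, 2), round(reasoning, 2))  # 0.96 14.4
```

Under these assumed inputs the fifteen-fold token count translates directly into a fifteen-fold emissions gap, before any difference in model size is even counted.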
continue reading
7 notes · View notes
utopicwork · 1 month ago
Text
"Today (June 6th, 2025) we are excited to announce the long-awaited release of the successor to the Pile: the Common Pile v0.1. In collaboration with our friends at the University of Toronto and Vector Institute, Hugging Face, the Allen Institute for Artificial Intelligence, Teraflop AI, Cornell University, MIT, CMU, Lila Sciences, poolside, University of Maryland, College Park, and Lawrence Livermore National Laboratory we have spent the past two years meticulously curating a 8 TB corpus of openly licensed and public domain text for training large language models. We are also releasing Comma v0.1-1T and Comma v0.1-2T, models trained for 1T and 2T tokens respectively on this dataset. You can find everything we have released on arXiv, Hugging Face and GitHub."
2 notes · View notes
darkmaga-returns · 5 months ago
Text
1. The Wall Street Journal:
Trump administration officials ordered eight senior FBI employees to resign or be fired, and asked for a list of agents and other personnel who worked on investigations into the Jan. 6, 2021, attack on the U.S. Capitol, people familiar with the matter said, a dramatic escalation of President Trump's plans to shake up U.S. law enforcement.

On Friday, the Justice Department also fired roughly 30 prosecutors at the U.S. attorney's office in Washington who have worked on cases stemming from the Capitol riot, according to people familiar with the move and a Justice Department memo reviewed by The Wall Street Journal. The prosecutors had initially been hired for short-term roles as the U.S. attorney's office staffed up for the wave of more than 1,500 cases that arose from the attack by Trump supporters.

Trump appointees at the Justice Department also began assembling a list of FBI agents and analysts who worked on the Jan. 6 cases, some of the people said. Thousands of employees across the country were assigned to the sprawling investigation, which was one of the largest in U.S. history and involved personnel from every state. Acting Deputy Attorney General Emil Bove gave Federal Bureau of Investigation leadership until noon on Feb. 4 to identify personnel involved in the Jan. 6 investigations and provide details of their roles. Bove said in a memo he would then determine whether other discipline is necessary. Acting FBI Director Brian Driscoll said in a note to employees that he would be on that list, as would acting Deputy Robert Kissane. "We are going to follow the law, follow FBI policy and do what's in the best interest of the workforce and the American people—always," Driscoll wrote. Across the FBI and on Capitol Hill, the preparation of the list stirred fear and rumors of more firings to come—potentially even a mass purge. (Source: wsj.com, italics mine. The big question is whether "the list" will include FBI informants)
2. OpenAI Chief Executive Sam Altman said he believes his company should consider giving away its AI models, a potentially seismic strategy shift in the same week China's DeepSeek has upended the artificial-intelligence industry. DeepSeek's AI models are open-source, meaning anyone can use them freely and alter the way they work by changing the underlying code. In an "ask-me-anything" session on Reddit Friday, a participant asked Altman if the ChatGPT maker would consider releasing some of the technology within its AI models and publish more research showing how its systems work. Altman said OpenAI employees were discussing the possibility. "(I) personally think we have been on the wrong side of history here and need to figure out a different open source strategy," Altman responded. He added, "not everyone at OpenAI shares this view, and it's also not our current highest priority." (Source: wsj.com)
3. Quanta Magazine:
On December 17, 1962, Life International published a logic puzzle consisting of 15 sentences describing five houses on a street. Each sentence was a clue, such as "The Englishman lives in the red house" or "Milk is drunk in the middle house." Each house was a different color, with inhabitants of different nationalities, who owned different pets, and so on. The story's headline asked: "Who Owns the Zebra?" Problems like this one have proved to be a measure of the abilities — limitations, actually — of today's machine learning models. Also known as Einstein's puzzle or riddle (likely an apocryphal attribution), the problem tests a certain kind of multistep reasoning. Nouha Dziri, a research scientist at the Allen Institute for AI, and her colleagues recently set transformer-based large language models (LLMs), such as ChatGPT, to work on such tasks — and largely found them wanting. "They might not be able to reason beyond what they have seen during the training data for hard tasks," Dziri said. "Or at least they do an approximation, and that approximation can be wrong."
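Part of what makes this puzzle class a clean benchmark is that the ground truth is mechanically checkable: the search space is small enough to brute-force. A minimal sketch of a scaled-down, three-house zebra-style puzzle — the clues here are invented for illustration, not the original Life International set:

```python
from itertools import permutations

HOUSES = (1, 2, 3)

def solve():
    """Brute-force a 3-house zebra-style puzzle; returns zebra owners of all valid worlds."""
    solutions = []
    for colors in permutations(["red", "green", "blue"]):
        for nats in permutations(["Englishman", "Spaniard", "Norwegian"]):
            for pets in permutations(["dog", "zebra", "fox"]):
                color = dict(zip(HOUSES, colors))
                nat = dict(zip(HOUSES, nats))
                pet = dict(zip(HOUSES, pets))
                pos = {v: k for k, v in color.items()}   # color -> house
                npos = {v: k for k, v in nat.items()}    # nationality -> house
                ppos = {v: k for k, v in pet.items()}    # pet -> house
                if npos["Englishman"] != pos["red"]:   continue  # 1. Englishman in red house
                if ppos["dog"] != npos["Spaniard"]:    continue  # 2. Spaniard owns the dog
                if pos["green"] != pos["red"] + 1:     continue  # 3. green just right of red
                if npos["Norwegian"] != 1:             continue  # 4. Norwegian in first house
                if ppos["fox"] != pos["blue"]:         continue  # 5. fox owner in blue house
                solutions.append(nat[ppos["zebra"]])
    return solutions

print(solve())  # ['Englishman'] -- exactly one assignment survives all five clues
```

The full five-house puzzle works the same way, just over a larger (but still tiny) search space — which is why an LLM's failure on it is a failure of multistep deduction, not of tractability.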
3 notes · View notes
cogitoergofun · 1 year ago
Text
Researcher Jesse Dodge did some back-of-the-napkin math on the amount of energy AI chatbots use.
“One query to ChatGPT uses approximately as much electricity as could light one light bulb for about 20 minutes,” he says. “So, you can imagine with millions of people using something like that every day, that adds up to a really large amount of electricity.”
He’s a senior research analyst at the Allen Institute for AI and has been studying how artificial intelligence consumes energy. To generate its answers, AI uses far more power than traditional internet uses, like search queries or cloud storage. According to a report by Goldman Sachs, a ChatGPT query needs nearly 10 times as much electricity as a Google search query.
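The two figures quoted above can be sanity-checked against each other. Assuming a small ~10 W bulb (the article does not give a wattage) and a widely cited ~0.3 Wh estimate for a Google search — both assumptions, not numbers from NPR — the bulb analogy and the "nearly 10 times" Goldman ratio roughly agree:

```python
# Rough consistency check of the two quoted figures, using assumed inputs.

BULB_WATTS = 10          # assumption: a small 10 W LED bulb
BULB_MINUTES = 20        # "light one light bulb for about 20 minutes"
GOOGLE_SEARCH_WH = 0.3   # widely cited estimate; treated here as an assumption

chatgpt_query_wh = BULB_WATTS * BULB_MINUTES / 60    # Wh per query (~3.3)
ratio = chatgpt_query_wh / GOOGLE_SEARCH_WH          # ~11x, near Goldman's "nearly 10x"

# "Millions of people using something like that every day" -- assume 10M queries/day.
daily_kwh = chatgpt_query_wh * 10_000_000 / 1000
print(round(chatgpt_query_wh, 2), round(ratio, 1), round(daily_kwh))  # 3.33 11.1 33333
```

At the assumed volume that is tens of megawatt-hours per day from one service alone, which is the "really large amount of electricity" Dodge is pointing at.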
And as AI gets more sophisticated, it needs more energy. In the U.S., a majority of that energy comes from burning fossil fuels like coal and gas which are primary drivers of climate change.
Most companies working on AI, including ChatGPT maker OpenAI, don't disclose their emissions. But, last week, Google released a new sustainability report with a glimpse at this data. Deep within the 86-page report, Google said its greenhouse gas emissions had risen 48% since 2019. It attributed that surge to its data center energy consumption and supply chain emissions.
“As we further integrate AI into our products, reducing emissions may be challenging,” the report reads.
Google declined an interview with NPR.
2 notes · View notes
cavenewstimestoday · 2 months ago
Text
AI could deliver insights when paired with (the right) humans
GEOINT Symposium 2025 panel on next-generation AI for geospatial intelligence (left to right): moderator Sanjay Kumar, CEO of Geospatial World; Abe Usher, co-CEO of Black Cape; Nadine Alameh, executive director of the Taylor Geospatial Institute; Taegyun Jeon, founder and CEO of SI Analytics; and Don Polaski, vice president of AI at Booz Allen Hamilton.
ST. LOUIS – Artificial intelligence combined with human insight…
0 notes
ainewsmonitor · 5 months ago
Text
AI-Driven Disinformation Poses Growing Threat to Democracies in Africa and Europe, Study Finds
A new study has revealed that artificial intelligence (AI) is increasingly being used to spread disinformation, particularly during elections, with the potential to undermine democratic processes and divide societies. The research, conducted by Karen Allen of South Africa’s Institute for Security Studies and Christopher Nehring of Germany’s cyberintelligence institute, in collaboration with the…
0 notes
vibhanshpvt · 5 months ago
Text
data science classes in indore
The field of data science is rapidly evolving, and data science institutes play a crucial role in driving this progress. These institutions are hubs for research, education, and innovation, contributing significantly to advancements in various sectors. Here's a look at the importance and functions of data science institutes:  
Key Functions and Importance:
Education and Training:
Data science institutes provide comprehensive educational programs, including degrees, certifications, and workshops. These programs equip individuals with the necessary skills in areas like data analysis, machine learning, and statistical modeling.  
They cater to diverse learners, from students pursuing academic degrees to professionals seeking to upskill.
Research and Development:
These institutes are at the forefront of data science research, exploring new methodologies, algorithms, and applications.  
They conduct research in various domains, such as healthcare, finance, and technology, contributing to breakthroughs and practical solutions.
Industry Collaboration:
Data science institutes often collaborate with industry partners, fostering knowledge exchange and practical application of research findings.  
These collaborations can lead to the development of innovative products and services, as well as solutions to real-world problems.
Advancing the Field:
By conducting cutting-edge research and educating future data scientists, these institutes contribute to the overall advancement of the field.  
They play a vital role in shaping the direction of data science and its impact on society.  
Variations in Data Science Institutes:
Data science institutes exist within universities, as independent research centers, and as private training organizations.
Their focus may vary, with some emphasizing academic research, others focusing on industry applications, and others prioritizing education and training.
Examples of data science institutes, or organizations with strong data science programs, include:
Various university-based data science institutes.
The Allen Institute for Artificial Intelligence (AI2).
Institutes and programs offered through online learning platforms like Simplilearn.  
The Evolving Landscape:
With the rise of artificial intelligence and machine learning, data science institutes are increasingly focusing on these areas.
The ethical implications of data science are also gaining attention, with institutes playing a role in promoting responsible data practices.  
In today's world, many institutes now offer courses that combine data science with generative AI.
In conclusion, data science institutes are essential for driving innovation, educating professionals, and advancing the field of data science. Their contributions are vital for harnessing the power of data to solve complex problems and improve society.
1 note · View note
jcmarchi · 7 months ago
Text
MIT affiliates receive 2025 IEEE honors
New Post has been published on https://thedigitalinsider.com/mit-affiliates-receive-2025-ieee-honors/
MIT affiliates receive 2025 IEEE honors
The IEEE recently announced the winners of their 2025 prestigious medals, technical awards, and fellowships. Four MIT faculty members, one staff member, and five alumni were recognized.
Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health within the Department of Electrical Engineering and Computer Science (EECS) at MIT, received the IEEE Frances E. Allen Medal for “innovative machine learning algorithms that have led to advances in human language technology and demonstrated impact on the field of medicine.” Barzilay focuses on machine learning algorithms for modeling molecular properties in the context of drug design, with the goal of elucidating disease biochemistry and accelerating the development of new therapeutics. In the field of clinical AI, she focuses on algorithms for early cancer diagnostics. She is also the AI faculty lead within the MIT Abdul Latif Jameel Clinic for Machine Learning in Health and an affiliate of the Computer Science and Artificial Intelligence Laboratory, Institute for Medical Engineering and Science, and Koch Institute for Integrative Cancer Research. Barzilay is a member of the National Academy of Engineering, the National Academy of Medicine, and the American Academy of Arts and Sciences. She has earned the MacArthur Fellowship, MIT’s Jamieson Award for excellence in teaching, and the Association for the Advancement of Artificial Intelligence’s $1 million Squirrel AI Award for Artificial Intelligence for the Benefit of Humanity. Barzilay is a fellow of AAAI, ACL, and AIMBE.
James J. Collins, the Termeer Professor of Medical Engineering and Science, professor of biological engineering at MIT, and member of the Harvard-MIT Health Sciences and Technology faculty, earned the 2025 IEEE Medal for Innovations in Healthcare Technology for his work in “synthetic gene circuits and programmable cells, launching the field of synthetic biology, and impacting healthcare applications.” He is a core founding faculty member of the Wyss Institute for Biologically Inspired Engineering at Harvard University and an Institute Member of the Broad Institute of MIT and Harvard. Collins is known as a pioneer in synthetic biology, and currently focuses on employing engineering principles to model, design, and build synthetic gene circuits and programmable cells to create novel classes of diagnostics and therapeutics. His patented technologies have been licensed by over 25 biotech, pharma, and medical device companies, and he has co-founded several companies, including Synlogic, Senti Biosciences, Sherlock Biosciences, Cellarity, and the nonprofit Phare Bio. Collins’ many accolades include the MacArthur “Genius” Award, the Dickson Prize in Medicine, and election to the National Academies of Sciences, Engineering, and Medicine.
Roozbeh Jafari, principal staff member in MIT Lincoln Laboratory’s Biotechnology and Human Systems Division, was elected IEEE Fellow for his “contributions to sensors and systems for digital health paradigms.” Jafari seeks to establish impactful and highly collaborative programs between Lincoln Laboratory, MIT campus, and other U.S. academic entities to promote health and wellness for national security and public health. His research interests are wearable-computer design, sensors, systems, and AI for digital health, most recently focusing on digital twins for precision health. He has published more than 200 refereed papers and served as general chair and technical program committee chair for several flagship conferences focused on wearable computers. Jafari has received a National Science Foundation Faculty Early Career Development (CAREER) Award (2012), the IEEE Real-Time and Embedded Technology and Applications Symposium Best Paper Award (2011), the IEEE Andrew P. Sage Best Transactions Paper Award (2014), and the Association for Computing Machinery Transactions on Embedded Computing Systems Best Paper Award (2019), among other honors.
William Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT, was elected an IEEE Fellow for his “contributions to superconductive quantum computing technology and its teaching.” Director of the MIT Center for Quantum Engineering and associate director of the MIT Research Laboratory of Electronics, Oliver leads the Engineering Quantum Systems (EQuS) group at MIT. His research focuses on superconducting qubits, their use in small-scale quantum processors, and the development of cryogenic packaging and control electronics. The EQuS group closely collaborates with the Quantum Information and Integrated Nanosystems Group at Lincoln Laboratory, where Oliver was previously a staff member and a Laboratory Fellow from 2017 to 2023. Through MIT xPRO, Oliver created four online professional development courses addressing the fundamentals and practical realities of quantum computing. He is a member of the National Quantum Initiative Advisory Committee and has published more than 130 journal articles and seven book chapters. Inventor or co-inventor on more than 10 patents, he is a fellow of the American Association for the Advancement of Science and the American Physical Society; serves on the U.S. Committee for Superconducting Electronics; and is a lead editor for the IEEE Applied Superconductivity Conference.
Daniela Rus, director of the MIT Computer Science and Artificial Intelligence Laboratory, MIT Schwarzman College of Computing deputy dean of research, and the Andrew (1956) and Erna Viterbi Professor within the Department of Electrical Engineering and Computer Science, was awarded the IEEE Edison Medal for “sustained leadership and pioneering contributions in modern robotics.” Rus’ research in robotics, artificial intelligence, and data science focuses primarily on developing the science and engineering of autonomy, where she envisions groups of robots interacting with each other and with people to support humans with cognitive and physical tasks. Rus is a Class of 2002 MacArthur Fellow; a fellow of the Association for Computing Machinery, the Association for the Advancement of Artificial Intelligence, and IEEE; and a member of the National Academy of Engineering and the American Academy of Arts and Sciences.
Five MIT alumni were also recognized.
Steve Mann PhD ’97, a graduate of the Program in Media Arts and Sciences, received the Masaru Ibuka Consumer Technology Award “for contributions to the advancement of wearable computing and high dynamic range imaging.” He founded the MIT Wearable Computing Project and is currently professor of computer engineering at the University of Toronto as well as an IEEE Fellow.
Thomas Louis Marzetta ’72 PhD ’78, a graduate of the Department of Electrical Engineering and Computer Science, received the Eric E. Sumner Award “for originating the Massive MIMO technology in wireless communications.” Marzetta is a distinguished industry professor at New York University’s (NYU) Tandon School of Engineering and is director of NYU Wireless, an academic research center within the department. He is also an IEEE Life Fellow.
Michael Menzel ’81, a graduate of the Department of Physics, was awarded the Simon Ramo Medal “for development of the James Webb Space Telescope [JWST], first deployed to see the earliest galaxies in the universe,” along with Bill Ochs, JWST project manager at NASA, and Scott Willoughby, vice president and program manager for the JWST program at Northrop Grumman. Menzel is a mission systems engineer at NASA and a member of the American Astronomical Society.
Jose Manuel Fonseca Moura ’73, SM ’73, ScD ’75, a graduate of the Department of Electrical Engineering and Computer Science, received the Haraden Pratt Award “for sustained leadership and outstanding contributions to the IEEE in education, technical activities, awards, and global connections.” Currently, Moura is the Philip L. and Marsha Dowd University Professor at Carnegie Mellon University. He is also a member of the U.S. National Academy of Engineering, a fellow of the U.S. National Academy of Inventors, a member of the Portugal Academy of Science, an IEEE Fellow, and a fellow of the American Association for the Advancement of Science.
Marc Raibert PhD ’77, a graduate of the former Department of Psychology, now a part of the Department of Brain and Cognitive Sciences, received the Robotics and Automation Award “for pioneering and leading the field of dynamic legged locomotion.” He is founder of Boston Dynamics, an MIT spinoff and robotics company, and The AI Institute, based in Cambridge, Massachusetts, where he also serves as the executive director. Raibert is an IEEE Member.
0 notes
enterprisewired · 8 months ago
Text
Meta Opens Its A.I. Models to U.S. Military and Allied Use
Source: finance.yahoo.com
Meta, the parent company of Facebook, Instagram, and WhatsApp, recently announced a significant change in its policy by allowing U.S. government agencies and national security contractors to utilize its artificial intelligence (A.I.) models for military purposes. Known for its commitment to open-source A.I., Meta’s decision marks a shift from its previous stance, which prohibited using its technology in military or defense sectors. Meta’s A.I. models, called Llama, will now be accessible to federal agencies and private contractors focused on national security, including major defense companies like Lockheed Martin, Booz Allen Hamilton, Palantir, and Anduril. The move aligns with Meta’s broader strategy to support democratic values and the safety of allied nations, according to Nick Clegg, Meta’s president of global affairs.
Llama, Meta’s open-source A.I. technology, can be freely copied, shared, and adapted by developers worldwide, a practice Meta believes can lead to safer, more refined A.I. Meta’s leadership emphasized that the controlled application of its technology could reinforce the United States’ strategic edge in the global race for A.I. supremacy. In his blog post, Clegg highlighted the importance of A.I. in national defense, stating that Meta’s involvement aims to contribute to the “safety, security, and economic prosperity” of the U.S. and its allies.
Extending A.I. Support to Allied Nations
In addition to U.S. agencies, Meta plans to share its A.I. models with members of the Five Eyes intelligence alliance, which includes the United States, Canada, the United Kingdom, Australia, and New Zealand. This collaboration reflects Meta’s commitment to supporting allied nations in bolstering their security frameworks. Clegg mentioned that these partnerships could enhance cybersecurity and assist in monitoring activities that may threaten democratic values globally.
Meta’s push to distribute its open-source A.I. on a broader scale comes amid an intensifying A.I. race with tech giants like Google, OpenAI, and Microsoft. Unlike competitors that have chosen to restrict access to their A.I. technologies due to concerns over potential misuse, Meta has taken an alternative approach by sharing its code freely, allowing third-party developers to improve and adapt it. Since August, Llama has been downloaded over 350 million times, reflecting Meta’s goal of promoting widespread adoption of its A.I. technology. However, this decision has sparked debate over security risks and potential misuses.
Concerns Over Open-Source Approach and Regulatory Challenges
While Meta aims to advance U.S. technological interests, its open-source A.I. approach has not been without controversy. Some argue that unrestricted access could make the technology vulnerable to misuse, especially in global security contexts. Recently, a Reuters report alleged that Chinese institutions connected to the government had used Llama to develop applications for the People’s Liberation Army, sparking concerns over potential risks associated with open-source A.I. Meta executives disputed the report, emphasizing that the Chinese government was not authorized to use Llama for military purposes.
Clegg defended Meta’s open-source policy, arguing that transparent access allows experts to identify and mitigate risks more effectively. He added that responsible applications of Meta’s A.I. could serve broader strategic interests by enabling the United States to stay ahead technologically. According to Clegg, the aim is to establish a “virtuous circle” where A.I. can be developed ethically and contribute positively to both U.S. interests and global stability.
0 notes
fromdevcom · 8 months ago
Text
As the demand for artificial intelligence (AI) expertise continues to grow, students are increasingly looking to top universities for education and research opportunities in this field. Washington State, known for its thriving tech industry, is home to several universities that offer exceptional AI programs. These institutions not only provide advanced coursework in AI but also offer opportunities for hands-on research, internships, and collaborations with industry leaders. Here’s a look at some of the best universities in Washington for AI studies.
1. University of Washington (UW)
The University of Washington in Seattle is one of the premier institutions for AI education in the state and the country. UW’s Paul G. Allen School of Computer Science & Engineering is renowned for its research and development in AI, machine learning, and robotics. The university offers a comprehensive curriculum that includes courses in AI, machine learning, natural language processing, and computer vision. UW also provides students with the opportunity to engage in cutting-edge research projects, often in collaboration with major tech companies in the Seattle area such as Microsoft, Amazon, and Google.
Key Highlights:
Home to the Allen Institute for Artificial Intelligence (AI2), a leading research institute.
Offers a Master’s and Ph.D. in Computer Science with a focus on AI.
Strong industry connections for internships and job placements.
2. Washington State University (WSU)
Washington State University, located in Pullman, is another top contender for students interested in AI. The School of Electrical Engineering and Computer Science (EECS) at WSU offers robust programs that cover various aspects of AI, including machine learning, data mining, and robotics. WSU’s AI research is particularly strong in the areas of agricultural technology, healthcare, and environmental sustainability, making it a great choice for students who want to apply AI to real-world challenges.
Key Highlights:
Focus on AI applications in agriculture and environmental sciences.
Opportunities for interdisciplinary research and collaboration.
Offers a Bachelor’s, Master’s, and Ph.D. in Computer Science with AI-related courses.
3. Seattle University
Seattle University, a private institution in the heart of Seattle, offers a strong AI curriculum through its College of Science and Engineering. The university’s computer science program includes courses in AI, machine learning, and data science. Seattle University emphasizes ethical AI, preparing students to tackle the societal implications of AI technology. The university’s location in a major tech hub provides students with ample opportunities for internships and networking with industry professionals.
Key Highlights:
Strong focus on ethical AI and social impact.
Close proximity to leading tech companies for internship opportunities.
Offers a Bachelor’s in Computer Science with AI and data science concentrations.
4. Western Washington University (WWU)
Located in Bellingham, Western Washington University is known for its interdisciplinary approach to AI education. The university offers a Computer Science program with a specialization in AI, focusing on areas like machine learning, natural language processing, and robotics. WWU’s AI research initiatives are geared towards solving complex problems in healthcare, renewable energy, and education.
Key Highlights:
Emphasis on interdisciplinary research in AI.
Strong undergraduate program with AI-focused courses.
Research opportunities in healthcare and renewable energy.
Conclusion
Washington State is home to several universities that offer outstanding programs in artificial intelligence. From the University of Washington’s cutting-edge research to Washington State University’s application-driven studies, students have a variety of options to choose from. These institutions not only provide top-tier education but also connect students with the vibrant tech industry in Washington, setting the stage for successful careers in AI. Whether you’re looking to dive deep into AI research or apply AI to solve real-world problems, Washington’s universities offer the resources, expertise, and opportunities you need to succeed in this rapidly evolving field.
0 notes
goldislops · 9 months ago
Text
Largest Brain Map Ever Reveals Fruit Fly’s Neurons in Exquisite Detail
Wiring diagram lays out connections between nearly 140,000 neurons and reveals new types of nerve cell
50 largest neurons of the fly brain connectome.
Tyler Sloan and Amy Sterling for FlyWire, Princeton University, (Dorkenwald et al., Nature, 2024)
A fruit fly might not be the smartest organism, but scientists can still learn a lot from its brain. Researchers are hoping to do that now that they have a new map — the most complete for any organism so far — of the brain of a single fruit fly (Drosophila melanogaster). The wiring diagram, or ‘connectome’, includes nearly 140,000 neurons and captures more than 54.5 million synapses, which are the connections between nerve cells.
“This is a huge deal,” says Clay Reid, a neurobiologist at the Allen Institute for Brain Science in Seattle, Washington, who was not involved in the project but has worked with one of the team members who was. “It’s something that the world has been anxiously waiting for, for a long time.”
The map is described in a package of nine papers about the data published in Nature today. Its creators are part of a consortium known as FlyWire, co-led by neuroscientists Mala Murthy and Sebastian Seung at Princeton University in New Jersey.
A long road
Seung and Murthy say that they’ve been developing the FlyWire map for more than four years, using electron microscopy images of slices of the fly’s brain. The researchers and their colleagues stitched the data together to form a full map of the brain with the help of artificial-intelligence (AI) tools.
But these tools aren’t perfect, and the wiring diagram needed to be checked for errors. The scientists spent a great deal of time manually proofreading the data — so much time that they invited volunteers to help. In all, the consortium members and the volunteers made more than 3 million manual edits, according to co-author Gregory Jefferis, a neuroscientist at the University of Cambridge, UK. (He notes that much of this work took place in 2020, when fly researchers were at loose ends and working from home during the COVID-19 pandemic.)
But the work wasn’t finished: the map still had to be annotated, a process in which the researchers and volunteers labelled each neuron as a particular cell type. Jefferis compares the task to assessing satellite images: AI software might be trained to recognize lakes or roads in such images, but humans would have to check the results and name the specific lakes or roads themselves. All told, the researchers identified 8,453 types of neuron — much more than anyone had expected. Of these, 4,581 were newly discovered, which will create new research directions, Seung says. “Every one of those cell types is a question,” he adds.
The team was surprised by some of the ways in which the various cells connect to one another, too. For instance, neurons that were thought to be involved in just one sensory wiring circuit, such as a visual pathway, tended to receive cues from multiple senses, including hearing and touch. “It’s astounding how interconnected the brain is,” Murthy says.
Exploring the map
The FlyWire map data have been available for the past few years for researchers to explore. This has enabled scientists to learn more about the brain and about fruit flies — findings that are captured in some of the papers published in Nature today.
In one paper, for example, researchers used the connectome to create a computer model of the entire fruit-fly brain, including all the connections between neurons. They tested it by activating neurons that they knew either sense sweet or bitter tastes. These neurons then launched a cascade of signals through the virtual fly’s brain, ultimately triggering motor neurons tied to the fly’s proboscis — the equivalent of the mammalian tongue. When the sweet circuit was activated, a signal for extending the proboscis was transmitted, as if the insect was preparing to feed; when the bitter circuit was activated, this signal was inhibited. To validate these findings, the team activated the same neurons in a real fruit fly. The researchers learnt that the simulation was more than 90% accurate at predicting which neurons would respond and therefore how the fly would behave.
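The activation-cascade idea in that simulation can be sketched as threshold propagation over a signed connectome graph. This is a toy illustration only, not the published model: the mini-circuit, neuron names, and weights below are invented for the example.

```python
# Toy sketch of connectome-based activation propagation: a neuron fires when
# the summed input from already-active presynaptic neurons crosses a threshold;
# inhibitory synapses carry negative weights.
def propagate(synapses, start, threshold=1.0, steps=5):
    """synapses: dict mapping (pre, post) -> signed synaptic weight."""
    active = set(start)
    for _ in range(steps):
        drive = {}
        for (pre, post), w in synapses.items():
            if pre in active:
                drive[post] = drive.get(post, 0.0) + w
        newly = {n for n, d in drive.items() if d >= threshold and n not in active}
        if not newly:
            break
        active |= newly
    return active

# Hypothetical mini-circuit: sweet taste -> interneuron -> proboscis motor
# neuron, with a bitter pathway that inhibits the motor neuron.
wiring = {
    ("sweet", "inter"): 1.0,
    ("inter", "motor"): 1.0,
    ("bitter", "inhib"): 1.0,
    ("inhib", "motor"): -2.0,
}
print(propagate(wiring, {"sweet"}))            # motor neuron activates
print(propagate(wiring, {"sweet", "bitter"}))  # inhibition blocks the motor neuron
```

Activating only the sweet pathway reaches the motor neuron ("extend proboscis"); co-activating the bitter pathway suppresses it, mirroring the inhibition the researchers observed.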
In another study, researchers describe two wiring circuits that signal a fly to stop walking. One of these contains two neurons that are responsible for halting ‘walk’ signals sent from the brain when the fly wants to stop and feed. The other circuit includes neurons in the nerve cord, which receives and processes signals from the brain. These cells create resistance in the fly’s leg joints, allowing the insect to stop while it grooms itself.
One limitation of the new connectome is that it was created from a single female fruit fly. Although fruit-fly brains are similar to each other, they are not identical. Until now, the most complete connectome for a fruit-fly brain was a map of a ‘hemibrain’ — a portion of a fly’s brain containing around 25,000 neurons. In one of the Nature papers out today, Jefferis, Davi Bock, a neurobiologist at the University of Vermont in Burlington, and their colleagues compared the FlyWire brain with the hemibrain.
Some of the differences were striking. The FlyWire fly had almost twice as many neurons in a brain structure called the mushroom body, which is involved in smell, compared with the fly used in the hemibrain-mapping project. Bock thinks the discrepancy could be because the hemibrain fly might have starved while it was still growing, which harmed its brain development.
The FlyWire researchers say that much work remains to be done to fully understand the fruit-fly brain. For instance, the latest connectome shows only how neurons connect through chemical synapses, across which molecules called neurotransmitters send information. It doesn’t offer any information about electrical connectivity between neurons or about how neurons chemically communicate outside synapses. And Murthy hopes to eventually have a male fly connectome, too, which would allow researchers to study male-specific behaviours such as singing. “We’re not done, but it’s a big step,” Bock says.
This article is reproduced with permission and was first published on October 2, 2024.
1 note · View note
neveronsundays · 6 months ago
Text
This argument is from one blog. Here are some other sources with info on the environmental impact of AI:
"But when it comes to the environment, there is a negative side to the explosion of AI and its associated infrastructure, according to a growing body of research. The proliferating data centres that house AI servers produce electronic waste. They are large consumers of water, which is becoming scarce in many places. They rely on critical minerals and rare elements, which are often mined unsustainably. And they use massive amounts of electricity, spurring the emission of planet-warming greenhouse gases. "
https://www.unep.org/news-and-stories/story/ai-has-environmental-problem-heres-what-world-can-do-about
"The training process for a single AI model, such as an LLM, can consume thousands of megawatt hours of electricity and emit hundreds of tons of carbon. AI model training can also lead to the evaporation of an astonishing amount of freshwater into the atmosphere for data center heat rejection, potentially exacerbating stress on our already limited freshwater resources. These environmental impacts are expected to escalate considerably, and there remains a widening disparity in how different regions and communities are affected. "
https://hbr.org/2024/07/the-uneven-distribution-of-ais-environmental-impacts
"The International Energy Agency estimates that by 2026, electricity consumption by data centers, cryptocurrency, and artificial intelligence could reach 4% of annual global energy usage — roughly equal to the amount of electricity used by the entire country of Japan."
"Training and running an AI system requires a great deal of computing power and electricity, and the resulting carbon dioxide emissions are one way AI affects the climate. But its environmental impact goes well beyond its carbon footprint.
“It is important for us to recognize the CO2 emissions of some of these large AI systems especially,” says Jesse Dodge, a research scientist at the Allen Institute for AI in Seattle."
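The IEA figure quoted above can be sanity-checked with rough arithmetic. The global and Japanese electricity totals below are assumed ballpark values, not numbers from the quoted sources:

```python
# Back-of-the-envelope check of the IEA comparison: 4% of global electricity
# use lands in the same range as Japan's annual consumption.
GLOBAL_ELECTRICITY_TWH = 27_000  # rough recent global total (assumption)
JAPAN_TWH = 1_000                # rough annual figure for Japan (assumption)

data_center_share = 0.04 * GLOBAL_ELECTRICITY_TWH
print(f"4% of global electricity ≈ {data_center_share:.0f} TWh")
print(f"ratio to Japan ≈ {data_center_share / JAPAN_TWH:.2f}")
```

Under these assumptions the 4% share comes to roughly 1,080 TWh, about the same order as Japan's consumption, which is consistent with the comparison in the quote.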
I think almost all of the environmental case against AI is factually incorrect fear mongering, or "misinformation"
2K notes · View notes
ai-news · 10 months ago
Link
Multimodal models represent a significant advancement in artificial intelligence by enabling systems to process and understand data from multiple sources, like text and images. These models are essential for applications like image captioning and visual question answering. #AI #ML #Automation
0 notes
companyknowledgenews · 10 months ago
Text
A tiny new open-source AI model performs as well as powerful big ones
https://www.merchant-business.com/a-tiny-new-open-source-ai-model-performs-as-well-as-powerful-big-ones/
The Allen Institute for Artificial Intelligence (Ai2), a research nonprofit, is releasing a family of open-source multimodal language models, called Molmo, that it says perform as well as top proprietary models from OpenAI, Google, and Anthropic. The organization claims that its biggest Molmo model, which has 72 billion parameters, outperforms OpenAI's GPT-4o, which is estimated to have over a trillion parameters, in tests that measure things like understanding images, charts, and documents. Meanwhile, Ai2 says a smaller Molmo model, with 7 billion parameters, comes close to OpenAI's state-of-the-art model in performance, an achievement it ascribes to vastly more efficient data collection and training methods.
What Molmo shows is that open-source AI development is now on par with closed, proprietary models, says Ali Farhadi, the CEO of Ai2. And open-source models have a significant advantage, as their open nature means other people can build applications on top of them. The Molmo demo is available here, and it will be available for developers to tinker with on the Hugging Face website. (Certain elements of the most powerful Molmo model are still shielded from view.)
Other large multimodal language models are trained on vast data sets containing billions of images and text samples that have been hoovered from the internet, and they can include several trillion parameters. This process introduces a lot of noise to the training data and, with it, hallucinations, says Ani Kembhavi, a senior director of research at Ai2. In contrast, Ai2's Molmo models have been trained on a significantly smaller and more curated data set containing only 600,000 images, and they have between 1 billion and 72 billion parameters. This focus on high-quality data, versus indiscriminately scraped data, has led to good performance with far fewer resources, Kembhavi says.
Ai2 achieved this by getting human annotators to describe the images in the model's training data set in excruciating detail over multiple pages of text. They asked the annotators to talk about what they saw instead of typing it. Then they used AI techniques to convert their speech into data, which made the training process much quicker while reducing the computing power required. These techniques could prove really useful if we want to meaningfully govern the data that we use for AI development, says Yacine Jernite, who is the machine-learning and society lead at Hugging Face and was not involved in the research. "It makes sense that in general, training on higher-quality data can lower the compute costs," says Percy Liang, the director of the Stanford Center for Research on Foundation Models, who also did not participate in the research.
Another impressive capability is that the model can "point" at things, meaning it can analyze elements of an image by identifying the pixels that answer queries. In a demo shared with MIT Technology Review, Ai2 researchers took a photo outside their office of the local Seattle marina and asked the model to identify various elements of the image, such as deck chairs. The model successfully described what the image contained, counted the deck chairs, and accurately pointed to other things in the image as the researchers asked. It was not perfect, however. It could not locate a specific parking lot, for example.
Other advanced AI models are good at describing scenes and images, says Farhadi. But that's not enough when you want to build more sophisticated web agents that can interact with the world and can, for example, book a flight. Pointing allows people to interact with user interfaces, he says. Jernite says Ai2 is operating with a greater degree of openness than we've seen from other AI companies. And while Molmo is a good start, he says, its real significance will lie in the applications developers build on top of it, and the ways people improve it.
Farhadi agrees. AI companies have drawn massive, multitrillion-dollar investments over the past few years. But in the past few months, investors have expressed skepticism about whether that investment will bring returns. Big, expensive proprietary models won't do that, he argues, but open-source ones can. He says the work shows that open-source AI can also be built in a way that makes efficient use of money and time. "We're excited about enabling others and seeing what others would build with this," Farhadi says.
0 notes
ainewsmonitor · 5 months ago
Text
AI-Driven Disinformation Poses Growing Threat to Democracies in Africa and Europe, Study Finds
A new study has revealed that artificial intelligence (AI) is increasingly being used to spread disinformation, particularly during elections, with the potential to undermine democratic processes and divide societies. The research, conducted by Karen Allen of South Africa’s Institute for Security Studies and Christopher Nehring of Germany’s cyberintelligence institute, in collaboration with the…
0 notes