#Open Source LLM Model
Text
Unleashing the Power of Azure OpenAI and Service Embedding for unstructured document search
Are you struggling to find information buried deep within unstructured documents? It's time to unleash the power of Azure OpenAI and Service Embedding! In this blog post, we'll show you how to harness the latest advancements in AI and…
#Azure#Azure AI#Azure OpenAI#Azure OpenAI Service#LLM#Microsoft#microsoft azure#Open Source LLM Model#OpenAI
0 notes
Text
The Falcon LLM is an open-source large language model created by the Technology Innovation Institute (TII) in Abu Dhabi, which also developed Noor, the largest Arabic language model. It comes in two versions: Falcon-7B and Falcon-40B. Falcon-40B is a new addition to the open LLM leaderboard, where it is ranked #1, and it delivers impressive performance and inference efficiency.
#ai#open source#artificial intelligence#Falcon 40B#llms#large language model#ai technology#generative ai#uaenews#research scientist
2 notes
Text
There is something deliciously funny about AI getting replaced by AI.
tl;dr: China yeeted a cheaper, faster, lower-environmental-impact, open-source LLM onto the market, and US AI companies have lost nearly $600 billion in value since yesterday.
Silicon Valley is having a meltdown.
And ChatGPT just lost its job to AI~.
27K notes
Text
so "xai" just released grok open source... What the fuck Is this? it feels like a lame attempt to justify Elon Musks lawsuit against openai (which I honestly hope he wins anyways, open source all the way). I don't even see the point in open sourcing something so unbelievably AWFUL. This thing is 314 BILLION PARAMETERS... Yet mixtral 8x7b (which I can reasonably run on consumer hardware) STILL beats it.
#llm#The AI space is fucking weird man#why??#the biggest open source model in the world and it's terrible
0 notes
Text
Mistral AI Open-Sources Mistral 7B: A Small Yet Powerful Language Model Adaptable to Many Use-Cases
Exciting news! Mistral AI has just open-sourced its language model, Mistral 7B, which packs 7 billion parameters and outperforms similar models on key benchmarks. This powerful model has a wide range of applications, including code generation, content creation, customer service, and research. Best of all, Mistral AI is committed to open-source principles, allowing free usage, modification, and distribution. Check out the blog post to learn more about Mistral AI's language model and how it can fit your own use cases: [Link to Blog Post](https://ift.tt/PQ8fp9e) #AI #LanguageModels #OpenSource #Innovation
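For anyone who wants to try it, here is a minimal sketch of loading and prompting the model with the Hugging Face transformers library. The checkpoint name (mistralai/Mistral-7B-v0.1), the hardware assumptions (roughly 16 GB of GPU memory in float16, or slower CPU offloading via the accelerate package), and the prompt are assumptions of this sketch, not details from the announcement.

```python
# Minimal sketch: run Mistral 7B locally via Hugging Face transformers.
# Assumes `pip install transformers torch accelerate` and enough memory
# (~16 GB of GPU memory in float16, or slower CPU inference).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-v0.1"  # base checkpoint on the Hugging Face Hub

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halves memory use vs. float32
    device_map="auto",          # places layers on GPU/CPU automatically (needs accelerate)
)

prompt = "Write a Python function that reverses a string:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```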
#itinai.com#AI#News#Mistral AI Open-Sources Mistral 7B: A Small Yet Powerful Language Model Adaptable to Many Use-Cases#AI News#AI tools#Dhanshree Shripad Shenwai#Innovation#itinai#LLM#MarkTechPost#Productivity
0 notes
Text
Mastering Azure OpenAI Service - Ignite your journey with Prompt Engineering Techniques and More.
In a world where artificial intelligence is reshaping industries and transforming the way we live, Azure OpenAI stands at the forefront, embodying the boundless possibilities of this futuristic technology. But what does Azure OpenAI truly represent? It is…
#Azure#Azure AI#azure ai tutorial#Azure chatbot#Azure OpenAI#Microsoft#microsoft ai#microsoft azure#Open Source LLM Model#OpenAI#Prompt engineering
0 notes
Text
The premature unleashing of AI and large language models (LLMs) in particular onto the open Internet is already having dire consequences for what were, for years, considered stabilized, centralized (if flawed) systems, such as the search engine. AI, thanks to the scope of its spread, its rogue unreliability (it lies — often), the way it poisons search results, hijacks SEO, and is increasingly being used for disinformation and fraud, is reintroducing a fundamental and destabilizing distrust back into the Internet. Once more, I can no longer trust the results Google provides me. On a daily basis I have to ask myself increasingly familiar questions: Is this first result a legitimate news source? Is this image of a protest real? Is that picture of a Kandinsky painting really his, or is it an AI forgery of his work? Across the board, it's becoming increasingly hard to tell. For me and countless others, what used to be rote Internet usage has now turned into a nightmarish amount of wasted time spent discerning what is and isn't real. As far as I, the ordinary user, am concerned, AI is evolving not into a life-changing and labor-saving technology as was promised by its capitalist overlords, but rather into a form of malware that targets, whether unwittingly or not, critical Internet infrastructure.
1 October 2024
178 notes
Text
Dive into the world of DBRX, a state-of-the-art open large language model. With its unique mixture-of-experts architecture and extensive training data, DBRX is revolutionizing the field of AI. Discover how DBRX excels in various tasks and benchmarks, outshining both open and proprietary models.
#DBRX#Databricks#AI#OpenSource#LLM#MoEArchitecture#datascience#machinelearning#artificial intelligence#open source#machine learning#coding#llms#large language model
1 note
Text
I'm not even a hardcore anti-AI person. I think LLMs are neat as a technology and I enjoy running a small open source model locally on my computer!!! But I truly feel like a guy who's into origami suddenly watching society decide they're gonna make every building's load-bearing wall out of origami paper. Like that's not what it's for 😭
#It's for me talking to my lil computer for fun#And automating some repetitive tasks / formatting / etc#But your building is gonna fucking collapse!!! It's nonsense!!!
67 notes
Text
Chat, thoughts on the ethics of making an LLM-powered Tumblr bot, except instead of draining oceans with ChatGPT I run a small open-source model on my own server (by small I mean these things actually run on an average laptop)?
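For scale, here's a minimal sketch of what that kind of local setup can look like, assuming an Ollama server running on the same machine with a small quantized model already pulled; the model tag, prompt, and endpoint are assumptions of the sketch, not details from the post.

```python
# Minimal sketch: query a small local model through Ollama's HTTP API.
# Assumes Ollama is installed and running (https://ollama.com), and a
# small model has been pulled, e.g. `ollama pull llama3.2:3b`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:3b",  # ~2 GB quantized; runs on a laptop CPU
        "prompt": "Draft a short, friendly Tumblr reply about open-source AI.",
        "stream": False,         # return one JSON object instead of a stream
    },
    timeout=120,
)
print(resp.json()["response"])
```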
32 notes
Note
Hello Mr. ENTJ. I'm an ENTJ sp/so 3 woman in her early twenties with a similar story to yours (Asian immigrant with a chip on her shoulder, used going to university as a way to break generational cycles). I graduated last month and have managed to break into strategy consulting with a firm that specialises in AI. Given your insider view into AI and your experience also starting out as a consultant, I would love to hear about any insights you might have or advice you may have for someone in my position. I would also be happy to take this discussion to somewhere like Discord if you'd prefer not to share in public/would like more context on my situation. Thank you!
Insights for your career or insights on AI in general?
On management consulting as a career, check the #management consulting tag.
On being a consultant working in AI:
Develop a solid understanding of the technical foundation behind LLMs. You don’t need a computer science degree, but you should know how they’re built and what they can do. Without this knowledge, you won’t be able to apply them effectively to solve real-world problems. A great starting point is deeplearning.ai by Andrew Ng: Fundamentals, Prompt Engineering, Fine-Tuning
Know all the terminology and definitions. What's fine-tuning? What's prompt engineering? What's a hallucination? Why do they happen? Here's a good starter guide (and see the short prompting sketch after this list).
Understand the difference between various models, not just in capabilities but also training, pricing, and usage trends. Great sources include Artificial Analysis and Hugging Face.
Keep up to date on the newest and hottest AI startups. Some are hype trash milking the AI gravy train but others have actual use cases. This will reveal unique and interesting use cases in addition to emerging capabilities. Example: Forbes List.
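As a concrete illustration of the prompt-engineering basics mentioned in the list above, here is a minimal sketch comparing a zero-shot prompt with a few-shot prompt using the OpenAI Python client. The model name (gpt-4o-mini), the sentiment-classification task, and the example labels are assumptions of the sketch, not the poster's.

```python
# Minimal sketch: zero-shot vs. few-shot prompting with the OpenAI client.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Zero-shot: the model gets only the bare task.
zero_shot = [
    {"role": "user", "content": "Classify the sentiment: 'The rollout was a mess.'"},
]

# Few-shot: the same task, preceded by worked examples that pin down the
# expected output format -- one of the simplest prompt-engineering wins.
few_shot = [
    {"role": "user", "content": "Classify the sentiment: 'I love this phone.'"},
    {"role": "assistant", "content": "positive"},
    {"role": "user", "content": "Classify the sentiment: 'Battery died in an hour.'"},
    {"role": "assistant", "content": "negative"},
    {"role": "user", "content": "Classify the sentiment: 'The rollout was a mess.'"},
]

for name, messages in [("zero-shot", zero_shot), ("few-shot", few_shot)]:
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(name, "->", reply.choices[0].message.content)
```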
On the industry of AI:
It's here to stay. You can't put the genie back in the bottle (for anyone reading this who's still a skeptic).
AI will eliminate certain jobs that are easily automated (ex: quality assurance engineers) but also create new ones or make existing ones more important and in-demand (ex: prompt engineers, machine learning engineers, etc.)
The most valuable career paths will be the ones that deal with human interaction, connection, and communication. Soft skills are more important than ever because technical tasks can be offloaded to AI. As Sam Altman once told me in a meeting: "English is the new coding language."
Open source models will win (Llama, Mistral, DeepSeek) because closed source models don't have a moat. Pick the cheapest model because they're all similarly capable.
The money is in the compute, not the models: AI chips, AI infrastructure, etc. are a scarce resource and the new oil. This is why OpenAI ($150 billion valuation) is only 5% the value of NVIDIA (a $3 trillion behemoth). Follow the compute because this is where the growth will happen.
America and China will lead in the rapid development and deployment of AI technology; the EU will lead in regulation. Keep your eye on these 3 regions depending on what you're looking to better understand.
28 notes
Text
"Perfect, ethical AI"
I wanted to write a long post about that video with 100k views asserting that Neuro was trained on Twitch chat and only "ethical data," but I don't have the energy. I do want to say that, in my opinion, it would be best if Vedal himself addressed these misconceptions. Vedal gives the impression that he has to keep things secret in order not to offer potential competitors any advantage. But anyone who could even attempt to copy what he does would obviously be aware that he is using an open-source language model and all that. Him not even saying he is using an open-source model really just keeps regular people in the dark, confused about the very basics of the technology they are interacting with, and it allows this crazy misconception, believed by many, to persist: that Vedal trained the LLM from scratch.
When people who believe this misconception find out the truth from somewhere else, they may feel like they were misled. And honestly, Vedal's silence is misleading. I will give him the benefit of the doubt that this is just scary for him to talk about, and that he is not doing this to benefit from the misconceptions it has brought on. But I don't respect it at this point.
20 notes
Text
if I could make people learn one thing to a disgusting level of detail it would be language model implementation, because the hearsay is absolutely insane here. but one of my posts about open source software got really popular, so I want to put forth that if you're hyped about open source software and want to support it and are very serious about wanting to protect it, maybe avoid saying "llms" when you mean a commercial, shady, closed-source language model like chatgpt.
most people here don't seem aware of how many options there are, or that the vast majority of them are actually free and open source. I would argue that boosting smaller, more ethical alternatives to well-known commercial closed LLMs is more productive than debating the use of any LLM at all, as if using an LLM meant being beholden to the doings of a specific company, when it doesn't have to be, because of the free availability and public scrutability of open source software.
if you have moral considerations over the use of AI in general because of things you've heard about the largest commercial models, it's worth knowing that those implementation choices do not represent the only way to implement a language model, or even a reasonable sample of an "average" llm (they are outliers by significant margins, and more so as time goes on), and that it is entirely possible to use models that are vastly more efficient, with similar functionality, implemented in more responsible ways. This list doesn't even include the especially small ones, just the ones with the most functional parity to closed models.
More ethical AI is literally widely available in the public domain. Right now. Because of open source software. And it is totally unsung.
commercial companies didn't make the most ethical AI, but the open source ecosystem is making it, and people still talk like language models themselves cannot be built in a way that isn't fundamentally exploitative of consumers' data, or are by nature always needlessly ecologically irresponsible (because that is what Sam Altman told people), when most llms produced in the last two years are absolutely diminutive in comparison in size and use of resources, and have rather shown him up as something of a scapegoating liar.
So if you're complaining about llms in a general sense, at least remember to say "open source ai did it better," because they did.
9 notes
Note
I half agree with the "pro-ai" anon, especially when it's used more as a tool to improve the writing or discuss how the plot will shape up, i dont think you can argue that its not your own work if youre asking for english corrections or for "criticize this text" type of answers. I do recognize there's many ethical concerns with generative artificial intelligence, and I do wish the content (especially visual art) wouldn't be so pushed by everything everywhere, and that companies weren't scraping copyrighted material, which is 100% wrong imo. Yet there are many ways in which it can be used as a tool instead of just using it to do everything for you.
But yes, an environmentally costly tool :/ it's the one thing that makes me pause before using a llm. My first time using an llm was before chatgpt existed & also in the context of a machine learning thesis, so it's probably why I cannot buy into the complete demonization of it.
Ik I'm all over the place with this anon, but my views conflict a lot. I wish it was done differently and better, because it could be with the models that have been developed (for example a locally run model powered by solar energy, with reusable cooling, optimized and trained on your personal texts, open source), but everyone behind those companies seeks profit no matter what.
Finally, for the artistic part. One way to put it that really stuck with me was this: it's really sick to live your life viewing art as a consumable item, or even worse, as a problem to be solved.
i understand your point about 'i dont think you can argue that its not your own work if youre asking for english corrections or for "criticize this text" type of answers' -> sure, technically it is your work. but i guess i just wonder (and this isn't necessarily aimed at you, anon, i'm just reflecting in general) why would you want to use a machine for this?
does it not feel wrong to you to do that? would you not rather engage with another human being about your ideas? i understand the rhetoric about AI being a tool, and to an extent i can see where it can be helpful (though i really don't think using it to correct your english is a great idea - you won't learn anything if something else does it for you automatically, rather than using a spelling/grammar check and going through your mistakes one by one). however, when it comes to creating art, be it visual media or written, i can't help but ask why? why do you need to talk about it with a machine that has no ability to meaningfully engage with your ideas, that can't understand the emotion or impact of them? art is supposed to come from you! if you look at the greatest works of art and literature in history, how many of them were created by engaging with a machine? asking the machine to pick up on what it thinks is wrong with your work (which it has learned through the scraping of inherently biased datasets) seems like a pointless endeavour to me.
it might be cringe, but one of my favourite movies since i was very young is dead poets society, and in that movie, we're invited to think about art, in particular poetry, as necessary to sustain life. 'we don't read and write poetry because it's cute, we read and write poetry because we are members of the human race, and the human race is filled with passion.' that quote is what this makes me think of! art is inherently self-generative - we create art because we're humans and it's what we do - and at the same time shaped by the environments/people/places around us. i just don't think the input of a machine can assist that
and yes, maybe i'm making it too deep, but i feel earnestly about this. it's just talking about fanfiction and AI, but i do have a kneejerk response to it as a result. i understand that AI can be a useful tool for some things, but i'd encourage you to ask yourself why you need a tool for things like this, when for the entirety of human history works of art have been created without it
9 notes
Text
Connecting the dots of recent research suggests a new future for traditional websites:
Artificial Intelligence (AI)-powered search can provide a full answer to a user’s query 75% of the time without the need for the user to go to a website, according to research by The Atlantic.
A worldwide survey from the University of Toronto revealed that 22% of ChatGPT users “use it as an alternative to Google.”
Research firm Gartner forecasts that traffic to the web from search engines will fall 25% by 2026.
Pew Research found that a quarter of all web pages developed between 2013 and 2023 no longer exist.
The large language models (LLMs) of generative AI that scraped their training data from websites are now using that data to eliminate the need to go to many of those same websites. Respected digital commentator Casey Newton concluded, “the web is entering a state of managed decline.” The Washington Post headline was more dire: “Web publishers brace for carnage as Google adds AI answers.”
From decentralized information to centralized conclusions
Created by Sir Tim Berners-Lee in 1989, the World Wide Web redefined the nature of the internet into a user-friendly linkage of diverse information repositories. “The first decade of the web…was decentralized with a long-tail of content and options,” Berners-Lee wrote this year on the occasion of its 35th anniversary. Over the intervening decades, that vision of distributed sources of information has faced multiple challenges. The dilution of decentralization began with powerful centralized hubs such as Facebook and Google that directed user traffic. Now comes the ultimate disintegration of Berners-Lee’s vision as generative AI reduces traffic to websites by recasting their information.
The web’s open access to the world’s information trained the large language models (LLMs) of generative AI. Now, those generative AI models are coming for their progenitor.
The web allowed users to discover diverse sources of information from which to draw conclusions. AI cuts out the intellectual middleman to go directly to conclusions from a centralized source.
The AI paradigm of cutting out the middleman appears to have been further advanced in Apple’s recent announcement that it will incorporate OpenAI to enable its Siri app to provide ChatGPT-like answers. With this new deal, Apple becomes an AI-based disintermediator, not only eliminating the need to go to websites, but also potentially disintermediating the need for the Google search engine for which Apple has been paying $20 billion annually.
The Atlantic, University of Toronto, and Gartner studies suggest the Pew research on website mortality could be just the beginning. Generative AI’s ability to deliver conclusions cannibalizes traffic to individual websites, threatening the raison d’être of all websites, especially those that are commercially supported.
Echoes of traditional media and the web
The impact of AI on the web is an echo of the web’s earlier impact on traditional information providers. “The rise of digital media and technology has transformed the way we access our news and entertainment,” the U.S. Census Bureau reported in 2022, “It’s also had a devastating impact on print publishing industries.” Thanks to the web, total estimated weekday circulation of U.S. daily newspapers fell from 55.8 million in 2000 to 24.2 million by 2020, according to the Pew Research Center.
The World Wide Web also pulled the rug out from under the economic foundation of traditional media, forcing an exodus to proprietary websites. At the same time, it spawned a new generation of upstart media and business sites that took advantage of its low-cost distribution and high-impact reach. Both large and small websites now feel the impact of generative AI.
Barry Diller, CEO of media owner IAC, harkened back to that history when he warned a year ago, “We are not going to let what happened out of free internet happen to post-AI internet if we can help it.” Ominously, Diller observed, “If all the world’s information is able to be sucked up in this maw, and then essentially repackaged in declarative sentence in what’s called chat but isn’t chat…there will be no publishing; it is not possible.”
The New York Times filed a lawsuit against OpenAI and Microsoft alleging copyright infringement from the use of Times data to train LLMs. “Defendants seek to free-ride on The Times’s massive investment in its journalism,” the suit asserts, “to create products that substitute for The Times and steal audiences away from it.”
Subsequently, eight daily newspapers owned by Alden Global Capital, the nation’s second largest newspaper publisher, filed a similar suit. “We’ve spent billions of dollars gathering information and reporting news at our publications, and we can’t allow OpenAI and Microsoft to expand the Big Tech playbook of stealing our work to build their own businesses at our expense,” a spokesman explained.
The legal challenges are pending. In a colorful description of the suits’ allegations, journalist Hamilton Nolan described AI’s threat as an “Automated Death Star.”
“Providential opportunity”?
Not all content companies agree. There has been a groundswell of leading content companies entering into agreements with OpenAI.
In July 2023, the Associated Press became the first major content provider to license its archive to OpenAI. Recently, however, the deal-making floodgates have opened. Rupert Murdoch’s News Corp, home of The Wall Street Journal, New York Post, and multiple other publications in Australia and the United Kingdom, German publishing giant Axel Springer, owner of Politico in the U.S. and Bild and Welt in Germany, venerable media company The Atlantic, along with new media company Vox Media, the Financial Times, Paris’ Le Monde, and Spain’s Prisa Media have all contracted with OpenAI for use of their product.
Even Barry Diller’s publishing unit, Dotdash Meredith, agreed to license to OpenAI, approximately a year after his apocalyptic warning.
News Corp CEO Robert Thomson described his company’s rationale this way in an employee memo: “The digital age has been characterized by the dominance of distributors, often at the expense of creators, and many media companies have been swept away by a remorseless technological tide. The onus is now on us to make the most of this providential opportunity.”
“There is a premium for premium journalism,” Thomson observed. That premium, for News Corp, is reportedly $250 million over five years from OpenAI. Axel Springer’s three-year deal is reportedly worth $25 to $30 million. The Financial Times terms were reportedly in the annual range of $5 to $10 million.
AI companies’ different approaches
While publishers debate whether AI is a “providential opportunity” or “stealing our work,” a similar debate is ongoing among AI companies. Different generative AI companies have different opinions on whether to pay for content, and if so, for which kind of content.
When it comes to scraping information from websites, most of the major generative AI companies have chosen to interpret copyright law’s “fair use doctrine” allowing the unlicensed use of copyrighted content in certain circumstances. Some of the companies have even promised to indemnify their users if they are sued for copyright infringement.
Google, whose core business is revenue generated by recommending websites, has not sought licenses to use the content on those websites. “The internet giant has long resisted calls to compensate media companies for their content, arguing that such payments would undermine the nature of the open web,” the New York Times explained. Google has, however, licensed the user-generated content on social media platform Reddit, and together with Meta has pursued Hollywood rights.
OpenAI has followed a different path. Reportedly, the company has been pitching a “Preferred Publisher Program” to select content companies. Industry publication AdWeek reported on a leaked presentation deck describing the program. The publication said OpenAI “disputed the accuracy of the information” but claimed to have confirmed it with four industry executives. Significantly, the OpenAI pitch reportedly offered not only cash remuneration, but also other benefits to cooperating publishers.
As of early June 2024, other large generative AI companies have not entered into website licensing agreements with publishers.
Content companies surfing an AI tsunami
On the content creation side of the equation, major publishers are attempting to avoid a repeat of their disastrous experience in the early days of the web while smaller websites are fearful the impact on them could be even greater.
As the web began to take business from traditional publishers, their leadership scrambled to find a new economic model. Ultimately, that model came to rely on websites, even though website advertising offered them pennies on their traditional ad dollars. Now, even those assets are under attack by the AI juggernaut. The content companies are in a new race to develop an alternative economic model before their reliance on web search is cannibalized.
The OpenAI Preferred Publisher Program seems to be an attempt to meet the needs of both parties.
The first step in the program is direct compensation. To Barry Diller, for instance, the fact his publications will get “direct compensation for our content” means there is “no connection” between his apocalyptic warning 14 months ago and his new deal with OpenAI.
Reportedly, the cash compensation OpenAI is offering has two components: “guaranteed value” and “variable value.” Guaranteed value is compensation for access to the publisher’s information archive. Variable value is payment based on usage of the site’s information.
Presumably, those signing with OpenAI see it as only the first such agreement. “It is in my interest to find agreements with everyone,” Le Monde CEO Louis Dreyfus explained.
But the issue of AI search is greater than simply cash. Atlantic CEO Nicholas Thompson described the challenge: “We believe that people searching with AI models will be one of the fundamental ways that people navigate to the web in the future.” Thus, the second component of OpenAI’s proposal to publishers appears to be promotion of publisher websites within the AI-generated content. Reportedly, when certain publisher content is utilized, there will be hyperlinks and hover links to the websites themselves, in addition to clickable buttons to the publisher.
Finally, the proposal reportedly offers publishers the opportunity to reshape their business using generative AI technology. Such tools include access to OpenAI content for the publishers’ use, as well as the use of OpenAI for writing stories and creating new publishing content.
Back to the future?
Whether other generative AI and traditional content companies embrace this kind of cooperation model remains to be seen. Without a doubt, however, the initiative by both parties will have its effects.
One such effect was identified in a Le Monde editorial explaining their licensing agreement with OpenAI. Such an agreement, they argued, “will make it more difficult for other AI platforms to evade or refuse to participate.” This, in turn, could have an impact on the copyright litigation, if not copyright law.
We have seen new technology-generated copyright issues resolved in this way before. Finding a credible solution that works for both sides is imperative. The promise of AI is an almost boundless expansion of information and the knowledge it creates. At the same time, AI cannot be allowed to continue degrading the free flow of ideas and journalism that is essential for democracy to function.
Newton’s Law in the AI age
In 1686 Sir Isaac Newton posited his three laws of motion. The third of these holds that for every action there is an equal and opposite reaction. Newton described the consequence of physical activity; generative AI is raising the same consequential response for informational activity.
The threat of generative AI has pushed into the provision of information and the economics of information companies. We know the precipitating force; the consequential effects on the creation of content and the free flow of information remain a work in progress.
13 notes
Text
1. The Wall Street Journal:
Trump administration officials ordered eight senior FBI employees to resign or be fired, and asked for a list of agents and other personnel who worked on investigations into the Jan. 6, 2021, attack on the U.S. Capitol, people familiar with the matter said, a dramatic escalation of President Trump’s plans to shake up U.S. law enforcement. On Friday, the Justice Department also fired roughly 30 prosecutors at the U.S. attorney’s office in Washington who have worked on cases stemming from Capitol riot, according to people familiar with the move and a Justice Department memo reviewed by The Wall Street Journal. The prosecutors had initially been hired for short-term roles as the U.S. attorney’s office staffed up for the wave of more than 1,500 cases that arose from the attack by Trump supporters. Trump appointees at the Justice Department also began assembling a list of FBI agents and analysts who worked on the Jan. 6 cases, some of the people said. Thousands of employees across the country were assigned to the sprawling investigation, which was one of the largest in U.S. history and involved personnel from every state. Acting Deputy Attorney General Emil Bove gave Federal Bureau of Investigation leadership until noon on Feb. 4 to identify personnel involved in the Jan. 6 investigations and provide details of their roles. Bove said in a memo he would then determine whether other discipline is necessary. Acting FBI Director Brian Driscoll said in a note to employees that he would be on that list, as would acting Deputy Robert Kissane. “We are going to follow the law, follow FBI policy and do what’s in the best interest of the workforce and the American people—always,” Driscoll wrote. Across the FBI and on Capitol Hill, the preparation of the list stirred fear and rumors of more firings to come—potentially even a mass purge. (Source: wsj.com, italics mine. The big question is whether “the list” will include FBI informants)
2. OpenAI Chief Executive Sam Altman said he believes his company should consider giving away its AI models, a potentially seismic strategy shift in the same week China’s DeepSeek has upended the artificial-intelligence industry. DeepSeek’s AI models are open-source, meaning anyone can use them freely and alter the way they work by changing the underlying code. In an “ask-me-anything” session on Reddit Friday, a participant asked Altman if the ChatGPT maker would consider releasing some of the technology within its AI models and publish more research showing how its systems work. Altman said OpenAI employees were discussing the possibility. “(I) personally think we have been on the wrong side of history here and need to figure out a different open source strategy,” Altman responded. He added, “not everyone at OpenAi shares this view, and it’s also not our current highest priority.” (Source: wsj.com)
3. Quanta Magazine:
On December 17, 1962, Life International published a logic puzzle consisting of 15 sentences describing five houses on a street. Each sentence was a clue, such as “The Englishman lives in the red house” or “Milk is drunk in the middle house.” Each house was a different color, with inhabitants of different nationalities, who owned different pets, and so on. The story’s headline asked: “Who Owns the Zebra?” Problems like this one have proved to be a measure of the abilities — limitations, actually — of today’s machine learning models. Also known as Einstein’s puzzle or riddle (likely an apocryphal attribution), the problem tests a certain kind of multistep reasoning. Nouha Dziri, a research scientist at the Allen Institute for AI, and her colleagues recently set transformer-based large language models (LLMs), such as ChatGPT, to work on such tasks — and largely found them wanting. “They might not be able to reason beyond what they have seen during the training data for hard tasks,” Dziri said. “Or at least they do an approximation, and that approximation can be wrong.”
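To make that kind of multistep reasoning concrete, here is a minimal sketch that brute-forces a tiny three-house variant of the puzzle. The clues are invented for illustration and far simpler than the fifteen in the 1962 original, but the exhaustive-search structure is the same.

```python
# Minimal sketch: brute-force a tiny 3-house "zebra puzzle" variant.
# Index i of each tuple describes house i, ordered left to right.
from itertools import permutations

COLORS = ("red", "green", "blue")
NATIONS = ("Englishman", "Spaniard", "Norwegian")
PETS = ("zebra", "dog", "fox")

for colors in permutations(COLORS):
    for nations in permutations(NATIONS):
        for pets in permutations(PETS):
            # Clue 1: the Englishman lives in the red house.
            if colors[nations.index("Englishman")] != "red":
                continue
            # Clue 2: the Spaniard owns the dog.
            if pets[nations.index("Spaniard")] != "dog":
                continue
            # Clue 3: the Norwegian lives in the first house.
            if nations[0] != "Norwegian":
                continue
            # Clue 4: the green house is immediately right of the red house.
            if colors.index("green") != colors.index("red") + 1:
                continue
            # Clue 5: the fox lives in the blue house.
            if pets[colors.index("blue")] != "fox":
                continue
            print("The zebra's owner is the", nations[pets.index("zebra")])
```

Run as written, the clues pin down a unique assignment and the script prints the Englishman as the zebra's owner; each clue is just a pruning test inside an exhaustive search, which is exactly the kind of chained elimination the quoted research probes LLMs for.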
3 notes