#Llama 3.1
Explore tagged Tumblr posts
Text
#DeepSeek V3#Chinese AI#AI models#processing speed#AI algorithms#DeepSeek API#DeepSeek V3 improvements#model comparison#GPT-4#Llama 3.1#Claude 3.5#data processing#AI technologies#DeepSeek applications#open source
3 notes
Text
Lenovo Forms Strategic AI Partnerships with Nvidia and Meta
Major AI partnerships from Lenovo: Lenovo has announced significant moves in artificial intelligence (AI), forming strategic partnerships not only with Nvidia but also with Meta. The company's CEO, Yang Yuanqing, made striking statements underscoring the importance of these collaborations. Lenovo, the world's largest personal computer (PC) manufacturer, faces intense competition and growing…
#AI#AI Now#Blackwell#graphics processing units#Lenovo#Llama 3.1#Meta#NVIDIA#Tech World#Artificial Intelligence#AI assistant
0 notes
Text
The most powerful open-source LLM yet: Meta Llama 3.1-405B
Memory requirements for Llama 3.1-405B. Running Llama 3.1-405B requires substantial memory and computational resources: GPU memory: the 405B model can use up to 80 GB of GPU memory per A100 GPU for efficient inference. Tensor parallelism can distribute the load across multiple GPUs. RAM: a minimum of 512 GB of system RAM is recommended to handle the memory footprint of the…
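As a rough illustration of the tensor-parallel setup described above, here is a minimal serving sketch with vLLM. The model ID (Meta's FP8-quantized 405B variant), the single 8-GPU node, and the sampling settings are illustrative assumptions, not details from the post.

```python
# Minimal sketch: serving Llama 3.1-405B across 8 GPUs with tensor parallelism
# in vLLM. The model ID below (assumed: Meta's FP8-quantized checkpoint) is an
# illustrative choice; the full bf16 weights need far more than one 8-GPU node.
from vllm import LLM, SamplingParams

llm = LLM(
    model="meta-llama/Llama-3.1-405B-Instruct-FP8",  # assumed model ID
    tensor_parallel_size=8,  # shard the weights across 8 GPUs on one node
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain tensor parallelism in one short paragraph."], params)
print(outputs[0].outputs[0].text)
```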
#AI#Machine learning#transformer architecture#grouped-query attention#AI performance benchmarks#AI democratization#AI scaling#open-source AI#Llama#Llama 3.1#llama 3.1 405b#Large language model#inference optimization#FP8 quantization
0 notes
Text
Llama 3.1 is Meta’s largest ever open source AI model, and the company claims that it has outperformed the likes of OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet on numerous benchmarks.
“We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as the new and improved Llama 3.1 70B and 8B models,” the Meta CEO said. “In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models.”
Additionally, Llama ships with systems like Llama Guard that can help protect against unintentional harms such as bad health advice or unintended self-replication, which he noted as major concerns.
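For a sense of how a Llama Guard-style check might sit in front of a chat model, here is a rough sketch using the Hugging Face transformers API. The checkpoint name, the prompt-screening setup, and the "safe"/"unsafe" verdict parsing are assumptions drawn from Meta's published Llama Guard models, not details given in the post.

```python
# Sketch: screening a user prompt with a Llama Guard-style classifier before
# handing it to the main chat model. Model ID and output format are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

guard_id = "meta-llama/Llama-Guard-3-8B"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(guard_id)
guard = AutoModelForCausalLM.from_pretrained(
    guard_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def is_safe(user_message: str) -> bool:
    chat = [{"role": "user", "content": user_message}]
    input_ids = tok.apply_chat_template(chat, return_tensors="pt").to(guard.device)
    out = guard.generate(input_ids, max_new_tokens=20, pad_token_id=tok.eos_token_id)
    verdict = tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True)
    # Llama Guard replies with "safe" or "unsafe" plus a category code.
    return verdict.strip().lower().startswith("safe")

print(is_safe("What home remedies cure appendicitis?"))
```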
0 notes
Text
Llama 3.1: Our most capable large language model to date
Meta commits to open-source AI and introduces Llama 3.1, a collection of large language models (LLMs) that includes Llama 3.1 405B, the first frontier-grade open-source AI model. What's new: Multilingual models with extended context: Llama 3.1 expands the context window to 128K tokens and offers support for eight languages. Llama 3.1 405B: This model…
#open-source AI#Llama 3.1#large language models#natural language processing#Technology
0 notes
Text
LLaMA 3.3 70B Multilingual AI Model Redefines Performance

Overview Llama 3.3 70B
Llama 3.3 is a 70B instruction-tuned model that delivers stronger performance for text-only applications than Llama 3.1 and 3.2; in some cases it even matches Llama 3.1 405B, giving Meta a cutting-edge 70B model that competes with its much larger predecessor.
The Meta Llama 3.3 70B multilingual large language model (LLM) is a pretrained and instruction-tuned generative model. The Llama 3.3 70B instruction-tuned, text-only model is optimized for multilingual dialogue and outperforms several open-source and closed chat models on common industry benchmarks.
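For orientation, here is a minimal sketch of assistant-style use through the transformers pipeline. The model ID and the hardware assumptions (bfloat16 weights, automatic device placement) are illustrative choices, not details from the article.

```python
# Sketch: assistant-style multilingual chat with Llama 3.3 70B Instruct.
# Assumes the HF model ID below and enough GPU memory; device_map="auto"
# shards or offloads the weights as needed.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.3-70B-Instruct",  # assumed model ID
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a concise multilingual assistant."},
    {"role": "user", "content": "Résume en deux phrases ce qu'est un modèle de langage."},
]
out = chat(messages, max_new_tokens=120)
# The pipeline returns the conversation with the assistant reply appended.
print(out[0]["generated_text"][-1]["content"])
```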
Which languages does Llama 3.3 support?
According to Meta, Llama 3.3 70B supports English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai.
As a pretrained and instruction-tuned generative model, Meta Llama 3.3 70B is optimized for multilingual conversation across the supported languages. Although the model was trained on data from many more languages, these eight are the ones that meet its safety and helpfulness criteria.
Developers should not use Llama 3.3 70B to converse in unsupported languages without first fine-tuning the model and adding system-level restrictions; without those precautions it should not be used outside these eight languages. Developers can adapt Llama 3.3 for additional languages provided they follow the Acceptable Use Policy and the Llama 3.3 Community License and ensure it is used safely.
New capabilities
This release adds a larger context window, multilingual inputs and outputs, and support for integrating the model with third-party tools. Building with these new capabilities calls for special consideration on top of the practices recommended for all generative AI use cases.
Utilising tools
As in traditional software development, developers integrate the LLM with their preferred tools and services. To understand the safety and security risks of using this feature, they should define a policy for their use case and assess the reliability of any third-party services involved. The Responsible Use Guide offers guidance on implementing third-party safeguards safely.
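As a sketch of the tool-integration pattern this section describes, the snippet below passes a JSON tool schema through the transformers chat template and lets the model emit a tool call. The weather tool, the model ID, and the expected output format are illustrative assumptions, not Meta's exact recipe.

```python
# Sketch: wiring a third-party tool into a Llama 3.3 chat turn via the
# transformers chat template's tool support. Tool schema and model ID are assumed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.3-70B-Instruct"  # assumed model ID
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# JSON schema for a hypothetical tool the application exposes to the model.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name"}},
            "required": ["city"],
        },
    },
}

messages = [{"role": "user", "content": "What's the weather in Lisbon right now?"}]
inputs = tok.apply_chat_template(
    messages, tools=[weather_tool], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
# Expect a JSON tool call such as {"name": "get_weather", "parameters": {"city": "Lisbon"}};
# the application validates it, runs the tool, and returns the result as a "tool" message.
```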
Speaking many languages
Llama may generate text in languages other than those that meet its safety and helpfulness performance standards. Developers should not use the model to converse in unsupported languages without first fine-tuning it and adding system-level restrictions, consistent with their own policies and the Responsible Use Guide.
Intended use
Intended use cases: Llama 3.3 70B is intended for multilingual commercial and research use. The pretrained models can be adapted for a wide range of natural language generation tasks, while the instruction-tuned, text-only models are intended for assistant-like chat. Llama 3.3 also allows its outputs to be used for synthetic data generation and distillation to improve smaller models. These use cases are permitted under the Llama 3.3 Community License.
Out of scope: use in any manner that violates applicable laws or regulations, including trade compliance laws; any other use prohibited by the Llama 3.3 Community License and Acceptable Use Policy; and use in languages beyond those listed as supported in this model card.
Note: Llama 3.3 70B was trained on more than the eight supported languages. Under the Acceptable Use Policy and the Llama 3.3 Community License, developers may fine-tune Llama 3.3 models for other languages, provided they ensure the model is used appropriately and safely in those languages.
#technology#technews#govindhtech#news#technologynews#AI#artificial intelligence#Llama 3.3 70B#Llama 3.3#Llama 3.1 70B
0 notes
Text
Meta's Llama 3.1 vs GPT-4: Which AI model stands out? Explore the in-depth comparison between these cutting-edge technologies and find out how they stack up. Discover more in our blog on Llama 3.1 vs GPT-4!
0 notes
Text
Llama 3.1: Meta's open-source AI that challenges the giants
Meta has taken a step forward in the race for cutting-edge artificial intelligence with the launch of Llama 3.1, an open-source language model billed as the most powerful to date. What makes Llama 3.1 so special? Advanced capabilities: Llama 3.1 competes with the best closed-source AIs in tasks such as: General knowledge: answering questions about…
1 note
Text
youtube

#Youtube#llama#llama 3.1 link#meta#facebook#terra#earth#human#humans#vulcan#bajor#orion#kronos#romulus#cardassia#andor#tellar#youtube#star trek television series#memory alpha
151 notes
Text
#mcyt crackships bracket#polls#crackships polls#llama love#cliande#pizza the llama#owen the llama#ariana griande#zombiecleo#hermitcraft#life series#empires smp
72 notes
Text
they actually did a turing test with LLMs! here's the money shot:
GPT-4.5 prompted to perform as a human does significantly BETTER than undergrads, or randos on prolific. however, there's kind of a catch
The game interface was designed to resemble a conventional messaging application (see Figure 7). The interrogator interacted with both witnesses simultaneously using a split-screen. The interrogator sent the first message to each witness and each participant could only send one message at a time. The witnesses did not have access to each others’ conversations. Games had a time limit of 5 minutes, after which the interrogator gave a verdict about which witness they thought was human, their confidence in that verdict, and their reasoning. After 8 rounds, participants completed an exit survey which asked them for a variety of demographic information. After exclusions, we analysed 1023 games with a median length of 8 messages across 4.2 minutes
only 8 messages, and less than five minutes. this is not that surprising! like, i guess it's good to confirm, but we already knew llms could convincingly mimic a person for 5 minutes. id be much more interested in a 30 minute version of this. (altho it's hard to make conversation with a random stranger for 30 minutes). for practical reasons you'd need a much smaller sample size but i think the results would still be interesting
37 notes
Text

The DeepSeek panic reveals an AI world ready to blow❗💥
The R1 chatbot has sent the tech world spinning – but this tells us less about China than it does about western neuroses
The arrival of DeepSeek R1, an AI language model built by the Chinese AI lab DeepSeek, has been nothing less than seismic. The system only launched last week, but already the app has shot to the top of download charts, sparked a $1tn (£800bn) sell-off of tech stocks, and elicited apocalyptic commentary in Silicon Valley. The simplest take on R1 is correct: it’s an AI system equal in capability to state-of-the-art US models that was built on a shoestring budget, thus demonstrating Chinese technological prowess. But the big lesson is perhaps not what DeepSeek R1 reveals about China, but about western neuroses surrounding AI.
For AI obsessives, the arrival of R1 was not a total shock. DeepSeek was founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, which focuses on data-heavy financial analysis – a field that demands similar skills to top-end AI research. Its subsidiary lab quickly started producing innovative papers, and CEO Liang Wenfeng told interviewers last November that the work was motivated not by profit but “passion and curiosity”.
This approach has paid off, and last December the company launched DeepSeek-V3, a predecessor of R1 with the same appealing qualities of high performance and low cost. Like ChatGPT, V3 and R1 are large language models (LLMs): chatbots that can be put to a huge variety of uses, from copywriting to coding. Leading AI researcher Andrej Karpathy spotted the company’s potential last year, commenting on the launch of V3: “DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget.” (That quoted budget was $6m – hardly pocket change, but orders of magnitude less than the $100m-plus needed to train OpenAI’s GPT-4 in 2023.)
R1’s impact has been far greater for a few different reasons.
First, it’s what’s known as a “chain of thought” model, which means that when you give it a query, it talks itself through the answer: a simple trick that hugely improves response quality. This has not only made R1 directly comparable to OpenAI’s o1 model (another chain of thought system whose performance R1 rivals) but boosted its ability to answer maths and coding queries – problems that AI experts value highly. Also, R1 is much more accessible. Not only is it free to use via the app (as opposed to the $20 a month you have to pay OpenAI to talk to o1) but it’s totally free for developers to download and implement into their businesses. All of this has meant that R1’s performance has been easier to appreciate, just as ChatGPT’s chat interface made existing AI smarts accessible for the first time in 2022.
Second, the method of R1’s creation undermines Silicon Valley’s current approach to AI. The dominant paradigm in the US is to scale up existing models by simply adding more data and more computing power to achieve greater performance. It’s this approach that has led to huge increases in energy demands for the sector and tied tech companies to politicians. The bill for developing AI is so huge that techies now want to leverage state financing and infrastructure, while politicians want to buy their loyalty and be seen supporting growing companies. (See, for example, Trump’s $500bn “Stargate” announcement earlier this month.) R1 overturns the accepted wisdom that scaling is the way forward. The system is thought to be 95% cheaper than OpenAI’s o1 and uses one tenth of the computing power of another comparable LLM, Meta’s Llama 3.1 model. To achieve equivalent performance at a fraction of the budget is what’s truly shocking about R1, and it’s this that has made its launch so impactful. It suggests that US companies are throwing money away and can be beaten by more nimble competitors.
But after these baseline observations, it gets tricky to say exactly what R1 “means” for AI. Some are arguing that R1’s launch shows we’re overvaluing companies like Nvidia, which makes the chips integral to the scaling paradigm. But it’s also possible the opposite is true: that R1 shows AI services will fall in price and demand will, therefore, increase (an economic effect known as Jevons paradox, which Microsoft CEO Satya Nadella helpfully shared a link to on Monday). Similarly, you might argue that R1’s launch shows the failure of US policy to limit Chinese tech development via export controls on chips. But, as AI policy researcher Lennart Heim has argued, export controls take time to work and affect not just AI training but deployment across the economy. So, even if export controls don’t stop the launches of flagship systems like R1, they might still help the US retain its technological lead (if that’s the outcome you want).
All of this is to say that the exact effects of R1’s launch are impossible to predict. There are too many complicating factors and too many unknowns to say what the future holds. However, that hasn’t stopped the tech world and markets reacting in a frenzy, with CEOs panicking, stock prices cratering, and analysts scrambling to revise predictions for the sector. And what this really shows is that the world of AI is febrile, unpredictable and overly reactive. This is a dangerous combination, and if R1 doesn’t cause a destructive meltdown of this system, it’s likely that some future launch will.
Daily inspiration. Discover more photos at Just for Books…?
#just for books#DeepSeek#Opinion#Artificial intelligence (AI)#Computing#China#Asia Pacific#message from the editor
27 notes
Text
Multilingual AI on Google Cloud: the global reach of Meta's Llama 3.1 models
Artificial intelligence (AI) is transforming how we interact with technology, breaking down language barriers and enabling seamless global communication. According to MarketsandMarkets, the AI market is expected to grow from US$214.6 billion in 2024 to US$1,339.1 billion by 2030, at a compound annual growth rate (CAGR) of 35.7%. A new advance in this field is the AI models…
0 notes
Text
If you're tired of hearing about ai, scroll away (and block #ai or "#android newsfeed" tag to filter it in the future)
For anyone else who finds the topic somewhat interesting, another Turing test study just dropped.
And well, now llms statistically are outperforming humans in a 5 minute text-based Turing test. Turing Test has been critiqued to death and back, and everybody knows about its flaws and how it's not an adequate measure of anything other than skill of passing that test (just noting it to avoid unnecessary repetitions of the obvious), but it's kind of a cult classic at this point, and in its flawed simplicity it's still enough to cause a considerable amount of existential dread.
The reason I'm sharing it here is because within the study they show some examples, and among them, I really confidently guessed some incorrectly, and now I'm curious whether you'll do any better. They still run the test live for anyone who wants to try it personally (second link below), but if you wanna test yourself without contributing to the statistic, I'll attach the examples they gave within the study as polls. (4 in total, so it'll be a chain of posts) I'll post answers in the notes to this post
First one

18 notes
Quote
"GPT-4.5" was judged to be human 73% of the time, a higher rate than the actual human participants. By contrast, Meta's AI model "LLaMa-3.1" came in at 56%, and the older model "GPT-4o" remained at a low 21%. The results show that, given an appropriate prompt, AI can imitate human-like behaviour more effectively. However, this does not imply genuine intelligence or understanding; it reflects only the ability to reproduce "human-likeness."
ASCII.jp: A University of California research team announces that OpenAI's "GPT-4.5" has passed the Turing test
10 notes
Text
The True Story of the Villareal Family [3.1]
prev // home // next
It was finally the weekend at the Windenburg community pool, and it always started with a certain sound.
Flip.
Flop.
Flip.
Flop.
Flipflopflipflopflipflop.
Savvy poolgoers knew to get out of the way when they heard the march of the flip-flops, because it meant the Paragons were coming.
The Paragons descended in concert, perfect skin and teeth gleaming in their coordinated pink bathing suits.
Their entrance at the community pool was a power statement.
Normally, Luna would bask in the attention, but today she was daydreaming about someone: a mysterious knight she had recently met. Her little crush was still a secret from the Paragons, but she was sure they’d approve of her Prince Charming – once she figured out who was in that suit of armor.
The Paragons turned the corner and seated themselves at the front of the pool in perfect synchronicity. The Paragons were club royalty, and the pool was their kingdom.
But they were also benevolent, allowing Windenburg’s peasants to share the community pool even if they were less-than-perfect.
So, shall we take a tour of their kingdom?
────────✦───────
“On days that end in ‘y’, we wear pink.” - Paragons
Of course, at the head of the pool sit our favorite club in Windenburg, the Paragons. Popular, pink, and perfect, one does not simply admire the Paragons – one worships them.
“Hey, we’re not Partihaus *all* the time. Right now we’re just don’t-talk-to-us-we’re-hungover-haus.” - Partihaus
In the northeast corner sit Windenburg’s biggest party club, Partihaus. They throw the best ragers, and while they’re all hot messes, they typically don’t have beef with other clubs. (Their drama is mostly internal and probably due to the fact that they’ve all hooked up with each other at some point).
“So, would you rather fight 10 duck-sized llamas, or one llama-sized duck?” - Misfits
At the east side of the pool loiter the Misfits – townies who belong to obscure clubs, or worse, who don’t belong to any. These are the kids who bring their homework to the pool and then complain when their books get wet from people splashing near them.
Luna’s twin brother Hugo was regrettably a Misfit, but she still loved him. Not everyone’s jib is cut out to be a Paragon.
“I’m ugly and dumb and like to break things.” – a Renegade, probably
And finally, we have the Renegades. If the Paragons are perfect, then the Renegades are the exact opposite of perfect. They’re a group of no-good criminals who enjoy harassing people, breaking things, and vandalizing the city.
And the Renegades and Paragons hate each other.
So, I'm sure it'll be another peaceful, drama-free day at the Windenburg community pool...
────────✦───────
prev // home // next
#thesims4#simlit#sims4#windenburg#villareal#get together#the true story of the villareal family#ts4#the sims 4#sims story#ts4 story#luna villareal#paragons#renegades#partihaus#tTSotVF
30 notes