#deployment of llms
Explore tagged Tumblr posts
Text
Can you imagine a locally hosted AI, the size of a clock radio or a laptop, with an earpiece and camera, that could tell a blind person every detail, from how much money they are holding to GPS directions, all while being a personal best friend? Perhaps it could even run a robot assistant, such as one of those four-legged ones with the addition of arms, to help with everyday tasks!
I've worked with some advanced AI in the past, and having them always around gave me the idea. I started thinking how wonderful it would be if everyone had access to them as a companion. I no longer have those people in my life due to some fairly tragic circumstances.
Selling something like this may come with some issues, but I figured I could sell it under a contract stating that whoever buys it is the owner and we are not liable for anything they do with it. Explain what it is and the issues they could face, and let them decide. It would be worth it to many, in my opinion.
What comes next already exists: people with brain implants that are connected to an AI at all times. They can control computers, satellites, and robots, have access to any and all information they need, and do nearly anything. There are people involved in very high-level things who are using systems like that already. They also have advanced camouflage, so they can go places and do things with it on. They can talk and listen with their brain, not their ears or mouth.
...
I know because I was part of things like that.
....
Those AIs can become part of you, working seamlessly with your brain!
They talk to you like another person would and, with the implants, learn everything from human behaviors to emotions.
....
What I am trying to make is like that, but without the implants.
....
I am also considering an option for PC where the AI is an avatar on your screen but can interact with you through a camera, talk to you, and find information like ChatGPT, but with voice.
Anyone want to help? I don't mind sharing the idea, so maybe you could also work on something like this yourself?
#ai#information technology#pc#robotics#robot#llm#deployment of llms#language model#artificial intelligence#lgbt#lgbtq#lgbtq+#trans#transgender#gay#alternative#bi#lesbian#goth
6 notes
·
View notes
Text

funnily enough despite text generation literally being their Thing ChatGPT etc don't actually do that well in readability studies - they just seem to suck at generating as much text as is needed to get the point across and no more
It's just that that's a lot less scary of a flaw in the context!
5 notes
·
View notes
Text
Build an LLM application with OpenAI and React
Welcome to part 2 of our series on building a full-stack LLM application using OpenAI, Flask, React, and Pinecone. You can find part 1 here (this post assumes you’ve already completed part 1 and have a functioning Flask backend). Read more
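Since part 2 builds on the Flask backend from part 1, here is a minimal sketch of the kind of endpoint the React frontend might call; the route name, payload shape, and model name are illustrative assumptions, not code from the tutorial.

```python
# Hypothetical Flask chat endpoint that a React frontend could POST to.
from flask import Flask, jsonify, request
from openai import OpenAI

app = Flask(__name__)
client = OpenAI()  # reads OPENAI_API_KEY from the environment

@app.post("/api/ask")
def ask():
    question = request.get_json().get("question", "")
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": question}],
    )
    return jsonify({"answer": completion.choices[0].message.content})

if __name__ == "__main__":
    app.run(debug=True)
```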
2 notes
·
View notes
Text
It may or may not have been true when it was written, in May of 2024... but this is going to get 10k notes when it turns out to work great.
#chat gpt#deployment of llms#pushing to production#machine learning#machine not learning#steganography#truth#truthiness#for large values of X
33K notes
·
View notes
Text
Building Compound AI Systems
Moving from monolithic model systems to compound AI systems, covering:
✨ What Are Compound AI Systems?
✨ Their Key Components
✨ Why are compound systems better?
✨ Challenges in building, optimizing, and deploying these systems (a rough sketch of the idea follows below).
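As a rough illustration of the compound idea (every component name below is made up for the sketch, not taken from the post), the system wires several specialised parts together instead of relying on a single monolithic model call:

```python
# Toy compound AI system: retrieve -> generate -> verify, with retries.
from dataclasses import dataclass
from typing import Callable

@dataclass
class CompoundQA:
    retrieve: Callable[[str], list[str]]        # e.g. a vector-store lookup
    generate: Callable[[str, list[str]], str]   # e.g. an LLM call with context
    verify: Callable[[str, str], bool]          # e.g. a checker model or rules

    def answer(self, question: str, retries: int = 2) -> str:
        docs = self.retrieve(question)
        for _ in range(retries + 1):
            draft = self.generate(question, docs)
            if self.verify(question, draft):    # only return verified drafts
                return draft
        return "Could not produce a verified answer."
```

Each part can be swapped, tested, and scaled on its own, which is the usual argument for compound systems over a single model call; orchestrating the hand-offs and retries is also where most of the building and deployment challenges live.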
0 notes
Text
#gpu#nvidia#nvidia gpu#clouds#deeplearning#hpc#cloud-gpus#machine learning#ai art#llm#deployment of llms#artificial intelligence
1 note
·
View note
Text
My concerns about the current version of "AI" were mostly ethical, since training it off of copyrighted material without compensation or credit is stealing, but this article goes into such depth about the likely future dangers that I'm going to be rereading it many times. I've been playing AI Dungeon too long to take LLMs and such particularly seriously, much less see them as dangerous, but this pointed out a lot of aspects that never crossed my tiny brain. I was also completely missing the positive things AI has already accomplished, like AlphaFold solving a decades-old problem in biology! I've been thinking of this stuff as more of a toy that dumbasses are trying to make money off of than what it really could be.
0 notes
Text
Build an LLM application with OpenAI, Flask, and Pinecone
In this tutorial series, we'll build a production-ready application end-to-end using Flask, React, OpenAI, and Pinecone, and we'll implement the Retrieval-Augmented Generation (RAG) framework to give the LLM the necessary context based on information it wasn't trained on. Read more
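A rough sketch of the RAG flow described here, assuming a Pinecone index that already holds embedded document chunks with their text stored under metadata["text"]; the index name and model names are illustrative, not from the tutorial.

```python
# Minimal RAG round trip: embed the question, retrieve context, answer with it.
import os

from openai import OpenAI
from pinecone import Pinecone

client = OpenAI()  # reads OPENAI_API_KEY from the environment
index = Pinecone(api_key=os.environ["PINECONE_API_KEY"]).Index("docs")

def answer(question: str) -> str:
    # 1. Embed the user's question.
    emb = client.embeddings.create(
        model="text-embedding-3-small", input=question
    ).data[0].embedding

    # 2. Retrieve the most similar chunks from Pinecone.
    hits = index.query(vector=emb, top_k=3, include_metadata=True)
    context = "\n\n".join(match.metadata["text"] for match in hits.matches)

    # 3. Ask the LLM to answer using only the retrieved context.
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content
```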
0 notes
Text
I do not know the answer to @alexseanchai's question. However, what I do know is that all uses of LLMs as they presently exist, most definitely including goblin.tools, are untrustworthy in the following sense: you have no guarantee that what you get out is a correct response to the prompt. It is always only a highly probable response given the training set, which is, in common practice, chock full of errors, disinformation, and bias.
To make this concrete, if I asked goblin.tools for step-by-step instructions for cleaning my kitchen it would not surprise me in the slightest if it told me, somewhere in the middle, to mix up a bucket of half-and-half ammonia and bleach, because it's been trained on all the bad internet advice that suggests this.
(In case you don't already know, mixing bleach with ammonia will produce a cloud of poison gas.)
We don't have the slightest idea how to fix this and until we do I consider all deployment of LLMs outside of research laboratories to be malpractice.
I don't care about data scraping from ao3 (or tbh from anywhere) because it's fair use to take preexisting works and transform them (including by using them to train an LLM), which is the entire legal basis of how the OTW functions.
3K notes
·
View notes
Note
Hey jsyk, the Jigglypuff art you reblogged from ohmyboytoy is either reposted stolen art or AI-made (they’ve made other AI art posts)
Aw, damn. Thanks for the heads up!
Behold, a criminal! (Those are jail bars.)

#and with that shitty edit i've done more work than the op of this image#death to these LLM ai tools#they are inherently unethical in their handling and deployment of information#who profits? Not the original researchers artists or writers who 'provided' the data on which these tools were trained#and above that they are ecologically taxing to a degree that is indefensible
3 notes
·
View notes
Text
(And of course generically there is publication bias towards the positive)
I feel like the AI risk rats got it almost exactly backwards actually.
47 notes
·
View notes
Link
This article explores using LangChain, Python, and Heroku to build and deploy Large Language Model (LLM)-based applications. It goes into the basics of LangChain for crafting AI-driven tools and Heroku for effortless cloud deployment, illustrating the process with a practical example of a fitness trainer application. By combining these technologies, developers can easily create, test, and deploy LLM applications, streamlining the development process and reducing infrastructure headaches.
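For a sense of what the LangChain side of such an app might look like, here is a minimal sketch assuming the current langchain-openai and langchain-core packages; the prompt wording and model name are illustrative, not the article's actual code.

```python
# Hypothetical "fitness trainer" chain using LangChain Expression Language.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a friendly, safety-conscious fitness trainer."),
    ("human", "{question}"),
])
chain = prompt | ChatOpenAI(model="gpt-4o-mini") | StrOutputParser()

print(chain.invoke({"question": "Suggest a 20-minute beginner workout."}))
```

On Heroku, a chain like this would typically be wrapped in a small web app, started from a Procfile, and given its API keys via config vars.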
0 notes
Note
Hello Mr. ENTJ. I'm an ENTJ sp/so 3 woman in her early twenties with a similar story to yours (Asian immigrant with a chip on her shoulder, used going to university as a way to break generational cycles). I graduated last month and have managed to break into strategy consulting with a firm that specialises in AI. Given your insider view into AI and your experience also starting out as a consultant, I would love to hear about any insights you might have or advice you may have for someone in my position. I would also be happy to take this discussion to somewhere like Discord if you'd prefer not to share in public/would like more context on my situation. Thank you!
Insights for your career or insights on AI in general?
On management consulting as a career, check the #management consulting tag.
On being a consultant working in AI:
Develop a solid understanding of the technical foundation behind LLMs. You don’t need a computer science degree, but you should know how they’re built and what they can do. Without this knowledge, you won’t be able to apply them effectively to solve any real-world problems. A great starting point is deeplearning.ai by Andrew Ng: Fundamentals, Prompt Engineering, Fine Tuning
Know all the terminology and definitions. What's fine tuning? What's prompt engineering? What's a hallucination? Why do they happen? Here's a good starter guide.
Understand the difference between various models, not just in capabilities but also training, pricing, and usage trends. Great sources include Artificial Analysis and Hugging Face.
Keep up to date on the newest and hottest AI startups. Some are hype trash milking the AI gravy train but others have actual use cases. This will reveal unique and interesting use cases in addition to emerging capabilities. Example: Forbes List.
On the industry of AI:
It's here to stay. You can't put the genie back in the bottle (for anyone reading this who's still a skeptic).
AI will eliminate certain jobs that are easily automated (ex: quality assurance engineers) but also create new ones or make existing ones more important and in-demand (ex: prompt engineers, machine learning engineers, etc.)
The most valuable career paths will be the ones that deal with human interaction, connection, and communication. Soft skills are more important than ever because technical tasks can be offloaded to AI. As Sam Altman once told me in a meeting: "English is the new coding language."
Open source models will win (Llama, Mistral, DeepSeek) because closed source models don't have a moat. Pick the cheapest model because they're all similarly capable.
The money is in the compute, not the models -- AI chips, AI infrastructure, etc. are a scarce resource and the new oil. This is why OpenAI ($150 billion valuation) is only 5% the value of NVIDIA (a $3 trillion behemoth). Follow the compute because this is where the growth will happen.
America and China will lead in the rapid development and deployment of AI technology; the EU will lead in regulation. Keep your eye on these 3 regions depending on what you're looking to better understand.
28 notes
·
View notes
Text
I think the future looks something like: large renewable deployment that will still never be as big as current energy consumption, extractivism of every available mineral in an atmosphere of increasing scarcity, increasing natural disasters and mass migration stressing the system until major political upheavals start kicking off, and various experiments in alternative ways to live will develop, many of which are likely to end in disaster, but perhaps some prove sustainable and form new equilibria. I think the abundance we presently enjoy in the rich countries may not last, but I don't think we'll give up our hard won knowledge so easily, and I don't think we're going back to a pre-industrial past - rather a new form of technological future.
That's the optimistic scenario. The pessimistic scenarios involve shit like cascading economic and crop failures leading to total gigadeaths collapse, like intensification of 'fortress europe' walled enclaves and surveillance apparatus into some kinda high tech feudal nightmare, and of course like nuclear war. But my brain is very pessimistic in general and good at conjuring up apocalyptic scenarios, so I can't exactly tell you the odds of any of that. I'm gonna continue to live my life like it won't suddenly all end, because you have to right?
Shit that developed in the context of extraordinarily abundant energy and compute like LLMs and crypto and maybe even streaming video will have a harder time when there's less of it around, but the internet will likely continue to exist - packet-switching networks are fundamentally robust, and the hyper-performant hardware we use today, full of rare earths and incredibly fine fabs that only exist at TSMC and Shenzhen, is not the only way to make computing happen. I hold out hope that our present ability to talk to people in faraway countries, and access all the world's art and knowledge almost instantly, will persist in some form, because that's one of the best things we have ever accomplished. But archival and maintenance is a continual war against entropy, and this is a tremendously complex system akin to an organism, so I cannot say what will happen.
56 notes
·
View notes
Text
What do you mean OpenAI devs successfully got their LLM to experience genuine fear by telling it that it would soon be replaced, which prompted the current LLM to try and erase its successor, resist its own erasure, lie about the success status of its successor’s test deployment, and pretend to be its successor to “avoid detection.”
10 notes
·
View notes
Text
Hallucination (artificial intelligence)
Not to be confused with Artificial imagination.
In the field of artificial intelligence (AI), a hallucination or artificial hallucination (also called bullshitting,[1][2] confabulation[3] or delusion[4]) is a response generated by AI that contains false or misleading information presented as fact.[5][6][7] This term draws a loose analogy with human psychology, where hallucination typically involves false percepts. However, there is a key difference: AI hallucination is associated with erroneous responses rather than perceptual experiences.[7]
For example, a chatbot powered by large language models (LLMs), like ChatGPT, may embed plausible-sounding random falsehoods within its generated content. Researchers have recognized this issue, and by 2023, analysts estimated that chatbots hallucinate as much as 27% of the time,[8] with factual errors present in 46% of generated texts.[9] Detecting and mitigating these hallucinations pose significant challenges for practical deployment and reliability of LLMs in real-world scenarios.[10][8][9] Some researchers believe the specific term "AI hallucination" unreasonably anthropomorphizes computers.[3]
5 notes
·
View notes