#navigation-with-llm
Explore tagged Tumblr posts
Text
Navigation with Large Language Models: Problem Formulation and Overview
View On WordPress
#goal-directed-exploration#language-frontier-guide#large-language-models#llm-heuristics#navigation-with-llm#polling-llms#scoring-subgoals#semantic-scene-understanding
0 notes
Text
This scan of the Ganguli translation of the Mahabharata can't load in Archive.org's in-browser viewer. It's more than 8GB.
#i assume the goal of a huge high-quality scan is to allow someone with the computing power to spare to use it for LLM-assisted OCR to ka#but maybe you should have split it up by book buddy#this is really high in most search engines' results for 'ganguli mahabharata' there's normal people navigating to it#mahabharata
9 notes
·
View notes
Text
I used to look back at running a mental illness blog on here as a child and cringe at the unnecessary risk it created for both me and the children coming to me for advice, but now people are using chatgpt and accusing therapists of lying about it to keep clients...
#mine#i was giving faulty advice as a child in a cult lmao but at least I would volunteer my age and expertise (none)#these people are freely giving a llm their truths and traumas? and not realizing the gap between llm pattern recognition and communication??#the psychosis problems are already being documented but people online see it and go “oh they just dont know how to use it right”#as if that's not the red flag risk that makes it so much worse than doing nothing for your issues#cause how are people with no therapy experience going to know how to navigate pseudo therapy with a non sentient machine...
1 note
·
View note
Text
Researchers at the Shibaura Institute of Technology Revolutionize Face Direction Detection with Deep Learning: Navigating Challenges of Hidden Facial Features and Expanding Horizon Angles
Exciting news! Researchers at the Shibaura Institute of Technology in Japan have revolutionized face direction detection using deep learning techniques and gyroscopic sensors. They have successfully overcome the limitations of traditional methods and achieved accurate results with a smaller training dataset. This breakthrough technology has vast applications in driver monitoring systems, human-computer interaction, and healthcare diagnostics. By analyzing a driver's face orientation in real time, it can determine their attentiveness to the road and prevent distractions. Learn more about this innovative research and its potential impact on our blog post: [Link to Blog Post](https://ift.tt/aUjtGdF) Stay tuned for more updates on AI research and projects by subscribing to the MarkTechPost newsletter. Let's embrace cutting-edge technology together! Useful links: AI Scrum Bot (ask about AI scrum and agile), Telegram @itinai, Twitter @itinaicom
#itinai.com#AI#News#Researchers at the Shibaura Institute of Technology Revolutionize Face Direction Detection with Deep Learning: Navigating Challenges of Hid#AI News#AI tools#Innovation#itinai#LLM#MarkTechPost#Niharika Singh#Productivity Researchers at the Shibaura Institute of Technology Revolutionize Face Direction Detection with Deep Learning: Navigating Chal
0 notes
Text
There is no such thing as AI.
How to help the non-technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an ai image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out, those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which developing algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithm. (This is the basis of most of the technology people call AI.)
Language model (LM, or LLM for a large language model): a probabilistic model of a natural language that can generate probabilities for a series of words, based on the text corpora in one or multiple languages it was trained on. (This would be your ChatGPT; see the toy sketch after these definitions.)
Generative adversarial network (GAN): a class of machine learning frameworks, and a prominent approach to generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
Diffusion models: models that generate the probability distribution of a given dataset. In image generation, a neural network is trained to denoise images with added Gaussian noise. After training is complete, it can be used for image generation by starting from a random noise image and denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post after it was brought to my attention that it is now more common than GANs.)
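To make "probabilistic model of a natural language" concrete, here is a toy sketch of the idea: a pure-Python bigram model trained on two made-up sentences. (Everything here is invented for illustration; real LLMs do the same thing with huge neural networks over trillions of words.)

```python
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count how often each word follows each other word.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev: str) -> str:
    # Sample the next word in proportion to how often it followed `prev`.
    options = counts[prev]
    return random.choices(list(options), weights=options.values())[0]

word, out = "the", ["the"]
for _ in range(7):
    word = next_word(word)
    out.append(word)
print(" ".join(out))  # e.g. "the cat sat on the rug . the"
```

It doesn't "know" anything; it just emits statistically plausible continuations, which is the core mechanism behind the chatbots, scaled up enormously.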
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
·
View notes
Text
something bleak about the chatgpt subreddit is that occasionally people will bring up the issue of mental illness intersecting with LLMs i.e. intensifying delusions / conspiratorial thinking / paranoia / feelings of grandiosity by basically just mirroring the user's language and philosophy. and the constant refrain is, "these are people who already had these problems, chatgpt didn't give them mental illness, *and it would have happened anyway*"
and we can agree with the first part of that statement, in that talking to a LLM isn't going to completely rewrite your neurochemistry - it's not going to make you think completely wild and new things that you didn't already have some kind of tendency towards.
but. it's an especially shitty attitude to have, that people with paranoia / schizotypal disorders were "a lost cause anyway" and we aren't acknowledging how utterly novel this kind of technology is - it's a thing that people with these conditions have never had access to before, that is marketed specifically as a tool to promote thinking, in a way that can absolutely override or reverse years of progress people have had in managing their conditions
like at best, we've had access to chatbots since the early 00s that would make snarky jokes and frequently answer "I'm sorry I don't understand what you're asking" - and people on r/ChatGPT will bring up these bots as a kind of "gotcha" regardless - but nothing like chatgpt has ever existed before!!! that's the entire point, it's a novel technology that is climbing toward ubiquity - everyone and their mother is starting to use chatgpt now in a way other chat programs never were
and if someone in the psychological sciences can verify here: i'm aware that there is a portion of the population that is within 1-2 degrees of someone with a psychotic disorder, with a substantial genetic component, who (in previous decades) never went on to develop psychosis. they would have typically gone their whole lives just navigating around that vulnerability, unaware or not, most of them never triggering it, because the conditions (environmental, cultural, familial) never transpired. some might have stumbled into a religious group or fringe community that then activated that predisposition, but it was something that people had to search out and find, specifically interacting with people, literature, forums, etc that enabled these delusions.
LLMs are at-home ready to use non-persons that are perpetually available 24 hours a day to repeat back to a user exactly what they want to hear! it's free! you don't have to leave your house. you don't have to sit face to face with another person who can emotionally process what you're saying. you will not be recommended resources for your delusions. you can have a never-ending conversation with a computer that 100% agrees that you are the messiah.
if people aren't concerned about this shit as far as it affects the lives of mentally ill and unknowingly susceptible people, and they go on accepting them as collateral losses for this "great technological progress," then we're fucked. sorry. but we are.
385 notes
·
View notes
Text
Hey friends,
I know this isn’t the marginalia update you were hoping for, but HOLD ON. This isn’t going in the direction you think it is.
First of all, thank you so much for your patience. I'm really sorry for leaving you on such a horrendous cliffhanger for so long. That was never my intention, but the AO3 curse got me so, so bad.
As most of you who follow me on Tumblr know, my health deteriorated significantly over the course of the last two years. It began with chronic migraines and brain fog, which then worsened and led to chronic fatigue, increasing cognitive/language confusion, and finally issues with balance and weakness in my extremities. Long story short, in February I was finally diagnosed with a Chiari malformation, and I had brain surgery last month. I'm happy to report that it appears to have been a complete success! My symptoms are gone. I feel better than I have in literal decades. The curse is lifted. Nice. Now I’m just doing the hard work of recovering from surgery.
In the meantime, a few things have happened:
First, I became a traditionally published author. My debut novel scooted out into the world last year, is currently nominated for a big fat award, and I have more books coming out over the next few years. I have no idea how I got here, but holy crap!!
Second, JKR pulled more bullshit. And while I think it’s fantastically ethical, actually, to engage in the reclamatory spaces of fandom, my appetite for it is sort of gone at the moment. I doubt forever. These things wax and wane, even when the author isn’t a piece of shit.
So here’s what’s going on.
I’ve spent the last month reworking marginalia into a piece of original fiction. Which honestly hasn’t been very hard to do. I’ve written hundreds of fic over the years, and this is the first one that I ever felt compelled to revise into something else, because it's also the only one that I knew could stand on its own two feet as more than just an intertext with canon. This fic has always been a frank conversation with its source material, but the result of that has allowed me to bypass the constraints of JKR's world to create something entirely my own. It's also about so much more than that frank conversation. It’s about navigating a version of yourself you can live with under inherently unethical circumstances. It’s about chronic pain and invisible disability, and I do see the irony in not knowing I had either of those things at the time I was first drafting. It's about breaking cycles of abuse. It’s about empathy. I think the world needs more stories like this.
So my agent is going to begin pitching the reworked version to publishers relatively soon, at which point this fic will have to come down from AO3. I'm going to set the date for June 7th. From there, one of two things will happen:
1. The reimagined version will eventually get picked up by a publisher, will become a traditionally published novel—probably a duology, let's be real—and I’ll be very open about where you can get your hands on it.
or
2. No one bites. And this will go right back up on AO3, comment section book club intact, and I'll finish the story when the time is right.
In the meantime, I don’t mind you downloading it if you want, and I don't mind you sharing it with others if they ask. It’s a big part of why I’m giving you the heads up. And Pigoletta’s excellent podfic will stay right where it is unless she decides otherwise at some point. But what I do ask is that you don't repost marginalia publicly, not on AO3 and not anywhere else. And for the love of fuck, don't feed it to an LLM to try to generate an ending.
Okay, I think that's it! Thank you all for the gorgeous community that's grown around this fic. I'm excited to see what the next chapter brings.
xo zo
125 notes
·
View notes
Text
Wild how the knowledge of how to manually operate search engines and surf the web is going to be a lost art bc it was never really considered "proper research" and it's now being replaced by LLMs, which every company is on board with lol. I honestly believe librarians should just document the basics of web research and googling prior to the implementation of ai. Even tricks like searching a concept on google images and trying to find an infographic are going to be lost. Databases and academic journals are important to navigate, for sure. But I think the average person just isn't going to use that information, and ai is already making people so dumb. Soft research skills are so important. Basically the same difference as having a frontal cortex or not, just in this modern age. Misinformed conspiracy theorists can actually be really good at puzzle solving and reasoning, but they're so overexposed to wrong/overaffirming information that they're functionally brain dead.
70 notes
·
View notes
Text
youtube
oh this is what I was looking for: the bit about 20 minutes into this video where it discusses how in high-dimensional spaces you can fit a lot of vectors that are almost orthogonal, obviously not exactly ninety degrees but say between 89 and 91 degrees to each other, apparently an implication of the Johnson–Lindenstrauss lemma:
that suggests one reason why big LLMs might work so well, in that by representing concepts by vectors with ten thousand dimensions, they may be able to represent hundreds of millions of approximately distinct concepts, far more than would be at first apparent, which is essential for combining concepts and factoring out their component parts and relating them to other concepts.
so much of the research on deep learning seems like it can be summarised by surprise that high dimensional structured data is much easier to navigate than you might expect, and so training big networks paradoxically works out even better than training small networks, despite seeming much less practical in theory.
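you can sanity-check the near-orthogonality claim numerically; here's a minimal sketch with NumPy (the dimension and sample count are arbitrary choices of mine, not anything from the video):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 10_000, 1_000  # dimension of the space, number of random vectors

# Sample random unit vectors.
v = rng.standard_normal((n, d))
v /= np.linalg.norm(v, axis=1, keepdims=True)

# Pairwise angles from dot products (upper triangle only, no self-pairs).
cos = (v @ v.T)[np.triu_indices(n, k=1)]
angles = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

print(f"mean angle: {angles.mean():.2f} degrees")
print(f"fraction within 89 to 91 degrees: {(np.abs(angles - 90) <= 1).mean():.1%}")
```

at d = 10,000 the cosine of a random pair concentrates around zero with spread ~1/sqrt(d), so the vast majority of the ~500k pairs land inside that 89 to 91 degree band.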
38 notes
·
View notes
Text
Thinking that we are on the verge of honest-to-Asimov androids, humanoid robots. It seems that researchers are finally figuring out how to make bipedal robots, which was actually a big engineering challenge, and soon there might be industrial production. Meanwhile, Deepseek has proven you don't need billions of dollars and supercomputers to build LLMs that can effectively do human communication and limited problem solving (which, again, I need to stress, was pure science fiction barely 5 years ago), and you can run them on relatively modest hardware. Voice recognition has been a fact for years now, and image recognition is almost there, a necessity for autonomous robots.
We're just a few years away, if not months, from someone finally combining everything; a bipedal humanoid robot who is able to communicate, has problem-solving abilities and can autonomously navigate its environment.
What's more, with Deepseek and no doubt other developments, these robots won't need to be connected to the internet. In theory, with dedicated hardware (a positronic brain let's say?) they could run their AI locally. They wouldn't be chatbots depending on a network of servers. They would be individuals.
If you saw that awful video of people pushing a robot around you'll understand why I'm concerned.
33 notes
·
View notes
Text
Navigation with Large Language Models: Discussion and References
View On WordPress
#goal-directed-exploration#language-frontier-guide#large-language-models#llm-heuristics#navigation-with-llm#polling-llms#scoring-subgoals#semantic-scene-understanding
0 notes
Text
using LLMs to control a game character's dialogue seems an obvious use for the technology. and indeed people have tried, for example nVidia made a demo where the player interacts with AI-voiced NPCs:
youtube
this looks bad, right? like idk about you but I am not raring to play a game with LLM bots instead of human-scripted characters. they don't seem to have anything interesting to say that a normal NPC wouldn't, and the acting is super wooden.
so, the attempts to do this so far that I've seen have some pretty obvious faults:
relying on external API calls to process the data (expensive!)
presumably relying on generic 'you are xyz' prompt engineering to try to get a model to respond 'in character', resulting in bland, flavourless output
limited connection between game state and model state (you would need to translate the relevant game state into a text prompt)
responding to freeform input, models may not be very good at staying 'in character', with the default 'chatbot' persona emerging unexpectedly. or they might just make uncreative choices in general.
AI voice generation, while it's moved very fast in the last couple years, is still very poor at 'acting', producing very flat, emotionless performances, or uncanny mismatches of tone, inflection, etc.
although the model may generate contextually appropriate dialogue, it is difficult to link that back to the behaviour of characters in game
so how could we do better?
the first one could be solved by running LLMs locally on the user's hardware. that has some obvious drawbacks: running on the user's GPU means the LLM is competing with the game's graphics, meaning both must be more limited. ideally you would spread the LLM processing over multiple frames, but you still are limited by available VRAM, which is contested by the game's texture data and so on, and LLMs are very thirsty for VRAM. still, imo this is way more promising than having to talk to the internet and pay for compute time to get your NPC's dialogue lmao
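as a sketch of what the local approach looks like with llama-cpp-python (the model file, persona, and sampling settings here are all placeholder assumptions, not a recommendation):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Hypothetical local model file. n_gpu_layers=-1 offloads everything to the
# GPU, which is exactly where it starts competing with the game's rendering.
llm = Llama(model_path="npc-model.gguf", n_gpu_layers=-1, n_ctx=2048)

def npc_reply(persona: str, player_line: str) -> str:
    prompt = f"{persona}\nPlayer: {player_line}\nNPC:"
    out = llm(prompt, max_tokens=64, stop=["Player:", "\n\n"])
    return out["choices"][0]["text"].strip()

print(npc_reply("You are a grumpy blacksmith in a fantasy town.",
                "Heard any rumours lately?"))
```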
second one might be improved by using a tool like control vectors to more granularly and consistently shape the tone of the output. I heard about this technique today (thanks @cherrvak)
third one is an interesting challenge - but perhaps a control-vector approach could also be relevant here? if you could figure out how a description of some relevant piece of game state affects the processing of the model, you could then apply that as a control vector when generating output. so the bridge between the game state and the LLM would be a set of weights for control vectors that are applied during generation.
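a rough sketch of that idea: a forward hook on one transformer block adds a scaled steering vector to the hidden states during generation. (the "hostility" direction below is random stand-in data; real control vectors are extracted from contrasting prompt pairs, and the model and layer choice are arbitrary.)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")  # any small causal LM for the sketch
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Stand-in steering direction; a real one would come from contrasting prompts.
direction = torch.randn(model.config.n_embd)
direction /= direction.norm()

def steer(strength: float):
    # Add the scaled control vector to the residual stream of block 6.
    def hook(module, inputs, output):
        hidden, *rest = output
        return (hidden + strength * direction, *rest)
    return model.transformer.h[6].register_forward_hook(hook)

# Game state (say, "player attacked the guard") would pick the strength:
handle = steer(strength=4.0)
ids = tok("The guard looks at you and says", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0]))
handle.remove()
```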
this one is probably something where finetuning the model, and using control vectors to maintain a consistent 'pressure' to act a certain way even as the context window gets longer, could help a lot.
probably the vocal performance problem will improve in the next generation of voice generators, I'm certainly not solving it. a purely text-based game would avoid the problem entirely of course.
this one is tricky. perhaps the model could be taught to generate a description of a plan or intention, but linking that back to commands for the traditional agentic game 'AI' to perform is not trivial. ideally, if there are various high-level commands that a game character might want to perform (like 'navigate to a specific location' or 'target an enemy') that are usually selected using some other kind of algorithm like weighted utilities, you could train the model to generate tokens that correspond to those actions and then feed them back into the 'bot' side? I'm sure people have tried this kind of thing in robotics. you could just have the LLM stuff go 'one way', and rely on traditional game AI for everything besides dialogue, but it would be interesting to complete that feedback loop. a sketch of one way to close it follows.
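one simple version of that loop is to let the model emit inline action tags alongside dialogue and parse them out on the game side; a sketch (the tag format and NPC methods are invented for illustration):

```python
import re

class NPC:  # stand-in for the game-side agent
    def navigate_to(self, place): print(f"[game] pathfinding to {place}")
    def set_target(self, enemy): print(f"[game] targeting {enemy}")

# Map action tags the model is prompted/finetuned to emit onto game commands.
ACTIONS = {"GOTO": NPC.navigate_to, "TARGET": NPC.set_target}
TAG = re.compile(r"\[(GOTO|TARGET):([\w ]+)\]")

def apply_dialogue(npc: NPC, generated: str) -> str:
    """Dispatch tags like [GOTO:docks] to the game AI; return the clean line."""
    for verb, arg in TAG.findall(generated):
        ACTIONS[verb](npc, arg)
    return TAG.sub("", generated).strip()

print(apply_dialogue(NPC(), "Follow me, I'll show you. [GOTO:docks]"))
```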
I doubt I'll be using this anytime soon (models are just too demanding to run on anything but a high-end PC, which is too niche, and I'll need to spend time playing with these models to determine if these ideas are even feasible), but maybe something to come back to in the future. first step is to figure out how to drive the control-vector thing locally.
48 notes
·
View notes
Text
This is so annoying. I work as an engineer in a division that collects education-based data and trains models. We have been in the LLM space for literal decades, and most of the AI engineers, NLP scientists, and data scientists know EXACTLY the ways in which things like ChatGPT are lacking and the dangers of relying on them too heavily.
My boss asked me to write up a report to send to him. I spent a few hours doing my due diligence in researching, then sent it off to him. He then asked if I ran this through AI, to which I said "no, I only really use that for copy editing purposes." He says "You really should start to learn how to leverage it, it can streamline your work a lot. For example, this is what ChatGPT told me when I asked for a similar report."
And I don't know how to navigate this because I think it's wildly irresponsible to use LLMs as a search engine, and validating whatever ChatGPT spits out at me would take more time than just writing the damn report, but I don't exactly know how to say "Not only do I think that's poor advice, I think it is dangerous and unprofessional." So instead of that I was just like :thumbs up: and decided to not ever do that.
But now, after another draft, he is specifically telling me to run the whole thing through ChatGPT and share the output with some higher-ups. And I feel like I'm at a crossroads where I'm like... I think this is very very stupid and it's concerning that you don't know better.
53 notes
·
View notes
Text
pulling out a section from this post (a very basic breakdown of generative AI) for easier reading;
AO3 and Generative AI
There are unfortunately some massive misunderstandings in regards to AO3 being included in LLM training datasets. This post was semi-prompted by the ‘Knot in my name’ AO3 tag (for those of you who haven’t heard of it, it’s supposed to be a fandom anti-AI event where AO3 writers help “further pollute” AI with Omegaverse), so let’s take a moment to address AO3 in conjunction with AI. We’ll start with the biggest misconception:
1. AO3 wasn’t used to train generative AI.
Or at least not any more than any other internet website. AO3 was not deliberately scraped to be used as LLM training data.
The AO3 moderators found traces of the Common Crawl web crawler in their servers. The Common Crawl is an open data repository of raw web page data, metadata extracts, and text extracts collected from 10+ years of web crawling. Its collective data is measured in petabytes. (As a note, it also only features samples of the available pages on a given domain in its datasets, because its data is freely released under fair use, and this is part of how they navigate copyright.) LLM developers use it and similar web crawls like Google's C4 to bulk up the overall amount of pre-training data.
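For the curious, Common Crawl's per-snapshot index can be queried for any domain; a small sketch with requests (the crawl ID is one example snapshot, and the limit is arbitrary):

```python
# pip install requests
import requests

# Ask one Common Crawl snapshot's index which pages it captured for a domain.
resp = requests.get(
    "https://index.commoncrawl.org/CC-MAIN-2023-50-index",
    params={"url": "archiveofourown.org/*", "output": "json", "limit": "5"},
    timeout=30,
)
for line in resp.text.strip().splitlines():
    print(line)  # one JSON record per capture: URL, timestamp, archive offsets
```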
AO3 is big to an individual user, but it’s actually a small website when it comes to the amount of data used to pre-train LLMs. It’s also just a bad candidate for training data. As a comparison example, Wikipedia is often used as high quality training data because it’s a knowledge corpus and its moderators put a lot of work into maintaining a consistent quality across its web pages. AO3 is just a repository for all fanfic -- it doesn’t have any of that quality maintenance nor any knowledge density. Just in terms of practicality, even if people could get around the copyright issues, the sheer amount of work that would go into curating and labeling AO3’s data (or even a part of it) to make it useful for the fine-tuning stages most likely outstrips any potential usage.
Speaking of copyright, AO3 is a terrible candidate for training data just based on that. Even if people (incorrectly) think fanfic doesn't hold copyright, there are plenty of books and texts that are public domain that can be found in online libraries that make for much better training data (or rather, there is a higher consistency in quality for them that would make them more appealing than fic for people specifically targeting written story data). And for any scrapers who don't care about legalities or copyright, they're going to target published works instead. Meta is in fact currently getting sued for including published books from a shadow library in its training data (note, this case is not in regards to any copyrighted material that might've been caught in the Common Crawl data, it's regarding a book repository of published books that was scraped specifically to bring in some higher-quality data for the first training stage). In a similar case, there's an anonymous group suing Microsoft, GitHub, and OpenAI for training their LLMs on open source code.
Getting back to my point, AO3 is just not desirable training data. It’s not big enough to be worth scraping for pre-training data, it’s not curated enough to be considered for high quality data, and its data comes with copyright issues to boot. If LLM creators are saying there was no active pursuit in using AO3 to train generative AI, then there was (99% likelihood) no active pursuit in using AO3 to train generative AI.
AO3 has some preventative measures against being included in future Common Crawl datasets, which may or may not work, but there’s no way to remove any previously scraped data from that data corpus. And as a note for anyone locking their AO3 fics: that might potentially help against future AO3 scrapes, but it is rather moot if you post the same fic in full to other platforms like ffn, twitter, tumblr, etc. that have zero preventative measures against data scraping.
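If those preventative measures include robots.txt rules (the usual way a site opts out of Common Crawl's CCBot; I'm assuming that here, not stating AO3's exact setup), you can check what a given crawler is allowed to fetch with the Python standard library (the work URL below is just an example path):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://archiveofourown.org/robots.txt")
rp.read()

# Common Crawl's crawler identifies itself as CCBot.
example = "https://archiveofourown.org/works/12345"  # example URL
print("CCBot allowed:", rp.can_fetch("CCBot", example))
```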
2. A/B/O is not polluting generative AI
…I’m going to be real, I have no idea what people expected to prove by asking AI to write Omegaverse fic. At the very least, people know A/B/O fics are not exclusive to AO3, right? The genre isn’t even exclusive to fandom -- it started in fandom, sure, but it expanded to general erotica years ago. It’s all over social media. It has multiple Wikipedia pages.
More to the point though, omegaverse would only be “polluting” AI if LLMs were spewing omegaverse concepts unprompted or like…associated knots with dicks more than rope or something. But people asking AI to write omegaverse and AI then writing omegaverse for them is just AI giving people exactly what they asked for. And…I hate to point this out, but LLMs writing for a niche the LLM trainers didn’t deliberately train the LLMs on is generally considered to be a good thing to the people who develop LLMs. The capability to fill niches developers didn’t even know existed increases LLMs’ marketability. If I were a betting man, what fandom probably saw as a GOTCHA moment, AI people probably saw as a good sign of LLMs’ future potential.
3. Individuals cannot affect LLM training datasets.
So back to the fandom event, with the stated goal of sabotaging AI scrapers via omegaverse fic.
…It’s not going to do anything.
Let’s add some numbers to this to help put things into perspective:
LLaMA’s 65 billion parameter model was trained on 1.4 trillion tokens. Of that 1.4 trillion tokens, about 67% of the training data was from the Common Crawl (roughly ~3 terabytes of data).
3 terabytes is 3,000,000,000 kilobytes.
That’s 3 billion kilobytes.
According to a news article I saw, there have been ~450k words total published for this campaign (*this was while it was going on, that number has probably changed, but you're about to see why that still doesn't matter). So, roughly speaking, ~450k words of text is ~1012 KB (I'm going off the document size of a plain-text doc for a fic whose word count is ~440k).
So 1,012 out of 3,000,000,000.
Aka 0.000034%.
And that 0.000034% of 3 billion kilobytes is only 2/3s of the data for the first stage of training.
And not to beat a dead horse, but 0.000034% is still grossly overestimating the potential impact of posting A/B/O fic. Remember, only parts of AO3 would get scraped for Common Crawl datasets. Which are also huge! The October 2022 Common Crawl dataset is 380 tebibytes. The April 2021 dataset is 320 tebibytes. The 3 terabytes of Common Crawl data used to train LLaMA was randomly selected data that totaled less than 1% of one full dataset. Not to mention, LLaMA's training dataset is currently on the (much) larger side as compared to most LLM training datasets.
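The arithmetic above is easy to reproduce (the sizes are the post's own estimates):

```python
fic_kb = 1_012                # ~450k words as a plain-text document
crawl_kb = 3_000_000_000      # ~3 TB of Common Crawl data in LLaMA's mix

print(f"{fic_kb / crawl_kb:.6%}")  # -> 0.000034%
```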
I also feel the need to point out again that AO3 is trying to prevent any Common Crawl scraping in the future, which would include protection for these new stories (several of which are also locked!).
Omegaverse just isn’t going to do anything to AI. Individual fics are going to do even less. Even if all of AO3 suddenly became omegaverse, it’s just not prominent enough to influence anything in regards to LLMs. You cannot affect training datasets in any meaningful way doing this. And while this might seem really disappointing, this is actually a good thing.
Remember that anything an individual can do to LLMs, the person you hate most can do the same. If it were possible for fandom to corrupt AI with omegaverse, fascists, bigots, and just straight up internet trolls could pollute it with hate speech and worse. AI already carries a lot of biases even while developers are actively trying to flatten that out, it’s good that organized groups can’t corrupt that deliberately.
#generative ai#pulling this out wasnt really prompted by anything specific#so much as heard some repeated misconceptions and just#sighs#nope#incorrect#u got it wrong#sorry#unfortunately for me: no consistent tag to block#sigh#ao3
101 notes
·
View notes
Text
I use Scite because it gives me a quick and easy-to-navigate overview of which articles cite which articles, and whether it's in the Intro or Methodology or whatever, but they have this ~~AI Assistant~~ which can allegedly help you answer scientific questions by searching through scientific publications. I check it from time to time, and yeah, it has all the shortcomings of LLMs being bullshit generators just like all the others (just with references). Let me show you:
I was listening to a podcast on whale falls and the evolution of this sort of deep-sea organism (they were different animals living on ichthyosaur falls, because the current bone-eating Osedax worms hadn't evolved yet). So, I was interested if there was any fossil of a dead megalodon in the process of decomposition like that.
this is how it looked up stuff:
This is the full reply:
While there are no direct fossil records linking the presence of megalodon (*Carcharocles megalodon*) with bone-eating worms known as *Osedax*, the understanding of whale-fall communities provides a relevant context. Whale falls create unique ecosystems that can involve various organisms, including these specialized bone-eating worms. The fossil record demonstrates that unique ecological relationships can emerge at these sites, where organisms like *Osedax* thrive on the nutrient-rich remains of large marine animals that have died and sunk to the ocean floor. The relationship between megalodon and such communities is suggested through the known predation of megalodon on large marine mammals, which would eventually become whale falls. Evidence in the fossil record shows that *C. megalodon* likely preyed on dolphins and baleen whales (Kast et al., 2022). Hence, when these large animals died, they could have potentially served as substrates for *Osedax* colonization, similar to observations made in whale-fall studies (Smith et al., 2015; Vrijenhoek et al., 2009). *Osedax* shows a remarkable prevalence in whale falls, indicating that they are highly adapted to such environments where they feed on bone collagen via root-like structures that penetrate the bones (Verna et al., 2010; Higgs et al., 2014). This specific feeding mechanism has been noted not just in recent whale falls but has also been observed in fossilized shallow marine environments (Rouse et al., 2011; Verna et al., 2010). The *Osedax* worms are believed to have existed as far back as the Cretaceous, suggesting that if large marine mammals became part of the fossil record after being preyed upon by megalodons, *Osedax* could have inhabited their remains (Vrijenhoek et al., 2009; Kiel & Goedert, 2006). In conclusion, while no specific megalodon fall fossil featuring *Osedax* remains has been identified in the literature, the ecological dynamics at play in whale-fall environments suggest a plausible occurrence. As such interactions are characteristic of deep-sea ecosystems, where large organic matter episodically supports diverse life forms, it is logical to hypothesize that this relationship could potentially include megalodon-related falls and subsequent colonization by bone-eating worms.
Let's have a look how it bullshits its way through the answer, because I think this example makes it really clear.
"While there are no direct fossil records linking the presence of megalodon (*Carcharocles megalodon*) with bone-eating worms known as *Osedax*, " - this part is actually relevant to my question, even though I was asking about any bone-eating worms, not just osedax.
"Whale falls create unique ecosystems that can involve various organisms, including these specialized bone-eating worms. The fossil record demonstrates that unique ecological relationships can emerge at these sites, where organisms like *Osedax* thrive on the nutrient-rich remains of large marine animals that have died and sunk to the ocean floor." This is pure bullshit generated based on the statistical analysis of words close to "bone worms" in a sentence. Also, I wasn't asking about the worms.
"The relationship between megalodon and such communities is suggested through the known predation of megalodon on large marine mammals, which would eventually become whale falls. " Statistics! Megalodon was eating whales, whales make whale falls, therefore it must be relevant to the query (it's not).
"Evidence in the fossil record shows that *C. megalodon* likely preyed on dolphins and baleen whales (Kast et al., 2022). Hence, when these large animals died, they could have potentially served as substrates for *Osedax* colonization, similar to observations made in whale-fall studies (Smith et al., 2015; Vrijenhoek et al., 2009)." This is the same point like in the previous sentence, just with references, and therefore the same sort of unconnected bullshit.
"*Osedax* shows a remarkable prevalence in whale falls, indicating that they are highly adapted to such environments where they feed on bone collagen via root-like structures that penetrate the bones (Verna et al., 2010; Higgs et al., 2014). This specific feeding mechanism has been noted not just in recent whale falls but has also been observed in fossilized shallow marine environments (Rouse et al., 2011; Verna et al., 2010)." I wasn't asking about the worms, mate, but nice of you to notice they are sometimes used in the same paragraph as something fossil. Bonus: I checked the Rouse (2011) and Verna (2010) references and they examined modern communities, not fossil ones.
"The *Osedax* worms are believed to have existed as far back as the Cretaceous," oh look, a piece of information that is actually relevant to my question, pity this is generated and not actually cited.
"suggesting that if large marine mammals became part of the fossil record after being preyed upon by megalodons, *Osedax* could have inhabited their remains (Vrijenhoek et al., 2009; Kiel & Goedert, 2006). " dude, I'm not asking about mammals becoming a part of fossil record, no matter who chewed on them.
"In conclusion, while no specific megalodon fall fossil featuring *Osedax* remains has been identified in the literature," yet again, we get to the vicinity of an actual answer to my question ...
"the ecological dynamics at play in whale-fall environments suggest a plausible occurrence. " This is the part of the conclusion, where the writer summs up their findings. The problem with this LLM generated chunk of text is that nothing that it presented in the previous paragraphs actually shows this.
"As such interactions are characteristic of deep-sea ecosystems, " and here we have yet another problem with LLM generated text - limited interconnectedness between senteces. What exactly are the "such interactions" refering to here? "the ecological dynamics at play"? and where are these exactly? the interactions of worms and whale falls? That's not actually much of an interaction. The interactions between a different organisms in different stages of a (whale) fall community makes more logical sense in the grand scheme of things, but no such thing has been mentioned anywhere in the text - and neither it's relevant to the question asked.
"it is logical to hypothesize that this relationship could potentially include megalodon-related falls and subsequent colonization by bone-eating worms." Logical to hypothetize is such a bullshit phase unless you're a logic-driven sci-fi character.
-----
In conclusion:
A quick search did not show any Megalodon fall fossils, so they probably haven't been found, but it's likely that some sort of bone-eating worms existed at the same time as Megalodon.
Please don't use LLMs to write stuff for you, because a) all you're gonna get is more-or-less plausible-sounding bullshit and b) why should anyone be bothered to read something you can't be bothered to write.
PS:
... so while bone worms were alive and eating bones of deadfall of basically anything (picture from ref 39) ...
... megalodon doesn't actually have bones, because it's a shark, not a bony fish...
... so the dynamics of a megalodon deadfall environment are going to be different when it comes to stuff beyond flesh and organs, and it's also no wonder there are no fossils of it.
#LLM#AI#also#megalodon#also also#osedax worms are called kostižerky in czech wich is a 70/10 type of name
5 notes
·
View notes
Text
youtube
How to use ChatGPT in 2024 full tutorial
Begin your journey to becoming a ChatGPT pro with our 12-hour ChatGPT Masterclass. This video covers everything from basics to advanced, starting with the fundamentals of ChatGPT, Generative AI, and Large Language Models (LLMs). You'll learn how to navigate ChatGPT's interface, delve into Prompt Engineering, and master effective prompting strategies. We introduce the different ChatGPT versions (3.5, 4, 4o), their differences, and usage. You'll build programs, handle exceptions, test code, and create Python apps and websites using ChatGPT 4o. Additionally, you'll analyze data with Python and Excel, simplify tasks in Excel and PowerPoint, create diverse content, and use ChatGPT for SEO, digital marketing, and finance. Finally, learn to create custom GPTs tailored to your needs.
#youtube#free education#education#technology#educate yourselves#How to use ChatGPT in 2024#How to use ChatGPT#chatgpt 4#chatgpt#educate yourself#education for all#gpt 4 ai technology#ai resources#ChatGPT Full Course#ChatGPT Tutorial
10 notes
·
View notes