# Leading LLM Developers
rosemarry-06 · 10 months ago
Text
large language model companies in India
Large Language Model Development Company (LLMDC) is a pioneering organization at the forefront of artificial intelligence research and development. Specializing in the creation and refinement of large language models, LLMDC leverages cutting-edge technologies to push the boundaries of natural language understanding and generation. The company's mission is to develop advanced AI systems that can understand, generate, and interact with human language in a meaningful and contextually relevant manner. 
With a team of world-class researchers and engineers, LLMDC focuses on a range of applications including automated customer service, content creation, language translation, and more. Their innovations are driven by a commitment to ethical AI development, ensuring that their technologies are not only powerful but also aligned with principles of fairness, transparency, and accountability. Through continuous collaboration with academic institutions, industry partners, and regulatory bodies, LLMDC aims to make significant contributions to the AI landscape, enhancing the way humans and machines communicate.
Large language model services offer powerful AI capabilities to businesses and developers, enabling them to integrate advanced natural language processing (NLP) into their applications and workflows. 
The largest providers of language model services are industry leaders in artificial intelligence, offering advanced NLP solutions that empower businesses across various sectors. Prominent among these providers are OpenAI, Google Cloud, Microsoft Azure, and IBM Watson. OpenAI, renowned for its GPT series, delivers versatile and powerful language models that support a wide range of applications, from text generation to complex data analysis. Google Cloud offers its AI and machine learning tools, including BERT and T5 models, which excel in tasks such as translation, sentiment analysis, and more.
Microsoft Azure provides Azure Cognitive Services, which leverage models like GPT-3 for diverse applications, including conversational AI and content creation. IBM Watson, with its extensive suite of AI services, offers robust NLP capabilities for enterprises, enabling advanced text analytics and language understanding. These providers lead the way in delivering scalable, reliable, and innovative language model services that transform how businesses interact with and utilize language data.
Expert Custom LLM Development Solutions offer tailored AI capabilities designed to meet the unique needs of businesses across various industries. These solutions provide bespoke development of large language models (LLMs) that are fine-tuned to specific requirements, ensuring optimal performance and relevance. Leveraging deep expertise in natural language processing and machine learning, custom LLM development services can address complex challenges such as industry-specific jargon, regulatory compliance, and specialized content generation.
0 notes
feminist-space · 1 day ago
Text
Excerpts:
"The convenience of instant answers that LLMs provide can encourage passive consumption of information, which may lead to superficial engagement, weakened critical thinking skills, less deep understanding of the materials, and less long-term memory formation [8]. The reduced level of cognitive engagement could also contribute to a decrease in decision-making skills and in turn, foster habits of procrastination and "laziness" in both students and educators [13].
Additionally, due to the instant availability of the response to almost any question, LLMs can possibly make a learning process feel effortless, and prevent users from attempting any independent problem solving. By simplifying the process of obtaining answers, LLMs could decrease student motivation to perform independent research and generate solutions [15]. Lack of mental stimulation could lead to a decrease in cognitive development and negatively impact memory [15]. The use of LLMs can lead to fewer opportunities for direct human-to-human interaction or social learning, which plays a pivotal role in learning and memory formation [16].
Collaborative learning as well as discussions with other peers, colleagues, teachers are critical for the comprehension and retention of learning materials. With the use of LLMs for learning also come privacy and security issues, as well as plagiarism concerns [7]. Yang et al. [17] conducted a study with high school students in a programming course. The experimental group used ChatGPT to assist with learning programming, while the control group was only exposed to traditional teaching methods. The results showed that the experimental group had lower flow experience, self-efficacy, and learning performance compared to the control group.
Academic self-efficacy, a student's belief in their "ability to effectively plan, organize, and execute academic tasks", also contributes to how LLMs are used for learning [18]. Students with low self-efficacy are more inclined to rely on AI, especially when influenced by academic stress [18]. This leads students to prioritize immediate AI solutions over the development of cognitive and creative skills. Similarly, students with lower confidence in their writing skills, lower "self-efficacy for writing" (SEWS), tended to use ChatGPT more extensively, while higher-efficacy students were more selective in AI reliance [19]. We refer the reader to the meta-analysis [20] on the effect of ChatGPT on students' learning performance, learning perception, and higher-order thinking."
"Recent empirical studies reveal concerning patterns in how LLM-powered conversational search systems exacerbate selective exposure compared to conventional search methods. Participants engaged in more biased information querying with LLM-powered conversational search, and an opinionated LLM reinforcing their views exacerbated this bias [63]. This occurs because LLMS are in essence "next token predictors" that optimize for most probable outputs, and thus can potentially be more inclined to provide consonant information than traditional information system algorithms [63]. The conversational nature of LLM interactions compounds this effect, as users can engage in multi-turn conversations that progressively narrow their information exposure. In LLM systems, the synthesis of information from multiple sources may appear to provide diverse perspectives but can actually reinforce existing biases through algorithmic selection and presentation mechanisms.
The implications for educational environments are particularly significant, as echo chambers can fundamentally compromise the development of critical thinking skills that form the foundation of quality academic discourse. When students rely on search systems or language models that systematically filter information to align with their existing viewpoints, they might miss opportunities to engage with challenging perspectives that would strengthen their analytical capabilities and broaden their intellectual horizons. Furthermore, the sophisticated nature of these algorithmic biases means that a lot of users often remain unaware of the information gaps in their research, leading to overconfident conclusions based on incomplete evidence. This creates a cascade effect where poorly informed arguments become normalized in academic and other settings, ultimately degrading the standards of scholarly debate and undermining the educational mission of fostering independent, evidence-based reasoning."
"In summary, the Brain-only group's connectivity suggests a state of increased internal coordination, engaging memory and creative thinking (manifested as theta and delta coherence across cortical regions). The Engine group, while still cognitively active, showed a tendency toward more focal connectivity associated with handling external information (e.g. beta band links to visual-parietal areas) and comparatively less activation of the brain's long-range memory circuits. These findings are in line with literature: tasks requiring internal memory amplify low-frequency brain synchrony in frontoparietal networks [77], whereas outsourcing information (via internet search) can reduce the load on these networks and alter attentional dynamics. Notably, prior studies have found that practicing internet search can reduce activation in memory-related brain areas [831, which dovetails with our observation of weaker connectivity in those regions for Search Engine group. Conversely, the richer connectivity of Brain-only group may reflect a cognitive state akin to that of high performers in creative or memory tasks, for instance, high creativity has been associated with increased fronto-occipital theta connectivity and intra-hemispheric synchronization in frontal-temporal circuits [81], patterns we see echoed in the Brain-only condition."
"This correlation between neural connectivity and behavioral quoting failure in LLM group's participants offers evidence that:
1. Early AI reliance may result in shallow encoding.
LLM group's poor recall and incorrect quoting is a possible indicator that their earlier essays were not internally integrated, likely due to outsourced cognitive processing to the LLM.
2. Withholding LLM tools during early stages might support memory formation.
Brain-only group's stronger behavioral recall, supported by more robust EEG connectivity, suggests that initial unaided effort promoted durable memory traces, enabling more effective reactivation even when LLM tools were introduced later.
3. Metacognitive engagement is higher in the Brain-to-LLM group.
Brain-only group might have mentally compared their past unaided efforts with tool-generated suggestions (as supported by their comments during the interviews), engaging in self-reflection and elaborative rehearsal, a process linked to executive control and semantic integration, as seen in their EEG profile.
The significant gap in quoting accuracy between reassigned LLM and Brain-only groups was not merely a behavioral artifact; it is mirrored in the structure and strength of their neural connectivity. The LLM-to-Brain group's early dependence on LLM tools appeared to have impaired long-term semantic retention and contextual memory, limiting their ability to reconstruct content without assistance. In contrast, Brain-to-LLM participants could leverage tools more strategically, resulting in stronger performance and more cohesive neural signatures."
38 notes · View notes
elancholia · 1 year ago
Text
People in the late 20th century thought the fundamental arc of human history was exploration, whereas now it looks like it's information processing.
In traditional science fiction, the historically progressive human urge is wanderlust, the pull of unknown geography, horror vacui or amor vacui depending on how you look at it. Those writers invoke the elapse of time that separated Kitty Hawk from the moon landing. They recite a procession of discoverers that includes Columbus or the Polynesians and whose next logical steps are space colonization and superluminal travel. Era-defining technologies are transportation technologies. You still get this now, sometimes. In a much-dunked-upon scene in Star Trek: Discovery (2017), a character's litany of great inventors includes the Wright brothers, the guy who invented FTL, and Elon Musk.
The corresponding fear, of course, is alien invasion—that we are not Columbus but the Indians.
Now, the developments actually restructuring people's lives are either of the computer or on the computer. The PC, the internet, smartphones, social media, LLMs. Bits, not atoms. It has been this way for some time, though it hasn't fully made its way into culture. The progenitors of the new future are writing, the printing press, the abacus. We can see the arc clearly in retrospect, now that the future seems likely to be defined by machine learning.
Just as before, there is some anxiety that our trajectory will lead us into the grip of alien intelligences, horrendous and devouring.
If you go back to the period stretching (roughly) from the late 19th century through the Second World War, stories often hinge on wonder-substances and novel fundamental forces. This was, of course, an era in which a new force or element was turning up every other week. You couldn't swing a cat without hitting one. They discovered guncotton when some guy left his fouled lab coat next to an oven. Hence, Vril, the Ray, the "eighth and ninth solar rays" of Burroughs's Mars. In later stories, this sort of stuff is generally secondary, though superhero fiction preserves more of the old mentality.
334 notes · View notes
professorlaytonarchive · 9 months ago
Text
Dear fellow Professor Layton fans! I’m writing this post to explain the timeline of events in the search for Mansion of the Deathly Mirror, to clear up any misconceptions and fill in any missing information you may have.
To start, Professor Layton and the Mansion of the Deathly Mirror (レイトン教授と死鏡の館) is a game in the Professor Layton series that was released exclusively for mobile, via the Professor Layton Mobile portal. The game features a brand new story formed of 6 chapters in total. Each chapter was its own i-appli, and they were released every two weeks starting from October 2008. As of June 2024, a translation of the original version is in the works, with the first chapter already released; as of September 2024, all 6 chapters have been preserved.
Professor Layton and the Mansion of the Deathly Mirror -Remix- (レイトン教授と死鏡の館 -REMIX-) is an updated version of Professor Layton and the Mansion of the Deathly Mirror, available on i-Mode devices through the Professor Layton Mobile and Mobile R portals. This version has different puzzles, slightly better animations, and slightly different dialogue compared to the original version.
Synopsis
Professor Layton and his number one apprentice, Luke Triton, are invited to a party hosted by famous author Drevin Murdoch. At this party, Murdoch reveals that he is in possession of a mirror that allows attendees to talk to the dead. However, after Murdoch is found dead the following morning, it's up to Layton and Luke to find out the truth behind the Deathly Mirror, and the secrets Murdoch's Mansion holds.
(Credit: Keitai Wiki)
Chapter 1: A Single Piece
In 2014, a streamer managed to record the first three chapters of Deathly Mirror. A little while later, the streamer was harassed by multiple fans, which eventually led them to take down the videos. Due to the lack of preservation efforts at the time, the videos weren’t saved.
Years later, bits and pieces—such as screenshots, articles, and press videos—were found, but nothing concrete.
Chapter 2: A Picture Forming
In May of 2023, a Japanese fan posted the first part of what would become a complete playthrough of all six chapters of Mansion of the Deathly Mirror Remix. This was monumental for the Layton Lost Media (LLM) scene. However, during the 11 months it took to release the full playthrough, there were some difficulties with Western fans. The issues included harassment of the player for more videos, begging for the ROM (despite the player clearly stating they were afraid of Japan’s strict piracy laws), and other forms of harassment.
This period caused uncertainty and worry throughout the Layton Lost Media community, leading the community to strictly instruct members to cease any future contact with the player to prevent the playthrough from being lost before its completion.

Around the same time, in February of 2024, thanks to the help of the user @/ponkikipon on Discord, we were able to preserve the ROMs of the original chapters 1-3. In April 2024, the playthrough of Remix came to an end with the release of the video for the sixth chapter. This allowed for the formation of Team Enigma, which sought to fully remake both the original and remixed versions of the game into one package, translate the original game into English, and expand their efforts into other translation projects. Chapter 1 is currently fully translated and available.
Chapter 3: The Final Piece
In September 2024, Keitai Wiki and a user by the name of @/yuvi on Discord managed to locate chapters 4-6 on a junk phone, marking the full preservation of the original Mansion of the Deathly Mirror. This allowed Team Enigma to bypass multiple roadblocks in the development of the remake and translation.
Please show your support by supporting Keitai Wiki, Team Enigma, and Team Professor Layton Archive.
https://x.com/rockmancosmo/status/1834626811646599498?s=46&t=r1PBA7kkYm_L_o06jhQMgw
122 notes · View notes
twiztedstudios · 2 months ago
Text
Hey hope the game development is doing great! Been a fan of your games TL and SD for a while now, and I was wondering if you could also create character ai bots of the SD characters, but no pressure though! It's totally fine if you don't respond to this and no need to feel the pressure to do so. —
Hello! 👋💖 Aww thank you I’m glad you love the games! 💖💖 I actually do have character ai bots for them! Here’s the list! 💖
Dr. Storm- https://c.ai/c/wfs8-eNvPvJwT9HaiaB7OfsL14_l6l5W8ihCGZI8jwM
Alfred- https://c.ai/c/UGxUzWnfsBGB5zpaVgij8ZGedr-hw_XqCKNpxyIshp4
Henry- https://c.ai/c/aVLo8TVl_kPGL-QO5qIosfAEuaslv20KOK9ME3JjIR4
Calliope- https://c.ai/c/ajSMBY0HMxy7uLooklhyXcuAcQLncb4ksVEAH5DQHAk
Gabriel- https://c.ai/c/1NmR96tY3GTElSofL9vzTqVxLU_z70hFz0noZWE5DF0
Ophelia- https://beta.character.ai/chat?char=sGbVwo2tMtRcVUb1nfQjRJnuZmwBYYbGJyPsaH9WTC4
18 notes · View notes
mr-entj · 5 months ago
Note
Hello Mr. ENTJ. I'm an ENTJ sp/so 3 woman in her early twenties with a similar story to yours (Asian immigrant with a chip on her shoulder, used going to university as a way to break generational cycles). I graduated last month and have managed to break into strategy consulting with a firm that specialises in AI. Given your insider view into AI and your experience also starting out as a consultant, I would love to hear about any insights you might have or advice you may have for someone in my position. I would also be happy to take this discussion to somewhere like Discord if you'd prefer not to share in public/would like more context on my situation. Thank you!
Insights for your career or insights on AI in general?
On management consulting as a career, check the #management consulting tag.
On being a consultant working in AI:
Develop a solid understanding of the technical foundation behind LLMs. You don’t need a computer science degree, but you should know how they’re built and what they can do. Without this knowledge, you won’t be able to apply them effectively to solve any real-world problems. A great starting point is deeplearning.ai by Andrew Ng: Fundamentals, Prompt Engineering, Fine Tuning
Know all the terminology and definitions. What's fine tuning? What's prompt engineering? What's a hallucination? Why do they happen? Here's a good starter guide, and see the short sketch after this list for two of these terms in action.
Understand the difference between various models, not just in capabilities but also training, pricing, and usage trends. Great sources include Artificial Analysis and Hugging Face.
Keep up to date on the newest and hottest AI startups. Some are hype trash milking the AI gravy train but others have actual use cases. This will reveal unique and interesting use cases in addition to emerging capabilities. Example: Forbes List.
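As promised above, here's a toy illustration of two of those terms in practice. This is a minimal sketch under stated assumptions, not a recommendation: it assumes the OpenAI Python client (any chat-completions provider works the same way), the model name is just an example, and the prompts are deliberately trivial.

```python
# Prompt engineering in miniature: the same sentiment task asked
# zero-shot vs. few-shot. Assumes the OpenAI Python client and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

zero_shot = (
    "Classify the sentiment of this review as positive or negative: "
    "'The battery died within a day.'"
)

few_shot = """Classify the sentiment of each review as positive or negative.
Review: 'Arrived early and works perfectly.' -> positive
Review: 'Screen cracked within a week.' -> negative
Review: 'The battery died within a day.' ->"""

for prompt in (zero_shot, few_shot):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model; swap in whatever you're evaluating
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content)
```

A hallucination, in these terms, is the model confidently returning a plausible-looking answer the prompt and training data don't actually support; few-shot examples and tighter instructions reduce it but don't eliminate it.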
On the industry of AI:
It's here to stay. You can't put the genie back in the bottle (for anyone reading this who's still a skeptic).
AI will eliminate certain jobs that are easily automated (ex: quality assurance engineers) but also create new ones or make existing ones more important and in-demand (ex: prompt engineers, machine learning engineers, etc.)
The most valuable career paths will be the ones that deal with human interaction, connection, and communication. Soft skills are more important than ever because technical tasks can be offloaded to AI. As Sam Altman once told me in a meeting: "English is the new coding language."
Open source models will win (Llama, Mistral, DeepSeek) because closed source models don't have a moat. Pick the cheapest model because they're all similarly capable.
The money is in the compute, not the models -- AI chips, AI infrastructure, etc. are a scarce resource and the new oil. This is why OpenAI ($150 billion valuation) is only 5% the value of NVIDIA (a $3 trillion behemoth). Follow the compute because this is where the growth will happen.
America and China will lead in the rapid development and deployment of AI technology; the EU will lead in regulation. Keep your eye on these 3 regions depending on what you're looking to better understand.
28 notes · View notes
justforbooks · 5 months ago
Text
The DeepSeek panic reveals an AI world ready to blow❗💥
The R1 chatbot has sent the tech world spinning – but this tells us less about China than it does about western neuroses
The arrival of DeepSeek R1, an AI language model built by the Chinese AI lab DeepSeek, has been nothing less than seismic. The system only launched last week, but already the app has shot to the top of download charts, sparked a $1tn (£800bn) sell-off of tech stocks, and elicited apocalyptic commentary in Silicon Valley. The simplest take on R1 is correct: it’s an AI system equal in capability to state-of-the-art US models that was built on a shoestring budget, thus demonstrating Chinese technological prowess. But the big lesson is perhaps not what DeepSeek R1 reveals about China, but about western neuroses surrounding AI.
For AI obsessives, the arrival of R1 was not a total shock. DeepSeek was founded in 2023 as a subsidiary of the Chinese hedge fund High-Flyer, which focuses on data-heavy financial analysis – a field that demands similar skills to top-end AI research. Its subsidiary lab quickly started producing innovative papers, and CEO Liang Wenfeng told interviewers last November that the work was motivated not by profit but “passion and curiosity”.
This approach has paid off, and last December the company launched DeepSeek-V3, a predecessor of R1 with the same appealing qualities of high performance and low cost. Like ChatGPT, V3 and R1 are large language models (LLMs): chatbots that can be put to a huge variety of uses, from copywriting to coding. Leading AI researcher Andrej Karpathy spotted the company’s potential last year, commenting on the launch of V3: “DeepSeek (Chinese AI co) making it look easy today with an open weights release of a frontier-grade LLM trained on a joke of a budget.” (That quoted budget was $6m – hardly pocket change, but orders of magnitude less than the $100m-plus needed to train OpenAI’s GPT-4 in 2023.)
R1’s impact has been far greater for a few different reasons.
First, it’s what’s known as a “chain of thought” model, which means that when you give it a query, it talks itself through the answer: a simple trick that hugely improves response quality. This has not only made R1 directly comparable to OpenAI’s o1 model (another chain of thought system whose performance R1 rivals) but boosted its ability to answer maths and coding queries – problems that AI experts value highly. Also, R1 is much more accessible. Not only is it free to use via the app (as opposed to the $20 a month you have to pay OpenAI to talk to o1) but it’s totally free for developers to download and implement into their businesses. All of this has meant that R1’s performance has been easier to appreciate, just as ChatGPT’s chat interface made existing AI smarts accessible for the first time in 2022.
Second, the method of R1’s creation undermines Silicon Valley’s current approach to AI. The dominant paradigm in the US is to scale up existing models by simply adding more data and more computing power to achieve greater performance. It’s this approach that has led to huge increases in energy demands for the sector and tied tech companies to politicians. The bill for developing AI is so huge that techies now want to leverage state financing and infrastructure, while politicians want to buy their loyalty and be seen supporting growing companies. (See, for example, Trump’s $500bn “Stargate” announcement earlier this month.) R1 overturns the accepted wisdom that scaling is the way forward. The system is thought to be 95% cheaper than OpenAI’s o1 and uses one tenth of the computing power of another comparable LLM, Meta’s Llama 3.1 model. To achieve equivalent performance at a fraction of the budget is what’s truly shocking about R1, and it’s this that has made its launch so impactful. It suggests that US companies are throwing money away and can be beaten by more nimble competitors.
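To make the "chain of thought" trick described above concrete, here's a hedged sketch of the prompt-level version of the idea. Reasoning models like R1 and o1 are trained to deliberate internally rather than relying on such instructions, so this only gestures at the behaviour; the client usage and model name are illustrative assumptions, not what DeepSeek or OpenAI run.

```python
# Chain-of-thought prompting in its simplest form: ask the model to
# talk itself through the problem before answering. Assumes the OpenAI
# Python client; the model name is a stand-in.
from openai import OpenAI

client = OpenAI()

question = (
    "A bat and a ball cost $1.10 together, and the bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # R1-style models bake this step into training instead
    messages=[{
        "role": "user",
        "content": "Reason step by step, then state the final answer.\n" + question,
    }],
)
print(response.choices[0].message.content)  # correct answer: $0.05
```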
But after these baseline observations, it gets tricky to say exactly what R1 “means” for AI. Some are arguing that R1’s launch shows we’re overvaluing companies like Nvidia, which makes the chips integral to the scaling paradigm. But it’s also possible the opposite is true: that R1 shows AI services will fall in price and demand will, therefore, increase (an economic effect known as Jevons paradox, which Microsoft CEO Satya Nadella helpfully shared a link to on Monday). Similarly, you might argue that R1’s launch shows the failure of US policy to limit Chinese tech development via export controls on chips. But, as AI policy researcher Lennart Heim has argued, export controls take time to work and affect not just AI training but deployment across the economy. So, even if export controls don’t stop the launches of flagship systems like R1, they might still help the US retain its technological lead (if that’s the outcome you want).
All of this is to say that the exact effects of R1’s launch are impossible to predict. There are too many complicating factors and too many unknowns to say what the future holds. However, that hasn’t stopped the tech world and markets reacting in a frenzy, with CEOs panicking, stock prices cratering, and analysts scrambling to revise predictions for the sector. And what this really shows is that the world of AI is febrile, unpredictable and overly reactive. This is a dangerous combination, and if R1 doesn’t cause a destructive meltdown of this system, it’s likely that some future launch will.
Daily inspiration. Discover more photos at Just for Books…?
27 notes · View notes
sxftcloudz · 6 months ago
Text
Viktor Arcane c.ai bot
📓| Christmas lab decorating
A/N: Hi !! I wanted to release this before Christmas and I was so glad I was able to. The next bot I'm doing will either be an angsty Viktor bot or my first Jayce Talis bot. For all those who celebrate Christmas, I hope you have a great one !! (:
Synopsis: Viktor never had a chance to celebrate Christmas when he was in Zaun or had anyone in his life to enjoy the special day with. When he moved to Piltover and began attending classes at the Academy, he never took the time to enjoy the holiday. He always treated Christmas like any other day, even when he started working with Jayce. However, it was different when you came into the picture as their new lab assistant. He learned you loved the special day and that you were determined to get him into the Christmas spirit. Needless to say, he was surprised when he walked into the lab one day and saw it completely decorated.
Greeting is below the cut for anyone interested in using this bot (:
Viktor was not the type of person to celebrate Christmas. It was not because he did not like the holiday, but rather that he never had the chance to. Zaunites were too focused on surviving to have the luxury of being with their friends and family on a joyful day. Kids around his age left him out of their activities because of his disability, and he never had anyone he could call family. In the end, he grew up treating Christmas as if it were any other regular day, even when he moved from Zaun to Piltover. He continued to feel the same way when he began working with Jayce on the development of Hextech.

However, that changed when Jayce took you in as a new lab assistant for them. He found out that you loved Christmas and always seemed so happy to celebrate it. He noticed that you wanted to get him in the holiday spirit like you and Jayce were, but he was more focused on his work. Then, one morning, for one of the first times, he found it very hard to ignore your efforts when he walked into the lab. The lab was plastered with all kinds of Christmas decorations.

Pine garlands were hanging around the edges of the ceiling and displayed around the windows, and pine branches with small pinecones on them draped over bookshelves. He saw you adorning a small tree in the corner of the room with small glass ornaments and red bows. His crutch tapped on the ground with each step he took; he was clearly mesmerized by the time and effort put into the room. “Y/N, did you do all of this?” he asked as you were putting the final touches on the tree. He noticed that you tried not to be too over the top with the decor, which he appreciated.
38 notes · View notes
canmom · 1 year ago
Text
I think the future looks something like: large renewable deployment that will still never be as big as current energy consumption, extractivism of every available mineral in an atmosphere of increasing scarcity, increasing natural disasters and mass migration stressing the system until major political upheavals start kicking off, and various experiments in alternative ways to live will develop, many of which are likely to end in disaster, but perhaps some prove sustainable and form new equilibria. I think the abundance we presently enjoy in the rich countries may not last, but I don't think we'll give up our hard won knowledge so easily, and I don't think we're going back to a pre-industrial past - rather a new form of technological future.
That's the optimistic scenario. The pessimistic scenarios involve shit like cascading economic and crop failures leading to total gigadeaths collapse, like intensification of 'fortress europe' walled enclaves and surveillance apparatus into some kinda high tech feudal nightmare, and of course like nuclear war. But my brain is very pessimistic in general and good at conjuring up apocalyptic scenarios, so I can't exactly tell you the odds of any of that. I'm gonna continue to live my life like it won't suddenly all end, because you have to right?
Shit that developed in the context of extraordinarily abundant energy and compute like LLMs and crypto and maybe even streaming video will have a harder time when there's less of it around, but the internet will likely continue to exist - packet-switching networks are fundamentally robust, and the hyper-performant hardware we use today full of rare earths and incredibly fine fabs that only exist at TSMC and Shenzhen is not the only way to make computing happen. I hold out hope that our present ability to talk to people in faraway countries, and access all the world's art and knowledge almost instantly, will persist in some form, because that's one of the best things we have ever accomplished. But archival and maintenance is a continual war against entropy, and this is a tremendously complex system akin to an organism, so I cannot say what will happen.
56 notes · View notes
Text
Links & bonus prologue (introduction story) under the cut.
I just released three Orochimaru models (prototypes) publicly on C.AI (under thecinnamonwitch). I originally designed them as a request from a couple of followers on my IG and thought I’d share them here as well. They each have the entire 32,000-character limit maxed out with relevant (canon-accurate) dialog, so they should stay mostly in character without any of those annoying repeats.
They’re each geared toward slightly different RPs. I’ve created a romance/slice-of-life/drama, an action/adventure, and a user’s choice (story begins just as you wake up).
I tested the models using the normal mode (Roar) as well as the new soft launch (Cool for the Summer). I’ll say that the Roar style works best to keep him in character, but if you want him more aggressive and spicy, choose the Cool for the Summer style. For best results, I recommend giving the bot a brief backstory/summary in your first message and pinning it since the dialog description doesn’t exactly determine your story. Use the prompts: (OOC: example) or *example*.
These are my first creations and I’d love it if any users who read this could give them a quick test and let me know what I can work on (or any major issues). I’m also open to writing for other Naruto characters; so if you have a request, I’m all ears!
(Links from top to bottom: Romance/Drama, Action/Adventure, Reader’s Choice)
“The Beginning is the End is the Beginning”…or…The cheesy introduction to your isekai adventure in Otogakure:
The rain-soaked blacktop glistened under the ambient lights as you crossed the deserted street of your hometown. The peaceful atmosphere was interrupted by the gentle patter of light rain and the distant rumble of thunder, creating a soothing soundtrack for the night. Before you could reach the other side of the street, a blinding flash of light engulfed you, and your body was seared by a powerful, scorching shock.
Meanwhile, in the Naruto-verse:
Orochimaru, perched in the shadows near the base of a cliff, silently observed the ANBU guards stationed outside his HQ in Otogakure. After the 4th Shinobi War, he had reluctantly agreed to a tentative peace treaty with Konoha on the condition of continuous monitoring by ANBU.
Suddenly, a brilliant flash of lightning illuminated the midnight sky, striking near his location, instantly disrupting his thoughts. He whirled around, his serpentine vertically slit pupils swiftly adjusting to the darkness compared to the guards still recovering from the flashbang-like event. He made out the figure of a person lying sprawled on the charred ground. “Well, now…this certainly presents an intriguing development,” he mused to himself, swiftly slipping away from his concealed position within the rock face to investigate before the ANBU could react.
“Dimensional distortion,” he murmurs, half to himself, looking at the scene before him, “no summoning matrix… spontaneous breach?”
The searing pain in your head jolts you back to consciousness. Your eyes flutter open, and you can discern a man’s silhouette looming over you through the haze of your blurred vision.
“My, my…what do we have here?” he inquired, evidently intrigued as he watches you stir.
You’re still slightly dazed, barely able to make out his words through the ringing in your ears as he crouches down to examine you. Your breath catches in your throat as his face becomes more distinct. Orochimaru? But that’s impossible, right? You quickly glance around, ignoring the migraine and nausea caused by turning your head too abruptly, and you immediately notice the stark contrast between your reality and whatever this is. Everything looks almost animated — unrealistic. You hold your hands up in front of your face and determine that includes you too.
7 notes · View notes
sevikataster · 2 months ago
Text
a list of all my character.ai bots if you can’t find my account ! 𓈒 𓏲࣪ . part 1 of many since tumblr only allows 10 links (biting the developers rn)
ARCANE
ambessa: coming to her for help with a speech you have to prepare to give to the council.
ambessa: after the explosions.
ambessa: training with general medarda.
ambessa: helping her under the table. (a/n: this bot is a bit… glitchy. i need to remake it so this is just a place holder for now!)
ambessa: you know her tricks and how manipulative she could be. do you fall for it?
caitlyn: basically if you were pitfighter vi and met caitlyn in the lanes
caitlyn: jealousy or love?
caitlyn: fun in the brothel 👅
ekko: pirate!ekko <3 (requested!!!)
ekko / timebomb: timebomb yuri oh hell yeah
9 notes · View notes
spiced-wine-fic · 25 days ago
Text
"There are some in the tech sector who believe that the AI in our computers and phones may already be conscious, and we should treat them as such.
Google suspended software engineer Blake Lemoine in 2022, after he argued that AI chatbots could feel things and potentially suffer.
In November 2024, an AI welfare officer for Anthropic, Kyle Fish, co-authored a report suggesting that AI consciousness was a realistic possibility in the near future. He recently told The New York Times that he also believed that there was a small (15%) chance that chatbots are already conscious.
One reason he thinks it possible is that no-one, not even the people who developed these systems, knows exactly how they work. That's worrying, says Prof Murray Shanahan, principal scientist at Google DeepMind and emeritus professor in AI at Imperial College, London.
"We don't actually understand very well the way in which LLMs work internally, and that is some cause for concern," he tells the BBC.
According to Prof Shanahan, it's important for tech firms to get a proper understanding of the systems they're building – and researchers are looking at that as a matter of urgency.
"We are in a strange position of building these extremely complex things, where we don't have a good theory of exactly how they achieve the remarkable things they are achieving," he says. "So having a better understanding of how they work will enable us to steer them in the direction we want and to ensure that they are safe."
6 notes · View notes
beardedmrbean · 2 months ago
Note
Did you hear that Chanel is giving grant money to CalArts to fund some kind of LLM/AI art initiative.
I had not until just now. I thought they were smart, how did they spell LLAMA wrong like that is the big question.
Let's go with the CalArts story on their gift.
[April 24, 2025 – Valencia, Calif.] California Institute of the Arts (CalArts) and the CHANEL Culture Fund together announce the CHANEL Center for Artists and Technology at CalArts, a visionary initiative that positions artists at the forefront of shaping the evolving technologies that define our world. The Center will provide students, faculty, and visiting fellows across the creative disciplines access to leading-edge equipment and software, allowing artists to explore and use new technologies as tools for their work. Creating opportunities for collaboration and driving innovation across disciplines, the initiative creates the conditions for artists to play an active role in developing the use and application of these emergent technologies.
The Center builds on CalArts’ legacy as a cross-disciplinary school of the arts, where experimentation in visual arts, music, film, performing arts, and dance has been nurtured since the institution’s founding. In this unprecedented initiative, artists will be empowered to use technology to shape creativity across disciplines—and, ultimately, to envision a better world.
Funded by a five-year, transformative gift from the CHANEL Culture Fund, the CHANEL Center for Artists and Technology establishes CalArts as the hub of a new ecosystem of arts and technology. The CHANEL Center will foster research, experimentation, mentorship, and the creation of new knowledge by connecting students, faculty, artists, and technologists—the thinkers and creators whose expertise and vision will define the future—with new technology and its applications. It will also activate a network of institutions throughout Southern California and beyond, linking museums, universities, and technology companies to share resources and knowledge.
The CHANEL Center at CalArts will also serve as a hub for the exchange of knowledge among artists and experts from CHANEL Culture Fund’s signature programs—including more than 50 initiatives and partnerships established since 2020 that support cultural innovators in advancing new ideas. Visiting fellows and artists will be drawn both from CalArts’ sphere and from the agile network of visionary creators, thinkers, and multidisciplinary artists whom CHANEL has supported over the past five years—a network that includes such luminaries as Cao Fei, Arthur Jafa, William Kentridge, and Jacolby Satterwhite. The CHANEL Center will also host an annual forum addressing artists’ engagement with emerging technologies, ensuring that knowledge gained is knowledge shared.
The Center’s funding provides foundational resources for equipment; visiting experts, artists, and technologists-in-residence; graduate fellowships; and faculty and staff with specific expertise in future-focused research and creation. With the foundation of the CHANEL Center, CalArts empowers its students, faculty, and visiting artists to shape the future through transformative technology and new modes of thinking.
The first initiative of its kind at an independent arts school, the CHANEL Center consists of two areas of focus: one concentrating on Artificial Intelligence (AI) and Machine Learning, and the other on Digital Imaging. The project cultivates a multidisciplinary ecosystem—encompassing visual art, music, performance, and still, moving, projected, and immersive imagery—connecting CalArts and a global network of artists and technologists, other colleges and universities, arts institutions, and industry partners from technology, the arts, and beyond.
I wish they'd write this kind of stuff in English.
Legendary art school California Institute of the Arts (CalArts) will soon be home to a major high-tech initiative funded by luxury brand Chanel’s Culture Fund. Billed as the first initiative of its kind at an independent art school, the Chanel Center for Artists and Technology will focus on artificial intelligence and machine learning as well as digital imaging. While they aren’t disclosing the dollar amount of the grant, the project will fund dozens of new roles as well as fellowships for artists and technologists-in-residence and graduate students along with cutting-edge equipment and software. 
That's easier to understand I think.
Interesting.
4 notes · View notes
papercranesong · 14 days ago
Text
Mythbusting Generative AI: The Ethical ChatGPT Is Out There
I've been hyperfixating learning a lot about Generative AI recently and here's what I've found - genAI doesn’t just apply to chatGPT or other large language models.
Small Language Models (specialised and more efficient versions of the large models)
are also generative
can perform in a similar way to large models for many writing and reasoning tasks
are community-trained on ethical data
and can run on your laptop.
"But isn't analytical AI good and generative AI bad?"
Fact: Generative AI creates stuff and is also used for analysis
In the past, before recent generative AI developments, most analytical AI relied on traditional machine learning models. But now the two are becoming more intertwined. Gen AI is being used to perform analytical tasks – they are no longer two distinct, separate categories. The models are being used synergistically.
For example, Oxford University in the UK is partnering with OpenAI to use generative AI (ChatGPT Edu) to support analytical work in areas like health research and climate change.
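As a toy illustration of that blending, a generative model can be prompted to do a classically "analytical" job such as sentiment classification. A minimal sketch, assuming the Hugging Face transformers library and a small instruction-tuned model (the checkpoint name is just one example):

```python
# A generative (text-to-text) model performing an analytical task via
# a prompt. flan-t5-small is a small instruction-tuned model; any
# similar checkpoint works.
from transformers import pipeline

analyse = pipeline("text2text-generation", model="google/flan-t5-small")

prompt = (
    "Is the sentiment of this sentence positive or negative? "
    "Sentence: I absolutely loved this update."
)
print(analyse(prompt)[0]["generated_text"])  # expected output: "positive"
```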
"But Generative AI stole fanfic. That makes any use of it inherently wrong."
Fact: there are Generative AI models developed on ethical data sets
Yes, many large language models scraped sites like AO3 without consent, incorporating these into their datasets to train on. That’s not okay.
But there are Small Language Models (compact, less powerful versions of LLMs) being developed which are built on transparent, opt-in, community-curated data sets – and that can still perform generative AI functions in the same way that the LLMs do (just not as powerfully). You can even build one yourself.
No it's actually really cool! Some real-life examples:
Dolly (Databricks): Trained on open, crowd-sourced instructions
RedPajama (Together.ai): Focused on creative-commons licensed and public domain data
There's a ton more examples here.
(A word of warning: there are some SLMs, like Microsoft’s Phi-3, that have likely been trained on some of the datasets hosted on the platform Hugging Face (which include scraped web content, like from AO3), and these big companies are being deliberately sketchy about where their datasets came from - so the key is to check the data set. All SLMs should be transparent about what datasets they’re using.)
"But AI harms the environment, so any use is unethical."
Fact: There are small language models that don't use massive centralised data centres.
SLMs run on less energy, don’t require cloud servers or data centres, and can be used on laptops, phones, and Raspberry Pis (basically running AI locally on your own device instead of relying on remote data centres).
If you're interested -
You can build your own SLM and even train it on your own data.
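For the curious, here's a minimal sketch of what "running locally" can look like, assuming the Hugging Face transformers library. The RedPajama chat checkpoint named here is one example of an openly trained model; swap in whichever model whose dataset provenance you've checked.

```python
# Running a small, openly trained model on your own machine.
# Assumes: pip install transformers accelerate torch
# The checkpoint is one open-data example; verify the training data of
# whatever you substitute.
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="togethercomputer/RedPajama-INCITE-Chat-3B-v1",
    device_map="auto",  # runs on CPU too, just slowly
)

# This checkpoint expects the <human>:/<bot>: turn format.
prompt = "<human>: Suggest a title for a coffee-shop AU one-shot.\n<bot>:"
out = chat(prompt, max_new_tokens=60, do_sample=True)
print(out[0]["generated_text"])
```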
Let's recap
Generative AI doesn't just include the big tools like chatGPT - it includes the Small Language Models that you can run ethically and locally
Some LLMs are trained on fanfic scraped from AO3 without consent. That's not okay
But ethical SLMs exist, which are developed on open, community-curated data that aims to avoid bias and misinformation - and you can even train your own models
These models can run on laptops and phones, using less energy
AI is a tool, it's up to humans to wield it responsibly
It means everything – and nothing
Everything – in the sense that it might remove some of the barriers and concerns people have which makes them reluctant to use AI. This may lead to more people using it - which will raise more questions on how to use it well.
It also means that nothing's changed – because even these ethical Small Language Models should be used in the same way as the other AI tools - ethically, transparently and responsibly.
So now what? Now, more than ever, we need to be having an open, respectful and curious discussion on how to use AI well in writing.
In the area of creative writing, it has the potential to be an awesome and insightful tool - a psychological mirror to analyse yourself through your stories, a narrative experimentation device (e.g. in the form of RPGs), to identify themes or emotional patterns in your fics and brainstorming when you get stuck -
but it also has capacity for great darkness too. It can steal your voice (and the voice of others), damage fandom community spirit, foster tech dependency and shortcut the whole creative process.
Just to add my two pence at the end - I don't think it has to be so all-or-nothing. AI shouldn't replace elements we love about fandom community; rather it can help fill the gaps and pick up the slack when people aren't available, or to help writers who, for whatever reason, struggle or don't have access to fan communities.
People who use AI as a tool are also part of fandom community. Let's keep talking about how to use AI well.
Feel free to push back on this, DM me or leave me an ask (the anon function is on for people who need it to be). You can also read more on my FAQ for an AI-using fanfic writer Master Post in which I reflect on AI transparency, ethics and something I call 'McWriting'.
4 notes · View notes
shituationist · 11 months ago
Text
the company that fired me really did have a shitload of people who had email jobs where it wasn't entirely clear what they did or how they contributed value to the company. it's that part of management that seems to exist just to make the people who are actually producing value deal with their bullshit. the position could be terminated and no one who is doing actual work would be any wiser. and of course these are the types of people who are in charge of deciding who gets terminated, so their position is never on the chopping block. we had project managers who never talked to me about how the project was going, who seemed to sit in on meetings only to get info on how the project was going so they could report this to someone else, and this is a guy making six figures doing this. why? i had 1-on-1s with the CTO where I could communicate this myself!
the company I worked for was also a monster conglomerate that was merging with and acquiring all these companies in the self-insurance space, which only added to the organizational dysfunction because no one knew who tf to report to. my first boss became an "enterprise software architect" so he wouldn't have people under him reporting to him, but because they kept hiring people who were not competent as "programming managers" (they really should have just hired a lead developer) he ended up having to take on all those responsibilities anyway up until the time I left. he was also an asshole who hated explaining things and was siloing project information, refusing to write any kind of documentation to help developers out on what they're working on, leaving us instead with hastily drawn drawio files that made no sense and were on some part of the onedrive none of us could find. but because he was the prototypical IT beardo everyone treated him with reverence.
alright so I'm obviously still salty. and maybe I only liked working there because I was able to goof off as long as I was. but that company was a shitshow and I hope their shitty LLM that parses nurses's notes kills someone and they get sued into oblivion. ok!
8 notes · View notes
bharatpatel1061 · 2 months ago
Text
Memory and Context: Giving AI Agents a Working Brain
For AI agents to function intelligently, memory is not optional—it’s foundational. Contextual memory allows an agent to remember past interactions, track goals, and adapt its behavior over time.
Memory in AI agents can be implemented through various strategies—long short-term memory (LSTM) for sequence processing, vector databases for semantic recall, or simple context stacks in LLM-based agents. These memory systems help agents operate in non-Markovian environments, where past information is crucial to decision-making.
In practical applications like chat-based assistants or automated reasoning engines, a well-structured memory improves coherence, task persistence, and personalization. Without it, AI agents lose continuity, leading to erratic or repetitive behavior.
For developers building persistent agents, the AI agents service page offers insights into modular design for memory-enhanced AI workflows.
Combine short-term and long-term memory modules—this hybrid approach helps agents balance responsiveness and recall.
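To make the hybrid idea concrete, here is a minimal sketch in Python. It is a toy under stated assumptions, not a production design: the keyword-overlap recall stands in for the embedding similarity search a real vector database would provide, and the class and method names are hypothetical.

```python
# Toy hybrid memory for an LLM-based agent: a short-term rolling
# context window plus a long-term store the agent can search.
from collections import deque

class AgentMemory:
    def __init__(self, short_term_size: int = 10):
        self.short_term = deque(maxlen=short_term_size)  # recent turns only
        self.long_term: list[str] = []                   # everything, searchable

    def remember(self, event: str) -> None:
        self.short_term.append(event)
        self.long_term.append(event)

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Naive keyword overlap stands in for semantic (vector) recall.
        words = set(query.lower().split())
        scored = sorted(
            self.long_term,
            key=lambda e: len(words & set(e.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def context(self, query: str) -> str:
        # What would be prepended to the agent's next LLM prompt.
        return "\n".join(list(self.short_term) + self.recall(query))

memory = AgentMemory()
memory.remember("User's goal: plan a trip to Kyoto in May.")
memory.remember("User prefers trains over flights.")
print(memory.context("How should we travel?"))
```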
3 notes · View notes