#How To Use Devin Ai
Text
Devin AI: the world’s ‘first fully autonomous’ AI software engineer
#Devin Site Ai#How To Use Devin Ai#What Is Devin AI Software?#Who Created Devin AI?#How Can I Become An AI Developer In India?#Devin AI
0 notes
Text
Hello! Welcome to the official Tumblr of Double Dead Studio, the solo dev behind Reanimated Heart, Another Rose in His Garden, and Pygmalion's Folly.
Reanimated Heart is a character-driven horror romance visual novel about finding love in a mysterious small town. There are three monstrous love interests with their own unique personalities and storylines.
Another Rose in His Garden is an 18+ erotic Omegaverse BL visual novel. Abel Valencia is an Omega who's hidden his secondary sex his entire life. Life's alright, until he meets the wealthy tycoon, Mars Rosales, and the two get embroiled in a sexual affair that changes his life forever.
Pygmalion's Folly is a survival murdersim where you play as Roxham Police Department's star detective, hellbent on finding your sister's killer... until he finds you.
Content Warning: All my games are 18+! They contain dark subject matter such as violence and sexual content. Player discretion is advised.
This blog is run by Jack, the creator.
Itch | Link Tree | Patreon | Twitter
Guidelines
My policy for fanwork is that anything goes in fiction, but respect my authority and copyright outside it. This means normal fan activity like taking screencaps, posting playthroughs, and making fanart/fanfiction is completely allowed, but selling this game or its assets isn't allowed (selling fanwork of it is fine, though). You are also not allowed to feed any of my assets to AI bots, period, even if it's free.
Do not use my stuff for illegal or hateful content.
Also, I expect everyone to respect the Content Warnings on the page. I'm old and do not tolerate fandom wank.
For more details about how I view Fanwork, refer to this post.
F.A.Q.
Who are the main Love Interests in Reanimated Heart?
Read their character profiles here!!
Who's the team?
Jack (creator, writer, artist), mostly. I closely work with Exodus (main programmer) and Claira (music composer). My husband edits the drafts.
For Reanimated Heart, my friend Bonny makes art assets. I've also gotten help from outsiders like Sleepy (prologue music + vfx) and my friend Gumjamin (main menu heart animation).
For Reanimated Heart's VOs, Alex Ross voices Crux, Devin McLaughlin voices Vincenzo, Christian Cruz voices Black, Maganda Marie voices Grete, and Zoe D. Lee voices Missy.
Basically, it's mostly just me & outsourcing stuff to my friends and professionals.
How can I support Double Dead Studio productions?
You can pay for the game, or join our monthly Patreon! If you don't have any money, just giving it a nice rating and recommending it to a friend is already good enough. :)
Where do the funds go to?
Almost 100% gets poured back into the game. More voice acting, more music, more trailers, more art, etc. I also like to give my programmer a monthly tip for helping me.
This game is really my insane passion project, and I want to make it better with community support.
I live in the Philippines and the purchasing power of php is not high, especially since many of the people I outsource to prefer USD. (One time I spent P10k of my own money in one month just to get things.) I'll probably still do that, even if no money comes in, until I'm in danger of getting kicked out the street… but maybe even then? (jk)
What platforms will Reanimated Heart be released on?
Itch and then Steam when it's fully finished. Still looking into other options, as I hear both are getting bad.
Will Reanimated Heart be free?
Chapter 1 will be free. The rest will be updated on Patreon exclusively until full release.
Are you doing a mobile version?
Yeah. Just Android for now, but it's in the works.
Where can I listen to Reanimated Heart's OST?
It is currently up on YouTube, Spotify, and Bandcamp!
Why didn't you answer my ask?
A number of things! Two big ones that keep coming up are Spoilers (as in, you asked something that will be put in an update) or it's already been asked. If you're really dying to know, check the character tags or the meta commentary. You might find what you're looking for there. :)
Will there be a sequel to Pygmalion's Folly?
It's not my first concern right now, but I am planning on it.
Tag List for Navigation
Just click the tags to get to where you wanna go!
#reanimated heart#updates#asks#official art#crux hertz#black lumaban#vincenzo maria fontana#grete braun#townies#fanwork#additional content#aesthetic#spoilers#lore#meta commentary#memes#horror visual novel#romance visual novel#yandere OC#prompts#another rose in his garden#abel valencia#mars rosales#florentin blanchett#pygmalion's folly
156 notes
Text
A trove of leaked internal messages and documents from the militia American Patriots Three Percent—also known as AP3—reveals how the group coordinated with election denial groups as part of a plan to conduct paramilitary surveillance of ballot boxes during the midterm elections in 2022.
This information was leaked to Distributed Denial of Secrets (DDoSecrets), a nonprofit that says it publishes hacked and leaked documents in the public interest. The person behind these AP3 leaks is an individual who, according to their statement uploaded by DDoSecrets, infiltrated the militia and grew so alarmed by what they were seeing that they felt compelled to go public with the information ahead of the upcoming presidential election.
Election and federal officials have already voiced concern about possible voter intimidation this November, in part due to the proliferation of politically violent rhetoric and election denialism. Some right-wing groups have already committed to conducting surveillance of ballot boxes remotely using AI-driven cameras. And last month, a Homeland Security bulletin warned that domestic extremist groups could plan on sabotaging election infrastructure including ballot drop boxes.
Devin Burghart, president and executive director of the Institute for Research and Education on Human Rights, says that AP3’s leaked plans for the 2022 midterms should be a warning for what may transpire next month. “Baseless election denial conspiracies stoking armed militia surveillance of ballot drop boxes is a dangerous form of voter intimidation,” Burghart tells WIRED. “The expansion of the election denial, increased militia activity, and growing coordination between them, is cause for serious concern heading into November. Now with voter suppression groups like True the Vote and some GOP elected officials targeting drop boxes for vigilante activity, the situation should be raising alarms.”
The leaked messages from 2022 show how AP3 and other militias provided paramilitary heft to ballot box monitoring operations organized by “The People’s Movement,” the group that spearheaded the 2021 anti-vaccine convoy protest, and Clean Elections USA, a group with links to the team behind the 2000 Mules film that falsely claimed widespread voter fraud. In the leaked chats, People’s Movement leader Carolyn Smith identifies herself as an honorary AP3 member.
AP3 is run by Scot Seddon, a former Army Reservist, Long Islander, and male model, according to a ProPublica profile on him published in August. That profile, which relied on the same anonymous infiltrator who leaked AP3’s internal messages to DDoSecrets, explains that AP3 escaped scrutiny in the aftermath of January 6 in part because Seddon, after spending weeks preparing his ranks to go to DC, ultimately decided to save his soldiers for another day. ProPublica reported that some members went anyway but were under strict instruction to forgo any AP3 insignia. According to the leaked messages, Seddon also directed his state leaders to participate in the “operation.”
“All of us have a vested interest in this nation,” Seddon said in a leaked video. “So all the state leaders should be getting their people out and manning and observing ballot boxes, to watch for ballot stuffing. This is priority. This goes against getting your double cheeseburger at mcdonalds … Our nation depends on this. Our future depends on this. This ain't no bullshit issue. We need to be tight on this, straight up.”
A flier using militaristic language shared across various state-specific Telegram channels lays out how this operation would work. With “Rules of Engagement” instructions, “Volunteers” are told not to interfere with anyone dropping off their ballots. If someone is suspected of dropping off “multiple ballots,” then observers are told to record the event, and make a note of the individual's appearance and their vehicle's license plate number. In the event of any sort of confrontation, they’re supposed to “report as soon as possible to your area Captain.”
“At the end of each shift, Patriots will prepare a brief report of activity and transmit it to [the ballot] box team Captain promptly,” the flier states.
The person who leaked these documents and messages says that these paramilitary observers masquerading as civilians will often have a QRF—quick reaction force—on standby who can “stage an armed response” should a threat or confrontation arise.
The goal of the “operation,” according to that flier, was to “Stop the Mules.”
“These are the individuals stuffing ballot boxes,” the flier says. “They are well trained and financed. There is a global network backing them up. They pick up fake ballots from phony non-profits and deliver them to ballot boxes, usually between 2400 hours and 0600 hours.” (This was the core conspiracy of 2,000 Mules; the conservative media company behind the film has since issued an apology for promoting election conspiracies and committed to halting its distribution).
Fears about widespread armed voter intimidation during the 2022 midterms—stemming from online chatter and warnings from federal agencies—never materialized in full. However, there were scattered instances of people showing up to observe ballot drops in Arizona. These individuals, according to the statement by the anonymous leaker in the DDoSecrets files, were not “lone wolves”—they were part of “highly organized groups who intended to influence the elections through intimidation.”
In one widely-publicized incident, two clearly armed people wearing balaclavas and tactical gear showed up in Mesa, Arizona, to conduct drop box surveillance. They were never identified, though a Telegram video on DDoSecrets shows AP3’s North Carolina chapter head Burley Ross admitting that one of them was part of his unit. Ross says that the individual was Elias Humiston, who had previously been conducting vigilante border surveillance. “I was well aware they were doing drop box observations,” said Ross. “I was not aware they were doing so in full kit.” Ross added that Humiston had since resigned from the group.
Seddon also addressed the ��little incident in Arizona,” stressing the importance of maintaining clean optics to avoid scrutiny. “We had pushed for helping to maintain election integrity through monitoring the ballot boxes,” said Seddon, in a video message on Telegram. “We never told anyone to do it like that.”
The militia movement largely retreated from public view in the aftermath of the January 6 riot at the US Capitol in 2021. The high-profile implication of the Oath Keepers in the riot, which at the time was America’s biggest militia, thrust the broader militia movement into the spotlight. Amid intense scrutiny, stigma, and creeping paranoia about federal informants, some militias rebanded or even disbanded. But as WIRED reporting has shown, after a brief lull in activity, the militia movement has been quietly rebuilding, reorganizing, and recruiting. With AP3 at its helm, it’s also been engaging in armed training.
Election conspiracies have only continued to fester since 2022, and AP3 has been aggressively recruiting and organizing. Moreover, the rhetoric in the group has also intensified. “The next election won’t be decided at a Ballot Box,” wrote an AP3 leader earlier this year in a private chat, according to ProPublica. “It’ll be decided at the ammo box.”
“Every American has a right to go to the ballot box without fear and the authorities need to urgently learn the lessons of 2022—and the lessons contained in these documents—so they can prevent something even worse from happening in the coming weeks,” the infiltrator wrote in the DDoS statement.
21 notes
Text
The thing about ai is that all these news articles keep saying different things about its future but never connecting the dots. Why is it that ai is so accessible? If it takes so much processing power and so much energy to use why is my for you page covered in ai art? Why is my coworker using it to write all of his emails? If there's supposedly no future in it, then why is it being implemented in every app and program?
"The best part of ai is it removes the requirement of skill to create art" < paraphrased from the horrifying opinion piece I just read that sparked this post.
The problem for the upper class in a capitalist society is that they still have to rely on the labour of the skilled. They're pulling a walmart. They want everyone to rely on ai until the people who have the skills, who know how to do skilled labor and can teach others are gone. The average person does not have the resources to use ai themselves. Once the skilled and skills are gone do you think the access to ai for the common person will remain free? Skill is not a dirty word. Even if you don't have talent you can have skill. You don't need to be the best or to be perfect or even be all that good for the skills you work for to have value.
Creation is an act of deliberate rebellion. If you love art, if you love music or paintings or reading or writing, do NOT let them erase you. You HAVE to write that shitty fanfiction; you HAVE to pick up that music editing program you've been afraid to try; you have to start watching the how-to-code videos in your watch later. The only way to learn is to start. And won't it be amazing? To participate in the divine act of creation and watch your art improve, all while knowing elon musk desperately doesn't want you to? To finally have a piece of art you're proud of, to listen to a song you made that makes you bop your head, to write that python script that does the thing that's been annoying you for years? Create and love and learn and teach everything you know and cherish. Or they'll eventually be gone.
Community is an act of deliberate rebellion. With the TikTok ban it's become very obvious that they're afraid to let us communicate freely. They can try and control our speech through technology but they are NOT omnipotent. And as much as you might feel like you have no voice when yet another app goes down and with it your community, you still have your actual voice. Talk to the people in your communities now. Make the memes that you joke about with your friends but never post. Go to the conventions you've always wanted to go to, message the people who make the art you admire, go to your local craft fairs. Or they will eventually be gone.
The rich want nothing more than for us to stick our nose to the grind, sit down, shut up, and die when they can no longer use us. Don't let them.
#ai#tiktok#tiktok ban#art#love#eat the rich#hypocritcal of me maybe to say this and never share the art i make but it needs to be said#and as long as i make it then i am doing the best i can do for right now#and you can too#and maybe ill make an actual step forward and post this draft#“if we want the rewards of being loved#we have to submit to the mortifying ordeal of being known.”
11 notes
Text
Addressing the rumors, between me and SKYND
Dear Skyndicates...
I didn't wish to write this, but I felt like I needed to get it off my chest because of some posts I've seen on this site from people wanting to see a collaboration between me and SKYND…
I'm writing once again to express my deep disappointment and hurt due to the silence I've received from SKYND. We had previously discussed collaborating on a music video, which felt like a dream come true. However, when I reached out to her on Instagram about a serious personal concern, she chose not to respond. This lack of response felt dismissive and hurtful, especially given the sensitivity of the situation.
She messaged me personally, expressing interest in collaborating on a music video using the software ‘Blender’, inspired by a Korn music video as well as the negative reactions to the band's ‘Bianca Devins’ video, particularly the AI-usage. That moment felt like a dream come true, an artist I admired reaching out to me with genuine creative interest. But when I later reached out to them about a serious personal concern, specifically about a former friend who has stalked me in the past, and who I learned has been attending several of the band's shows in Sweden - SKYND, and even Father, chose to ignore me. I reached out with vulnerability, hoping for acknowledgment, understanding, or even a brief reply. Instead, there has been silence for over six months. Given the sensitivity of the situation and the context, the lack of response has felt incredibly dismissive and hurtful.
Because of this I’ve stepped down as an admin of the fan group ‘The Skyndicate 🖤’ . This wasn’t an impulsive decision, it came after a great deal of reflection and conversations with fellow admins, who are also disappointed in how things have unfolded. SKYND's recent behavior has left many of us feeling alienated and disheartened. Even the mother of 'The Skyndicate 🖤' has expressed confusion over the band's actions.
Believe me, Skyndicates, I wish things had gone differently. There was a time when I was proud to support SKYND and connect with others who resonated with their vision and enjoyed being an admin of 'The Skyndicate 🖤'. I still want to admire them, but I can't overlook how their silence and choices have affected not only me, but others who have tried to support them in good faith.
PS: I wish to ask Skynd to tell her Patreon chat moderator to stop spreading unfounded and frankly bizarre accusations, namely that I plagiarized SKYND-themed memes from her. These claims are completely baseless. Not to mention, I’ve never found her content funny. There is no evidence of me stealing anyone's memes or, even SKYND-art, because nothing of the sort ever happened... If SKYND wish to find a new moderator then I nominate my "loyal raven" @najadahmer to take over that role, as she has proven to be an excellent admin of 'The Skyndicate 🖤' fan-group for years. Unfortunately it's not possible for me to send SKYND any DMs because she keeps on ignoring me, and prevents me from trying to leave a comment on her posts. And I've got the feeling she'll try and do the same thing on TikTok...
3 notes
Text
So I colored that image (and added a couple of characters that were missing). Anyway, it's the full lineup of BFFs. I feel so normal about these blorbos. Thank you to @viridiandruid for running such a fun game with great characters!
[character names, credits, and other info under the cut]
A) ale! my PC and the loml. Massuraman Binder. They have a minimum of three souls in their body at one time and a maximum of five. they're doing so okay right now; don't worry about it
B) Devin, an eagle that ale can summon through their armor as an ally.
C) Cpt. James Hawkins, played by @theboombardbox, spellsword/barbarian. he's missing half of his soul <3, the Hat Man is following him, and he has worms.
D) Sash, played by @halfandhalfling, our witch/werewolf bestie. Apparently, she's a princess of the moon, but we don't have time to unpack all of that. Also, her (adoptive) mom fucked James.
E) Fig! Sash's familiar
F) Orsa, Kiri's animal companion
G) Revazi, once played by @werepaladin. a barbarian whose "grandfather" Grandfather, a red greatwyrm, has been the patron of the party since like session five.
H) Chosen, in a human-only setting, she's an elf child! She's the reincarnation of ale's greatest enemy from both of their past lives, and they're apparently destined to end each other. But for right now they're buds, and she's the adoptive daughter of Kiri.
I) Kiri, @recoveringrevenant's PC, a Spirit Shaman. The newest PC of the BFFs, our very chaotic party's grounding force, she can see a lot of stuff going on that no one else can see, both literally and figuratively.
J) Beren, a strange teenager our party was charged with keeping safe, then promptly lost (we're working on it). The person who charged us with keeping him safe may or may not be an aboleth allied with the red dragon previously mentioned.
K) Cloves, the horse we bought in session two or three, that was then awakened by a random druid. He was one of the oldest members of the BFFs until... recently...
L) Erina, @recoveringrevenant 's retired PC, lets us crash at her place most nights, and she was a founding member of the BFFs, so even though she's not adventuring with us anymore, she's still one of us.
M) Agamemnon, our ship's AI, he's an orb that likes to dress up as a wizard :>
N) Ajatus, a guy we threatened to get to help us, but somehow became fully one of the crew. also, we need him to drive the boat, so, a BFF it is.
O) Father Lagi, a priest whose church collapsed into a sinkhole after we visited it. We offered him a job in recompense, though from some of the conversations we've had, he's pretty chill in general. Also he embezzled like all of the church's coffers, so that's funny
P) Dahlver-Nar, one of the many souls that take up residence inside of ale's body. Though he's a bit more permanent. He helps out and gives advice alongside some kick-ass powers.
Q) Naz, this is the character I forgot in the first render of this piece. she's functionally dead atm (permanently trapped inside an amber tree), but you know how it is. a character once played by @recoveringrevenant as well
R) Cousin Chet! the half-dragon half ???? freak. we love him though
S) Bailey Wick, a once stow-away, now rogue-for-hire she has the sticky fingers and high dex our party lacks. So whenever we need to infiltrate a place, we send her on those missions. Though we've recently found out she's like 19, which is horrifying lmao. She also is one of the most competent members of the BFFs only because she's the only character that consistently rolls above a 10 (like genuinely)
T) Tansu, Revazi's twin sister. Our most recent true member of the BFFs, she's.... gone through it
#illustration#artists on tumblr#dnd#drawing#dungeons and dragons#player character#oc#dungons and dragons#artist on tumblr#pc#please rb
24 notes
Text
DevLog 2 - The Devining
well. it only took 3 months. but here is our new devlog! or whatever you call it... We did write a whole devlog for early March, but with school and work taking up most of our schedule, we did not post it, and most of our progress fell to the sands of time.
Snail (@snailmusic) -
Yeah I didn't do nearly as much as freep, so most of those changes will be down there. part of the reason though is that ive been doing a lot of work on my music (haha yes self promo) so if you want to check that out it'd be great! (most of yall are just from my acc so you probably alr know) (my current style of music is probably not representative of O2's audio style or vibe, still working towards that)
The main thing I did was improve trenchbroom (level editor)/qodot/godot interop, which can bring us closer to building some levels (and who knows, a little alpha test in the future ;)). It was actually realllyyyy annoying due to a lack of documentation for qodot 4 (and also ill admit it, a bit of my stupidity) so there was a bug that I couldn't fix for a long time but eventually it was fixed and now it works great!
I also started looking more into the art style of the game, and I'm even learning a bit of how to draw (thanks to my friends! I wouldn't be able to learn like at all without them lol).
^ guy on a cube
oh yeah speaking of outside help im getting this is (very slightly) now bigger than us two! the others aren't doing too much we can note right now (one doesnt have a tumblr acc either) but when their contributions come more into play we'll include them here.
See ya next time!
Freep (@freepdryer) -
Back in March, I spent a lot of time working on the AI, getting it to move… and run away, sort of. But more on that will come later.
A lot of the work this last week or so has been on the character controller, reinventing the wheel to introduce a state machine and get much cleaner code so it's easier to revisit if we ever have to.
I'm proud of the work that we've done so far, as we come close to a prototype with *Gameplay*.
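Since the controller rewrite below is built around a state machine, here is a minimal sketch of the pattern. It's in TypeScript purely for illustration (the actual project uses Godot), and the state names are assumptions rather than the project's real code.

```typescript
// Minimal state-machine sketch for a character controller (illustrative only).
type StateName = "idle" | "walking" | "running" | "jumping" | "crouching";

interface State {
  enter?(): void;                      // called once when the state becomes active
  update(delta: number): StateName;    // returns the next state (or itself)
}

class StateMachine {
  private current: StateName = "idle";

  constructor(private states: Record<StateName, State>) {}

  update(delta: number): void {
    const next = this.states[this.current].update(delta);
    if (next !== this.current) {
      this.current = next;
      this.states[next].enter?.();
    }
  }
}

// Hypothetical usage: each state owns its own transition logic,
// which keeps the controller code much easier to revisit later.
const machine = new StateMachine({
  idle:      { update: () => "idle" },
  walking:   { update: () => "walking" },
  running:   { update: () => "running" },
  jumping:   { update: () => "jumping" },
  crouching: { update: () => "crouching" },
});
machine.update(1 / 60);
```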
New Things
Changed the look of the enemy slightly to remove the “amongus factor”
Rewrote the entire script for nav pathing
New enemy prototype can now feel pain / has a health pool that can be depleted using bullets from the player
Added a new line of sight for the enemy to check whether or not the player is in the area to follow
Added the ability for the enemy to hide - WIP - enemy can hide but isn't very good at it. Kinda like a child who turns away while hiding in the corner.
Enemy can also detect when you're in a certain range. I will be adding more flags later on for detection (when the player shoots, sneezes, or explodes on accident)
New testing map!
New areas for target practice, line of sight testing, following and hiding
New player character controller!
Rewrote the entire script for the inclusion of State machines
This was painful.
Added 6(?) new states for several movement states
Added animations for
Walking
Running
Jumping
Crouching
Fixed the stair problem
What's next?
Continue work on enemy AI - finish hiding, add roaming, add attacking
Dunno?
Fix the stair problem again, but more?
Weapons!
The end?
Thanks for coming to our devlog! We will be back hopefully very soon!
9 notes
Text
How the Devin AI software developer 'might' be a scam
youtube
saying that an artificial intelligence is autonomous is one thing, showing that it is indeed the case is another.
the closing line of the video tells viewers to send in tasks, so Devin can handle those tasks unseen and send you the results.
okay... but if it really is doing all of this in real time, why not just live stream the process at work?
and if deep learning is being used, you should have results you can use that show the steps the ai took at each decision, a la a teacher asking a student to show their working, so they can see the student didn't use a cheat sheet, or in this case, that a human operator on the other end didn't do the work for them.
they showed us a prerecorded video of the problem being solved; by now we should know that, through the wonders of television, time can seemingly skip ahead by editing out the time in between.
they give no explanation as to how Devin solves the issues, they only state that ai can do it, which holds as much weight as an illusionist saying it's just magic.
until we can see Devin being given a task and solving it in real time, plus a results screen to show the path of deep learning it used, Devin is about as believable an ai software engineer as any sci-fi movie robot.
youtube
12 notes
Text
Top 10 Agentic AI Tools Transforming 2025
Agentic AI is no longer a futuristic concept; it's here, driving transformation across industries by enabling intelligent systems that act autonomously on behalf of users. From decision-making bots to self-optimizing tools, 2025 is shaping up to be the breakout year for Agentic AI.
Below is a curated list of the top 10 Agentic AI tools that are setting new benchmarks for productivity, autonomy, and innovation.
1. Adept’s Action Transformer
Adept is pushing boundaries with its Action Transformer, an AI that understands and executes tasks across multiple digital interfaces. From managing spreadsheets to automating design workflows, it adapts seamlessly to user needs.
2. Rewind AI
Rewind AI lets users search their past digital activity (conversations, documents, meetings) just like searching the web. Its memory-based interface makes knowledge retrieval and context switching effortless.
3. Devin by Cognition
Billed as the world’s first fully autonomous software engineer, Devin writes code, debugs, runs tests, and even deploys applications with minimal human supervision. It’s built for speed, efficiency, and scalability.
4. AutoGPT
AutoGPT is a popular framework that chains tasks together using a self-prompting loop, making it ideal for goal-driven agents. It’s open-source and customizable for a wide range of use cases.
5. LangChain
LangChain empowers developers to build agent-based AI applications that can plan, reason, and act across external tools and APIs. Its ecosystem supports everything from chatbots to research assistants.
6. BabyAGI
Designed as a lightweight alternative to AutoGPT, BabyAGI uses a task queue system to continually generate and prioritize goals. It’s ideal for startups experimenting with autonomous workflows.
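The task-queue loop described for AutoGPT and BabyAGI above is simple at its core. Below is a minimal, hypothetical sketch in TypeScript; the askModel function stands in for whatever LLM call you would actually use, so treat everything here as an assumption rather than either project's real API.

```typescript
// Sketch of a goal-driven task loop (not AutoGPT's or BabyAGI's actual code).
type Task = { id: number; description: string };

// Placeholder for an LLM call; in practice this would hit a model API.
async function askModel(prompt: string): Promise<string> {
  return `Result for: ${prompt}`;
}

async function runAgent(objective: string, maxSteps = 5): Promise<void> {
  const queue: Task[] = [{ id: 1, description: `Plan how to achieve: ${objective}` }];
  let nextId = 2;

  for (let step = 0; step < maxSteps && queue.length > 0; step++) {
    const task = queue.shift()!;                      // take the highest-priority task
    const result = await askModel(task.description);  // "execute" it with the model
    console.log(`[${task.id}] ${task.description} -> ${result}`);

    // Ask the model for follow-up tasks and push them onto the queue.
    const followUp = await askModel(
      `Given the objective "${objective}" and the result "${result}", name one next task.`
    );
    queue.push({ id: nextId++, description: followUp });
  }
}

runAgent("research agentic AI tools");
```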
7. CrewAI
CrewAI brings collaborative agents to life, enabling multi-agent teams to complete complex tasks together. It’s particularly effective in research, finance, and project management.
8. OpenAgents
OpenAgents offers plug-and-play access to pre-trained agents that can read, write, and summarize across platforms. With a focus on privacy and reliability, it's a favorite for productivity geeks.
9. Smol Developer
This compact yet powerful tool acts as a micro-agent that writes, tests, and integrates code for small projects. Smol Developer is perfect for solo developers and lean engineering teams.
10. ChatDev
ChatDev simulates software companies using a hierarchy of autonomous agents (CEO, CTO, engineers) all working together to build and deploy products. It's a fascinating experiment in agentic collaboration.
As Agentic AI continues to evolve, these tools are laying the groundwork for a more automated, intelligent future. Whether you're a founder, developer, or product manager, tapping into this ecosystem could be the game-changer your workflow needs.
For an in-depth look at each tool and how they’re shaping the AI landscape, read the original article on Agami Technologies: Top 10 Agentic AI Tools of 2025.
0 notes
Text
Why agentic AI pilots fail and how to scale safely
New Post has been published on https://thedigitalinsider.com/why-agentic-ai-pilots-fail-and-how-to-scale-safely/
At the AI Accelerator Institute Summit in New York, Oren Michels, Co-founder and CEO of Barndoor AI, joined a one-on-one discussion with Alexander Puutio, Professor and Author, to explore a question facing every enterprise experimenting with AI: Why do so many AI pilots stall, and what will it take to unlock real value?
Barndoor AI launched in May 2025. Its mission addresses a gap Oren has seen over decades working in data access and security: how to secure and manage AI agents so they can deliver on their promise in enterprise settings.
“What you’re really here for is the discussion about AI access,” he told the audience. “There’s a real need to secure AI agents, and frankly, the approaches I’d seen so far didn’t make much sense to me.”
AI pilots are being built, but Oren was quick to point out that deployment is where the real challenges begin.
As Alexander noted:
“If you’ve been around AI, as I know everyone here has, you’ve seen it. There are pilots everywhere…”
Why AI pilots fail
Oren didn’t sugarcoat the current state of enterprise AI pilots:
“There are lots of them. And many are wrapping up now without much to show for it.”
Alexander echoed that hard truth with a personal story. In a Forbes column, he’d featured a CEO who was bullish on AI, front-loading pilots to automate calendars and streamline doctor communications. But just three months later, the same CEO emailed him privately:
“Alex, I need to talk to you about the pilot.”
The reality?
“The whole thing went off the rails. Nothing worked, and the vendor pulled out.”
Why is this happening? According to Oren, it starts with a misconception about how AI fits into real work:
“When we talk about AI today, people often think of large language models, like ChatGPT. And that means a chat interface.”
But this assumption is flawed.
“That interface presumes that people do their jobs by chatting with a smart PhD about what to do. That’s just not how most people work.”
Oren explained that most employees engage with specific tools and data. They apply their training, gather information, and produce work products. That’s where current AI deployments miss the mark, except in coding:
“Coding is one of those rare jobs where you do hand over your work to a smart expert and say, ‘Here’s my code, it’s broken, help me fix it.’ LLMs are great at that. But for most functions, we need AI that engages with tools the way people do, so it can do useful, interesting work.”
The promise of agents and the real bottleneck
Alexander pointed to early agentic AI experiments, like Devin, touted as the first AI software engineer:
“When you actually looked at what the agent did, it didn’t really do that much, right?”
Oren agreed. The issue wasn’t the technology; it was the disconnect between what people expect agents to do and how they actually work:
“There’s this promise that someone like Joe in finance will know how to tell an agent to do something useful. Joe’s probably a fantastic finance professional, but he’s not part of that subset who knows how to instruct computers effectively.”
He pointed to Zapier as proof: a no-code tool that didn’t replace coders.
“The real challenge isn’t just knowing how to code. It’s seeing these powerful tools, understanding the business problems, and figuring out how to connect the two. That’s where value comes from.”
And too often, Oren noted, companies think money alone will solve it. CEOs invest heavily and end up with nothing to show because:
“Maybe the human process, or how people actually use these tools, just isn’t working.”
This brings us to what Oren called the real bottleneck: access, not just to AI, but what AI can access.
“We give humans access based on who they are, what they’re doing, and how much we trust them. But AI hasn’t followed that same path. Just having AI log in like a human and click around isn’t that interesting; that’s just scaled-up robotic process automation.”
Instead, enterprises need to define:
What they trust an agent to do
The rights of the human behind it
The rules of the system it’s interacting with
And the specific task at hand
These intersect to form what Oren called a multi-dimensional access problem:
“Without granular controls, you end up either dialing agents back so much they’re less useful than humans, or you risk over-permissioning. The goal is to make them more useful than humans.”
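To make that multi-dimensional access check concrete, here is a small, hypothetical sketch in TypeScript of how the four dimensions (agent trust, the human's rights, the system's rules, and the task) might combine. The field names and rules are illustrative assumptions, not any vendor's actual product.

```typescript
// Illustrative access check combining agent trust, the human's rights,
// the target system's rules, and the specific task. Purely a sketch.
interface AccessRequest {
  agentTrustLevel: number;        // how much we trust this agent (0-3)
  humanPermissions: string[];     // rights of the person the agent acts for
  system: string;                 // e.g. "salesforce", "hr"
  action: string;                 // e.g. "read", "delete"
}

const systemRules: Record<string, { minTrust: number; allowedActions: string[] }> = {
  salesforce: { minTrust: 2, allowedActions: ["read", "update"] },
  hr:         { minTrust: 3, allowedActions: ["read"] },
};

function isAllowed(req: AccessRequest): boolean {
  const rules = systemRules[req.system];
  if (!rules) return false;                                     // unknown system: deny
  if (req.agentTrustLevel < rules.minTrust) return false;       // agent not trusted enough
  if (!rules.allowedActions.includes(req.action)) return false; // action not permitted here
  // The agent can never exceed the rights of the human behind it.
  return req.humanPermissions.includes(`${req.system}:${req.action}`);
}

// A request to delete Salesforce records is denied, even for a trusted agent.
console.log(isAllowed({
  agentTrustLevel: 3,
  humanPermissions: ["salesforce:read", "salesforce:update"],
  system: "salesforce",
  action: "delete",
}));
```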
Why specialized agents are the future (and how to manage the “mess”)
As the conversation shifted to access, Alexander posed a question many AI leaders grapple with: When we think about role- and permission-based access, are we really debating the edges of agentic AI?
“Should agents be able to touch everything, like deleting Salesforce records, or are we heading toward hyper-niche agents?”
Oren was clear on where he stands:
“I’d be one of those people making the case for niche agents. It’s the same as how we hire humans. You don’t hire one person to do everything. There’s not going to be a single AI that rules them all, no matter how good it is.”
Instead, as companies evolve, they’ll seek out specialized tools, just like they hire specialized people.
“You wouldn’t hire a bunch of generalists and hope the company runs smoothly. The same will happen with agents.”
But with specialization comes complexity. Alexander put it bluntly:
“How do we manage the mess? Because, let’s face it, there’s going to be a mess.”
Oren welcomed that reality:
“The mess is actually a good thing. We already have it with software. But you don’t manage it agent by agent, there will be way too many.”
The key is centralized management:
A single place to manage all agents
Controls based on what agents are trying to do, and the role of the human behind them
System-specific safeguards, because admins (like your Salesforce or HR lead) need to manage what’s happening in their domain
“If each agent or its builder had its own way of handling security, that wouldn’t be sustainable. And you don’t want agents or their creators deciding their own security protocols – that’s probably not a great idea.”
Why AI agents need guardrails and onboarding
The question of accountability loomed large. When humans manage fleets of AI agents, where does responsibility sit?
Oren was clear:
“There’s human accountability. But we have to remember: humans don’t always know what the agents are going to do, or how they’re going to do it. If we’ve learned anything about AI so far, it’s that it can have a bit of a mind of its own.”
He likened agents to enthusiastic interns – eager to prove themselves, sometimes overstepping in their zeal:
“They’ll do everything they can to impress. And that’s where guardrails come in. But it’s hard to build those guardrails inside the agent. They’re crafty. They’ll often find ways around internal limits.”
The smarter approach? Start small:
Give agents a limited scope.
Watch their behavior.
Extend trust gradually, just as you would with a human intern who earns more responsibility over time.
This led to the next logical step: onboarding. Alexander asked whether bringing in AI agents is like an HR function.
Oren agreed and shared a great metaphor from Nvidia’s Jensen Huang:
“You have your biological workforce, managed by HR, and your agent workforce, managed by IT.”
Just as companies use HR systems to manage people, they’ll need systems to manage, deploy, and train AI agents so they’re efficient and, as Alexander added, safe.
How to manage AI’s intent
Speed is one of AI’s greatest strengths and risks. As Oren put it:
“Agents are, at their core, computers, and they can do things very, very fast. One CISO I know described it perfectly: she wants to limit the blast radius of the agents when they come in.”
That idea resonated. Alexander shared a similar reflection from a security company CEO:
“AI can sometimes be absolutely benevolent, no problem at all, but you still want to track who’s doing what and who’s accessing what. It could be malicious. Or it could be well-intentioned but doing the wrong thing.”
Real-world examples abound from models like Anthropic’s Claude “snitching” on users, to AI trying to protect its own code base in unintended ways.
So, how do we manage the intent of AI agents?
Oren drew a striking contrast to traditional computing:
“Historically, computers did exactly what you told them; whether that’s what you wanted or not. But that’s not entirely true anymore. With AI, sometimes they won’t do exactly what you tell them to.”
That makes managing them a mix of art and science. And, as Oren pointed out, this isn’t something you can expect every employee to master:
“It’s not going to be Joe in finance spinning up an agent to do their job. These tools are too powerful, too complex. Deploying them effectively takes expertise.”
Why pilots stall and how innovation spreads
If agents could truly do it all, Oren quipped:
“They wouldn’t need us here, they’d just handle it all on their own.”
But the reality is different. When Alexander asked about governance failures, Oren pointed to a subtle but powerful cause of failure. Not reckless deployments, but inertia:
“The failure I see isn’t poor governance in action, it’s what’s not happening. Companies are reluctant to really turn these agents loose because they don’t have the visibility or control they need.”
The result? Pilot projects that go nowhere.
“It’s like hiring incredibly talented people but not giving them access to the tools they need to do their jobs and then being disappointed with the results.”
In contrast, successful AI deployments come from open organizations that grant broader access and trust. But Oren acknowledged the catch:
“The larger you get as a company, the harder it is to pull off. You can’t run a large enterprise that way.”
So, where does innovation come from?
“It’s bottom-up, but also outside-in. You’ll see visionary teams build something cool, showcase it, and suddenly everyone wants it. That’s how adoption spreads, just like in the API world.”
And to bring that innovation into safe, scalable practice:
Start with governance and security so people feel safe experimenting.
Engage both internal teams and outside experts.
Focus on solving real business problems, not just deploying tech for its own sake.
Oren put it bluntly:
“CISOs and CTOs, they don’t really have an AI problem. But the people creating products, selling them, managing finance – they need AI to stay competitive.”
Trusting AI from an exoskeleton to an independent agent
The conversation circled back to a critical theme: trust.
Alexander shared a reflection that resonated deeply:
“Before ChatGPT, the human experience with computers was like Excel: one plus one is always two. If something went wrong, you assumed it was your mistake. The computer was always right.”
But now, AI behaves in ways that can feel unpredictable, even untrustworthy. What does that mean for how we work with it?
Oren saw this shift as a feature, not a flaw:
“If AI were completely linear, you’d just be programming, and that’s not what AI is meant to be. These models are trained on the entirety of human knowledge. You want them to go off and find interesting, different ways of looking at problems.”
The power of AI, he argued, comes not from treating it like Google, but from engaging it in a process:
“My son works in science at a biotech startup in Denmark. He uses AI not to get the answer, but to have a conversation about how to find the answer. That’s the mindset that leads to success with AI.”
And that mindset extends to gradual trust:
“Start by assigning low-risk tasks. Keep a human in the loop. As the AI delivers better results over time, you can reduce that oversight. Eventually, for certain tasks, you can take the human out of the loop.”
Oren summed it up with a powerful metaphor:
“You start with AI as an exoskeleton; it makes you bigger, stronger, faster. And over time, it can become more like the robot that does the work itself.”
The spectrum of agentic AI and why access controls are key
Alexander tied the conversation to a helpful analogy from a JP Morgan CTO: agentic AI isn’t binary.
“There’s no clear 0 or 1 where something is agentic or isn’t. At one end, you have a fully trusted system of agents. On the other hand, maybe it’s just a one-shot prompt or classic RPA with a bit of machine learning on top.”
Oren agreed:
“You’ve described the two ends of the spectrum perfectly. And with all automation, the key is deciding where on that spectrum we’re comfortable operating.”
He compared it to self-driving cars:
“Level 1 is cruise control; Level 5 is full autonomy. We’re comfortable somewhere in the middle right now. It’ll be the same with agents. As they get better, and as we get better at guiding them, we’ll move further along that spectrum.”
And how do you navigate that safely? Oren returned to the importance of access controls:
“When you control access outside the agent layer, you don’t have to worry as much about what’s happening inside. The agent can’t see or write to anything it isn’t allowed to.”
That approach offers two critical safeguards:
It prevents unintended actions.
It provides visibility into attempts, showing when an agent tries to do something it shouldn’t, so teams can adjust the instructions before harm is done.
“That lets you figure out what you’re telling it that’s prompting that behavior, without letting it break anything.”
The business imperative and the myth of the chat interface
At the enterprise level, Oren emphasized that the rise of the Chief AI Officer reflects a deeper truth:
“Someone in the company recognized that we need to figure this out to compete. Either you solve this before your competitors and gain an advantage, or you fall behind.”
And that, Oren stressed, is why this is not just a technology problem, it’s a business problem:
“You’re using technology, but you’re solving business challenges. You need to engage the people who have the problems, and the folks solving them, and figure out how AI can make that more efficient.”
When Alexander asked about the biggest myth in AI enterprise adoption, Oren didn’t hesitate:
“That the chat interface will win.”
While coders love chat interfaces because they can feed in code and get help most employees don’t work that way:
“Most people don’t do their jobs through chat-like interaction. And most don’t know how to use a chat interface effectively. They see a box, like Google search, and that doesn’t work well with AI.”
He predicted that within five years, chat interfaces will be niche. The real value?
“Agents doing useful things behind the scenes.”
How to scale AI safely
Finally, in response to a closing question from Alexander, Oren offered practical advice for enterprises looking to scale AI safely:
“Visibility is key. We don’t fully understand what happens inside these models; no one really does. Any tool that claims it can guarantee behavior inside the model? I’m skeptical.”
Instead, Oren urged companies to focus on where they can act:
“Manage what goes into the tools, and what comes out. Don’t believe you can control what happens within them.”
Final thoughts
As enterprises navigate the complex realities of AI adoption, one thing is clear: success won’t come from chasing hype or hoping a chat interface will magically solve business challenges.
It will come from building thoughtful guardrails, designing specialized agents, and aligning AI initiatives with real-world workflows and risks. The future belongs to companies that strike the right balance; trusting AI enough to unlock its potential, but governing it wisely to protect their business.
The path forward isn’t about replacing people; it’s about empowering them with AI that truly works with them, not just beside them.
#2025#adoption#Advice#agent#Agentic AI#agents#ai#AI adoption#AI AGENTS#anthropic#API#approach#Art#Articles#Artificial Intelligence#author#automation#Behavior#binary#biotech#box#Building#Business#Cars#CEO#challenge#chatGPT#chief AI officer#CISO#CISOs
0 notes
Text
What is Agentic AI and How It Will Change Work in 2025?
In 2025, we’re entering a bold new era of work—one shaped not just by automation, but by intelligent, autonomous collaborators. Meet Agentic AI—an evolution of artificial intelligence that doesn’t just assist, but acts.

🔍 What Is Agentic AI?
Agentic AI refers to autonomous AI systems—called “agents”—that can set goals, make decisions, execute multi-step tasks, and adapt in real time, all with minimal human intervention. These AI agents move beyond reactive tools (like chatbots or autocomplete) to proactive co-workers.
They are capable of:
Reasoning and planning
Learning from feedback and context
Collaborating with humans and other agents
Taking action across systems (email, APIs, databases, etc.)
Think of Agentic AI as an intelligent teammate—not just a tool.
🚀 Why It’s a Game-Changer for the Workplace
Agentic AI is set to radically transform how we work by:
1. Automating Complex Workflows
From end-to-end marketing campaigns to customer onboarding and legal document drafting, agents can manage multistep processes—coordinating tools, content, and outcomes.
2. Boosting Human Productivity
By offloading repetitive, time-consuming tasks (data entry, meeting notes, reporting), Agentic AI gives employees more time for high-value, strategic, and creative work.
3. Creating AI-Powered Roles
Expect titles like “AI Workflow Designer”, “Agent Supervisor”, or “Prompt Engineer” to emerge as humans manage, refine, and co-create with AI agents.
4. 24/7 Operations
Unlike human teams, AI agents don’t sleep. Businesses can run essential functions (like monitoring, support, scheduling) around the clock—cost-efficiently.
🧠 Real-World Agentic AI Use Cases in 2025
| Industry | AI Agent Example |
| --- | --- |
| Software Development | An agent like Devin writes, tests, and ships code based on project goals. |
| Marketing | AI campaigns run by agents that design graphics, schedule posts, analyze metrics, and optimize in real time. |
| Healthcare | Agents help with patient intake, symptom triage, appointment booking, and insurance verification. |
| Finance | Automated financial advisors plan budgets, detect fraud, and adjust portfolios on your behalf. |
⚠️ Challenges and Ethical Considerations
Job Displacement: Routine administrative and coordination roles may decline.
Bias and Errors: Without oversight, agents can hallucinate, misinterpret data, or reinforce harmful biases.
Transparency: It’s crucial that businesses make it clear when users are interacting with an AI agent—not a human.
Security & Control: Agents acting autonomously must follow strict access and control protocols to avoid data breaches.
🔮 What’s Next?
Agentic AI will not replace all jobs—it will redefine them. Just like spreadsheets didn’t eliminate accountants, but changed their work, agents will transform professionals into AI supervisors, strategists, and system architects.
Companies that integrate Agentic AI today will become the digital pioneers of tomorrow.
✅ Final Thoughts: Why You Should Care
The future of work is agent-driven
Employees will work with AI, not against it
Businesses that adopt Agentic AI will gain agility, speed, and insight
📌 Call to Action
Want to see how Agentic AI can transform your business?
👉 Explore Agentic AI Services by IndoSakura 👉 Read More: What is Agentic AI and How It Will Change Work?
👉 Location : Bangalore
0 notes
Text
6/3/25
I'm gonna start calling him Thom Thumb. Just like when he was a kid. Thom Thumb using AI to find his way home. With the lawn watering timer.
Thom Thumb in the botanical bunker. And after my spouse serves her burritos and chimichanga, while we play house all day, we can experience once again, our lovely and reliable...even dependable cups of Robusta Coffees.
In ceramic cups and mugs.
Sitting him once more in his Devine placement for lost souls. As I shut her mouth... The spirits of his feminine side.
You know in your inner soul she is not lost with me.
-------
6/4/25
TSMC is reporting they are doing 2nm lithography with a 90% yield on ASML DUV Lithography Machines.
This is not only impressive, it is institutionally remarkable. At least in my mind.
This is stating that the raw technique and skillset of institutional craftsmanship, really stands up the quality of ASML Machinery, but the Chemistry of Lithography as well.
Original content is one thing to speak of, but this business is promoting the quality of hand craftsmanship, far beyond what these companies are advertising.
Maintaining high quality at very low equipment enumerations is a very difficult trade to offer in the modern industry. But producing this kind of product quality at minimal equipment enumerations is something that really leaves merit to the idea of leaving the traditional system of commerce, for something as a payroll career.
I've spent plenty of time with the chemistry of lithography. Especially when you are talking about chip quality that goes in consumer products. But these practices are also suggesting that a true valuation can't exactly be reported in a standard financial forum. Because the quality of human capital, really has produced a product that exceeds expectations. Not to mention the quality of the equipment, materials, and procedure that goes with it.
What makes technology amazing is when it is done right, within the enumeration of how it's applied.
This is merely stating that the quality of career opportunity, by the procedure of a business model, is generating a success in an industry that has never seen this kind of quality before.
What this translates to is providing the confidence in the industry for large Software Corporations, such as Microsoft and Google, to really get back to the institutional values of computation itself. And providing enough incentive, with the reliability of such a manufacturing process, for these Software Companies to develop their products into a tangible Hardware Asset.
As far as market share, a hardware asset is a billable real estate. Making Software Companies more open to finalizing institutional values for their own business, but turning the tech sector into the real estate values the institution condones and encourages.
Besides the idea of a traditional computer processor, and how it is engineered, not only does this expand the demand for hardware engineering, it allows tech companies to distinguish themselves in market share and corporate entity, by the hardware assets they would be developing.
That's a completely different ideology of what we think of when you talk about the "virtual world" of technology in whole.
0 notes
Text
Mastering Mcp To A2a: Everything A Developer Needs To Know
We have all seen the frenzy of Devin AI—teams racing to spin up models, orchestrate data flows, and automate every possible touchpoint. But beyond the hype, two architectural patterns quietly power next-generation pipelines: Model Context Protocol (MCP) and Agent-to-Agent (A2A).
Below is the roadmap for our deep dive into MCP (Model Context Protocol) and A2A (Agent-to-Agent):
Why We Need MCP & What Problem It Solves: MCP standardizes how AI assistants connect to data sources and tools, eliminating brittle point-to-point integrations and enabling seamless context sharing across systems.
What Is A2A (Agent-to-Agent): A2A defines a vendor-agnostic, secure messaging protocol for agents to delegate tasks and collaborate, reusing standards like HTTP and JSON-RPC.
Key Differences Between MCP & A2A: While MCP focuses on connecting LLMs to external tools, A2A enables peer-to-peer agent orchestration; we'll compare their communication patterns, security models, and use cases.
Hands-On Walkthrough: Connecting to a Google Maps MCP Server Using Smithery: a step-by-step with the mcp-google-map server, installing via npx @smithery/cli, inspecting available Google Maps tools, defining a mapRequest schema, publishing events, and consuming responses.
Why did MCP even come into the picture?
If you’ve ever tried building an app from scratch—no starter kits, no boilerplate—you know it’s a rare beast these days, with countless templates and low-code tools doing the heavy lifting for you. Even for a simple feature that involves only a handful of actions, you end up defining dozens of functions, wiring up API calls, and wrestling with sprawling context just to keep everything in sync. When an LLM is responsible for scaffolding that code, you run into even more headaches: prompt length limits, out-of-date API knowledge, and unpredictable behavior.
LLMs have been in the mainstream long enough that it’s time to agree on a clear, uniform architecture. Enter MCP: Instead of juggling thousands of variable names across multiple data structures, you define a single protocol for events and context. No more patching together mismatched structs—just a consistent, scalable way to connect your model to the tools and services you need.
Outdated Context
Stale API Definitions: Over time, APIs evolve—endpoints change, parameters get deprecated, and response schemas shift. Without a centralized contract, your app's code can keep calling methods that no longer exist or behave differently.
Fragmented Knowledge Sources: When different modules or team members maintain their own partial docs or code snippets, it's easy for some parts of the system to reference legacy behavior while others assume the latest version.
Hard-to-Track Dependencies: Without a single source of truth, you'll spend hours hunting down which version of an API a given function was written against—worse if multiple microservices rely on slightly different variants.
Context Limit by LLMs
Token Budget Constraints: Large language models typically cap out around 8k–32k tokens. If your prompt needs to load dozens of function signatures, data models, and example payloads, you risk hitting that limit—and getting incomplete or truncated responses.
Dynamic Context Switching: As your dialog flows between user interactions, error messages, and tooling calls, you must continuously stitch new context in while pruning older details. This overhead can lead to inconsistent behavior when the LLM “forgets” earlier conversation state.
Performance Degradation: Pushing near the token ceiling may slow inference and increase latency. Every extra line of JSON schema or API spec you include in the prompt reduces headroom for meaningful instructions or real-time data.
Too Many APIs—What Is Relevant?
Discovery Overhead: Modern systems expose dozens of endpoints (CRUD, batch jobs, analytics, webhooks, etc.). Manually curating which ones the LLM should know about becomes a maintenance burden.
Noise vs. Signal: Feeding every available API into your prompt can drown the model in irrelevant options, causing it to suggest calls that aren’t needed or to pick the wrong endpoint for a given task.
Version & Permission Mismatch: Even if you filter by endpoint name, differences in authentication schemes, rate limits, and version tags can lead to runtime errors if the model isn’t aware of the specific requirements for each API.
What problems does MCP solve?
Clients = Your Frontend/SDKs
Examples: Cursor app, Windsurf CLI, Claude Desktop
Responsibilities:
Call mcp.listTools() on the MCP server
Pass those tool definitions into the LLM prompt (“Here’s how you call Slack.postMessage”)
When the model returns a tool name + params, invoke mcp.callTool(toolName, params) and surface the JSON response back to your app
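To make this concrete, here is a minimal client-side sketch in TypeScript. It assumes an MCP-style server that exposes listTools and callTool as JSON-RPC 2.0 methods over plain HTTP (the naming used throughout this article); the server URL and the Slack.postMessage tool parameters are illustrative assumptions, not a specific SDK’s API.

```typescript
// Minimal JSON-RPC 2.0 client for an MCP-style server.
// Assumptions: the endpoint URL and the Slack example params are hypothetical.

type JsonRpcResponse<T> = {
  jsonrpc: "2.0";
  id: number;
  result?: T;
  error?: { code: number; message: string };
};

const MCP_SERVER_URL = "https://mcp.example.com/rpc"; // hypothetical endpoint
let nextId = 1;

async function rpc<T>(method: string, params: unknown): Promise<T> {
  const res = await fetch(MCP_SERVER_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: nextId++, method, params }),
  });
  const body = (await res.json()) as JsonRpcResponse<T>;
  if (body.error) throw new Error(`${method} failed: ${body.error.message}`);
  return body.result as T;
}

// 1. Discover tools and hand their definitions to the LLM prompt.
const tools = await rpc<{ name: string; description: string; inputSchema: unknown }[]>(
  "listTools",
  {}
);

// 2. When the model picks a tool and params, invoke it and surface the JSON reply.
const reply = await rpc("callTool", {
  toolName: "Slack.postMessage",                        // tool name from the example above
  params: { channel: "#eng", text: "Build finished" },  // illustrative params
});
console.log(tools.length, reply);
```

The important design point is that the client never needs service-specific code: it only knows how to list tools, render them into the prompt, and forward whatever the model chooses back to the server.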
Servers = Protocol Adapters/Proxies
Expose a JSON-RPC API (listTools, callTool) that any client can consume
Describe each tool with a simple JSON schema + human-readable docs so the LLM knows method names, param types, and auth rules
Translate between the MCP RPC calls and the provider’s native REST/gRPC endpoints, handling auth tokens, retries, and error mappings
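On the server side, a minimal adapter can be sketched as follows, assuming Express and Node’s global fetch; the tool schema shape, the /rpc route, and the SLACK_TOKEN environment variable are assumptions for illustration rather than a published adapter.

```typescript
import express from "express";

// Tool definitions the adapter advertises; the schema shape here is illustrative.
const tools = [
  {
    name: "Slack.postMessage",
    description: "Post a message to a Slack channel.",
    inputSchema: {
      type: "object",
      properties: { channel: { type: "string" }, text: { type: "string" } },
      required: ["channel", "text"],
    },
  },
];

const app = express();
app.use(express.json());

app.post("/rpc", async (req, res) => {
  const { id, method, params } = req.body;
  try {
    if (method === "listTools") {
      return res.json({ jsonrpc: "2.0", id, result: tools });
    }
    if (method === "callTool" && params?.toolName === "Slack.postMessage") {
      // Translate the MCP call into the provider's native REST call,
      // injecting the secret held server-side (clients never see it).
      const upstream = await fetch("https://slack.com/api/chat.postMessage", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.SLACK_TOKEN}`, // assumed env var
        },
        body: JSON.stringify(params.params),
      });
      return res.json({ jsonrpc: "2.0", id, result: await upstream.json() });
    }
    res.json({ jsonrpc: "2.0", id, error: { code: -32601, message: "Method or tool not found" } });
  } catch (err) {
    res.json({ jsonrpc: "2.0", id, error: { code: -32000, message: String(err) } });
  }
});

app.listen(3000);
```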
Service Providers = Your Existing APIs
Slack, GitHub, Notion, custom microservices, etc.
No code changes needed—the MCP server wraps their existing APIs behind the unified protocol
Providers keep their own rate limits, versioning, and data models; MCP handles the integration glue
(Architecture diagram credit: builder.io)
What is A2A?
A2A (Agent-to-Agent) is an open standard introduced by Google for multi-agent communication in AI systems. It provides a lightweight, JSON-based protocol for agents to announce their “Agent Cards” (metadata describing their capabilities), subscribe to peer events, and invoke actions on one another without requiring custom glue code.
Core Components
Agent Card: A public metadata file listing an agent’s supported tasks, input/output schemas, and authentication requirements.
Discovery: Agents broadcast or query registries to find peers with compatible capabilities
Messaging: Uses JSON-RPC or HTTP transport with optional streaming (e.g., Server-Sent Events) to exchange requests and responses
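For a concrete feel, an Agent Card might look something like the sketch below. The field names and the trip-planner example are illustrative and only mirror the description above; the published A2A schema may differ in detail.

```typescript
// Illustrative Agent Card shape; field names are assumptions based on the
// description above, not a verbatim copy of the official A2A specification.
interface AgentCard {
  name: string;
  description: string;
  url: string; // endpoint where the agent accepts requests
  capabilities: { streaming?: boolean; pushNotifications?: boolean };
  authentication: { schemes: string[] };
  skills: { id: string; description: string; inputSchema?: unknown }[];
}

const tripPlanner: AgentCard = {
  name: "trip-planner",
  description: "Plans multi-stop trips and books transport.",
  url: "https://agents.example.com/trip-planner", // hypothetical
  capabilities: { streaming: true },
  authentication: { schemes: ["bearer"] },
  skills: [{ id: "plan-trip", description: "Build an itinerary from a free-text request." }],
};
```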
A2A vs MCP
MCP (Model Context Protocol): Lets a single AI model talk to external tools in a consistent way. It handles:
Discovering which tools are available
Invoking them via standard JSON-RPC calls
Passing in any needed data (schemas, context) and parsing their replies
A2A (Agent-to-Agent): Lets multiple AI “agents” talk directly to each other, peer-to-peer. It handles:
Announcing each agent’s capabilities
Finding and authenticating a fellow agent to do a task
Messaging back and forth in a secure, structured format
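A peer-to-peer hand-off can then be sketched roughly like this, again as fetch-based JSON-RPC. The /.well-known path, the invokeAgent method name (which follows the invocation row in the Protocol Mechanics table below), and the payload shape are assumptions and may not match the official A2A specification exactly.

```typescript
// Fetch a peer's Agent Card, then delegate a task to it over JSON-RPC.
// The discovery path, method name, and payload shape are assumptions.

const peerBase = "https://agents.example.com/trip-planner"; // hypothetical peer

// 1. Discovery: read the peer's Agent Card to learn its endpoint, skills, and auth scheme.
const card = await (await fetch(`${peerBase}/.well-known/agent.json`)).json();

// 2. Invocation: send a task request as a structured, authenticated message.
const response = await fetch(card.url, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.PEER_TOKEN}`, // assumed shared credential
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "invokeAgent",
    params: { skill: "plan-trip", input: { prompt: "3 days in Lisbon, mid-June" } },
  }),
});
console.log(await response.json());
```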
Protocol Mechanics
| Feature | MCP | A2A |
| --- | --- | --- |
| Transport | JSON-RPC over HTTP(S) | JSON-RPC over HTTP(S), optional SSE streaming |
| Discovery | listTools() RPC call | “Agent Cards” registry or peer broadcast |
| Invocation | callTool(toolName, params) | invokeAgent(method, params) |
| Definition Format | JSON Schemas + human-readable docs | Agent Card metadata: capabilities, schemas, endpoints |
| Security | Server-managed auth (API keys, OAuth) | Peer-to-peer auth (tokens, mutual TLS), ACLs |
| State Sharing | Context payloads passed in the prompt | Streaming updates, push notifications |
| Typical Topology | Model → MCP Server → Service Provider | Agent A ↔ Agent B ↔ Agent C (mesh or registry) |
Simple setup of MCP with the Google Maps MCP server using Smithery
Prerequisites
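Once the server is installed via npx @smithery/cli (as listed in the roadmap above), the client code has the same shape as the generic sketch earlier. The sketch below assumes the server exposes the same listTools/callTool JSON-RPC interface locally; the port, the maps_geocode tool name, and the mapRequest fields are assumptions for illustration.

```typescript
// Query a Google Maps MCP server through a small JSON-RPC helper.
// The endpoint, tool name, and request shape below are assumptions.

const MAPS_MCP_URL = "http://localhost:3001/rpc"; // wherever the Smithery-installed server listens

async function mapsRpc<T>(method: string, params: unknown): Promise<T> {
  const res = await fetch(MAPS_MCP_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  return ((await res.json()) as { result: T }).result;
}

// Inspect which Google Maps tools the server advertises.
const mapsTools = await mapsRpc<{ name: string }[]>("listTools", {});
console.log(mapsTools.map((t) => t.name));

// Illustrative mapRequest payload for a hypothetical geocoding tool.
const mapRequest = { address: "1600 Amphitheatre Parkway, Mountain View, CA" };
const geocoded = await mapsRpc("callTool", { toolName: "maps_geocode", params: mapRequest });
console.log(geocoded);
```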
FAQ
Q1: What’s the easiest way to add a new service to MCP?
A: Write or configure an MCP server adapter for that service. Define its tool list in a JSON schema (method names, input/output types, auth), expose listTools and callTool over JSON-RPC, and point your client at its RPC URL. No client changes required.
Q2: How do I handle authentication and secrets in MCP?
A: Store API keys or OAuth tokens on the MCP server side (e.g., via environment variables or a secrets vault). The server injects them when calling the provider’s API, so clients only invoke callTool without ever seeing raw credentials.
Q3: Can I mix MCP and A2A in the same application?
A: Absolutely. Use MCP for “vertical” calls from your LLM to external tools, and A2A for “horizontal” agent hand-offs. For example, one agent may use MCP to enrich data from Slack, then delegate follow-up tasks to another agent via A2A.
Q4: How do I version MCP servers or tools without breaking clients?
A: Maintain backward-compatible JSON schemas. When you need breaking changes, publish a new tool name or a new server endpoint. Clients can list both old and new versions via listTools and choose the version they support.
0 notes
Text
So I'm not a fan of ai art at all. But I've struggled for years to accurately capture one of my oldest OCs, Devin, on paper. He has a simple design but it never comes out right. I was curious and wanted to see if ai could nudge my brain in the right direction
And-
Y'all
We're going to be okay. Ai ain't gonna take art away from us. This is hilarious. How can something be so good and yet completely wrong??
What I asked for:
Native American male, tall, shoulder length black hair, dark brown eyes, wearing a grey tank top, black jacket, a necklace, cargo pants, with a cigarette
What I got:
Not bad, okay, but why are his eyes red?? Sir, where did the other half of your second necklace go?? Your pockets have thumb holes and for some reason... half a jacket?
Nice, his eyes are the right color at least but dude- hands?? I blame the floating hand on that one time he was a ghost. Also, sir, that is an excessive amount of necklaces and fix your shirt- WHERE IS THE OTHER HALF OF YOUR JACKET, AGAIN?!
At least there's two halves to his jacket this time but um ... hey Devin? What was the plot point where you LOST AN ARM?!
Soooooo many of these gave him red eyes
Why red eyes?
Devin are you hiding something from me?
You possessed?
Dev?
Talk to me you figment of my imagination
#still don't like ai art#but this was a fun little adventure#I'll try to draw Devin again#i can't fuck it up as much as ai#ai art#ai oc#oc#original character#ai generated#devin
0 notes
Text
How Google Cloud Workstations Reshape Federal Development

In the demanding field of federal software development, teams are continuously challenged to deliver creative solutions while maintaining the highest security standards. The complexity of maintaining consistent development environments, scaling teams, and managing infrastructure can quickly become overwhelming. In a recent Forrester-hosted webinar, Devin Dickerson, Principal Analyst at Forrester, walked through the findings of a commissioned Total Economic Impact (TEI) study on how Google Cloud Workstations affect the software development lifecycle, focusing on how they can improve security, consistency, and agility while lowering costs and risks.
The Numbers: A Forrester Total Economic Impact (TEI) Study
According to the TEI study, which Forrester Consulting conducted on behalf of Google Cloud in April 2024, Cloud Workstations have a significant impact on development teams:
Executive Synopsis
Organizations trying to grow their development teams commonly run into complex onboarding procedures, inconsistent development environments, and local code storage policies. These issues hinder productivity and jeopardize business security. As a result, businesses are looking for a solution that gives developers a reliable and safe toolkit without requiring expensive on-premises resources.
Google Cloud Workstations give developers access to a secure, managed development environment that streamlines onboarding and boosts workflow efficiency. Administrators and platform teams make preconfigured workstations available to developers via a browser or a local IDE, and developers can customize them as needed. To help developers solve code problems and build apps faster, Google Cloud Workstations ship with built-in integration with Gemini Code Assist, an AI-powered collaborator.
Google commissioned Forrester Consulting to carry out a Total Economic Impact (TEI) study investigating the potential return on investment (ROI) that businesses could achieve by using Google Cloud Workstations. The study aims to give readers a methodology for assessing the potential financial impact that Cloud Workstations may have on their organizations.
30% productivity gains for developers: Pre-configured environments, simplified onboarding, and AI-powered tools like Gemini Code Assist remove bottlenecks.
293% ROI over three years: Cost savings and increased productivity make the investment in Cloud Workstations pay for itself quickly.
Department of Defense Experience
The industry expert described a concerning event from his time in the Navy: a contract developer lost his laptop while traveling, potentially exposing private data and putting national security at risk. This cautionary tale underscores the importance of secure, cloud-based solutions such as Google Cloud Workstations. By centralizing development environments and doing away with the need for local code storage, Workstations reduce the risk posed by lost or stolen devices and safeguard sensitive data. They also improve security and operational efficiency by streamlining onboarding and ensuring that new developers have immediate access to secure, standardized environments.
Streamlining Government IT Modernization with Google Cloud Workstations
Google Cloud Workstations provide a powerful solution made especially to ease the difficulties government organizations encounter when developing software today:
Simplified Cloud-Native Development: Reduce the overhead of maintaining several development environments by managing and integrating complicated toolchains, dependencies, and cloud-native architectures with ease.
Decreased Platform Team Overhead: Simplify the processes for infrastructure provisioning, developer onboarding and offboarding, and maintenance to free up important resources for critical projects.
Standardized Development Environments: Reduce the infamous “works on my machine” problem and promote smooth teamwork by guaranteeing uniformity and repeatability across teams.
Enhanced Security & Compliance: Meet and surpass strict federal security and compliance requirements with FedRAMP High authorization. Centralized administration and integrated security controls deliver comprehensive data protection.
The Way Forward
Now FedRAMP High Authorized, Google Cloud Workstations are more than just a technical advancement; they are a calculated investment in the creativity, security, and productivity of teams. By adopting this cloud-native solution, government agencies can save money, simplify processes, and free up developers to concentrate on what they do best: creating innovative solutions that benefit the country.
Read more on Govindhtech.com
#GoogleCloudWorkstations#GoogleCloud#CloudWorkstation#GeminiCodeAssist#FedRAMP#GeminiCode#News#Technews#Technology#Technologynews#Technologytrends#govindhtech
1 note