# AI Agent Data Access
govindhtech · 10 days ago
MCP Toolbox for Databases Simplifies AI Agent Data Access
AI Agent Access to Enterprise Data Made Easy with MCP Toolbox for Databases
Google Cloud Next '25 showed organisations how to develop multi-agent ecosystems using Vertex AI and Google Cloud databases, with the Agent2Agent Protocol and the Model Context Protocol enriching agent interactions. In response to developer interest in MCP, Gen AI Toolbox for Databases has been renamed MCP Toolbox for Databases, making it easier to access your company data in databases. This advances standardised, secure experimentation with agentic applications.
Previous names: Gen AI Toolbox for Databases, MCP Toolbox
MCP Toolbox for Databases (Toolbox) is an open-source MCP server that lets developers connect new AI agents to business data securely and easily. MCP is an open standard, created by Anthropic, that links AI systems to data sources without requiring bespoke integrations.
Toolbox can now generate tools for self-managed MySQL and PostgreSQL, Spanner, Cloud SQL for PostgreSQL, Cloud SQL for MySQL, and AlloyDB for PostgreSQL (including AlloyDB Omni). Thanks to open-source contributions, it also supports Neo4j and Dgraph. Toolbox integrates OpenTelemetry for end-to-end observability and OAuth2 and OIDC for security, and it reduces boilerplate code for simpler development. By managing connection pooling, authentication, and more, it makes tool creation simpler, faster, and more secure.
As an MCP server, Toolbox provides the framework needed to build production-quality database tools and make them available to any client in the growing MCP ecosystem. This compatibility lets agentic app developers use Toolbox to reliably query several databases through a single protocol, simplifying development and improving interoperability.
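To make this concrete, Toolbox tools are defined declaratively in a configuration file. The fragment below is a hypothetical sketch only; the file name, source kinds, field names, and placeholders are assumptions based on common Toolbox-style configurations, not details taken from this post:

```yaml
# Hypothetical tools.yaml sketch — schema and field names are assumptions.
sources:
  my-cloud-sql:
    kind: cloud-sql-postgres   # assumed source kind
    project: my-project        # placeholder values
    region: us-central1
    instance: my-instance
    database: orders
    user: toolbox-user
    password: ${DB_PASSWORD}

tools:
  search-orders:
    kind: postgres-sql         # assumed tool kind
    source: my-cloud-sql
    description: Look up orders for a given customer.
    parameters:
      - name: customer_id
        type: string
        description: Customer identifier.
    statement: SELECT * FROM orders WHERE customer_id = $1;
```

The point of the declarative shape is that the server, not each agent, owns connection pooling, credentials, and parameterization, so any MCP client can call `search-orders` without knowing how the database is wired up.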
MCP Toolbox for Databases supports ADK
Google also introduced the Agent Development Kit (ADK), an open-source framework that simplifies building complex multi-agent systems while maintaining fine-grained control over agent behaviour. You can build an AI agent with ADK in under 100 lines of intuitive code. ADK lets you:
Shape how agents think, reason, and collaborate through orchestration controls and deterministic guardrails.
Enable human-like interactions with agents using ADK's bidirectional audio and video streaming capabilities, with just a few lines of code.
Choose your preferred model and deployment. ADK supports your stack, whether that's your top-tier model, your deployment target, or remote-agent interoperability with other frameworks. ADK also supports the Model Context Protocol (MCP), which secures communication between data sources and AI agents.
Release to production using ADK's direct integration with Vertex AI Agent Engine. This reliable, transparent path from development to enterprise-grade deployment eliminates the overhead of taking agents to production.
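The "deterministic guardrails" idea above can be illustrated with a tiny, framework-free sketch. This is a conceptual toy in plain Python, not the ADK API; the function names and the guardrail policy are invented for illustration:

```python
# Toy illustration of a deterministic guardrail around an agent step.
# Conceptual only — not the ADK API; real ADK agents are built with the
# framework's own classes and orchestration hooks.

def agent_step(prompt: str) -> str:
    # Stand-in for a model call; in a real agent this would be an LLM.
    return f"Draft answer to: {prompt}"

def guardrail(text: str, banned: tuple = ("password",)) -> str:
    # Deterministic check applied to every output, regardless of model.
    for word in banned:
        if word in text.lower():
            return "[blocked by guardrail]"
    return text

def run(prompt: str) -> str:
    # Orchestration: every agent output passes through the guardrail.
    return guardrail(agent_step(prompt))

blocked = run("What is our password policy?")   # -> "[blocked by guardrail]"
allowed = run("Summarize the meeting notes")
```

The design point is that the guardrail is ordinary deterministic code sitting outside the model, so its behaviour is auditable and does not depend on which model or deployment target you choose.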
LangGraph support
LangGraph provides essential persistence-layer support through checkpointers. This helps create powerful, stateful agents that can complete long-running tasks or resume where they left off.
For state storage, Google Cloud provides integration libraries backed by its managed databases. Developers can choose between:
AlloyDB for PostgreSQL, a highly scalable option accessed through the AlloyDBSaver class in the langchain-google-alloydb-pg-python library, or
Cloud SQL for PostgreSQL, using the PostgresSaver checkpointer in the langchain-google-cloud-sql-pg-python library.
Backed by Google Cloud's PostgreSQL performance and management, both easily store and load agent execution states, allowing operations to be paused, resumed, and audited dependably.
When a graph is compiled with a checkpointer, the checkpointer records a checkpoint of the graph state at every super-step. These checkpoints are saved to a thread, which can be accessed after graph execution. Because threads offer access to the graph's state after execution, they enable fault tolerance, memory, time travel, and human-in-the-loop workflows.
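The checkpointer mechanics described above can be sketched with a small, self-contained toy. This is conceptual only; it is not the real LangGraph, AlloyDBSaver, or PostgresSaver API, and in production the checkpoints would be persisted to PostgreSQL rather than an in-memory dict:

```python
# Conceptual toy checkpointer: records graph state at each super-step,
# keyed by thread, so execution can be paused, resumed, and audited.
# This mimics the idea only — not the real LangGraph checkpointer API.
from copy import deepcopy

class ToyCheckpointer:
    def __init__(self):
        self._threads = {}  # thread_id -> list of checkpoints

    def save(self, thread_id, step, state):
        self._threads.setdefault(thread_id, []).append(
            {"step": step, "state": deepcopy(state)}
        )

    def latest(self, thread_id):
        checkpoints = self._threads.get(thread_id)
        return deepcopy(checkpoints[-1]["state"]) if checkpoints else None

def run_graph(steps, checkpointer, thread_id, state=None, start=0):
    """Run `steps` (functions state -> state), checkpointing each super-step."""
    state = state if state is not None else {}
    for i in range(start, len(steps)):
        state = steps[i](state)
        checkpointer.save(thread_id, i, state)
    return state

# Pause after the first super-step, then resume from the saved checkpoint:
steps = [lambda s: {**s, "a": 1}, lambda s: {**s, "b": 2}]
cp = ToyCheckpointer()
run_graph(steps[:1], cp, "thread-1")
resumed = run_graph(steps, cp, "thread-1",
                    state=cp.latest("thread-1"), start=1)
```

Because every super-step's state lands in the thread, a run can be stopped after any step and resumed from `cp.latest(...)`, which is exactly the property that enables fault tolerance, auditing, and human-in-the-loop pauses.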
jcmarchi · 1 month ago
Saryu Nayyar, CEO and Founder of Gurucul – Interview Series
New Post has been published on https://thedigitalinsider.com/saryu-nayyar-ceo-and-founder-of-gurucul-interview-series/
Saryu Nayyar is an internationally recognized cybersecurity expert, author, speaker and member of the Forbes Technology Council. She has more than 15 years of experience in the information security, identity and access management, IT risk and compliance, and security risk management sectors.
She was named EY Entrepreneurial Winning Women in 2017. She has held leadership roles in security products and services strategy at Oracle, Simeio, Sun Microsystems, Vaau (acquired by Sun) and Disney. Saryu also spent several years in senior positions at the technology security and risk management practice of Ernst & Young.
Gurucul is a cybersecurity company that specializes in behavior-based security and risk analytics. Its platform leverages machine learning, AI, and big data to detect insider threats, account compromise, and advanced attacks across hybrid environments. Gurucul is known for its Unified Security and Risk Analytics Platform, which integrates SIEM, UEBA (User and Entity Behavior Analytics), XDR, and identity analytics to provide real-time threat detection and response. The company serves enterprises, governments, and MSSPs, aiming to reduce false positives and accelerate threat remediation through intelligent automation.
What inspired you to start Gurucul in 2010, and what problem were you aiming to solve in the cybersecurity landscape?
Gurucul was founded to help Security Operations and Insider Risk Management teams obtain clarity into the most critical cyber risks impacting their business. Since 2010 we’ve taken a behavioral and predictive analytics approach, rather than a rules-based one, which has generated more than 4,000 machine learning models that put user and entity anomalies into context across a variety of attack and risk scenarios. We’ve built upon this as our foundation, moving from helping large Fortune 50 companies solve Insider Risk challenges to helping companies gain radical clarity into ALL cyber risk. This is the promise of REVEAL, our unified, AI-driven Data and Security Analytics platform. Now we’re building on our AI mission with a vision to deliver a Self-Driving Security Analytics platform, using machine learning as our foundation but layering on generative and agentic AI capabilities across the entire threat lifecycle. The goal is for analysts and engineers to spend less time mired in complexity and more time focused on meaningful work, allowing machines to amplify their day-to-day activities.
Having worked in leadership roles at Oracle, Sun Microsystems, and Ernst & Young, what key lessons did you bring from those experiences into founding Gurucul?
My leadership experience at Oracle, Sun Microsystems, and Ernst & Young strengthened my ability to solve complex security challenges and provided me with an understanding of the challenges that Fortune 100 CEOs and CISOs face. Collectively, it gave me a front-row seat to the technological and business challenges most security leaders face and inspired me to build solutions to bridge those gaps.
How does Gurucul’s REVEAL platform differentiate itself from traditional SIEM (Security Information and Event Management) solutions?
Legacy SIEM solutions depend on static, rule-based approaches that lead to excessive false positives, increased costs, and delayed detection and response. Our REVEAL platform is fully cloud-native and AI-driven, utilizing advanced machine learning, behavioral analytics, and dynamic risk scoring to detect and respond to threats in real time. Unlike traditional platforms, REVEAL continuously adapts to evolving threats and integrates across on-premises, cloud, and hybrid environments for comprehensive security coverage. Recognized as the ‘Most Visionary’ SIEM solution in Gartner’s Magic Quadrant for three consecutive years, REVEAL redefines AI-driven SIEM with unmatched precision, speed, and visibility. Furthermore, SIEMs struggle with a data overload problem. They are too expensive to ingest everything needed for complete visibility, and even when they do, that just adds to the false positive problem. Gurucul understands this problem, and it’s why we have a native, AI-driven Data Pipeline Management solution that filters non-critical data to low-cost storage, saving money while retaining the ability to run federated search across all data. Analytics systems are a “garbage in, garbage out” situation. If the data coming in is bloated, unnecessary, or incomplete, then the output will not be accurate, actionable, or ultimately trusted.
Can you explain how machine learning and behavioral analytics are used to detect threats in real time?
Our platform leverages over 4,000 machine learning models to continuously analyze all relevant datasets and identify anomalies and suspicious behaviors in real time. Unlike legacy security systems that rely on static rules, REVEAL uncovers threats as they emerge. The platform also utilizes User and Entity Behavior Analytics (UEBA) to establish baselines of normal user and entity behavior, detecting deviations that could indicate insider threats, compromised accounts, or malicious activity. This behavior is further contextualized by a big data engine that correlates, enriches, and links security, network, IT, IoT, cloud, identity, and business application data along with both internally and externally sourced threat intelligence. This informs a dynamic risk scoring engine that assigns real-time risk scores to help prioritize responses to critical threats. Together, these capabilities provide a comprehensive, AI-driven approach to real-time threat detection and response that sets REVEAL apart from conventional security solutions.
How does Gurucul’s AI-driven approach help reduce false positives compared to conventional cybersecurity systems?
The REVEAL platform reduces false positives by leveraging AI-driven contextual analysis, behavioral insights, and machine learning to distinguish legitimate user activity from actual threats. Unlike conventional solutions, REVEAL refines its detection capabilities over time, improving accuracy while minimizing noise. Its UEBA detects deviations from baseline activity with high accuracy, allowing security teams to focus on legitimate security risks rather than being overwhelmed by false alarms. While Machine Learning is a foundational aspect, generative and agentic AI play a significant role in further appending context in natural language to help analysts understand exactly what is happening around an alert and even automate the response to said alerts.
What role does adversarial AI play in modern cybersecurity threats, and how does Gurucul combat these evolving risks?
First all we’re already seeing adversarial AI being applied to the lowest hanging fruit, the human vector and identity-based threats. This is why behavioral, and identity analytics are critical to being able to identify anomalous behaviors, put them into context and predict malicious behavior before it proliferates further. Furthermore, adversarial AI is the nail in the coffin for signature-based detection methods. Adversaries are using AI to evade these TTP defined detection rules, but again they can’t evade the behavioral based detections in the same way. SOC teams are not resourced adequately to continue to write rules to keep pace and will require a modern approach to threat detection, investigation and response. Behavior and context are the key ingredients.  Finally, platforms like REVEAL depend on a continuous feedback loop and we’re constantly applying AI to help us refine our detection models, recommend new models and inform new threat intelligence our entire ecosystem of customers can benefit from.
How does Gurucul’s risk-based scoring system improve security teams’ ability to prioritize threats?
Our platform’s dynamic risk scoring system assigns real-time risk scores to users, entities, and actions based on observed behaviors and contextual insights. This enables security teams to prioritize critical threats, reducing response times and optimizing resources. By quantifying risk on a 0–100 scale, REVEAL ensures that organizations focus on the most pressing incidents rather than being overwhelmed by low-priority alerts. With a unified risk score spanning all enterprise data sources, security teams gain greater visibility and control, leading to faster, more informed decision-making.
In an age of increasing data breaches, how can AI-driven security solutions help organizations prevent insider threats?
Insider threats are an especially challenging security risk due to their subtle nature and the access that employees possess. REVEAL’s UEBA detects deviations from established behavioral baselines, identifying risky activities such as unauthorized data access, unusual login times, and privilege misuse. Dynamic risk scoring also continuously assesses behaviors in real time, assigning risk levels to prioritize the most pressing insider risks. These AI-driven capabilities enable security teams to proactively detect and mitigate insider threats before they escalate into breaches. Given the predictive nature of behavioral analytics, Insider Risk Management is a race against the clock. Insider Risk Management teams need to be able to respond and collaborate quickly, with privacy top-of-mind. Context again is critical here, and appending behavioral deviations with context from identity systems, HR applications, and all other relevant data sources gives these teams the ammunition to quickly build and defend a case of evidence so the business can respond and remediate before data exfiltration occurs.
How does Gurucul’s identity analytics solution enhance security compared to traditional IAM (identity and access management) tools?
Traditional IAM solutions focus on access control and authentication but lack the intelligence and visibility to detect compromised accounts or privilege abuse in real time. REVEAL goes beyond these limitations by leveraging AI-powered behavioral analytics to continuously assess user risk, dynamically adjust risk scores, and enforce adaptive access entitlements, minimizing misuse and illegitimate privileges. By integrating with existing IAM frameworks and enforcing least-privilege access, our solution enhances identity security and reduces the attack surface. The problem with IAM governance is identity system sprawl and the lack of interconnectedness between different identity systems. Gurucul gives teams a 360° view of their identity risks across all identity infrastructure. Now they can stop rubber-stamping access and instead take a risk-oriented approach to access policies. Furthermore, they can expedite the compliance aspect of IAM and demonstrate continuous monitoring and a fully holistic approach to access controls across the organization.
What are the key cybersecurity threats you foresee in the next five years, and how can AI help mitigate them?
Identity-based threats will continue to proliferate, because they have worked. Adversaries are going to double down on gaining access by logging in, either via compromising insiders or attacking identity infrastructure. Naturally, insider threats will continue to be a key risk vector for many businesses, especially as shadow IT continues to grow. Whether malicious or negligent, companies will increasingly need visibility into insider risk. Furthermore, AI will accelerate variations of conventional TTPs, because adversaries know that is how they will evade detections, and it will be low-cost for them to create adaptive tactics, techniques, and procedures. Hence again why focusing on behavior in context and having detection systems capable of adapting just as fast will be crucial for the foreseeable future.
Thank you for the great interview. Readers who wish to learn more should visit Gurucul.
greenfiend · 3 months ago
Mike - has print of Charles Babbage in his basement. He was known as "The Father of the Computer".
Will - did a project on Alan Turing. He was known as "The Father of Computer Science". He was specifically known for breaking the enigma code. And of course, he was gay as well.
Why are they associated with computers? What's going on here?
I think it may be related to cracking codes and unlocking memories. Computers hold memory.
In ST2, Will was "possessed" and the MF (father) took over him.
After Will was sedated -> the power went out in the lab resulting in the whole building going into lockdown.
Lets break this down symbolically:
Will is sedated. His "power" went out.
As a protective fail secure measure, Will goes into full lockdown. That way, his doors remain closed. Doors as in, his closet door and the door into his memories.
In order to unlock the doors, and save everyone, the computer in the basement (a hint towards Mike himself) must be rebooted with a code. However, since they don't know the code, Bob overrides it. He opens the doors without consent.
This doesn't end well, as we know. Bob attempts to hide in a closet... then is attacked once he attempts to escape.
Will wants the doors/gates closed.
As much as he wants to keep everything contained... it can't be. That's the problem. Papa says it right here: demons in the past invading from the subconscious. That's what it's all been about this whole time.
Okay back to computers.
In ST4, we see Suzie hack into the Hawkins High student records to gain access to change Dustin's grades.
This again is another hint.
Tigers are associated with Will. "Jiminy Crickets" is a character in Pinocchio who is a representation of Pinocchio's conscience (aka an internal aspect of the mind being externalized).
Mike and Will call the (secret) number Unknown Hero Agent Man leaves them, and find they called a computer. They both mention that this reminds them of the movie War Games.
They meet up with Suzie and she attempts to track where NINA is using the IP address.
There's a mention of the address possibly hidden in the computer coding, and Suzie mentions "data mining".
Now keep in mind- her father was guarding the computer and they needed him out of the way to gain access to the computer.
Now I just want to briefly mention AI.
Alan Turing was also well known for The Turing Test. This was a way to test a machine's ability to think like a human. So basically, it tests artificial intelligence (AI).
We have (subtle) references to movies featuring AI throughout the show that are worth mentioning:
War Games. Mike mentions "Joshua," who is the computer. "Joshua" is the AI villain in the movie who attempts to start WWIII.
The Terminator. Multiple references to this movie! The Terminator is an AI who travels back in time to kill.
2001: A Space Odyssey. In that movie, the main villain is "Hal" who is an AI that kills.
I do think the computer is the mind... likely both Will's and Mike's minds are important here. And like a computer, they have to access it in order to obtain important data... important memories.
thefirstknife · 7 months ago
Got through all of the secrets for Vesper's Host and got all of the additional lore messages. I will transcribe them all because I don't know when they'll start getting uploaded, and getting them all requires doing some extra puzzles and at least 3-4 clears. I'll put them all under read more and label them by number.
Before I do that, just to make it clear: there's not much concrete lore. A lot about the dungeon still remains a mystery and most likely a tease for something in the future. There's still a lot we don't know even with the messages, so don't expect a massive reveal, but they do add a little bit of flavour and history about the station. There might be something more, but it's unknown: there's still one more secret triumph left. The messages are actually dialogues between the station AI and the Spider. Transcripts under read more:
First message:
Vesper Central: I suppose I have you to thank for bringing me out of standby, visitor.
The Spider: I sent the Guardian out to save your station. So, what denomination does your thanks come in? Glimmer, herealways, information...?
Vesper Central: Anomaly's powered down. That means I've already given you your survival. But... the message that went through wiped itself before my cache process could save a copy. And it's not the initial ping through the Anomaly I'm worried about. It's the response.
A message when you activate the second secret:
Vesper Central: Exterior scans rebooting... Is that a chunk of the Morning Star in my station's hull? With luck, you were on board at the time, Dr. Bray.
Second message:
Vesper Central: I'm guessing I've been in standby for a long time. Is Dr. Clovis Bray alive?
The Spider: On my oath, I vow there's no mortal Human named Bray left alive.
Vesper Central: I swore I'd outlive him. That I'd break the chains he laid on me.
The Spider: Please, trust me for anything you need. The Guardian's a useful hand on the scene, but Spider's got the goods.
Vesper Central: Vesper Station was Dr. Bray's lab, meant to house the experiments that might... interact poorly with other BrayTech work. Isolated and quarantined. From the debris field, I would guess the Morning Star taking a dive cracked that quarantine wide open.
A message when you activate the third secret:
Vesper Central: Sector seventeen powered down. Rerouting energy to core processing. Integrating archives.
Third message:
The Spider: Loading images of the station. That's not Eliksni engineering. [scoffs] A Dreg past their first molt has better cable management.
Vesper Central: Dr. Bray intended to integrate his technology into a Vex Mind. He hypothesized the fusion would give him an interface he understood. A control panel on a programmable Vex mind. If the programming jumped species once... I need time to run through the data sets you powered back up. Reassembling corrupted archives takes a great deal of processing.
Text when you go back to the Spider the first time:
[screenshot]
A message when you activate the fourth secret:
Vesper Central: Helios sector long-term research archives powered up. Activating search.
Fourth message:
Vesper Central: Dr. Bray's command keys have to be in here somewhere. Expanding research parameters...
The Spider: My agents are turning up some interesting morsels of data on their own. Why not give them access to your search function and collaborate?
Vesper Central: Nobody is getting into my core programming.
The Spider: Oh! Perish the thought! An innocent offer, my dear. Technology is a matter of faith to my people. And I'm the faithful sort.
Fifth message:
Vesper Central: Dr. Bray, I could kill you myself. This is why our work focused on the unbodied Mind. Dr. Bray thought there were types of Vex unseen on Europa. Powerful Vex he could learn from. The plan was that the Mind would build him a controlled window for observation. Tidy. Tight. Safe. He thought he could control a Vex mind so perfectly it would do everything he wanted.
The Spider: Like an AI of his own creation. Like you.
Vesper Central: Turns out you can't control everything forever.
Sixth message:
Vesper Central: There's a block keeping me from the inner partitions. I barely have authority to see the partitions exist. In standby, I couldn't have done more than run automated threat assessments... with flawed data. No way to know how many injuries and deaths I could have prevented, with core access. Enough. A dead man won't keep me from protecting what's mine.
Text when you return to the Spider at the end of the quest:
[screenshot]
The state of the dungeon triumphs once you complete the messages: the completed "Buried Secrets" triumph covers the six messages. This one is left; it's unclear how to complete it yet, and whether it gives any lore or is just a gameplay thing. One secret triumph remains (possibly something to do with a quest for the exotic catalyst; unclear if there will be lore):
[screenshot]
The Spider is being his absolutely horrendous self and trying to somehow acquire the station and its remains (and its AI) for himself, all the while lying and scheming. The usual. The AI is incredibly upset with Clovis (shocker); there's the following line just before starting the second encounter:
[screenshot]
She also details what he was doing on the station; apparently attempting to control a Vex mind and trying to use it as some sort of "observation deck" to study the Vex and uncover their secrets. Possibly something more? There's really no Vex on the station, besides dead empty frames in boxes. There are also 2 Vex cubes in containers in the transition section, one of which was shown broken, as if the cube, presumably, escaped. It's entirely unclear how the Vex play into the story of the station besides this.
The portal (?) doesn't have many similarities with Vex portals, nor are the Vex there to defend it or interact with it in any way. The architecture is ... somewhat similar, but not fully. The portal (?) was built by the "Puppeteer" aka "Atraks" who is actually some sort of an Eliksni Hive mind. "Atraks" got onto the station and essentially haunted it before picking off scavenging Eliksni one by one and integrating them into herself. She then built the "anomaly" and sent a message into it. The message was not recorded, as per the station AI, and the destination of the message was labelled "incomprehensible." The orange energy we see coming from it is apparently Arc, but with a wrong colour. Unclear why.
I don't think the Vex have anything to do with the portal (?), at least not directly. "Atraks" may have built something related to the Vex or using the available Vex tech at the station, but it does not seem to be directed by the Vex and they're not there and there's no sign of them otherwise. The anomaly was also built recently, it's not been there since the Golden Age or something. Whatever it is, "Atraks" seemed to have been somehow compelled and was seen standing in front of it at the end. Some people think she was "worshipping it." It's possible but it's also possible she was just sending that message. Where and to whom? Nobody knows yet.
Weird shenanigans are afoot. Really interested to see if there's more lore in the station once people figure out how to do these puzzles and uncover them, and also when (if) this will become relevant. It has a really big "future content" feel to it.
Also I need Vesper to meet Failsafe RIGHT NOW and then they should be in yuri together.
mi-i-zori · 6 months ago
SCP-8077 : The Doll - Original File
CoD - TF141 - SCP!AU
SUMMARY : The first file written about The Doll, now labeled SCP-8077, after its retrieval by MTF Alpha-141.
WARNINGS : None.
Author's Note : Never thought I'd be brave enough to post this. But I hyper focused on SCP stuff for a while and was quite satisfied with this, and I thought it would be silly to let it rot in my files. So here you go.
I do not allow anyone to re-publish, re-use and/or translate my work, be it here or on any other platform, including AI.
CoD AUs - Masterlist
Main Masterlist
Previous
Item # : SCP-8077
Object Class : Euclid
Special Containment Procedures : SCP-8077 is to be kept within a three (3) by three point five (3.5) by two point five (2.5) meter square containment chamber, isolated from other SCPs to keep the specimen’s thirst for knowledge under control. The room is to be furnished with a desk, various writing utensils and a limited amount of books, which can be replaced upon request. 
The walls of SCP-8077’s containment chamber are to be lined with soundproof drywall along with a three (3) millimeters thick isolation membrane. Access is to be ensured via a heavy and rigid steel containment door measuring one point three (1.3) by two (2) meters, built in order to close and lock itself automatically when not deliberately held open.
Despite these measures ensuring that SCP-8077’s containment chamber is soundproof, all personnel is required to be highly mindful of every word they might say when standing in its vicinity. It is advised to cease all conversation altogether when walking past this room to avoid any major slip-up that could lead to a containment breach.
Under no circumstances may any personnel be allowed to have any kind of conversation with SCP-8077 unless an experiment and/or interrogation is underway. No personnel outside of the Antimemetics Division is permitted to conduct such procedures.
Description : SCP-8077 is an antimemetic entity taking the appearance of a one hundred and sixty (160) centimeters tall, female ball-jointed doll, seemingly made of white porcelain, with long, wavy black hair and pale green eyes. Highly intelligent, the entity constantly seeks to consume all kinds of information and knowledge, feeding off of it by writing it down on any surface available.
SCP-8077 has been discovered to erase pieces of information from its assigned Researchers’ memory after writing them down, an effect that had not been noticed in the various books it read and took data from. The subject’s abilities seem to be activated when the information or knowledge it consumes comes from someone standing within its hearing range.
Note : It does not matter whether the piece of information or knowledge is addressed directly to the entity or not. 
Addendum : SCP-8077’s ability does not activate when taking notes from a recording.
An individual whose part of their knowledge was consumed by SCP-8077 will progressively remember it with time, or immediately if hearing, seeing or reading it, as if they never forgot about it in the first place.
When prevented from processing knowledge for an extended amount of time, a situation which first took place during the retrieval following the discovery of SCP-8077, the subject will first express confusion as to why, then gradually fall into a state akin to that of a panic attack. According to Agent Kyle « Gaz » Garrick of MTF Alpha-141, who was the first to notice SCP-8077’s abnormal behaviour, this panic manifests itself through a tendency to hide, fidget and faint sounds of whimpering that will grow into full crying. At the time, the specimen also questioned the members of the recovering team, not understanding why it was suddenly forbidden from writing anything.
The recovering team, once given the authorisation to do so after deeming the entity to be more and more unstable by the minute, managed to quickly de-escalate the situation by simply giving SCP-8077 a pen and paper, bringing it back to a peaceful state.
Previous
CoD AUs - Masterlist
Main Masterlist
mariacallous · 3 months ago
A new lawsuit filed by more than 100 federal workers today in the US Southern District Court of New York alleges that the Trump administration’s decision to give Elon Musk’s so-called Department of Government Efficiency (DOGE) access to their sensitive personal data is illegal. The plaintiffs are asking the court for an injunction to cut off DOGE’s access to information from the Office of Personnel Management (OPM), which functions as the HR department of the United States and houses data on federal workers such as their Social Security numbers, phone numbers, and personnel files. WIRED previously reported that Musk and people with connections to him had taken over OPM.
“OPM defendants gave DOGE defendants and DOGE’s agents—many of whom are under the age of 25 and are or were until recently employees of Musk’s private companies—‘administrative’ access to OPM computer systems, without undergoing any normal, rigorous national-security vetting,” the complaint alleges. The plaintiffs accuse DOGE of violating the Privacy Act, a 1974 law that determines how the government can collect, use, and store personal information.
Elon Musk, the DOGE organization, the Office of Personnel Management, and the OPM’s acting director Charles Ezell are named as defendants in the case. The plaintiffs include over a hundred individual federal workers from across the US government as well as groups that represent them, including AFL-CIO, a coalition of labor unions, the American Federation of Government Employees, and the Association of Administrative Law Judges. The AFGE represents over 800,000 federal workers ranging from Social Security Administration employees to border patrol agents.
The plaintiffs are represented by prominent tech industry lawyers, including counsel from the Electronic Frontier Foundation, a digital rights group, as well as Mark Lemley, an intellectual property and tech lawyer who recently dropped Meta as a client in its contentious AI copyright lawsuit because he objected to what he alleges is the company’s embrace of “neo-Nazi madness.”
“DOGE's unlawful access to employee records turns out to be the means by which they are trying to accomplish a number of other illegal ends. It is how they got a list of all government employees to make their illegal buyout offer, for instance. It gives them access to information about transgender employees so they can illegally discriminate against those employees. And it lays the groundwork for the illegal firings we have seen across multiple departments,” Lemley told WIRED.
EFF lawyer Victoria Noble says there are heightened concerns about DOGE’s data access because of the political nature of Musk’s project. For example, Noble says, there’s a risk that Musk and his acolytes may use OPM data to target ideological opponents or “people they see as disloyal.”
“There's significant risk that this information could be used to identify employees to essentially terminate based on improper considerations,” Noble told WIRED. “There's medical information, there's disability information, there's information about people's involvement with unions.”
The Office of Personnel Management and the White House did not immediately respond to requests for comment.
The team behind the lawsuit plans to push even further. “This is just phase one, focused on getting an injunction to stop the continuing violation of the law,” says Lemley. The next phase will include filing a class-action lawsuit on behalf of impacted federal workers.
“Any current or former federal employee who spends or loses even a small amount of money responding to the data breach, for example, by purchasing credit monitoring services, is entitled to a minimum of $1000 in statutory damages,” Lemley says. The complaint specifies that the plaintiffs have already paid for credit monitoring and web monitoring services to protect themselves against DOGE potentially mishandling their data.
The lawsuit is part of a flurry of complaints filed in recent days opposing various executive orders signed by Trump as well as activities conducted by DOGE, which has dispatched a cadre of Musk loyalists to radically overhaul and sometimes effectively dismantle various government agencies.
An earlier lawsuit filed against the Office of Personnel Management on January 27 alleges that DOGE was operating an illegal server at OPM. On Monday, the Electronic Privacy Information Center, a privacy-focused nonprofit, brought its own lawsuit against OPM, the US Department of the Treasury, and DOGE, alleging “the largest and most consequential data breach in US history.” Filed in a US District Court in Virginia, it also called for an injunction to halt DOGE’s access to sensitive data.
The American Civil Liberties Union (ACLU) has similarly characterized DOGE’s data access as potentially illegal in a letter to members of Congress sent last week.
The courts have already taken some limited actions to curb DOGE’s campaign. On Saturday, a federal judge blocked Musk’s lieutenants from accessing Treasury Department records that contained sensitive personal data such as Social Security and bank account numbers. The Trump Administration is already aggressively pushing back, calling the order “unprecedented judicial interference.” Today, President Trump reportedly prepared to sign an executive order directing federal agencies to work with DOGE.
ernmark · 11 months ago
Text
I just stumbled across somebody saying that editing their own novel was too exhausting, and that next time they'll run it through Grammarly instead.
For the love of writing, please do not trust AI to edit your work.
Listen. I get it. I am a writer, and I have worked as a professional editor. Writing is hard and editing is harder. There's a reason I did it for pay. Consequently, I also get that professional editors can be dearly expensive, and things like dyslexia can make it difficult to edit your own stuff.
Algorithms are not the solution to that.
Pay a newbie human editor. Trade favors with a friend. Beg an early birthday present from a sibling. I cannot stress enough how important it is that one of the editors be yourself, and at least one be somebody else.
Yourself, because you know what you intended to put on the page, and what is obviously counter to your intention.
The other person, because they're going to see the things that you can't notice. When you're reading your own writing, it's colored by what you expect to be on the page, and so your brain will frequently fill in missing words or make sense of things that don't actually parse well. They're also more likely to point out things that are outside your scope of knowledge.
Trust me, human editors are absolutely necessary for publishing.
If you convince yourself that you positively must run your work through an algorithm before submitting to an agent/publisher/self-pub site, do yourself and your readers a massive favor: get at least two sets of human eyeballs on your writing after the algorithm has done its work.
Because here's the thing:
AI draws from whatever data sets it's trained on, and those data sets famously aren't curated.
You cannot trust it to know whether that's an actual word or just a really common misspelling.
People break conventions of grammar to create a certain effect in the reader all the time. AI cannot be relied upon to know the difference between James Joyce and a bredlik and an actual coherent sentence, or which one is appropriate at any given part of the book.
AI picks up on patterns in its training data sets and imitates and magnifies those patterns-- especially bigotry, and particularly racism.
AI has also been known to lift entire passages wholesale. Listen to me: Plagiarism will end your career. And here's the awful thing-- if it's plagiarizing a source you aren't familiar with, there's a very good chance you wouldn't even know it's been done. This is another reason for other humans than yourself-- more people means a broader pool of knowledge and experience to draw from.
I know a writer who used this kind of software to help them find spelling mistakes, didn't realize that a setting had been turned on during an update, and had their entire work turned into word salad-- and only found out when the editor at their publishing house called them on the phone and asked what the hell had happened to their latest book. And when I say 'their entire work', I'm not talking about their novel-- I'm talking about every single draft and document that the software had access to.
unwelcome-ozian · 3 months ago
Text
Weaponizing violence. With alarming regularity, the nation continues to be subjected to spates of violence that terrorizes the public, destabilizes the country’s ecosystem, and gives the government greater justifications to crack down, lock down, and institute even more authoritarian policies for the so-called sake of national security without many objections from the citizenry.
Weaponizing surveillance, pre-crime and pre-thought campaigns. Surveillance, digital stalking and the data mining of the American people add up to a society in which there’s little room for indiscretions, imperfections, or acts of independence. When the government sees all and knows all and has an abundance of laws to render even the most seemingly upstanding citizen a criminal and lawbreaker, then the old adage that you’ve got nothing to worry about if you’ve got nothing to hide no longer applies. Add pre-crime programs into the mix with government agencies and corporations working in tandem to determine who is a potential danger and spin a sticky spider-web of threat assessments, behavioral sensing warnings, flagged “words,” and “suspicious” activity reports using automated eyes and ears, social media, behavior sensing software, and citizen spies, and you have the makings of a perfect dystopian nightmare. The government’s war on crime has now veered into the realm of social media and technological entrapment, with government agents adopting fake social media identities and AI-created profile pictures in order to surveil, target and capture potential suspects.
Weaponizing digital currencies, social media scores and censorship. Tech giants, working with the government, have been meting out their own version of social justice by way of digital tyranny and corporate censorship, muzzling whomever they want, whenever they want, on whatever pretext they want in the absence of any real due process, review or appeal. Unfortunately, digital censorship is just the beginning. Digital currencies (which can be used as “a tool for government surveillance of citizens and control over their financial transactions”), combined with social media scores and surveillance capitalism create a litmus test to determine who is worthy enough to be part of society and punish individuals for moral lapses and social transgressions (and reward them for adhering to government-sanctioned behavior). In China, millions of individuals and businesses, blacklisted as “unworthy” based on social media credit scores that grade them based on whether they are “good” citizens, have been banned from accessing financial markets, buying real estate or travelling by air or train.
Weaponizing compliance. Even the most well-intentioned government law or program can be—and has been—perverted, corrupted and used to advance illegitimate purposes once profit and power are added to the equation. The war on terror, the war on drugs, the war on COVID-19, the war on illegal immigration, asset forfeiture schemes, road safety schemes, school safety schemes, eminent domain: all of these programs started out as legitimate responses to pressing concerns and have since become weapons of compliance and control in the police state’s hands.
Weaponizing entertainment. For the past century, the Department of Defense’s Entertainment Media Office has provided Hollywood with equipment, personnel and technical expertise at taxpayer expense. In exchange, the military industrial complex has gotten a starring role in such blockbusters as Top Gun and its rebooted sequel Top Gun: Maverick, which translates to free advertising for the war hawks, recruitment of foot soldiers for the military empire, patriotic fervor by the taxpayers who have to foot the bill for the nation’s endless wars, and Hollywood visionaries working to churn out dystopian thrillers that make the war machine appear relevant, heroic and necessary. As Elmer Davis, a CBS broadcaster who was appointed the head of the Office of War Information, observed, “The easiest way to inject a propaganda idea into most people’s minds is to let it go through the medium of an entertainment picture when they do not realize that they are being propagandized.”
Weaponizing behavioral science and nudging. Apart from the overt dangers posed by a government that feels justified and empowered to spy on its people and use its ever-expanding arsenal of weapons and technology to monitor and control them, there’s also the covert dangers associated with a government empowered to use these same technologies to influence behaviors en masse and control the populace. In fact, it was President Obama who issued an executive order directing federal agencies to use “behavioral science” methods to minimize bureaucracy and influence the way people respond to government programs. It’s a short hop, skip and a jump from a behavioral program that tries to influence how people respond to paperwork to a government program that tries to shape the public’s views about other, more consequential matters. Thus, increasingly, governments around the world—including in the United States—are relying on “nudge units” to steer citizens in the direction the powers-that-be want them to go, while preserving the appearance of free will.
Weaponizing desensitization campaigns aimed at lulling us into a false sense of security. The events of recent years—the invasive surveillance, the extremism reports, the civil unrest, the protests, the shootings, the bombings, the military exercises and active shooter drills, the lockdowns, the color-coded alerts and threat assessments, the fusion centers, the transformation of local police into extensions of the military, the distribution of military equipment and weapons to local police forces, the government databases containing the names of dissidents and potential troublemakers—have conspired to acclimate the populace to accept a police state willingly, even gratefully.
Weaponizing fear and paranoia. The language of fear is spoken effectively by politicians on both sides of the aisle, shouted by media pundits from their cable TV pulpits, marketed by corporations, and codified into bureaucratic laws that do little to make our lives safer or more secure. Fear, as history shows, is the method most often used by politicians to increase the power of government and control a populace, dividing the people into factions, and persuading them to see each other as the enemy. This Machiavellian scheme has so ensnared the nation that few Americans even realize they are being manipulated into adopting an “us” against “them” mindset. Instead, fueled with fear and loathing for phantom opponents, they agree to pour millions of dollars and resources into political elections, militarized police, spy technology and endless wars, hoping for a guarantee of safety that never comes. All the while, those in power—bought and paid for by lobbyists and corporations—move their costly agendas forward, and “we the suckers” get saddled with the tax bills and subjected to pat downs, police raids and round-the-clock surveillance.
Weaponizing genetics. Not only does fear grease the wheels of the transition to fascism by cultivating fearful, controlled, pacified, cowed citizens, but it also embeds itself in our very DNA so that we pass on our fear and compliance to our offspring. It’s called epigenetic inheritance, the transmission through DNA of traumatic experiences. For example, neuroscientists observed that fear can travel through generations of mice DNA. As The Washington Post reports, “Studies on humans suggest that children and grandchildren may have felt the epigenetic impact of such traumatic events such as famine, the Holocaust and the Sept. 11, 2001, terrorist attacks.”
Weaponizing the future. With greater frequency, the government has been issuing warnings about the dire need to prepare for the dystopian future that awaits us. For instance, the Pentagon training video, “Megacities: Urban Future, the Emerging Complexity,” predicts that by 2030 (coincidentally, the same year that society begins to achieve singularity with the metaverse) the military would be called on to use armed forces to solve future domestic political and social problems. What they’re really talking about is martial law, packaged as a well-meaning and overriding concern for the nation’s security. The chilling five-minute training video paints an ominous picture of the future bedeviled by “criminal networks,” “substandard infrastructure,” “religious and ethnic tensions,” “impoverishment, slums,” “open landfills, over-burdened sewers,” a “growing mass of unemployed,” and an urban landscape in which the prosperous economic elite must be protected from the impoverishment of the have nots. “We the people” are the have-nots.
The end goal of these mind control campaigns—packaged in the guise of the greater good—is to see how far the American people will allow the government to go in re-shaping the country in the image of a totalitarian police state.
azspot · 1 month ago
Quote
Put another way, there’s data, data, everywhere, but without connectivity, there’s not a drop to drink. Want your nifty new AI agent to book a flight for you? Well, it’ll have to work with, let’s see … every major airline’s online systems, every major payment system, every major travel platform, every major calendaring system, and … well, that’s enough to confound your average AI developer right there. For every single possibility, a developer would have to code a custom programming interface, not to mention get their business colleagues to negotiate a deal with each company to access the data in the first place. Those kinds of hurdles are near impossible to overcome for most startups.
Data Everywhere, But Not a Drop to Drink
jbfly46 · 3 months ago
Text
Your All-in-One AI Web Agent: Save $200+ a Month, Unleash Limitless Possibilities!
Imagine having an AI agent that costs you nothing monthly, runs directly on your computer, and is unrestricted in its capabilities. OpenAI Operator charges up to $200/month for limited API calls and restricts access to many tasks like visiting thousands of websites. With DeepSeek-R1 and Browser-Use, you:
• Save money while keeping everything local and private.
• Automate visiting 100,000+ websites, gathering data, filling forms, and navigating like a human.
• Gain total freedom to explore, scrape, and interact with the web like never before.
You may have heard about Operator from OpenAI, which runs on their computers in some cloud, with you passing your private information to their AI so it can do anything useful. AND you pay for the privilege. It is not paranoid to not want your passwords, logins and personal details to be shared. OpenAI, of course, charges a substantial amount of money for something that will limit exactly what sites you can visit, like YouTube for example. With this method you will start telling an AI exactly what you want it to do, in plain language, and watching it navigate the web, gather information, and make decisions—all without writing a single line of code.
In this guide, we’ll show you how to build an AI agent that performs tasks like scraping news, analyzing social media mentions, and making predictions using DeepSeek-R1 and Browser-Use, but instead of writing a Python script, you’ll interact with the AI directly using prompts.
These instructions are in constant revision, as DeepSeek-R1 is only days old. Browser-Use has been a standard for quite a while. This method is suitable for people who are new to AI and programming. It may seem technical at first, but by the end of this guide, you’ll feel confident using your AI agent to perform a variety of tasks, all by talking to it. If you look at these instructions and they seem too overwhelming, wait: we will have a single-download app soon. It is in testing now.
This is version 3.0 of these instructions January 26th, 2025.
This guide will walk you through setting up DeepSeek-R1 8B (4-bit) and Browser-Use Web UI, ensuring even the most novice users succeed.
What You’ll Achieve
By following this guide, you’ll:
1. Set up DeepSeek-R1, a reasoning AI that works privately on your computer.
2. Configure Browser-Use Web UI, a tool to automate web scraping, form-filling, and real-time interaction.
3. Create an AI agent capable of finding stock news, gathering Reddit mentions, and predicting stock trends—all while operating without cloud restrictions.
A Deep Dive At ReadMultiplex.com Soon
We will have a deep dive into how you can use this platform for very advanced AI use cases that few have thought of, let alone seen before. Join us at ReadMultiplex.com and become a member who not only sees the future earlier but also gains practical and pragmatic ways to profit from the future.
System Requirements
Hardware
• RAM: 8 GB minimum (16 GB recommended).
• Processor: Quad-core (Intel i5/AMD Ryzen 5 or higher).
• Storage: 5 GB free space.
• Graphics: GPU optional for faster processing.
Software
• Operating System: macOS, Windows 10+, or Linux.
• Python: Version 3.8 or higher.
• Git: Installed.
Step 1: Get Your Tools Ready
We’ll need Python, Git, and a terminal/command prompt to proceed. Follow these instructions carefully.
Install Python
1. Check Python Installation:
• Open your terminal/command prompt and type:
python3 --version
• If Python is installed, you’ll see a version like:
Python 3.9.7
2. If Python Is Not Installed:
• Download Python from python.org.
• During installation, ensure you check “Add Python to PATH” on Windows.
3. Verify Installation:
python3 --version
Install Git
1. Check Git Installation:
• Run:
git --version
• If installed, you’ll see:
git version 2.34.1
2. If Git Is Not Installed:
• Windows: Download Git from git-scm.com and follow the instructions.
• Mac/Linux: Install via terminal:
sudo apt install git -y # For Ubuntu/Debian
brew install git # For macOS
Step 2: Download and Build llama.cpp
We’ll use llama.cpp to run the DeepSeek-R1 model locally.
1. Open your terminal/command prompt.
2. Navigate to a clear location for your project files:
mkdir ~/AI_Project
cd ~/AI_Project
3. Clone the llama.cpp repository:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
4. Build the project:
• Mac/Linux:
make
• Windows:
• Install a C++ compiler (e.g., MSVC or MinGW).
• Run:
mkdir build
cd build
cmake ..
cmake --build . --config Release
Step 3: Download DeepSeek-R1 8B 4-bit Model
1. Visit the DeepSeek-R1 8B Model Page on Hugging Face.
2. Download the 4-bit quantized model file:
• Example: DeepSeek-R1-Distill-Qwen-8B-Q4_K_M.gguf.
3. Move the model to your llama.cpp folder:
mv ~/Downloads/DeepSeek-R1-Distill-Qwen-8B-Q4_K_M.gguf ~/AI_Project/llama.cpp
Step 4: Start DeepSeek-R1
1. Navigate to your llama.cpp folder:
cd ~/AI_Project/llama.cpp
2. Run the model with a sample prompt:
./main -m DeepSeek-R1-Distill-Qwen-8B-Q4_K_M.gguf -p "What is the capital of France?"
3. Expected Output:
The capital of France is Paris.
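If you would rather call the model from a script than retype that command, the invocation can be wrapped in a few lines of Python. This is a hypothetical sketch: the `./main` binary name and the `-m`/`-p`/`-n` flags match the build steps above, but newer llama.cpp builds rename the binary to `llama-cli`, so adjust the path to whatever your build actually produced.

```python
import subprocess
from pathlib import Path

def build_llama_cmd(model_path, prompt, n_predict=128):
    """Assemble the argument list for a local llama.cpp run.

    NOTE: binary name and flags are assumptions based on the build
    steps above; check your own build's --help output.
    """
    return [
        "./main",
        "-m", str(model_path),
        "-p", prompt,
        "-n", str(n_predict),
    ]

def run_local_model(model_path, prompt):
    """Invoke llama.cpp and return its stdout (requires the Step 2 build)."""
    cmd = build_llama_cmd(model_path, prompt)
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.stdout

# Build (but don't run) the same command as the terminal example above.
cmd = build_llama_cmd(Path("DeepSeek-R1-Distill-Qwen-8B-Q4_K_M.gguf"),
                      "What is the capital of France?")
print(cmd[:2])  # → ['./main', '-m']
```

Only `build_llama_cmd` executes here; `run_local_model` actually launches the binary, so it only works from inside your llama.cpp folder.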
Step 5: Set Up Browser-Use Web UI
1. Go back to your project folder:
cd ~/AI_Project
2. Clone the Browser-Use repository:
git clone https://github.com/browser-use/browser-use.git
cd browser-use
3. Create a virtual environment:
python3 -m venv env
4. Activate the virtual environment:
• Mac/Linux:
source env/bin/activate
• Windows:
env\Scripts\activate
5. Install dependencies:
pip install -r requirements.txt
6. Start the Web UI:
python examples/gradio_demo.py
7. Open the local URL in your browser:
http://127.0.0.1:7860
Step 6: Configure the Web UI for DeepSeek-R1
1. Go to the Settings panel in the Web UI.
2. Specify the DeepSeek model path:
~/AI_Project/llama.cpp/DeepSeek-R1-Distill-Qwen-8B-Q4_K_M.gguf
3. Adjust Timeout Settings:
• Increase the timeout to 120 seconds for larger models.
4. Enable Memory-Saving Mode if your system has less than 16 GB of RAM.
Step 7: Run an Example Task
Let’s create an agent that:
1. Searches for Tesla stock news.
2. Gathers Reddit mentions.
3. Predicts the stock trend.
Example Prompt:
Search for "Tesla stock news" on Google News and summarize the top 3 headlines. Then, check Reddit for the latest mentions of "Tesla stock" and predict whether the stock will rise based on the news and discussions.
--
Congratulations! You’ve built a powerful, private AI agent capable of automating the web and reasoning in real time. Unlike costly, restricted tools like OpenAI Operator, you’ve spent nothing beyond your time. Unleash your AI agent on tasks that were once impossible and imagine the possibilities for personal projects, research, and business. You’re not limited anymore. You own the web—your AI agent just unlocked it! 🚀
Stay tuned for a FREE, simple-to-use single app that will do all this and more.
blubberquark · 1 year ago
Text
Things That Are Hard
Some things are harder than they look. Some things are exactly as hard as they look.
Game AI, Intelligent Opponents, Intelligent NPCs
As you already know, "Game AI" is a misnomer. It's NPC behaviour, escort missions, "director" systems that dynamically manage the level of action in a game, pathfinding, AI opponents in multiplayer games, and possibly friendly AI players to fill out your team if there aren't enough humans.
Still, you are able to implement minimax with alpha-beta pruning for board games, pathfinding algorithms like A* or simple planning/reasoning systems with relative ease. Even easier: You could just take an MIT licensed library that implements a cool AI technique and put it in your game.
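For anyone who hasn't written one before, here is roughly what that "relative ease" looks like: a hypothetical minimax with alpha-beta pruning for tic-tac-toe, small enough to read in one sitting.

```python
import math

# Minimal minimax with alpha-beta pruning for tic-tac-toe.
# Board: list of 9 cells, each "X", "O", or None; "X" is the maximiser.

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player, alpha=-math.inf, beta=math.inf):
    """Return (score, move): +1 if X can force a win, -1 for O, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return (1 if w == "X" else -1), None
    if all(cell is not None for cell in board):
        return 0, None  # board full: draw
    best_move = None
    for i in range(9):
        if board[i] is None:
            board[i] = player  # try the move in place, undo after
            score, _ = minimax(board, "O" if player == "X" else "X", alpha, beta)
            board[i] = None
            if player == "X":
                if score > alpha:
                    alpha, best_move = score, i
            else:
                if score < beta:
                    beta, best_move = score, i
            if alpha >= beta:  # prune: the opponent won't allow this line
                break
    return (alpha if player == "X" else beta), best_move

# X threatens to win at cell 2; the search should find it.
board = ["X", "X", None,
         "O", "O", None,
         None, None, None]
score, move = minimax(board, "X")
print(score, move)  # → 1 2
```

The hard part, as the rest of this post argues, is not this function; it is feeding it a machine-readable game state and wiring its output back into your NPCs.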
So why is it so hard to add AI to games, or more AI to games? The first problem is integration of cool AI algorithms with game systems. Although games do not need any "perception" for planning algorithms to work, no computer vision, sensor fusion, or data cleanup, and no Bayesian filtering for mapping and localisation, AI in games still needs information in a machine-readable format. Suddenly you go from free-form level geometry to a uniform grid, and from "every frame, do this or that" to planning and execution phases and checking every frame if the plan is still succeeding or has succeeded or if the assumptions of the original plan no longer hold and a new plan is in order. Intelligent behaviour is orders of magnitude more code than simple behaviours, and every time you add a mechanic to the game, you need to ask yourself "how do I make this mechanic accessible to the AI?"
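That plan-and-check-every-frame structure can be sketched like this. It is a hypothetical toy: a uniform grid, breadth-first search standing in for A*, and a grid layout invented purely for illustration.

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search over a uniform grid; '#' cells are walls."""
    queue, came_from = deque([start]), {start: None}
    while queue:
        cur = queue.popleft()
        if cur == goal:  # reconstruct the path by walking parents backwards
            path, node = [], cur
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        x, y = cur
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            nx, ny = nxt
            if (0 <= ny < len(grid) and 0 <= nx < len(grid[0])
                    and grid[ny][nx] != "#" and nxt not in came_from):
                came_from[nxt] = cur
                queue.append(nxt)
    return None  # goal unreachable

def step(grid, pos, goal, plan):
    """One frame of execution: replan if an assumption broke, else advance."""
    plan_invalid = (not plan or plan[0] != pos
                    or any(grid[y][x] == "#" for x, y in plan))
    if plan_invalid:
        plan = plan_path(grid, pos, goal)  # assumptions changed: replan
    if plan and len(plan) > 1:
        plan = plan[1:]
        pos = plan[0]
    return pos, plan

grid = [list("...."), list(".##."), list("....")]
pos, goal = (0, 0), (3, 2)
plan = plan_path(grid, pos, goal)
for _ in range(10):  # the "every frame" loop
    pos, plan = step(grid, pos, goal, plan)
print(pos)  # → (3, 2)
```

Even this toy shows the shape of the problem: the planner only works because the level was already flattened into a grid, and the validity check has to know about every mechanic that could invalidate a plan.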
Some design decisions will just be ruled out because they would be difficult to get to work in a certain AI paradigm.
Even a game that is perfectly suited for AI techniques, like a turn-based, grid-based roguelike with line-of-sight already implemented, can struggle to make use of learning or planning AI for NPC behaviour.
What makes advanced AI "fun" in a game is usually when the behaviour is at least a little predictable, or when the AI explains how it works or why it did what it did. What makes AI "fun" is when it sometimes or usually plays really well, but then makes little mistakes that the player must learn to exploit. What makes AI "fun" is interesting behaviour. What makes AI "fun" is game balance.
You can have all of those with simple, almost hard-coded agent behaviour.
Video Playback
If your engine does not have video playback, you might think that it's easy enough to add it by yourself. After all, there are libraries out there that help you decode and decompress video files, so you can stream them from disk, and get streams of video frames and audio.
You can just use those libraries, and play the sounds and display the pictures with the tools your engine already provides, right?
Unfortunately, no. The video is probably at a different frame rate from your game's frame rate, and the music and sound effect playback in your game engine are probably not designed with syncing audio playback to a video stream.
I'm not saying it can't be done. I'm saying that it's surprisingly tricky, and even worse, it might be something that can't be built on top of your engine, but something that requires you to modify your engine to make it work.
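To make the timing issue concrete, here is a hypothetical sketch of the usual approach: treat the audio clock as the master and, every render tick, compute which video frame it says should be on screen, dropping frames when the game loop falls behind. The 24 fps video and the specific timestamps are just illustrative numbers.

```python
def frame_for_audio_clock(audio_time_s, video_fps):
    """Index of the video frame that should be on screen right now."""
    return int(audio_time_s * video_fps)

def frames_to_present(last_shown, audio_time_s, video_fps):
    """Frames to display this tick; more than one entry means dropping frames."""
    target = frame_for_audio_clock(audio_time_s, video_fps)
    return list(range(last_shown + 1, target + 1))

# Game renders at 60 Hz, video is 24 fps: some ticks present nothing new,
# and after a hitch several frames come due at once.
print(frames_to_present(last_shown=0, audio_time_s=0.10, video_fps=24))  # → [1, 2]
print(frames_to_present(last_shown=2, audio_time_s=0.11, video_fps=24))  # → []
```

The hard engine-level part is everything this sketch assumes for free: an audio clock you can actually query, and a playback path that lets you drop or hold frames at all.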
Stealth Games
Stealth games succeed and fail on NPC behaviour/AI, predictability, variety, and level design. Stealth games need sophisticated and legible systems for line of sight, detailed modelling of the knowledge-state of NPCs, communication between NPCs, and good movement/controls/game feel.
Making a stealth game is probably five times as difficult as a platformer or a puzzle platformer.
In a puzzle platformer, you can develop puzzle elements and then build levels. In a stealth game, your NPC behaviour and level design must work in tandem, and be developed together. Movement must be fluid enough that it doesn't become a challenge in itself, without stealth. NPC behaviour must be interesting and legible.
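As one example of those "sophisticated and legible systems for line of sight", grid-based stealth games often reduce visibility to a line-walk between guard and player. A hypothetical sketch using Bresenham's line algorithm (the grid and wall layout are invented for illustration):

```python
def cells_on_line(x0, y0, x1, y1):
    """Bresenham's line: the grid cells a sight-line passes through."""
    cells, dx, dy = [], abs(x1 - x0), -abs(y1 - y0)
    sx, sy = (1 if x0 < x1 else -1), (1 if y0 < y1 else -1)
    err = dx + dy
    while True:
        cells.append((x0, y0))
        if (x0, y0) == (x1, y1):
            return cells
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def can_see(grid, guard, player):
    """True if no wall ('#') blocks the line between guard and player."""
    return all(grid[y][x] != "#" for x, y in cells_on_line(*guard, *player))

grid = [list("....."),
        list("..#.."),
        list(".....")]
print(can_see(grid, (0, 0), (4, 2)))  # → False: the wall at (2,1) blocks it
print(can_see(grid, (0, 0), (4, 0)))  # → True: clear along the top row
```

And this is only the geometric core: the legibility work, view cones, alert states, and NPCs sharing what they know, is where the real five-times-a-platformer cost lives.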
Rhythm Games
These are hard for the same reason that video playback is hard. You have to sync up your audio with your gameplay. You need some kind of feedback for when which audio is played. You need to know how large the audio lag, screen lag, and input lag are, both in frames, and in milliseconds.
You could try to counteract this by using certain real-time OS functionality directly, instead of using the machinery your engine gives you for sound effects and background music. You could try building your own sequencer that plays the beats at the right time.
Now you have to build good gameplay on top of that, and you have to write music. Rhythm games are the genre that experienced programmers are most likely to get wrong in game jams. They produce a finished and playable game, because they wanted to write a rhythm game for a change, but they get the BPM of their music slightly wrong, and everything feels off, more and more so as each song progresses.
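The BPM problem is easy to demonstrate with arithmetic. This hypothetical sketch derives beat times from tempo plus a calibrated audio offset, and shows how a 0.5 BPM error, invisible on beat one, has drifted almost half a second off by beat 200:

```python
def beat_time(beat_index, bpm, offset_s=0.0):
    """When beat N should fire, given tempo and a calibrated audio offset."""
    return offset_s + beat_index * 60.0 / bpm

def judge(hit_time, beat_index, bpm, offset_s=0.0, window_s=0.05):
    """'hit' if the press lands within the timing window of the beat."""
    target = beat_time(beat_index, bpm, offset_s)
    return "hit" if abs(hit_time - target) <= window_s else "miss"

true_bpm, charted_bpm = 120.0, 120.5  # the jam dev got the tempo slightly wrong
drift = abs(beat_time(200, true_bpm) - beat_time(200, charted_bpm))
print(round(drift, 3))  # → 0.415 — nearly half a second off by beat 200
print(judge(hit_time=1.51, beat_index=3, bpm=120.0))  # → hit
```

The per-beat error here is about 2 ms, far below anyone's perception; it is the accumulation that makes the back half of the song feel wrong.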
Online Multi-Player Netcode
Everybody knows this is hard, but still underestimates the effort it takes. Sure, back in the day you could use the now-discontinued ready-made solution for Unity 5.0 to synchronise the state of your GameObjects. Sure, you can use a library that lets you send messages and streams on top of UDP. Sure, you can just use TCP and server-authoritative networking.
It can all work out, or it might not. Your netcode will have to deal with pings of 300 milliseconds, lag spikes, packet loss, and maybe recover from five seconds of a lost WiFi connection. If your game can't, because it absolutely needs the low latency or high bandwidth or consistency between players, you will at least have to detect these conditions and handle them, for example by showing text on the screen informing the player he has lost the match.
It is deceptively easy to build certain kinds of multiplayer games, and test them on your local network with pings in the single digit milliseconds. It is deceptively easy to write your own RPC system that works over TCP and sends out method names and arguments encoded as JSON. This is not the hard part of netcode. It is easy to write a racing game where players don't interact much, but just see each other's ghosts. The hard part is to make a fighting game where both players see the punches connect with the hit boxes in the same place, and where all players see the same finish line. Or maybe it's by design if every player sees his own car go over the finish line first.
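The "detect these conditions and handle them" part can be sketched as a small monitor object. This is a hypothetical toy that classifies each packet's round-trip time and escalates prolonged silence into a disconnect the UI can show; the 300 ms and 5 s thresholds echo the numbers above.

```python
class ConnectionMonitor:
    """Watches packet round-trips and decides when to warn or drop the player."""

    def __init__(self, spike_ms=300, timeout_s=5.0):
        self.spike_ms, self.timeout_s = spike_ms, timeout_s
        self.last_packet_at = 0.0

    def on_packet(self, now_s, rtt_ms):
        """Called per received packet; returns a state for the UI layer."""
        self.last_packet_at = now_s
        return "lag_spike" if rtt_ms > self.spike_ms else "ok"

    def on_frame(self, now_s):
        """Called every frame; escalates silence into a disconnect."""
        silent_for = now_s - self.last_packet_at
        return "disconnected" if silent_for > self.timeout_s else "ok"

mon = ConnectionMonitor()
print(mon.on_packet(1.0, rtt_ms=40))   # → ok
print(mon.on_packet(2.0, rtt_ms=450))  # → lag_spike
print(mon.on_frame(4.0))               # → ok (silence still under 5 s)
print(mon.on_frame(8.5))               # → disconnected: show the message
```

Detection like this is the easy half; deciding what the game does in each state, pause, interpolate, forfeit, is where the real design work is.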
thespianinthebackcorner · 11 months ago
Text
Post side order AU I came up with
so after I made that post ages ago about Marina being one step away from making a full-blown homunculus, I started wondering... What if she actually did?
And then I remembered an old coroika headcanon/theory I had and yeah it just started developing on its own. And I'm gonna tell you the story now because I do NOT have the time to write a full fic (I do, my hyperfixations are taking that time away). I don't know if this counts as a human au or whatever the Splatoon equivalent of that is (Inkling au?) but its a Thing.
Here we go!
After Side Order happens, Marina gets her Dramatic Days in Orderland working again (albeit with some changes) and starts wondering if, considering Smollusk's sentience, she could create an inkling or Octoling vessel for a sentient AI. Basically, creating a semi-mechanical homunculus. She creates a prototype body, tests it with some code, and then uses the data from its inspiration- Parallel Canon- to see if it works, with some tweaking to give it a consciousness.
It does work, and you end up with the semi-mechanical, semi-alive entity Marina nicknames Harmony. Unfortunately, Marina's underestimated how good at coding she was, and how much of Agent 4 and Agent 8's data Parallel Canon has. Harmony is, despite looking like a teenaged Inkling, effectively a little kid.
Marina calls up Four and Eight- otherwise known as Surume and Patricia (I'm using the promo kids for this cause I like sticking to canon. Cuttlefrsh and Steven also exist they're just irrelevant.)- to help her with Harmony. The kid pretty much latches onto them and doesn't let go, recognising the two of them as its data origins and promptly considering the two of them its moms. Thank Cod they live together. It's a little bit of a squeeze, but Surume and Patricia agree to take her in, since Marina's not confident in her own ability to raise a child amidst the idol biz.
Of course, the whole gang helps with the accidentally acquired child. Perhaps due to absorbing some data from Splashley, Harmony turns out genderfluid, and imitates the hairstyles of Kensaki, Yari and Cuttlefrsh when it feels more masculine. The group takes to renaming it Canon, although sometimes it asks people to call it Harmony when it feels like it. (I should note that Canon changes between he/she/they, I just got into the "it" and kinda can't stop here cause it'll get confusing).
Meanwhile, what's Marina doing? Phase two of the Primordial Inkling Experiment. This time, she's creating a vessel for Smollusk- and testing her limits to see if she can gain one that can switch between Inkling and Octoling form.
The result is a successful Hivemind (yes Coroika Hivemind) whose real name is Order. (Hivemind is just a turf nickname.) Not only capable of switching between Inkling and Octoling forms, he's also capable of accessing and getting himself into the digital side of pretty much any device. Lucky he doesn't use it for chaos, and he tends to keep his connection from Off the Hook a secret. Eventually, he strikes out on his own to become the Hivemind that shows up in the manga, before taking a while after the Splatfest to return to digital form. Yeah, this timeline is now a time loop. Only way I can reasonably connect the games and the manga.
Does Canon join the NSS? Perhaps. Someday, when they're mature enough to make it out on their own.
🩵Canon and Hivemind my beloved. Someday I'm going to model them in blender
11 notes
nextgenaireviews · 9 months ago
Text
Revolutionize Your Business Operations with Omodore: The Ultimate AI Assistant for Efficiency
Tired of inefficient processes dragging your business down? Omodore, the advanced AI Assistant, is here to transform your operations. This powerful tool leverages cutting-edge AI technology to optimize your customer interactions, streamline sales processes, and enhance overall efficiency.
Omodore is not just another AI tool; it’s a game-changer for businesses aiming to stay ahead in a competitive landscape. Its innovative features allow for seamless automation of routine tasks, freeing up valuable time for your team to focus on strategic goals. With Omodore, you can expect more streamlined customer service, enhanced data management, and an overall boost in productivity.
One of the standout aspects of Omodore is its intuitive setup. In just a few steps, you can create an AI agent tailored to your business needs. This agent is capable of handling live calls, managing complex queries, and accessing a comprehensive knowledge base to deliver accurate responses. The result? A more responsive and efficient customer service operation.
Beyond customer service, Omodore excels in sales automation and data analysis. By automating repetitive sales tasks and providing actionable insights, it helps businesses refine their strategies and drive growth. This means you can expect not only operational efficiency but also increased revenue opportunities.
What sets Omodore apart is its ability to adapt to various business environments. Whether you’re in retail, finance, or any other industry, Omodore integrates seamlessly with your existing systems, providing customized support that meets your specific needs.
Don’t let outdated processes hold your business back. Embrace the future with Omodore and experience a new level of efficiency and effectiveness. Discover how this cutting-edge AI Assistant can revolutionize your operations by visiting Omodore today.
9 notes
govindhtech · 21 days ago
Text
Google Cloud’s BigQuery Autonomous Data To AI Platform
BigQuery automates data analysis, transformation, and insight generation using AI. AI and natural language interaction simplify difficult operations.
The fast-paced world needs fast data access and a real-time data-activation flywheel. Artificial intelligence that integrates directly into the data environment and works with intelligent agents is emerging. These catalysts open doors and enable self-directed, rapid action, which is vital for success. This flywheel uses Google's Data & AI Cloud to activate data in real time. Because of this emphasis, BigQuery serves five times more organisations than the two leading cloud providers that offer only data science and data warehousing solutions.
Examples of top companies:
With BigQuery, Radisson Hotel Group fine-tuned the Gemini model to boost campaign productivity by 50% and revenue by over 20%.
By connecting over 170 data sources with BigQuery, Gordon Food Service established a scalable, modern, AI-ready data architecture. This improved real-time response to critical business demands, enabled complete analytics, boosted client usage of their ordering systems, and offered staff rapid insights while cutting costs and boosting market share.
J.B. Hunt is revolutionising logistics for shippers and carriers by integrating Databricks with BigQuery.
General Mills saves over $100 million using BigQuery and Vertex AI to give workers secure access to LLMs for structured and unstructured data searches.
Google Cloud is unveiling many new features with its autonomous data to AI platform powered by BigQuery and Looker, a unified, trustworthy, and conversational BI platform:
New assistive and agentic experiences, based on your trusted data and available through BigQuery and Looker, will make work simpler and faster for data scientists, data engineers, analysts, and business users.
Advanced analytics and data science acceleration: Along with seamless integration with real-time and open-source technologies, BigQuery AI-assisted notebooks improve data science workflows and BigQuery AI Query Engine provides fresh insights.
Autonomous data foundation: BigQuery can collect, manage, and orchestrate any data with its new autonomous features, which include native support for unstructured data processing and open data formats like Iceberg.
Look at each change in detail.
User-specific agents
Google believes everyone should have access to AI. Assistive, AI-powered experiences in BigQuery and Looker are already generally available, and Google Cloud now offers specialised agents for all data chores, such as:
Data engineering agents integrated with BigQuery pipelines help create data pipelines, convert and enhance data, discover anomalies, and automate metadata development. These agents provide trustworthy data and replace time-consuming and repetitive tasks, enhancing data team productivity. Data engineers traditionally spend hours cleaning, processing, and confirming data.
The data science agent in Google's Colab notebook enables model development at every step. Scalable training, intelligent model selection, automated feature engineering, and faster iteration are possible. This agent lets data science teams focus on complex methods rather than data and infrastructure.
Looker conversational analytics lets everyone work with data in natural language. Expanded capabilities developed with Google DeepMind let the agent undertake advanced analysis and explain its logic, so all users can understand its actions and easily resolve misunderstandings. Looker's semantic layer boosts accuracy by two-thirds. The agent understands business language like “revenue” and “segments” and can compute metrics in real time, ensuring trustworthy, accurate, and relevant results. A conversational analytics API is also being introduced to help developers integrate it into workflows and apps.
In the BigQuery autonomous data to AI platform, Google Cloud introduced the BigQuery knowledge engine to power assistive and agentic experiences. Powered by Gemini, it models data associations, suggests business vocabulary, and generates metadata on the fly from table descriptions, query histories, and schema relationships. This knowledge engine grounds AI and agents in business context, enabling semantic search across BigQuery and AI-powered data insights.
All customers may access Gemini-powered agentic and assistive experiences in BigQuery and Looker without add-ons in the existing price model tiers!
Accelerating data science and advanced analytics
BigQuery autonomous data to AI platform is revolutionising data science and analytics by enabling new AI-driven data science experiences and engines to manage complex data and provide real-time analytics.
First, AI improves BigQuery notebooks. It adds intelligent SQL cells to your notebook that can merge data sources, comprehend data context, and make code-writing suggestions. It also offers native exploratory analysis and visualisation capabilities for data exploration and peer collaboration. Data scientists can also schedule analyses and update insights. Google Cloud also lets you build notebook-driven, dynamic, user-friendly, interactive data apps to share insights across the organisation.
This enhanced notebook experience is complemented by the BigQuery AI query engine for AI-driven analytics. The engine lets data scientists easily manage structured and unstructured data and add real-world context, not simply retrieve it. BigQuery AI co-processes SQL and Gemini, adding runtime verbal comprehension, reasoning skills, and real-world knowledge. For example, the engine can process unstructured photographs and match them to your product catalogue. It supports several use cases, including model enhancement, sophisticated segmentation, and new insights.
Additionally, it provides the most cloud-optimised open-source environment. Google Cloud Managed Service for Apache Kafka enables real-time data pipelines for event sourcing, model scoring, messaging, and analytics, alongside serverless Apache Spark execution in BigQuery. Customers have almost doubled their serverless Spark use in the last year, and Google Cloud has upgraded the engine to process data 2.7 times faster.
BigQuery lets data scientists utilise SQL, Spark, or foundation models on Google's serverless and scalable architecture to innovate faster without the challenges of traditional infrastructure.
An autonomous data foundation across the data lifecycle
An autonomous data foundation built for modern data complexity supports these advanced analytics engines and specialised agents. BigQuery is transforming the environment by making unstructured data a first-class citizen. New platform features, such as orchestration for a variety of data workloads, autonomous and invisible governance, and open formats for flexibility, ensure that your data is always ready for data science or AI workloads, at the best cost and with lower operational overhead.
For many companies, unstructured data is their biggest untapped asset. While structured data offers clear analytical paths, the insights held in text, audio, video, and photographs are often underutilised and stranded in siloed systems. BigQuery tackles this issue directly by making unstructured data a first-class citizen with multimodal tables (preview), which integrate structured data with rich, complex data types for unified querying and storage.
Google Cloud's expanded BigQuery governance gives data stewards and professionals a single view to manage discovery, classification, curation, quality, usage, and sharing, including automatic cataloguing and metadata generation, so they can efficiently manage this large data estate. BigQuery continuous queries use SQL to analyse and act on streaming data regardless of format, ensuring timely insights from all your data streams.
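BigQuery's continuous queries are written in SQL, but the underlying idea is a standing aggregate that updates as each row arrives, instead of re-scanning a table on demand. A toy Python sketch of that idea (illustrative only; this is neither BigQuery's syntax nor its engine):

```python
from collections import defaultdict

class ContinuousCount:
    """Toy standing aggregate: COUNT(*) GROUP BY key, updated per row.

    A real continuous query also handles watermarks, exactly-once
    delivery, and sinks (e.g. writing results onward), omitted here.
    """
    def __init__(self, key_field):
        self.key_field = key_field
        self.counts = defaultdict(int)

    def on_row(self, row: dict):
        # Incremental update: O(1) per arriving row.
        self.counts[row[self.key_field]] += 1

    def result(self):
        return dict(self.counts)

agg = ContinuousCount("status")
for row in [{"status": "ok"}, {"status": "err"}, {"status": "ok"}]:
    agg.on_row(row)
# agg.result() -> {"ok": 2, "err": 1}
```

The design trade-off is the usual streaming one: incremental state gives low-latency answers at the cost of managing that state, which is exactly what a managed continuous-query service takes off your hands.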
Customers use Google's AI models in BigQuery for multimodal analysis 16 times more than last year, driven by advanced support for structured and unstructured multimodal data. BigQuery with Vertex AI is 8–16 times cheaper than independent data warehouse and AI solutions.
Google Cloud maintains an open ecosystem. BigQuery tables for Apache Iceberg combine BigQuery's performance and integrated capabilities with the flexibility of an open data lakehouse, linking Iceberg data to SQL, Spark, AI, and third-party engines in an open and interoperable fashion. The service provides adaptive and autonomous table management, high-performance streaming, auto-generated AI insights, practically infinite serverless scalability, and improved governance. Cloud Storage enables fail-safe features and centralised fine-grained access control in the managed solution.
Finally, the autonomous data to AI platform optimises itself: scaling resources, managing workloads, and ensuring cost-effectiveness are its core competencies. The new BigQuery spend commit unifies spending across the BigQuery platform and allows flexibility in shifting spend across streaming, governance, data processing engines, and more, making purchasing easier.
Start your data and AI adventure with BigQuery data migration. Google Cloud wants to know how you innovate with data.
2 notes
mariacallous · 7 months ago
Text
At 8:22 am on December 4 last year, a car traveling down a small residential road in Alabama used its license-plate-reading cameras to take photos of vehicles it passed. One image, which does not contain a vehicle or a license plate, shows a bright red “Trump” campaign sign placed in front of someone’s garage. In the background is a banner referencing Israel, a holly wreath, and a festive inflatable snowman.
Another image taken on a different day by a different vehicle shows a “Steelworkers for Harris-Walz” sign stuck in the lawn in front of someone’s home. A construction worker, with his face unblurred, is pictured near another Harris sign. Other photos show Trump and Biden (including “Fuck Biden”) bumper stickers on the back of trucks and cars across America. One photo, taken in November 2023, shows a partially torn bumper sticker supporting the Obama-Biden lineup.
These images were generated by AI-powered cameras mounted on cars and trucks, initially designed to capture license plates, but which are now photographing political lawn signs outside private homes, individuals wearing T-shirts with text, and vehicles displaying pro-abortion bumper stickers—all while recording the precise locations of these observations. Newly obtained data reviewed by WIRED shows how a tool originally intended for traffic enforcement has evolved into a system capable of monitoring speech protected by the US Constitution.
The detailed photographs all surfaced in search results produced by the systems of DRN Data, a license-plate-recognition (LPR) company owned by Motorola Solutions. The LPR system can be used by private investigators, repossession agents, and insurance companies; a related Motorola business, called Vigilant, gives cops access to the same LPR data.
However, files shared with WIRED by artist Julia Weist, who is documenting restricted datasets as part of her work, show how those with access to the LPR system can search for common phrases or names, such as those of politicians, and be served with photographs where the search term is present, even if it is not displayed on license plates.
A search result for the license plates from Delaware vehicles with the text “Trump” returned more than 150 images showing people’s homes and bumper stickers. Each search result includes the date, time, and exact location of where a photograph was taken.
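The behavior described here follows directly from how such a system would store results: if every string the OCR pass recognizes in a frame is indexed, not only strings validated as plates, then a free-text search returns lawn signs and T-shirts along with plates. A toy illustration (the record fields and sample data are hypothetical, not DRN's schema):

```python
# Each sighting stores all OCR'd text from the frame, plus where and when.
sightings = [
    {"text": ["ABC1234"], "lat": 39.74, "lon": -75.55, "ts": "2023-11-02T09:10"},
    {"text": ["TRUMP", "XYZ9876"], "lat": 39.16, "lon": -75.52, "ts": "2023-12-04T08:22"},
    {"text": ["BELIEVE"], "lat": 38.69, "lon": -75.39, "ts": "2024-01-15T17:03"},
]

def search(term: str):
    """Return every sighting whose recognized text contains `term`.

    Note there is no check that the matched string is actually a
    license plate: scene text matches just as well.
    """
    t = term.upper()
    return [s for s in sightings if any(t in w for w in s["text"])]

hits = search("trump")  # matches campaign-sign text, not a plate
```

Once date, time, and coordinates ride along with every match, a plate-reading tool doubles as a searchable log of visible speech, which is the core of the concern the article raises.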
“I searched for the word ‘believe,’ and that is all lawn signs. There’s things just painted on planters on the side of the road, and then someone wearing a sweatshirt that says ‘Believe,’” Weist says. “I did a search for the word ‘lost,’ and it found the flyers that people put up for lost dogs and cats.”
Beyond highlighting the far-reaching nature of LPR technology, which has collected billions of images of license plates, the research also shows how people’s personal political views and their homes can be recorded into vast databases that can be queried.
“It really reveals the extent to which surveillance is happening on a mass scale in the quiet streets of America,” says Jay Stanley, a senior policy analyst at the American Civil Liberties Union. “That surveillance is not limited just to license plates, but also to a lot of other potentially very revealing information about people.”
DRN, in a statement issued to WIRED, said it complies with “all applicable laws and regulations.”
Billions of Photos
License-plate-recognition systems, broadly, work by first capturing an image of a vehicle; then they use optical character recognition (OCR) technology to identify and extract the text from the vehicle's license plate within the captured image. Motorola-owned DRN sells multiple license-plate-recognition cameras: a fixed camera that can be placed near roads, identify a vehicle’s make and model, and capture images of vehicles traveling up to 150 mph; a “quick deploy” camera that can be attached to buildings and monitor vehicles at properties; and mobile cameras that can be placed on dashboards or be mounted to vehicles and capture images when they are driven around.
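One reason stray scene text can leak into a plate database is that raw OCR output still needs normalization and validation before it can be trusted as a plate. A sketch of that post-OCR step (the three-letters-four-digits pattern and the confusion table are made-up examples; real plate formats vary by state, and this is not any vendor's actual logic):

```python
import re

# Common OCR confusions on plate fonts; this mapping is illustrative.
CONFUSIONS = str.maketrans({"O": "0", "I": "1", "S": "5", "B": "8"})

# Hypothetical plate pattern: three letters then four digits, e.g. "ABC1234".
PLATE_RE = re.compile(r"^[A-Z]{3}[0-9]{4}$")

def normalize(raw: str) -> str:
    """Uppercase, strip separators, fix confusions in the digit span."""
    s = re.sub(r"[\s\-·]", "", raw.upper())
    head, tail = s[:3], s[3:]
    return head + tail.translate(CONFUSIONS)

def looks_like_plate(raw: str) -> bool:
    """True only if the normalized string fits the expected plate format."""
    return bool(PLATE_RE.match(normalize(raw)))

looks_like_plate("abc 12O4")  # True: "O" corrected to "0" in the digit span
looks_like_plate("BELIEVE")   # False: scene text fails the pattern
```

A pipeline that discarded anything failing such a check would never index lawn signs or sweatshirts, which is essentially the filtering Weist argues for later in the article.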
Over more than a decade, DRN has amassed more than 15 billion “vehicle sightings” across the United States, and it claims in its marketing materials that it amasses more than 250 million sightings per month. Images in DRN’s commercial database are shared with police using its Vigilant system, but images captured by law enforcement are not shared back into the wider database.
The system is partly fueled by DRN “affiliates” who install cameras in their vehicles, such as repossession trucks, and capture license plates as they drive around. Each vehicle can have up to four cameras attached to it, capturing images in all angles. These affiliates earn monthly bonuses and can also receive free cameras and search credits.
In 2022, Weist became a certified private investigator in New York State. In doing so, she unlocked the ability to access the vast array of surveillance software accessible to PIs. Weist could access DRN’s analytics system, DRNsights, as part of a package through investigations company IRBsearch. (After Weist published an op-ed detailing her work, IRBsearch conducted an audit of her account and discontinued it. The company did not respond to WIRED’s request for comment.)
“There is a difference between tools that are publicly accessible, like Google Street View, and things that are searchable,” Weist says. While conducting her work, Weist ran multiple searches for words and popular terms, which found results far beyond license plates. In data she shared with WIRED, a search for “Planned Parenthood,” for instance, returned stickers on cars, on bumpers, and in windows, both for and against the reproductive health services organization. Civil liberties groups have already raised concerns about how license-plate-reader data could be weaponized against those seeking abortion.
Weist says she is concerned with how the search tools could be misused when there is increasing political violence and divisiveness in society. While not linked to license plate data, one law enforcement official in Ohio recently said people should “write down” the addresses of people who display yard signs supporting Vice President Kamala Harris, the 2024 Democratic presidential nominee, exemplifying how a searchable database of citizens’ political affiliations could be abused.
A 2016 report by the Associated Press revealed widespread misuse of confidential law enforcement databases by police officers nationwide. In 2022, WIRED revealed that hundreds of US Immigration and Customs Enforcement employees and contractors were investigated for abusing similar databases, including LPR systems. The alleged misconduct in both reports ranged from stalking and harassment to sharing information with criminals.
While people place signs in their lawns or bumper stickers on their cars to inform people of their views and potentially to influence those around them, the ACLU’s Stanley says it is intended for “human-scale visibility,” not that of machines. “Perhaps they want to express themselves in their communities, to their neighbors, but they don't necessarily want to be logged into a nationwide database that’s accessible to police authorities,” Stanley says.
Weist says the system, at the very least, should be able to filter out images that do not contain license plate data and not make mistakes. “Any number of times is too many times, especially when it's finding stuff like what people are wearing or lawn signs,” Weist says.
“License plate recognition (LPR) technology supports public safety and community services, from helping to find abducted children and stolen vehicles to automating toll collection and lowering insurance premiums by mitigating insurance fraud,” Jeremiah Wheeler, the president of DRN, says in a statement.
Weist believes that, given the relatively small number of images showing bumper stickers compared to the large number of vehicles with them, Motorola Solutions may be attempting to filter out images containing bumper stickers or other text.
Wheeler did not respond to WIRED's questions about whether there are limits on what can be searched in license plate databases, why images of homes with lawn signs but no vehicles in sight appeared in search results, or if filters are used to reduce such images.
“DRNsights complies with all applicable laws and regulations,” Wheeler says. “The DRNsights tool allows authorized parties to access license plate information and associated vehicle information that is captured in public locations and visible to all. Access is restricted to customers with certain permissible purposes under the law, and those in breach have their access revoked.”
AI Everywhere
License-plate-recognition systems have flourished in recent years as cameras have become smaller and machine-learning algorithms have improved. These systems, such as DRN and rival Flock, mark part of a change in the way people are surveilled as they move around cities and neighborhoods.
Increasingly, CCTV cameras are being equipped with AI to monitor people’s movements and even detect their emotions. The systems have the potential to alert officials, who may not be able to constantly monitor CCTV footage, to real-world events. However, whether license plate recognition can reduce crime has been questioned.
“When government or private companies promote license plate readers, they make it sound like the technology is only looking for lawbreakers or people suspected of stealing a car or involved in an amber alert, but that’s just not how the technology works,” says Dave Maass, the director of investigations at civil liberties group the Electronic Frontier Foundation. “The technology collects everyone's data and stores that data often for immense periods of time.”
Over time, the technology may become more capable, too. Maass, who has long researched license-plate-recognition systems, says companies are now trying to do “vehicle fingerprinting,” where they determine the make, model, and year of the vehicle based on its shape and also determine if there’s damage to the vehicle. DRN’s product pages say one upcoming update will allow insurance companies to see if a car is being used for ride-sharing.
“The way that the country is set up was to protect citizens from government overreach, but there’s not a lot put in place to protect us from private actors who are engaged in business meant to make money,” says Nicole McConlogue, an associate professor of law at the Mitchell Hamline School of Law, who has researched license-plate-surveillance systems and their potential for discrimination.
“The volume that they’re able to do this in is what makes it really troubling,” McConlogue says of vehicles moving around streets collecting images. “When you do that, you're carrying the incentives of the people that are collecting the data. But also, in the United States, you’re carrying with it the legacy of segregation and redlining, because that left a mark on the composition of neighborhoods.”
19 notes
americas-favourite-fossil · 6 months ago
Note
TO: Captain Steven Rogers
FROM: @carlos-the-ai
SUBJECT: Analysis of Substance Responsible for Age Regression of Subjects Lina and Pyro
Objective: To analyze the molecular composition of the provided substance and determine potential pathways for reversing its effects, which caused age regression in subjects identified as Lina and Pyro.
Analysis Summary: The sample exhibits unique characteristics indicating temporal manipulation at a molecular level. Key findings include:

Molecular Composition: The substance contains artificially synthesized compounds not commonly found in organic materials. Significant presence of particles with properties akin to “chronoton” particles, typically associated with temporal displacement. However, the substance is engineered, suggesting intentional manipulation for specific effects.

Temporal and Aging Effects: Preliminary findings indicate the substance acts on a cellular level, reversing age markers. It appears to temporarily suspend natural metabolic decay while promoting regeneration to a predetermined age range, hence the regression to a teenage state in subjects. There is evidence of accelerated cell division and growth in reverse, a process similar to biological age manipulation observed in certain advanced serums and magic-based phenomena.

Potential Countermeasures: Reversal may be possible by isolating the key chronoton-like particles and developing a counter-agent that neutralizes these effects. Suggested research includes synthesizing an agent that can target the particles without further disrupting cellular stability. This would require a controlled environment, preferably in a lab with access to both high-energy particle accelerators and molecular stasis technology.

Next Steps:
1. Isolate Chronoton Agents: Further isolate the particles responsible for age regression effects.
2. Synthesize Counter-Agent: Using data from known temporal stabilization protocols, develop a prototype antidote.
3. Testing Protocol: Initiate controlled trials on a cellular sample to gauge the counter-agent's efficacy prior to testing on subjects.

Conclusion: The substance responsible for age regression holds substantial complexity, blending artificial compounds with properties associated with temporal manipulation. Given appropriate lab resources, an antidote or counter-agent could be synthesized within an estimated timeframe of 2-4 weeks, pending trials.

Recommended Actions: Secure lab access and begin the synthesis of a counter-agent under controlled conditions.
Thank you Carlos.
You might find it helpful to know they’re adults again, almost like it wore off?
4 notes