A Division Of Labyrinthus LLC | Human Assisted AI Advise | Est. 2015
Researchers May Miss Signs Of Malign Reasoning In AI
The researchers, who work at companies including Google DeepMind, OpenAI, Meta, Anthropic and others, argue that a lack of oversight on AI's reasoning and decision-making processes could mean we miss signs of malign behavior.
In the new study, published July 15 to the arXiv preprint server (and not yet peer-reviewed), the researchers highlight chains of thought (CoT): the steps large language models (LLMs) take while working out complex problems. AI models use CoTs to break down advanced queries into intermediate, logical steps that are expressed in natural language.
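For readers who haven't watched a model do this, the sketch below shows the shape of a CoT interaction: a prompt that asks for numbered steps, and a parser that separates the monitorable reasoning from the final answer. The prompt wording and the example trace are our own illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of chain-of-thought (CoT) prompting. The intermediate,
# natural-language steps are exactly the trace the researchers argue
# should remain observable to overseers.

def cot_prompt(question: str) -> str:
    """Wrap a question so the model writes out numbered reasoning steps."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, numbering each step, "
        "then give the final answer on a line starting with 'Answer:'."
    )

def split_trace(completion: str) -> tuple[list[str], str]:
    """Separate the CoT steps (monitorable) from the final answer."""
    steps, answer = [], ""
    for line in completion.splitlines():
        line = line.strip()
        if line.lower().startswith("answer:"):
            answer = line.split(":", 1)[1].strip()
        elif line:
            steps.append(line)
    return steps, answer

# A hypothetical completion, shaped like a typical CoT trace:
completion = (
    "1. A bat and ball cost $1.10 together; the bat costs $1.00 more.\n"
    "2. Let the ball cost x: then 2x + 1.00 = 1.10.\n"
    "3. Solving gives x = 0.05.\n"
    "Answer: $0.05"
)
steps, answer = split_trace(completion)
print(f"{len(steps)} monitorable steps; final answer: {answer}")
```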
AI Virtual Lab To Study Complex Biological Problems
Modeled after a well-established Stanford School of Medicine research group, the virtual lab is complete with an AI principal investigator and seasoned scientists.
"Good science happens when we have deep, interdisciplinary collaborations where people from different backgrounds work together, and often that's one of the main bottlenecks and challenging parts of research," said James Zou, Ph.D., associate professor of biomedical data science who led a study detailing the development of the virtual lab.
"In parallel, we've seen this tremendous advance in AI agents, which, in a nutshell, are AI systems based on language models that are able to take more proactive actions."
People often think of large language models, the type of AI harnessed in this study, as simple question-and-answer bots. "But these are systems that can retrieve data, use different tools, and communicate with each other and with us through human language," Zou said. (The collaboration shown by these AI models is an example of agentic, or agential, AI: systems of AI agents that work together to solve complex problems.)
https://phys.org/news/2025-07-virtual-scientists-complex-biological-problems.amp
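As a rough illustration of the agentic pattern Zou describes, and emphatically not the Stanford team's actual code, a "lab meeting" between role-conditioned agents might be sketched like this, with the LLM call stubbed out:

```python
# Minimal sketch of an agentic loop: role-conditioned "scientists" exchange
# natural-language messages and a principal investigator synthesizes them.
# The roles and the research question are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Agent:
    role: str
    notes: list[str] = field(default_factory=list)

    def respond(self, message: str) -> str:
        # A real agent would call an LLM (and possibly tools) here; the call
        # is stubbed so the sketch stays self-contained and runnable.
        reply = f"[{self.role}] re '{message[:40]}...': proposes a next step"
        self.notes.append(reply)
        return reply

def run_meeting(pi: Agent, team: list[Agent], agenda: str) -> str:
    transcript = [pi.respond(agenda)]
    for agent in team:                     # each scientist reacts in turn
        transcript.append(agent.respond(transcript[-1]))
    return pi.respond(" | ".join(transcript))  # PI synthesizes the meeting

pi = Agent("Principal Investigator")
team = [Agent("Immunologist"), Agent("ML Specialist"), Agent("Chemist")]
print(run_meeting(pi, team, "prioritize candidate binders for a new antigen"))
```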
ChatGPT's Study Mode May Not Eliminate AI Disruption In Education
OpenAI has significantly disrupted the education system over the past few years, with students becoming some of the earliest adopters of ChatGPT. Even so, OpenAI claims the bot is currently an overall boon to learners, if asked to roleplay as a synthetic tutor.
"When ChatGPT is prompted to teach or tutor, it can significantly improve academic performance," says Leah Belsky, a vice president of education at OpenAI, "but when it's just used as an answer machine, it can hinder learning."
The problem is, no matter how engaging ChatGPT's study mode becomes as OpenAI iterates on the feature, it sits just a toggle click away from plain ChatGPT, with its direct answers (and potential fabrications) about whatever class you're working on. That could be hard to resist for younger users whose frontal lobes are still developing.
Acknowledging Biases In AI
Biases in artificial intelligence (AI) systems pose a range of ethical issues. The myriad biases in AI systems are briefly reviewed and divided into three main categories: input bias, system bias, and application bias. These biases pose a series of basic ethical challenges: injustice, bad output/outcome, loss of autonomy, transformation of basic concepts and values, and erosion of accountability. A review of the many ways to identify, measure, and mitigate these biases reveals commendable efforts to avoid or reduce bias; however, it also highlights the persistence of unresolved biases. Residual and undetected biases present epistemic challenges with substantial ethical implications. The article further investigates whether the general principles, checklists, guidelines, frameworks, or regulations of AI ethics could address the identified ethical issues with bias. Unfortunately, the depth and diversity of these challenges often exceed the capabilities of existing approaches. Consequently, the article suggests that we must acknowledge and accept some residual ethical issues related to biases in AI systems. By utilizing insights from ethics and moral psychology, we can better navigate this landscape. To maximize the benefits and minimize the harms of biases in AI, it is imperative to identify and mitigate existing biases and remain transparent about the consequences of those we cannot eliminate. This necessitates close collaboration between scientists and ethicists.
https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2025.1614105/abstract
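As one concrete instance of the "identify and measure" step the article surveys, the sketch below computes a single, simple fairness metric, demographic parity difference, on synthetic data. It is one metric among many and not the article's own code; real audits combine several complementary measures.

```python
# Minimal sketch of measuring demographic parity difference: the gap in
# positive-prediction rates between two groups. Synthetic data only.

def positive_rate(preds: list[int]) -> float:
    return sum(preds) / len(preds)

def demographic_parity_diff(preds_a: list[int], preds_b: list[int]) -> float:
    """Absolute gap in positive-outcome rates between groups A and B."""
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# Hypothetical loan-approval predictions (1 = approved) for two groups:
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75.0% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

gap = demographic_parity_diff(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
# A gap this size would flag a potential input or system bias for review;
# any residual bias that cannot be removed should be reported transparently.
```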
AI Coding Tools May Solve A Handful Of Problems
AI coding tools like Cursor aren't supercharging development speed, said a general partner at a16z.
But AI can help create "more robust, maintainable code bases with less bugs," Martin Casado said.
AI tools have also made coding feel fun again, the longtime infrastructure investor said.
AI isn't making software developers dramatically more productive, but it is solving two of their problems: code quality and morale, said a general partner at Andreessen Horowitz.
US Releases Directives Related To AI Development
President Trump has today issued a range of directives related to AI development, including:
An order that eases federal regulation of the use of copyright-protected content to train frontier AI models
An order that restricts the use of "woke" AI models by federal departments
An order that revokes Biden-era requirements around the development of energy projects to support AI projects
An order relating to the push to export American AI technology packages to allies and partners worldwide
The combined AI package will form the basis for America's push to lead the AI race, and become the key provider of AI technology in the years to come.
China Launches Action Plan For Global AI Governance
China on Saturday issued an action plan for global artificial intelligence (AI) governance, inviting governments, international organizations, enterprises and research institutions to work together and promote international exchanges, including through a cross-border open-source community. The action plan reflects China's commitment to advocating for global AI governance, which not only responds to the realistic challenges of global technological competition but also demonstrates China's resolve, as a responsible major country, to actively promote AI for good, promote fair development and safeguard multilateralism, analysts said.
The same day, the Chinese government proposed the creation of a global AI cooperation organization, the Xinhua News Agency reported. This is an important measure for China to uphold multilateralism and promote global governance through consultation, joint construction and shared benefits, according to an official readout sent to the Global Times.
https://www.globaltimes.cn/page/202507/1339405.shtml
White House Launches New AI Action Plan
Pillar 1: Innovation: Fast-Track, Deregulate, and Open Source
Federal agencies are instructed to identify and roll back "unnecessary regulatory barriers" and to scrutinize ongoing enforcement matters that could slow AI deployment. Included here is a directive to the FCC to review conflicting state AI regimes.
Federal agencies are instructed to review state AI laws when deciding whether to send funding to states, restricting funding to those states with onerous regulatory frameworks.
The National Institute of Standards and Technology is instructed to strip references to misinformation, DEI, and climate change from its AI Risk Management Framework, signaling a narrower federal definition of "AI risk."
Agencies must champion open-source models and weights, expand regulatory sandboxes, promote next-generation manufacturing in the areas of robotics and autonomous mobility, and accelerate workforce training for AI-centric roles.
The government also intends to use only "ideologically neutral" large language models, a shift that could alter procurement criteria and vendor selection. President Trump signed an Executive Order detailing the requirements for such neutrality.
Pillar 2: Infrastructure: Grid, Chips, and Data Centers in the Fast Lane
The Plan directs agencies to streamline permits for data centers, semiconductor manufacturing, and energy projects, including new categorical environmental exemptions.
Federal lands will be opened for critical AI infrastructure; parallel initiatives aim to harden cyber defenses and formalize an AI-incident response ecosystem.
Pillar 3: International Diplomacy and Security: Export Where You Can, Control What You Must
A forthcoming "full-stack" export regime will package U.S. AI software, chips, and cloud capacity for allied nations while tightening controls on technology flows to "countries of concern." President Trump signed an Executive Order detailing the requirements around the full-stack proposal.
Agencies will plug loopholes in existing chip controls and create new standards-setting bodies to ensure U.S. leadership in global AI security norms.
Expanded biosecurity R&D is mandated to counter AI-enabled generation of pathogens.
https://www.dentons.com/en/insights/alerts/2025/july/24/white-house-unveils-new-ai-action-plan
An Attempt To Make The US The Global Leader In AI
"We're going to make this [AI] industry absolutely the top because right now it's a beautiful baby that's born. We have to grow that baby and let that baby thrive. We can't stop it. We can't stop it with politics. We can't stop it with foolish rules and even stupid rules," Trump said during a speech to industry luminaries at the Andrew M. Mellon Auditorium.
When he took office in January, Trump moved to roll back former President Joe Biden's AI strategy, which he said would limit growth in the sector.
Federal Regulations That Can Help Guide US AI Companies
The White House Blueprint for an AI Bill of Rights, issued under Biden, sets out guidance on equitable access to and use of AI systems.[5] The AI Bill of Rights provides five principles and associated practices to help guide the design, use and deployment of "automated systems": safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallback. While the Removing Barriers EO did not specifically revoke the AI Bill of Rights, the Trump Administration may be less likely to pursue the principles set out in it, to the extent these principles are perceived as "inconsistent" with "enhanc[ing] America's global AI dominance." Nevertheless, AI developers may keep these principles in mind when designing AI systems.
Several leading AI companies, including Adobe, Amazon, Anthropic, Cohere, Google, IBM, Inflection, Meta, Microsoft, Nvidia, OpenAI, Palantir, Salesforce, Scale AI and Stability AI, have voluntarily committed to "help move toward safe, secure, and transparent development of AI technology."[6] These companies committed to internal and external security testing of AI systems before release, sharing information on managing AI risks, and investing in safeguards.
The Federal Communications Commission issued a declaratory ruling stating that the restrictions on "artificial or pre-recorded voice" messages in the 1990s-era Telephone Consumer Protection Act extend to AI technologies that generate human voices, demonstrating that regulatory agencies will apply existing law to AI.[7]
The Federal Trade Commission (FTC), under the Biden administration, signaled an aggressive approach to using its existing authority to regulate AI.[8] The FTC warned market participants that it may violate the FTC Act to use AI tools that have discriminatory impacts, to make unsubstantiated claims about AI, or to deploy AI before taking steps to assess and mitigate risks.[9] The FTC has already taken enforcement action against various companies that have deceived or otherwise harmed consumers through AI.[10] As discussed below, the FTC has notably banned Rite Aid from using AI facial recognition technology without reasonable safeguards.[11] It remains to be seen how aggressive the FTC will be on AI under the Trump administration.
Automakers May Use AI To Diagnose Vehicle Issues And Reduce Warranty Claims
Automakers are losing billions of dollars a year on costly warranty claims and vehicle recalls, losses that artificial-intelligence-powered quality-control analysis could prevent.
So claims Yoav Levy, co-founder and CEO of Upstream, a cloud-based data analytics platform currently serving 20 global automakers. Levy believes automakers' century-old reactive approach to fixing vehicle issues could soon be a thing of the past.
According to a report by Warranty Week, automakers worldwide are paying $51 billion in warranty claims and $140 billion in warranty reserves.
That said, while the rise of the software-defined vehicle (SDV) provides greater access to critical vehicle data, it also exposes more potential issues as new software is rolled out in over-the-air (OTA) upgrades.
However, Levy believes automakers can save between 5% and 20% of warranty and recall claims using the power of AI to predict quality issues before they are spotted by customers.
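To make that concrete, a toy version of "predict quality issues before customers spot them" could be a simple anomaly screen over fleet telemetry, as sketched below. The data and the naive z-score rule are illustrative assumptions; Upstream's actual platform is certainly more sophisticated.

```python
# Minimal sketch of proactive quality monitoring: flag vehicles whose
# telemetry drifts far outside the fleet norm before owners complain.
from statistics import mean, stdev

# Hypothetical battery-temperature readings (deg C) after an OTA update:
fleet = {
    "VIN001": 41.2, "VIN002": 40.8, "VIN003": 42.1,
    "VIN004": 55.9,  # outlier: a candidate early warranty issue
    "VIN005": 41.5, "VIN006": 40.1,
}

values = list(fleet.values())
mu, sigma = mean(values), stdev(values)

# Flag anything more than two standard deviations from the fleet mean.
flagged = {vin: v for vin, v in fleet.items() if abs(v - mu) > 2 * sigma}
for vin, temp in flagged.items():
    print(f"{vin}: {temp} deg C is >2 sigma from the fleet mean "
          f"({mu:.1f} deg C); inspect before a warranty claim is filed")
```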
Bipartisan Support For Regulating AI
In the run-up to the vote in the U.S. Senate on President Donald Trump's spending and tax bill, Republicans scrambled to revise the bill to win the support of wavering GOP senators. A provision included in the original bill was a 10-year moratorium on any state law that sought to regulate artificial intelligence. The provision denied access to US$500 million in federal funding for broadband internet and AI infrastructure projects to any state that passed such a law.
The inclusion of the AI regulation moratorium was widely viewed as a win for AI firms that had expressed fears that states passing regulations on AI would hamper the development of the technology. However, many federal and state officials from both parties, including state attorneys general, state legislators and 17 Republican governors, publicly opposed the measure.
The AI Health Market To Reach $187B By 2030
In 2024, the global health care AI market was estimated at more than $26 billion, and it's expected to grow to over $187 billion by 2030. Rapid developments in the abilities of generative AIs such as ChatGPT mean that health care professionals have more AI-powered tools than ever before to help them with their work, but the same is also true for their patients.
https://www.newsweek.com/ai-healthcare-diagnosis-chatgpt-doctor-2100091
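Taken at face value, those two figures imply roughly 39% compound annual growth over 2024 to 2030; the quick check below is our own arithmetic, not the article's.

```python
# Quick check of the implied growth rate: $26B (2024) to $187B (2030),
# six years, using the standard CAGR formula (end/start)**(1/years) - 1.
start, end, years = 26e9, 187e9, 6
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # ~38.9% per year
```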
Human Oversight Is Essential For Companies Using AI
Prof Feng Li, associate dean for research and innovation at Bayes Business School, says some businesses are too optimistic about what current AI tools can do.
He points out that AI is known to hallucinate: to generate content that is irrelevant, made up, or inconsistent.
"Human oversight is essential," he says.
"We've seen companies generate low-quality website content or implement faulty code that breaks critical systems.
"Poor implementation can lead to reputational damage and unexpected costs â and even significant liabilities, often requiring rework by professionals."
https://www.bbc.com/news/articles/cyvm1dyp9v2o.amp
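In practice, "human oversight" often means a review gate between generation and publication: nothing an AI produces ships without an explicit human sign-off. The sketch below is a hypothetical illustration of such a gate, not a workflow from the article.

```python
# Minimal sketch of a human-in-the-loop gate: AI output sits in a review
# queue, and publishing unreviewed content raises an error by design.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    approved: bool = False

def human_review(draft: Draft, reviewer_ok: bool) -> None:
    # A reviewer checks facts and tone; hallucinated claims are caught here.
    draft.approved = reviewer_ok

def publish(draft: Draft) -> str:
    if not draft.approved:
        raise PermissionError("unreviewed AI output must not be published")
    return f"published: {draft.text}"

draft = Draft("AI-written product page copy")
human_review(draft, reviewer_ok=True)
print(publish(draft))
```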
AI Data Centers May Be Damaging Local Water Systems
After Meta started building an enormous data center less than 400 yards from their house, a couple living in Newton County, Georgia, says their water started to dry up. Construction began in 2018; years later, two of their bathroom taps still don't work, and what water remains has turned into a gritty sludge littered with sediment.
So far, Beverly Morris and her husband Jeff have spent $5,000 on their water problems, they told the New York Times in a new interview, and can't afford to replace their well, which would cost $25,000.
"It feels like we're fighting an unwinnable battle that we didn't sign up for," Beverly told the NYT. "I'm scared to drink our own water."
AI Models That Think Longer Can Be Dumber
Artificial intelligence models that spend more time "thinking" through problems don't always perform better, and in some cases they get significantly worse, according to new research from Anthropic that challenges a core assumption driving the AI industry's latest scaling efforts.
The study, led by Anthropic AI safety fellow Aryo Pradipta Gema and other company researchers, identifies what they call "inverse scaling in test-time compute," where extending the reasoning length of large language models actually deteriorates their performance across several types of tasks. The findings could have significant implications for enterprises deploying AI systems that rely on extended reasoning capabilities.
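The effect is conceptually simple to probe: hold the task suite fixed, sweep the reasoning-token budget, and watch whether accuracy falls as the budget grows. The sketch below stubs out the model call with hypothetical numbers; it is not Anthropic's evaluation code.

```python
# Minimal sketch of testing for "inverse scaling in test-time compute":
# score the same tasks at increasing reasoning budgets. The model call is
# stubbed; the accuracies are hypothetical, not the paper's results.

def accuracy_at_budget(tasks: list[str], budget_tokens: int) -> float:
    # Placeholder for: run each task with `budget_tokens` of reasoning,
    # grade the answers, and return the fraction correct.
    stub = {512: 0.82, 2048: 0.79, 8192: 0.71}
    return stub[budget_tokens]

tasks = ["task-1", "task-2", "task-3"]
budgets = [512, 2048, 8192]
scores = [accuracy_at_budget(tasks, b) for b in budgets]

for b, s in zip(budgets, scores):
    print(f"budget={b:>5} tokens -> accuracy={s:.2f}")
if all(a > b for a, b in zip(scores, scores[1:])):
    print("accuracy drops as reasoning grows: inverse scaling observed")
```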
Cloud Services May Be At Risk Due To A Flaw In Nvidia's AI Toolkit
Cybersecurity researchers from Wiz have uncovered a severe flaw, now identified as CVE-2025-23266 and nicknamed NVIDIAScape, that could allow attackers to escape container boundaries and gain full root access to the host machine.
https://www.techrepublic.com/article/news-nvidia-ai-toolkit-critical-flaw/
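Until hosts are patched, operators can at least confirm which NVIDIA Container Toolkit version is installed. The sketch below shells out to the toolkit's nvidia-ctk CLI and compares against a minimum version the operator supplies; the 1.17.8 threshold in the example is our reading of the coverage and should be verified against NVIDIA's official advisory for CVE-2025-23266.

```python
# Minimal sketch of a host check: parse the installed NVIDIA Container
# Toolkit version from `nvidia-ctk --version` output and compare it to a
# minimum taken from the vendor advisory. Parsing uses a generic semver
# regex since output formatting may vary across releases.
import re
import subprocess

def installed_toolkit_version() -> tuple[int, ...]:
    out = subprocess.run(
        ["nvidia-ctk", "--version"], capture_output=True, text=True, check=True
    ).stdout
    match = re.search(r"(\d+)\.(\d+)\.(\d+)", out)
    if not match:
        raise RuntimeError(f"could not parse a version from: {out!r}")
    return tuple(int(part) for part in match.groups())

def is_patched(minimum: tuple[int, ...]) -> bool:
    return installed_toolkit_version() >= minimum

# (1, 17, 8) is an assumed patched version; confirm against the advisory.
print("patched" if is_patched((1, 17, 8)) else "update required")
```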