#Chatbot Development Companies in San Francisco
Explore tagged Tumblr posts
Text
AI projects like OpenAI's ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry: contractors, often in poor countries, paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop "systemically abusing and exploiting African workers."
Most of the letter's signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI "amount to modern day slavery." The companies did not immediately respond to a request for comment.
A typical workday for African tech contractors, the letter says, involves "watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day." Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.
The letter's signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.
In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI's platform for contract work. The letter says that these workers were cut off without notice and are "owed significant sums of unpaid wages."
"When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens," says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. "But Scale AI, the big company that ran the platform, gets away with it, because it's based in San Francisco."
Though the Biden administration has frequently described its approach to labor policy as "worker-centered," the African workers' letter argues that this has not extended to them, saying "we are treated as disposable."
"You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions," the letter says. "You can make sure there are good jobs for Kenyans too, not just Americans."
Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday's letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies "be held accountable in the US courts for their unlawful operations abroad, in particular for their human rights and labor violations."
The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.
"Everyone wants to see more jobs in Kenya," Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. "But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure."
35 notes
·
View notes
Text
Suchir Balaji believed that the Promethean fire of artificial intelligence could help humanity forge solutions to its greatest challenges. He landed a job right out of college with OpenAI, the company behind the ChatGPT chatbot. "I thought that A.I. was a thing that could be used to solve unsolvable problems, like curing diseases and stopping aging," he told The New York Times in a recent interview. "I thought we could invent some kind of scientist that could help solve them."
But nearly four years of toiling as a researcher with the San Francisco startup left Balaji convinced that the work he was doing would result in more harm than good for society. He split with OpenAI in August, believing that the company had violated U.S. copyright laws in developing its offerings. His departure coincided with a surge of lawsuits against OpenAI from programmers, authors, artists, and journalists who say that their work has been illegally stolen to train its products. In a Nov. 18 custodian request, The Times named Balaji as someone who would "have unique and relevant documents supporting" its case against OpenAI, one that could have major implications for the future of AI.
Eight days after The Times filed that letter in federal court, the 26-year-old former OpenAI researcher was found dead in his apartment.
The authorities are calling it a suicide.
10 notes
·
View notes
Text
Chinese firms "distilling" US AI models to create rival products, warns OpenAI
ChatGPT maker cites IP protection concerns amid reports DeepSeek used its model to create rival chatbot
OpenAI has warned that Chinese startups are "constantly" using its technology to develop competing products, amid reports that DeepSeek used the ChatGPT maker's AI models to create a rival chatbot.
OpenAI and its partner Microsoft, which has invested $13bn in the San Francisco-based AI developer, have been investigating whether proprietary technology had been obtained in an unauthorised manner through a technique known as "distillation".
The launch of DeepSeek's latest chatbot sent markets into a spin on Monday after it topped Apple's free app store, wiping $1trn from the market value of AI-linked US tech stocks. The impact came from its claim that the model underpinning its AI was trained with a fraction of the cost and hardware used by rivals such as OpenAI and Google.
Sam Altman, the chief executive of OpenAI, initially said that he was impressed with DeepSeek and that it was "legitimately invigorating to have a new competitor".
However, on Wednesday OpenAI said that it had seen some evidence of "distillation" from Chinese companies, referring to a development technique that boosts the performance of smaller models by using larger, more advanced ones to achieve similar results on specific tasks. The OpenAI statement did not refer to DeepSeek directly.
"We know [China]-based companies, and others, are constantly trying to distill the models of leading US AI companies," the OpenAI spokesperson said. "As the leading builder of AI, we engage in countermeasures to protect our IP [intellectual property], including a careful process for which frontier capabilities to include in released models."
OpenAI, which has itself been accused of using data without permission or a licence from publishers and the creative industry to train its own models, has already blocked unnamed entities from attempting to distill its models.
The OpenAI spokesperson added that it was now "critically important" that the company worked with the US government to "best protect the most capable models from efforts by adversaries and competitors to take US technology".
On Tuesday, David Sacks, Donald Trump's AI and crypto tsar, told Fox News that he thought it was "possible" that intellectual property theft had occurred.
"There's substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI's models," he said. "I think one of the things you're going to see over the next few months is our leading AI companies taking steps to try and prevent distillation. That would definitely slow down some of these copycat models."
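For context on what that technique involves, here is a minimal, generic sketch of distillation, in which a small "student" model is trained to imitate a larger "teacher" model's output distribution. The models, data, and hyperparameters below are illustrative placeholders, not OpenAI's or DeepSeek's actual systems.

```python
# Illustrative knowledge distillation: a small "student" learns to match a larger
# "teacher" model's softened output distribution (toy models and random data).
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(), torch.nn.Linear(512, 1000))
student = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 1000))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softening the teacher's logits exposes more of its "knowledge"

for step in range(1000):
    x = torch.randn(32, 128)            # stand-in for real training inputs
    with torch.no_grad():
        teacher_logits = teacher(x)     # the larger model's predictions
    student_logits = student(x)
    # Classic distillation loss: KL divergence between softened distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the scenario OpenAI describes, the "teacher" would be a frontier model queried through an API rather than a network held in memory, but the objective is the same idea: reproduce the larger model's behavior at a fraction of the cost.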
The US navy has reportedly already banned its members from using DeepSeek's apps due to "potential security and ethical concerns".
The White House press secretary, Karoline Leavitt, said the US national security council was looking into the potential implications the AI app posed.
Earlier this week, Trump called the launch of DeepSeek a "wake-up call" for Silicon Valley in the global race to dominate artificial intelligence.
The investigation by OpenAI and Microsoft into possible distillation was first reported by Bloomberg. Microsoft declined to comment.
#just for books#OpenAI#DeepSeek#ChatGPT#Sam Altman#Artificial intelligence (AI)#Microsoft#China#message from the editor
5 notes
·
View notes
Text
Elon Musk will use Twitter data to build and train an AI to counter ChatGPT.
He mentioned the plan during a Friday Twitter Spaces discussion that shared more details about his plans for xAI, his newest startup.
"Every organization doing AI, large and small, has used Twitter's data for training, basically in all cases illegally," he said, later adding: "We had multiple entities scraping every tweet ever made, and trying to do so in a span of days."
Twitter recently imposed rate limits to prevent companies from scraping data from the platform. However, Musk plans on opening up the tweet access for xAI. "We will use the public tweets, obviously not anything private, for training as well just like everyone else has," he said.
Twitter's data is valuable for AI companies because the user-generated content is fresh and covers a variety of topics using text that could help chatbots better mimic human speech.
It's also possible the data could help xAI's own forthcoming chatbot produce more accurate responses, thanks to Twitter's Community Notes feature, which lets users flag misleading tweets by providing additional context. However, training an AI with tweets could spark lawsuits and regulatory issues. Earlier this week, the FTC told OpenAI it's investigating the company for potentially violating user privacy by collecting data from across the internet to train ChatGPT.
Musk was vague on what xAI is creating. But he said the startup's goal is to develop a "useful AI" for both consumers and businesses. Meanwhile, the long-term vision is to develop an AGI, or artificial intelligence that can solve a wide range of tasks, like a human can.
"We are definitely the competition," he said, referencing OpenAI and Google, which released its Bard chatbot earlier this year. "You don't want to have a unipolar world, where just one company kind of dominates in AI."
However, he also emphasized his forthcoming AI will "pursue the truth." Although rival chatbots have been programmed with content moderation in mind, Musk previously criticized ChatGPT as a propaganda machine focused on political correctness. During the Twitter Spaces discussion, Musk reiterated his concerns.
"At xAI we have to let the AI say what it really believes is true, and not be deceptive or politically correct," he said. Musk then compared the danger to the AI computer that goes insane in the sci-fi classic 2001: A Space Odyssey and kills the crew. "Where did things go wrong in Space Odyssey? Basically, when they told HAL 9000 to lie."
Musk has recruited almost a dozen engineers and researchers from Google, Microsoft, and OpenAI to help him run the San Francisco-based xAI. The startup hopes to share more information about its "first release" in the coming weeks.
4 notes
·
View notes
Text
Anthropic CEO claims AI models hallucinate less than humans
Anthropic CEO Dario Amodei believes today's AI models hallucinate, or make things up and present them as if they're true, at a lower rate than humans do, he said during a press briefing at Anthropic's first developer event, Code with Claude, in San Francisco on Thursday.
Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic's path to AGI, meaning AI systems with human-level intelligence or better.
"It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways," Amodei said, responding to TechCrunch's question.
Anthropic's CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday's press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that "the water is rising everywhere."
"Everyone's always looking for these hard blocks on what [AI] can do," said Amodei. "They're nowhere to be seen. There's no such thing."
Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today's AI models have too many "holes," and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after they used Claude to create citations in a court filing, and the AI chatbot hallucinated and got names and titles wrong.
It's difficult to verify Amodei's claim, largely because most hallucination benchmarks pit AI models against each other; they don't compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI's GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems.
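Such benchmarks usually reduce to scoring each model's answers against a shared reference set and counting unsupported claims, which is why they naturally rank models against one another rather than against people. A toy sketch of that kind of scoring, with made-up questions and answers purely for illustration:

```python
# Toy hallucination scoring: count answers that fail to contain the reference fact.
# Real benchmarks are far more sophisticated; this only shows the model-vs-model framing.
REFERENCES = {
    "Who wrote Hamlet?": "shakespeare",
    "What year did Apollo 11 land on the Moon?": "1969",
}

MODEL_ANSWERS = {
    "model_a": {"Who wrote Hamlet?": "William Shakespeare wrote Hamlet.",
                "What year did Apollo 11 land on the Moon?": "It landed in 1969."},
    "model_b": {"Who wrote Hamlet?": "Christopher Marlowe wrote Hamlet.",
                "What year did Apollo 11 land on the Moon?": "It landed in 1969."},
}

def hallucination_rate(answers: dict[str, str]) -> float:
    """Fraction of questions whose answer does not contain the reference fact."""
    misses = sum(1 for q, ref in REFERENCES.items() if ref not in answers.get(q, "").lower())
    return misses / len(REFERENCES)

for name, answers in MODEL_ANSWERS.items():
    print(name, hallucination_rate(answers))   # model_a -> 0.0, model_b -> 0.5
```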
However, there's also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI's o3 and o4-mini models have higher hallucination rates than OpenAI's previous-gen reasoning models, and the company doesn't really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic's CEO acknowledged the confidence with which AI models present untrue things as facts might be a problem.
In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company's recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn't have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.
Amodei's comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people's definition, though.
0 notes
Text
Top AI Development Services in Seattle You Need to Know
Seattle is rapidly emerging as one of the most exciting tech hubs in the United States. With its lively startup scene and innovative spirit, the city is at the forefront of artificial intelligence. AI development services in Seattle are empowering businesses to streamline their operations, enhance customer experiences, and make informed, data-driven decisions. Whether you're a budding startup or a well-established company, embracing AI technology has become a necessity, not just an option.
Why Seattle is Perfect for AI Development
Seattle boasts a wealth of talent, including developers, engineers, and data scientists. Its close ties to leading universities and major tech companies create an ideal setting for groundbreaking innovation. Numerous local firms focus on crafting custom AI solutions that cater to specific business needs, ranging from machine learning and predictive analytics to natural language processing and automation.
Businesses across various sectors, including healthcare, finance, retail, and logistics, are increasingly turning to AI development services in Seattle to boost efficiency and foster growth. Whether it's creating a smart chatbot, automating data analysis, or integrating AI into customer support, companies in Seattle provide top-notch expertise.
Key Features of Leading AI Development Services
When choosing an AI development partner, it's crucial to look at the variety of services they offer. The top firms go beyond just coding; they provide comprehensive solutions that encompass:
- AI strategy consulting
- Model design and training
- Data collection and labeling
- AI integration into existing systems
- Post-deployment support and optimization
Seattle's providers are recognized for blending technical prowess with a keen understanding of business objectives. They collaborate closely with clients to ensure that AI solutions are not only innovative but also practical and scalable.
What to Expect in Terms of Cost
When it comes to costs, there's quite a bit of variation based on the project's scope, complexity, and how much customization is needed. For smaller to mid-sized projects, you can generally expect to pay anywhere from $25,000 to $75,000. However, if you're looking at larger enterprise solutions that involve advanced algorithms and integrations, the price tag can easily soar past $150,000.
Seattle stands out with its competitive pricing compared to other tech hubs like San Francisco or New York. While New York is known for its top-notch AI development services, many businesses are drawn to Seattle for its sweet spot of affordability and innovation.
Industries Making Waves with AI in Seattle
A diverse array of industries in Seattle are tapping into AI to create significant changes:
Healthcare: AI is being harnessed to forecast patient outcomes, aid in diagnostics, and tailor treatments to individual needs.
Retail: Businesses are leveraging AI to understand customer behaviour, streamline supply chains, and craft personalized marketing strategies.
Finance: AI plays a crucial role in spotting fraud, automating risk assessments, and enhancing customer service experiences.
Transportation and Logistics: From predictive analytics to route optimization, AI is revolutionizing how operations are managed.
The city's vibrant ecosystem is a great fit for both large corporations and nimble startups, all working to tackle real-world challenges with AI.
Top Providers to Keep an Eye On in 2025
As AI becomes more woven into everyday business operations, picking the right development partner is essential. Seattle boasts an expanding roster of firms that shine for their innovation, dependability, and client satisfaction.
One standout is Mob Coder AI, known for providing scalable and secure AI services. With a strong focus on customer needs, they've successfully rolled out AI-driven applications across various sectors. Alongside Mob Coder AI, several boutique agencies and tech startups are also making a name for themselves with innovative ideas and agile development approaches.
Seattle vs. Other AI Hotspots
When it comes to AI development services, New York may be bustling with its financial and corporate might, but Seattle is carving out its own niche thanks to a vibrant developer community and a culture that thrives on innovation. With more affordable operating costs and a focus on delivering quality, Seattle is quickly becoming the go-to spot for businesses eager to dive into AI without overspending.
To wrap it up, Seattle's dynamic tech landscape, paired with its rich talent pool and collaborative spirit, positions it as one of the premier locations for AI development in 2025. Whether you're just starting your AI adventure or looking to expand an existing project, the AI development services in Seattle provide an ideal mix of creativity, expertise, and cost-effectiveness.
#arcane#artists on tumblr#batman#cats of tumblr#cookie run kingdom#elon musk#f1#ides of march#jason todd
0 notes
Text
Top 10 Generative AI Consulting Companies to Watch in 2025
In the era of digital transformation, Generative AI is reshaping industries by enabling innovation, improving efficiency, and creating personalized customer experiences. As businesses seek to leverage this groundbreaking technology, Generative AI consulting companies have become essential partners in driving success.
Here are the top 10 Generative AI consulting firms of 2025, each excelling in delivering tailored solutions to meet dynamic business needs.
1. AIVeda
Rate: $50-99/hr
Employees: 100-250
Founded: 2020
Headquarters: San Francisco, USA
Key Clients: Leading healthcare providers, EdTech companies, and Fortune 500 enterprises
AIVeda is a frontrunner in Generative AI consulting, offering cutting-edge solutions across industries like healthcare, finance, and education. Specializing in AI Chatbots, Enterprise LLMs, Conversational AI, Social Media Analytics, and AI-driven content generation, AIVeda empowers businesses to achieve measurable results. Their expertise lies in delivering scalable and customized AI solutions that drive innovation and foster growth.
2. GlobalNodes.ai
Rate: $25-49/hr
Employees: 51-100
Founded: 2015
Headquarters: San Francisco, USA
Key Clients: Aisle 24, NVMS, TFSB
GlobalNodes.ai offers end-to-end AI model development and strategy consulting. With a strong team of data scientists and AI experts, they serve industries like retail, healthcare, and logistics. Their focus on ethical AI and scalable solutions makes them a preferred partner for businesses seeking impactful transformation.
3. Accenture AI
Rate: $100-199/hr
Employees: 1000+
Founded: 1989
Headquarters: Dublin, Ireland
Key Clients: Unilever, Google, Marriott International
Accenture AI leverages advanced Generative AI technologies to create transformative solutions in automation, content creation, and analytics. Their extensive industry expertise helps businesses achieve agility, innovation, and enhanced customer experiences.
4. IBM Watson
Rate: $100-250/hr
Employees: 1000+
Founded: 1911
Headquarters: Armonk, NY, USA
Key Clients: Anthem, Mayo Clinic, H&R Block
IBM Watson is renowned for its advanced Generative AI solutions, including natural language processing and predictive analytics. Their tools empower businesses to streamline operations, make data-driven decisions, and improve user engagement across multiple sectors.
5. Deloitte AI Institute
Rate: $100-300/hr
Employees: 1000+
Founded: 1845
Headquarters: London, UK
Key Clients: Intel, MetLife, Morgan Stanley
Deloitte AI Institute offers cutting-edge Generative AI solutions to automate workflows and create personalized customer experiences. Their expertise in data analytics and machine learning helps businesses achieve digital transformation at scale.
6. Boston Consulting Group (BCG) GAMMA
Rate: $200-350/hr
Employees: 1000+
Founded: 1963
Headquarters: Boston, MA, USA
Key Clients: Pfizer, Ford, Samsung
BCG GAMMA is a leader in AI-powered decision-making and Generative AI applications. They provide innovative solutions for content generation, predictive analytics, and operational efficiency, tailored to specific business challenges.
7. PwC AI Lab
Rate: $150-300/hr
Employees: 1000+
Founded: 1998
Headquarters: New York, NY, USA
Key Clients: Uber, HSBC, Starbucks
PwC AI Lab focuses on creating industry-specific Generative AI solutions. From automated content creation to generative design, they help businesses optimize performance and accelerate digital transformation.
8. Fractal Analytics
Rate: $50-99/hr
Employees: 1500+
Founded: 2000
Headquarters: New York, NY, USA
Key Clients: Microsoft, Google, HP
Fractal Analytics specializes in AI-driven insights and content automation. Their Generative AI solutions empower businesses to improve decision-making, enhance customer experiences, and streamline operations.
9. HCL Technologies
Rate: $100-199/hr
Employees: 159,000+
Founded: 1976
Headquarters: Noida, India
Key Clients: Deutsche Bank, Xerox, Merck
HCL Technologies integrates Generative AI into automation workflows to enhance productivity and efficiency. Their AI-driven solutions for predictive modeling and content generation are widely adopted across industries.
10. Capgemini
Rate: $100-200/hr
Employees: 270,000+
Founded: 1967
Headquarters: Paris, France
Key Clients: Volvo, Airbus, Vodafone
Capgemini delivers Generative AI solutions for content creation, customer engagement, and advanced analytics. With their expertise in AI and data-driven technologies, Capgemini supports businesses in achieving seamless digital transformation.
How to Select a Generative AI Consulting Company for Your Requirements
When choosing a Generative AI consulting firm, consider the following:
Industry Expertise: Ensure the firm has a track record of success in your sector.
Technological Proficiency: Look for firms with expertise in Generative AI frameworks and tools.
Custom Solutions: Opt for a company that provides personalized solutions aligned with your goals.
Scalability: Ensure their AI solutions can adapt to your business growth.
Client Reviews: Check case studies and testimonials to gauge their credibility.
Budget Compatibility: Choose a firm that aligns with your financial plan without compromising quality.
Conclusion
The Generative AI consulting landscape in 2025 is competitive, with companies like AIVeda setting benchmarks for innovation and success. With their focus on delivering scalable and tailored AI solutions, AIVeda is leading the charge in helping businesses unlock the true potential of Generative AI.
By partnering with the right consulting company, businesses can achieve unparalleled growth, enhance customer engagement, and stay ahead in an AI-driven world.
0 notes
Text
Ex-OpenAI engineer who raised legal concerns about the technology he helped build has died
SAN FRANCISCO
Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, according to his parents and San Francisco officials. He was 26.
Balaji worked at OpenAI for nearly four years before quitting in August. He was well-regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI's strongest contributors who was essential to developing some of its products.
"We are devastated to learn of this incredibly sad news and our hearts go out to Suchir's loved ones during this difficult time," said a statement from OpenAI.
Balaji was found dead in his San Francisco apartment on Nov. 26 in what police said "appeared to be a suicide. No evidence of foul play was found during the initial investigation." The city's chief medical examiner's office confirmed the manner of death to be suicide.
His parents Poornima Ramarao and Balaji Ramamurthy said they are still seeking answers, describing their son as a "happy, smart and brave young man" who loved to hike and recently returned from a trip with friends.
Balaji grew up in the San Francisco Bay Area and first arrived at the fledgling AI research lab for a 2018 summer internship while studying computer science at the University of California, Berkeley. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.
"Suchir's contributions to this project were essential, and it wouldn't have succeeded without him," said OpenAI co-founder John Schulman in a social media post memorializing Balaji. Schulman, who recruited Balaji to his team, said what made him such an exceptional engineer and scientist was his attention to detail and ability to notice subtle bugs or logical errors.
"He had a knack for finding simple solutions and writing elegant code that worked," Schulman wrote. "He'd think through the details of things carefully and rigorously."
Balaji later shifted to organizing the huge datasets of online writings and other media used to train GPT-4, the fourth generation of OpenAI's flagship large language model and a basis for the company's famous chatbot. It was that work that eventually caused Balaji to question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.
He first raised his concerns with The New York Times, which reported them in an October profile of Balaji.
He later told The Associated Press he would "try to testify" in the strongest copyright infringement cases and considered a lawsuit brought by The New York Times last year to be the "most serious." Times lawyers named him in a Nov. 18 court filing as someone who might have "unique and relevant documents" supporting allegations of OpenAI's willful copyright infringement.
His records were also sought by lawyers in a separate case brought by book authors including the comedian Sarah Silverman, according to a court filing.
"It doesn't feel right to be training on people's data and then competing with them in the marketplace," Balaji told the AP in late October. "I don't think you should be able to do that. I don't think you are able to do that legally."
He told the AP that he gradually grew more disillusioned with OpenAI, especially after the internal turmoil that led its board of directors to fire and then rehire CEO Sam Altman last year. Balaji said he was broadly concerned about how its commercial products were rolling out, including their propensity for spouting false information known as hallucinations.
But of the "bag of issues" he was concerned about, he said he was focusing on copyright as the one it was "actually possible to do something about."
He acknowledged that it was an unpopular opinion within the AI research community, which is accustomed to pulling data from the internet, but said "they will have to change and it's a matter of time."
He had not been deposed and it's unclear to what extent his revelations will be admitted as evidence in any legal cases after his death. He also published a personal blog post with his opinions about the topic.
Schulman, who resigned from OpenAI in August, said he and Balaji coincidentally left on the same day and celebrated with fellow colleagues that night with dinner and drinks at a San Francisco bar. Another of Balaji's mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another impetus to leave.
Schulman said Balaji had told him earlier this year of his plans to leave OpenAI and that Balaji didn't think that better-than-human AI known as artificial general intelligence "was right around the corner, like the rest of the company seemed to believe." The younger engineer expressed interest in getting a doctorate and exploring "some more off-the-beaten path ideas about how to build intelligence," Schulman said.
Balaji's family said a memorial is being planned for later this month at the India Community Center in Milpitas, California, not far from his hometown of Cupertino.
0 notes
Text
Mark Zuckerberg Faces Deposition in AI Copyright Lawsuit From Sarah Silverman and Other Authors
Meta CEO Mark Zuckerberg is set to be deposed in a lawsuit filed by several authors, including comedian Sarah Silverman, who allege that the company used their copyrighted material to train its artificial intelligence technology.
U.S. District Judge Thomas Hixson denied Metaâs request to block the deposition of CEO Mark Zuckerberg on Tuesday, stating there is enough evidence to suggest he is the "principal decision maker" regarding the company's AI platforms.
Meta contended that Zuckerberg lacks specific, unique knowledge about the company's AI operations and that the same information could be gathered from depositions with other employees.
In his Tuesday ruling, Judge Hixson stated that the authors have "submitted evidence of [Zuckerberg's] specific involvement in the company's AI initiatives" and his "direct supervision of Meta's AI products."
The class-action lawsuit, filed last year in federal court in California, accuses Meta of unlawfully downloading digital copies of the authors' books and using them, without permission or compensation, to train its AI platforms.
Additionally, this week, renowned attorney David Boies joined the case, representing Sarah Silverman and a group of other plaintiffs, including writer Ta-Nehisi Coates and former Arkansas Governor Mike Huckabee.
Boies is widely recognized for representing Al Gore in the contested 2000 presidential election against George W. Bush.
The lawsuit against Meta is part of a broader wave of similar legal actions in San Francisco and New York, targeting other AI chatbot developers, including Anthropic, Microsoft, and ChatGPT creator OpenAI.
0 notes
Text
ChatGPT developer OpenAI's approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.
Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts, including those that might cause an AI system to misbehave.
Although the research makes OpenAI's work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded "superalignment" team at OpenAI that was dedicated to studying the technology's long-term risks.
The former group's coleads, Ilya Sutskever and Jan Leike, both of whom have left OpenAI, are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman's return as leader.
ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of "neurons" within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.
"Unlike with most human creations, we don't really understand the inner workings of neural networks," the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.
OpenAI's new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation is in refining the network used to peer inside the system of interest by identifying concepts, to make it more efficient.
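In broad strokes, the approach resembles training a small "dictionary" network to reconstruct the larger model's internal activations from a sparse set of latent features, which then serve as candidate concepts. The sketch below is a hedged, generic illustration of that idea, not OpenAI's released implementation; the dimensions and the random stand-in activations are assumptions made only to keep it self-contained.

```python
# Sketch of a sparse autoencoder probing a larger model's activations: each latent
# unit is encouraged to fire rarely, so active units tend to align with "concepts".
import torch
import torch.nn.functional as F

d_model, d_dict = 768, 4096        # hypothetical activation width and dictionary size
encoder = torch.nn.Linear(d_model, d_dict)
decoder = torch.nn.Linear(d_dict, d_model)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)

def train_step(activations: torch.Tensor, sparsity_weight: float = 1e-3) -> float:
    latents = F.relu(encoder(activations))        # sparse "concept" activations
    reconstruction = decoder(latents)
    recon_loss = F.mse_loss(reconstruction, activations)
    sparsity_loss = latents.abs().mean()          # push most latents toward zero
    loss = recon_loss + sparsity_weight * sparsity_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Activations would normally be captured from the residual stream of a real LLM;
# random tensors stand in here purely so the sketch runs on its own.
for _ in range(200):
    train_step(torch.randn(64, d_model))
```

Once trained, inspecting which inputs most strongly activate a given latent unit is what lets researchers attach human-readable labels, such as profanity or legal language, to individual features.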
OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
Even though LLMs defy easy interrogation, a growing body of research suggests they can be poked and prodded in ways that reveal useful information. Anthropic, an OpenAI competitor backed by Amazon and Google, published similar work on AI interpretability last month. To demonstrate how the behavior of AI systems might be tuned, the company's researchers created a chatbot obsessed with San Francisco's Golden Gate Bridge. And simply asking an LLM to explain its reasoning can sometimes yield insights.
"It's exciting progress," says David Bau, a professor at Northeastern University who works on AI explainability, of the new OpenAI research. "As a field, we need to be learning how to understand and scrutinize these large models much better."
Bau says the OpenAI team's main innovation is in showing a more efficient way to configure a small neural network that can be used to understand the components of a larger one. But he also notes that the technique needs to be refined to make it more reliable. "There's still a lot of work ahead in using these methods to create fully understandable explanations," Bau says.
Bau is part of a US government-funded effort called the National Deep Inference Fabric, which will make cloud computing resources available to academic researchers so that they too can probe especially powerful AI models. "We need to figure out how we can enable scientists to do this work even if they are not working at these large companies," he says.
OpenAI's researchers acknowledge in their paper that further work needs to be done to improve their method, but also say they hope it will lead to practical ways to control AI models. "We hope that one day, interpretability can provide us with new ways to reason about model safety and robustness, and significantly increase our trust in powerful AI models by giving strong assurances about their behavior," they write.
10 notes
·
View notes
Text
The Future of Chips: Silicon Volley's Genius AI Assistance
A study paper outlines how generative AI might help with one of the trickiest engineering tasks: semiconductor design.
The work shows how businesses in highly specialized industries might use internal data to train large language models (LLMs) to create productivity-boosting assistants.
There are not many activities as difficult as semiconductor design. When examined under a microscope, a cutting-edge processor such as the NVIDIA H100 Tensor Core GPU resembles a carefully designed city made of tens of billions of transistors connected by streets 10,000 times thinner than a human hair.
Building one of these digital megacities takes up to two years of coordination between multiple engineering teams.
A few teams design and install a range of incredibly tiny circuits, while others test the results of their labor. Still others define the chipâs general architecture. Specialized techniques, software, and computer languages are needed for every task.
A Vast Perspective for LLMs
"I think large language models will benefit all the processes in the long run," stated NVIDIA Research director and lead author Mark Ren.
In a keynote address at the International Conference on Computer-Aided Design, which brings together hundreds of engineers annually in the field of electronic design automation, or EDA, Bill Dally, the chief scientist at NVIDIA, revealed the study.
At the San Francisco event, Dally stated, "This effort marks an important first step in applying LLMs to the complex work of designing semiconductors. It demonstrates how internal data from highly specialized fields can be used to train effective generative AI models."
ChipNeMo Panels
The paper describes how NVIDIA engineers trained a proprietary LLM, named ChipNeMo, on the company's own data for internal use. The LLM was used to produce and optimize software and support human designers.
According to Ren, who has worked in EDA for more than 20 years, engineers intend to use generative AI in every phase of chip design in the long run. This might result in a considerable increase in total productivity.
After asking NVIDIA engineers about potential applications, the research team selected three use cases to begin with: a chatbot, a code generator, and an analysis tool.
First Use Cases
Thus far, the most well-liked tool is the analysis tool, which automates the laborious chores of keeping up-to-date descriptions of known issues.
During early testing, a prototype chatbot that answers inquiries on GPU architecture and design assisted a large number of engineers in finding technical documents fast.
A code generator under development can already produce brief software snippets of 10 to 20 lines in two specialized languages used by semiconductor designers. It will be integrated with current tools, giving engineers a useful helper for their ongoing designs.
Personalized AI Models Utilizing NVIDIA NeMo
The team's efforts in compiling its design data and applying it to develop a customized generative AI model, a method that may be applied to any sector, are the primary subject of this study.
The group began by selecting a foundation model and using NVIDIA NeMo, a framework that is part of the NVIDIA AI Enterprise software platform and allows for the creation, modification, and deployment of generative AI models. The chosen NeMo model has 43 billion parameters, indicating how well it can recognize patterns. More than a trillion tokens, the words and symbols found in software and text, were used to train it.
The model was subsequently improved in two training cycles by the team, using over 24 billion tokens of internal design data in the first round and roughly 130,000 conversation and design samples in the second.
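In outline, that two-stage recipe, continued pretraining on raw in-house text followed by supervised fine-tuning on curated examples, can be sketched as below. This is a hedged illustration only: the small public gpt2 checkpoint and the toy in-memory strings stand in for the 43-billion-parameter foundation model and the internal design corpus, and nothing here reflects NVIDIA's actual NeMo pipeline.

```python
# Illustrative two-stage domain adaptation: (1) continue next-token pretraining on raw
# domain text, then (2) fine-tune on curated conversation/design samples.
# "gpt2" and the toy data are placeholders, not NVIDIA's model or data.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer = AutoTokenizer.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def train_pass(texts):
    """One pass of causal language modeling over a list of raw strings."""
    model.train()
    for text in texts:
        batch = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
        loss = model(**batch, labels=batch["input_ids"]).loss   # standard next-token loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Stage 1: domain-adaptive pretraining on raw internal documents and code (toy stand-ins).
design_corpus = [
    "module adder(input a, input b, output sum); endmodule",
    "Timing closure notes for the floorplan of block X.",
]
train_pass(design_corpus)

# Stage 2: supervised fine-tuning on curated question/answer and task examples.
instruction_samples = [
    "Question: Where are the known issues for block X tracked?\nAnswer: In the internal issue database.",
]
train_pass(instruction_samples)
```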
The work is one of several instances of generative AI research and proofs of concept that are just starting to come out of the lab in the semiconductor sector.
Read more on Govindhtech.com
0 notes
Text
Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.
Three of the world's biggest search engines, Google, Bing and Baidu, last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human-machine interaction?
Microsoft's Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. But all three companies are using large language models (LLMs). LLMs create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google's AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft's version is widely available now, although there is a waiting list for unfettered access. Baidu's ERNIE Bot will be available in March.
Before these announcements, a few smaller companies had already released AI-powered search engines. "Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend," says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity, an LLM-based search engine that provides answers in conversational English.
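Stripped to its essentials, this style of conversational search pairs a retrieval step with a language model that composes an answer from whatever was retrieved. The sketch below is a toy illustration of that pattern, not any vendor's actual stack: the word-overlap ranking stands in for real retrieval, and `generate` is a hypothetical placeholder for a call to an LLM.

```python
# Minimal retrieval-augmented answering loop: fetch relevant documents, then ask a
# language model to answer conversationally while citing its sources.
from collections import Counter

DOCUMENTS = {  # tiny stand-in for a web index
    "doc1": "The James Webb Space Telescope launched in December 2021.",
    "doc2": "Large language models predict text by echoing statistical patterns in their training data.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (real engines rank far better)."""
    q = Counter(query.lower().split())
    ranked = sorted(DOCUMENTS, key=lambda d: -sum(q[w] for w in DOCUMENTS[d].lower().split()))
    return ranked[:k]

def answer(query: str, generate=lambda prompt: "<LLM response would go here>") -> str:
    """Assemble a grounded prompt; `generate` is a hypothetical stand-in for an LLM call."""
    context = "\n".join(f"[{d}] {DOCUMENTS[d]}" for d in retrieve(query))
    prompt = (
        "Answer the question conversationally and cite sources by id.\n"
        f"{context}\nQuestion: {query}\nAnswer:"
    )
    return generate(prompt)

print(answer("When did the James Webb Space Telescope launch?"))
```

Whether the generated answer stays faithful to the retrieved sources is exactly the trust question the researchers quoted below are raising.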
Changing trust
The intensely personal nature of a conversation, compared with a classic Internet search, might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.
A 2022 study by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.
That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google's Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn't know the answer, known by those in the field as hallucinating.
A Google spokesperson said Bard's error "highlights the importance of a rigorous testing process, something that we're kicking off this week with our trusted-tester programme". But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. "Early perception can have a very large impact," says Sridhar Ramaswamy, a computer scientist based in Mountain View, California and chief executive of Neeva, an LLM-powered search engine launched in January. The mistake wiped $100 billion from Google's value as investors worried about the future and sold stock.
Lack of transparency
Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources, a list of links, and leave them to decide what they trust. By contrast, it's rarely known what data an LLM trained on: is it Encyclopaedia Britannica or a gossip blog?
"It's completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation," says Urman.
If search bots make enough errors, then, rather than increasing trust with their conversational ability, they have the potential to unseat users' perceptions of search engines as impartial arbiters of truth, Urman says.
She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as "featured snippets", in which an extract from a page that is deemed particularly relevant to the search appears above the link, and "knowledge panels", summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.
Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: "We always have these new technologies thrown at us without any control or an educational framework to know how to use them."
0 notes
Text
Slack CEO Discusses Distinctiveness Amidst the AI Wave
As AI becomes the dominant force, particularly in the field of corporate communication platforms, seismic shifts are taking place in the heart of Silicon Valley. A prominent example is Slack, which has seen major changes since its acquisition by Salesforce, the San Francisco-based enterprise software company, for a staggering $27.7 billion.
Stewart Butterfield, Slack's co-founder and former CEO, stepped down only two years after the acquisition. Stepping into his shoes is Lidiane Jones, formerly of Microsoft and a rapidly ascending executive within Salesforce. Jones' ascension to the CEO role coincided almost perfectly with the launch of ChatGPT, which spotlighted the limitless potential of AI.
Jones, a Boston native with Brazilian roots, expressed her awe at the AI evolution. She stated, "In the mere span of nine months, Slack has unveiled more innovative features than the total we had launched in the preceding years." Her tenure as CEO has thus been marked by fast-paced developments, largely influenced by the capabilities of AI.
The corporate chat landscape is heating up with Microsoft as Slack's chief rival. Platforms like Slack and Microsoft's Teams were primarily conceptualized to enhance teamwork and streamline internal communication. However, recent iterations, supercharged by AI, envision these platforms morphing into virtual assistants, further aiding productivity.
Relaying her personal experience, Jones spoke about her post-vacation email backlog. Using "Slack AI", she was able to get a concise summary, making her catch-up process exponentially faster. "Instead of wading through emails for a day or even an entire week, 'Slack AI' brought me up to speed in a mere two hours," she recounted.
Jones firmly believes in AI's transformative power to distill information efficiently and even handle complex administrative tasks. She highlighted how Slack provides users direct access to generative AI chatbots from diverse vendors, including the upcoming ChatGPT from OpenAI and Claude from the startup Anthropic. Emphasizing Slack's adaptability, Jones remarked, "Our strength lies in our openness. We're fundamentally an open platform."
This adaptability can be contrasted with Microsoft's Teams. While Teams enjoys a robust user base (2019 data points to 12 million daily active users and approximately 300 million monthly users), Slack has always contended with its inclusion in Microsoft's successful Office Suite. This contention led Slack to lodge a complaint with the European Union in 2020. Although Microsoft has made concessions to Slack in Europe, the investigation remains active, hinting at possible further actions by the European authorities.
Interestingly, Microsoft's foray into generative AI has been bolstered by its significant investments in OpenAI. However, Jones is bullish on Slack's AI prospects, attributing this confidence to the quality and richness of data Slack accumulates. "Slack houses a company's collective knowledge. With employees from various departments interacting, the platform holds a goldmine of unstructured data," Jones elaborated. This wealth of contextually-rich data is what, according to Jones, empowers their AI strategies.
While Slack doesn't presently aim to develop its distinct language model akin to OpenAI's famous generative AI systems, Jones hinted at the potential creation of niche models in the future. She humorously remarked, "We don't see the need to reinvent the wheel."
Considering the future trajectory, Jones envisions Slack possibly crafting custom AI assistants with an intricate understanding of users' preferences and nuances. This AI-augmented future, she believes, would be a monumental boon, especially for professionals juggling work and personal responsibilities. However, Jones also acknowledges the challenges. Earning the trust of users to such an advanced degree "is going to take time," she concluded.
Follow Digital Fox Media for the latest technology news.
0 notes
Text
Google will add AI models from Meta and Anthropic to its Cloud Platform
Google, a subsidiary of Alphabet Inc., is integrating more generative artificial intelligence into its services and promoting itself as an all-encompassing resource for cloud users looking to access the latest developments, adding AI technologies from firms like Meta Platforms Inc. and Anthropic to its cloud platform. The Llama 2 large language model from Meta as well as the Claude 2 chatbot from artificial intelligence startup Anthropic will be accessible to Google's cloud clients, which they can then customise with company data for their services and applications.
The decision, which was made public on Tuesday during the company's Next '23 conference in San Francisco, serves as part of the business's attempt to establish its platform as one where users have a choice to select an artificial intelligence (AI) model that best suits their requirements, whether from the business itself or one of its collaborators. Google Cloud customers now have access to more than 100 potent AI models and tools, the firm claimed.
0 notes
Text
OpenAI reveals location of its first international outpost
OpenAI, the mastermind behind ChatGPT, is gearing up to establish an office in London, marking its inaugural location beyond the borders of the U.S. This Microsoft-supported venture has garnered substantial recognition since the introduction of its AI-driven chatbot, ChatGPT, in November. The text-based tool has left a profound impression on many by adeptly executing an array of tasks and engaging in conversations that closely mimic human interactions.
Based in San Francisco, OpenAI is now poised to extend its influence, establishing a presence in a city already hosting a burgeoning cluster of AI system developers, including the well-funded DeepMind from Google. In a communication posted on its website, OpenAI characterizes its inaugural international office as a "landmark in OpenAI's expansion." This move is anticipated to widen the horizons of its endeavors, expose it to diverse viewpoints, and expedite its mission to ensure that artificial general intelligence (AGI), a more advanced iteration of AI with human-like learning and thinking capabilities, serves the betterment of society.
According to OpenAI CEO Sam Altman, this expansion offers a platform to "attract top-tier talent and propel innovation in AGI advancement and policy." He further elaborated that the London office will contribute to the pursuit of "constructing and implementing secure AI."
The company's declaration follows only a few weeks after British Prime Minister Rishi Sunak underscored the urgency during a technology conference in London for the nation to act swiftly to "safeguard our status as one of the globe's tech hubs."
Beyond the wealth of local talent cultivated by esteemed universities and AI startups situated in the U.K., OpenAI might also have been drawn by the anticipation that the U.K. will adopt a lenient stance on regulating the swiftly evolving technology. This stance has been highlighted by the recent commitment to a "pro-innovation approach." Conversely, the European Union seems poised to adopt a more stringent stance, as evidenced by its AI Act. Altman recently indicated that if OpenAI cannot adhere to the EU's forthcoming regulations, the company would cease operations there, though he subsequently attempted to temper those statements.
OpenAI has yet to specify the timeline for inaugurating its London office or the number of staff it will employ. However, the company has already advertised four positions for its new location, including a security engineer and a head of U.K. policy.
0 notes