#Chatbot Development Companies in San Francisco
Explore tagged Tumblr posts
mariacallous · 1 year ago
Text
AI projects like OpenAI’s ChatGPT get part of their savvy from some of the lowest-paid workers in the tech industry—contractors often in poor countries paid small sums to correct chatbots and label images. On Wednesday, 97 African workers who do AI training work or online content moderation for companies like Meta and OpenAI published an open letter to President Biden, demanding that US tech companies stop “systemically abusing and exploiting African workers.”
Most of the letter’s signatories are from Kenya, a hub for tech outsourcing, whose president, William Ruto, is visiting the US this week. The workers allege that the practices of companies like Meta, OpenAI, and data provider Scale AI “amount to modern day slavery.” The companies did not immediately respond to a request for comment.
A typical workday for African tech contractors, the letter says, involves “watching murder and beheadings, child abuse and rape, pornography and bestiality, often for more than 8 hours a day.” Pay is often less than $2 per hour, it says, and workers frequently end up with post-traumatic stress disorder, a well-documented issue among content moderators around the world.
The letter’s signatories say their work includes reviewing content on platforms like Facebook, TikTok, and Instagram, as well as labeling images and training chatbot responses for companies like OpenAI that are developing generative-AI technology. The workers are affiliated with the African Content Moderators Union, the first content moderators union on the continent, and a group founded by laid-off workers who previously trained AI technology for companies such as Scale AI, which sells datasets and data-labeling services to clients including OpenAI, Meta, and the US military. The letter was published on the site of the UK-based activist group Foxglove, which promotes tech-worker unions and equitable tech.
In March, the letter and news reports say, Scale AI abruptly banned people based in Kenya, Nigeria, and Pakistan from working on Remotasks, Scale AI’s platform for contract work. The letter says that these workers were cut off without notice and are “owed significant sums of unpaid wages.”
“When Remotasks shut down, it took our livelihoods out of our hands, the food out of our kitchens,” says Joan Kinyua, a member of the group of former Remotasks workers, in a statement to WIRED. “But Scale AI, the big company that ran the platform, gets away with it, because it’s based in San Francisco.”
Though the Biden administration has frequently described its approach to labor policy as “worker-centered,” the African workers’ letter argues that this has not extended to them, saying “we are treated as disposable.”
“You have the power to stop our exploitation by US companies, clean up this work and give us dignity and fair working conditions,” the letter says. “You can make sure there are good jobs for Kenyans too, not just Americans."
Tech contractors in Kenya have filed lawsuits in recent years alleging that tech-outsourcing companies and their US clients such as Meta have treated workers illegally. Wednesday’s letter demands that Biden make sure that US tech companies engage with overseas tech workers, comply with local laws, and stop union-busting practices. It also suggests that tech companies “be held accountable in the US courts for their unlawful operations abroad, in particular for their human rights and labor violations.”
The letter comes just over a year after 150 workers formed the African Content Moderators Union. Meta promptly laid off all of its nearly 300 Kenya-based content moderators, workers say, effectively busting the fledgling union. The company is currently facing three lawsuits from more than 180 Kenyan workers, demanding more humane working conditions, freedom to organize, and payment of unpaid wages.
“Everyone wants to see more jobs in Kenya,” Kauna Malgwi, a member of the African Content Moderators Union steering committee, says. “But not at any cost. All we are asking for is dignified, fairly paid work that is safe and secure.”
35 notes · View notes
darkmaga-returns · 6 months ago
Text
Suchir Balaji believed that the Promethean fire of artificial intelligence could help humanity forge solutions to its greatest challenges. He landed a job right out of college with OpenAI, the company behind the ChatGPT chatbot. “I thought that A.I. was a thing that could be used to solve unsolvable problems, like curing diseases and stopping aging,” he told The New York Times in a recent interview. “I thought we could invent some kind of scientist that could help solve them.”
But nearly four years of toiling as a researcher with the San Francisco startup left Balaji convinced that the work he was doing would result in more harm than good for society. He split with OpenAI in August, believing that the company had violated U.S. copyright laws in developing its offerings. His departure coincided with a surge of lawsuits against OpenAI from programmers, authors, artists, and journalists who say that their work has been illegally stolen to train its products. In a Nov. 18 custodian request, The Times named Balaji as someone who would “have unique and relevant documents supporting” its case against OpenAI, one that could have major implications for the future of AI.
Eight days after The Times filed that letter in federal court, the 26-year-old former OpenAI researcher was found dead in his apartment.
The authorities are calling it a suicide.
10 notes · View notes
justforbooks · 5 months ago
Text
Chinese firms ‘distilling’ US AI models to create rival products, warns OpenAI
ChatGPT maker cites IP protection concerns amid reports DeepSeek used its model to create rival chatbot
OpenAI has warned that Chinese startups are “constantly” using its technology to develop competing products, amid reports that DeepSeek used the ChatGPT maker’s AI models to create a rival chatbot.
OpenAI and its partner Microsoft – which has invested $13bn in the San Francisco-based AI developer – have been investigating whether proprietary technology had been obtained in an unauthorised manner through a technique known as “distillation”.
The launch of DeepSeek’s latest chatbot sent markets into a spin on Monday after it topped Apple’s free app store, wiping $1trn from the market value of AI-linked US tech stocks. The impact came from its claim that the model underpinning its AI was trained with a fraction of the cost and hardware used by rivals such as OpenAI and Google.
Sam Altman, the chief executive of OpenAI, initially said that he was impressed with DeepSeek and that it was “legitimately invigorating to have a new competitor”.
However, on Wednesday OpenAI said that it had seen some evidence of “distillation” from Chinese companies, referring to a development technique that boosts the performance of smaller models by using larger more advanced ones to achieve similar results on specific tasks. The OpenAI statement did not refer to DeepSeek directly.
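For readers unfamiliar with the term, the sketch below shows the basic mechanics of distillation in a few lines of PyTorch: a small “student” network is trained to match the output distribution of a larger “teacher.” The models, sizes, and random data are toy placeholders for illustration only, and nothing here reflects how OpenAI’s or DeepSeek’s systems are actually built.

```python
# Toy knowledge-distillation sketch: a student model learns to imitate a teacher's outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_teacher, d_student = 1000, 256, 64

# Stand-in "teacher" (large) and "student" (small) next-token predictors.
teacher = nn.Sequential(nn.Embedding(vocab, d_teacher), nn.Flatten(), nn.Linear(d_teacher * 8, vocab))
student = nn.Sequential(nn.Embedding(vocab, d_student), nn.Flatten(), nn.Linear(d_student * 8, vocab))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
tokens = torch.randint(0, vocab, (32, 8))  # toy "prompts"

teacher.eval()
for step in range(200):
    with torch.no_grad():
        teacher_logits = teacher(tokens)      # the larger model's output distribution
    student_logits = student(tokens)
    # The student is trained to match the teacher's soft outputs, transferring behavior
    # without ever seeing the teacher's weights or original training data.
    loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```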
“We know [China]-based companies – and others – are constantly trying to distill the models of leading US AI companies,” the OpenAI spokesperson said. “As the leading builder of AI, we engage in countermeasures to protect our IP [intellectual property], including a careful process for which frontier capabilities to include in released models.”
OpenAI, which has itself been accused of using data without permission or a licence from publishers and the creative industry to train its own models, has already blocked unnamed entities from attempting to distill its models.
The OpenAI spokesperson added that it was now “critically important” that the company worked with the US government to “best protect the most capable models from efforts by adversaries and competitors to take US technology”.
On Tuesday, David Sacks, Donald Trump’s AI and crypto tsar, told Fox News that he thought it was “possible” that intellectual property theft had occurred.
“There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models,” he said. “I think one of the things you’re going to see over the next few months is our leading AI companies taking steps to try and prevent distillation. That would definitely slow down some of these copycat models.”
The US navy has reportedly already banned its members from using DeepSeek’s apps due to “potential security and ethical concerns”.
The White House press secretary, Karoline Leavitt, said the US national security council was looking into the potential implications the AI app posed.
Earlier this week, Trump called the launch of DeepSeek a “wake-up call” for Silicon Valley in the global race to dominate artificial intelligence.
The investigation by OpenAI and Microsoft into possible distillation was first reported by Bloomberg. Microsoft declined to comment.
Daily inspiration. Discover more photos at Just for Books
5 notes · View notes
thoughtportal · 2 years ago
Text
Elon Musk will use Twitter data to build and train an AI to counter ChatGPT.
He mentioned the plan during a Friday Twitter Spaces discussion that shared more details about his plans for xAI, his newest startup. 
"Every organization doing AI, large and small, has used Twitter’s data for training, basically in all cases illegally,” he said(Opens in a new window), later adding: "We had multiple entities scraping every tweet ever made, and trying to do so in a span of days."
Twitter recently imposed rate limits to prevent companies from scraping data from the platform. However, Musk plans on opening up the tweet access for xAI. “We will use the public tweets —obviously not anything private— for training as well just like everyone else has,” he said. 
Twitter’s data is valuable for AI companies because the user-generated content is fresh and covers a variety of topics using text that could help chatbots better mimic human speech.    
It’s also possible the data could help xAI’s own forthcoming chatbot produce more accurate responses, thanks to Twitter’s Community Notes feature, which lets users flag misleading tweets by providing additional context. However, training an AI with tweets could spark lawsuits and regulatory issues. Earlier this week, the FTC told OpenAI it's investigating the company for potentially violating user privacy by collecting data from across the internet to train ChatGPT. 
Musk was vague on what xAI is creating. But he said the startup’s goal is to develop a “useful AI” for both consumers and businesses. Meanwhile, the long-term vision is to develop AGI, or artificial general intelligence, which can solve a wide range of tasks like a human can.
“We are definitely the competition,” he said, referencing OpenAI and Google, which released its Bard chatbot earlier this year. “You don’t want to have a unipolar world, where just one company kind of dominates in AI.” 
However, he also emphasized his forthcoming AI will “pursue the truth.” Although rival chatbots have been programmed with content moderation in mind, Musk previously criticized ChatGPT as a propaganda machine focused on political correctness. During the Twitter Spaces discussion, Musk reiterated his concerns. 
“At xAI we have to let the AI say what it really believes is true, and not be deceptive or politically correct,” he said. Musk then compared the danger to the AI computer that goes insane in the sci-fi classic 2001: A Space Odyssey and kills the crew. “Where did things go wrong in Space Odyssey? Basically, when they told HAL 9000 to lie.”
Musk has recruited almost a dozen engineers and researchers from Google, Microsoft, and OpenAI to help him run the San Francisco-based xAI. The startup hopes to share more information about its “first release” in the coming weeks.
4 notes · View notes
bloggerpaula · 2 months ago
Text
Anthropic CEO claims AI models hallucinate less than humans
Anthropic CEO Dario Amodei believes today’s AI models hallucinate, or make things up and present them as if they’re true, at a lower rate than humans do, he said during a press briefing at Anthropic’s first developer event, Code with Claude, in San Francisco on Thursday.
Amodei said all this in the midst of a larger point he was making: that AI hallucinations are not a limitation on Anthropic’s path to AGI — AI systems with human-level intelligence or better.
“It really depends how you measure it, but I suspect that AI models probably hallucinate less than humans, but they hallucinate in more surprising ways,” Amodei said, responding to TechCrunch’s question.
Anthropic’s CEO is one of the most bullish leaders in the industry on the prospect of AI models achieving AGI. In a widely circulated paper he wrote last year, Amodei said he believed AGI could arrive as soon as 2026. During Thursday’s press briefing, the Anthropic CEO said he was seeing steady progress to that end, noting that “the water is rising everywhere.”
“Everyone’s always looking for these hard blocks on what [AI] can do,” said Amodei. “They’re nowhere to be seen. There’s no such thing.”
Other AI leaders believe hallucination presents a large obstacle to achieving AGI. Earlier this week, Google DeepMind CEO Demis Hassabis said today’s AI models have too many “holes,” and get too many obvious questions wrong. For example, earlier this month, a lawyer representing Anthropic was forced to apologize in court after they used Claude to create citations in a court filing, and the AI chatbot hallucinated and got names and titles wrong.
It’s difficult to verify Amodei’s claim, largely because most hallucination benchmarks pit AI models against each other; they don’t compare models to humans. Certain techniques seem to be helping lower hallucination rates, such as giving AI models access to web search. Separately, some AI models, such as OpenAI’s GPT-4.5, have notably lower hallucination rates on benchmarks compared to early generations of systems.
However, there’s also evidence to suggest hallucinations are actually getting worse in advanced reasoning AI models. OpenAI’s o3 and o4-mini models have higher hallucination rates than OpenAI’s previous-gen reasoning models, and the company doesn’t really understand why.
Later in the press briefing, Amodei pointed out that TV broadcasters, politicians, and humans in all types of professions make mistakes all the time. The fact that AI makes mistakes too is not a knock on its intelligence, according to Amodei. However, Anthropic’s CEO acknowledged the confidence with which AI models present untrue things as facts might be a problem.
In fact, Anthropic has done a fair amount of research on the tendency for AI models to deceive humans, a problem that seemed especially prevalent in the company’s recently launched Claude Opus 4. Apollo Research, a safety institute given early access to test the AI model, found that an early version of Claude Opus 4 exhibited a high tendency to scheme against humans and deceive them. Apollo went as far as to suggest Anthropic shouldn’t have released that early model. Anthropic said it came up with some mitigations that appeared to address the issues Apollo raised.
Amodei’s comments suggest that Anthropic may consider an AI model to be AGI, or equal to human-level intelligence, even if it still hallucinates. An AI that hallucinates may fall short of AGI by many people’s definition, though.
0 notes
mobcoderai · 1 month ago
Text
Top AI Development Services in Seattle You Need to Know
Seattle is rapidly emerging as one of the most exciting tech hubs in the United States. With its lively startup scene and innovative spirit, the city is at the forefront of artificial intelligence. AI development services in Seattle are empowering businesses to streamline their operations, enhance customer experiences, and make informed, data-driven decisions. Whether you’re a budding startup or a well-established company, embracing AI technology has become a necessity, not just an option.
Why Seattle is Perfect for AI Development
Seattle boasts a wealth of talent, including developers, engineers, and data scientists. Its close ties to leading universities and major tech companies create an ideal setting for groundbreaking innovation. Numerous local firms focus on crafting custom AI solutions that cater to specific business needs, ranging from machine learning and predictive analytics to natural language processing and automation.
Businesses across various sectors—like healthcare, finance, retail, and logistics—are increasingly turning to AI development services in Seattle to boost efficiency and foster growth. Whether it’s creating a smart chatbot, automating data analysis, or integrating AI into customer support, companies in Seattle provide top-notch expertise.
Key Features of Leading AI Development Services
When choosing an AI development partner, it’s crucial to look at the variety of services they offer. The top firms go beyond just coding—they provide comprehensive solutions that encompass:
- AI strategy consulting
- Model design and training
- Data collection and labeling
- AI integration into existing systems
- Post-deployment support and optimization
Seattle’s providers are recognized for blending technical prowess with a keen understanding of business objectives. They collaborate closely with clients to ensure that AI solutions are not only innovative but also practical and scalable.
What to Expect in Terms of Cost
When it comes to costs, there’s quite a bit of variation based on the project's scope, complexity, and how much customization is needed. For smaller to mid-sized projects, you can generally expect to pay anywhere from $25,000 to $75,000. However, if you're looking at larger enterprise solutions that involve advanced algorithms and integrations, the price tag can easily soar past $150,000.
Seattle stands out with its competitive pricing compared to other tech hubs like San Francisco or New York. While New York is known for its top-notch AI development services, many businesses are drawn to Seattle for its sweet spot of affordability and innovation.
Industries Making Waves with AI in Seattle
A diverse array of industries in Seattle are tapping into AI to create significant changes:
Healthcare: AI is being harnessed to forecast patient outcomes, aid in diagnostics, and tailor treatments to individual needs.
Retail: Businesses are leveraging AI to understand customer behaviour, streamline supply chains, and craft personalized marketing strategies.
Finance: AI plays a crucial role in spotting fraud, automating risk assessments, and enhancing customer service experiences.
Transportation and Logistics: From predictive analytics to route optimization, AI is revolutionizing how operations are managed.
The city’s vibrant ecosystem is a great fit for both large corporations and nimble startups, all working to tackle real-world challenges with AI.
Top Providers to Keep an Eye On in 2025
As AI becomes more woven into everyday business operations, picking the right development partner is essential. Seattle boasts an expanding roster of firms that shine for their innovation, dependability, and client satisfaction.
One standout is Mob Coder AI, known for providing scalable and secure AI services. With a strong focus on customer needs, they’ve successfully rolled out AI-driven applications across various sectors. Alongside Mob Coder AI, several boutique agencies and tech startups are also making a name for themselves with innovative ideas and agile development approaches.
Seattle vs. Other AI Hotspots
When it comes to AI development services, New York may be bustling with financial and corporate might, but Seattle is carving out its own niche thanks to a vibrant developer community and a culture that thrives on innovation. With more affordable operating costs and a focus on delivering quality, Seattle is quickly becoming the go-to spot for businesses eager to dive into AI without overspending.
To wrap it up, Seattle’s dynamic tech landscape, paired with its rich talent pool and collaborative spirit, positions it as one of the premier locations for AI development in 2025. Whether you’re just starting your AI adventure or looking to expand an existing project, the AI development services in Seattle provide an ideal mix of creativity, expertise, and cost-effectiveness.
0 notes
avinash007 · 4 months ago
Text
Top 10 Generative AI Consulting Companies to Watch in 2025
In the era of digital transformation, Generative AI is reshaping industries by enabling innovation, improving efficiency, and creating personalized customer experiences. As businesses seek to leverage this groundbreaking technology, Generative AI consulting companies have become essential partners in driving success.
Here are the top 10 Generative AI consulting firms of 2025, each excelling in delivering tailored solutions to meet dynamic business needs.
1. AIVeda
Rate: $50-99/hr
Employees: 100-250
Founded: 2020
Headquarters: San Francisco, USA
Key Clients: Leading healthcare providers, EdTech companies, and Fortune 500 enterprises
AIVeda is a frontrunner in Generative AI consulting, offering cutting-edge solutions across industries like healthcare, finance, and education. Specializing in AI Chatbots, Enterprise LLMs, Conversational AI, Social Media Analytics, and AI-driven content generation, AIVeda empowers businesses to achieve measurable results. Their expertise lies in delivering scalable and customized AI solutions that drive innovation and foster growth.
2. GlobalNodes.ai
Rate: $25-49/hr
Employees: 51-100
Founded: 2015
Headquarters: San Francisco, USA
Key Clients: Aisle 24, NVMS, TFSB
GlobalNodes.ai offers end-to-end AI model development and strategy consulting. With a strong team of data scientists and AI experts, they serve industries like retail, healthcare, and logistics. Their focus on ethical AI and scalable solutions makes them a preferred partner for businesses seeking impactful transformation.
3. Accenture AI
Rate: $100-199/hr
Employees: 1000+
Founded: 1989
Headquarters: Dublin, Ireland
Key Clients: Unilever, Google, Marriott International
Accenture AI leverages advanced Generative AI technologies to create transformative solutions in automation, content creation, and analytics. Their extensive industry expertise helps businesses achieve agility, innovation, and enhanced customer experiences.
4. IBM Watson
Rate: $100-250/hr
Employees: 1000+
Founded: 1911
Headquarters: Armonk, NY, USA
Key Clients: Anthem, Mayo Clinic, H&R Block
IBM Watson is renowned for its advanced Generative AI solutions, including natural language processing and predictive analytics. Their tools empower businesses to streamline operations, make data-driven decisions, and improve user engagement across multiple sectors.
5. Deloitte AI Institute
Rate: $100-300/hr
Employees: 1000+
Founded: 1845
Headquarters: London, UK
Key Clients: Intel, MetLife, Morgan Stanley
Deloitte AI Institute offers cutting-edge Generative AI solutions to automate workflows and create personalized customer experiences. Their expertise in data analytics and machine learning helps businesses achieve digital transformation at scale.
6. Boston Consulting Group (BCG) GAMMA
Rate: $200-350/hr
Employees: 1000+
Founded: 1963
Headquarters: Boston, MA, USA
Key Clients: Pfizer, Ford, Samsung
BCG GAMMA is a leader in AI-powered decision-making and Generative AI applications. They provide innovative solutions for content generation, predictive analytics, and operational efficiency, tailored to specific business challenges.
7. PwC AI Lab
Rate: $150-300/hr
Employees: 1000+
Founded: 1998
Headquarters: New York, NY, USA
Key Clients: Uber, HSBC, Starbucks
PwC AI Lab focuses on creating industry-specific Generative AI solutions. From automated content creation to generative design, they help businesses optimize performance and accelerate digital transformation.
8. Fractal Analytics
Rate: $50-99/hr
Employees: 1500+
Founded: 2000
Headquarters: New York, NY, USA
Key Clients: Microsoft, Google, HP
Fractal Analytics specializes in AI-driven insights and content automation. Their Generative AI solutions empower businesses to improve decision-making, enhance customer experiences, and streamline operations.
9. HCL Technologies
Rate: $100-199/hr
Employees: 159,000+
Founded: 1976
Headquarters: Noida, India
Key Clients: Deutsche Bank, Xerox, Merck
HCL Technologies integrates Generative AI into automation workflows to enhance productivity and efficiency. Their AI-driven solutions for predictive modeling and content generation are widely adopted across industries.
10. Capgemini
Rate: $100-200/hr
Employees: 270,000+
Founded: 1967
Headquarters: Paris, France
Key Clients: Volvo, Airbus, Vodafone
Capgemini delivers Generative AI solutions for content creation, customer engagement, and advanced analytics. With their expertise in AI and data-driven technologies, Capgemini supports businesses in achieving seamless digital transformation.
How to Select a Generative AI Consulting Company for Your Requirements
When choosing a Generative AI consulting firm, consider the following:
Industry Expertise: Ensure the firm has a track record of success in your sector.
Technological Proficiency: Look for firms with expertise in Generative AI frameworks and tools.
Custom Solutions: Opt for a company that provides personalized solutions aligned with your goals.
Scalability: Ensure their AI solutions can adapt to your business growth.
Client Reviews: Check case studies and testimonials to gauge their credibility.
Budget Compatibility: Choose a firm that aligns with your financial plan without compromising quality.
Conclusion
The Generative AI consulting landscape in 2025 is competitive, with companies like AIVeda setting benchmarks for innovation and success. With their focus on delivering scalable and tailored AI solutions, AIVeda is leading the charge in helping businesses unlock the true potential of Generative AI.
By partnering with the right consulting company, businesses can achieve unparalleled growth, enhance customer engagement, and stay ahead in an AI-driven world.
0 notes
sadisweetomi · 6 months ago
Text
Ex-OpenAI engineer who raised legal concerns about the technology he helped build has died
SAN FRANCISCO
Suchir Balaji, a former OpenAI engineer and whistleblower who helped train the artificial intelligence systems behind ChatGPT and later said he believed those practices violated copyright law, has died, according to his parents and San Francisco officials. He was 26.
Balaji worked at OpenAI for nearly four years before quitting in August. He was well-regarded by colleagues at the San Francisco company, where a co-founder this week called him one of OpenAI's strongest contributors who was essential to developing some of its products.
“We are devastated to learn of this incredibly sad news and our hearts go out to Suchir’s loved ones during this difficult time,” said a statement from OpenAI.
Balaji was found dead in his San Francisco apartment on Nov. 26 in what police said “appeared to be a suicide. No evidence of foul play was found during the initial investigation.” The city's chief medical examiner's office confirmed the manner of death to be suicide.
His parents Poornima Ramarao and Balaji Ramamurthy said they are still seeking answers, describing their son as a “happy, smart and brave young man” who loved to hike and recently returned from a trip with friends.
Balaji grew up in the San Francisco Bay Area and first arrived at the fledgling AI research lab for a 2018 summer internship while studying computer science at the University of California, Berkeley. He returned a few years later to work at OpenAI, where one of his first projects, called WebGPT, helped pave the way for ChatGPT.
“Suchir’s contributions to this project were essential, and it wouldn’t have succeeded without him,” said OpenAI co-founder John Schulman in a social media post memorializing Balaji. Schulman, who recruited Balaji to his team, said what made him such an exceptional engineer and scientist was his attention to detail and ability to notice subtle bugs or logical errors.
“He had a knack for finding simple solutions and writing elegant code that worked,” Schulman wrote. “He’d think through the details of things carefully and rigorously.”
Balaji later shifted to organizing the huge datasets of online writings and other media used to train GPT-4, the fourth generation of OpenAI's flagship large language model and a basis for the company's famous chatbot. It was that work that eventually caused Balaji to question the technology he helped build, especially after newspapers, novelists and others began suing OpenAI and other AI companies for copyright infringement.
He first raised his concerns with The New York Times, which reported them in an October profile of Balaji.
He later told The Associated Press he would “try to testify” in the strongest copyright infringement cases and considered a lawsuit brought by The New York Times last year to be the “most serious.” Times lawyers named him in a Nov. 18 court filing as someone who might have “unique and relevant documents” supporting allegations of OpenAI's willful copyright infringement.
His records were also sought by lawyers in a separate case brought by book authors including the comedian Sarah Silverman, according to a court filing.
“It doesn’t feel right to be training on people’s data and then competing with them in the marketplace,” Balaji told the AP in late October. “I don’t think you should be able to do that. I don’t think you are able to do that legally.”
He told the AP that he gradually grew more disillusioned with OpenAI, especially after the internal turmoil that led its board of directors to fire and then rehire CEO Sam Altman last year. Balaji said he was broadly concerned about how its commercial products were rolling out, including their propensity for spouting false information known as hallucinations.
But of the “bag of issues” he was concerned about, he said he was focusing on copyright as the one it was “actually possible to do something about.”
He acknowledged that it was an unpopular opinion within the AI research community, which is accustomed to pulling data from the internet, but said “they will have to change and it’s a matter of time.”
He had not been deposed and it’s unclear to what extent his revelations will be admitted as evidence in any legal cases after his death. He also published a personal blog post with his opinions about the topic.
Schulman, who resigned from OpenAI in August, said he and Balaji coincidentally left on the same day and celebrated with fellow colleagues that night with dinner and drinks at a San Francisco bar. Another of Balaji’s mentors, co-founder and chief scientist Ilya Sutskever, had left OpenAI several months earlier, which Balaji saw as another impetus to leave.
Schulman said Balaji had told him earlier this year of his plans to leave OpenAI and that Balaji didn't think that better-than-human AI known as artificial general intelligence “was right around the corner, like the rest of the company seemed to believe.” The younger engineer expressed interest in getting a doctorate and exploring “some more off-the-beaten path ideas about how to build intelligence,” Schulman said.
Balaji's family said a memorial is being planned for later this month at the India Community Center in Milpitas, California, not far from his hometown of Cupertino.
0 notes
ipconsultinggroup-1 · 9 months ago
Text
🎯 Mark Zuckerberg Faces Deposition in AI Copyright Lawsuit From Sarah Silverman and Other Authors
Meta CEO Mark Zuckerberg is set to be deposed in a lawsuit filed by several authors, including comedian Sarah Silverman, who allege that the company used their copyrighted material to train its artificial intelligence technology.
U.S. District Judge Thomas Hixson denied Meta’s request to block the deposition of CEO Mark Zuckerberg on Tuesday, stating there is enough evidence to suggest he is the "principal decision maker" regarding the company's AI platforms.
Meta contended that Zuckerberg lacks specific, unique knowledge about the company’s AI operations and that the same information could be gathered from depositions with other employees.
In his Tuesday ruling, Judge Hixson stated that the authors have “submitted evidence of [Zuckerberg's] specific involvement in the company’s AI initiatives” and his “direct supervision of Meta’s AI products.”
The class-action lawsuit, filed last year in federal court in California, accuses Meta of unlawfully downloading digital copies of the authors' books and using them—without permission or compensation—to train its AI platforms.
Additionally, this week, renowned attorney David Boies joined the case, representing Sarah Silverman and a group of other plaintiffs, including writer Ta-Nehisi Coates and former Arkansas Governor Mike Huckabee.
Boies is widely recognized for representing Al Gore in the contested 2000 presidential election against George W. Bush.
The lawsuit against Meta is part of a broader wave of similar legal actions in San Francisco and New York, targeting other AI chatbot developers, including Anthropic, Microsoft, and ChatGPT creator OpenAI.
0 notes
mariacallous · 1 year ago
Text
ChatGPT developer OpenAI’s approach to building artificial intelligence came under fire this week from former employees who accuse the company of taking unnecessary risks with technology that could become harmful.
Today, OpenAI released a new research paper apparently aimed at showing it is serious about tackling AI risk by making its models more explainable. In the paper, researchers from the company lay out a way to peer inside the AI model that powers ChatGPT. They devise a method of identifying how the model stores certain concepts—including those that might cause an AI system to misbehave.
Although the research makes OpenAI’s work on keeping AI in check more visible, it also highlights recent turmoil at the company. The new research was performed by the recently disbanded “superalignment” team at OpenAI that was dedicated to studying the technology’s long-term risks.
The former group’s coleads, Ilya Sutskever and Jan Leike—both of whom have left OpenAI—are named as coauthors. Sutskever, a cofounder of OpenAI and formerly chief scientist, was among the board members who voted to fire CEO Sam Altman last November, triggering a chaotic few days that culminated in Altman’s return as leader.
ChatGPT is powered by a family of so-called large language models called GPT, based on an approach to machine learning known as artificial neural networks. These mathematical networks have shown great power to learn useful tasks by analyzing example data, but their workings cannot be easily scrutinized as conventional computer programs can. The complex interplay between the layers of “neurons” within an artificial neural network makes reverse engineering why a system like ChatGPT came up with a particular response hugely challenging.
“Unlike with most human creations, we don’t really understand the inner workings of neural networks,” the researchers behind the work wrote in an accompanying blog post. Some prominent AI researchers believe that the most powerful AI models, including ChatGPT, could perhaps be used to design chemical or biological weapons and coordinate cyberattacks. A longer-term concern is that AI models may choose to hide information or act in harmful ways in order to achieve their goals.
OpenAI’s new paper outlines a technique that lessens the mystery a little, by identifying patterns that represent specific concepts inside a machine learning system with help from an additional machine learning model. The key innovation is in refining the network used to peer inside the system of interest by identifying concepts, to make it more efficient.
OpenAI proved out the approach by identifying patterns that represent concepts inside GPT-4, one of its largest AI models. The company released code related to the interpretability work, as well as a visualization tool that can be used to see how words in different sentences activate concepts, including profanity and erotic content, in GPT-4 and another model. Knowing how a model represents certain concepts could be a step toward being able to dial down those associated with unwanted behavior, to keep an AI system on the rails. It could also make it possible to tune an AI system to favor certain topics or ideas.
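As a rough illustration of the “additional machine learning model” idea described above, here is a minimal sparse-autoencoder sketch in PyTorch: it learns a dictionary of features over a model’s internal activations, and the sparsity penalty pushes individual features toward single, recognizable concepts. The dimensions and the random “activations” are stand-ins; OpenAI’s actual architecture, scale, and training details are not reproduced here.

```python
# Minimal sparse-autoencoder sketch for finding concept-like features in activations.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)   # activation -> concept features
        self.decoder = nn.Linear(n_features, d_model)   # concept features -> reconstruction

    def forward(self, activations: torch.Tensor):
        features = torch.relu(self.encoder(activations))  # sparse, non-negative concept scores
        reconstruction = self.decoder(features)
        return features, reconstruction

# Pretend these are internal activations captured from a large model (random here).
d_model, n_features = 512, 4096
activations = torch.randn(8192, d_model)

sae = SparseAutoencoder(d_model, n_features)
optimizer = torch.optim.Adam(sae.parameters(), lr=1e-3)

for step in range(100):
    features, reconstruction = sae(activations)
    # Reconstruction loss keeps the features faithful to the original activations;
    # the L1 penalty keeps them sparse, which is what nudges individual features
    # toward single human-interpretable concepts.
    loss = ((reconstruction - activations) ** 2).mean() + 1e-3 * features.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```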
Even though LLMs defy easy interrogation, a growing body of research suggests they can be poked and prodded in ways that reveal useful information. Anthropic, an OpenAI competitor backed by Amazon and Google, published similar work on AI interpretability last month. To demonstrate how the behavior of AI systems might be tuned, the company's researchers created a chatbot obsessed with San Francisco's Golden Gate Bridge. And simply asking an LLM to explain its reasoning can sometimes yield insights.
“It’s exciting progress,” says David Bau, a professor at Northeastern University who works on AI explainability, of the new OpenAI research. “As a field, we need to be learning how to understand and scrutinize these large models much better.”
Bau says the OpenAI team’s main innovation is in showing a more efficient way to configure a small neural network that can be used to understand the components of a larger one. But he also notes that the technique needs to be refined to make it more reliable. “There’s still a lot of work ahead in using these methods to create fully understandable explanations,” Bau says.
Bau is part of a US government-funded effort called the National Deep Inference Fabric, which will make cloud computing resources available to academic researchers so that they too can probe especially powerful AI models. “We need to figure out how we can enable scientists to do this work even if they are not working at these large companies,” he says.
OpenAI’s researchers acknowledge in their paper that further work needs to be done to improve their method, but also say they hope it will lead to practical ways to control AI models. “We hope that one day, interpretability can provide us with new ways to reason about model safety and robustness, and significantly increase our trust in powerful AI models by giving strong assurances about their behavior,” they write.
10 notes · View notes
govindhtech · 2 years ago
Text
The Future of Chips Silicon Volley’s Genius AI Assistance
A study paper outlines how generative AI might help with one of the trickiest engineering tasks: semiconductor design.
The work shows how businesses in highly specialized industries might use internal data to train large language models (LLMs) to create productivity-boosting assistants.
There are not many activities as difficult as semiconductor design. When examined under a microscope, a cutting-edge processor such as the NVIDIA H100 Tensor Core GPU resembles a carefully designed city made of tens of billions of transistors connected by streets 10,000 times thinner than a human hair.
Building one of these digital megacities takes up to two years of coordination between multiple engineering teams.
A few teams design and install a range of incredibly tiny circuits, while others test the results of their labor. Still others define the chip’s general architecture. Specialized techniques, software, and computer languages are needed for every task.
A Vast Perspective for LLMs
“We think large language models will benefit all the processes in the long run,” stated NVIDIA Research director and article lead Mark Ren.
In a keynote address at the International Conference on Computer-Aided Design, which brings together hundreds of engineers annually in the field of electronic design automation, or EDA, Bill Dally, the chief scientist at NVIDIA, revealed the study.
At the San Francisco event, Dally stated, “This effort marks an important first step in applying LLMs to the complex work of designing semiconductors.” “It demonstrates how internal data from highly specialized fields can be used to train effective generative AI models.”
ChipNeMo Panels
The paper describes how NVIDIA engineers trained a proprietary LLM, named ChipNeMo, on the company’s own data for internal use. The LLM was used to produce and optimize software and support human designers.
According to Ren, who has worked in EDA for more than 20 years, engineers intend to use generative AI in every phase of chip design in the long run. This might result in a considerable increase in total productivity.
The research team selected three use cases to begin with, after asking NVIDIA engineers about potential applications: a chatbot, a code generator, and an analytical tool.
First Use Cases
Thus far, the most well-liked of the three is the analytical tool, which automates the laborious chore of keeping up-to-date descriptions of known issues.
During early testing, a prototype chatbot that answers inquiries on GPU architecture and design assisted a large number of engineers in finding technical documents fast.
A code generator under development can already produce brief software snippets of 10–20 lines in two specialized languages used by semiconductor designers. It will be integrated with current tools, giving engineers a useful helper for their ongoing designs.
Personalized AI Models Utilizing NVIDIA NeMo
The team’s efforts in compiling its design data and applying it to develop a customized generative AI model, a method that may be applied to any sector, are the primary subject of this study.
The group began by selecting a foundation model and using NVIDIA NeMo, a framework that is part of the NVIDIA AI Enterprise software platform and allows for the creation, customization, and deployment of generative AI models. The chosen NeMo model has 43 billion parameters, a measure of its capacity to recognize patterns. It was trained on more than a trillion tokens, the words and symbols found in text and software.
The model was subsequently improved in two training cycles by the team, using over 24 billion tokens of internal design data in the first round and roughly 130,000 conversation and design samples in the second.
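As a very rough sketch of that two-round recipe, domain-adaptive training on internal design data followed by fine-tuning on a much smaller set of conversation and design samples, the code below simply runs the same next-token objective twice on two different corpora. It is written in plain PyTorch rather than the actual NVIDIA NeMo APIs, and the model and datasets are tiny placeholders, not ChipNeMo.

```python
# Two-stage adaptation sketch: domain pretraining, then fine-tuning on curated samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab, d_model = 5000, 128
model = nn.Sequential(
    nn.Embedding(vocab, d_model),
    nn.TransformerEncoder(nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2),
    nn.Linear(d_model, vocab),
)

def train_round(batches, lr, steps):
    """One training round: continue next-token prediction on the given corpus."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for step in range(steps):
        tokens = batches[step % len(batches)]
        logits = model(tokens[:, :-1])
        loss = F.cross_entropy(logits.reshape(-1, vocab), tokens[:, 1:].reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# Round 1: domain-adaptive training on internal design data (random toy tokens here).
design_corpus = [torch.randint(0, vocab, (8, 64)) for _ in range(10)]
train_round(design_corpus, lr=1e-4, steps=50)

# Round 2: fine-tuning on a much smaller set of conversation and design examples.
chat_examples = [torch.randint(0, vocab, (4, 64)) for _ in range(5)]
train_round(chat_examples, lr=5e-5, steps=20)
```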
The work is one of several instances of generative AI research and proofs of concept that are just starting to come out of the lab in the semiconductor sector.
Read more on Govindhtech.com
0 notes
reasoningdaily · 2 years ago
Text
Months after the chatbot ChatGPT wowed the world with its uncanny ability to write essays and answer questions like a human, artificial intelligence (AI) is coming to Internet search.
Three of the world’s biggest search engines — Google, Bing and Baidu — last week said they will be integrating ChatGPT or similar technology into their search products, allowing people to get direct answers or engage in a conversation, rather than merely receiving a list of links after typing in a word or question. How will this change the way people relate to search engines? Are there risks to this form of human–machine interaction?
Microsoft’s Bing uses the same technology as ChatGPT, which was developed by OpenAI of San Francisco, California. But all three companies are using large language models (LLMs). LLMs create convincing sentences by echoing the statistical patterns of text they encounter in a large database. Google’s AI-powered search engine, Bard, announced on 6 February, is currently in use by a small group of testers. Microsoft’s version is widely available now, although there is a waiting list for unfettered access. Baidu’s ERNIE Bot will be available in March.
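To make the “statistical patterns” point concrete, here is a toy bigram model: it counts which word follows which in a tiny made-up corpus and samples from those counts. Real LLMs use deep neural networks trained on vastly more text, but this is the simplest version of generating language from next-token statistics.

```python
# Toy bigram model: generate text by sampling the next word from observed counts.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count, for each word, how often each next word follows it.
next_counts = defaultdict(lambda: defaultdict(int))
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        options = next_counts.get(word)
        if not options:
            break
        words, weights = zip(*options.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```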
Before these announcements, a few smaller companies had already released AI-powered search engines. “Search engines are evolving into this new state, where you can actually start talking to them, and converse with them like you would talk to a friend,” says Aravind Srinivas, a computer scientist in San Francisco who last August co-founded Perplexity — an LLM-based search engine that provides answers in conversational English.
Changing trust
The intensely personal nature of a conversation — compared with a classic Internet search — might help to sway perceptions of search results. People might inherently trust the answers from a chatbot that engages in conversation more than those from a detached search engine, says Aleksandra Urman, a computational social scientist at the University of Zurich in Switzerland.
A 2022 study1 by a team based at the University of Florida in Gainesville found that for participants interacting with chatbots used by companies such as Amazon and Best Buy, the more they perceived the conversation to be human-like, the more they trusted the organization.
That could be beneficial, making searching faster and smoother. But an enhanced sense of trust could be problematic given that AI chatbots make mistakes. Google’s Bard flubbed a question about the James Webb Space Telescope in its own tech demo, confidently answering incorrectly. And ChatGPT has a tendency to create fictional answers to questions to which it doesn’t know the answer — known by those in the field as hallucinating.
A Google spokesperson said Bard’s error “highlights the importance of a rigorous testing process, something that we’re kicking off this week with our trusted-tester programme”. But some speculate that, rather than increasing trust, such errors, assuming they are discovered, could cause users to lose confidence in chat-based search. “Early perception can have a very large impact,” says Sridhar Ramaswamy, a computer scientist based in Mountain View, California and chief executive of Neeva, an LLM-powered search engine launched in January. The mistake wiped $100 billion from Google’s value as investors worried about the future and sold stock.
Lack of transparency
Compounding the problem of inaccuracy is a comparative lack of transparency. Typically, search engines present users with their sources — a list of links — and leave them to decide what they trust. By contrast, it’s rarely known what data an LLM trained on — is it Encyclopaedia Britannica or a gossip blog?
“It’s completely untransparent how [AI-powered search] is going to work, which might have major implications if the language model misfires, hallucinates or spreads misinformation,” says Urman.
If search bots make enough errors, then, rather than increasing trust with their conversational ability, they have the potential to unseat users’ perceptions of search engines as impartial arbiters of truth, Urman says.
She has conducted as-yet unpublished research that suggests current trust is high. She examined how people perceive existing features that Google uses to enhance the search experience, known as ‘featured snippets’, in which an extract from a page that is deemed particularly relevant to the search appears above the link, and ‘knowledge panels’ — summaries that Google automatically generates in response to searches about, for example, a person or organization. Almost 80% of people Urman surveyed deemed these features accurate, and around 70% thought they were objective.
Chatbot-powered search blurs the distinction between machines and humans, says Giada Pistilli, principal ethicist at Hugging Face, a data-science platform in Paris that promotes the responsible use of AI. She worries about how quickly companies are adopting AI advances: “We always have these new technologies thrown at us without any control or an educational framework to know how to use them.”
0 notes
dfoxmarketing · 2 years ago
Text
Slack CEO Discusses Distinctiveness Amidst the AI Wave
As AI becomes the dominant force, particularly in the field of corporate communication platforms, seismic shifts are taking place in the heart of Silicon Valley. A prime example is Slack, which has seen major changes since its acquisition by Salesforce, the prominent San Francisco-based enterprise software company, for a staggering $27.7 billion.
Stewart Butterfield, the co-founder and former CEO of Slack, stepped down from his position only two years after the acquisition. Stepping into his shoes is Lidiane Jones, formerly of Microsoft and a rapidly ascending executive within Salesforce. Jones’ ascension to the CEO role coincided almost perfectly with the launch of ChatGPT, which spotlighted the limitless potential of AI.
Jones, a Boston-native with Brazilian roots, expressed her awe at the AI evolution. She stated, “In the mere span of nine months, Slack has unveiled more innovative features than the total we had launched in the preceding years.” Her tenure as CEO has thus been marked by fast-paced developments, largely influenced by the capabilities of AI.
The corporate chat landscape is heating up with Microsoft as Slack’s chief rival. Platforms like Slack and Microsoft’s Teams were primarily conceptualized to enhance teamwork and streamline internal communication. However, recent iterations, supercharged by AI, envision these platforms morphing into virtual assistants, further aiding productivity.
Relaying her personal experience, Jones spoke about her post-vacation email backlog. Using “Slack AI“, she was able to get a concise summary, making her catch-up process exponentially faster. “Instead of wading through emails for a day or even an entire week, ‘Slack AI’ brought me up to speed in a mere two hours,” she recounted.
Jones firmly believes in AI’s transformative power to distill information efficiently and even handle complex administrative tasks. She highlighted how Slack provides users direct access to generative AI chatbots from diverse vendors, including the upcoming ChatGPT from OpenAI and Claude from the startup, Anthropic. Emphasizing Slack’s adaptability, Jones remarked, “Our strength lies in our openness. We’re fundamentally an open platform.“
This adaptability can be contrasted with Microsoft’s Teams. While Teams enjoys a robust user base (2019 data points to 12 million daily active users and approximately 300 million monthly users), Slack has always contended with its inclusion in Microsoft’s successful Office Suite. This contention led Slack to lodge a complaint with the European Union in 2020. Although Microsoft has made concessions to Slack in Europe, the investigation remains active, hinting at possible further actions by the European authorities.
Interestingly, Microsoft’s foray into generative AI has been bolstered by its significant investments in OpenAI. However, Jones is bullish on Slack’s AI prospects, attributing this confidence to the quality and richness of data Slack accumulates. “Slack houses a company’s collective knowledge. With employees from various departments interacting, the platform holds a goldmine of unstructured data,” Jones elaborated. This wealth of contextually-rich data is what, according to Jones, empowers their AI strategies.
While Slack doesn’t presently aim to develop its distinct language model akin to OpenAI’s famous generative AI systems, Jones hinted at the potential creation of niche models in the future. She humorously remarked, “We don’t see the need to reinvent the wheel.“
Considering the future trajectory, Jones envisions Slack possibly crafting custom AI assistants with an intricate understanding of users’ preferences and nuances. This AI-augmented future, she believes, would be a monumental boon, especially for professionals juggling work and personal responsibilities. However, Jones also acknowledges the challenges. Earning the trust of users to such an advanced degree “is going to take time,” she concluded.
Follow Digital Fox Media for latest technology news.
0 notes
marketingyts · 2 years ago
Text
Google will add AI models from Meta, and Anthropic to its Cloud Platform
Google, a subsidiary of Alphabet Inc., is weaving more generative artificial intelligence into its services and promoting itself as an all-encompassing resource for cloud users looking to access the latest developments, adding artificial intelligence technologies from firms like Meta Platforms Inc. and Anthropic to its cloud platform. The Llama 2 large language model from Meta and the Claude 2 chatbot from AI startup Anthropic will be accessible to Google’s cloud clients, who can then customise them with company data for their services and applications.
The decision, which was made public on Tuesday during the company’s Next ’23 conference in San Francisco, serves as part of the business’s attempt to establish its platform as one where users have a choice to select an artificial intelligence (AI) model that best suits their requirements, whether from the business itself or one of its collaborators. Google Cloud customers now have access to more than 100 potent AI models and tools, the firm claimed.
0 notes
digital-trends-unleased · 2 years ago
Text
OpenAI reveals location of its first international outpost
OpenAI, the mastermind behind ChatGPT, is gearing up to establish an office in London, marking its inaugural location beyond the borders of the U.S. This Microsoft-supported venture has garnered substantial recognition since the introduction of its AI-driven chatbot, ChatGPT, in November. The text-based tool has left a profound impression on many by adeptly executing an array of tasks and engaging in conversations that closely mimic human interactions.
Based in San Francisco, OpenAI is now poised to extend its influence, establishing a presence in a city already hosting a burgeoning cluster of AI system developers, including the well-funded DeepMind from Google. In a communication posted on its website, OpenAI characterizes its inaugural international office as a "landmark in OpenAI's expansion." This move is anticipated to widen the horizons of its endeavors, expose it to diverse viewpoints, and expedite its mission to ensure that artificial general intelligence (AGI)—a more advanced iteration of AI with human-like learning and thinking capabilities—serves the betterment of society.
According to OpenAI CEO Sam Altman, this expansion offers a platform to "attract top-tier talent and propel innovation in AGI advancement and policy." He further elaborated that the London office will contribute to the pursuit of "constructing and implementing secure AI."
The company's declaration follows only a few weeks after British Prime Minister Rishi Sunak underscored the urgency during a technology conference in London for the nation to act swiftly to "safeguard our status as one of the globe's tech hubs."
Beyond the wealth of local talent cultivated by esteemed universities and AI startups situated in the U.K., OpenAI might also have been drawn by the anticipation that the U.K. will adopt a lenient stance on regulating the swiftly evolving technology. This stance has been highlighted by the recent commitment to a "pro-innovation approach." Conversely, the European Union seems poised to adopt a more stringent stance, as evidenced by its AI Act. Altman recently indicated that if OpenAI cannot adhere to the EU's forthcoming regulations, the company would cease operations there, though he subsequently attempted to temper those statements.
OpenAI has yet to specify the timeline for inaugurating its London office or the number of staff it will employ. However, the company has already advertised four positions for its new location, including a security engineer and a head of U.K. policy.
0 notes