#gemini ai
madame-helen · 2 months
Text
231 notes · View notes
stupittmoran · 2 months
Text
Mark Zuckerberg after a day at the beach
17 notes · View notes
Text
By: Thomas Barrabi
Published: Feb. 21, 2024
Google’s highly-touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.
Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 
Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.
Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.
Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.
[ Google admitted its image tool was “missing the mark.” ]
[ Google debuted Gemini’s image generation tool last week. ]
Another showed a black man appearing to represent George Washington, in a white wig and wearing an Army uniform.
When asked why it had deviated from its original prompt, Gemini replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.
Generative AI tools like Gemini are designed to create content within certain parameters, leading many critics to slam Google for its progressive-minded settings. 
Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”
Google said it was aware of the criticism and is actively working on a fix.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
Social media users had a field day creating queries that provided confounding results.
“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts about Gemini on the social media platform quickly went viral.
In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders that once terrorized Europe.
The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.
Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.
Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.
“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.
Another prompt to “depict the Girl with a Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”
Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.
[ In one case, Gemini generated pictures of “diverse” representations of the pope. ]
[ Critics accused Google Gemini of valuing diversity over historical or factual accuracy. ]
The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.
Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.
In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.
==
Here's the thing: this does not and cannot happen by accident. Models like Gemini draw their results from publicly available material. It's entirely possible someone has made fan art of "Girl with a Pearl Earring" with an alternate ethnicity, but there are thousands of images of the original painting. Similarly, find a source for an Asian female NHL player, I dare you.
While this may seem amusing and trivial, the more insidious and much larger issue is that they're deliberately programming Gemini to lie.
As you can see from the examples above, it disregards what you want or ask, and gives you what it prefers to give you instead. When you ask a question, it's programmed to tell you what the developers want you to know or believe. This is profoundly unethical.
12 notes · View notes
photolable · 20 days
Text
Tendency.
2 notes · View notes
ultramaga · 2 months
Text
2 notes · View notes
GeminAi Review: A Breakthrough in AI Technology
GeminAi is an app that leverages the power of Gemini AI, a generative AI that can create diverse and realistic content from scratch, such as text, images, audio, video, and code. GeminAi can provide you with instant answers across these modalities, and generate original and engaging content for any niche. GeminAi also outperforms other AI solutions, such as ChatGPT-4, in terms of speed, accuracy, and creativity.
GeminAI Review: What It Can Do For You
In the next few minutes, you will come across one innovation that will open a world of new possibilities.
World’s First True Google Gemini™-Powered AI Chatbot.
Unlimited usage without any restriction.
Elevate Content Creation: Transform the way you create digital content, making it more engaging and effective.
Simplify Coding Efforts: Accessible, efficient coding for all skill levels.
Enhance Multimedia Interaction: Advanced AI for superior image and video conversations with the AI bot.
Start making money by charging your clients.
No need to pay monthly fees for any AI apps again.
100% beginner-friendly, with no coding or technical skills required.
Get a Free Commercial License.
Pay one time and use it forever.
World Class customer support.
Enjoy 24/7 expert support for whatever you need
30 Day Money back guarantee.
>>>>>>Get More Info<<<<<<
4 notes · View notes
theleadersglobe · 11 days
Text
HCLTech and Google Cloud Partner to Utilize Gemini AI: Details Inside
In a strategic move toward AI innovation, HCL Technologies (HCLTech) and Google Cloud have entered a partnership to accelerate the development of AI solutions that address the needs of organizations across the globe. The collaboration marks a milestone in the strategic adoption of disruptive technology to drive innovation and business value across industries.
Empowering Engineers with GenAI
The central aspect of this partnership is equipping engineers with the latest advances in AI technology so that they can make informed decisions and build better products. HCLTech aims to enable 25,000 engineers to use Google Cloud’s newest GenAI services, preparing them to guide client projects with AI at any stage. To that end, HCLTech is giving its engineers capabilities that are instrumental in delivering client satisfaction and customized solutions across sectors such as manufacturing, healthcare and telecom.
Read More:(https://theleadersglobe.com/science-technology/hcltech-and-google-cloud-partner-to-utilize-gemini-ai-details-inside/)
0 notes
banterstudydotin · 1 month
Text
Better than ChatGPT
Groq AI is superfast, claiming speeds around 50% higher than other chatbots. You can also use three premium AI models for free. It runs on the fastest chip of its kind, called an LPU. Read on to learn more about how to get started with Groq AI.
1 note · View note
fortunefusion · 1 month
Text
1 note · View note
Text
Unlock the Power of Google's Gemini AI Integration Into Your Laravel Application
In today's rapidly evolving digital landscape, businesses are constantly seeking ways to stay ahead of the curve and gain a competitive edge. One such way is by leveraging the power of artificial intelligence (AI) to enhance their web applications. With the recent integration of Google's Gemini AI into the Laravel framework, developers now have an incredible opportunity to unlock a whole new realm of possibilities for their applications.
Before we delve into the specifics of the Gemini AI integration, let's first understand what Laravel is and why it's such a popular choice among developers.
Laravel: The PHP Framework Powering Modern Web Applications
Laravel is an open-source PHP web application framework that has gained immense popularity in recent years. It is designed to streamline the development process by providing a robust set of tools and features that simplify common tasks, such as routing, database management, and authentication.
One of the key reasons for Laravel's success is its emphasis on elegance and simplicity. The framework adheres to the principles of the Model-View-Controller (MVC) architectural pattern, which promotes code organization and maintainability. Additionally, Laravel's expressive syntax and extensive documentation make it easy for developers to get up and running quickly.
Introducing Google's Gemini AI
Google's Gemini AI is a cutting-edge language model that has been specifically designed for open-ended conversation and task completion. This AI system leverages the power of large language models and natural language processing to understand and respond to a wide range of prompts and queries.
The integration of Gemini AI into the Laravel framework opens up a world of possibilities for developers. By harnessing the power of this advanced AI system, developers can create more intelligent and intuitive web applications that can understand and respond to user inputs in a more natural and conversational manner.
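To make this concrete, here is a minimal sketch of what a first integration might look like: a single route that forwards a prompt to Gemini and returns the reply. It is an assumption-laden sketch rather than the official integration: the REST endpoint, the `gemini-pro` model name, the request payload, and the `services.gemini.key` config entry follow Google's public Generative Language API as of early 2024 and should be checked against the current documentation (or replaced with an official or community SDK).

```php
<?php

// routes/web.php — a minimal sketch of calling the Gemini REST API from Laravel.
// Endpoint, model name, and payload shape are assumptions based on Google's public
// Generative Language API; the services.gemini.key config entry is hypothetical.

use Illuminate\Support\Facades\Http;
use Illuminate\Support\Facades\Route;

Route::get('/ask', function () {
    $prompt = request('q', 'Explain what Laravel is in one sentence.');

    $response = Http::post(
        'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent'
            . '?key=' . config('services.gemini.key'),
        [
            'contents' => [
                ['parts' => [['text' => $prompt]]],
            ],
        ]
    );

    // Pull the first candidate's text out of the JSON response, with a fallback.
    return $response->json('candidates.0.content.parts.0.text', 'No answer returned.');
});
```

Hitting `/ask?q=Summarise+the+MVC+pattern` would then return Gemini's plain-text answer, which you could feed into a Blade view or a JSON response as needed.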
Unlocking the Power of Gemini AI in Your Laravel Application
With the integration of Gemini AI into Laravel, developers can take advantage of several powerful features and capabilities. Here are just a few examples of how this integration can enhance your Laravel application:
1. Natural Language Processing (NLP)
One of the key strengths of Gemini AI is its ability to understand and process natural language. By integrating this AI system into your Laravel application, you can create user interfaces that can comprehend and respond to user inputs in a more intuitive and conversational manner. This can be particularly useful for applications that require complex query processing or natural language interaction, such as chatbots, virtual assistants, or customer support systems.
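As a hedged illustration of that conversational use case, the controller below keeps the running chat history in the session and sends the whole exchange to Gemini on each turn so replies stay in context. The route, controller name, and config entry are assumptions for illustration only.

```php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;

// A sketch of a chatbot-style endpoint. The "user"/"model" roles follow the
// Gemini API's conversation format; everything else is illustrative.
class ChatController extends Controller
{
    public function send(Request $request)
    {
        // Append the new user message to the session-held conversation history.
        $history = session('chat_history', []);
        $history[] = ['role' => 'user', 'parts' => [['text' => $request->input('message', '')]]];

        $reply = Http::post(
            'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent'
                . '?key=' . config('services.gemini.key'), // hypothetical config entry
            ['contents' => $history]
        )->json('candidates.0.content.parts.0.text', 'Sorry, I could not answer that.');

        // Store the model's reply so the next turn has full context.
        $history[] = ['role' => 'model', 'parts' => [['text' => $reply]]];
        session(['chat_history' => $history]);

        return response()->json(['reply' => $reply]);
    }
}
```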
2. Task Completion and Automation
Gemini AI is designed not only for conversation but also for task completion. By leveraging this capability, developers can create Laravel applications that can automate various tasks and processes based on user inputs or prompts. This can significantly improve efficiency and productivity, as well as enhance the overall user experience.
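One hedged sketch of this idea is a queued job that summarises incoming support tickets in the background, so staff see a short digest instead of a wall of text. The `SupportTicket` model and its `summary` column are hypothetical names used for illustration; the job structure itself is standard Laravel.

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Illuminate\Support\Facades\Http;

// A sketch of task automation: dispatch with SummarizeTicket::dispatch($ticket);
// the queue worker then asks Gemini for a summary and stores it on the ticket.
class SummarizeTicket implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(public \App\Models\SupportTicket $ticket)
    {
    }

    public function handle(): void
    {
        $summary = Http::post(
            'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent'
                . '?key=' . config('services.gemini.key'), // hypothetical config entry
            ['contents' => [['parts' => [['text' =>
                "Summarise this support ticket in two sentences:\n" . $this->ticket->body,
            ]]]]]
        )->json('candidates.0.content.parts.0.text');

        // Persist the generated summary on the (hypothetical) summary column.
        $this->ticket->update(['summary' => $summary]);
    }
}
```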
3. Content Generation and Personalization
Another powerful aspect of Gemini AI is its ability to generate high-quality content. By integrating this AI system into your Laravel application, you can create dynamic and personalized content tailored to individual users' preferences and needs. This can be particularly valuable for applications that require customized recommendations, product descriptions, or user-specific content.
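As a small, hedged example, a service class could turn data you already hold about a customer into a tailored product blurb. The `ProductCopyWriter` name and its inputs are assumptions; the point is simply that the prompt is assembled from per-user data.

```php
<?php

namespace App\Services;

use Illuminate\Support\Facades\Http;

// A sketch of personalised content generation: the prompt is built from the
// user's interests, and Gemini writes the copy. Names here are illustrative.
class ProductCopyWriter
{
    public function describe(string $productName, array $userInterests): string
    {
        $prompt = sprintf(
            'Write a two-sentence product description for "%s" aimed at a customer interested in %s.',
            $productName,
            implode(', ', $userInterests)
        );

        return Http::post(
            'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent'
                . '?key=' . config('services.gemini.key'), // hypothetical config entry
            ['contents' => [['parts' => [['text' => $prompt]]]]]
        )->json('candidates.0.content.parts.0.text', '');
    }
}
```

Calling `app(ProductCopyWriter::class)->describe('Trail Running Shoes', ['hiking', 'minimalist gear'])` would then produce copy aimed at that particular customer.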
4. Intelligent Search and Information Retrieval
With its advanced natural language processing capabilities, Gemini AI can significantly enhance the search and information retrieval features of your Laravel application. Users can input queries in natural language, and the AI system can understand and retrieve the most relevant information from your application's data sources.
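A deliberately simple, hedged approach is to let Gemini reduce a free-text query to a handful of keywords and then run an ordinary Eloquent search over them; the `Article` model and its `title`/`body` columns are hypothetical.

```php
<?php

namespace App\Services;

use Illuminate\Support\Facades\Http;

// A sketch of natural-language search: Gemini extracts keywords from the query,
// and a plain LIKE search does the retrieval. Model and column names are illustrative.
class NaturalLanguageSearch
{
    public function search(string $query)
    {
        $keywords = Http::post(
            'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent'
                . '?key=' . config('services.gemini.key'), // hypothetical config entry
            ['contents' => [['parts' => [['text' =>
                "Extract up to five search keywords, comma separated, from: {$query}",
            ]]]]]
        )->json('candidates.0.content.parts.0.text', $query);

        return \App\Models\Article::query()
            ->where(function ($builder) use ($keywords) {
                foreach (array_map('trim', explode(',', $keywords)) as $keyword) {
                    $builder->orWhere('title', 'like', "%{$keyword}%")
                            ->orWhere('body', 'like', "%{$keyword}%");
                }
            })
            ->limit(20)
            ->get();
    }
}
```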
5. Continuous Learning and Adaptation
One of the most exciting aspects of Gemini AI is its ability to learn and adapt over time. As more data and interactions are fed into the system, it can continually improve its understanding and responses. This means that your Laravel application can become increasingly intelligent and responsive to user needs as it continues to be used and updated.
Leveraging the Power of Gemini AI: Best Practices and Considerations
While the integration of Gemini AI into your Laravel application can be a game-changer, it's important to approach this process with careful planning and consideration. Here are some best practices and considerations to keep in mind:
1. Data Privacy and Security
When working with AI systems like Gemini AI, data privacy and security should be a top priority. Ensure that your application adheres to the highest standards of data protection and follows all relevant regulations and guidelines.
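At a minimum, keep the API key out of your codebase. Below is a minimal sketch of the conventional Laravel approach, assuming a hypothetical `gemini` entry in `config/services.php` backed by an environment variable; this is the entry the earlier sketches read via `config('services.gemini.key')`.

```php
<?php

// config/services.php — keep the Gemini key in the environment, not in code.
// The 'gemini' entry is a hypothetical addition; GEMINI_API_KEY lives in .env
// and should never be committed to version control.

return [

    // ... existing third-party service entries ...

    'gemini' => [
        'key' => env('GEMINI_API_KEY'),
    ],

];
```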
2. Ethical AI Development
As AI technology continues to advance, it's crucial to consider the ethical implications of its development and implementation. Strive to develop AI systems that are fair, transparent, and accountable, and prioritize the well-being of users and society as a whole.
3. User Experience and Transparency
While AI can enhance the user experience, it's important to maintain transparency and provide users with clear explanations and control over how the AI system is being used. This can help build trust and ensure that users feel comfortable and empowered when interacting with your application.
4. Continuous Testing and Monitoring
As with any software development process, it's essential to thoroughly test and monitor your Laravel application after integrating Gemini AI. Continuously monitor the AI system's performance, identify and address any issues or biases, and ensure that it is functioning as intended.
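Laravel's HTTP client makes that kind of testing straightforward without ever touching the real API. The hedged sketch below fakes the Gemini endpoint and checks that the hypothetical `/ask` route from the earlier example behaves sensibly, including when the API returns nothing useful.

```php
<?php

namespace Tests\Feature;

use Illuminate\Support\Facades\Http;
use Tests\TestCase;

// A sketch of testing the integration against a faked Gemini endpoint. The /ask
// route is the hypothetical one from the earlier routing sketch.
class GeminiIntegrationTest extends TestCase
{
    public function test_ask_endpoint_returns_model_text(): void
    {
        Http::fake([
            'generativelanguage.googleapis.com/*' => Http::response([
                'candidates' => [
                    ['content' => ['parts' => [['text' => 'Laravel is a PHP framework.']]]],
                ],
            ]),
        ]);

        $this->get('/ask?q=What+is+Laravel')
             ->assertOk()
             ->assertSee('Laravel is a PHP framework.');
    }

    public function test_ask_endpoint_handles_an_empty_api_response(): void
    {
        Http::fake(['generativelanguage.googleapis.com/*' => Http::response([])]);

        $this->get('/ask?q=Anything')
             ->assertOk()
             ->assertSee('No answer returned.');
    }
}
```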
5. Collaboration and Expertise
Integrating an advanced AI system like Gemini AI into your Laravel application may require specialized expertise and knowledge. Consider collaborating with experienced AI professionals or seeking guidance from a reputable Laravel development company to ensure a successful and efficient integration process.
Conclusion
The integration of Google's Gemini AI into the Laravel framework presents an exciting opportunity for developers to create more intelligent, intuitive, and powerful web applications. By leveraging the power of advanced natural language processing, task completion, and content generation capabilities, developers can enhance user experiences, automate processes, and unlock new levels of efficiency and productivity.
However, it's important to approach this integration with careful planning, consideration for data privacy and ethical AI development, and a commitment to continuous testing and monitoring. By following best practices and collaborating with experienced professionals, such as dedicated Laravel developers or a reputable Laravel development company, businesses can successfully harness the power of Gemini AI and stay ahead in the ever-evolving digital landscape.
0 notes
techlifewell · 1 month
Text
ChatGPT vs Gemini Advanced AI Comparison in 2024!
0 notes
webtechnicaltips · 1 month
Text
[ Embedded YouTube video ]
In this video, I'll show you how to use Google Gemini AI and which tool is more powerful and better for content and image generation. This is a complete comparison of ChatGPT vs Google Gemini, and I also share some useful tips for generating images with AI.
1 note · View note
farhadmalegam · 1 month
Text
Decoding Google’s Gemini Model | Everything You Need to Know
With ChatGPT's surge in popularity, Google's Gemini AI emerges as a formidable competitor, boasting multimodal capabilities and seamless integration with Google's ecosystem. However, while Gemini shows promise, its performance, particularly in accuracy and translation tasks, falls short of ChatGPT's consistency and accessibility. As Gemini evolves, it could become a transformative AI tool with broader appeal. Read more on our blog.
0 notes
By: Toadworrier
Published: Mar 18, 2024
As of 23 February 2024, Google’s new-model AI chatbot, Gemini, has been debarred from creating images of people, because it can’t resist drawing racist ones. It’s not that it is producing bigoted caricatures—the problem is that it is curiously reluctant to draw white people. When asked to produce images of popes, Vikings, or German soldiers from World War II, it keeps presenting figures that are black and often female. This is racist in two directions: it is erasing white people, while putting Nazis in blackface. The company has had to apologise for producing a service that is historically inaccurate and—what for an engineering company is perhaps even worse—broken.
This cock-up raises many questions, but the one that sticks in my mind is: Why didn’t anyone at Google notice this during development? At one level, the answer is obvious: this behaviour is not some bug that merely went unnoticed; it was deliberately engineered. After all, an unguided mechanical process is not going to figure out what Nazi uniforms looked like while somehow drawing the conclusion that the soldiers in them looked like Africans. Indeed, some of the texts that Gemini provides along with the images hint that it is secretly rewriting users’ prompts to request more “diversity.”
[ Source: https://www.piratewires.com/p/google-gemini-race-art ]
The real questions, then, are: Why would Google deliberately engineer a system broken enough to serve up risible lies to its users? And why did no one point out the problems with Gemini at an earlier stage? Part of the problem is clearly the culture at Google. It is a culture that discourages employees from making politically incorrect observations. And even if an employee did not fear being fired for her views, why would she take on the risk and effort of speaking out if she felt the company would pay no attention to her? Indeed, perhaps some employees did speak up about the problems with Gemini—and were quietly ignored.  
The staff at Google know that the company has a history of bowing to employee activism if—and only if—it comes from the progressive left; and that it will often do so even at the expense of the business itself or of other employees. The most infamous case is that of James Damore, who was terminated for questioning Google’s diversity policies. (Damore speculated that the paucity of women in tech might reflect statistical differences in male and female interests, rather than simply a sexist culture.) But Google also left a lot more money on the table when employee complaints caused it to pull out of a contract to provide AI to the US military’s Project Maven. (To its credit, Google has also severely limited its access to the Chinese market, rather than be complicit in CCP censorship. Yet, like all such companies, Google now happily complies with take-down demands from many countries and Gemini even refuses to draw pictures of the Tiananmen Square massacre or otherwise offend the Chinese government).
There have been other internal ructions at Google in the past. For example, in 2021, Asian Googlers complained that a rap video recommending that burglars target Chinese people’s houses was left up on YouTube. Although the company was forced to admit that the video was “highly offensive and [...] painful for many to watch,” it stayed up under an exception clause allowing greater leeway for videos “with an Educational, Documentary, Scientific or Artistic context.” Many blocked and demonetised YouTubers might be surprised that this exception exists. Asian Googlers might well suspect that the real reason the exception was applied here (and not in other cases) is that Asians rank low on the progressive stack.
Given the company’s history, it is easy to guess how awkward observations about problems with Gemini could have been batted away with corporate platitudes and woke non-sequiturs, such as “users aren’t just looking for pictures of white people” or “not all Vikings were blond and blue-eyed.” Since these rhetorical tricks are often used in public discourse as a way of twisting historical reality, why wouldn’t they also control discourse inside a famously woke company?
Even if the unfree speech culture at Google explains why the blunder wasn’t caught, there is still the question of why it was made in the first place. Right-wing media pundits have been pointing fingers at Jack Krawczyk, a senior product manager at Google who now works on “Gemini Experiences” and who has a history of banging the drum for social justice on Twitter (his Twitter account is now private). But there seems to be no deeper reason for singling him out. There is no evidence that Mr Krawczyk was the decision-maker responsible. Nor is he an example of woke incompetence—i.e., he is not someone who got a job just by saying politically correct things. Whatever his Twitter history, Krawczyk also has a strong CV as a Silicon Valley product manager. To the chagrin of many conservatives, people can be both woke and competent at the same time. 
Krawczyk is, at most, one example of the politically correct activist orthodoxy that blinds otherwise competent decision makers to the excesses of their ideology. That culture has organically permeated Silicon Valley, but it also has formal support structures within Google’s AI effort. One of these is the Responsible AI team, whose list of focus points begins with “equity and fairness” and ends with “applications for social good.” Similarly, their list of “AI Principles” is headed by the injunctions that AI should “be socially beneficial” and “avoid creating or reinforcing unfair bias.” The idea that it should truthfully reflect reality appears nowhere on the list, not even under the heading of “unfair bias” which Google elaborates in the following way: 
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. We recognize that distinguishing fair from unfair biases is not always simple, and differs across cultures and societies. We will seek to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief.
Here Google is not promising to tell the truth, but rather to adjust the results of its algorithms whenever they seem unfair to protected groups of people.  And this, of course, is exactly what Gemini has just been caught doing.
All this is a betrayal of Google’s more fundamental promises. Its stated mission has always been to “to organize the world’s information and make it universally accessible and useful”—not to doctor that information for whatever it regards as the social good. And ever since “Don’t be evil” regrettably ceased to be the company’s motto, its three official values have been to respect each other, respect the user, and respect the opportunity. The Gemini fiasco goes against all three. Let’s take them in reverse order.
Disrespecting the Opportunity 
Some would argue that Google had already squandered the opportunity it had as an early pioneer of the current crop of AI technologies by letting itself be beaten to the language-model punch by OpenAI’s ChatGPT and GPT-4. But on the other hand, the second-mover position might itself provide an opportunity for a tech giant like Google, with a strong culture of reliability and solid engineering, to distinguish itself in the mercurial software industry.
It’s not always good to “move fast and break things” (as Meta CEO Mark Zuckerberg once recommended), so Google could argue that it was right to take its time developing Gemini in order to ensure that its behaviour was reliable and safe. But it makes no sense if Google squandered that time and effort just to make that behaviour more wrong, racist, and ridiculous than it would have been by default.
Yet that seems to be what Google has done, because its culture has absorbed the social-justice-inflected conflation of “safety” with political correctness. This culture of telling people what to think for their own safety brings us to the next betrayed value.  
Disrespecting the User
Politically correct censorship and bowdlerisation are in themselves insults to the intelligence and wisdom of Google’s users. But Gemini's misbehaviour takes this to new heights: the AI has been deliberately twisted to serve up obviously wrong answers. Google appears not to take its users seriously enough to care what they might think about being served nonsense.
Disrespecting Each Other 
Once the company had been humiliated on Twitter, Google got the message that Gemini was not working right. But why wasn’t that alarm audible internally? A culture of mutual respect should mean that Googlers can point out that a system is profoundly broken, secure in the knowledge that the observation will be heeded. Instead, it seems that the company’s culture places respect for woke orthodoxy above respect for the contributions of colleagues.
Despite all this, Google is a formidably competent company that will no doubt fix Gemini—at least to the point at which it can be allowed to show pictures of people again. The important question is whether it will learn the bigger lesson. It will be tempting to hide this failure behind a comforting narrative that this was merely a clumsy implementation of an otherwise necessary anti-bias measure. As if Gemini’s wokeness problem is merely that it got caught.
Google’s communications so far have not been encouraging. While their official apology unequivocally admits that Gemini “got it wrong,” it does not acknowledge that Gemini placed correcting “unfair biases” above the truth to such an extent that it ended up telling racist lies. A leaked internal missive from Google boss Sundar Pichai makes matters even worse: Pichai calls the results “problematic” and laments that the programme has “offended our users and shown bias”—using the language of social justice, as if the problem were that Gemini was not sufficiently woke. Lulu Cheng Meservey, Chief Communications Officer at Activision Blizzard, has been scathing about Pichai’s fuzzy, politically correct rhetoric. She writes:
The obfuscation, lack of clarity, and fundamental failure to grasp the problem are due to a failure of leadership. A poorly written email is just the means through which that failure is revealed.
It would be a costly mistake for Google’s leaders to bury their heads in the sand. The company’s stock has already tumbled by $90 billion USD in the wake of this controversy. Those who are selling their stock might not care about the culture war per se, but they do care about whether Google is seen as a reliable conduit of information.
This loss is the cost of disrespecting users, and to paper over these cracks would just add insult to injury. Users will continue to notice Gemini’s biases, even if they fall below the threshold at which Google is forced to acknowledge them. But if Google resists the temptation to ignore its wokeness problem, this crisis could be turned into an opportunity. Google has the chance to win back the respect of colleagues and dismantle the culture of orthodoxy that has been on display ever since James Damore was sacked for presenting his views.
Google rightly prides itself on analysing and learning from failure. It is possible that the company will emerge from this much stronger. At least, having had this wake-up call puts it in a better position than many of our other leading institutions, of which Google is neither the wokest nor the blindest. Let’s hope it takes the opportunity to turn around and stop sleepwalking into the darkness.
7 notes · View notes
thesebewords · 2 months
Text
Is Google dying?
George Washington vs Google Gemini vs Sundar Pichai
Everyone’s calling for Pichai’s head on a plate again. How the mighty fall. At least, if we were talking about definitive dissolution, Google has only really managed to shoot itself out of a cannon into a soft cushion with the latest AI “bungle”, wherein user requests for various historical figures would occasionally, via Google Gemini,…
View On WordPress
0 notes
Text
AI's Impact on E-commerce fraud detection
0 notes