#ChatGPT-powered products
yestobetop · 10 months
The Art of ChatGPT Profit: Monetization Techniques for Financial Growth
How to make money with ChatGPT? What is ChatGPT? OpenAI created ChatGPT, an advanced language model designed to generate human-like text in response to given prompts. Powered by deep learning algorithms, ChatGPT can engage in natural and dynamic conversations, making it an ideal tool for a wide range of applications. ChatGPT can be used for a variety of purposes,…
0 notes
omegaphilosophia · 28 days
The Philosophy of Standpoint
The philosophy of standpoint, often referred to as standpoint theory, is a framework used primarily in feminist and critical theory to explore how knowledge and experience are shaped by social positions and power dynamics. It asserts that individuals' perspectives are influenced by their social and political contexts, particularly their positions within systems of power and oppression. Standpoint theory argues that marginalized or oppressed groups can offer unique and more accurate insights into social realities because their perspectives are shaped by experiences that are often overlooked or devalued by dominant groups.
Key Concepts:
Situated Knowledge: Standpoint theory posits that all knowledge is situated, meaning it is rooted in specific social contexts and power structures. This challenges the notion of objective, universal knowledge that is detached from the knower’s social identity.
Epistemic Privilege: A key idea in standpoint theory is that marginalized groups, because of their lived experiences of oppression, have an epistemic advantage in understanding certain aspects of social reality. For example, women might have a clearer understanding of gender relations than men because they experience gender-based oppression directly.
Standpoint as an Achievement: A standpoint is not simply given by one's social position but must be actively achieved through reflection, consciousness-raising, and collective struggle. This process involves critically examining and interpreting one's experiences within the broader social and political context.
Power and Knowledge: Standpoint theory emphasizes the relationship between power and knowledge production. It argues that dominant groups often shape what is considered legitimate knowledge, thereby marginalizing alternative perspectives that challenge the status quo.
Intersectionality: The philosophy of standpoint often intersects with intersectional theory, which recognizes that people experience oppression and privilege in multiple, overlapping ways (e.g., race, class, gender, sexuality). Thus, standpoint theory takes into account the complexity of identity and how it influences one's perspective.
The philosophy of standpoint challenges traditional notions of objectivity and universal knowledge by asserting that our social positions deeply influence how we perceive and understand the world. By recognizing and valuing the perspectives of marginalized groups, standpoint theory seeks to expose and address systemic inequalities in knowledge production and societal structures. It invites a more inclusive and reflective approach to understanding social reality, where diverse voices are heard and respected.
4 notes · View notes
thisisgraeme · 6 days
The AI-Powered Creative Workflow: How the Creative TechnoStack is Shaping the Future of Creativity
Discover how an AI-Powered Creative Workflow is transforming the way creators work! Learn how the Creative TechnoStack blends AI and traditional tools to shape the future of creativity. Ready to unlock your full potential? Dive in now!
Unlocking the AI-Powered Creative Workflow: The Rise of the Creative Technomancer

Creativity is entering a bold new era—one where human ingenuity is supercharged by artificial intelligence, and the boundaries between artistic disciplines blur into seamless, multimedia experiences. Just as software developers pioneered the concept of full-stack to describe those who could manage both front-end…
0 notes
ah-123-4 · 18 days
sun tumble - Copper Vacuum Insulated Tumbler
Stay hydrated in style on a daily basis with this personalized tumbler. With a 22oz capacity and copper vacuum insulation, our tumbler keeps your favorite beverages refreshingly cold for 24 hours, and soothingly hot for up to 6. Its stainless steel exterior is condensation-resistant while the powder coating adds extra style points.
https://sunlight-store.printify.me/product/12827451/sun-tumble-copper-vacuum-insulated-tumbler-22oz
1 note · View note
onewebinc · 3 months
ChatGPT Goes Desktop: Powerful AI Assistant Now Has a Mac App!
📖To read more visit here🌐🔗: https://onewebinc.com/news/chatgpt-releases-desktop-app-for-mac/
0 notes
jcmarchi · 4 months
Harnessing Silicon: How In-House Chips Are Shaping the Future of AI
New Post has been published on https://thedigitalinsider.com/harnessing-silicon-how-in-house-chips-are-shaping-the-future-of-ai/
Artificial intelligence, like any software, relies on two fundamental components: the AI programs, often referred to as models, and the computational hardware, or chips, that drive these programs. So far, the focus in AI development has been on refining the models, while the hardware was typically seen as a standard component provided by third-party suppliers. Recently, however, this approach has started to change. Major AI firms such as Google, Meta, and Amazon have started developing their own AI chips. The in-house development of custom AI chips is heralding a new era in AI advancement. This article will explore the reasons behind this shift in approach and will highlight the latest developments in this evolving area.
Why In-house AI Chip Development?
The shift toward in-house development of custom AI chips is being driven by several critical factors, which include:  
Increasing Demand for AI Chips
Creating and using AI models demands significant computational resources to effectively handle large volumes of data and generate precise predictions or insights. Traditional computer chips are incapable of handling the computational demands of training on trillions of data points. This limitation has led to the creation of cutting-edge AI chips specifically designed to meet the high performance and efficiency requirements of modern AI applications. As AI research and development continue to grow, so does the demand for these specialized chips.
Nvidia, a leader in the production of advanced AI chips and well ahead of its competitors, is facing challenges as demand greatly exceeds its manufacturing capacity. This situation has led to the waitlist for Nvidia’s AI chips being extended to several months, a delay that continues to grow as demand for their AI chips surges. Moreover, the chip market, which includes major players like Nvidia and Intel, encounters challenges in chip production. This issue stems from their dependence on Taiwanese manufacturer TSMC for chip assembly. This reliance on a single manufacturer leads to prolonged lead times for manufacturing these advanced chips.
Making AI Computing Energy-efficient and Sustainable
The current generation of AI chips, which are designed for heavy computational tasks, tend to consume a lot of power and generate significant heat, giving the training and use of AI models substantial environmental implications. OpenAI researchers note that since 2012, the computing power required to train advanced AI models has doubled every 3.4 months, and projections suggest that by 2040, emissions from the Information and Communications Technology (ICT) sector could comprise 14% of global emissions. Another study showed that training a single large-scale language model can emit up to 284,000 kg of CO2, roughly equivalent to the lifetime emissions of five cars. Moreover, the energy consumption of data centers is estimated to grow 28 percent by 2030. These findings emphasize the need to strike a balance between AI development and environmental responsibility. In response, many AI companies are now investing in the development of more energy-efficient chips, aiming to make AI training and operations more sustainable and environmentally friendly.
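To put the quoted doubling rate in perspective, here is a small back-of-the-envelope sketch in Python; the only input taken from the article is the 3.4-month doubling time, and the time horizons are illustrative:

```python
# Back-of-the-envelope: what a 3.4-month doubling time in training compute implies.
# Only the 3.4-month figure comes from the article; the horizons are illustrative.

DOUBLING_MONTHS = 3.4

def growth_factor(months: float) -> float:
    """Multiplicative growth in required compute over a span of `months`."""
    return 2 ** (months / DOUBLING_MONTHS)

print(f"Growth over 1 year:  {growth_factor(12):,.1f}x")   # roughly 11.5x
print(f"Growth over 5 years: {growth_factor(60):,.0f}x")   # roughly 200,000x
```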
Tailoring Chips for Specialized Tasks
Different AI processes have varying computational demands. For instance, training deep learning models requires significant computational power and high throughput to handle large datasets and execute complex calculations quickly. Chips designed for training are optimized to enhance these operations, improving speed and efficiency. On the other hand, the inference process, where a model applies its learned knowledge to make predictions, requires fast processing with minimal energy use, especially in edge devices like smartphones and IoT devices. Chips for inference are engineered to optimize performance per watt, ensuring prompt responsiveness and battery conservation. This specific tailoring of chip designs for training and inference tasks allows each chip to be precisely adjusted for its intended role, enhancing performance across different devices and applications. This kind of specialization not only supports more robust AI functionalities but also promotes greater energy efficiency and cost-effectiveness broadly.
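As a rough illustration of this trade-off, the short Python sketch below compares two entirely hypothetical chips on total throughput and on performance per watt; all figures are invented for the example and do not describe any real product:

```python
# Illustrative comparison of a hypothetical training-oriented accelerator and a
# hypothetical inference-oriented accelerator. All figures are invented.

chips = {
    "training_accelerator":  {"throughput_tflops": 400, "power_watts": 500},
    "inference_accelerator": {"throughput_tflops": 60, "power_watts": 15},
}

for name, spec in chips.items():
    perf_per_watt = spec["throughput_tflops"] / spec["power_watts"]
    print(f"{name}: {spec['throughput_tflops']} TFLOPS total, "
          f"{perf_per_watt:.1f} TFLOPS per watt")

# The training part wins on absolute throughput (large batches, huge datasets),
# while the inference part wins on TFLOPS per watt, which is what matters most
# for battery-powered edge devices answering one request at a time.
```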
Reducing Financial Burdens
The financial burden of computing for AI model training and operations remains substantial. OpenAI, for instance, has relied on an extensive supercomputer built by Microsoft for both training and inference since 2020. It cost OpenAI about $12 million to train its GPT-3 model, and the expense surged to $100 million for training GPT-4. According to a report by SemiAnalysis, OpenAI needs roughly 3,617 HGX A100 servers, totaling 28,936 GPUs, to support ChatGPT, bringing the average cost per query to approximately $0.36. With these high costs in mind, Sam Altman, CEO of OpenAI, is reportedly seeking significant investments to build a worldwide network of AI chip production facilities, according to a Bloomberg report.
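To show the shape of the arithmetic behind such estimates, here is a minimal Python sketch; the server and GPU counts are the ones quoted above, while the hourly cost and query volume are explicitly placeholder values rather than figures from the SemiAnalysis report:

```python
# Quick arithmetic on the quoted fleet figures, plus a generic per-query cost
# formula. Only the server and GPU counts come from the article; the hourly
# rate and query volume below are placeholders, not SemiAnalysis estimates.

servers = 3_617
gpus = 28_936
print(f"GPUs per server: {gpus / servers:.0f}")  # 8, consistent with an 8-GPU HGX A100 box

def cost_per_query(hourly_cost_per_server: float, queries_per_second: float) -> float:
    """Fleet cost per query = fleet cost per second / queries served per second."""
    fleet_cost_per_second = servers * hourly_cost_per_server / 3600
    return fleet_cost_per_second / queries_per_second

# Placeholder inputs purely to show the shape of the calculation:
example = cost_per_query(hourly_cost_per_server=2.0, queries_per_second=2_000)
print(f"Example cost per query: ${example:.4f}")
```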
Harnessing Control and Innovation
Third-party AI chips often come with limitations. Companies relying on these chips may find themselves constrained by off-the-shelf solutions that don’t fully align with their unique AI models or applications. In-house chip development allows for customization tailored to specific use cases. Whether it’s for autonomous cars or mobile devices, controlling the hardware enables companies to fully leverage their AI algorithms. Customized chips can enhance specific tasks, reduce latency, and improve overall performance.
Latest Advances in AI Chip Development
This section delves into the latest strides made by Google, Meta, and Amazon in building AI chip technology.
Google’s Axion Processors
Google has been steadily progressing in the field of AI chip technology since the introduction of the Tensor Processing Unit (TPU) in 2015. Building on this foundation, Google has recently launched the Axion Processors, its first custom CPUs specifically designed for data centers and AI workloads. These processors are based on Arm architecture, known for their efficiency and compact design. The Axion Processors aim to enhance the efficiency of CPU-based AI training and inferencing while maintaining energy efficiency. This advancement also marks a significant improvement in performance for various general-purpose workloads, including web and app servers, containerized microservices, open-source databases, in-memory caches, data analytics engines, media processing, and more.
Meta’s MTIA
Meta is pushing forward in AI chip technology with its Meta Training and Inference Accelerator (MTIA). This accelerator is designed to boost the efficiency of training and inference processes, especially for ranking and recommendation algorithms. Recently, Meta outlined how the MTIA is a key part of its strategy to strengthen its AI infrastructure beyond GPUs. Although the chip was initially slated to launch in 2025, Meta has already put both versions of the MTIA into production, showing a quicker pace in its chip development plans. While the MTIA currently focuses on training certain types of algorithms, Meta aims to expand its use to include training for generative AI, like its Llama language models.
Amazon’s Trainium and Inferentia
Since introducing its custom Nitro chip in 2013, Amazon has significantly expanded its AI chip development. The company recently unveiled two innovative AI chips, Trainium and Inferentia. Trainium is specifically designed to enhance AI model training and is set to be incorporated into EC2 UltraClusters. These clusters, capable of hosting up to 100,000 chips, are optimized for training foundational models and large language models in an energy-efficient way. Inferentia, on the other hand, is tailored for inference tasks where AI models are actively applied, focusing on decreasing latency and costs during inference to better serve the needs of millions of users interacting with AI-powered services.
The Bottom Line
The movement towards in-house development of custom AI chips by major companies like Google, Meta, and Amazon reflects a strategic shift to address the increasing computational needs of AI technologies. This trend highlights the necessity for solutions that are specifically tailored to efficiently support AI models, meeting the unique demands of these advanced systems. As demand for AI chips continues to grow, industry leaders like Nvidia are likely to see a significant rise in market valuation, underlining the vital role that specialized AI chips play in advancing AI innovation. By creating their own chips, these tech giants are not only enhancing the performance and efficiency of their AI systems but also promoting a more sustainable and cost-effective future. This evolution is setting new standards in the industry, driving technological progress and competitive advantage in a rapidly changing global market.
1 note · View note
yousseferqa · 7 months
Visit our website https://linktr.ee/Bodycareandhealth to explore our full range of natural health products and place your order today. Feel free to reach out to our friendly customer service team if you have any questions or need assistance.
Tags: #User, #"Send customers a message in English about natural health products" (translated from Arabic), #ChatGPT, followed by the generated message itself, flattened into hashtags. Reassembled, it reads:

Subject: Discover our Natural Health Products!

Dear Valued Customers,

We are excited to introduce our range of natural health products that are designed to promote wellness and vitality. At [Your Company Name], we believe in the power of nature to nurture and enhance our well-being. Our products are carefully crafted using high-quality, natural ingredients sourced from trusted suppliers. From herbal supplements to organic skincare, we offer a variety of options to support your health journey.

Here are some highlights from our collection:

Herbal Supplements: Boost your immune system, improve digestion, and enhance your overall health with our range of herbal supplements.

Organic Skincare: Pamper your skin with our gentle and nourishing skincare products made from organic botanicals and essential oils.

Nutritious Snacks: Fuel your body with wholesome snacks that are free from artificial additives and preservatives.

Herbal Teas: Relax and unwind with our selection of herbal teas, carefully blended to soothe the mind and body.

Whether you're looking to maintain your health or address specific concerns, we have something for everyone. Plus, all of our products are backed by our commitment to quality and customer satisfaction.

Visit our website [Your Website URL] to explore our full range of natural health products and place your order today. Feel free to reach out to our friendly customer service team if you have any questions or need assistance.

Thank you for choosing [Your Company Name] for your health and wellness needs. Here's to a happier, healthier you!

Warm regards,
[Your Name]
[Your Title/Position]
[Your Contact Information]

The remaining Arabic tags translate to "Translation into Arabic" and "Subject: Discover our natural health products!"
1 note · View note
useailikeapro · 8 months
Discover the most effective ChatGPT forms that streamline your work, making tasks easier and faster. Unlock the potential of ChatGPT to enhance productivity and efficiency in your workflow.
0 notes
dfoxmarketing · 1 year
Google’s Upcoming AI Software Gemini Nears Release
Google, part of Alphabet, has granted access to an early version of Gemini, its conversational AI programme, to a select number of businesses.
According to the source, Gemini is meant to compete with OpenAI’s GPT-4 model.
The debut of Gemini holds a lot of significance for Google. As it attempts to catch up following Microsoft-backed OpenAI’s unveiling of ChatGPT last year, which rocked the tech world, Google has increased its spending on generative AI this year.
According to the reports, Gemini is a group of large language models that enable everything from chatbots to tools that either summarise material or create fresh content based on what people like to read, such as email drafts, song lyrics, or news articles.
Additionally, it is expected to help software developers create new code and generate images based on user requests.
According to the publication, Google is developing a larger version of Gemini that would be more comparable to GPT-4, but it has not yet made it accessible to developers.
Through its Google Cloud Vertex AI service, the leading search and advertising company intends to make Gemini accessible to businesses.
This month, the company added generative AI to its Search feature for customers in India and Japan, which will reply to inputs with text or visual results, including summaries. Additionally, it offers corporate clients access to its AI-powered solutions for $30 per user each month.
Source: ANI
Follow Digital Fox Media for the latest technology news.
0 notes
keshavkumar · 1 year
Power of ChatGPT: Monetizing AI Language Model for Success and Making Money
How ChatGPT, the game-changing language model, can revolutionize your business and generate income. Explore content creation, virtual assistance, online tutoring, and more. Find out how to overcome challenges and use ChatGPT ethically for maximum impact.

Understanding ChatGPT: A Game-Changing Language Model

In the rapidly evolving field of artificial intelligence, language models have become…
View On WordPress
0 notes
blubberquark · 6 months
Auto-Generated Junk Web Sites
I don't know if you heard the complaints about Google getting worse since 2018, or about Amazon getting worse. Some people think Google got worse at search. I think Google got worse because the web got worse. Amazon got worse because the supply side on Amazon got worse, but ultimately Amazon is to blame for incentivising the sale of more and cheaper products on its platform.
In any case, if you search something on Google, you get a lot of junk, and if you search for a specific product on Amazon, you get a lot of junk, even though the process that led to the junk is very different.
I don't subscribe to the "Dead Internet Theory", the idea that most online content is social media and that most social media is bots. I think Google search has gotten worse because a lot of content from as recently as 2018 got deleted, and a lot of web 1.0 and the blogosphere got deleted, comment sections got deleted, and content in the style of web 1.0 and the blogosphere is no longer produced. Furthermore, many links are now broken because they don't directly link to web pages, but to social media accounts and tweets that used to aggregate links.
I don't think going back to web 1.0 will help discoverability, and it probably won't be as profitable or even monetiseable to maintain a useful web 1.0 page compared to an entertaining but ephemeral YouTube channel. Going back to Web 1.0 means more long-term after-hours labour of love site maintenance, and less social media posting as a career.
Anyway, Google has gotten noticeably worse since GPT-3 and ChatGPT were made available to the general public, and many people blame content farms with language models and image synthesis for this. I am not sure. If Google had started to show users meaningless AI generated content from large content farms, that means Google has finally lost the SEO war, and Google is worse at AI/language models than fly-by-night operations whose whole business model is skimming clicks off Google.
I just don't think that's true. I think the reality is worse.
Real web sites run by real people are getting overrun by AI-generated junk, and human editors can't stop it. Real people whose job it is to generate content are increasingly turning in AI junk at their jobs.
Furthermore, even people who are setting up a web site for a local business or an online presence for their personal brand/CV are using auto-generated text.
I have seen at least two different TV commercials by web hosting and web design companies that promoted this. Are you starting your own business? Do you run a small business? A business needs a web site. With our AI-powered tools, you don't have to worry about the content of your web site. We generate it for you.
There are companies out there today, selling something that's probably a re-labelled ChatGPT or LLaMA plus Stable Diffusion to somebody who is just setting up a bicycle repair shop. All the pictures and written copy on the web presence for that repair shop will be automatically generated.
We would be living in a much better world if there was a small number of large content farms and bot operators poisoning our search results. Instead, we are living in a world where many real people are individually doing their part.
164 notes · View notes
The cryptocurrency hype of the past few years already started to introduce people to these problems. Despite producing little to no tangible benefits — unless you count letting rich people make money off speculation and scams — Bitcoin consumed more energy and computer parts than medium-sized countries and crypto miners were so voracious in their energy needs that they turned shuttered coal plants back on to process crypto transactions. Even after the crypto crash, Bitcoin still used more energy in 2023 than the previous year, but some miners found a new opportunity: powering the generative AI boom.

The AI tools being pushed by OpenAI, Google, and their peers are far more energy intensive than the products they aim to displace. In the days after ChatGPT’s release in late 2022, Sam Altman called its computing costs “eye-watering” and several months later Alphabet chairman John Hennessy told Reuters that getting a response from Google’s chatbot would “likely cost 10 times more” than using its traditional search tools. Instead of reassessing their plans, major tech companies are doubling down and planning a massive expansion of the computing infrastructure available to them.
[...]
As the cloud took over, more computation fell into the hands of a few dominant tech companies and they made the move to what are called “hyperscale” data centers. Those facilities are usually over 10,000 square feet and hold more than 5,000 servers, but those being built today are often many times larger than that. For example, Amazon says its data centers can have up to 50,000 servers each, while Microsoft has a campus of 20 data centers in Quincy, Washington with almost half a million servers between them. By the end of 2020, Amazon, Microsoft, and Google controlled half of the 597 hyperscale data centres in the world, but what’s even more concerning is how rapidly that number is increasing. By mid-2023, the number of hyperscale data centres stood at 926 and Synergy Research estimates another 427 will be built in the coming years to keep up with the expansion of resource-intensive AI tools and other demands for increased computation.

All those data centers come with an increasingly significant resource footprint. A recent report from the International Energy Agency (IEA) estimates that the global energy demand of data centers, AI, and crypto could more than double by 2026, increasing from 460 TWh in 2022 to up to 1,050 TWh — similar to the energy consumption of Japan. Meanwhile, in the United States, data center energy use could triple from 130 TWh in 2022 — about 2.5% of the country’s total — to 390 TWh by the end of the decade, accounting for a 7.5% share of total energy, according to Boston Consulting Group. That’s nothing compared to Ireland, where the IEA estimates data centers, AI, and crypto could consume a third of all power in 2026, up from 17% in 2022.

Water use is going up too: Google reported it used 5.2 billion gallons of water in its data centers in 2022, a jump of 20% from the previous year, while Microsoft used 1.7 billion gallons in its data centers, an increase of 34% on 2021. University of California, Riverside researcher Shaolei Ren told Fortune, “It’s fair to say the majority of the growth is due to AI.” But these are not just large abstract numbers; they have real material consequences that a lot of communities are getting fed up with just as the companies seek to massively expand their data center footprints.
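For a sense of scale, here is a quick Python sketch of the growth implied by the figures quoted in the excerpt; the TWh values are taken directly from it, and the growth factors are simple arithmetic on them:

```python
# Growth implied by the figures quoted above (IEA and Boston Consulting Group).
# The input numbers come from the excerpt; the factors and compound annual
# growth rates are simple arithmetic on them.

def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate."""
    return (end / start) ** (1 / years) - 1

global_2022, global_2026 = 460, 1050   # TWh, data centers + AI + crypto (IEA)
us_2022, us_2030 = 130, 390            # TWh, US data centers (BCG)

print(f"Global: {global_2026 / global_2022:.1f}x by 2026 "
      f"({cagr(global_2022, global_2026, 4):.0%} per year)")
print(f"US:     {us_2030 / us_2022:.1f}x by 2030 "
      f"({cagr(us_2022, us_2030, 8):.0%} per year)")
```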
9 February 2024
75 notes · View notes
mariacallous · 4 months
Last week OpenAI revealed a new conversational interface for ChatGPT with an expressive, synthetic voice strikingly similar to that of the AI assistant played by Scarlett Johansson in the sci-fi movie Her—only to suddenly disable the new voice over the weekend.
On Monday, Johansson issued a statement claiming to have forced that reversal, after her lawyers demanded OpenAI clarify how the new voice was created.
Johansson’s statement, relayed to WIRED by her publicist, claims that OpenAI CEO Sam Altman asked her last September to provide ChatGPT’s new voice but that she declined. She describes being astounded to see the company demo a new voice for ChatGPT last week that sounded like her anyway.
“When I heard the release demo I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,” the statement reads. It notes that Altman appeared to encourage the world to connect the demo with Johansson’s performance by tweeting out “her,” in reference to the movie, on May 13.
Johansson’s statement says her agent was contacted by Altman two days before last week’s demo asking that she reconsider her decision not to work with OpenAI. After seeing the demo, she says she hired legal counsel to write to OpenAI asking for details of how it made the new voice.
The statement claims that this led to OpenAI’s announcement Sunday in a post on X that it had decided to “pause the use of Sky,” the company’s name for the synthetic voice. The company also posted a blog post outlining the process used to create the voice. “Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” the post said.
Sky is one of several synthetic voices that OpenAI gave ChatGPT last September, but at last week’s event it displayed a much more lifelike intonation with emotional cues. The demo saw a version of ChatGPT powered by a new AI model called GPT-4o appear to flirt with an OpenAI engineer in a way that many viewers found reminiscent of Johansson’s performance in Her.
“The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers,” Sam Altman said in a statement provided by OpenAI. He claimed the voice actor behind Sky's voice was hired before the company contacted Johansson. “Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.”
The conflict with Johansson adds to OpenAI’s existing battles with artists, writers, and other creatives. The company is already defending a number of lawsuits alleging it inappropriately used copyrighted content to train its algorithms, including suits from The New York Times and authors including George R.R. Martin.
Generative AI has made it much easier to create realistic synthetic voices, creating new opportunities and threats. In January, voters in New Hampshire were bombarded with robocalls featuring a deepfaked voice message from Joe Biden. In March, OpenAI said that it had developed a technology that could clone someone’s voice from a 15-second clip, but the company said it would not release the technology because of how it might be misused.
87 notes · View notes
jackdaw-kraai · 1 year
The thing I think isn't talked about enough in all these conversations about "AI art" is how, even if you work out all the kinks, even if you get it to the point that it works perfectly according to the most lofty goals set, even if all that came true... AI will still disappoint when set next to even just a moderately skilled human artist. Not because of any technical flaws with the product, but because of its fundamental limitations as a tool.
AI, as we understand it right now, without all the grandstanding and doomsday predictions and near-mythological qualities we ascribe to it, works on binary. Down to its core, stripped to its studs, it works on binary code, and you see that reflected in the design. Every choice it makes, every result it produces, is a result of a million, billion "yes or no" questions asked of it that chain together into a coherent response. Endless amounts of "TRUE or FALSE" results spat out when data is fed into it, that string together to form a conversation, or an essay, or a painting, or a comic. At least, when trained on enough data to weigh the odds in favor of what the creators want it to do.
If you ask ChatGPT to tell you something about romance, it filters its endless data banks for whatever in the training data it was given matches your request, and for which results in those tests were rewarded by its programmers and which were discouraged, and based on all that, it begins making TRUE or FALSE choices with the odds weighed by that data. That's how all AI we currently have fundamentally works, and that, in and of itself, is not a bad thing. It's a tool, and tools are hard pressed to be evil. What it is, however, is vastly inferior to the process of a human writer for one simple fact: when asked a question, we have more options than to answer it TRUE or FALSE.
If you ask a human writer to tell you about romance, they too will draw upon all the memories they have stored away of what they know about romance and base an answer off of that. But they will also draw on all the knowledge they have of astronomy, to compare the feeling it creates inside to that of hydrogen fusing, and of medicine, because it burns so bright inside that it feels like your rib cage should be alight from the inside until it looks like an inverted x-ray image. A human writer will visualize the way love feels and draw connections an AI couldn't fathom, because it was never trained to do so. And more than that, if a human writer tells you about romance, they won't tell you just about romance.
They will tell you about how romance happens.
They will tell you about what romance between a young Polish woman and a young Polish man living in what would one day be the powiat of Bieszczadzki on the border with Ukraine, but for now was just the Bieszczadzki mountains, would have looked like in the spring of 1914. And they will tell you what it looked like in all the months afterwards as the young man is drafted into the army and their home is ravaged by WWI, as the Bieszczadzki mountains become one of the most bitterly contested regions on the Eastern Front during the war. They will tell you about how romance, how the love blooming from it, cannot fix the damage wrought by senseless battles fought by powers so much greater than the two of them, but how it carries them through the war nonetheless.
And what's more, they will know enough about the history of Poland to parallel the growing love between these two young people with the growing, not-yet-formed modern state of Poland that will once again rise from the ashes of the war after having previously been partitioned by greater powers into non-existence.
A human writer will not only have the knowledge to do that, they will have the skills and manner of thinking necessary to form the thoughts that will lead to such a story and make it into something incredible. An AI, no matter how well you train it, no matter how good you make it at emulating a writer's style, will not be able to form the same thought process. Not because it is flawed, but because it simply isn't built for that.
An AI cannot experience nationalism or patriotism for a country, an AI cannot reason out how people might have lived in the absence of credible historic evidence when it runs up against a gap in its data, an AI will not understand the link between fragile, young love blooming in adversity and a country struggling to be reborn in spite of the greater nations around it that wish it would remain dead. It cannot do this, because none of this is based in "TRUE or FALSE" questions. It's based in the painfully human experience of complicated emotions, difficult thoughts, and yes, even deeply flawed ways of looking at the world that nonetheless are beautiful exactly for having those flaws.
An AI, at its core, with where the technology is right now, is a machine of averages. Even if we polish it to peak performance, that will not change. At peak performance, it will still give you an average of all the possible answers it could give, it will be technically flawless, and it will never be anything even close to a fraction of the lightning in a bottle that a writer with categorically shit technique can capture if their heart and mind are put into it.
And let's be honest here and step a foot outside of the bubble of speculation, just for a bit: AI will never, ever, give you an answer or story that pushes boundaries or makes you think like even the most technically incompetent but passionate authors are able to.
Because in order to push boundaries, in order to deliver a message, you have to be willing to make people uncomfortable. You have to be willing to be messy and raw to the point that your story bleeds. And even if we polished AI to perfection, even if, by some miracle of a completely new and fresh coding base, it could do all those things... the humans pulling the strings of the machine would never allow it to do so. Because if their machine produces stories that push boundaries, that have things to say, that make people uncomfortable, it's not going to be profitable. It's not going to be advertiser-friendly. It's not going to please the stock market. And let's be honest here, in the end, that's what matters to those people.
AI cannot write the stories that people want it to, that they truly want it to, because in the end, the stories we want to hear are not the stories it can tell. The stories we want to hear are, in the end, painfully human, in all the best and worst ways possible. And if you want a human story, if you want to have something like that lightning in a bottle, AI can never be more than a tool in making them instead of the maker itself. A potentially useful and innovative tool, but nothing more than that.
Because if you want human stories, no one but a human will be able to write them. And no one but a human will be able to read them and understand what's being said.
206 notes · View notes
sixstringphonic · 1 year
‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead
(Reported by Cade Metz, The New York Times)
Geoffrey Hinton was an artificial intelligence pioneer. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the A.I. systems that the tech industry’s biggest companies believe is a key to their future.
On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.
Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
“I console myself with the normal excuse: If I hadn’t done it, somebody else would have,” Dr. Hinton said during a lengthy interview last week in the dining room of his home in Toronto, a short walk from where he and his students made their breakthrough.
Dr. Hinton’s journey from A.I. groundbreaker to doomsayer marks a remarkable moment for the technology industry at perhaps its most important inflection point in decades. Industry leaders believe the new A.I. systems could be as important as the introduction of the web browser in the early 1990s and could lead to breakthroughs in areas ranging from drug research to education.
But gnawing at many industry insiders is a fear that they are releasing something dangerous into the wild. Generative A.I. can already be a tool for misinformation. Soon, it could be a risk to jobs. Somewhere down the line, tech’s biggest worriers say, it could be a risk to humanity.
“It is hard to see how you can prevent the bad actors from using it for bad things,” Dr. Hinton said.
After the San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems because A.I. technologies pose “profound risks to society and humanity.”
Several days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of A.I. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology across a wide range of products, including its Bing search engine.
Dr. Hinton, often called “the Godfather of A.I.,” did not sign either of those letters and said he did not want to publicly criticize Google or other companies until he had quit his job. He notified the company last month that he was resigning, and on Thursday, he talked by phone with Sundar Pichai, the chief executive of Google’s parent company, Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.
Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to a responsible approach to A.I. We’re continually learning to understand emerging risks while also innovating boldly.”
Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career was driven by his personal convictions about the development and use of A.I. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.
In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most A.I. research in the United States was funded by the Defense Department. Dr. Hinton is deeply opposed to the use of artificial intelligence on the battlefield — what he calls “robot soldiers.”
As companies improve their A.I. systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of A.I. technology. “Take the difference and propagate it forwards. That’s scary.”
Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release something that might cause harm. But now that Microsoft has augmented its Bing search engine with a chatbot — challenging Google’s core business — Google is racing to deploy the same kind of technology. The tech giants are locked in a competition that might be impossible to stop, Dr. Hinton said.
His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will “not be able to know what is true anymore.”
He is also worried that A.I. technologies will in time upend the job market. Today, chatbots like ChatGPT tend to complement human workers, but they could replace paralegals, personal assistants, translators and others who handle rote tasks. “It takes away the drudge work,” he said. “It might take away more than that.”
But that may be impossible, he said. Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
(Reported by Cade Metz, The New York Times)
184 notes · View notes
rjzimmerman · 12 days
The Carbon Footprint of Amazon, Google, and Facebook Is Growing. (Sierra Club)
Excerpt from this story from Sierra Club:
In March, The Information reported that Microsoft was in talks with OpenAI, the creator of ChatGPT, about spending an eye-popping $100 billion on a gargantuan data center in Wisconsin dedicated to running artificial intelligence software. Code-named “Stargate,” the data center would, at full operation, consume five gigawatts of electricity, enough to power 3.7 million homes. For comparison purposes, that’s roughly the same amount of power produced by Plant Vogtle, the big nuclear power station in Georgia that cost $30 billion to build.
Stargate is in the earliest of planning stages, but the sheer scale of the proposal reflects a truth about artificial intelligence: AI is an energy hog. That’s an embarrassing about-face for the technology industry. For at least 20 years, American electricity consumption has hardly grown at all—owing in part, say computer scientists, to steady advances in energy efficiency that have percolated out of the tech industry into the larger economy. In 2023, according to the US Energy Information Administration, total electricity consumption fell slightly from 2022 levels.
But according to a report published last December by Grid Strategies, a consultancy that advises on energy policy, multiple electric utilities now predict that US energy demand will rise by up to 5 percent over the next five years. One of the chief culprits responsible for the surge, say the utilities, is the wave of new data centers designed to run AI. To meet the growing demand for power, those utilities want to build new fossil fuel power plants and to dismantle climate legislation that stands in their way.
For environmentalists, this represents a giant step backward. Artificial intelligence was supposed to help us solve problems. What good are ChatGPT and its ilk if using them worsens global warming?
This is a relatively new story—the AI gold rush is still in its infancy, ChatGPT only having debuted in fall 2022. But computing’s energy demands have been growing for decades, ever since the internet became an indispensable part of daily life. Every Zoom call, Netflix binge, Google search, YouTube video, and TikTok dance is processed in a windowless, warehouse-like building filled with thousands of pieces of computer hardware. These data centers are where the internet happens, the physical manifestation of the so-called cloud—perhaps as far away from ethereality as you can get.
In the popular mind, the cloud is often thought of in the simple sense of storage. This is where we back up our photos, our videos, our Google Docs. But that’s just a small slice of it: For the past 20 years, computation itself has increasingly been outsourced to data centers. Corporations, governments, research institutions, and others have discovered that it is cheaper and more efficient to rent computing services from Big Tech.
The crucial point, writes anthropologist Steven Gonzalez Monserrate in his case study The Cloud Is Material: On the Environmental Impacts of Computation and Data Storage, is that “heat is the waste product of computation.” Data centers consume so much energy because computer chips produce large amounts of heat. Roughly 40 percent of a data center’s electricity bill is the result of just keeping things cool. And the new generation of AI software is far more processor intensive and power hungry than just about anything—with the notable exception of cryptocurrency—that has come before.
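To see what that cooling share implies, here is a minimal Python sketch; the 40 percent figure comes from the case study quoted above, and the 10 MW IT load is an arbitrary example:

```python
# A rough consequence of the "roughly 40 percent of the electricity bill goes
# to cooling" figure quoted above. The 40% share is from the article; the
# 10 MW IT load is an arbitrary example, and for simplicity the remaining 60%
# is treated as the servers themselves (ignoring lighting, networking, etc.).

cooling_share = 0.40          # fraction of total electricity spent on cooling
it_load_mw = 10.0             # hypothetical power drawn by the servers

total_mw = it_load_mw / (1 - cooling_share)
overhead_ratio = total_mw / it_load_mw   # analogous to a PUE-style overhead figure

print(f"Total facility power: {total_mw:.1f} MW")       # about 16.7 MW
print(f"Overhead multiplier:  {overhead_ratio:.2f}x")   # about 1.67x the IT load
```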
The energy cost of AI and its perverse, climate-unfriendly incentives for electric utilities are a gut check for a tech industry that likes to think of itself as changing the world for the better. Michelle Solomon, an analyst at the nonprofit think tank Energy Innovation, calls the AI power crunch “a litmus test” for a society threatened by climate change.
11 notes · View notes