Baseless Claim: Trump Alleges House Democrats Used AI to Create Montage
Former President Donald Trump falsely accuses House Democrats of using Artificial Intelligence to create a montage highlighting his alleged cognitive decline.
Former President Donald Trump has once again made baseless allegations, this time claiming that House Democrats used "Artificial Intelligence" (AI) to create a montage that aimed to show his lack of mental acuity. Trump took to social media to express his frustration, stating, "The Hur Report was revealed today! A disaster for Biden, a two-tiered standard of justice. Artificial Intelligence was used by them against me in their videos of me. Can't do that Joe!" However, as is often the case with political rhetoric, these allegations were made without any evidence to support them.
Rep. Jerry Nadler Presents Montage to Counter Accusations of Biden's Cognitive Decline
During a hearing, Rep. Jerry Nadler, the committee's ranking member, pushed back against Republicans by presenting a montage of Trump's gaffes and memory lapses. The montage was intended to counter Republican characterizations of Biden's memory by showing that Trump, who has accused others of cognitive decline, exhibits comparable lapses of his own.
Trump's Response on Social Media
Trump responded to the embarrassing montage on his social media platform, Truth Social, by alleging that AI was used against him in the creation of the videos. He repeated his complaint about a "two-tiered standard of justice" and accused Democrats of using AI to manipulate the footage. However, he provided no evidence to support these claims.
Lack of Evidence and Political Rhetoric
As with many political claims, Trump's allegations of AI usage in the creation of the montage lack any substantiating evidence. The video in question consists of clips showcasing Trump's various verbal slips and memory failures. Trump's assertion that AI was used to produce the video is baseless and serves as a rhetorical tactic to deflect attention from the content of the montage.
Former President Donald Trump's unfounded allegations that House Democrats used AI to create a montage highlighting his alleged cognitive decline are without merit. These claims, made without any evidence, are part of a pattern of baseless accusations aimed at diverting attention from the content of the montage. As political rhetoric continues to shape public discourse, it is essential to critically evaluate claims and demand evidence before accepting them as truth.
The Illusion of AI: Unveiling the Magic Behind Artificial Intelligence
Dr. Jonathan Chen Reveals the Risks and Realities of Generative AI
Dr. Jonathan Chen, a renowned assistant professor at the Stanford Center for Biomedical Informatics Research, captivated the audience at the HIMSS24 AI in Healthcare Forum with his thought-provoking performance. Drawing inspiration from Arthur C. Clarke's famous quote, "Any sufficiently advanced technology is indistinguishable from magic," Chen highlighted the growing challenge of discerning reality from illusion in a world where technology, especially artificial intelligence (AI), is advancing at an unprecedented pace.
Chen's engaging demonstrations, which included impressive magic tricks, served as a metaphor for the risks associated with AI, particularly generative AI. While acknowledging the value AI offers, he emphasized the need for transparency and a clear understanding of its potential dangers.
The Invisible Forces at Play
Chen emphasized the "invisible forces at play" in the age of generative AI, comparing them to the black-box concept of machine learning algorithms. He showcased a video of an AI-generated speaker who closely resembled himself, raising questions about the authenticity of what we perceive as real. Chen stressed the need to critically evaluate the credibility of AI-generated content, as it challenges our ability to distinguish between what is genuine and what is fabricated.
The Pandora's Box of AI
Chen acknowledged the wide range of practical applications of AI in healthcare, from automating administrative tasks to enhancing patient engagement. However, he cautioned that the Pandora's box of AI has been opened, leading to both positive and negative consequences. Chen recounted a conversation with medical trainees who were relying on AI systems like ChatGPT during their rounds.
He advised caution, urging them to understand the limitations and potential pitfalls of using AI as a medical reference.
The Evolution of AI
Chen highlighted the evolving nature of large language models (LLMs), describing them as "autocomplete on steroids." These models have demonstrated emergent properties, such as question answering, summarization, translation, and even reasoning with a theory of mind. Chen emphasized the power of language in creating convincing illusions of intelligence, urging clinicians to be vigilant for confabulation, where AI systems string together words without true understanding or meaning.
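To make the "autocomplete" framing concrete, the toy sketch below (our illustration, not something from Chen's talk) predicts the next word purely from bigram counts. LLMs do the same job at vastly larger scale with learned representations rather than raw counts, which is where the illusion of understanding arises.

```python
# Toy illustration of "autocomplete on steroids": predict the next word from
# bigram counts alone. A purely illustrative stand-in for an LLM, which learns
# far richer statistics but is still, at bottom, predicting the next token.
from collections import Counter, defaultdict

corpus = "the patient is stable the patient is improving the chart is ready".split()
next_word = defaultdict(Counter)
for prev, cur in zip(corpus, corpus[1:]):
    next_word[prev][cur] += 1               # count observed continuations

def autocomplete(word: str) -> str:
    """Return the most frequent continuation seen after `word`."""
    options = next_word[word]
    return options.most_common(1)[0][0] if options else "<unk>"

print(autocomplete("patient"))              # -> "is"
```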
The Empathy of AI
Contrary to popular belief, AI is not devoid of empathy. Chen referenced a study conducted by his colleagues at Stanford, which found that chatbot-generated answers to medical questions scored higher in quality and empathy compared to responses from real doctors. This challenges the notion that humans have a monopoly on empathy and therapeutic relationships.
Chen predicted that more people may seek therapy and counseling from automated robots due to the overwhelming demand and limited supply of human healthcare providers.
While AI continues to shape the healthcare landscape, Chen emphasized the irreplaceable role of humans in the equation. He quoted healthcare IT and informatics pioneer Warner Slack, stating that any doctor who can be replaced by a computer should be replaced by a computer. However, Chen's concern lies in the burnout experienced by physicians who are burdened with data entry tasks and the overwhelming needs of millions of patients.
He expressed hope that AI systems can alleviate some of the workload, allowing healthcare professionals to find joy in their work once again. Ultimately, Chen believes that those who embrace and learn to effectively utilize AI will thrive in the ever-changing healthcare landscape.
AI Model Provides Reliable Diagnosis for Pulmonary Hypertension in Newborns
Researchers at ETH Zurich and KUNO Klinik St. Hedwig Develop an AI Model to Aid in Diagnosing Pulmonary Hypertension in Newborn Infants
The diagnosis of pulmonary hypertension in newborn infants can be a challenging and time-consuming process. This serious disorder, characterized by narrowed arteries to the lungs, requires the expertise of experienced pediatric cardiologists to detect and treat. However, not all medical facilities have access to these specialists, leading to delays in diagnosis and treatment.
To address this issue, researchers from ETH Zurich and KUNO Klinik St. Hedwig have collaborated to develop a computer model that utilizes artificial intelligence (AI) to provide reliable support in diagnosing pulmonary hypertension in newborns. The model shows promising results and has the potential to improve healthcare outcomes for infants with this condition.
Training the Algorithm and Evaluating Success
The research team trained their algorithm using hundreds of video recordings taken from ultrasound examinations of the hearts of 192 newborns. These recordings included moving images of the beating heart from different angles, as well as diagnoses by experienced pediatric cardiologists. The algorithm was then tested using a dataset of ultrasound images from 78 newborn infants that it had never encountered before.
The model accurately diagnosed the presence of pulmonary hypertension in approximately 80% to 90% of cases and determined the severity of the disease in around 65% to 85% of cases.
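The article does not publish the model's architecture or code. As a rough illustration of the train-then-test protocol described above, here is a minimal, hypothetical PyTorch sketch that fits a tiny 3D CNN on synthetic stand-in clips and scores it on a held-out set; every shape, layer size, and hyperparameter is an assumption for illustration only.

```python
# Minimal sketch of the train/evaluate split described above, using a tiny 3D CNN
# on synthetic stand-in data. Nothing here reproduces the ETH Zurich model.
import torch
import torch.nn as nn

class TinyEchoNet(nn.Module):
    """Toy 3D CNN over (batch, channel, frames, height, width) video clips."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),        # collapse time and space
        )
        self.classifier = nn.Linear(8, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Synthetic stand-ins for the 192 training infants and 78 held-out infants.
train_x = torch.randn(192, 1, 8, 32, 32); train_y = torch.randint(0, 2, (192,))
test_x  = torch.randn(78, 1, 8, 32, 32);  test_y  = torch.randint(0, 2, (78,))

model = TinyEchoNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                      # a real model would train far longer
    opt.zero_grad()
    loss_fn(model(train_x), train_y).backward()
    opt.step()

with torch.no_grad():                       # score clips never seen in training
    accuracy = (model(test_x).argmax(1) == test_y).float().mean().item()
print(f"held-out accuracy: {accuracy:.2%}")
```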
Making AI Reliable and Explainable
One of the key challenges in using AI models in a medical context is ensuring their reliability and explainability. The researchers addressed this issue by developing a model that highlights the specific areas or characteristics of the heart and its blood vessels that influenced its categorization. This allows doctors to understand the criteria the model used to make its decisions.
Remarkably, when the pediatric cardiologists examined the datasets, they found that the model considered the same characteristics as they did, even though it was not explicitly programmed to do so.
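The article does not say which explainability technique the researchers used; one common way to produce such highlights is an input-gradient saliency map. Continuing the hypothetical sketch above:

```python
# Input-gradient saliency: how strongly does each pixel of the clip influence
# the predicted class? One common explainability technique, assumed here for
# illustration; the researchers' actual method may differ.
clip = test_x[:1].clone().requires_grad_(True)
logits = model(clip)
logits[0, logits[0].argmax()].backward()      # gradient of the winning class score
saliency = clip.grad.abs().amax(dim=2)[0, 0]  # max over frames -> (H, W) heat map
print(saliency.shape)                          # large values mark influential regions
```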
Extending the Model's Applications
The potential applications of this machine-learning model extend beyond diagnosing pulmonary hypertension in newborns. It could be adapted to diagnose other organ conditions and diseases, such as heart septal defects or valvular heart disease. Additionally, in regions where specialists are not readily available, standardized ultrasound images taken by healthcare professionals could be analyzed by the model to provide a preliminary risk assessment and indicate whether a specialist should be consulted.
Even in medical facilities with access to highly qualified specialists, the model could assist in easing their workload and ensuring a more objective diagnosis.
The Role of AI in Healthcare
While AI has the potential to significantly improve healthcare, the researchers emphasize that the final decision should always be made by a human, specifically a doctor. AI should serve as a supportive tool to ensure that the maximum number of people receive the best possible medical care. The model developed by the ETH Zurich and KUNO Klinik St. Hedwig teams exemplifies this approach, providing reliable support for diagnosing pulmonary hypertension in newborns and paving the way for future advancements in AI-assisted healthcare.
The collaboration between researchers at ETH Zurich and KUNO Klinik St. Hedwig has resulted in the development of an AI model that shows promise in diagnosing pulmonary hypertension in newborns. By training the algorithm on ultrasound images and incorporating explainability features, the model provides reliable support to pediatric cardiologists in making accurate diagnoses. The model's potential applications extend beyond pulmonary hypertension, offering the possibility of diagnosing various organ conditions and diseases.
While AI has the potential to revolutionize healthcare, it is crucial that the final decision remains in the hands of medical professionals. The model serves as a valuable tool to enhance the diagnostic process and ensure that infants receive the best possible care.
Understanding the Difference Between Artificial Intelligence and Machine Learning
Exploring the Distinctions and Real-World Applications of AI and ML
The terms "artificial intelligence" (AI) and "machine learning" (ML) have long been used interchangeably, causing confusion among many. However, Pedro Rodriguez, a renowned expert with titles in both AI and ML, sheds light on the distinctions between the two and emphasizes their significance. In a recent Breaking Defense webinar, Rodriguez, along with other experts, discussed the practical applications of AI and ML in the present rather than focusing solely on their future potential.
Rodriguez, who has spent two decades at Johns Hopkins Applied Physics Laboratory (APL), defines AI as a machine-based system that can make predictions, recommendations, or decisions that impact real or virtual environments. On the other hand, he prefers to define ML as a subset of AI, referring to data-driven approaches that allow computers to learn from data without explicit programming.
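To illustrate the distinction Rodriguez draws (this generic example is ours, not his): in classic programming a human writes the classification rules, whereas in ML the rules are induced from data.

```python
# "Learning from data without explicit programming": no iris-classification rule
# is written down anywhere; the decision tree induces one from labeled examples.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = DecisionTreeClassifier().fit(X_train, y_train)   # rules induced from data
print(f"accuracy on unseen flowers: {clf.score(X_test, y_test):.2%}")
```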
The Evolution of AI and Machine Learning
Machine learning, as Rodriguez highlights, has been in existence for fifty years. However, the majority of that time was dedicated to transferring or programming human knowledge into machines. The current AI boom is distinct in that it is driven by deep learning, in which computers are fed vast amounts of data and imagery (the approach underlying today's large language models) and are expected to discern the significant patterns within that data autonomously.
Real-World Applications and Implications
During the webinar, Andrew Pepler, a senior strategy and policy analyst in the office of the Chief Digital and Artificial Intelligence Officer, and Vivek Chilukuri, a senior fellow and director of the Technology and National Security Program at the Center for a New American Security, delved into various topics related to AI and ML.
Pepler, who authored the Department of Defense's recently released Data Analytics and AI Adoption Strategy, discussed the areas where AI is already gaining traction, such as organizing data for analysts and AI models. He also explored the challenges of testing and evaluating AI when it demonstrates the ability to transform. Chilukuri, drawing from his experience as a senior staff member for Senator Michael Bennet, D-Colo., addressed the issue of regulating AI in a landscape where even understanding the impacts of social media or cryptocurrency is a struggle for many in Congress.
Accessing the Webinar
The full three-way webinar, available on-demand, provides in-depth insights into the distinctions between AI and ML, their real-world applications, and the challenges they present.
Understanding the differences between artificial intelligence and machine learning is crucial in navigating the rapidly evolving field of technology. While AI encompasses a broader scope of machine-based systems that can influence environments, ML specifically refers to data-driven approaches that enable computers to learn without explicit programming. As AI and ML continue to shape various industries, it is essential to explore their practical applications and address the challenges they pose, from testing and evaluation to regulation.
The Breaking Defense webinar provides a valuable resource for gaining a comprehensive understanding of these complex concepts.
The Evolving Landscape of Software Development at NASA
Exploring New Frontiers in Software Engineering
The role of software in NASA's mission has become increasingly crucial, with software now playing a vital role in every aspect of space exploration. From spacecraft control systems to data analysis and image recognition, software has become an integral part of NASA's operations. In this article, we will delve into some recent developments and focus areas in the field of software engineering at NASA, and how the Software Technical Discipline Team (TDT) is shaping the future of software development practices.
Understanding Automation Risk
One of the key challenges in software engineering is understanding and mitigating automation risk. As reliance on automation increases, the potential for software failures also grows. To address this, the software team at NASA has conducted a comprehensive analysis of historical software incidents in aerospace.
The findings indicate that software often behaves erroneously rather than simply crashing. Rebooting, which was previously considered a recovery strategy, has been found to be ineffective in such cases.
The analysis also revealed that unexpected behavior is often attributed to issues in the code or logic itself. Additionally, missing software, due to unanticipated situations or missing requirements, was identified as a significant source of errors. Data misconfiguration was another factor that contributed to software incidents.
The study also highlighted the presence of "unknown unknowns," which accounted for 19% of the software incidents studied.
Based on these findings, the software team is working on improving best practices. Emphasis is being placed on complete requirements, off-nominal test campaigns, and real hardware testing. The team also recommends designing systems for fault tolerance and employing backup strategies for critical applications.
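As a rough sketch of the backup-strategy pattern (illustrative only; NASA flight software is not written this way, and these names and policies are assumptions), a critical computation can be wrapped with a bounded retry and an independent fallback path:

```python
# Illustrative fault-tolerance pattern: bounded retries on the primary routine,
# then an independent backup path. Names and policies are assumptions, not NASA code.
import logging
import time

def run_with_fallback(primary, backup, retries=2, delay_s=0.5):
    """Try `primary` up to `retries` times, then fall back to `backup`."""
    for attempt in range(1, retries + 1):
        try:
            return primary()
        except Exception as exc:             # erroneous behavior, not just crashes
            logging.warning("primary failed (attempt %d): %s", attempt, exc)
            time.sleep(delay_s)
    return backup()                           # degraded but safe result

# Hypothetical usage: fall back to a last known-good value on failure.
estimate = run_with_fallback(
    primary=lambda: 1 / 0,                   # stand-in for a failing computation
    backup=lambda: 0.0,                      # stand-in for a safe default
)
print(estimate)
```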
Employing AI and Machine Learning Techniques
The rise of artificial intelligence (AI) and machine learning (ML) techniques has opened up new possibilities for NASA. These technologies enable NASA to analyze data in ways that were not previously possible. While AI and ML are predominantly used in research and science projects, the Agency is exploring their potential in safety-critical flight applications.
NASA uses AI and ML in more than 400 projects, aiding research and scientific endeavors. One common application is image recognition and identification, which helps reduce the workload on personnel. For example, AI and ML are used to identify marine debris, measure cloud thickness, and detect wildfire smoke.
These technologies are also employed in predicting atmospheric physics, such as hurricane track and intensity prediction.
The TDT has been actively surveying AI and ML work across the Agency, identifying trends and lessons learned. NASA is also working on AI ethics frameworks and considering security implications in the use of these technologies.
The Code Analysis Pipeline: Static Analysis Tool for IV&V and Software Quality Improvement
The Code Analysis Pipeline (CAP) is an open-source tool architecture that supports software development and assurance activities at NASA. CAP is used by the Independent Verification and Validation (IV&V) Program to ensure software quality in various projects. It enables the configuration and automated execution of multiple static code analysis tools, helping identify potential code defects and areas of quality concern.
CAP is based on an industry-standard DevOps approach, integrating with GitHub for source code control, using Jenkins for automation, and leveraging Docker to create custom build environments. The tool architecture also supports Modified Condition/Decision Coverage analysis for coverage testing.
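CAP itself is wired through Jenkins and Docker; the hypothetical Python sketch below captures only the core idea of configuring and automatically running several static analyzers and tallying their findings. The tool choices (pylint, flake8) are stand-ins, not the analyzers CAP actually runs.

```python
# Hedged sketch of the CAP idea: run a configured set of static analyzers over
# a source tree and tally findings. Tools here are stand-ins, assumed installed.
import json
import subprocess
import sys

ANALYZERS = {                                # tool name -> command line
    "pylint": ["pylint", "--output-format=json"],
    "flake8": ["flake8"],
}

def run_analyzers(path: str) -> dict:
    """Run each configured analyzer on `path` and collect raw output lines."""
    findings = {}
    for name, cmd in ANALYZERS.items():
        proc = subprocess.run(cmd + [path], capture_output=True, text=True)
        findings[name] = proc.stdout.strip().splitlines()
    return findings

if __name__ == "__main__":
    report = run_analyzers(sys.argv[1] if len(sys.argv) > 1 else ".")
    print(json.dumps({name: len(lines) for name, lines in report.items()}, indent=2))
```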
Improving Software Process & Sharing Best Practices
The TDT has been actively working on capturing best practices and sharing knowledge across NASA centers. NPR 7150.2, NASA Software Engineering Requirements, and NASA-HDBK-2203, NASA Software Engineering and Assurance Handbook, outline the best practices for software engineering at NASA. These documents cover the full lifecycle of software development, including requirements development, architecture, design, implementation, and verification.
The TDT has established several subteams to provide guidance and best practices in areas such as software architecture, project management, cybersecurity, and testing. These teams have documented their findings in NASA-HDBK-2203 and on the NASA Engineering Network.
The TDT deputies play a crucial role in training software engineers, systems engineers, chief engineers, and project managers on the requirements outlined in NPR 7150.2. They also provide advanced management training to software technical leads, covering aspects such as planning, cost estimating, negotiation, and change management.
Software engineering plays a vital role in NASA's mission, enabling the successful execution of complex space exploration projects. The software team at NASA is continuously working to improve best practices and mitigate automation risk. The rise of AI and ML techniques has opened up new frontiers, allowing NASA to analyze data in innovative ways.
The Code Analysis Pipeline is enhancing software quality through static code analysis, and the TDT is actively sharing best practices and knowledge across the Agency.
As NASA continues to push the boundaries of space exploration, software engineering will remain a critical discipline, evolving to meet the challenges of the future.
United States Leads UN Effort on Artificial Intelligence Resolution
Draft resolution seeks to ensure safe and equitable use of AI technology
The United States is taking the lead in spearheading the first-ever United Nations resolution on artificial intelligence (AI). The resolution aims to address the urgent need for global consensus on the safe, secure, and trustworthy development and use of AI systems. It also emphasizes the importance of closing the digital divide and ensuring that all countries, especially those in the developing world, have equal access to AI technology.
This groundbreaking initiative seeks to harness the potential of AI for the benefit of all nations, while managing the risks associated with its rapid acceleration.
US Pushes for Global Conversation on AI Governance
The draft General Assembly resolution recognizes the need for a global conversation on managing the implications of AI technology. U.S. National Security Advisor Jake Sullivan emphasizes the importance of having all countries at the table to discuss AI governance. The resolution aims to establish a baseline set of principles for the development and use of AI, promoting its positive impact while managing potential risks.
If approved, this resolution would mark a historic step forward in fostering safe, secure, and trustworthy AI worldwide.
Broad International Support for the Resolution
The United States has engaged in extensive negotiations with all 193 U.N. member nations over the past three months to garner support for the resolution. Through direct talks, negotiations, and input from 120 nations, consensus support from all member states has been achieved. While General Assembly resolutions are not legally binding, they serve as an important barometer of world opinion.
AI as a Catalyst for Achieving UN Development Goals
The draft resolution highlights the potential of AI to accelerate progress towards the United Nations' development goals for 2030. By leveraging AI technology, the resolution aims to address global challenges such as hunger, poverty, health disparities, education, and gender equality. It encourages collaboration among countries, organizations, technical communities, civil society, academia, and research institutions to develop regulatory and governance approaches for safe AI systems.
Safeguarding Human Rights and Fundamental Freedoms
The resolution emphasizes the importance of respecting, protecting, and promoting human rights and fundamental freedoms throughout the life cycle of AI systems. It warns against improper or malicious design, development, deployment, and use of AI systems that are inconsistent with international law. The resolution aligns with ongoing efforts by the European Union, the United States, China, and other countries to establish comprehensive AI regulations.
The United States' initiative to lead the first United Nations resolution on artificial intelligence marks a significant milestone in global efforts to ensure the safe and equitable development and use of AI technology. By prioritizing global consensus and inclusivity, the resolution aims to bridge the digital divide and enable all nations to benefit from AI advancements. As countries around the world move towards establishing AI regulations, this resolution sets the stage for a shared approach to AI governance.
With its potential to drive progress towards the U.N.'s development goals, AI holds the promise of transforming societies and improving lives, provided it is harnessed responsibly and ethically.
The Fragmented Future of AI Regulation: A World Divided
The growing divergence in AI regulation threatens international cooperation
In November 2023, China, the United States, and the European Union issued a joint communiqué, pledging to address the risks associated with artificial intelligence (AI). This surprising display of cooperation highlighted the potential misuse of AI for disinformation and cybersecurity threats. Talks between the U.S. and China on AI regulation were also announced, further indicating a global effort to manage the risks of this transformative technology.
However, despite these promising signs of collaboration, a closer look at recent developments suggests a future of fragmentation and competition rather than unified regulation. Divergent legal regimes are emerging, hindering cooperation on crucial aspects such as access to semiconductors, technical standards, and data regulation. This article explores the obstacles that stand in the way of a cohesive global governance regime for AI and the implications of a divided landscape.
CHIPS ON THEIR SHOULDERS
The ongoing duel between China and the United States over global semiconductor markets is a prominent area of conflict related to AI. The U.S. Commerce Department's comprehensive licensing regime for the export of advanced chips and chip-making technology has triggered countermeasures from China, including export controls on rare minerals necessary for chip manufacturing. These tit-for-tat actions erode free trade and set destabilizing precedents in international trade law, leading to lower levels of trade and heightened geopolitical tensions.
THE BATTLE FOR TECHNICAL STANDARDS
Technical standards play a crucial role in enabling the production and purchase of complex products, including AI technology. Currently, bodies such as the International Telecommunication Union and the International Organization for Standardization negotiate these standards. However, China's increasing leadership roles in these bodies and its promotion of its own preferred standards are challenging the dominance of the U.S. and Europe.
The EU's AI Act, for example, specifies European standard-setting organizations, signaling a conscious effort to differentiate European regulation from that of the U.S. and China. This divergence in AI-related technical standards hampers global cooperation and impedes the development of technical solutions to address shared challenges.
THEIR DARK MATERIALS
Geopolitical conflict also shapes the competition over intangible assets crucial to AI. Access to different types of data becomes a point of contention, with countries imposing restrictions on data flows. The United States, once a proponent of free global data transfers, is now moving toward limiting data flows to China, while China is easing its localization rules.
This divergence in data regulation makes it harder for data to cross national borders and impedes the development and diffusion of new AI tools.
Moreover, the demand for disclosure of AI algorithms further deepens divisions. The EU's proposed AI Act requires large firms to provide government agencies access to the inner workings of certain models, while the U.S. approach is more complex and contradictory. These regulations regarding algorithms are likely to follow the path of data regulation, leading to further fragmentation and hindering global solutions.
The initial optimism surrounding global cooperation on AI regulation is giving way to a fragmented legal order. The emerging regime is characterized by fracture and distance, rather than entanglement. This poses significant risks, as dangerous AI models can be developed and disseminated as instruments of geopolitical conflict.
Autocracies may exploit the free flow of information in democracies to weaken them from within. The lack of a cohesive global effort to regulate AI not only hampers the assessment of risks but also undermines proposals for better global governance. It is crucial to address these obstacles and strive for a more unified approach to AI regulation to ensure the responsible and safe development of this transformative technology.
10 Reasons to Approach Nvidia's AI Stock with Caution
Examining the Potential Risks and Challenges Surrounding Nvidia's AI Dominance
Over the past few decades, numerous trends and innovations have captivated growth-seeking investors, promising substantial returns. Artificial intelligence (AI) has emerged as the latest next-big-thing, with the potential to revolutionize various sectors and industries. Among AI stocks, Nvidia has garnered significant attention and market capitalization, positioning itself as a leader in the AI revolution.
However, despite its current success, there are several reasons why investors should exercise caution when considering Nvidia as an investment option.
In this article, we will delve into the potential risks and challenges that Nvidia faces, shedding light on 10 key reasons why it may not be the ideal investment choice.
Margin Cannibalization and Declining Pricing Power
One of the major concerns surrounding Nvidia is the potential for margin cannibalization as the company expands production. With an increase in competition and its own production expansion, the pricing power that has fueled Nvidia's data center sales growth is expected to decline, negatively impacting the company's gross margin.
Growing External Competition
Nvidia's dominance in the AI-accelerated GPU market could face challenges as external competition intensifies. Competitors like Advanced Micro Devices (AMD) and Intel are unveiling their own AI chips, aiming to capture market share from Nvidia. As competition increases, Nvidia's pricing power and market share may be at risk.
Internal Competition from Top Customers
In addition to external competition, Nvidia also faces the risk of losing its top customers to internal competition. Companies like Microsoft, Meta Platforms, Amazon, and Alphabet, which collectively account for a significant portion of Nvidia's sales, are developing their own AI chips. This development could reduce their reliance on Nvidia in the future, impacting the company's revenue.
Regulatory Restrictions on Shipments to China
U.S. regulators have imposed restrictions on the export of Nvidia's high-powered GPUs to China, the second-largest economy globally. These restrictions have already cost Nvidia billions of dollars in sales. The ongoing regulatory challenges pose a significant risk to Nvidia's future growth prospects.
Lack of Insider Buying
Over the past three years, there has been no insider buying at Nvidia. While insider selling can happen for many reasons, the absence of insider buying suggests that executives do not anticipate significant value appreciation. This lack of confidence from company insiders raises concerns for potential investors.
Historical Patterns of Early Stage Bubbles
Next-big-thing investments often go through an early stage bubble, and Nvidia is no exception. While AI has the potential to transform the global economy, history has shown that investors tend to overestimate the adoption rate of new trends or innovations. It is crucial to be cautious about the potential for an AI bubble.
Lack of Concrete Game Plans for AI Deployment
Many businesses lack a concrete game plan for deploying AI solutions, which raises concerns about the effectiveness and profitability of AI investments. Without clear strategies for utilizing AI, the growth potential for companies like Nvidia becomes uncertain, considering the significant market cap increase they have experienced.
Recession Indicators and Cyclical Nature of Tech Stocks
Predictive indicators suggest a U.S. recession is approaching, which can negatively impact tech stocks like Nvidia. Tech stocks are cyclical, and fear-of-missing-out (FOMO) rallies typically require a strong economy to sustain them. The presence of recession indicators raises concerns about the future performance of Nvidia and similar companies.
Questionable Valuation
Nvidia's valuation has been a subject of debate among investors. While the company's growth rate has been impressive, its sequential quarterly sales growth has slowed down. This, coupled with its high valuation, makes it a challenging investment proposition for many.
Historical comparisons also highlight the potential risks associated with such high valuations.
More Attractive Investment Options in the AI Space
Lastly, there are alternative investment options in the AI space that offer better value and potentially lower risks than Nvidia. Companies like Meta Platforms and Baidu provide direct exposure to AI infrastructure at historically lower valuations. Additionally, Dell Technologies and Hewlett Packard Enterprise offer customizable AI servers at more attractive multiples to forward-year earnings and trailing-twelve-month sales compared to Nvidia.
While Nvidia has emerged as a prominent player in the AI revolution, potential investors should exercise caution due to various risks and challenges the company faces. Margin cannibalization, growing competition, internal competition from top customers, regulatory restrictions, lack of insider buying, historical patterns of early stage bubbles, lack of concrete game plans, recession indicators, questionable valuation, and more attractive investment options in the AI space are all factors that contribute to the need for careful consideration when evaluating Nvidia as an investment opportunity. As with any investment, thorough research and analysis are essential to make informed decisions.
Forget Nvidia: Billionaire Investors Are Selling It and Buying These 2 Artificial Intelligence (AI) Stocks Instead
Baidu and Amazon emerge as top choices for billionaire investors in the AI space
Wall Street is always buzzing with the next big trend or innovation that captures the attention of investors. Currently, the spotlight is on artificial intelligence (AI), a technology that has the potential to revolutionize various industries. AI involves the use of software and systems to perform tasks that are typically handled by humans.
The incorporation of machine learning allows AI systems to evolve and become more efficient over time.
A recent report by PwC predicts that AI will contribute $15.7 trillion to the global economy by the end of the decade. This includes a $6.6 trillion increase in productivity and a $9.1 trillion boost from consumption-side effects.
Billionaire money managers are selling Nvidia
Despite being considered the face of the AI movement, semiconductor stock Nvidia (NASDAQ: NVDA) is facing a sell-off from billionaire investors. Form 13F filings reveal that eight prominent billionaires, including Israel Englander of Millennium Management and Jeff Yass of Susquehanna International, have been reducing their holdings in Nvidia.
Nvidia's A100 and H100 graphics processing units (GPUs) are widely used in high-compute data centers, making it a crucial player in the AI infrastructure. However, the company faces growing competition from Intel and Advanced Micro Devices, as well as potential internal competition from its own top customers, such as Microsoft and Amazon, who are developing their own AI chips.
Regulatory headwinds, including export restrictions to China, pose additional challenges for Nvidia. These restrictions could significantly impact the company's sales in a key market.
Furthermore, Nvidia's valuation has reached unprecedented levels, comparable to the dot-com bubble. This has raised concerns among investors, leading to the sell-off.
Baidu emerges as a top AI stock for billionaire investors
China-based Baidu (NASDAQ: BIDU) has attracted the attention of billionaire investors as an alternative to Nvidia. During the fourth quarter, eight billionaires, including Jeff Yass of Susquehanna International and Israel Englander of Millennium Management, increased their positions in Baidu.
Baidu's utilization of AI can be seen in its cloud and intelligent driving segments. The company's AI Cloud allows merchants to personalize ads using generative AI solutions. Additionally, Baidu's Apollo Go autonomous ride-hailing service has accumulated over 5 million rides on public roads, showcasing the company's AI capabilities.
What sets Baidu apart from Nvidia is its foundational operating segment, its dominant internet search engine. Baidu has consistently held a significant share of China's internet search market, providing a strong source of ad-pricing power in various economic climates.
Moreover, Baidu's valuation is considerably lower than that of Nvidia, with a forward-year price-to-earnings ratio of 8 and a trailing-12-month price-to-sales ratio of less than 2. This attractive valuation makes Baidu an appealing choice for billionaire investors.
Amazon is another AI stock favored by billionaire investors
Amazon, one of the "Magnificent Seven" components, has also caught the attention of billionaire investors as an AI stock. During the fourth quarter, eight successful billionaires, including Ken Griffin of Citadel Advisors and Jim Simons of Renaissance Technologies, increased their positions in Amazon.
Amazon incorporates AI solutions across its various operations. The company uses generative AI within its cloud infrastructure services platform, Amazon Web Services, to personalize advertising and improve the quality of product listings for merchants.
While Amazon is widely known for its e-commerce platform, the majority of its cash flow comes from ancillary operations such as AWS, subscription services, and advertising services. AWS, in particular, is a major driver of Amazon's cash flow, accounting for two-thirds of its operating income.
From a valuation standpoint, Amazon is historically cheap compared to Nvidia. With a price-to-cash-flow ratio of less than 13, Amazon offers a 44% discount to its average multiple over the past five years.
As the AI revolution continues to unfold, billionaire investors are making strategic moves in the market. While Nvidia has been the face of the AI movement, its valuation, growing competition, and regulatory challenges have prompted billionaire investors to sell their holdings. Instead, they are turning to alternative AI stocks like Baidu and Amazon.
Baidu's strong position in China's internet search market and attractive valuation make it an appealing choice. Meanwhile, Amazon's diverse operations and historically low valuation provide investors with confidence in its long-term prospects.
Ultimately, the AI landscape is constantly evolving, and investors must carefully consider the risks and opportunities associated with each stock. However, the choices made by billionaire investors offer valuable insights into the potential winners in the AI space.
Europe Adopts Groundbreaking Regulation to Govern Artificial Intelligence
The Artificial Intelligence Act aims to protect fundamental rights and establish Europe as a leader in AI technology.
In a landmark decision, the European Union has adopted the world's first binding law on artificial intelligence (AI). The regulation, known as the Artificial Intelligence Act, was endorsed by Members of the European Parliament (MEPs) with an overwhelming majority. This groundbreaking legislation aims to safeguard fundamental rights, democracy, the rule of law, and environmental sustainability from the potential risks posed by high-risk AI systems.
At the same time, it seeks to foster innovation and position Europe as a frontrunner in the field of AI technology.
The new regulation establishes clear obligations for AI systems based on their potential risks and level of impact. It also includes a comprehensive list of banned AI applications that threaten citizens' rights. Additionally, the law provides provisions for law enforcement exemptions, transparency requirements, and measures to support innovation and small and medium-sized enterprises (SMEs).
Banned Applications
The Artificial Intelligence Act prohibits certain AI applications that pose a threat to citizens' rights. These include the use of biometric categorization systems based on sensitive characteristics and the untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. The regulation also forbids the use of AI for emotion recognition in the workplace and schools, social scoring, predictive policing based solely on profiling or assessing characteristics, and AI that manipulates human behavior or exploits vulnerabilities.
Law Enforcement Exemptions
The use of biometric identification systems by law enforcement is generally prohibited under the Artificial Intelligence Act. However, there are narrowly defined situations in which the use of "real-time" biometric identification (RBI) systems is allowed, provided strict safeguards are met. These safeguards include limitations on the time and geographic scope of usage and the requirement for specific prior judicial or administrative authorization.
Post-facto use of RBI systems, known as "post-remote RBI," is considered a high-risk use case and requires judicial authorization linked to a criminal offense.
Obligations for High-Risk Systems
The regulation imposes clear obligations on high-risk AI systems due to their potential harm to health, safety, fundamental rights, environment, democracy, and the rule of law. Examples of high-risk AI applications include critical infrastructure, education and vocational training, employment, essential private and public services such as healthcare and banking, certain law enforcement systems, migration and border management, and justice and democratic processes. These systems must assess and mitigate risks, maintain use logs, ensure transparency and accuracy, and incorporate human oversight.
Citizens will also have the right to submit complaints about AI systems and receive explanations regarding decisions made by high-risk AI systems that affect their rights.
Transparency Requirements
The Artificial Intelligence Act introduces transparency requirements for general-purpose AI (GPAI) systems and the models they are based on. These requirements include compliance with EU copyright law and the publication of detailed summaries of the content used for training. More powerful GPAI models that could pose systemic risks will face additional obligations, including model evaluations, assessment and mitigation of systemic risks, and reporting on incidents.
The regulation also mandates clear labeling of artificial or manipulated images, audio, or video content, commonly known as "deepfakes."
Measures to Support Innovation and SMEs
To foster innovation and support small businesses, the Artificial Intelligence Act calls for the establishment of regulatory sandboxes and real-world testing at the national level. These initiatives will provide SMEs and start-ups with the opportunity to develop and train innovative AI systems before bringing them to market.
The adoption of the Artificial Intelligence Act marks a significant milestone in the regulation of AI technology. It reflects the European Union's commitment to protecting fundamental rights, fostering innovation, and ensuring the responsible and ethical use of AI. The law not only prohibits AI applications that threaten citizens' rights but also establishes clear obligations for high-risk AI systems.
By implementing this groundbreaking regulation, Europe aims to lead the way in AI governance and set an example for other regions around the world.
While the AI Act provides a solid foundation, there is still much work to be done in putting the law into practice. AI technology will continue to shape our societies, requiring us to rethink various aspects of our democracies, education models, labor markets, and even warfare. The European Union must now focus on effectively implementing the AI Act and addressing the broader implications of AI in our society.
US Equity Investors Monitor AI Trade and Inflation Data
Nvidia and Inflation Figures in Focus
This week, US equity investors will be closely monitoring the performance of the so-called AI trade, with particular attention on tech giant Nvidia. Additionally, market participants will be looking to consumer and wholesale price inflation data to determine whether the higher-than-expected figures from January were merely a blip.
Nvidia's Rollercoaster Ride Raises Concerns
Nvidia, a key player in the artificial intelligence (AI) sector, experienced a significant jump of 6.4% in its stock price last week. However, on Friday, the stock slumped 5.5% without any apparent negative catalyst. This sudden sell-off in Nvidia, which has been a major contributor to the recent rally in the Nasdaq Composite, has raised concerns that the company's shares may be on the verge of giving up some of their substantial gains since November.
As of pre-market trading on Monday, Nvidia's shares were up 2%.
Tech Giants Face Declines
Another mega-cap company that has been a significant driver of stock market returns, Tesla, saw its stock price slump by almost 14% last week. Other notable decliners in the technology and communications space included Apple, Alphabet, Amazon.com, and Microsoft.
Utilities Outperform, Tech Stocks Decline
Last week, the utilities sector emerged as the top performer. The steepest declines came in consumer cyclical stocks, which include companies like Tesla and Amazon; communication services, home to the likes of Alphabet; and the technology sector, where Nvidia resides.
Inflation Data Raises Questions
In January, inflation figures surprised the market by showing higher-than-expected consumer and producer price inflation. This dampened market expectations of an early interest rate cut by the Federal Reserve, potentially as early as March. This week, investors will be closely watching the release of February's consumer price index on Tuesday and the producer price index on Thursday to gain further insight into the inflation trend.
Core CPI Reading to Inform Monetary Policy Debate
Derek Holt, head of capital markets economics at Scotiabank, raised the question of whether January's surge in inflation was merely a temporary occurrence. The core Consumer Price Index (CPI) reading for February, set to be released on Tuesday, will play a crucial role in shaping the debate. This will be the last CPI print before the Federal Open Market Committee (FOMC) statement on March 20th, along with the accompanying Summary of Economic Projections and a refreshed dot plot.
This week, US equity investors will closely monitor the performance of the AI trade, with a particular focus on Nvidia. The recent rollercoaster ride of Nvidia's stock price has raised concerns about the sustainability of its gains. Additionally, market participants will analyze consumer and wholesale price inflation data to determine if the higher figures from January were an anomaly or a sign of a more persistent trend.
The outcome of these events will provide valuable insights into the future direction of the markets and potentially impact monetary policy decisions.
Former Google Consultant Says Backlash to Gemini AI Resulted from Rushed Launch
Joe Toscano suggests transparency measures for Big Tech
A former Google consultant, Joe Toscano, has attributed the backlash to the company's Gemini artificial intelligence (AI) to a rushed launch. Toscano believes that Google's attempt to build an AI capable of answering almost any question in multiple languages was an "incredible feat" but suffered from improper training. He suggests that the technology industry, particularly Silicon Valley, should implement measures to offer transparency to the public.
In this article, we explore Toscano's insights and his proposed solutions for ensuring accountability in the AI sector.
Rushed Launch and Improper Training
Toscano argues that Google's Gemini AI was launched prematurely, resulting in eyebrow-raising responses and historically inaccurate images. The company has since apologized and paused the image generation feature for fixes. Toscano believes that Google's focus on other avenues of machine learning and the pressure from shareholders to release generative AI products contributed to the rushed launch.
He suggests that Google should have taken more time to develop the AI and release it when it was ready.
The Need for Transparency
Toscano emphasizes the importance of transparency in the AI industry. He suggests that companies should document the decisions made in their AI processes, the individuals involved, and the training data used. While acknowledging that it may be challenging to fully understand the inner workings of complex machine learning systems, Toscano believes that having a basis for accountability is crucial.
He proposes the implementation of "algorithmic audits" conducted by independent professionals to review tech processes, similar to financial statement audits in other industries.
Ensuring Independence and Avoiding Regulatory Capture
Toscano highlights the need for specialized knowledge professionals to conduct algorithmic audits. These professionals should be independent of the organizations they are reviewing to avoid regulatory capture. He suggests implementing controls to prevent companies from seeking favorable ratings by paying auditors.
Toscano envisions intermittent audit systems similar to drug checks in professional sports, which can happen at any time and ensure compliance and safety.
Archiving Information and Protecting Truth
Toscano expresses concern about how companies and governments archive information. He warns against the manipulation of AI technology by politicians to push narratives and the potential loss of digital information due to websites being taken down. Toscano believes that physical materials, such as paper, could become more valuable for assessing truth versus fiction in historical documentation.
He emphasizes the importance of creating paper backups to ensure continuity in case of internet outages or the deliberate removal of digital information.
The Invisible Hand Controlling Information
Toscano raises the issue of information control in the digital age. He argues that deleting archives and controlling information have become the modern form of warfare. While visible conflicts dominate the news, the invisible hand controlling the information ecosystem shapes the narrative of societies.
Toscano warns that as countries engage in cyberwar, the implications of AI and digital information extend from the macro level of democracy to the micro level of everyday conversations.
Joe Toscano's insights shed light on the challenges faced by tech companies like Google in the development and deployment of AI. The rushed launch of Google's Gemini AI resulted in public backlash, highlighting the need for transparency and accountability in the industry. Toscano's proposed solutions, including algorithmic audits and documentation of AI processes, aim to ensure that companies take responsibility for their technology and its outcomes.
Additionally, he raises concerns about the archiving of information and the potential manipulation of narratives in the digital age. As the world becomes increasingly reliant on AI, addressing these issues is crucial to maintaining trust and safeguarding the integrity of information.
The Rise of AI in Biological Discovery
Unveiling the Secrets of Norn Cells
In the vast realm of scientific discovery, the human body has always held a certain mystique. From ancient times to modern medicine, researchers have tirelessly sought to uncover its hidden secrets. One such enigma lay within the realm of red blood cell production, a process that had long fascinated scientists.
It wasn't until the late 20th century that the hormone erythropoietin was discovered, shedding light on the body's ability to produce these crucial cells on demand. However, it was the recent emergence of artificial intelligence (AI) that rapidly propelled our understanding of the human body to new heights. In a groundbreaking discovery, computers in California autonomously uncovered the existence of Norn cells, a rare kidney cell responsible for erythropoietin production, in a fraction of the time it took humans.
This remarkable feat showcases the immense potential of AI in biological discovery.
A Journey of Centuries: Unraveling the Mystery of Erythropoietin
In 1889, Dr. Francois-Gilbert Viault embarked on a journey that would forever change our understanding of the human body. Through his meticulous examination of his own blood, he stumbled upon the remarkable ability of the body to increase red blood cell production when needed. This discovery set the stage for decades of scientific inquiry, with researchers theorizing the existence of a hormone responsible for this phenomenon.
It wasn't until the 1970s that erythropoietin, aptly named the "red maker" in Greek, was finally isolated from human urine after years of tireless efforts.
The Quest for Norn Cells: A Twist of Fate
For over half a century, the search for the elusive Norn cells remained an unsolved puzzle. Named after the Norse deities who controlled human fate, these cells were believed to play a crucial role in erythropoietin production. It wasn't until a group of biologists in Israel made a groundbreaking announcement that the existence of Norn cells was confirmed.
These rare kidney cells were found to be activated when oxygen levels in the body dropped too low, triggering the production of erythropoietin. The discovery of Norn cells shed light on a previously unknown mechanism within the body and opened up new avenues for research.
The Rise of AI: Unveiling the Secrets of Norn Cells
In a stunning display of technological prowess, researchers at Stanford harnessed the power of artificial intelligence to unlock the mystery of Norn cells. By programming computers to teach themselves biology, the researchers created an AI system akin to ChatGPT, a language bot that achieved fluency through exposure to vast amounts of text. However, instead of language, the computers were trained on raw data about millions of real cells and their biochemical and genetic makeup.
Crucially, the researchers did not provide any preconceived notions or definitions to the computers. They were left to decipher the patterns and relationships within the data on their own.
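The Stanford system's internals are not described in detail here. As a generic illustration of label-free pattern discovery, the sketch below clusters synthetic "cells" by expression profile and lets a rare group emerge without any predefined cell types; the data and cluster count are assumptions for illustration only.

```python
# Generic illustration of label-free discovery: cluster cells by gene-expression
# profile with no cell-type definitions supplied. Synthetic data stands in for
# the millions of real cell profiles described above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
cells = np.vstack([
    rng.normal(0.0, 1.0, size=(500, 50)),   # a common population (50 genes each)
    rng.normal(3.0, 1.0, size=(20, 50)),    # a rare population, a Norn-like stand-in
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(cells)
print("cells per discovered group:", np.bincount(labels))
```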
AI's Swift Triumph: Uncovering Norn Cells in Record Time
In a remarkable feat of computational prowess, the AI-powered computers at Stanford uncovered the existence of Norn cells within a mere six weeks. This lightning-fast discovery surpassed the efforts of human researchers spanning over a century. The AI system autonomously identified the unique characteristics and functions of Norn cells, unraveling their role in erythropoietin production.
This breakthrough not only validated the existence of Norn cells but also showcased the immense potential of AI in accelerating biological discovery.
The discovery of Norn cells by AI-powered computers marks a significant milestone in the field of biological research. As we continue to delve deeper into the intricacies of the human body, AI offers an unprecedented tool for uncovering hidden patterns and unveiling novel insights. The rapidity with which the computers at Stanford unraveled the secrets of Norn cells highlights the immense potential of AI in revolutionizing scientific discovery.
As we stand on the precipice of a new era, it is clear that the partnership between human intellect and artificial intelligence will propel us to unprecedented heights of knowledge and understanding.
The Intersection of AI and Nuclear Power: Canada's Opportunity to Lead
The Growing Energy Demand of the AI Revolution
The rise of artificial intelligence (AI) has sparked a global race to harness its power, but one crucial question remains: where will AI get its energy? As AI applications become more pervasive and complex, the demand for energy to support these systems is poised to escalate dramatically. This presents an opportunity for Canada to emerge as a global leader in exporting clean, reliable nuclear power technologies and uranium.
With the ability to provide scalable and reliable baseload energy, nuclear power, especially small modular reactors (SMRs), can meet the energy needs of the AI sector while addressing decarbonization.
The Global Competition for Data Centers
AI algorithms, particularly those powering deep learning models, require extensive computational resources, often necessitating the use of energy-intensive hardware such as graphics processing units and data centers. The demand for energy to support these systems is expected to increase exponentially as AI becomes more pervasive and complex. Data centers alone already consume about 1 percent of global electricity, with cryptocurrency mining consuming an additional 0.4 percent.
The projected growth in AI servers suggests that they could consume as much as 134 terawatt hours annually by 2027, equivalent to the energy consumption of countries such as Argentina, the Netherlands, and Sweden. This exponential growth in energy demand calls for a rethinking of global energy strategies and the need for reliable power sources.
Canada's Monumental Opportunity
Canada has a unique opportunity to position itself as a global leader in exporting clean, reliable nuclear power technologies and uranium. Nuclear power is the most reliable and scalable clean energy source available, and it faces far fewer siting constraints than the alternatives. Unlike hydroelectric plants, wind turbines, and solar panels, nuclear plants, especially SMRs, can be deployed almost anywhere and provide dependable baseload energy.
Smaller countries that have not yet embraced nuclear power may turn to Canada for assistance in making the leap. The adoption of Canadian-made SMRs by these countries could represent a step change in energy capacity, enabling the AI revolution on a global scale.
The Role of Nuclear Energy in the AI Puzzle
Low-carbon nuclear energy emerges as a critical piece of the AI puzzle. It offers a path to meet the growing energy needs of the AI sector while addressing decarbonization. Governments globally need to ramp up their investments in nuclear energy and uranium mining, with a particular focus on Canada, which combines tremendous uranium assets in Saskatchewan with deep nuclear expertise.
Major corporations and industry leaders, such as Microsoft, are also exploring the deployment of SMRs to power their data centers. By investing in nuclear power and uranium mining, governments and institutional investors not only support the growth of AI but also ensure that its foundation is built on low-carbon energy.
Fostering a Future for AI
Investing in nuclear power and uranium mining is not just an energy strategy; it signifies a commitment to fostering a future in which AI can flourish. As AI continues to reshape economies and societies, it is imperative to ensure that its growth is sustainable and environmentally friendly. Governments and institutional investors must not only invest in AI technology but also in the infrastructure that sustains it.
By scaling up electricity grids and supporting the expansion of data centers, governments can help small and mid-sized companies stay competitive in the AI landscape. Ontario's clean grid, primarily fueled by nuclear energy, has already attracted electric vehicle manufacturers. Other regions should follow suit and replicate this success.
The AI revolution is driving an exponential increase in energy demand, necessitating a global rethinking of energy strategies. As AI becomes more pervasive and complex, the need for reliable power sources becomes paramount. Canada has a monumental opportunity to emerge as a global leader in exporting clean, reliable nuclear power technologies and uranium.
Nuclear power, especially small modular reactors, offers a scalable and low-carbon solution to meet the energy needs of the AI sector. By investing in nuclear power and uranium mining, governments and institutional investors not only support the growth of AI but also foster a future in which AI can flourish sustainably.
The Patriarchal Imagination and the Fear of AI
How Science Fiction Shapes Our Perception of Artificial Intelligence
Science fiction has always been a source of inspiration for technological innovation. From mobile phones to bionic limbs, many inventions were first imagined in the pages of science fiction novels or seen on the big screen before becoming a reality. Artificial intelligence (AI) is no exception.
The concepts and ideas surrounding AI have been heavily influenced by science fiction, shaping our understanding of what it is and what it could become.
In a recent conversation between Rishi Sunak and Elon Musk, the discussion quickly turned to science fiction. Musk expressed his fear of killer robots, while Sunak found solace in the fact that these fictional machines often had an "off-switch" to prevent them from causing harm.
One story that has gained traction in the AI debate is the philosopher Nick Bostrom's thought experiment about an AI tasked with making paper clips, which eventually decides to destroy humanity and cover the Earth with paper clips. The story captures the fear that AI could pursue its assigned goal to the detriment of humanity.
However, these fears may be projections of our own anxieties. Male billionaires like Musk often assume that superintelligent machines would act as they themselves would, seeking to conquer and dominate. This assumption overlooks the complexity and potential of AI, reducing it to a binary choice between submissive servant and ruthless terminator.
The Dichotomy of "Boomers" and "Doomers"
The current debate on AI regulation is often divided between those who embrace the benefits of advanced AI and those who fear its potential dangers. This ideological battle between "boomers" and "doomers" fails to capture the nuanced risks and opportunities associated with AI.
While most experts agree that AI comes with certain risks, such as privacy violations and biased decision-making systems, few argue for its complete shutdown. The focus should be on developing enforceable regulations to mitigate these risks rather than engaging in a futile battle between extremes.
The Patriarchal Imagination and Fear of Technology
The patriarchal imagination has long harbored a fear of technology, viewing it as inherently violent and driven by a desire for domination. This fear is rooted in a historical narrative that equates technological progress with the conquest of the world.
However, this perspective overlooks the neutral nature of technology and its potential to change our relationship with other life forms on Earth. Female science fiction authors have often presented a more nuanced view of technology, viewing it as a tool for human interaction with the material world rather than a means of domination.
AI as an Opportunity for Change
AI has the potential to redefine our relationship with intelligence and consciousness. Rather than being a step towards human-like intelligence, AI may possess a form of intelligence entirely its own. This opens up opportunities to explore new ways of relating to other intelligent life forms on Earth.
However, the patriarchal imagination limits our understanding of AI, reducing it to a zero-sum competition with humanity. This fear of AI "replacing us" hinders our ability to appreciate its unique capabilities and potential benefits.
Misunderstandings and Misrepresentations
Popular culture often distorts the portrayal of AI, reinforcing the patriarchal imagination's fears of isolation, violence, and domination. Even stories like The Matrix, which challenge these notions, are often misinterpreted and misrepresented.
By perpetuating these misconceptions, we risk limiting our ability to regulate AI effectively and make informed decisions about its development and application.
The fear of AI is deeply rooted in the patriarchal imagination's view of technology as a tool for domination and violence. This limited perspective overlooks the potential for AI to reshape our relationship with the world and other intelligent life forms.
It is crucial to move beyond the dichotomy of "boomers" and "doomers" and engage in a nuanced discussion about the risks and opportunities associated with AI. By challenging our preconceived ideas and embracing diverse perspectives, we can shape a future where AI is seen as a force for positive change rather than a threat to humanity.
The Fragmented Future of AI Regulation: A World Divided
The Battle for Global AI Governance
In November 2023, China, the United States, and the European Union surprised the world by signing a joint communiqué, pledging strong international cooperation in addressing the challenges posed by artificial intelligence (AI). The document highlighted the dangers of "frontier" AI, exemplified by advanced generative models like ChatGPT, including the potential for disinformation and serious cybersecurity and biotechnology risks. This signaled a growing consensus among major powers on the need for regulation.
However, despite the rhetoric, the reality on the ground suggests a future of fragmentation and competition rather than cooperation.
As multinational communiqués and bilateral talks take place, an international framework for regulating AI seems to be taking shape. But a closer look at recent executive orders, legislation, and regulations in the United States, China, and the EU reveals divergent approaches and conflicting interests. This divergence in legal regimes will hinder cooperation on critical aspects such as access to semiconductors, technical standards, and the regulation of data and algorithms.
The result is a fragmented landscape of warring regulatory blocs, undermining the lofty goal of harnessing AI for the common good.
Cold Reality vs. Ambitious Plans
While optimists propose closer international management of AI through the creation of an international panel similar to the UN's Intergovernmental Panel on Climate Change, the reality is far from ideal. The great powers may publicly express their desire for cooperation, but their actions tell a different story. The emergence of divergent legal regimes and conflicting interests points to a future of fragmentation and competition rather than unified global governance.
The Chip War: A High-Stakes Battle
The ongoing duel between China and the United States over global semiconductor markets is a prime example of conflict in the AI landscape. Export controls on advanced chips and chip-making technology have become a battleground, with both countries imposing restrictions. This competition erodes free trade, sets destabilizing precedents in international trade law, and fuels geopolitical tensions.
The chip war is just one aspect of the broader contest over AI's necessary components, which extends to technical standards and data regulation.
Technical Standards: A Divided Landscape
Technical standards play a crucial role in enabling the use and interoperability of major technologies. The proliferation of AI has heightened the importance of standards to ensure compatibility and market access. Currently, bodies such as the International Telecommunication Union and the International Organization for Standardization negotiate these standards.
However, China's growing influence in these bodies, coupled with its efforts to promote its own standards through initiatives like the Belt and Road Initiative, is challenging the dominance of the United States and Europe. This divergence in standards will impede the diffusion of new AI tools and hinder global solutions to shared challenges.
Data: The Currency of AI
Data is the lifeblood of AI, and access to different types of data has become a competitive battleground. Conflict over data flows and data localization is shaping how data moves across national borders. The United States, once a proponent of free data flows, is now moving in the opposite direction, while China and India have enacted domestic legislation mandating data localization.
This divergence in data regulation will impede the development of global solutions and exacerbate geopolitical tensions.
Algorithmic Transparency: A Contested Terrain
The disclosure of algorithms that underlie AI systems is another area of contention. Different countries have varying approaches to regulating algorithmic transparency, with the EU's proposed AI Act requiring firms to provide government agencies access to certain models, while the United States has a more complex and inconsistent approach. As countries seek to regulate algorithms, they are likely to prohibit firms from sharing this information with other governments, further fragmenting the regulatory landscape.
The vision of a unified global governance regime for AI is being undermined by geopolitical realities. The emerging legal order is characterized by fragmentation, competition, and suspicion among major powers. This fragmentation poses risks, allowing dangerous AI models to be developed and disseminated as instruments of geopolitical conflict.
It also hampers the ability to gather information, assess risks, and develop global solutions. Without a collective effort to regulate AI, the world risks losing the potential benefits of this transformative technology and succumbing to the pitfalls of a divided landscape.
The Illusion of AI: Unveiling the Magic Behind Generative Artificial Intelligence
Dr. Jonathan Chen Explores the Risks and Realities of AI in Healthcare
Dr. Jonathan Chen, an assistant professor at the Stanford Center for Biomedical Informatics Research, captivated the audience at the HIMSS24 AI in Healthcare Forum with a thought-provoking performance. Drawing inspiration from Arthur C. Clarke's famous quote, "Any sufficiently advanced technology is indistinguishable from magic," Chen highlighted the growing complexity of artificial intelligence (AI) and the challenges it poses in discerning reality from illusion.
Chen emphasized the need for transparency and a clear understanding of the potential risks associated with AI, particularly generative AI. Through engaging magic tricks and demonstrations, he aimed to shed light on the intricate process behind creating illusions and the invisible forces at play in the age of generative AI.
The Invisible Forces of Generative AI
Chen showcased a video of an AI-generated speaker who closely resembled him, delivering remarks that were themselves entirely generated by AI systems. The demonstration underscored the emergence of disruptive technologies that could reshape healthcare while posing serious ethical challenges.
The Real and Concrete Success Stories
While hospitals and health systems are experiencing concrete success stories with AI in various healthcare-focused use cases, Chen acknowledged the need for caution. He humorously expressed his hope that AI systems could help manage the overwhelming flood of emails and messages he receives daily.
Pandora's Box: The Good and the Bad
Chen emphasized that the Pandora's box of AI has been opened, offering both practical uses that can save lives and dangerous applications that can spread misinformation. He recounted a conversation with medical trainees who were using ChatGPT, an AI language model, during ICU rounds. Chen cautioned against using AI as a medical reference without a clear understanding of its limitations.
The Evolution of Generative AI
Chen discussed the evolution of large language models (LLMs), which have grown from simple autocomplete tools into systems with emergent properties that surprise even experts in the field. These models can now answer questions, summarize information, translate languages, and even produce what looks like original ideas, reasoning, and a theory of mind. Chen highlighted the illusion of intelligence created by AI's ability to manipulate words at a rapid pace.
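To make the autocomplete point concrete, here is a minimal sketch using the small open-source GPT-2 model via the Hugging Face transformers library. This is an illustrative stand-in, not the system Chen demonstrated; the frontier models he discussed are vastly larger, but the underlying mechanism, iterated next-token prediction, is the same.

```python
# Minimal illustration of the "autocomplete" core of generative language
# models: text is produced one next-token prediction at a time.
# GPT-2 here is a small open-source stand-in; the frontier models discussed
# above are vastly larger, but the underlying mechanism is the same.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The patient presented with chest pain and"
result = generator(prompt, max_new_tokens=20, do_sample=False)
print(result[0]["generated_text"])

# The continuation reads fluently because it is statistical pattern
# completion, not understanding -- which is exactly why confident-sounding
# output can still be wrong.
```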
The Danger of Confabulation
Chen warned healthcare professionals about the dangers of confabulation, where AI systems string together words in a believable sequence without underlying meaning. He compared it to working with a medical student who made up facts during rounds, emphasizing the potential harm to patient care.
Blurring the Lines Between Reality and Fabrication
Chen emphasized that we are approaching a point in history where it is increasingly difficult to distinguish between human-generated and computer-generated content. He cited a study where chatbot-generated answers scored higher in quality and empathy compared to answers from real doctors, challenging the belief that humans have a monopoly on empathy and therapeutic relationships.
The Future of AI in Healthcare
Chen predicted that more people will receive therapy and counseling from automated robots than from human beings due to the overwhelming demand and limited supply of human-driven healthcare. However, he emphasized that humans will always play a central role in healthcare, and a good doctor cannot be replaced by a computer. Instead, he encouraged healthcare professionals to embrace AI as a tool that can alleviate the burden of administrative tasks and improve patient care.
Dr. Jonathan Chen's captivating performance at the HIMSS24 AI in Healthcare Forum shed light on the illusion of AI and the risks associated with generative AI. As healthcare continues to integrate AI technologies, transparency, understanding, and caution are crucial. While AI systems offer immense potential, they must be used responsibly and with a clear-eyed understanding of their limitations.
As Chen eloquently stated, "While AI is not going to replace doctors, those who learn how to use AI may very well replace those who do not."