#ai safety summit
jcmarchi · 7 months
Text
UK and France to collaborate on AI following Horizon membership
New Post has been published on https://thedigitalinsider.com/uk-and-france-to-collaborate-on-ai-following-horizon-membership/
The UK and France have announced new funding initiatives and partnerships aimed at advancing global AI safety. The developments come in the wake of the UK’s association with Horizon Europe, a move broadly seen as putting the divisions of Brexit in the past and repairing relations for the good of the continent.
French Minister for Higher Education and Research, Sylvie Retailleau, is scheduled to meet with UK Secretary of State Michelle Donelan in London today for discussions marking a pivotal moment in bilateral scientific cooperation.
Building upon a rich history of collaboration that has yielded groundbreaking innovations such as the Concorde and the Channel Tunnel, the ministers will endorse a joint declaration aimed at deepening research ties between the two nations. This includes a commitment of £800,000 in new funding towards joint research efforts, particularly within the framework of Horizon Europe.
A landmark partnership between the UK’s AI Safety Institute and France’s Inria will also be unveiled, signifying a shared commitment to the responsible development of AI technology. This collaboration is timely, given France’s upcoming hosting of the AI Safety Summit later this year—which aims to build upon previous agreements and discussions on frontier AI testing achieved during the UK edition last year.
Furthermore, the establishment of the French-British joint committee on Science, Technology, and Innovation represents an opportunity to foster cooperation across a range of fields, including low-carbon hydrogen, space observation, AI, and research security.
UK Secretary of State Michelle Donelan said:
“The links between the UK and France’s brightest minds are deep and longstanding, from breakthroughs in aerospace to tackling climate change. It is only right that we support our innovators, to unleash the power of their ideas to create jobs and grow businesses in concert with our closest neighbour on the continent.
Research is fundamentally collaborative, and alongside our bespoke deal on Horizon Europe, this deepening partnership with France – along with our joint work on AI safety – is another key step in realising the UK’s science superpower ambitions.”
The collaboration between the UK and France underscores their shared commitment to advancing scientific research and innovation, with a focus on emerging technologies such as AI and quantum.
Sylvie Retailleau, French Minister of Higher Education and Research, commented:
“This joint committee is a perfect illustration of the international component of research – from identifying key priorities such as hydrogen, AI, space and research security – to enabling collaborative work and exchange of ideas and good practices through funding.
Doing so with a trusted partner as the UK – who just associated to Horizon Europe – is a great opportunity to strengthen France’s science capabilities abroad, and participate in Europe’s strategic autonomy openness.”
As the UK continues to deepen its engagement with global partners in the field of science and technology, these bilateral agreements serve as a testament to its ambition to lead the way in scientific discovery and innovation on the world stage.
(Photo by Aleks Marinkovic on Unsplash)
See also: UK Home Secretary sounds alarm over deepfakes ahead of elections
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai safety summit, artificial intelligence, europe, france, government, horizon europe, research, safety, uk
2 notes · View notes
gamesatwork · 11 months
Text
e439 — Nuts About Sound
AI topics on safety, transparency & generated content, VR & MR/XR stories on making music with virtual instruments, cracking walnuts & bitcoin wallets & more!
Photo by Raffaele Ravaioli on Unsplash. Published 6 November 2023. Michael and Michael get together for a lively discussion on AI, VR, Mixed Reality, a Bluetooth in a nutshell, and a locked bitcoin wallet, among other topics. The co-hosts start off the episode with the #ProjectPrimrose video of the interactive dress. This is right in line with the stories from the October podcasts. Then they…
0 notes
era-news · 11 months
Text
World Leaders Agree on Artificial Intelligence Risks
World leaders at a safety summit have agreed on the importance of mitigating risks posed by rapid advancements in the emerging technology of artificial intelligence. The inaugural two-day AI Safety Summit, hosted by British Prime Minister Rishi Sunak in Bletchley Park, England, started Wednesday with leaders from 28 nations, including the United States and China. The leaders agreed to work toward…
0 notes
michellesanches · 11 months
Text
The Bletchley Declaration on AI Safety: A Summary
Hurrah! Finally a concerted effort by both the private and public sectors to tackle the Artificial Intelligence (AI) regulation question. With much excitement and hot on the heels of Biden’s AI Safety Executive Order, the first global AI Safety Summit, currently taking place, looks to create an AI safety framework, recognising the need to move away from the current “self-regulation” system.…
0 notes
biglisbonnews · 1 year
Photo
UK Government Details AI Safety Summit Ambitions
Government announces key objectives for the global AI Safety Summit in November, being held at Bletchley Park.
https://www.silicon.co.uk/e-innovation/artificial-intelligence/uk-government-details-ai-safety-summit-ambitions-527829
0 notes
Text
Tackling the threat from artificially generated images of child sex abuse must be a priority at the UK-hosted global AI summit this year, an internet safety organisation warned as it published its first data on the subject.
Such “astoundingly realistic images” pose a risk of normalising child sex abuse and tracking them to identify whether they are genuine or artificially created could also distract from helping real victims, the Internet Watch Foundation (IWF) said.
The organisation – which works to identify and remove online images and videos of child abuse – said while the number of AI images being identified is still small “the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery”.
Of 29 URLs (web addresses) containing suspected AI-generated child sexual abuse imagery reported to the IWF between May 24 and June 30, seven were confirmed to contain AI-generated imagery.
This is the first data on AI-generated child sexual abuse imagery the IWF has published.
It said it could not immediately give locations for which countries the URLs were hosted in, but that the images contained Category A and B material – some of the most severe kinds of sexual abuse – with children as young as three years old depicted.
Its analysts also discovered an online “manual” written by offenders with the aim of helping other criminals train the AI and refine their prompts to return more realistic results.
The organisation said such imagery – despite not featuring real children – is not a victimless crime, warning that it can normalise the sexual abuse of children, and make it harder to spot when real children might be in danger.
Last month, Rishi Sunak announced the first global summit on artificial intelligence (AI) safety to be held in the UK in the autumn, focusing on the need for international co-ordinated action to mitigate the risks of the emerging technology generally.
Susie Hargreaves, chief executive of the IWF, said fit-for-purpose legislation needs to be brought in “to get ahead” of the threat posed by the technology’s specific use to create child sex abuse images.
She said: “AI is getting more sophisticated all the time. We are sounding the alarm and saying the Prime Minister needs to treat the serious threat it poses as the top priority when he hosts the first global AI summit later this year.
“We are not currently seeing these images in huge numbers, but it is clear to us the potential exists for criminals to produce unprecedented quantities of life-like child sexual abuse imagery.
“This would be potentially devastating for internet safety and for the safety of children online.
“Offenders are now using AI image generators to produce sometimes astoundingly realistic images of children suffering sexual abuse.
“For members of the public – some of this material would be utterly indistinguishable from a real image of a child being sexually abused. Having more of this material online makes the internet a more dangerous place.”
She said the continued abuse of this technology “could have profoundly dark consequences – and could see more and more people exposed to this harmful content”.
She added: “Depictions of child sexual abuse, even artificial ones, normalise sexual violence against children. We know there is a link between viewing child sexual abuse imagery and going on to commit contact offences against children.”
Dan Sexton, chief technical officer at the IWF, said: “Our worry is that, if AI imagery of child sexual abuse becomes indistinguishable from real imagery, there is a danger that IWF analysts could waste precious time attempting to identify and help law enforcement protect children that do not exist.
“This would mean real victims could fall between the cracks, and opportunities to prevent real life abuse could be missed.”
He added that the machine learning to create the images, in some cases, has been trained on data sets of real child victims of sexual abuse, therefore “children are still being harmed, and their suffering is being worked into this artificial imagery”.
The National Crime Agency (NCA) said while AI-generated content features only “in a handful of cases”, the risk “is increasing and we are taking it extremely seriously”.
Chris Farrimond, NCA director of threat leadership, said: “The creation or possession of pseudo-images – one created using AI or other technology – is an offence in the UK. As with other such child sexual abuse material viewed and shared online, pseudo-images also play a role in the normalisation and escalation of abuse among offenders.
“There is a very real possibility that if the volume of AI-generated material increases, this could greatly impact on law enforcement resources, increasing the time it takes for us to identify real children in need of protection.”
348 notes · View notes
zvaigzdelasas · 11 months
Text
[JPost is Israeli Private Media]
"We are not going to create any conditions on the support that we are giving Israel to defend itself," [Kamala] Harris told a news conference in Britain following the conclusion of an AI Safety Summit.
3 Nov 23
189 notes · View notes
capitalisticveins · 10 months
Text
SURPRISE D.A.M.N CREW GENERAL HCs ‼️
I don’t like making hcs on my phone but Friendsgiving yesterday rlly motivated me to just do it
— Caelum likes to chew on his shirt when he’s bored
— Dear has 5 umbrellas even though they live alone
— FL has no idea AI art exists
— Dear likes to buy Lasko ties
— Caelum thinks cats are adorable but is absolutely terrified of them
— Gavin is shit at golf, bowling, and basically every sport in existence except for gymnastics, cheerleading, and table tennis
— Huxley can flex his tits
— Lasko buys every fan except Lasko brand fans
— Dear hasn’t played Mario Kart before but when they first played with Lasko they decimated him
— Damien’s favorite kind of animals are the hairless ones
— Caelum can’t hopscotch
— FL can’t be trusted to go grocery shopping for people because they have shit willpower and no common sense they will buy the wrong brand of item you want and buy 3 packs of pizza rolls for themselves with the person’s money
— Huxley can’t jump rope properly because the rope can’t go around his body
— Damien has a schedule of what to wear and when. He wears specific shirts on SPECIFIC days of the week.
— FL has a child safety lock on their computer for Caelum and Gavin
— Everyone has to tell Damien where they’re going whenever they leave their houses
— Dear somehow got everyone’s number before Friendsgiving and asked everyone to point out Lasko’s use of Mahogany/Burnt Sienna on the letters
— Gavin isn’t allowed inside Max’s Rustic Pizza anymore
— If Damien would let him, Huxley would touch lava, like seriously slap it
— FL likes to dress up as Aang for Halloween
— Lasko and Damien are the only ones to own a bidet
— Huxley owns a mermaid dress
— Gavin owns a fur coat
— As a kid, Huxley was too shy to ask his moms to peel his oranges for him so he just sorta ate them with the peel on until he was 11
— Gavin is banned on tiktok
- and twitter
— Caelum’s wings flap like a hummingbird’s
— Huxley is the only member to buy proper sweet snacks. Lasko buys offbrand and Damien doesn’t buy sweets
— Gavin’s favorite cartoon character is Bugs Bunny
— Despite popular belief, Damien is willing to wear an itchy ugly christmas sweater
— Dear owns a border collie
— Lasko writes fanfiction
— Huxley’s luck is fucking amazing when he plays DND, so much so that it pisses off Lasko and now whenever they campaign with others he makes Huxley the dungeon master
— Damien doesn’t know how to skip
— Gavin can make his own alcoholic beverage at will
— FL isn’t from Dahlia. They’re from NY, but have never been in NYC
— Dear has a very strong opinion on Dasani. I don’t know if they strongly dislike it or strongly like it, but they feel very strongly about it.
— Huxley heard the news about the Summit online, saw it was hosted by Vincent and Lovely, recognized Lovely’s name, and went “wait a minute—”.
— Damien has thought about burning his baby pictures when Huxley found them.
— He attempted to do it when Gavin found them.
— FL has a sweet tooth and since Huxley is the only member to buy sweets, they sneak into his house and consume most of it.
After Damien moved in with Huxley he once woke up at like 2am to get a glass of water and saw FL hunched over sucking the frosting off of the mini cupcakes Huxley buys and chugging milk out of the container, their eyes were glowing in the dark and there was a ton of containers on the floor.
Damien went back to sleep without saying a word. When he woke up all traces of FL being in the house were gone and they don’t recall the night ever happening. No one believes him.
The only reference to the night happening is that all the snacks he saw FL eating were gone.
Lasko believes him but FL won’t let him tell Damien he believes him (it has happened to Lasko too and that’s why he buys offbrand).
— Gavin has accidentally killed someone with a rift.
— Caelum too but he doesn’t know.
— Dear is gonna buy Lasko rash ointment for Christmas with no malicious intent whatsoever.
— Damien owns the same amount of shoes as Milo.
— Gavin can’t swim. Gavin’s bad at a lot of things.
— Caelum can swim with water wings. Caelum’s good at a lot of things.
145 notes · View notes
ingek73 · 1 month
Text
Prince Harry hits out at spread of disinformation via AI and social media
Duke speaks at summit on digital responsibility while on visit with Duchess of Sussex to Colombia
Caroline Davies, Fri 16 Aug 2024 12.39 CEST
The Duke of Sussex has hit out at online disinformation during a four-day visit to Colombia, warning: “What happens online within a matter of minutes transfers to the streets.”
Speaking in Bogotá at a summit on digital responsibility, Harry said of the spread of false information via artificial intelligence and social media: “People are acting on information that isn’t true.”
The warning, on the first day of the tour of Colombia by the Duke and Duchess of Sussex, did not name specific social media platforms, but Harry’s comments followed criticism of the tech billionaire and owner of X, Elon Musk, and social media platforms after the far-right riots in the UK.
Addressing experts at the summit, which was staged in part by Harry and Meghan’s Archewell Foundation, Harry said in comments reported by the BBC: “In an ideal world those with positions of influence would take more responsibility. We are no longer debating facts.”
Meghan and Harry were welcomed to Colombia by its vice-president, Francia Márquez. Photograph: Darwin Torres/Colombian Vicepresidency/LongVisual/Zuma Press Wire/Rex/Shutterstock
The couple’s visit is at the invitation of Francia Márquez, Colombia’s vice-president, who told journalists she had been “deeply moved” by the Sussexes’ Netflix docuseries about their lives. “It motivated me to say [of Meghan], ‘this is a woman who deserves to visit our country and share her story’, and undoubtedly, her visit will strengthen so many women around the world,” said Márquez as she welcomed them to Bogotá.
Márquez said she had previously invited Meghan to get involved with a “day of Afro-descendant women” which is commemorated annually on 25 July, but Meghan was unable to make it. “At that time, we sent her an invitation letter, and she responded saying that she couldn’t come but was very eager to visit and get to know our country,” Márquez said.
It has not been confirmed who is funding the trip to Colombia, but the couple have reportedly been given a full security detail, which they no longer receive in the UK after stepping down as working royals in 2020. During the visit, which appears to follow the format of official royal visits, they are expected to spend time in Cartagena and Cali and attend the Petronio Álvarez festival, a four-day event in celebration of Afro-Colombian music and culture.
Their first day was spent in the capital, Bogotá, where they visited a school to meet teenagers at a session on online safety, watched a cultural showcase in which they joined in with the dancing, and attended the digital summit looking at the urgent need to tackle the harmful aspects of technology and digital platforms.
It is the Sussexes’ third trip this year after a three-day visit to Nigeria in May and a visit to Jamaica in January.
15 notes · View notes
mariacallous · 1 month
Text
Let’s get one thing clear: the Democratic Party’s presidential nominee, Vice President Kamala Harris, was never the border czar, despite her political opponents’ attempts to label her as such. If Harris has ever had a Biden administration czarship—not with an official title but with broad authority to coordinate and direct multiple agencies, organizations, and departments on a multi-faceted policy priority—it was in artificial intelligence (AI). Strangely, this doesn’t seem to have come up a lot in the 2024 presidential contest, despite the presence of AI everywhere else these days. In fact, this role doesn’t even merit a passing mention on the “Meet Vice President Kamala Harris” page of her website even as she prepares to formally become the party’s presidential candidate at the Democratic National Convention in Chicago.
AI might lack the political resonance of the border today, but it is time we reconsider its significance to the average voter. As Harris graduates from a vibes campaign to one with more substance, the vice president should put a spotlight on the AI in her record. When AI is recast as a sweeping change that could affect jobs, income equality, national security, and the rights of ordinary citizens, it is rather quickly transformed from esoterica to an everyday concern. The Trump-Vance campaign has received support from the likes of Elon Musk, Peter Thiel, and Marc Andreessen—all major Silicon Valley and AI influencers and investors—but it is Harris, not former President Donald Trump, who has actual fingerprints on AI policy. So, what has been Harris’s track record in this area? And where is the vice president likely to take AI policy if she wins the White House?
Harris’s role as AI czar may be the political season’s best-kept secret. But if one were to trace AI policy development in the world’s leading AI-producing nation, all signs point to Harris. Remarkably, AI policy development has been led by the White House rather than the U.S. Congress. In fact, Congress has done precious little, despite the growing need for AI guardrails, while the White House, with Harris as the seniormost public official involved, has helped frame and follow up on its October 2023 executive order on AI.
That order was designed to ensure the “safe, secure, and trustworthy development and use of AI.” In addition, Harris has made a broader commitment to “establishing a set of rules and norms for AI, with allies and partners, that reflect democratic values and interests, including transparency, privacy, accountability, and consumer protections.” Significantly for a technology disproportionately reliant on a handful of industry players, Harris suggested and has led the important first step of bringing these players together to commit to a set of AI practices and standards that advances three critical objectives: safety, security, and trust.
Given the disproportionate influence of the United States on AI used around the world, it is critical for the country to have its public position clarified in international fora. Harris has represented the United States in key international convenings and led the country’s global advocacy efforts on ensuring safe AI, such as at the AI Safety Summit in Bletchley Park, England. At the other end of the stakeholder spectrum, Harris has also met with the communities most directly affected by the wider adoption of the technology, including consumer protection groups and labor and civil rights leaders, to discuss protections against AI risks.
Harris’ contact with AI has another dimension, too. As the ultimate political unicorn—a woman of color, an underrated and parodied vice president in the Biden administration, and an overnight sensation as presumptive presidential nominee of the Democratic Party after U.S. President Joe Biden stepped aside from the 2024 race—Harris’s narrative has been defined largely by others, whether it is through AI-assisted disinformation campaigns or viral memes. She has been personally targeted by deepfake videos of her supposedly making garbled statements, such as, “Today is today, and yesterday was today yesterday.” An AI-aided voice synthesis that led to a demeaning parody of her presidential campaign advertisement was reposted by Musk himself on X. Trump has also falsely claimed that the large crowds at Harris’s campaign rallies were AI generated. In other words, Harris can legitimately claim to have had AI weaponized against her personally.
Finally, Harris hails from the global capital of AI. As former attorney general and senator of California, she has been financially supported by many in the tech industry; more than 200 Silicon Valley investors have backed her run for the White House. One of her closest confidantes is her brother-in-law, Tony West, Uber’s chief legal officer (now on leave to work for the Harris campaign). It is legitimate to ask if she would be willing to confront the industry on difficult issues; at the same time, her closeness with industry leaders could help with greater government-industry collaboration.
What can we learn from Harris’s record as to what she would do in the presidency on this issue? As AI czar, Harris showed some clear patterns. For one, her primary focus has been promoting safety and addressing the risks of unregulated AI use, which can lead to bias or abuse. Second, the White House under her stewardship has accomplished a wide range of safety-, security-, and trust-enhancing actions since the issuance of the executive order—from AI testbeds and model evaluation tools developed at the Department of Energy to the Office of Management and Budget-issued government-wide policy on AI, the latter with safeguards to assess and monitor AI’s societal impact. There have been pilots at the departments of Defense and Homeland Security using AI to protect vital government software and a call to action from the Gender Policy Council and Office of Science and Technology Policy to combat AI-generated image-based sexual abuse. Harris has also been the seniormost official behind the release of a Blueprint for an AI Bill of Rights, outlining principles for the ethical design and use of AI.
Another of Harris’s initiatives has been aimed at promoting authenticity as concerns about AI-generated content skyrocket. This includes proposing international standards for tracing authenticity of government-produced digital content and identifying AI-generated or manipulated content, through digital signatures, watermarking, and labeling.
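The provenance mechanisms named above (digital signatures, watermarking, and labeling) can be illustrated with a brief sketch: a publisher signs its content with a private key, and anyone holding the matching public key can later check that the content has not been altered. The snippet below is a minimal illustration of that general idea in Python, assuming the third-party cryptography package; it is not an implementation of any specific standard proposed by the administration, and the sample message is invented for the example.
# Illustrative sketch only: sign content bytes with an Ed25519 key so a holder
# of the matching public key can verify the content has not been tampered with.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
private_key = Ed25519PrivateKey.generate()   # kept by the publisher
public_key = private_key.public_key()        # distributed to verifiers
content = b"Sample government statement (invented for this example)."
signature = private_key.sign(content)        # published alongside the content
try:
    public_key.verify(signature, content)    # raises InvalidSignature on any alteration
    print("content verified")
except InvalidSignature:
    print("content or signature has been altered")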
While Harris is not, by any means, an expert on AI, and much work remains to get a full-throated AI policy in place, the numbers tell a tale of steady early accomplishment. A list of 100 action items following the executive order has been completed by various federal agencies on issues ranging from developing new technical guidelines for AI safety to evaluating misuse of dual-use foundation models and developing frameworks for managing generative AI risks. Harris has obtained voluntary commitments from 15 companies to ensure safe, secure, and transparent development of AI technology. Thirty-one nations have joined the United States in endorsing a declaration establishing norms for responsible development, deployment, and use of military AI capabilities. And the U.S. government has won commitments of up to $200 million from 10 leading foundations to fund work around five pillars that cover issues from democracy and rights to improving transparency and accountability of AI.
Harris’s campaign rests on the idea of looking to the future and “not going back.” The Democratic National Convention in Chicago presents an opportunity for Harris to communicate more to the public about a key part of that future: AI’s economic and societal implications and her role in influencing them. Time is running out on conveying this issue’s importance, especially to the working class. While the impact of AI on different occupations is a matter of debate, some argue that, in the near-term, higher-income workers are more likely to benefit from productivity improvements due to AI and the share of income going to capital is likely to increase at the expense of the share that goes to labor. Both trends would contribute to an increase in income inequality.
As for the impact on jobs, there are different schools of thought. Some believe AI could help make many services, such as medical care, or currently elite job responsibilities, such as research, writing, graphic design, and software coding, more accessible to the middle class. Others see a plausible scenario of a hollowing out of specialized job functions. Policy and election promises need to show how a Harris administration would help steer toward the former outcome.
On the global stage, there are numerous existential risks associated with AI. Autonomous lethal weapons are a critical concern as multilateral agreements to ban such weapons have failed. Tensions with major AI-producing nations such as China are escalating, with no roadmap for getting to common ground as both the United States and China have declared their aspirations to become the world’s AI leader. A recent seven-hour meeting between top officials of the two countries in Geneva advertised as a dialogue on managing AI risks reportedly ended with no concrete agreements or follow-up meetings scheduled.
In parallel, the atmosphere has only become more tense with U.S. tariffs on Chinese imports and restrictions on the export of high-end chips to China. The Commerce Department is considering further restrictions on exporting proprietary AI models to China. Meanwhile, Beijing and Moscow are discussing a strategic partnership on various issues, including technology, while the Chinese embassy in Washington has accused the United States of “economic coercion and unilateral bullying.” If mishandled, these tensions can escalate.
Harris’s campaign can distinguish her candidacy with an acknowledgement of her track record and momentum on AI policy development. It must make the case for at least three sets of issues her administration would address. First: understanding AI’s impact on jobs and the resulting impact on economic inequality, and setting forth a plan to mitigate risks and protecting the most vulnerable. Second: developing a strategy for harnessing AI that addresses key kitchen-table concerns, such as accessible healthcare and education and skill-building. And third: crafting a vision for U.S. leadership in AI that advances responsible innovation, reduces geopolitical tensions, and preserves American national security interests.
Going from czar to president is unusual and comes with unusual challenges. Czars are usually not formally appointed as such—Harris was never officially designated AI czar despite the clear czar-like nature of her involvement—but can work to bring multiple parties together, often doing so outside public view. Then-Special Presidential Envoy for Climate John Kerry, for example, played a key role as de facto climate czar in increasing cooperation on that issue with China without much fanfare.
In other instances, and when they are brought in during an acute crisis, czars come with enormous expectations: The city of Boston awaits a rat czar, and residents want to see quick results. Czars do not have executive powers but have the respect of many, which is the calling card that allows them to convene parties with differing agendas. Presidents enjoy none of these luxuries. They own the problems they take on and they do so in public view.
There’s no escaping the reality that we are—and this election is being held—firmly in the age of AI. It is important that Harris’s team conveys the significance of AI to people’s lives and lets voters know how Harris would build on her unique track record. American voters have a choice to make for the nation’s next president this November, and on this one critical issue at least one of the candidates has a running start.
9 notes · View notes
jcmarchi · 20 days
Text
UK signs AI safety treaty to protect human rights and democracy
New Post has been published on https://thedigitalinsider.com/uk-signs-ai-safety-treaty-to-protect-human-rights-and-democracy/
The UK has signed a landmark AI safety treaty aimed at protecting human rights, democracy, and the rule of law from potential threats posed by the technology.
Lord Chancellor Shabana Mahmood signed the Council of Europe’s AI convention today as part of a united global approach to managing the risks and opportunities. 
“Artificial intelligence has the capacity to radically improve the responsiveness and effectiveness of public services, and turbocharge economic growth,” said Lord Chancellor Mahmood.
“However, we must not let AI shape us—we must shape AI. This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law.”
The treaty acknowledges the potential benefits of AI – such as its ability to boost productivity and improve healthcare – whilst simultaneously addressing concerns surrounding misinformation, algorithmic bias, and data privacy. It will compel signatory nations to monitor AI development, implement strict regulations, and actively combat any misuse of the technology that could harm public services or individuals.
Keiron Holyome, VP UKI & Emerging Markets at BlackBerry, commented: “To truly outrun cybercriminals and maintain a defensive advantage, robust frameworks for AI governance and ethical standards must be established, ensuring responsible use and mitigating risks.
“The first legally binding international AI treaty is another step towards such recommendations for both AI caution and applications for good. Collaboration between governments, industry leaders, and academia will be increasingly essential for sharing knowledge, developing best practices, and responding to emerging threats collectively.”
Crucially, the convention acts as a framework to enhance existing legislation in the UK. For example, aspects of the Online Safety Act will be bolstered to better address the risk of AI systems using biased data to generate unfair outcomes. 
The agreement focuses on three key safeguards:
Protecting human rights: Ensuring individuals’ data is used responsibly, their privacy is respected, and AI systems are free from discrimination. 
Protecting democracy: Requiring countries to take proactive steps to prevent AI from being used to undermine public institutions and democratic processes.
Protecting the rule of law: Placing an obligation on signatory countries to establish robust AI-specific regulations, shield their citizens from potential harm, and ensure responsible AI deployment.
While the convention initially focuses on Council of Europe members, other nations – including the US and Australia – are being invited to join this international effort to ensure responsible AI development and deployment.  
Peter Kyle, Secretary of State for Science, Innovation, and Technology, commented: “AI holds the potential to be the driving force behind new economic growth, a productivity revolution and true transformation in our public services, but that ambition can only be achieved if people have faith and trust in the innovations which will bring about that change.
“The convention we’ve signed today alongside global partners will be key to that effort. Once in force, it will further enhance protections for human rights, rule of law, and democracy—strengthening our own domestic approach to the technology while furthering the global cause of safe, secure, and responsible AI.”
The UK Government has pledged to collaborate closely with domestic regulators, devolved administrations, and local authorities to ensure seamless implementation of the treaty’s requirements once it is ratified.
The signing of the convention builds on the UK’s previous efforts in responsible AI by hosting the AI Safety Summit and co-hosting the AI Seoul Summit, as well as establishing the world’s first AI Safety Institute. 
See also: UK adjusts AI strategy to navigate budget constraints
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo taking place in Amsterdam, California, and London. The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, ai safety, artificial intelligence, convention, council of europe, democracy, europe, human rights, law, legal, safety, treaty, uk
0 notes
guerrerense · 4 months
Video
The Duchess at Ais Gill - Explored
flickr
The Duchess at Ais Gill - Explored, by Stephen Dance. Via Flickr: 46233 Duchess of Sutherland is in full cry, with safety valves lifting, as she approaches the summit of the Settle and Carlisle line at Ais Gill. She is working 1Z23, the 16:40 Carlisle - Crewe Cumbrian Mountain Express railtour.
5 notes · View notes
master-john-uk · 11 months
Text
youtube
1st November 2023: His Majesty The King delivers a virtual address at the AI Safety Summit 2023 at Bletchley Park.
8 notes · View notes
michellesanches · 6 months
Text
Latest AI Regulatory Developments:
As artificial intelligence (AI) continues to transform industries, governments worldwide are responding with evolving regulatory frameworks. These regulatory advancements are shaping how businesses integrate and leverage AI technologies. Understanding these changes and preparing for them is crucial to remain compliant and competitive.
Recent Developments in AI Regulation:
United Kingdom: The…
1 note · View note
squideo · 1 year
Text
Marvel's Secret Invasion used AI-generated images to create its opening credits, which is why we're revisiting this blog from June 8th. Because, if a human didn't create the credits, does that mean Marvel can't claim copyright? Read the blog to learn about AI and the interesting legal questions it's raising 🤖
Artificial intelligence (AI) is the hot topic in every industry, as experts and commentators speculate how this rapidly evolving technology will change the future of work; and humanity.
However the water cooler exists at your job – maybe you’re playing it retro and still have an actual water cooler in your physical office – people are gathering around it to ask if you’ve tried ChatGPT yet, if AI will result in a shorter work week, or if the robots are coming for us Skynet-style. While there is nothing to suggest artificial intelligence has become sentient, the line between reality and science fiction is blurring.
Where does this leave people working in industries like video production? Will AI replace us, or will it become another tool in our utility belts?
This Just In
News headlines surrounding artificial intelligence are constantly fluctuating, and the biggest topics of conversation can be forgotten only a week later once something new eclipses it.
AI is progressing so rapidly, it’s hard for experts to say with any degree of certainty where it may take us, but that doesn’t stop everyone having an opinion on what will happen next. At the time of writing in June 2023, here is an overview of the leading artificial intelligence headlines:
Sam Altman, chief executive of OpenAI, the company behind ChatGPT, says AI poses an “existential risk” to humanity.
The United Kingdom will host the first global summit on artificial intelligence safety (date TBA).
The Beatles are back, as AI creates brand new songs from the Fab Four.
The European Union (EU) asks social media companies, including Google and Facebook, to label AI-generated content.
The latest AI trend is to expand famous paintings, creating more content but no art.
Read this article again in July and half of this will be old news. Or a reunited Beatles, half of whom are back from the grave, will become an acceptable reality. Really, it’s anyone’s guess.
Are You Sure That’s Legal?
Artificial intelligence is not entirely new in video production. AI has been used already for image manipulation and content editing. As the power of AI grows, however, it comes with the potential to create imagery from vast datasets. Seemingly making something from scratch, like a human would.
In May 2023, Adobe added an AI-powered image generator called Adobe Firefly to Photoshop. The software promises to turn your wildest dream into an amazing image in seconds, but how is that image made? Rival tools, such as those from Stability AI, have already faced legal action over how their images are created. Groups like Getty Images claim Stability’s artificial intelligence generates content by using existing images, or by combining multiple images from the dataset its creator uploaded, which infringes their copyright.
Individuals using AI for personal use have a lot more legal freedoms than companies using AI to create images and videos for commercial use. Companies have to consider the copyright of the matter, and this is a huge ongoing debate that could take years to resolve.
As noted in the Berne Convention, an international treaty on copyright, copyright protection operates “for the benefit of the author and [their] successors in title” – the assumption being that there is a human creator. This has been affirmed in the US in the now famous “monkey selfie” dispute, during which both the Copyright Office and the courts found that animals could not hold copyright. The absence of a human creator in respect of AI-generated content therefore presents obstacles to the subsistence of copyright in the output that is generated. INFORMATION AGE, JUNE 2023
If a company uses AI to create a video, do they have ownership of it? Could the company face legal penalties if the AI is found to have used existing images and the artists’ sue, or is that a risk for the AI creator? Will users want to watch AI created content when there is such a big debate surrounding art and humanity; remember, the EU is campaigning for social media channels to put a label on AI content.
The simple answer is, we don’t know.
youtube
That’s a lot of potential legal issues and, because AI is evolving rapidly, there’s no clear answers. The earliest cases of these disputes are still ongoing, which means there’s little legal precedent to inform companies who are assessing how AI can help them with content creation.
Working with a video production company removes the uncertainty. Video production companies, like Squideo, give clients full ownership of the video once it’s created – meaning the video can be shared as widely as the company wants and, if it is replicated, the company can claim copyright.
Yet this doesn’t mean artificial intelligence has no place in the video production process whatsoever.
Judgement Day?
In 2023, 91% of businesses plan to use video as a marketing tool and 96% rate video as an ‘important part’ of their marketing strategy. There is growing demand for short videos, perfect for sharing to reels and stories on social media. Implementing AI in the video production process can speed things up and lower the cost.
ChatGPT can accelerate the scriptwriting process by providing a foundation to build upon – though it’s best not to rely entirely on AI writing generators just yet, as they still make a variety of mistakes. Voiceovers can be supplied by AI instead of recording artists too, as the technology has made artificial voices significantly more life-like – although they’re less directable than a voiceover artist, and not necessarily cheaper.
Video production companies like Squideo create your animated video from scratch, ensuring complete one-of-a-kindness. That doesn’t mean we can’t benefit from AI tools – but we’re not about to become obsolete either.
Ready to create a unique video of your own? Watch the video below to get a better understanding of how Squideo can help promote your business, then get in touch with us to find out more!
youtube
9 notes · View notes
lostinaflashforward · 8 months
Text
LIAFF SPECIAL #11 - Performers in brief: Kristen Stewart
Dear readers, welcome back to another instalment of LIAFF SPECIAL, the column devoted to in-depth looks at personalities and themes from the world of entertainment. This month we turn to a much-admired actress who has enjoyed remarkable recognition in recent years, between major awards and appearances at the big film festivals: Kristen Stewart. In this article we retrace her career, from her beginnings at a very young age, through the fame brought by the Twilight Saga, to her reinvention as a face of independent, against-the-grain cinema, marked by great performances and significant accolades.
A young star: who is Kristen Stewart?
Kristen Jaymes Stewart was born in Los Angeles on 9 April 1990 to an American father and an Australian mother, a producer and a screenwriter respectively. After attending local schools, Stewart continued her studies remotely through high school; she dreamed of becoming a screenwriter or director and had never considered an acting career. At the age of eight, during a school Christmas play, Stewart was spotted by an agent, which led to auditions the following year and to her first role, in the film The Thirteenth Year (1999), followed by The Flintstones in Viva Rock Vegas (2000), both simple cameos. Her first role of any weight came with The Safety of Objects (2001), in which she plays the tomboy daughter of Patricia Clarkson's character.
Panic Room: the first notable roles
The first real turning point in Stewart's career came in 2002 with Panic Room, a thriller directed by David Fincher, in which she plays the tomboy daughter of Jodie Foster's character, a role that earned her a Young Artist Award nomination for best performance. Following the film's success, she was cast in Cold Creek Manor (2003), another thriller starring Dennis Quaid and Sharon Stone. Between one remote lesson and the next, Stewart found time to appear in other films, such as the action comedy Catch That Kid, the thriller Undertow and the drama Speak (all released in 2004). In the latter, Stewart plays a girl who has stopped speaking after being raped, in a performance widely praised by critics. She then appeared in a minor role in Jon Favreau's Zathura: A Space Adventure (2005), in Fierce People (2006), acting alongside the late Anton Yelchin, in the horror film The Messengers (2007), with Dylan McDermott and Penelope Ann Miller, and in the romantic comedy In the Land of Women (2007), alongside Adam Brody and Meg Ryan.
Into the Wild: first attention from the critics
In 2007 Sean Penn chose her for a small role in Into the Wild, an adaptation of Jon Krakauer's book of the same name, itself based on the true story of Christopher McCandless, played in the film by Emile Hirsch. The film was well received by critics at the time, who singled out, among other things, Stewart's performance, judged noteworthy even in a supporting role. Stewart then appeared in a cameo in Jumper (2008), worked alongside Robert De Niro in What Just Happened (2008) and co-starred in the independent film The Cake Eaters, in which she plays a disabled girl, another role greatly appreciated by critics.
The Runaways: between vampires and more dramatic roles
In November 2007 Summit Entertainment announced that Kristen Stewart would play the female lead in Twilight (2008), a film based on Stephenie Meyer's novel of the same name and the first of a long and lucrative film saga. That first feature, directed by Catherine Hardwicke (who chose her after an impromptu audition on the set of Adventureland), brought Stewart worldwide fame, but also a string of negative reviews for her rather inexpressive acting. In 2009 Stewart appeared in Adventureland, acting alongside Jesse Eisenberg, and in the second instalment of the Twilight Saga, New Moon, followed by the third, Eclipse, released in 2010. From that point Stewart alternated between the remaining films of the Twilight Saga, namely the two parts of Breaking Dawn, released between 2011 and 2012, and a series of more dramatic films, such as The Yellow Handkerchief, in which she acts alongside the late William Hurt, Welcome to the Rileys, with the late James Gandolfini, the biopic The Runaways, in which Stewart plays rock star Joan Jett in one of her most important performances, the fantasy Snow White and the Huntsman, in which she plays an action-oriented version of Snow White, and the film adaptation of Jack Kerouac's On the Road. After the end of the Twilight Saga, Stewart became a face for brands such as Chanel and Balenciaga, establishing herself as a style icon as well.
Camp X-Ray: the comeback after the controversies
For two years Stewart stayed off the screen, partly because of the scandal involving Rupert Sanders, the director of Snow White and the Huntsman, but in 2014 she returned to cinemas with Camp X-Ray, playing a young guard working at the Guantanamo Bay detention facility, a role that brought her back to critics' attention. That same year Stewart was among the leads of Clouds of Sils Maria, a film directed by Olivier Assayas and presented at the Cannes Film Festival, which earned her the César Award for best supporting actress, acting alongside Juliette Binoche and Chloë Grace Moretz, and she acted opposite Julianne Moore in Still Alice, the film that won Moore the Oscar for best actress. In the following years Stewart appeared in Anesthesia, a film directed by Tim Blake Nelson and centred on the lives of several New York residents, in American Ultra, reuniting with Jesse Eisenberg, the dystopian sci-fi Equals, Kelly Reichardt's Certain Women, Woody Allen's Café Society, Personal Shopper, her second collaboration with Olivier Assayas, in which she plays Maureen, a young woman working in fashion who has recently lost her twin brother, another performance praised by critics, and Ang Lee's Billy Lynn's Long Halftime Walk. In this period Stewart also appeared in the music video for the Rolling Stones' track "Ride 'Em on Down" and made her directing debut with a short film titled Come Swim.
Spencer: the Oscar nomination
In 2018 Stewart appeared in Lizzie, a film version of the story of Lizzie Borden, played by Chloë Sevigny, followed by JT LeRoy, in which she plays Savannah Knoop, the face behind the famous case from which the film takes its name, and in 2019 she returned to the Venice Film Festival with Seberg, a film telling the story of actress Jean Seberg, which proved another important role for her career. Stewart then returned to mainstream cinema with Elizabeth Banks's much-discussed Charlie's Angels and the thriller Underwater, in which she acts alongside Vincent Cassel; she directed the short film Crickets for the anthology Homemade and starred in the LGBTQ+-themed Christmas film Happiest Season. In June 2020 Stewart was chosen to play Lady Diana in Spencer, a biopic directed by Pablo Larraín centred on the moment Diana decides to divorce Prince Charles. To prepare for the role, Stewart studied every aspect of the late Princess of Wales, and the effort clearly paid off: the film premiered at the 2021 Venice Film Festival and was warmly received by critics, above all for Stewart's performance, which earned her, among other things, an Oscar nomination for best actress, a milestone in her career. Stewart then returned to Cannes with Crimes of the Future, David Cronenberg's latest effort, in a minor role that was nonetheless appreciated by audiences and critics, and had a cameo in the Olivier Assayas miniseries Irma Vep.
Future projects
By all appearances Kristen Stewart has no intention of stopping here, as she has numerous projects lined up. Among them are the romantic thriller Love Lies Bleeding, directed by Rose Glass and presented at this year's Sundance Film Festival, in which she acts alongside Katy O'Brian and which is already drawing strong praise from critics; the experimental sci-fi Love Me, in which she stars opposite Steven Yeun, also presented at Sundance; her feature directing debut with The Chronology of Water, based on Lidia Yuknavitch's memoir of the same name; the comedy Sacramento, currently in production; a film about the birth of the Beat Generation, to be directed by Ben Foster; and a biopic about the activist Susan Sontag.
What is your favourite Kristen Stewart performance? Let us know in the comments.
Follow us on Facebook (group), Twitter, Instagram and Threads!
2 notes · View notes