itlegallychallenged
IT, Legally Challenged
8 posts
Tech + Law = IT, Legally Challenged
itlegallychallenged · 4 years ago
The Basics:
As the famous author C.S. Lewis once said, “[t]here are far, far better things ahead than any we leave behind.” This quote is particularly fitting because, as this blog nears its end, the world is starting to embrace AI in unprecedented ways. Last week, all 193 member states of the UN Educational, Scientific and Cultural Organization (UNESCO) adopted an agreement that defines the future development of AI (Culture and Education).
This agreement defines the “common values and principles which will guide the construction of the necessary legal infrastructure to ensure the healthy development of AI” (UNESCO).
Further Information:
Report of the Social and Human Sciences Commission (SHS) by UNESCO (NOTE: this is the full recommendation report that was agreed to)
UNESCO adopts first global text on artificial intelligence
Artificial intelligence (AI) is supporting the decision-making of governments, may help fight global issues such as climate change, and gives us personalised recommendations. But the UN agency UNESCO also warns that the technology brings challenges, such as gender and ethnic bias, as well as threats to privacy. It is for this reason that 193 UNESCO member states…
itlegallychallenged · 4 years ago
Smart Toys - Not the Smartest Choice?
The Basics:
Today’s generation of young children is being raised in the Digital Age. As a result, many children who have grown up deeply connected to technology are gaining a deeper understanding of what digital technology is and how it works than the average adult (Garside). This development has not gone unnoticed by large toy companies, and many of these manufacturers are now developing ‘Smart Toys’ (Cohen).
Smart toys, for the most part, are very similar to the traditional toys that many of us played with as children. The noticeable difference, however, is that these toys are being made with artificial intelligence capabilities, which allow them to use machine learning to interact with children (Garside).
Now I am the type of person who prefers getting the bad news before the good news, so I will start by analysing some of the negatives of Smart Toys, namely the data protection aspects.
The Bad:
When purchasing a smart toy, it is important to consider what data is being collected, how the data is being stored, and if said data is being securely stored (Shipley). Unfortunately, most of this information is not readily available and even if the toy manufacturer’s website does contain this information, it is typically either difficult to find or difficult for the average person to understand (Shipley).
While this may sound like a non-issue to some, the example of the smart toy, ‘My Friend Cayla’ should be considered. This toy was very popular in 2017, but it was soon after discovered that any individual within 9 meters of the doll/child who had the Cayla app on their phone could access the audio data from the toy and listen in on the child playing with it (Bergeson & Firth-Butterfield). This discovery led to Germany issuing a ‘kill order’ and requiring parents to destroy their children’s beloved dolls (Bergeson & Firth-Butterfield). A both funny and terrifying thought is that the last remaining German Cayla doll is now ‘living’ in Berlin’s Spy Museum (Bergeson & Firth-Butterfield).
Image Source: Klause_Mueller on Twitter
This is obviously a more extreme scenario of when smart toys go wrong, but it is a good example to keep in mind. Although it appears that no children were harmed as a result of this privacy flaw (Shipley), it is important that adults understand that these toys could be potentially exploited for a nefarious purpose.
The Good:
Now that we have briefly looked at the harms that may be associated with smart toys, we should quickly look at the positives.
The first positive is about smart toys in general. Although smart toys may raise privacy concerns, they do offer some benefits for children; for example, they have educational and social value (Shipley). Some smart toys may help children pronounce words better (e.g., the LeapFrog Epic) or offer them pro-social entertainment (e.g., the GILOBABY Smart Robot).
The second positive is about companies raising awareness around smart toys. Although there are some potential harms, smart toys that are properly created and secured can be beneficial. Sources such as Norton Security have offered a detailed guide on what parents should consider when purchasing a smart toy (Kovas/Norton). Additionally, organisations such as the Mozilla Foundation continuously update their website to give parents an easy-to-understand rating of how secure/recommended a smart toy is (Mozilla).
Image Source: a screenshot I took of Mozilla Foundation’s ‘Privacy Not Included’ Website
The Verdict?:
Like everything else, smart toys can be good, and they can be bad. Unfortunately, until more stringent regulation is introduced governing the privacy of smart toys, parents really only have three options: a) avoid smart toys, b) carry out a detailed investigation into a smart toy before purchasing it, or c) ensure that the toy’s manufacturer strictly abides by the EU’s GDPR (Milkaite & Lievens).
Further Information:
Privacy Not Included by the Mozilla Foundation
itlegallychallenged · 4 years ago
The Basics:
Google has recently developed an AI system that they call the 'Federated Learning of Cohorts' (FLoC). This system was designed as an alternative to cookies, with Google claiming that it will 'better' protect users' privacy (Witteman).
(NOTE: the meme at the bottom of this post perfectly summarises how I perceive Google's privacy policy).
As cookies are currently allowed under the GDPR (albeit with restrictions), it will be interesting to see how the FLoC would potentially hold up against both the GDPR and the AI Regulation that the EU hopes to implement in the coming years (Draft AI Regulation).
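Google's public description of FLoC involved grouping users by a locality-sensitive hash (SimHash) of their browsing history, so that users with similar histories share a cohort ID and only that ID is exposed to advertisers. Below is a minimal sketch of the SimHash idea in Python; the bit width, weighting, and example domains are my own illustrative assumptions, not Google's actual implementation:

```python
import hashlib

def simhash(domains, bits=16):
    """Locality-sensitive hash: browsing histories that overlap heavily
    tend to produce the same (or a nearby) cohort ID."""
    weights = [0] * bits
    for domain in domains:
        h = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            # Each domain votes +1 or -1 on every bit position.
            weights[i] += 1 if (h >> i) & 1 else -1
    # The cohort ID keeps only the sign of each accumulated weight.
    return sum((1 << i) for i, w in enumerate(weights) if w > 0)

cohort = simhash(["news.example", "sport.example", "shop.example"])
```

The privacy question this raises for a GDPR analysis is visible even in the toy version: the cohort ID is derived directly from browsing history, so it still reveals something about that history, just at lower resolution than a cookie.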
itlegallychallenged · 4 years ago
"NFTs Weren't Supposed to End Like This"
Image Source: Karolina Grabowska from Pexels
Brief Background:
NFTs are non-fungible tokens. They exist as part of the Ethereum blockchain and operate in a manner very similar to the infamous Bitcoin (Van Boom). NFTs were originally created as a way to better protect artists and their digital artwork (Dash). Unfortunately, in the last few months, NFTs have become popular for what are, in my opinion, all the wrong reasons. People are now trying to make a ‘quick dollar’ by buying NFTs for a minimal amount of money and then selling them at an insanely high price.
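The ‘non-fungible’ part is easiest to see in code. The sketch below is a toy, in-memory ledger inspired by the ERC-721 pattern of mapping a unique token ID to a single owner; it is illustrative only, and ignores everything that makes a real blockchain work (consensus, cryptographic signatures, gas fees).

```python
class TinyNFTLedger:
    """Toy non-fungible token registry: each token ID is unique
    and maps to exactly one owner at a time."""

    def __init__(self):
        self._owner = {}  # token_id -> owner address

    def mint(self, token_id, owner):
        if token_id in self._owner:
            # Non-fungibility: no two tokens share an ID.
            raise ValueError("token already exists")
        self._owner[token_id] = owner

    def transfer(self, token_id, sender, recipient):
        if self._owner.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self._owner[token_id] = recipient

    def owner_of(self, token_id):
        return self._owner[token_id]
```

The ‘quick dollar’ flipping described above is, mechanically, nothing more than repeated `transfer` calls at ever-higher off-chain prices; the token itself never changes.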
Linked Article by Anil Dash:
The linked article provides an excellent commentary on the history of NFTs and how they ended up where they are today. The article is written by one of the original creators of the NFT and discusses how he and his company are attempting to fight back against this mutated purpose of the NFT.
Further Reading:
Bored Ape Yacht Club: Explaining the NFT Collection Craze by Daniel Van Boom at CNET
NFTs, Explained by Mitchell Clark at the Verge
itlegallychallenged · 4 years ago
AI v Pirates (2021)
The Basics:
Imagine for a moment that you are back in the early 2000s. There is no COVID-19 pandemic, the only talked-about Kardashian is Robert Kardashian following the OJ Simpson trial, and Netflix exists only as a DVD-by-mail service. It is a relaxing Friday night and all you want to do is listen to a song or watch a movie that you do not have a physical copy of. What do you do? Is there a way for you to escape your boredom?
There is! Or at least, there was (albeit illegally). In the early 2000s, many people would have used The Pirate Bay or LimeWire for all of their illegal filesharing needs (Van der Sar). These systems have since fallen out of use following heavy legal trouble, but their legacy remains, both in their names and in the similar programmes that have come after them (Van der Sar).
Relevant Article by Hello Future:
Copyright infringement is currently addressed by the Digital Economy Act 2017. However, one main issue with the enforcement of this law with regard to illegal filesharing is that it has become progressively more difficult to uncover where the material has come from and who has shared it. Fortunately, AI systems like the one used by the Anti-Piracy Centre may help (Hello Future).
itlegallychallenged · 4 years ago
Intermediary Liability in the Age of Social Media and Cyberbullying
The Basics:
Cyberbullying can be defined as ‘[the] activity of using the internet to harm or frighten another person, especially by sending them unpleasant messages’ (Cambridge Dictionary). Its prevalence has been greatly exacerbated by the rise of social media platforms [hereinafter, SMPs], but it was unfortunately not properly addressed until three internationally reported tragedies struck in the early 2010s. During this time, three teen suicides made international news after it became public that each of the youths had been cyberbullied in the lead-up to their deaths (for more information, you can visit the publicised stories of Tyler Clementi, Amanda Todd, and Rebecca Sedwick, respectively).
Following this rise in awareness, there has been an outcry for SMPs to be held liable, via intermediary liability, for the cyberbullying that occurs on their platforms (Isselin).
Current EU Legislation:
When the e-Commerce Directive was adopted in 2000, it provided online intermediaries with a general liability exemption for illegal content found on their platforms. However, this exemption only applies if the SMP (the intermediary) is unaware of the content. If they become aware of the content at any time, they are expected to remove it quickly (Madiega). Alternatively, if they had knowledge of the content, or ought to have known about it, and failed to remove it, they may be held liable for any damage it caused.
It is very important to note that this Directive explicitly references ‘illegal content’ rather than ‘harmful content’. Therefore, any content that may be considered harmful but not illegal, which could potentially include cyberbullying, is not covered by the Directive.
This means that the position of cyberbullying and intermediary liability is quite confusing, to say the least. If the cyberbullying content crosses the line of illegality, the SMP may be held liable if they have not removed it in time. However, if the cyberbullying content is legal, just morally wrong, the SMP has no legal obligation to remove it, nor can it be held responsible for not removing it. This creates great difficulty for victims or their families when seeking recourse, as they would only be able to hold the bully liable, which is not always possible if the bully was acting anonymously (Isselin).
As such, there have been calls for the Directive to be amended, or for a new piece of legislation to be introduced, that will help protect individuals against a wider category of cyberbullying and may result in an SMP being held liable if they do not remove the bullying upon notice (Madiega).
UK Context:
Following Brexit, the e-Commerce Directive no longer applies, which complicates matters even further (Bickerstaff; O'Driscoll). However, the UK government has committed to introducing laws that will fine SMPs if they fail to remove cyberbullying content (Shankleman). Unfortunately, in the three years since the UK government made this announcement, there have been no apparent updates on the matter.
Facebook… Doing Good?:
During my research, the thing I found most surprising is actually what links this topic to AI. Since 2020, Facebook has been using AI to help filter through the enormous amount of content on the website (Vincent). The AI moderators are meant to remove any posts that appear to violate the company’s policies on hateful topics such as cyberbullying (Vincent).
As these AI moderators are still relatively new, it will take some time to see if they are effective. In the meantime, it is safe to say that this is at least a step in the right direction, as it can a) help SMPs in liability claims if the rules do change, since the content should be removed quickly, and b) help protect our youth from the traumas of cyberbullying.
Sources:
Bickerstaff, R, "The end of the EU eCommerce Directive in the UK: Impact on cloud service providers" (2020), Bird&Bird
Cambridge Dictionary, "cyberbullying" (n.d.)
Isselin, M.S., "#StopImmunizing: Why Social Networking Platform Liability is Necessary to Provide Adequate Redress for Victims of Cyberbullying" (2017), Reed Smith
Madiega, T, "Reform of the EU liability regime for online intermediaries" (2020), European Parliamentary Research Service
O'Driscoll, A, "Facebook bullying: How it happens and what to do about it" (2018), Comparitech
Shankleman, J, "U.K. Seeks Penalties for Tech Giants to Stop Cyberbullying" (2018), Bloomberg Quint
Vincent, J, "Facebook is now using AI to sort content for quicker moderation" (2020), the Verge
Further Information:
The UK's National Bullying Helpline and eCrime database can be found here: nationalbullyinghelpline.co.uk. If you or anyone you know needs to call the helpline, it is free to call and is available from 9am-5pm. The number is 0300 323 0169.
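The Verge article cited above describes Facebook's AI as sorting flagged content so that human moderators see the most severe and most viral posts first, rather than reviewing in chronological order. A minimal sketch of that triage idea follows; the scoring formula and field names are my own assumptions for illustration, not Facebook's:

```python
import heapq

def review_order(posts):
    """Yield post IDs for human review, worst-first: a toy stand-in
    for ML-predicted severity weighted by predicted reach."""
    # Negate the score because heapq is a min-heap.
    heap = [(-p["severity"] * p["reach"], p["id"]) for p in posts]
    heapq.heapify(heap)
    while heap:
        _, post_id = heapq.heappop(heap)
        yield post_id

flagged = [
    {"id": "a", "severity": 0.2, "reach": 100},   # mildly rude, small audience
    {"id": "b", "severity": 0.9, "reach": 5000},  # likely bullying, going viral
    {"id": "c", "severity": 0.9, "reach": 10},    # likely bullying, tiny audience
]
```

Here `list(review_order(flagged))` surfaces the viral bullying post first, which is the whole point of AI-assisted triage: the human moderators' limited time goes to the content most likely to cause harm.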
itlegallychallenged · 4 years ago
Meme Source: Granoff, S, "Copyright, Memes, and Article 13" (2019), Medium
The Basics:
There has recently been an increase in the number of content creators on YouTube who have had their videos removed or locked due to copyright claims from various record labels (Alexander). While many of these situations have involved fairly obvious copyright infringements, there have also been many where YouTube's AI algorithm detects even incidental uses of protected audio, resulting in videos being removed or locked (Titlow).
Relevant Article by Julia Alexander:
While incidental uses are supposedly protected by fair use policies, the linked article discusses the complexity of YouTube's fair use policy. This complexity has resulted in an unbalanced battleground in the war raging between content creators on YouTube and large record labels.
Further Information:
Background Music Copyright Infringement by Ian Corzine
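Automated matching systems like YouTube's work by fingerprinting small overlapping windows of audio and comparing them against a reference database, which is why even a few seconds of incidental background music can trigger a claim. The toy sketch below illustrates the windowing idea only; real systems fingerprint spectral features rather than raw samples, and the match threshold here is an arbitrary assumption:

```python
def fingerprints(samples, window=4):
    """Hash every overlapping window of the signal. A short clip of a
    song still reproduces many of the song's window hashes."""
    return {hash(tuple(samples[i:i + window]))
            for i in range(len(samples) - window + 1)}

def likely_match(clip, reference, threshold=0.3):
    """Flag a clip when enough of its window hashes appear in the
    reference track's fingerprint set."""
    clip_fp = fingerprints(clip)
    overlap = len(clip_fp & fingerprints(reference))
    return overlap / max(len(clip_fp), 1) >= threshold
```

Note that the matcher has no concept of *context*: a twenty-second excerpt matches identically whether it is a re-upload of the song or faint background music in a vlog, which is precisely the fair-use blind spot the linked article complains about.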
itlegallychallenged · 4 years ago
AI - INVENTORS OF THE FUTURE?
Today, AI is being used in many different industries, including banking, technology, and entertainment (Anoyha).  Speaking in the technological sense, AI has the capability to enable self-driving cars, build websites, simplify complicated text, and even code software (Cold Fusion).  While all of these involve their own legal implications, it is the software-coding capability that will be the topic of today’s post.  Particularly, I am interested in the patentability of a software programme that has been developed by an AI.
Patenting Software:
Current UK patent law states that computer programmes and software cannot be patented, as they do not fall under the definition of an ‘invention’ (Patents Act 1977, s1(2)(c)).  While this may sound discouraging to all programme developers out there, case law now offers an exception in the form of a four-step test (Aerotel Ltd v Telco Holdings Ltd [2006] EWCA Civ 1371).  Under this test, software may be patentable if it has made a technical contribution to its relevant field (Aerotel).  A more in-depth introduction to current UK patent law and software inventions has been delivered on Thomson Reuters Legal Europe’s YouTube page, found here.
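The four Aerotel steps lend themselves to being written down as a checklist. The sketch below encodes my own paraphrase of the test as a short function; it is a study aid under simplified assumptions, not legal advice, and the step labels are condensed from the judgment:

```python
def aerotel_test(claim):
    """Walk a paraphrased four-step Aerotel test over a claim described
    as a dict of yes/no findings; return the first failing step, if any."""
    steps = [
        ("1. properly construe the claim", claim["claim_construed"]),
        ("2. identify the actual contribution", claim["contribution_identified"]),
        ("3. contribution is not solely excluded matter",
         not claim["solely_excluded_matter"]),
        ("4. contribution is technical in nature", claim["technical_in_nature"]),
    ]
    for label, satisfied in steps:
        if not satisfied:
            return f"potentially unpatentable: fails at {label}"
    return "may proceed as a patentable invention"
```

Framing the test this way makes the post's point concrete: a pure computer programme typically falls at step 3 (excluded matter) unless the technical contribution at step 4 can be made out.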
Inventions/Contributions by AI:
Now that it has been established that software programmes can potentially be considered for a patent under current UK law, it is time to look at how software programmes are patented and by whom.
The Patents Act sets out the requirements for patent applications.  Unfortunately for AI, the Act explicitly states that inventions may only be patented by a ‘person’ (Patents Act 1977, s7(1)).  This definition was called into question by a recent court case, and in September of this year the Court of Appeal confirmed that the definition of ‘person’ extends only to those with legal personality (Thaler v Comptroller General of Patents Trade Marks and Designs (2021)).  Therefore, an AI cannot be listed as an inventor on a patent application, as AI lacks recognised legal personality within the UK, regardless of any contributions it may have made to the invention/software.
So What Does This Mean?:
Say we have a software programme that was coded solely by an AI.  Under current law, only software programmes that have made a technical contribution to a relevant field can be considered for a patent.  Even assuming that this is the case, the AI that created the programme would still be unable to qualify as an inventor on the application, because it lacks legal personality.
As AI becomes more and more capable, this becomes a larger concern.  One day, most of our new software programmes may be developed by AI, with no way for the AI and its creators to properly patent the software (Davies & Marfe).  Fortunately, this is a concern that is currently being addressed by the UK’s Intellectual Property Office [hereinafter, the IPO] (IPO).  The IPO has recently pledged its commitment to reviewing the current patent laws and to allowing some exceptions where AI may be considered an inventor (IPO).  Although there are no further updates as of this moment, it will be interesting to see how the law develops in the coming years.
Further Information:
AI Cannot be the Inventor of a Patent, Appeals Court Rules from BBC News 

Thaler v Comptroller-General: Part 1 - AI Systems Cannot be an Inventor under the Patents Act by Toby Bond at Bird&Bird (Video)
Thaler -v- the Comptroller-General of Patents, Designs and Trademarks by the Court of Appeal - Civil Division - Court 75