# European Data Protection Board (EDPB)
Norway Wants Europe-Wide Ban on Facebook Behavioral Ads
Norway is urging the European Data Protection Board (EDPB) to permanently ban Meta (formerly Facebook) from harvesting user data for advertising purposes and to extend the ban across Europe.
Understanding GDPR Fines - Key Takeaways
Navigating the complex terrain of the General Data Protection Regulation (GDPR) can be daunting, but it’s crucial for businesses to stay compliant to avoid heavy penalties. The European Data Protection Board (EDPB) shed more light on this area by finalising its guidelines for calculating administrative fines. Aimed at bringing consistency to GDPR fines imposed across the EU, these guidelines were…
#Administrative fines #Art. 83 GDPR #Data breach penalties #Data protection guidelines #Data sensitivity #Data transfer practices #EDPB guidelines #EU data privacy #EU data privacy regulations #EU data protection fines #European Data Protection Board (EDPB) #European Facebook users #European Union data protection #GDPR #GDPR compliance #GDPR compliance tips #GDPR enforcement #GDPR fines #GDPR legal requirements #GDPR penalties #GDPR updates #General Data Protection Regulation (GDPR) #Harmonized GDPR fines #Infringement classification #Large-scale data breaches #Legal maximum fines #Meta (Ireland) fine #Proactive reporting #Supervisory authorities #Turnover definition
EU Regulators May Evaluate How Data Is Stored On The Blockchain
Regulators in the European Union (EU) have released new guidance on blockchain technology as it pertains to the processing of personal data.
In a new report, the European Data Protection Board (EDPB) says that in order to properly comply with the EU’s General Data Protection Regulation (GDPR), “evaluations” may need to be conducted on how blockchains record data.
EU Regulators Dismiss Meta’s Privacy Fee For Facebook And Instagram, Declaring The Company Has Exhausted Its Alternatives

(Source: theregreview.org)
Meta’s attempt to charge European users of Facebook and Instagram for opting out of ad tracking seems to have hit a roadblock. Introduced late last year in response to a significant ruling by the EU’s highest court, Meta’s subscription model aimed to address concerns raised by privacy advocates. However, critics quickly pointed out that this approach failed to offer genuine consent, as users were essentially forced to pay a monthly fee to protect their privacy.
Originally priced at €12.99 ($13.82) for accounts used on both mobile and web, Meta recently proposed reducing the fee to €5.99 to appease critics. Nonetheless, the European Data Protection Board (EDPB), representing the EU’s privacy regulators, has sided with privacy advocates who derisively dubbed Meta’s approach “pay or okay.”
The EDPB’s opinion, published following Politico’s report, emphasized that offering users only a binary choice between consenting to data processing for behavioral advertising or paying a fee does not constitute valid consent. According to EDPB Chair Anu Talus, such models fail to provide users with a genuine choice, leading many to unknowingly consent to data processing without understanding the implications.
Charging pages for reach and engaging in contextual ads
Austrian activist lawyer Max Schrems, known for his legal battles against Meta spanning over a decade, stated that Meta now has no choice but to offer users a clear option to opt in or out of personalized advertising. While Meta still has other revenue avenues, such as charging pages for reach and engaging in contextual ads, Schrems emphasized the importance of obtaining explicit consent from users before tracking them for advertising purposes.
In essence, Meta’s options in the EU appear limited. The company must now rethink its approach to user consent and privacy, ensuring that users are provided with transparent choices regarding their data. As regulatory scrutiny intensifies and privacy concerns continue to mount, Meta faces increasing pressure to align its practices with evolving standards of data protection and user rights.
Moreover, Meta’s woes in the EU underscore broader challenges faced by tech giants regarding data privacy and regulatory compliance. The company’s struggles reflect growing calls for stronger data protection measures and greater transparency in the digital ecosystem.
Clearer information about data collection practices
The EDPB’s stance signals a potential shift towards stricter enforcement of data protection laws, with implications not only for Meta but for the entire tech industry. As regulators crack down on practices deemed invasive or non-compliant, companies will need to reassess their data handling procedures and prioritize user privacy.
Furthermore, Meta’s difficulties highlight the complexities of balancing business interests with ethical considerations and legal obligations. While personalized advertising remains a lucrative revenue stream for tech companies, the pushback from regulators and privacy advocates underscores the need for a more user-centric approach to data management. In response to mounting scrutiny, Meta may face pressure to implement more robust privacy controls and transparency mechanisms. This could involve providing users with clearer information about data collection practices, enhancing consent mechanisms, and empowering users with greater control over their personal information.
TikTok Faces €345 Million Fine Over Unfair Design Practices Targeting Minors.
🚨 Breaking News! 🚨 The EDPB takes a firm stand against TikTok's unfair design practices targeting minors. A monumental €345 million fine signals a move towards a safer online environment for children.
The popular social media platform TikTok has been ordered to overhaul its design practices concerning minors, following a binding decision by the European Data Protection Board (EDPB). The board found that TikTok violated the General Data Protection Regulation (GDPR) principle of fairness in processing the personal data of children aged between 13 and 17. The EDPB’s decision, issued on 2nd August…

#Child Safety Online #Data Protection #Digital Platforms #EDPB Decision #GDPR #TikTok #Online Privacy #Privacy Settings #Social Media Regulation
Meta's Billion Dollar Fine
Facebook's parent company, Meta, just got slapped with a jaw-dropping fine of $1.3 billion by the European Union (EU) data protection regulators. Why, you ask? Well, it's all about the way they handle your precious personal data.
The European Data Protection Board (EDPB) came down hard on Meta for transferring user data from the EU to the U.S. They've ordered Meta to get its act together, align its data transfers with the General Data Protection Regulation, and wipe out any unlawfully stored data within six months. The General Data Protection Regulation is all about giving users the right to access their data, to have it deleted, and to object to its processing. It requires companies to obtain explicit consent from users before transferring their data outside of the European Union.
If you use Facebook or the services of other U.S. tech companies, this ruling should make you think about your privacy. It is important to be aware of how your data is being collected and used, and to take steps to protect it.
You can do this by:
Reading the privacy policies of the companies you use.
Opting out of tracking and targeted advertising.
Being careful about what information you share online.
Meta hit with record $1.3 billion fine by EU over handling of Facebook users' personal data
CBS News, by Haley Ott, updated May 22, 2023. London — Meta was fined the equivalent of a record $1.3 billion by the European Union on Monday for its transfer of users’ personal data from Europe to the United States. Andrea Jelinek, chair of the European Data Protection Board (EDPB), said Meta’s infringement was “very serious since it concerns transfers that are systematic,…
Meta ordered to suspend Facebook EU data flows as it's hit with €1.2BN privacy fine
It’s finally happened: Meta, the company formerly known as Facebook, has been hit with a formal suspension order requiring it to stop exporting European Union user data to the US for processing. The European Data Protection Board (EDPB) confirmed today that Meta has been fined €1.2 billion (close to $1.3BN) — which looks to be a record sum for a penalty under the bloc’s General Data Protection…

EDPB on Dark Patterns: Lessons for Marketing Teams
“Dark patterns” are becoming the target of EU data protection authorities, and the new guidelines of the European Data Protection Board (EDPB) on “dark patterns in social media platform interfaces” confirm their focus on such practices. While they are built around examples from social media platforms (real or fictitious), these guidelines contain lessons for all websites and applications. The bad…

#business #Cybersecurity #data #EDPB #EU #European Data Protection Board #European Union #Global #legal #Legal Marketing #Marketing #privacy #world
On January 4, the Irish Data Protection Commission (DPC) fined Meta €390 million ($414 million) for violating Europe’s privacy law, the General Data Protection Regulation (GDPR), and directed the company to bring its data processing operations into compliance within 3 months. Shortly thereafter, the European Data Protection Board (EDPB), which consists of all the European data protection authorities, released the text of its binding decision that dictated the Irish DPC’s ruling. The key finding is that Meta cannot rely upon its contract with users as providing a sufficient legal basis for processing user data for personalized ads. If upheld on appeal, this decision might require social media companies and other online businesses to significantly revise their data-focused advertising business model in the name of protecting privacy.
I want to discuss the EDPB’s decision in two parts. In this post, I will first analyze its legal basis and assess its likely business implications. In the next part, I will consider whether this decision holds some lessons for policymakers as they seek to revise U.S. laws to protect privacy more adequately.
The European Privacy Approach
The European Union’s GDPR became effective in 2018. It requires companies to have a legal basis for data processing, the European term of art for collecting and using personal information. “Processing shall be lawful,” says Article 6 of GDPR, “only if and to the extent that at least one of the following applies,” and includes a list of legal bases for data processing.
The key bases are fulfillment of a contract, consent, and legitimate interest. Under fulfillment of a contract, processing is lawful only if it is “necessary for the performance of a contract to which the data subject is party or in order to take steps at the request of the data subject prior to entering into a contract.” Under consent, processing is lawful only if “the data subject has given consent to the processing of his or her personal data for one or more specific purposes.” Under legitimate interest, processing is lawful only if it is “necessary for the purposes of the legitimate interests pursued by the controller or by a third party…”
The interpretation of these key legal terms of contractual necessity, consent, and legitimate interest is complex and contested. But for the purpose of understanding the broad outlines of the EDPB’s decision, the uses of the different legal bases can be simplified as follows.
Contractual necessity
Contractual necessity applies when the company needs personal information to fulfill a contract it has made with you to provide a service. An online retail store clearly needs users’ contact details in order to send the items they have purchased. The store can rely on contractual necessity in this case as the basis for collecting and using this information.
Consent
Consent is the legal basis to use if a company wants to process personal information that is not needed to provide service to the customer. If a company wants to collect users’ zip codes at the point of sale, it must ask the customers’ permission and tell them why it wants the information (understanding the company’s customer base for instance, or direct marketing). If the customers refuse, the company must still sell them what they want to buy. If the customers provide the store with their zip codes in these circumstances, they have consented, and the company can claim that as its legal basis for collecting the information.
Legitimate interest
Legitimate interest applies when neither of the other two applies. If a company wants to collect and use user information for direct marketing but has not obtained consent and does not need the information to provide a service, it can nevertheless obtain and use it if it can show that it has a real business need for the information, one that overrides any interest the consumers have in protecting their privacy. The comment on legitimate interest in GDPR Recital 47 says that fraud prevention and direct marketing could be justified under legitimate interest. Neither consent nor contractual necessity would be required for data use justified under legitimate interest.
Further, Article 21 of GDPR limits the use of legitimate interest as a basis for direct marketing. This article provides users with an absolute right to object to direct marketing. A company can assert its legitimate interest as a basis for direct marketing, but as soon as a user objects it must honor this request to stop direct marketing. This right to object overrides any claim of business interest.
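To make the interplay of these three bases and the Article 21 objection right concrete, here is a minimal illustrative sketch in Python. The enum, function, and boolean flags are my own simplifications for exposition, not anything drawn from the GDPR's text or from regulator guidance.

```python
from enum import Enum, auto
from typing import Optional

class LegalBasis(Enum):
    CONTRACTUAL_NECESSITY = auto()
    CONSENT = auto()
    LEGITIMATE_INTEREST = auto()

def lawful_basis(needed_for_service: bool,
                 explicit_consent: bool,
                 interest_outweighs_privacy: bool,
                 is_direct_marketing: bool,
                 user_objected: bool) -> Optional[LegalBasis]:
    """Return a plausible Article 6 basis for one processing purpose,
    or None if the processing would be unlawful. Illustrative only."""
    # Contractual necessity: the data is strictly needed to provide
    # the service the user asked for.
    if needed_for_service:
        return LegalBasis.CONTRACTUAL_NECESSITY
    # Consent: a freely given, specific, informed, unambiguous "yes".
    if explicit_consent:
        return LegalBasis.CONSENT
    # Legitimate interest: a real business need not overridden by the
    # user's privacy interests -- but Article 21 gives users an absolute
    # right to object to direct marketing, which trumps this basis.
    if interest_outweighs_privacy:
        if is_direct_marketing and user_objected:
            return None  # the objection must be honored
        return LegalBasis.LEGITIMATE_INTEREST
    return None
```

The ordering reflects the discussion above: if the data is genuinely needed to provide the service, the appropriate basis is contractual necessity and no form of choice is required; consent and legitimate interest only come into play for processing beyond that.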
The European Data Protection Board’s Meta Decision
The Irish Data Protection Commission’s (DPC) January 4, 2023 announcement was the product of a complex process. Meta claimed to the Irish DPC that its legal basis for processing user data for personalized social media services and for advertising purposes was contractual necessity. The Irish DPC essentially agreed, but its decision was challenged by other European data protection authorities, which triggered a process of negotiation to seek a resolution of that dispute. The dispute resolution procedure failed and, pursuant to procedures set out in the GDPR, the issue was referred to the European Data Protection Board (EDPB), a body that consists of all the European Union’s data protection authorities. The EDPB is authorized to issue binding decisions to ensure that the national data protection authorities apply the provisions of the GDPR in a correct and consistent manner.
On December 9, 2022, the EDPB announced that it had “settled” the question of whether or not the processing of personal data for the performance of a contract is a suitable legal basis for social media behavioral advertising. In conformity with that binding decision, the Irish DPC announced in January that it was reversing itself and rejecting contractual necessity as the basis for Meta’s processing of personal data for advertising purposes. While this decision is formally one made by the Irish DPC, it effectively was determined by the collective body of European data protection commissioners. A few days later, on January 11, the Irish DPC released the text of its decision, and the following day the EDPB released the text of the binding decision that had dictated the Irish DPC’s ruling.
The EDPB ruling is the key one for understanding the basis of this decision. In the record it reviewed in coming to its decision, the EDPB found information revealing “the complexity, massive scale and intrusiveness of the behavioural advertising practice that Meta IE conducts…” (Par. 96). This immediately signals its suspicion of Meta’s data practices, and that it would demand substantial evidence before accepting that this “massive” collection of data for personalized ads is needed to provide social media service.
On the basis of the “objectives” and “normative context” of GDPR and of earlier European court decisions the EDPB concludes that GDPR “treats personal data as a fundamental right inherent to a data subject and his/her dignity, and not as a commodity data subjects can trade away through a contract.” (Par. 100, 101). This reassertion of the fundamental premise of European privacy law that privacy is prior to business interests is a guiding principle of the decision.
The EDPB recognizes that while data subjects cannot arbitrarily trade away their privacy, they are permitted under GDPR Article 6 to provide personal information needed to obtain a service. So, the EDPB turns to the question of “whether behavioural advertising is objectively necessary for Meta” to provide its service. (Par. 111). If it is, then Meta may claim contractual necessity; if it is not, then Meta may not.
The EDPB then argues that personalized advertising is not needed to provide social media services. It asserts that if “there are realistic, less intrusive alternatives, the processing is not ‘necessary’.” (Par. 120). It considers that there are such alternatives, including “contextual advertising based on geography, language and content, which do not involve intrusive measures such as profiling and tracking of users.” (Par. 121). Meta has found it useful for its business purposes to generate revenue through personalized ads. But that is not contractual necessity, since there are realistic alternative funding mechanisms. The EDPB concludes that personalized advertising “is useful but not objectively necessary for performing the contractual service, even if it is necessary for the controller’s other business purposes.” (Par. 121).
The EDPB also argues that processing for the purposes of personalized advertising cannot be necessary to provide social media services in light of the data subject’s “absolute right” to object to data processing for purposes of direct marketing under Article 21 of GDPR. Data processing for the purposes of personalized ads “cannot be necessary to perform a contract if a subject has the possibility to opt out from it at any time, and without providing any reason.” (Par. 122).
The EDPB notes that an important consideration in its rejection of Meta’s contractual necessity justification is that “the main purpose for which users use Facebook and accept the Facebook Terms of Service is to communicate with others, not to receive personalised advertisements.” (Par. 124).
Next Steps
The consensus among analysts is that for the immediate future Meta will be able to continue to fund its operations through personalized ads. Matt Perault at New Street Research, for instance, considers that the EDPB judgment “won’t affect its ads business in the short run.” Meta’s reaction to the decision bears out this analysis. In a company-issued blog post, Meta says it thinks its legal justification of contractual necessity “respects” GDPR and complains about the lack of “regulatory clarity” on the issue. The company said it would appeal both the ruling and the size of the fines, noting that the European courts may yet reach “a different conclusion altogether.” Presumably, it would also ask a court to stay the implementation of the ruling during the pendency of the appeal, which would allow its personalized ad business to continue uninterrupted, potentially for years.
Even if Meta fails to obtain a stay, it is open to the company to revise its legal basis and to present an alternative justification for its data processing. This could be consent, but Meta seems uninterested in pursuing this option. In the same blog post, it says that the EDPB decision does not “mandate the use of Consent” as a legal basis for its data processing. It rejects the idea that it can no longer offer personalized ads unless each user’s agreement has been obtained. And it holds out the prospect of “another available legal basis under GDPR” for personalized advertising.
But the only plausible alternative legal basis other than consent or contractual necessity would be legitimate interest. Legitimate interest is a complex legal basis that would require Meta to show its legitimate interest in personalized advertising overrides “the interests or fundamental rights and freedoms of the data subject which require protection of personal data.” If Meta pursues that route, it could submit a justification to the Irish DPC based on legitimate interest and try to satisfy the heavy burden involved in defending that legal basis.
The Irish DPC order says that Meta must “bring its processing operations into compliance with GDPR” within three months. Meta could argue, however, that it had complied with the ruling by providing this alternative legal basis of legitimate interest and should be allowed to provide personalized ads until the Irish DPC has had a chance to evaluate this new claim, which could take months or years. The Irish DPC may very well accept this argument, which would provide a significant delay in any operational changes. It is worth remembering that the objection to Meta’s contractual necessity justification was filed four years ago, and the dispute will likely continue for several more years through appeals.
In the longer term, however, Meta faces a seemingly insuperable hurdle in maintaining its personalized ad business in its current form, even if it succeeds in its legitimate interest justification. This is because Article 21 of GDPR provides an absolute right for users to object to the processing of their personal information for direct marketing, which would include personalized ads on social media. Even if Meta successfully invokes legitimate interest to justify the use of personal information for personalized ads, it must still honor this absolute right for users to object.
Will Meta change its existing ad model to comply?
Observing this right to object is likely to mean that Meta would have to offer its users the alternative of receiving the personalized social media services without also receiving personalized ads. Providing users with a choice, however, is extraordinarily risky for Meta’s personalized ad business. When Apple gave its app store users a yes or no choice on whether they wanted apps to track them for purposes of serving ads, 96% of U.S. users rejected personalized ad tracking. It is for this reason that analysts are concerned that in the long run Meta’s personalized ad model is in trouble. Dan Ives, an analyst at Wedbush Securities, for instance, thinks that the ruling could put “5 to 7 percent of Meta’s overall advertising revenue at risk.”
The alternative to a social media service paid for by personalized ads might well become an increasingly important part of Meta’s business model. The company could seek to fund this alternative through contextual ads alone. But it could also offer users an alternative of paying a fee to receive a personalized social media service free of targeted ads, a model that is widely followed in other services such as streaming music. Whether the fee could be set so high ($100 a month, for instance) that as a practical matter it forced users to accept personalized ads would be a question for the Irish DPC to address when it approves or rejects Meta’s proposal for coming into compliance with GDPR. Assessing the commercial necessity of Meta’s rates would force the agency into the new and uncomfortable position of economic regulator supervising the rates that Meta could charge its users.
Despite the potentially far-reaching nature of the ruling for Meta’s personalized ad business, it is also worth remembering that it might not mean that the company will collect any less personal information or no longer construct detailed profiles of its users. The ruling simply says that Meta cannot collect information or construct profiles for the purpose of serving personalized ads under its contractual necessity basis. The ruling seems to allow Meta to continue to collect and use personal information on the basis of its terms of service for the purpose of providing personalized social media services. So, users who accept Meta’s terms of service will still be allowing the company to collect and analyze information derived from their use of the social media platform for the purpose of ranking, prioritizing, and recommending material posted by other users. Nothing in the decision appears to mean that Meta will have to stop offering algorithmically driven social media service. It would not, for example, be required to provide a chronological feed as one or the only alternative for its users. The ruling imposes no limitation on algorithmic amplification based on personal information.
Moreover, the ruling does not say that Facebook or Instagram must be ad-free. The ads that appear on these services that many find to be annoying and intrusive will likely continue and might even increase. But now these ads would not be personalized. They would be static ads that would be shown indifferently to all users or targeted contextually to all users in a certain location or who speak a given language. Even a fee-based service might contain these non-personal ads.
Conclusion
Privacy advocates might then wonder what they have concretely gained from this apparent victory. Social media surveillance likely will not diminish, nor will the bombardment of users by distracting and confusing commercial advertising. Still, an important precedent has been set, one that vindicates the primacy of privacy rights. The decision delivers a message to all social media companies and other digital companies that they must respect the privacy interests of their users first. Their commercial interests are secondary. To paraphrase the great philosopher of human rights, Immanuel Kant, businesses must first be certain that they are respecting people’s fundamental rights, including their privacy rights. Only then are they entitled to look around for ways to satisfy their economic interests.
In a forthcoming blog, I will look at whether U.S. policymakers should reimagine for the U.S. context the European privacy requirement to demonstrate a legal basis for personal data use and if so, what the implications might be for the data practices of social media companies and other digital companies in the U.S.
European Privacy Watchdogs Assemble: A United AI Task Force for Privacy Rules
In a significant move towards addressing AI privacy concerns, the European Data Protection Board (EDPB) has recently announced the formation of a task force on ChatGPT. This development marks a potentially important first step toward creating a unified policy for implementing artificial intelligence privacy rules.
Following Italy's decision last month to impose restrictions on ChatGPT, Germany and Spain are also contemplating similar measures. ChatGPT has witnessed explosive growth, with more than 100 million monthly active users. This rapid expansion has raised concerns about safety, privacy, and potential job threats associated with the technology.
The primary objective of the task force is to promote cooperation and facilitate the exchange of information on possible enforcement actions conducted by data protection authorities. Although it will take time, member states are hopeful about aligning their policy positions.
According to sources, the aim is not to punish or create rules specifically targeting OpenAI, the company behind ChatGPT. Instead, the focus is on establishing general, transparent policies that will apply to AI systems as a whole.
The EDPB is an independent body responsible for overseeing data protection rules within the European Union. It comprises national data protection watchdogs from EU member states.
With the formation of this new task force, the stage is set for crucial discussions on privacy rules and the future of AI. As Europe takes the lead in shaping AI policies, it's essential to stay informed about further developments in this area. Please keep an eye on our blog for more updates on the EDPB's AI task force and its potential impact on the world of artificial intelligence.
European regulators are increasingly focused on ensuring that AI is developed and deployed in an ethical and responsible manner. One way that regulators could penalize AI is through the imposition of fines or other penalties for organizations that violate ethical standards or fail to comply with regulatory requirements. For example, under the General Data Protection Regulation (GDPR), organizations can face fines of up to 4% of their global annual revenue for violations related to data privacy and security.
Similarly, the European Commission has proposed new regulations for AI that could include fines for non-compliance. Another potential penalty for AI could be the revocation of licenses or certifications, preventing organizations from using certain types of AI or marketing their products as AI-based. Ultimately, the goal of these penalties is to ensure that AI is developed and used in a responsible and ethical manner, protecting the rights and interests of individuals and society as a whole.
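As a rough illustration of how that turnover-based cap works (a minimal sketch in Python; the company and the revenue figure are hypothetical):

```python
def gdpr_fine_cap_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound for the most serious GDPR infringements: the greater
    of EUR 20 million or 4% of global annual turnover (Art. 83(5))."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# A hypothetical company with EUR 10 billion in annual turnover:
print(f"{gdpr_fine_cap_eur(10_000_000_000.0):,.0f}")  # 400,000,000 -> EUR 400M
```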
#EuropeanDataProtectionBoard #EDPB #AIprivacy #ChatGPT #DataProtection #ArtificialIntelligence #PrivacyRules #TaskForce #OpenAI #AIregulation #machine learning #AI
EU Proposes New Privacy Rules For Data On The Blockchain
The European Data Protection Board has approved draft rules governing how personal data is stored and shared on blockchains, marking another step toward aligning decentralized technology with existing standards.
The new guidelines limit access to stored information and bring blockchain data handling into line with General Data Protection Regulation (GDPR) protections, according to the EDPB, which ratified the rules this month and opened a public comment period running until June 9.
“Blockchains have certain properties that can lead to challenges when dealing with the requirements of the GDPR,” the EDPB said in a version of the guidelines available online. “The guidelines highlight the need for Data Protection by Design and by Default and adequate organizational and technical measures.”
The document added: “As a general rule, storing personal data on a blockchain should be avoided if this conflicts with data protection principles.”
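One widely discussed pattern consistent with that advice is to keep the personal data itself off-chain, where it can still be rectified or erased, and to anchor only a salted hash on the chain. A minimal sketch of the idea in Python; the storage layout and function names are invented for illustration:

```python
import hashlib
import os

off_chain_store = {}   # mutable, deletable storage for the personal data itself
on_chain_ledger = []   # append-only ledger holds only opaque commitments

def record_personal_data(value: str) -> str:
    """Keep personal data off-chain; anchor only a salted hash on-chain."""
    salt = os.urandom(16).hex()
    commitment = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    off_chain_store[commitment] = (salt, value)
    on_chain_ledger.append(commitment)  # the chain never sees the raw data
    return commitment

def erase_personal_data(commitment: str) -> None:
    """Honor an erasure request by deleting the off-chain record; the bare
    on-chain hash can no longer be linked back to the person."""
    off_chain_store.pop(commitment, None)
```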
EU: "Cookie walls violate the GDPR"
The purpose of the EU's General Data Protection Regulation was to effectively ban the ad-tech industry and its practices by annihilating the pretense that clicking "I agree" or loading a page that said, "You agree" was the same as consent for tracking.
https://boingboing.net/2018/01/09/information-controllers-galore.html
Under the GDPR, service providers would be forced to only collect data for explicit, enumerated purposes that could be expressed in plain language, and could only share data with other entities after each one was explicitly approved by the user.
So if you operated a site that ran 50 trackers that harvested data that was passed on to hundreds of brokers who passed it on to thousands of other brokers, then each time you got a new user, you'd have to get thousands of permissions from the user.
Each permission would have to be meaningful: you'd have to explain in simple language what you were doing and why, and even if the user opted out of that collection, you'd have to still let them proceed to the site.
The fact that users might just leave your site rather than saying "no" 2,000 times before being allowed to proceed was a feature, not a bug. It was meant to expose the sham of consent.
Basically: "Obtaining informed consent to thousands of surveillance acts takes hours, so whatever you were getting by adding a line of 8pt grey-on-white type that said, 'By visiting this site you consent to our privacy policy,' it was NOT consent."
But ad-tech didn't get the memo. They started to put up "cookie walls" on their sites, pop-up boxes that basically said, "Accept our cookies or fuck off."
https://techcrunch.com/2020/05/06/no-cookie-consent-walls-and-no-scrolling-isnt-consent-says-eu-data-protection-body/
That's not consent either, and the European Data Protection Board (EDPB) just published guidelines saying so:
https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202005_consent_en.pdf
Also not consent: scrolling past a thing that says, "Please look at this dashboard and tell us which acts of surveillance you're OK with." A user who scrolls past that dialog should be presumed to have WITHHELD consent, not granted it.
"Actions such as scrolling or swiping through a webpage or similar user activity will not under any circumstances satisfy the requirement of a clear and affirmative action”
On TechCrunch, Natasha Lomas calls this “cookie consent theatre,” and predicts new enforcement action, noting that “GDPR fines can scale as high as €20M or 4% of global annual turnover.”
ChatGPT Relaunches Operations In Italy After Addressing Regulatory Demands
Source: https://bitcoinist.com/chatgpt-re-begins-operations-italy-address-demands/

Open AI’s ChatGPT has begun offering services in Italy again after addressing the regulatory concerns raised by the nation’s data protection agency, Garante. The internet-favorite chatbot, which was banned in Italy for almost a month, is now operational, albeit following some systemic changes.

Back on March 31, Garante placed a temporary ban on ChatGPT services in Italy following suspicions of violations of the European Union’s General Data Protection Regulation (GDPR) by the highly popular chatbot. The ban was mainly driven by a data breach in ChatGPT which occurred on March 20, exposing user conversations and payment information.

According to the Italian data security authority, ChatGPT had no legal basis to justify its massive collection of individual data, which it claimed was utilized in training the algorithms behind its operation. Garante also pointed out the lack of an age verification system, thus exposing minors and underage users of ChatGPT to answers which may be beyond their level of development and awareness.

ChatGPT Complies With Garante, Introduces New Policies

Almost a month after the ban, ChatGPT developer Open AI appears to have taken the necessary measures to comply with the demands of the Italian security watchdog. According to Garante’s official statement confirming the reinstatement of ChatGPT services, Open AI has agreed to expand its privacy policy, allowing individuals across Europe, including non-users, to object to the processing of their data for the training of algorithms.

In addition, the Microsoft-backed company has added an age verification feature restricting access to users aged 13 and over, with users between 13 and 18 required to have parental consent.

Commending this regulatory compliance effort shown by Open AI, Garante stated: “The Italian SA acknowledges the steps forward made by OpenAI to reconcile technological advances with respect for the rights of individuals, and it hopes that the company will continue in its efforts to comply with European data protection legislation.” Garante also called on Open AI to fulfill all outstanding requests as laid out in an order dated April 11.

European Authorities Ramp Up Efforts On AI Regulations

As AI-powered services such as ChatGPT gain traction, European authorities are intensifying efforts to regulate the rapidly developing industry and protect users’ interests. For example, following the temporary ban on ChatGPT by Garante, the European Data Protection Board (EDPB), the general enforcer of the EU’s GDPR, deployed a special task force to conduct comprehensive research on the chatbot.

Furthermore, members of the European Parliament have initiated the trilogue stage of the AI Act, a bill aimed at checking the operations of AI-based companies. According to the proposals in the bill, AI products will be classified according to their risk level, ranging from minimal and limited to high and unacceptable. This bill, if approved, will mandate that all AI companies disclose any copyrighted material employed in developing their products.

Via Bitcoinist.com, April 30, 2023.
The EDPB unveils its opinion
The European Data Protection Board (EDPB) gave, on 28 February, its opinion on the European Commission’s draft adequacy decision concerning the transfer of data to the United States. Following the invalidation in 2020 of the Privacy Shield agreement for contravening the General Data Protection Regulation (GDPR), U.S. President Joe Biden adopted, last October, a new legal framework to allow a…
The European Data Protection Board’s (EDPB) recent decision on Meta’s personalized ad practices might require social media companies and other online businesses to significantly revise their data-focused advertising business models if it is upheld by the European courts.
As I explained in a previous Brookings post, the EDPB’s decision is rooted in Article 6 of the European General Data Protection Regulation (GDPR), which requires companies to have a lawful basis for their data practices. GDPR’s three main criteria for lawfulness are service necessity, consent, and legitimate interests. The EDPB rejected Meta’s claim that targeted ads were necessary to provide social media services, ruling that the ads were useful for Meta but not strictly necessary to provide the service.
Meta could now claim as an alternative legal basis that its users consent to personalized ads but, under the EDPB’s guidelines, consent must be freely given and this would be true only if users could receive social media services without being exposed to personalized ads.
Meta could claim instead that it has a legitimate business interest in serving personalized ads to its social media users, and this has some support in GDPR’s Recital 47, which says that direct marketing is a legitimate interest. However, under the absolute right to object to direct marketing in Article 21 of GDPR, Meta would then have to offer its users a personalized-ad-free social media service.
In the long run, without a court victory overturning the EDPB’s decision, Meta and other online companies relying on personalized ads will need to change their data practices.
As called for by President Biden in his recent Wall Street Journal opinion piece and again in his State of the Union address, the United States Congress is renewing its bipartisan push for national privacy legislation with a hearing on March 1, 2023 before the House Energy and Commerce Committee’s Subcommittee on Innovation, Data, and Commerce. The goal, announced by Subcommittee leaders Gus Bilirakis (R-FL) and Jan Schakowsky (D-IL), is to get “a strong national standard across the finish line.”
U.S. policymakers seeking to establish new privacy law should consider carefully what lessons can be learned from a privacy regime that potentially has such a powerful impact on an established business practice in the name of protecting privacy. In this follow-up post, I’m going to argue that U.S. legislators should consider a modified version of the European approach of requiring a lawful basis for data processing as part of a new national privacy law. The wording of the U.S. version need not be the same as that in GDPR, but the key idea that companies must establish that their data practices satisfy one of several alternative standards of lawfulness—service necessity, consent, or legitimate interests—should be incorporated into U.S. law.
A Legal Basis Privacy Regime
A legal basis privacy regime sets out normative standards to determine whether a data practice is lawful. For GDPR, data processing is lawful when it is strictly necessary to provide a service, when the user has freely consented to it, or when it is necessary for the legitimate interests of the data processor or a third-party. The normative theory embedded in this approach is that data use is legitimate either because it preserves the autonomy of the user or because it serves the legitimate interests of the data processor or the public.
A legal basis regime, however, need not declare that certain enumerated concrete data practices are lawful. It does not have to say in statute, for instance, that it is lawful for companies to use personal data for information security purposes. Instead, it provides criteria for showing when specific data uses such as the use for information security would be considered lawful. Such a standard can be used by companies and regulators to treat a wide range of data practices as lawful.
An alternative approach would simply list approved or permissible uses without also incorporating a normative standard. This is embodied in perhaps the oldest privacy law in the United States, the Fair Credit Reporting Act from 1970. This law names specific data practices and it labels them as lawful. It allows consumer reporting agencies to collect and process personal information only for one of several listed permissible purposes including credit, insurance, and employment screening.
Last year’s American Data Privacy and Protection Act (ADPPA), which passed the House Energy and Commerce Committee with an impressive bipartisan vote of 52-3, also takes a list approach. It says companies “may not collect, process, or transfer” personal information unless “reasonably necessary and proportionate” to provide a requested service or fulfill one of a list of “permissible purposes.” These permissible purposes include authenticating users, providing security, preventing fraud or physical harm, scientific research, communicating with users or delivering user communications to others, and first party or targeted advertising. It names permissible data uses but, other than service necessity, it does not provide criteria whereby companies can demonstrate that other data uses are lawful.
The limitation in the enumeration approach is that it does not set out a standard of lawfulness, except that in some cases enumeration statutes allow the criterion of providing a service to serve as a standard of lawfulness. This means that if data use is not on the list of specific named practices and is not needed for providing a service, it is not allowed. Inevitably, such a static list will be underinclusive as business practices and technology evolve. Privacy law should incorporate some open-ended standard to allow the law to respond to innovative developments.
Legitimate Interests
The GDPR standard of legitimate interests is just such an open-ended standard. It says that any data processing whatsoever can be rendered lawful when it is “necessary for the purposes of the legitimate interests” pursued by a company. GDPR even creates a balancing test where data processing in pursuit of a company’s legitimate interests still would not be lawful when these interests are “overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data.”
The language of legitimate interests contained in GDPR is not the only way to create a flexible standard of lawfulness, rather than a static list of permissible uses. But it might be a good place for Congressional drafters to begin as they move forward with a new national privacy law.
A legitimate interests standard, or something similar, should be incorporated into privacy proposals such as ADPPA that currently rely on an enumeration approach. Of course, it would be prudent for privacy legislation to contain in addition a list of data processing activities that Congress finds to be lawful. Companies should not have to prove, for instance, that data use for fraud prevention is legitimate. A list of approved uses would provide needed legal certainty for some obvious and agreed-upon data uses to satisfy the requirement of being lawful. But there should still be an additional opportunity for businesses to demonstrate, subject to regulatory approval, that a particular data use not on the pre-approved list nevertheless satisfies a standard of lawfulness. The legal basis of legitimate interests would do this.
Consent
Some privacy advocates are wary of including consent as a sufficient legal basis for data processing. Several years ago, Cameron Kerry said in a Brookings report, “Maybe informed consent was practical two decades ago, but it is a fantasy today.” A recent report from the Annenberg School of Communication demonstrates, again, that choice as currently practiced in today’s online world fails utterly to protect privacy. This ineffectiveness of the current notice and choice regime has led many advocates to agree with privacy scholar Ari Waldman who says, “Consent, opt in or opt out, should never be part of any privacy law.”
Clearly, businesses have abused the consent basis for data use and have bombarded users with uninformative and intrusive notices. Some constraints must be put on the ability of businesses to hound their users with repeated requests to consent to information processing that is not needed to provide service.
It is important for a privacy statute to avoid overreliance on consent as the sole or most important way to make a data practice legitimate. But this should not mean abandoning consent entirely. A robust consent regime modeled after the GDPR’s is very different from the current U.S. notice and choice approach, which often relies on the weaker opt out form of choice. Consent under GDPR means “any freely given, specific, informed and unambiguous indication of the data subject’s wishes” and requires a “clear affirmative action” indicating agreement to the data collection and use. Importantly, refusing consent must be “without detriment” to the user, meaning the same service must be made available under the same terms and conditions if the user refuses consent, or the user must be offered reasonable incentives to induce consent. If data is really needed to provide the service, then the appropriate legal basis is service necessity and no form of choice, opt-in or opt-out is needed.
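One way to picture this robust standard is as a consent record that fails validation unless every GDPR criterion holds. A toy sketch in Python (the class and field names are invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    """One user's consent to one processing purpose. The fields mirror the
    GDPR criteria described above; the class itself is illustrative only."""
    purpose: str                       # "specific": one purpose per record
    informed: bool                     # the user was told what and why
    affirmative_action: bool           # explicit opt-in, not a pre-ticked box
    service_degraded_on_refusal: bool  # refusal must be "without detriment"

    def is_valid(self) -> bool:
        freely_given = not self.service_degraded_on_refusal
        return self.informed and self.affirmative_action and freely_given
```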
Consent can be a powerful way for consumers to block damaging data use. When Apple gave users a properly structured choice in connection with the ability of apps to track them for the purpose of advertising, users overwhelmingly responded that they did not want tracking.
ADPPA adopts this robust notion of consent and applies it at various points in the statute, for instance in requiring consent to transfer information pertaining to a child. But the lack of consent in ADPPA’s list of permissible data uses is significant and damaging. It means that a company may not justify its data processing, regardless of its purpose, on the grounds of genuine user consent. And it means that users would not be able to protect themselves from damaging data practices by refusing to consent to them.
The Privacy Regulator
Interpreting and enforcing such a legal basis privacy regime will require an alert, flexible and well-funded privacy regulator staffed with knowledgeable technologists and industry experts. The statute should provide as much guidance as possible to guide the agency in this task, but substantial discretion and rulemaking authority will be needed for the agency to meet the demands of the future. Without this institutional support for implementation and enforcement, a new privacy law would be merely performative, the theatrical impersonation of privacy protection but not the real thing.
The work that European data protection authorities have done in interpreting their own statutory text relating to the key conditions of contractual necessity and consent provides some factors that could be incorporated into a new U.S. privacy statute. The statutory text would make it clear that service necessity is to be interpreted as strictly necessary for the provision of the service and not as merely useful for business purposes, and that affirmative consent applies in those circumstances where a business wants to collect and use personal data over and above what is needed to provide the service.
Unfortunately, the European interpretation of the legitimate interests standard is so narrow that it effectively removes legitimate interests as a practical way for businesses to establish a legal basis for their data use. But the United Kingdom has produced an especially helpful report on using the legitimate interests standard. If Congress wants to further constrain regulatory discretion and ensure some consistency of interpretation as agency officials change, it could incorporate directly into the statute some of the factors that have emerged in the UK legitimate interests guidelines.
For instance, the statute could require companies to conduct an impact study if they seek to use the legal basis of legitimate interests and to file that study with the privacy regulator within a fixed period of time. The statute could require the company to take into account the purpose and the necessity of the data processing and to conduct a balancing assessment weighing the purpose against the privacy rights of the data subjects. The statute could further require that the balancing assessment should take into account the nature of the personal data, the reasonable expectations of the data subjects, the likely impact of the processing on the data subjects, and whether any safeguards can be put in place to mitigate negative impacts.
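Those statutory requirements could translate into a structured filing along these lines (an illustrative sketch only; the fields and the crude pass/fail rule are mine, not drawn from the UK guidelines):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LegitimateInterestAssessment:
    """A record of the balancing factors listed above: purpose, necessity,
    nature of the data, expectations, impact, and mitigating safeguards."""
    purpose: str
    necessary: bool                      # no realistic, less intrusive alternative
    nature_of_data: str                  # e.g. "contact details", "health data"
    within_reasonable_expectations: bool
    impact_on_subjects: str              # e.g. "minimal", "significant"
    safeguards: List[str] = field(default_factory=list)

    def passes_balancing_test(self) -> bool:
        # Crude stand-in for a regulator's judgment: necessity and reasonable
        # expectations must hold, and any non-minimal impact must be mitigated.
        mitigated = self.impact_on_subjects == "minimal" or bool(self.safeguards)
        return self.necessary and self.within_reasonable_expectations and mitigated
```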
The ADPPA requires the Federal Trade Commission to act as the nation’s digital privacy regulator with full funding, including the establishment of a separate Bureau of Privacy to implement the new law. Former Federal Communications Commission Chairman Tom Wheeler and his colleagues at the Shorenstein Center would create a separate Digital Platform Agency with authority over both competition and privacy, as would Harold Feld at Public Knowledge in his proposed Digital Platform Act and Senator Michael Bennet with his proposed Digital Platform Commission. I make the case for a digital regulator responsible for privacy, content moderation and competition policy in my forthcoming Brookings book on digital regulation.
Targeted Ads
The ADPPA treats targeted advertising as a permissible use of personal data, but it requires companies to offer users the ability to opt-out, a policy similar to the GDPR approach of treating direct marketing as a legitimate interest while giving consumers an absolute right to object to it.
But how to provide for such an opt-out from targeted ads is by no means obvious. A key issue will be the extent to which companies can offer incentives for allowing personalized ads. The California privacy law deals with a related issue of an opt out from data sharing by banning financial incentives for data sharing that are “unjust, unreasonable, coercive, or usurious in nature.” ADPPA might need to be clarified to provide for a similar standard in connection with its opt out from targeted ads, but the details cannot be incorporated in the statute itself. For businesses and consumers to understand which financial incentives are banned under such a statutory provision would require significant guidance from the enforcing privacy regulator.
Cameron Kerry and Mishaela Robison argue in a recent Brookings piece that the U.S. privacy legislation should clearly provide the implementing privacy agency with rulemaking authority to address this tangled targeted ad issue. This makes good sense.
In addition, there are similar issues of interpretation and implementation in connection with the legal bases of data processing. As the EDPB decision revealed, determining when data use is strictly necessary for providing a service requires privacy regulators to make informed and detailed decisions about business operations. Businesses and consumers need clarity on which data uses are strictly necessary for providing a service and which are merely useful. Ex ante rules might help provide this clarity. To deal with this and a host of similar issues, the new privacy statute should provide the agency with rulemaking authority to interpret and implement the legal standards of consent, contractual necessity, and legitimate interests.
Conclusion
As this week’s hearing on a national privacy standard indicates, Congress is again taking up the task of enacting a new national privacy law. It should build upon the solid foundation in the ADPPA and require companies to have a legal basis for data processing. ADPPA effectively contains the service necessity standard of lawfulness already. It is not too late to add language relating to consent and legitimate interests to the ADPPA as additional standards of lawfulness. Such measures would enshrine in law a workable framework for evaluating whether business practices violate user privacy rights and would also powerfully express a national commitment to the primacy of privacy.