#Data Security & GDPR
[Full Text] Emerging Media Companies, Tracking Cookies, and Data Privacy -- An open letter to Critical Role, Dropout, and fellow audience members
Summary / TL;DR
Both Critical Role (CR) and Dropout have begun exclusively using links provided by third-party digital marketing solution companies in their email newsletters.
Every link in each of the newsletters (even the unsubscribe link) goes through a third-party domain which is flagged as a tracking server by the uBlock Origin browser extension.
Third-party tracking cookies are strictly unnecessary and come with a wide array of risks, including non-consensual targeted advertising, targeted misinformation, doxxing, and the potential for abuse by law enforcement.
You are potentially putting your privacy at risk every time you click on any of the links in either of these newsletters.
IMO these advertising companies (and perhaps CR/Dropout by proxy) are likely breaking the law in the EU and California by violating the GDPR and CCPA respectively.
Even if Critical Role and Dropout are not directly selling or exploiting your personal data, they are still profiting off of it by contracting with, and receiving services from, companies who almost certainly are. The value of your personal data is priced into the cost of these services.
They should stop, and can do so without any loss of web functionality.
1/7. What is happening?
Critical Role and Dropout have begun exclusively using links provided by third-party digital marketing solution companies in their email newsletters.
[ID: A screenshot of the Dropout newsletter alongside the page’s HTML source which shows that the target destination for an anchor element in the email leads to d2xR2K04.na1.hubspotlinks.com. End ID.]
[ID: A screenshot of the CR newsletter alongside the page’s HTML source which shows that the target destination for an anchor element in the email leads to trk.klclick.com. End ID.]
The domains attached to these links are flagged as advertising trackers by the uBlock Origin browser extension.
[ID: Screenshot of a Firefox web browser. The page displays a large warning icon and reads “uBlock Origin has prevented the following page from loading [...] because of the following filter: `||hubspotlinks.com` found in Peter Lowe’s Ad and tracking server list. End ID.]
[ID: Screenshot of a Firefox web browser. The page displays a large warning icon and reads “uBlock Origin has prevented the following page from loading [...] because of the following filter: `||klclick1.com` found in Peter Lowe’s Ad and tracking server list. End ID.]
In both cases, every link in the newsletter goes through the flagged third-party domain, and the intended endpoint (Twitter, their store page, etc.) is completely obscured and inaccessible from within the email itself. Even the unsubscribe links feed through the tracking service.
You can test this yourself in your own email client by hovering your cursor over a link in the email without clicking it and watching to see what URL pops up. You may have noticed this yourself if you use uBlock Origin as an ad-blocker.
I don’t know for certain when this first started. It’s possible that this has been going on for a year or more at this point, or it may have started just a few months ago. Either way: it ought to stop.
2/7. What is a tracking cookie?
A tracking cookie is a unique, universally identifiable value placed on your machine by somebody with the intention of checking for that value later to identify you (or at least to identify your machine).
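As a toy illustration of that definition (all domains, names, and behaviour here are made up; real trackers are far more elaborate), the key property is that the same identifier comes back on every site that embeds the tracker:

```python
import uuid

# The browser's cookie jar, keyed by the domain that set each cookie.
cookie_jar = {}

def visit(site, tracker="trk.example-ads.com"):
    """Simulate loading a page on `site` that embeds a third-party tracker."""
    if tracker not in cookie_jar:
        # First sighting: the tracker assigns this machine a unique ID.
        cookie_jar[tracker] = {"visitor_id": uuid.uuid4().hex}
    # The tracker can now log: "this visitor ID was seen on this site".
    return cookie_jar[tracker]["visitor_id"]

id_on_store = visit("store.example.com")
id_on_news = visit("news.example.com")

# Same ID on both sites: the tracker links the two visits even
# though the two sites share nothing with each other.
assert id_on_store == id_on_news
```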
Tracking cookies are used by companies to create advertising behaviour profiles. These profiles are supposedly anonymous, but even if the marketing companies creating them are not lying about that (a tough sell for me personally, but your mileage may vary when it comes to corporations with 9-figure annual incomes), the data can often be de-anonymized.
If this happens, the data can be used to identify the associated user, potentially including their full name, email address, phone number, and physical address—all of which may then be associated with things like their shopping habits, hobbies, preferences, the identities of their friends and family, gender, political opinions, job history, credit score, sexuality, and even when they ovulate.
Now, it is important to note that not all cookies are tracking cookies. A cookie is just some data from a web page that persists on your machine and gets sent back to the server that put it there. Cookies in general are not necessarily malicious or harmful, and are often essential to certain web features functioning correctly (e.g. keeping the user logged in on their web browser after they close the tab). But the thing to keep in mind is that a domain has absolute control over the information that has been stored on your computer by that domain, so allowing cookies is a matter of trusting the specific domain that wants to put them there. You can look at the outgoing information being sent from your machine, but its purpose cannot be determined without knowing what is being done with it on the other side, and these marketing companies ought not to have the benefit of your doubt when they have already been flagged by privacy watchdogs.
3/7. What’s the harm?
Most urgently, as I touched on above: The main source of harm is from corporations profiting off of your private data without your informed consent. However, targeted advertising is actually the least potentially harmful outcome of tracking cookies.
1/6. Data brokers
A data broker is an individual or company that specializes in collecting personal data (such as personal income, ethnicity, political beliefs, geolocation data, etc.) and selling or licensing such information to third parties for a variety of uses, such as background checks conducted by employers and landlords, two universally benevolent groups of people.
There are varying regulations around the world limiting the collection of information on individuals, and the State of California passed a law attempting to address this problem in 2018, following in the footsteps of the EU’s GDPR, but in the jurisdiction of the United States there is no federal regulation protecting consumers from data brokers. In fact, due to the rising interest in federal regulation, data broker firms lobbied to the tune of $29 million in the year 2020 alone.
2/6. De-anonymization techniques
Data re-identification or de-anonymization is the practice of combining datasets (such as advertising profiles) and publicly available information (such as scraped data from social media profiles) in order to discover patterns that may reveal the identities of some or all members of a dataset otherwise intended to be anonymous.
Using 1990 census data, Latanya Sweeney, Professor of the Practice of Government and Technology at the Harvard Kennedy School, found that up to 87% of the U.S. population can be uniquely identified using a combination of their 5-digit ZIP code, gender, and date of birth. [Link to the paper.]
Individuals whose datasets are re-identified are at risk of having their private information sold to organizations without their knowledge or consent. Once an individual’s privacy has been breached as a result of re-identification, future breaches become much easier: as soon as a link is made between one piece of data and a person’s real identity, that person is no longer anonymous and is at far greater risk of having their data from other sources similarly compromised.
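A toy sketch of such a linkage attack (all data fabricated): a dataset stripped of names but not of quasi-identifiers can be re-identified by joining it with a public record on those same fields:

```python
# An "anonymized" record set: names removed, quasi-identifiers kept.
anonymous_rows = [
    {"zip": "02138", "gender": "F", "dob": "1960-07-15", "diagnosis": "X"},
    {"zip": "90210", "gender": "M", "dob": "1985-01-02", "diagnosis": "Y"},
]

# A public record (e.g. a scraped social media profile) sharing those fields.
public_rows = [
    {"name": "Jane Doe", "zip": "02138", "gender": "F", "dob": "1960-07-15"},
]

def reidentify(anon, public):
    """Link rows that share the (zip, gender, dob) quasi-identifier."""
    key = lambda r: (r["zip"], r["gender"], r["dob"])
    names = {key(p): p["name"] for p in public}
    return [{**a, "name": names[key(a)]} for a in anon if key(a) in names]

matches = reidentify(anonymous_rows, public_rows)
# The diagnosis-X row is now attached to a real name.
```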
3/6. Doxxing
Once your data has been de-anonymized, you are significantly more vulnerable to all manner of malicious activity: from scam calls and emails to identity theft to doxxing. This is of particular concern for members of minority groups who may be targeted by hate-motivated attacks.
4/6. Potential for abuse by government and law enforcement
Excerpt from “How period tracking apps and data privacy fit into a post-Roe v. Wade climate” by Rina Torchinsky for NPR:
Millions of people use apps to help track their menstrual cycles. Flo, which bills itself as the most popular period and cycle tracking app, has amassed 43 million active users. Another app, Clue, claims 12 million monthly active users. The personal health data stored in these apps is among the most intimate types of information a person can share. And it can also be telling. The apps can show when their period stops and starts and when a pregnancy stops and starts. That has privacy experts on edge because this data—whether subpoenaed or sold to a third party—could be used to suggest that someone has had or is considering an abortion. ‘We're very concerned in a lot of advocacy spaces about what happens when private corporations or the government can gain access to deeply sensitive data about people’s lives and activities,’ says Lydia X. Z. Brown, a policy counsel with the Privacy and Data Project at the Center for Democracy and Technology. ‘Especially when that data could put people in vulnerable and marginalized communities at risk for actual harm.’
Obviously Critical Role and Dropout are not collecting any sort of data related to their users’ menstrual cycles, but the thing to keep in mind is that any data that is exposed to third parties can be sold and distributed without your knowledge or consent and then be used by disinterested—or outright malicious—actors to de-anonymize your data from other sources, including potentially highly compromising data such as that collected by these period-tracking apps. Data privacy violations have compounding dangers, and should be proactively addressed wherever possible. The more of your personal data exists in the hands of third parties, the more likely it is to be de-anonymized.
5/6. Targeted misinformation
Data brokers are often incredibly unscrupulous actors, and will sell your data to whomever can afford to buy it, no questions asked. The most high-profile case of the consequences of this is the Facebook—Cambridge Analytica data scandal, wherein the personal data of Facebook users were acquired by Cambridge Analytica Ltd. and compiled alongside information collected from other data brokers. By giving this third-party app permission to acquire their data back in 2015, Meta (then Facebook) also gave the app access to information on their users’ friend networks: this resulted in the data of some 87 million users being collected and exploited.
The data collected by Cambridge Analytica was widely used by political strategists to influence elections and, by and large, undermine democracy around the world: While its parent company SCL had been influencing elections in developing countries for decades, Cambridge Analytica focused more on the United Kingdom and the United States. CEO Alexander Nix said the organization was involved in 44 American political races in 2014. In 2016, they worked for Donald Trump’s presidential campaign as well as for Leave.EU, one of the organisations campaigning for the United Kingdom to leave the European Union.
6/6. The Crux: Right to Privacy Violations
Even if all of the above were not concerns, every internet user should object to being arbitrarily tracked on the basis of their right to privacy. Companies should not be entitled to create and profit from personality profiles about you just because you purchased unrelated products or services from them. This right to user privacy is the central motivation behind laws like the EU’s GDPR and California’s CCPA (see Section 6).
4/7. Refuting Common Responses
1/3. “Why are you so upset? This isn’t a big deal.”
Commenter: Oh, if you’re just talking about third party cookies, that’s not a big deal … Adding a cookie to store that ‘this user clicked on a marketing email from critical role’ is hardly [worth worrying about].
Me: I don’t think you understand what tracking cookies are. They are the digital equivalent of you going to a drive through and someone from the restaurant running out of the store and sticking a GPS monitor onto your car.
Commenter: Kind of. It’s more like slapping a bumper sticker on that says, in restaurant-ese, ‘Hi I’m [name] and I went to [restaurant] once!’
This is actually an accurate correction. My metaphor was admittedly overly simplistic, but the correction specifies only so far as is comfortable for the commenter. If we want to construct a metaphor that is as accurate as possible, it would go something like this:
You drive into the McDonald’s parking lot. As you are pulling in, unbeknownst to you, a Strange Man pops out of a nearby bush (that McDonald’s has allowed him to place here deliberately for this express purpose), and sticks an invisible bumper sticker onto the back of your car. The bumper sticker is a tracker that tells the Strange Man which road you took to drive to McDonald’s, what kind of car you drive, and what (if anything) you ordered from McDonald’s while you were inside. It might also tell him where you parked in the parking lot, what music you were listening to in your car on the way in, which items you looked at on the menu and for how long, if you went to the washroom, which washroom you went into, how long you were in the washroom, and the exact location of every step you took inside the building.
Now, as soon as you leave the McDonald’s, the bumper sticker goes silent and stops being able to report information. But, let’s say next week you decide to go to the Grocery Store, and (again, unbeknownst to you), the Strange Man also has a deal with the Grocery Store. So as you’re driving into the grocery store’s parking lot, he pops out of another bush and goes to put another bumper sticker onto your car. But as he’s doing so, he notices the bumper sticker he’s already placed there a week ago that only he can see (unless you’ve done the car-equivalent of clearing your browser cache), and goes “ah, it’s Consumer #1287499290! I’ll make sure to file all of this new data under my records for Consumer #1287499290!”
You get out of your car and start to walk into the Grocery Store, but before you open the door, the Strange Man whispers to the Grocery Store: “Hey, I know you’re really trying to push your cereal right now, want me to make it more likely that this person buys some cereal from you?” and of course the Grocery Store agrees—this was the whole reason they let him set up that weird parking lot bush in the first place.
So the Strange Man runs around the store rearranging shelves. He doesn’t know your name (all the data he collects is strictly anonymous after all!) but he does know that you chose the cutesy toy for your happy meal at McDonald’s, so he changes all of the cereal packaging labels in the store to be pastel-coloured and covered in fluffy bears and unicorns. And maybe you were already going to the Grocery Store to buy cereal, and maybe you’re actually very happy to buy some cereal in a package that seems to cater specifically to your interests, but wouldn’t you feel at least a little violated if you found out that this whole process occurred without your knowledge? Especially if you felt like you could trust the people who owned the Grocery Store? They’re not really your friend or anything, but maybe you thought that they were compassionate and responsible members of the community, and part of the reason that you shopped at their store was to support that kind of business.
2/3. “Everyone does it, get over it.”
Commenter: [The marketing company working with CR] is an industry standard at this point, particularly for small businesses. Major partner of Shopify, a fairly big player. If you don't have a software development team, using industry standard solutions like these is the easy, safe option.
This sounds reasonable, but it actually makes it worse, not better, that Critical Role and Dropout are doing this. All this excuse tells me is that most businesses using Shopify (or at least the majority of those that use its recommended newsletter service) have a bush for the Strange Man set up in their parking lot.
Contracting with these businesses is certainly the easy option, but it is decidedly not the safe one.
3/3. “They need to do it for marketing reasons.”
Commenter 1: Email marketing tools like [this] use tracking to measure open and click rates. I get why you don’t want to be tracked, but it’s very hard to run a sizeable email newsletter without any user data.
Commenter 2: I work in digital marketing … every single email you get from a company has something similar to this. Guaranteed. This looks totally standard.
I am a web programmer by trade. It is my full time job. Tracking the metrics that Critical Role and Dropout are most likely interested in does not require embedding third-party tracking cookies in their fans’ web browsers. If you feel comfortable taking my word on that, feel free to skip the next section. If you’re skeptical (or if you just want to learn a little bit about how the internet works) please read on.
5/7. Tracking cookies are never necessary
We live in a technocracy. We live in a world in which technology design dictates the rules we live by. We don’t know these people, we didn’t vote for them in office, there was no debate about their design. But yet, the rules that they determine by the design decisions they make—many of them somewhat arbitrary—end up dictating how we will live our lives. —Latanya Sweeney
1/3. Definitions
A website is a combination of 2 computer programs. One of the two programs runs on your computer (laptop/desktop/phone/etc.) and the other runs on another computer somewhere in the world. The program running on your computer is the client program. The program running on the other computer is the server program.
A message sent from the client to the server is a request. A message sent from the server to the client is a response.
Cookies are bits of data that the server sends to the client in a response that the client then sends back to the server as an attachment to its subsequent requests.
A session is a series of sequential interactions between a client and server. When either of the two programs stops running (e.g. when you close a browser tab), the session is ended and any future interactions will take place in a new session.
A URL is a Uniform Resource Locator. You may also sometimes see the initialism URI—in which the ‘I’ stands for Identifier—but they effectively refer to the same thing, which is the place to find a specific thing on the internet. For our purposes, a “link” and a URL mean the same thing.
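The cookie mechanism described above is visible directly in HTTP headers. Here is a minimal sketch using Python's standard library (the cookie name and value are made up):

```python
from http.cookies import SimpleCookie

# Server side: attach a cookie to the response.
response = SimpleCookie()
response["session_id"] = "abc123"
set_cookie_header = response.output(header="Set-Cookie:")
print(set_cookie_header)  # prints: Set-Cookie: session_id=abc123

# Client side: on every subsequent request to that server, the
# browser echoes the stored value back in a Cookie header.
request = SimpleCookie("session_id=abc123")
print(request["session_id"].value)  # prints: abc123
```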
2/3. What do Critical Role and Dropout want?
These media companies (in my best estimation) are contracting with the digital advertising companies in order to get one or more of the following things:
Customer identity verification (between sessions)
Marketing campaign analytics
Customer preference profiles
Customer behaviour profiles
3/3. How can they get these things without tracking cookies?
Accounts. Dropout has an account system already. Since Beacon is a thing now, I have to assume Critical Role does as well, so this is literally already something they can do without any additional parties getting involved.
URL Query Parameters. Suppose you want to know which of your social media feeds is driving the most traffic to your storefront. You could contract a third-party advertising company to do this for you, but as we have seen this might not be the ideal option. Instead, when posting your links to those feeds, attach a little bit of extra text to the end of the URL, so that (for example) `example.com/store` becomes `example.com/store?src=twitter` or `example.com/store?src=tumblr`. These extra bits of information at the end of a URL are query parameters, and act as a way for the client to specify some instructions for the server when sending a request. In effect, a URL with query parameters allows the client to say to the server “I want this thing under these conditions”. The benefit of this approach is, of course, that you actually know precisely what information is being collected (the stuff in the parameters) and precisely what is being done with it, and you’ve avoided exposing any of your user data to third parties.
Internal data collection. Optionally associate a user’s email address with their preferences on the site. Prompt them to do this whenever they purchase anything or do any action that might benefit from having some saved preference, informing them explicitly when you do so and giving them the opportunity to opt-out.
Internal data collection. The same as above, but let the user know you are also tracking their movements while on your site. You can directly track user behaviour down to every single mouse movement if you really want to—again, no need to get an outside party involved to snoop on your fans. But you shouldn’t do that because it’s a little creepy!
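To make the query-parameter option concrete, here is a minimal sketch of the server-side half (the URL and the `utm_source` parameter name are illustrative conventions, not anything these companies necessarily use):

```python
from urllib.parse import urlparse, parse_qs

# A hypothetical storefront link as posted to a social feed, tagged
# with the feed it was posted to via a query parameter.
url = "https://shop.example.com/sale?utm_source=twitter"

params = parse_qs(urlparse(url).query)
source = params.get("utm_source", ["direct"])[0]
print(source)  # prints: twitter
```

The server sees exactly one extra datum (the feed name it put there itself), and no third party ever handles the click.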
At the end of the day, it will of course be more work to set up and maintain these things, and thus it will inevitably be more expensive—but that discrepancy in expense represents profit that these companies are currently making on the basis of violating their fans’ right to privacy.
6/7. Breaking the Law
The data subject shall have the right to object, on grounds relating to his or her particular situation, at any time to processing of personal data concerning him or her [...] The controller shall no longer process the personal data unless the controller demonstrates compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject or for the establishment, exercise or defence of legal claims. Where personal data are processed for direct marketing purposes, the data subject shall have the right to object at any time to processing of personal data concerning him or her for such marketing, which includes profiling to the extent that it is related to such direct marketing. Where the data subject objects to processing for direct marketing purposes, the personal data shall no longer be processed for such purposes. At the latest at the time of the first communication with the data subject, the right referred to in paragraphs 1 and 2 shall be explicitly brought to the attention of the data subject and shall be presented clearly and separately from any other information. — General Data Protection Regulation, Art. 21
Nobody wants to break the law and be caught. I am not accusing anyone of anything and this is just my personal speculation on publicly-available information. I am not a lawyer; I merely make computer go beep-boop. If you have any factual corrections for this or any other section in this document please leave a comment and I will update the text with a revision note. Before I try my hand at the legal-adjacent stuff, allow me to wade in with the tech stuff.
Cookies are sometimes good and sometimes bad. Cookies from someone you trust are usually good. Cookies from someone you don’t know are occasionally bad. But you can take proactive measures against bad cookies. You should always default to denying any cookies that go beyond the “essential” or “functional” categorizations on any website of which you are remotely suspicious. Deny as many cookies as possible. Pay attention to what the cookie pop-ups actually say and don’t just click on the highlighted button: it is usually “Accept All”, which means that tracking and advertising cookies are fair game from the moment you click that button onward. It is illegal for companies to arbitrarily provide you a worse service for opting out of being tracked (at least it is in the EU and California).
It is my opinion (and again, I am not a legal professional, just a web developer, so take this with a grain of salt) that the links included in the newsletter emails violate both of these laws. If a user of the email newsletter residing in California or the EU wishes to visit any of the links included in said email without being tracked, they have no way of doing so. None of the actual endpoints are available in the email, effectively forcing the user to go through the third-party domain and submit themselves to being tracked in order to utilize the service they have signed up for. Furthermore, it is impossible to unsubscribe directly from within the email without also submitting to the third-party tracking.
[ID: A screenshot of the unsubscribe button in the CR newsletter alongside the page HTML which shows that the target destination for the anchor element is a trk.klclick.com page. End ID.]
As a brief aside: Opening the links in a private/incognito window is a good idea, but will not completely prevent your actions from being tracked by the advertiser. My recommendation: install uBlock Origin to warn you of tracking domains (it is a completely free and open-source project available on most major web browsers), and do not click on any links in either of these newsletters until they change their practices.
Now, it may be the case that the newsletters are shipped differently to those residing in California or the EU (if you are from either of these regions please feel free to leave a comment on whether or not this is the case), but ask yourself: does that make this any better? Sure, maybe then Critical Role and Dropout (or rather, the advertising companies they contract with) aren’t technically breaking the law, but it shows that the only thing stopping them from exploiting your personal data is potential legal repercussions, rather than any sort of commitment to your right to privacy. But I expect that the emails are not, in fact, shipping any differently in jurisdictions with more advanced privacy legislation—it wouldn’t be the first time a major tech giant blatantly flouted EU regulations.
Without an additional browser extension such as uBlock Origin, a user clicking on the links in these emails may not even be aware that they have interacted with the advertising agency at all, let alone what sort of information that agency now has pertaining to them, nor do they have any ability to opt out of this data collection.
For more information about your right to privacy—something that only those living in the EU or California currently have—you can read explanations of the legislations at the following links (take note that these links, and all of the links embedded in this paper, are anchored directly to the destinations they purport to be, and do not sneakily pass through an additional domain before redirecting you):
7/7. Conclusion
Never attribute to malice that which can be adequately explained by neglect, ignorance or incompetence. —Hanlon’s Razor
The important thing to make clear here is this: Even if Critical Role and Dropout are not directly selling or exploiting your personal data, they are still profiting off of it by contracting with, and receiving services from, companies who I believe are. You may not believe me.
I do not believe that the management teams at Critical Role and Dropout are evil or malicious. Ignorance seems to be the most likely cause of this situation. Someone at some marketing company told them that this type of thing was helpful, necessary, and an industry standard, and they had no reason to doubt that person’s word. Maybe that person had no reason to doubt the word of the person who told them. Maybe there are a few people in that chain, maybe quite a few. I do not expect everyone running a company to be an expert in this stuff (hell, I’m nowhere close to being an expert in this stuff myself—I only happened to notice this at all because of a browser extension I just happened to have installed to block ads), but what I do expect is that they change their behaviour when the potential harms of their actions have been pointed out to them, which is why I have taken the time to write this.
PS. To the employees of Critical Role and Dropout
It is my understanding that these corporations were both founded with the intention of being socially responsible alongside turning a profit. By using services like the ones described above, you are, however unintentionally, profiting off of the personal datasets of your fans that are being compiled and exploited without their informed consent. You cannot say, implicitly or explicitly, “We’re not like those other evil companies! We care about more than just extracting as much money from our customers as possible!” while at the same time utilizing these services, and it is my hope that after reading this you will make the responsible choice and stop doing so.
Thank you for reading,
era
Originally Published: 23 May 2024
Last Updated: 28 May 2024
#critical role#dimension 20#dropout#dropout tv#brennan lee mulligan#sam reich#critical role campaign 3#cr3#midst podcast#candela obscura#make some noise#game changer#smarty pants#very important people#web security#data privacy#gdpr#ccpa#open letter

In this article, we’ll explore what GDPR is, why it’s essential for businesses to comply, and how AI can help with data privacy protection and GDPR compliance. Learn More...
#cloud technology#ai data privacy#ip phones#unified communications#hotel hospitality#VoIP#ip telephony#hotel phone system#voip solutions#GDPR#Protection#Data Security#EU Regulation#Technology News#phonesuite pbx#hotel phone installation#technology#hotel pbx
When everything reverts back, you will need skills. Not just a Hollywood plot. Go watch the film Space Cowboys with the tech-fail-and-restore POV in mind.
I've finally figured out an argument that convinces coding tech-bros that AI art is bad.
Got into a discussion today (actually a discussion, we were both very reasonable and calm even though I felt like committing violence) with a tech-bro-coded lady who claimed that people use AI in coding all the time so she didn't see why it mattered if people used AI in art.
Obviously I repressed the surge of violence because that would accomplish nothing. Plus, this lady is very articulate, the type who makes claims and you sit there thinking no that's wrong it must be but she said it so well you're kind of just waffling going but, no, wait-- so I knew I had to get this right if I was gonna come out of this unscathed.
The usual arguments about it being about the soul of it and creation fell flat, in fact she was adamant that anyone who believed that was in fact looking down at coding as an art form as she insisted it is. Which, sure, you can totally express yourself through coding. There's a lot more nuance as to the differences but clearly I was not going to win this one.
The other people I was with (literally 8 people anti-ai against her, but you can't change the mind of someone who doesn't want to listen and she just kept accusing us of devaluing coding as an art) took over for I kid you not 15 minutes while I tried desperately to come up with a clear and articulate way to explain the difference to her. They tried so many reasonable arguments, coding being for a function ("what, art doesn't serve a function?") coding being many discrete building blocks that you put together differently, and the AI simply provides the blocks and you put it together yourself ("isn't that what prompt building is") that it's bad for the environment ("but not if it's used for capitalism, hm?" "Yeah literally that's how capitalism works it doesn't care about the environment" she didn't like that response)
But I finally got it.
And the answer is: It's not about what you do, it's about what you claim to be.
Imagine that someone asks an AI to write a code and, by some miracle, it works perfectly without them having to tweak it---which is great because they couldn't tell you what a single solitary thing in that code means.
Now imagine this person, with their code that they don't know how it works, goes and applies to be a coder somewhere, presenting this AI code as proof that they're qualified.
Should they be hired?
She was horrified, of course. Of course they shouldn't be. They're not qualified. They can't actually code, and even if by some miracle they did have an AI successfully write a flawless code for every issue they came across that wouldn't be their code, you could hire any shmuck on the street to do that, no reason to pay someone like they're creating something.
When actual engineers use AI, what they do is get some kind of base, which they then go through to check for problems, fixing any they find, and they add onto the base code with their own knowledge instead of just trying prompt after prompt until they randomly come across one that works.
People who generate code like this don't usually call themselves engineers. They're people who needed a bit of code and didn't have the knowledge to generate it, and so used a resource.
And there you go. There are people who have none of the skills of artists: they don't practice, they don't create for themselves. When they feed the prompt to the AI, they don't use the resulting image as a reference point for their own personal masterpiece, and if they don't like the result they don't have the skills to change it; they simply try another prompt, and do that until they get something they like.
These people are calling themselves artists.
Not only that, these people are bringing the AI-generated thing to interviews, and they are getting hired, leaving people who slave over their craft out of a job.
And that is the difference, for the tech bros who think AI art isn't a big deal.
#sham skills#is that how hackers can get in ?#security of data#gdpr breaches#coding#turning it off#film space cowboys#javascript#remember
18K notes
·
View notes
Text
UAE Government Cybersecurity: Compliance & Protection
The United Arab Emirates is one of the fastest-growing digital economies in the world. From smart cities to paperless governance and AI integration — the UAE is betting big on technology. But with innovation comes cyber risk. In response, the UAE government launched its National Cybersecurity Strategy (NCS) to secure the digital transformation.
In this blog, we decode the UAE Cybersecurity Strategy for 2024–2025, explore what it means for businesses, and outline how you can align your organization with national goals.
2. What Is the UAE Cybersecurity Strategy?
The National Cybersecurity Strategy is a government-wide framework designed to:
Build a secure and resilient cyber environment
Protect critical digital infrastructure
Enhance national and economic security
Promote trust in digital services
It is overseen by the UAE Cybersecurity Council, which was established in 2020 and reports directly to federal authorities.
3. Key Objectives of the 2024–2025 Strategy
The updated strategy, launched in late 2023, outlines five strategic pillars:
1. Cybersecurity Governance & Policy
Introduce a unified cybersecurity legal framework
Ensure coordination between federal and emirate-level agencies
Standardize cybersecurity compliance across sectors
2. National Cyber Resilience
Protect Critical Information Infrastructure (CII)
Improve response to large-scale cyber attacks
Establish Sectoral CSIRTs (Cybersecurity Incident Response Teams)
3. Cybersecurity Innovation & Research
Support local development of cybersecurity tools and platforms
Establish national bug bounty programs
Fund academic research in AI-powered security
4. Cybersecurity Workforce Development
Train 50,000 cybersecurity professionals by 2026
Create certification programs and skill standards
Encourage women and youth participation in cybersecurity
5. International Collaboration
Build partnerships with global cybersecurity agencies
Harmonize cross-border data protection and cyber laws
Participate in global incident response exercises
4. How This Strategy Impacts UAE Businesses
Whether you’re a fintech startup, logistics company, or a real estate giant — the strategy directly affects your digital operations.
Here’s how:
✅ Mandatory Compliance Requirements
Sectors like finance, healthcare, telecom, and government contractors will need to meet updated regulations related to data protection and incident reporting.
✅ Vendor Risk Oversight
You will be required to vet third-party vendors for cybersecurity compliance — especially cloud providers and payment platforms.
✅ Employee Training Expectations
Internal awareness and cybersecurity training will be expected — not just optional.
✅ Incident Response Reporting
Organizations must report certain cyber incidents within defined timeframes, similar to the Personal Data Protection Law (PDPL).
5. Real-Life Application: The Dubai Smart City Push
As Dubai rolls out 5G-powered smart infrastructure, the Cybersecurity Strategy mandates that all government and semi-government entities integrate security-by-design models in their digital transformation.
This means any business working on IoT devices, AI applications, or smart services must meet minimum cybersecurity benchmarks to qualify for contracts.
6. How to Align Your Business with the Strategy
Here’s a roadmap for proactive alignment:
✅ Step 1: Understand Your Risk Profile
Identify your digital assets, data categories, and critical business processes.
✅ Step 2: Conduct a Cybersecurity Gap Assessment
Compare your existing cybersecurity practices with expected controls outlined by the UAE Cybersecurity Council.
✅ Step 3: Build a Governance Framework
Appoint cybersecurity leads, develop internal policies, and prepare a documented incident response plan.
✅ Step 4: Invest in Security Technology
Adopt solutions for:
Email security
Network segmentation
Endpoint protection
Cloud security posture management
✅ Step 5: Educate and Train Your Teams
Train all employees on phishing, password hygiene, and incident reporting. Run mock drills.
✅ Step 6: Partner with a Cybersecurity Advisor
Consult with firms like Centre Systems Group to stay updated, implement controls, and conduct penetration testing.
7. How Centre Systems Group Can Help
At Centre Systems Group, we provide end-to-end support to align your operations with the UAE’s 2025 Cybersecurity Strategy.
Our services include:
✅ Security policy development
✅ Cyber risk audits and ISO 27001 alignment
✅ Employee awareness training programs
✅ Cloud security consulting
✅ Managed detection and response (MDR)
✅ PDPL and NESA compliance implementation
We combine local market understanding with global best practices to help you meet every regulatory and security requirement confidently.
Cybersecurity in the UAE is no longer just about protecting data — it’s about enabling a secure digital future. The UAE’s Cybersecurity Strategy 2024–2025 reflects the country’s ambition to lead in digital innovation without compromising national security.
For businesses, this is both a challenge and an opportunity. Those who act early can gain compliance, client trust, and a competitive edge.
📩 Partner with Centre Systems Group to future-proof your business against emerging cyber risks — and stay aligned with the UAE’s strategic vision.
Source Url: https://centresystemsgroup.net/blog/understanding-the-uae-cybersecurity-strategy-2024-2025/
0 notes
Text
From Risk to Resilience: Building Confidence Through Data Governance
Pilots don’t take off without a full system check. Airlines don’t fly without a clear, tested plan to handle emergencies.
So why would your organization operate without strong data governance?
Backups are like a black box—valuable only after a failure occurs. Governance, on the other hand, acts as your checklist, co-pilot, and emergency plan all in one.
In today’s complex digital world, data disruptions are not rare—they’re routine. And most incidents aren’t caused by cyberattacks but by misconfigurations, system failures, or human mistakes.
True resilience isn’t about reacting—it’s about anticipating. And that begins with governance.
PiLog’s Data Governance Solution is designed for the challenges of tomorrow, available today. It integrates real-time governance seamlessly with SAP S/4HANA via BTP, while embedding GDPR-compliant policies at every stage.
This solution doesn’t just bring clarity—it delivers operational certainty.
Its governance framework builds resilience throughout the entire data lifecycle by:
Preventing failures before they occur
Detecting issues in real time
Enabling fast, complete, and compliant recovery when needed
Ask yourself:
Where are your backups?
Are they secure?
Have they been thoroughly stress-tested?
Who is responsible if recovery fails?
PiLog’s Data Governance turns these critical questions into clear answers. It transforms chaos into clarity, and reactive firefighting into proactive readiness.
0 notes
Text
A Guide to Dark Patterns
The use of dark patterns has become an increasingly popular practice online. Harry Brignull, who coined the term ‘dark patterns’, defined them as practices that “trick or manipulate users into making choices they would not otherwise have made and that may cause harm.”
The growing popularity of dark patterns has naturally attracted the attention of regulatory bodies. The U.S. Federal Trade Commission has even stated that it has been “ramping up enforcement” against companies that employ dark patterns, as can be seen with the $520 million in fines imposed on Fortnite creator Epic Games. In the EU, the fines have been piling up against violating companies, with TikTok and Facebook facing a €5 million fine and a €60 million fine respectively, both imposed by the French DPA, the CNIL.
The FTC, in its endeavor to combat the use of dark patterns and protect consumer privacy, even conducted a workshop and released a report on the topic in 2022, titled Bringing Dark Patterns to Light. Samuel Levine, Director of the FTC’s Bureau of Consumer Protection stated, “This report—and our cases—send a clear message that these traps will not be tolerated.” FTC Report Shows Rise in Sophisticated Dark Patterns Designed to Trick and Trap Consumers | Federal Trade Commission
With increasing state privacy regulations in the U.S. and the growing strength of enforcement around dark patterns, it is important to know how dark patterns are addressed under existing regulations.
Two significant definitions and regulations of dark patterns within the US come from the California Consumer Privacy Act (CCPA) and the Colorado Privacy Act (CPA). Both of these state privacy regulations address dark patterns in detail, as outlined below.
Under the CCPA, § 7004. Requirements for Methods for Submitting CCPA Requests and Obtaining Consumer Consent.
Except as expressly allowed by the CCPA and these regulations, businesses shall design and implement methods for submitting CCPA requests and obtaining consumer consent that incorporate the following principles:
“Easy to understand”: The language used should be simple and easy to read and comprehend.
“Symmetry in choice”: It should be as easy and quick to exercise a more privacy-protective option as it is to exercise a less privacy-protective option. - The process to submit a request to opt out of the sale/sharing of personal information should not require more time than the process to opt in to the sale of PI after having previously opted out. - “Yes” and “Ask me later” as the ONLY two options when asking to opt in to the sale is not appropriate; “Ask me later” does not mean the consumer has denied the request, only delayed it, and the business will continue asking the consumer to opt in. “Yes” and “No” are the options that should be provided. - “Accept All” and “Preferences”, or “Accept All” and “More information”, as choices for consenting to the use of consumers’ personal information are not appropriate, since consent can be given in one step while additional steps are required to exercise other rights. “Accept All” and “Decline All” should be used.
Confusing language should be avoided; consumers’ choices should be clearly presented, and double negatives should not be used. - Confusing options, such as the choice of “Yes” or “No” next to the statement “Do Not Sell or Share My Personal Information”, are a double negative and should be avoided. - “On” or “Off” toggles or buttons may require further clarification. - If the options are first presented in the order “Yes” then “No”, they should not later be switched to “No” then “Yes”, as this is unintuitive and confusing.
The design and architecture should not impair the consumer’s ability to make choices. Consent should be “freely given, specific, informed and unambiguous”. - Consumers should not be made to click through disruptive screens before submitting an opt-out request. - The option to consent to using PI for purposes that meet the requirements should not be combined with the option to consent to using PI for purposes that are “incompatible with the context”. For example, a business whose services use location data, such as a mobile app that delivers food to users’ locations, should not bundle consent to “incompatible uses” (such as the sale of geolocation data to third parties) with the “reasonably necessary and proportionate” use of geolocation data needed for the app’s services; that would force consumers to consent to “incompatible uses” in order to use the app’s expected services.
“Easy to execute”: The process to submit a CCPA request should be straightforward, simple, and functional; a consumer’s choice to submit a request should not be undermined. - Consumers should not have to search extensively through a privacy policy or webpage for the option to submit an opt-out request after clicking the “Do Not Sell or Share My Personal Information” link. - Businesses that know of faults such as broken links, non-functional email addresses, or unmonitored inboxes should fix them promptly. - Consumers should not have to wait unnecessarily on a webpage while the request is being processed.
Consent obtained through dark patterns (the practices described above) is not valid consent; it can be treated as if consumer consent was never obtained.
If a user interface unintentionally impairs the user’s choice and the business, knowing this, does not remedy the issue, it could be considered a dark pattern. “Deliberate ignorance” of faulty, impairing designs may itself be considered a dark pattern.
Under the CPA, Rule 7.09 USER INTERFACE DESIGN, CHOICE ARCHITECTURE, AND DARK PATTERNS
There should be symmetry in the presentation of choices; no one option should be given more prominence than the others. - All options should use the same font, size, and style; “I accept” in a larger size or a brighter, more attention-grabbing colour than “I do not accept” is not considered symmetrical. - All choices should be equally easy to accept or reject; the option to “Accept all” to consent to the use of sensitive data should not be presented without an equally prominent option to “Reject all”.
Manipulative language and/or visuals that coerce or steer consumers’ choices should be avoided. - Consumers should not be guilted or shamed into any choice; “I accept, I care about the planet” versus “I reject, I don’t care about the planet” can be considered emotionally manipulative. - “Gratuitous information to emotionally manipulate consumers” should be avoided; stating that a mobile application “promotes animal welfare” when asking for consent to collect sensitive data for targeted advertising can be considered “deceptively emotionally manipulative” if the data collection is not actually critical to promoting animal welfare.
Silence, or a “failure to take an affirmative action”, is not equal to consent or acceptance. - Closing a consent request pop-up window without first affirmatively making a choice cannot be interpreted as consent. - Continuing to browse the webpage without affirmatively responding to the consent banner cannot be interpreted as consent. - Merely using a smart device, without verbal consent such as “I accept” or “I consent”, cannot be considered affirmative consent.
Avoid preselected or default options - Checkboxes or radio buttons cannot be preselected.
It should be as easy and quick to exercise a more privacy-protective option as it is to exercise a less privacy-protective option; there should be an equal number of steps for all options. - All choices should be presented at the same time; “I accept” and “Learn more” as the two choices imposes a greater number of steps on the latter and is an unnecessary restriction. - However, preceding both the “I accept” and “I do not accept” buttons with a “Select preferences” button would not be considered an unnecessary restriction.
Consent requests should not unnecessarily interrupt a consumer’s activity on the website, application, or product. - Repeated consent requests should not appear after the consumer has declined consent. - Unless consent to process data is strictly necessary, consumers should not be redirected away from their activity for declining consent. - Multiple inconvenient consent request pop-ups should be avoided once consent has been declined.
“Misleading statements, omissions, affirmative misstatements, or intentionally confusing language to obtain Consent” should be avoided. - A false sense of urgency, such as a ticking clock on the consent request, should be avoided. - Double negatives should be avoided in the consent request. - Confusing language should be avoided, such as “Please do not check this box if you wish to Consent to this data use”. - Illogical choices, like the options “Yes” or “No” to the question “Do you wish to provide or decline Consent for the described purposes”, should be avoided.
Target audience factors and characteristics should be considered - Simplicity of language should be considered for websites or services whose target audience is under the age of 18 - Larger text size, spacing, and readability should be considered for websites or services whose target audience is elderly people.
User interface design and consent choice architecture should be similar when accessed through digital accessibility tools - The same number of steps to exercise consent should be provided whether the website is accessed with a digital accessibility tool or without.
The Virginia Consumer Data Protection Act (VCDPA), by contrast, does not specifically address dark patterns. However, the Virginia Attorney General’s office could, in theory, use the law’s definition of consent to challenge their use.
The Connecticut Data Privacy Act, CTDPA, defines a Dark Pattern as “a user interface designed or manipulated with the substantial effect of subverting or impairing user autonomy, decision-making or choice”. Any practice the Federal Trade Commission (FTC) deems a "dark pattern" is also included in this definition. It is important to note that Consent obtained through the use of a Dark Pattern is not deemed valid under the CTDPA.
These are some examples of US state regulations around dark patterns. Be sure to review all state privacy laws that affect your business.
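Several of the CCPA and CPA requirements above are mechanical enough to lint for automatically. The sketch below is a hypothetical audit of a consent-dialog configuration; the function, field names, and heuristics are illustrative inventions, not taken from either regulation or from any real compliance tool.

```python
# Hypothetical lint-style audit of a consent dialog configuration.
# Flags a few dark-pattern heuristics drawn from CCPA §7004 and CPA Rule 7.09:
# asymmetric choices, preselected defaults, unequal steps, double negatives.

def audit_consent_dialog(dialog: dict) -> list[str]:
    findings = []
    options = dialog.get("options", [])
    labels = [o["label"].lower() for o in options]

    # Symmetry: an accept choice must be paired with an equally direct decline.
    has_accept = any("accept" in lab or lab == "yes" for lab in labels)
    has_decline = any("decline" in lab or "reject" in lab or lab == "no" for lab in labels)
    if has_accept and not has_decline:
        findings.append("asymmetric choices: accept offered without a direct decline")

    # Defaults: checkboxes and radio buttons must not be preselected.
    if any(o.get("preselected") for o in options):
        findings.append("preselected option: consent must be an affirmative act")

    # Steps: every choice should take the same number of clicks to exercise.
    if len({o.get("steps", 1) for o in options}) > 1:
        findings.append("unequal steps: one choice takes more clicks than another")

    # Wording: double negatives make the choice ambiguous.
    prompt = dialog.get("prompt", "").lower()
    if "do not" in prompt and any(lab in ("yes", "no") for lab in labels):
        findings.append("possible double negative: yes/no next to a 'do not' prompt")

    return findings


banner = {
    "prompt": "Do Not Sell or Share My Personal Information",
    "options": [
        {"label": "Yes", "steps": 1},
        {"label": "Ask me later", "steps": 1, "preselected": True},
    ],
}
# Flags asymmetry, a preselected default, and a double negative.
for finding in audit_consent_dialog(banner):
    print(finding)
```

A real audit would of course also need to inspect rendered prominence (font size, colour) and redirect behaviour, which a static config check like this cannot see.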
For more insights and updates on privacy and data governance, visit Meru Data | Data Privacy & Information Governance Solutions
#data privacy#data security#ai privacy#gdpr#ccpa#artificial intelligence#data protection#internet privacy#dark patterns#cybersecurity#data mapping
0 notes
Text
Why Using AI with Sensitive Business Data Can Be Risky
The AI Boom: Awesome… But Also Dangerous AI is everywhere now — helping with customer service, writing emails, analyzing trends. It sounds like a dream come true. But if you’re using it with private or regulated data (like health info, financials, or client records), there’s a real risk of breaking the rules — and getting into trouble. We’ve seen small businesses get excited about AI… until…
#AI#AI ethics#AI risks#AI security#artificial intelligence#business data#CCPA#compliance#cybersecurity#data governance#data privacy#GDPR#hipaa#informed consent#privacy by design#regulatory compliance#SMB
0 notes
Text
So after years of testing incognito browsers I have determined this:
Browsing in Chrome ends up in my ads. Browsing in Safari does not.
What’s your experience?
#data privacy#internet privacy#online privacy#privacymatters#gdpr#gdprcompliance#cyber security#data security#chrome#Google#safari
0 notes
Text
Are You Ready for GDPR Compliance in 2025?
Data privacy isn’t just a regulation; it’s a responsibility. As technology advances and hybrid work becomes the norm, safeguarding visitor data has never been more critical.
Discover how to ensure GDPR compliance in your Visitor Management Systems with actionable insights, emerging trends, and strategies to stay ahead.
Read the full article here: https://www.linkedin.com/.../gdpr-compliance-visitor...
Let’s shape a future where privacy meets innovation.
#GDPR#compliance#visitor access management#visitor management#visitor privacy#visitor data#data privacy#gdprcompliance#data protection#data security
1 note
·
View note
Text
I've seen a number of people worried and concerned about this language on Ao3's current "agree to these terms of service" page. The short version is:
Don't worry. This isn't anything bad. Checking that box just means you forgive them for being US American.
Long version: This text makes perfect sense if you're familiar with the issues around GDPR and in particular the uncertainty about Privacy Shield and SCCs after Schrems II. But I suspect most people aren't, so let's get into it, with the caveat that this is a Eurocentric (and in particular EU centric) view of this.
The basic outline is that Europeans in the EU have a right to privacy under the EU's General Data Protection Regulation (GDPR), an EU regulation (let's simplify things and call it an EU law) that regulates how various entities, including companies and the government, may acquire, store and process data about you.
The list of what counts as data about you is enormous. It includes things like your name and birthday, but also your email address, your computer's IP address, user names, whatever. If an advertiser could want it, it's on the list.
The general rule is that they can't, unless you give explicit permission, or it's for one of a number of enumerated reasons (not all of which are as clear as would be desirable, but that's another topic). You have a right to request a copy of the data, you have a right to force them to delete their data and so on. It's not quite on the level of constitutional rights, but it is a pretty big deal.
In contrast, the US, home of most of the world's internet companies, has no such right at a federal level. If someone has your data, it is fundamentally theirs. American police, FBI, CIA and so on also have far more rights to request your data than the ones in Europe.
So how can an American website provide services to persons in the EU? Well… Honestly, there's an argument to be made that they can't.
US websites can promise in their terms and conditions that they will keep your data as safe as a European site would. In fact, they have to, unless they start specifically excluding Europeans. The EU even provides Standard Contract Clauses (SCCs) that they can use for this.
However, e.g. Facebook's T&Cs can't bind the US government. Facebook can't promise that it'll keep your data as secure as it is in the EU even if they wanted to (which they absolutely don't), because the US government can get to it easily, and EU citizens can't even sue the US government over it.
Despite the importance that US companies have in Europe, this is not a theoretical concern at all. There have been two successive international agreements between the US and the EU about this, and both were struck down by the EU court as being in violation of EU law, in the Schrems I and Schrems II decisions (named after Max Schrems, an Austrian privacy activist who sued in both cases).
A third international agreement is currently being prepared, and in the meantime the previous agreement (known as "Privacy Shield") remains tentatively in place. The problem is that the US government does not want to offer EU citizens equivalent protection as they have under EU law; they don't even want to offer US citizens these protections. They just love spying on foreigners too much. The previous agreements tried to hide that under flowery language, but couldn't actually solve it. It's unclear and in my opinion unlikely that they'll manage to get a version that survives judicial review this time. Max Schrems is waiting.
So what is a site like Ao3 to do? They're arguably not part of the problem, Max Schrems keeps suing Meta, not the OTW, but they are subject to the rules because they process stuff like your email address.
Their solution is this checkbox. You agree that they can process your data even though they're in the US, and they can't guarantee you that the US government won't spy on you in ways that would be illegal for the government of e.g. Belgium. Is that legal under EU law? …probably as legal as fan fiction in general, I suppose, which is to say let's hope nobody sues to try and find out.
But what's important is that nothing changed, just the language. Ao3 has always stored your user name and email address on servers in the US, subject to whatever the FBI, CIA, NSA and FRA may want to do with it. They're just making it more clear now.
10K notes
·
View notes
Text
Overcome KSA Cloud Migration Security & Compliance Issues
The Kingdom of Saudi Arabia (KSA) is undergoing a significant digital transformation fueled by initiatives like Vision 2030. As businesses in the region embrace innovation, cloud migration has emerged as a key driver for scalability, cost-efficiency, and enhanced service delivery. However, transitioning to the cloud is not without challenges. Organizations in KSA must navigate complex security, compliance, and operational hurdles to achieve a seamless migration process.
This article explores the major challenges associated with cloud migration in KSA and provides actionable strategies to overcome them, ensuring security and compliance.
1. Cloud Adoption Trends in KSA
Cloud adoption in KSA is accelerating across various industries, including finance, healthcare, and government. This growth is driven by the need to modernize infrastructure, optimize costs, and meet the demands of a rapidly evolving digital landscape.
Key Drivers of Cloud Adoption in KSA:
Vision 2030: The Saudi government’s Vision 2030 initiative emphasizes technology-driven economic diversification, encouraging organizations to adopt advanced technologies like cloud computing.
Increased Investments: Global cloud providers, such as AWS, Microsoft Azure, and Google Cloud, are establishing data centers in KSA to cater to local demands.
Remote Work and Digital Services: The COVID-19 pandemic accelerated the adoption of cloud solutions to support remote work and digital transformation.
Market Growth Statistics:
The cloud computing market in KSA is projected to grow at a compound annual growth rate (CAGR) of 25% between 2021 and 2026.
Sectors like banking, healthcare, and education are leading in cloud adoption due to their reliance on secure and scalable IT solutions.
2. Major Challenges in Cloud Migration
While the benefits of cloud migration are undeniable, organizations in KSA face several challenges that must be addressed for a successful transition.
a. Security Concerns
Security remains one of the top challenges during cloud migration. Organizations are vulnerable to threats such as:
Data Breaches: Sensitive data can be exposed during the migration process due to improper handling or insufficient encryption.
Misconfigurations: Errors in cloud configurations can create entry points for cyberattacks.
Insider Threats: Unauthorized access by employees or third-party vendors can compromise data integrity.
b. Compliance with Regulatory Requirements
KSA has a stringent regulatory environment to protect data privacy and national security.
SAMA Cyber Security Framework: Financial institutions must comply with the Saudi Arabian Monetary Authority’s (SAMA) guidelines to ensure secure operations.
Data Sovereignty: Organizations must store and process sensitive data within the country’s borders, adhering to local laws.
Cross-Border Data Transfer Restrictions: Global companies operating in KSA face challenges in transferring data across borders while complying with local and international regulations like GDPR.
c. Lack of Skilled Workforce
A shortage of skilled cloud professionals is a significant barrier to seamless migration. Organizations often struggle to find talent with expertise in:
Cloud security and compliance.
Cloud-native application development.
Advanced tools for monitoring and optimization.
d. Migration Downtime and Business Disruption
Cloud migration can disrupt normal business operations, leading to:
Service interruptions that affect customer experience.
Delayed processes and productivity losses.
Increased pressure on IT teams to resolve technical issues quickly.
e. Cost Management
While cloud adoption is cost-efficient in the long term, migration itself can incur:
High upfront costs for infrastructure upgrades.
Unexpected expenses due to extended timelines or unplanned resources.
Overruns caused by a lack of clear budgeting.
3. Best Practices for Overcoming Cloud Migration Challenges
Organizations in KSA can overcome these challenges by adopting best practices tailored to their unique operational and regulatory needs.
a. Conduct a Comprehensive Assessment
Start with a detailed evaluation of your current IT infrastructure to identify:
Workloads suitable for cloud migration.
Security vulnerabilities and areas requiring improvement.
Estimated costs and timelines for migration.
Tools: Use cloud readiness assessment tools like AWS Migration Evaluator or Microsoft Azure’s Total Cost of Ownership (TCO) calculator.
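Before reaching for vendor tooling, the assessment above can start as a simple scoring rubric over your workload inventory. The sketch below is entirely illustrative; the attributes, weights, and cutoffs are invented for the example and would need tailoring to your environment and to KSA data-sovereignty constraints.

```python
# Toy migration-readiness rubric: score each workload on a few attributes
# and bucket it into a suggested migration phase. Attributes, weights, and
# cutoffs are illustrative only, not from any assessment framework.

def readiness_score(workload: dict) -> int:
    score = 0
    score += 2 if workload.get("stateless") else 0                 # easy to rehost
    score += 2 if not workload.get("data_sovereignty_bound") else 0  # no local-only data
    score += 1 if workload.get("documented") else 0                # known behaviour
    return score

def suggested_phase(workload: dict) -> str:
    s = readiness_score(workload)
    if s >= 4:
        return "phase 1: migrate early"
    if s >= 2:
        return "phase 2: migrate after remediation"
    return "phase 3: keep on-premises for now"

crm = {"stateless": True, "data_sovereignty_bound": False, "documented": True}
print(suggested_phase(crm))  # phase 1: migrate early
```

The point is not the specific numbers but having an explicit, reviewable rule for which workloads go first, which is what makes the phased migration in the next step defensible.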
b. Develop a Clear Migration Strategy
A well-structured migration plan minimizes risks and ensures a smooth transition.
Phased Migration: Migrate workloads in phases to reduce disruption.
Prioritization: Focus on mission-critical applications first, followed by less critical ones.
Backup Plans: Maintain on-premises backups during migration to prevent data loss.
c. Ensure Compliance with KSA Regulations
Work closely with legal and compliance experts to meet regional requirements:
Align with the SAMA Cyber Security Framework for financial institutions.
Use local data centers to address data sovereignty concerns.
Consult with compliance specialists to navigate cross-border data transfer restrictions.
d. Leverage Advanced Security Solutions
Implement robust security measures to protect data during migration:
Encryption: Use end-to-end encryption for data in transit and at rest.
Identity and Access Management (IAM): Restrict access to authorized personnel only.
Zero-Trust Architecture: Adopt a “never trust, always verify” approach to access control.
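The "never trust, always verify" approach amounts to deny-by-default policy evaluation. The sketch below is an illustrative toy evaluator in that spirit, not any cloud provider's IAM API: a request is denied unless an explicit allow matches, and an explicit deny always overrides an allow. The policy entries are invented examples.

```python
# Illustrative deny-by-default policy evaluator in the spirit of zero trust.
# A request is denied unless an explicit allow matches, and an explicit deny
# always overrides an allow. Policies use glob-style patterns via fnmatch.

from fnmatch import fnmatch

POLICIES = [
    {"effect": "allow", "principal": "migration-svc", "action": "storage:read",  "resource": "bucket/kpi-*"},
    {"effect": "allow", "principal": "migration-svc", "action": "storage:write", "resource": "bucket/kpi-*"},
    {"effect": "deny",  "principal": "*",             "action": "storage:write", "resource": "bucket/kpi-prod"},
]

def is_allowed(principal: str, action: str, resource: str) -> bool:
    matches = [
        p for p in POLICIES
        if fnmatch(principal, p["principal"])
        and fnmatch(action, p["action"])
        and fnmatch(resource, p["resource"])
    ]
    if any(p["effect"] == "deny" for p in matches):
        return False                                      # explicit deny wins
    return any(p["effect"] == "allow" for p in matches)   # otherwise deny by default
```

For example, `is_allowed("migration-svc", "storage:write", "bucket/kpi-prod")` is denied despite the matching allow, because the explicit deny on the production bucket takes precedence, and any principal not named in a policy gets nothing at all.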
e. Train and Upskill IT Teams
Invest in training programs to bridge skill gaps:
Encourage certifications in cloud platforms like AWS, Azure, and Google Cloud.
Conduct workshops on compliance and security best practices.
Provide hands-on experience with cloud migration tools.
f. Partner with Trusted Cloud Service Providers
Choose cloud providers with a strong presence in KSA:
Evaluate their compliance readiness and security measures.
Ensure they offer support for local regulations like data sovereignty.
Leverage managed services to reduce the burden on in-house IT teams.
g. Monitor and Optimize Continuously
After migration, focus on optimizing performance and costs:
Use monitoring tools to detect vulnerabilities.
Regularly review cloud usage and adjust resources to avoid overspending.
Perform audits to ensure ongoing compliance.
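The continuous review above can begin as something as simple as comparing observed utilization against provisioned capacity. This hypothetical right-sizing check (the field names and the 20% threshold are made up for illustration) flags resources whose low average utilization suggests overspending.

```python
# Hypothetical right-sizing check: flag resources whose average utilization
# falls below a threshold, suggesting they are over-provisioned.
# Field names and the 20% threshold are illustrative only.

def flag_underutilized(resources, threshold=0.20):
    flagged = []
    for r in resources:
        samples = r["cpu_utilization"]          # e.g. hourly averages, 0.0-1.0
        avg = sum(samples) / len(samples)
        if avg < threshold:
            flagged.append((r["name"], round(avg, 2)))
    return flagged


inventory = [
    {"name": "app-server-1", "cpu_utilization": [0.55, 0.60, 0.48]},
    {"name": "batch-runner", "cpu_utilization": [0.05, 0.02, 0.08]},
]
print(flag_underutilized(inventory))  # batch-runner averages ~5%
```

In practice the samples would come from your cloud provider's monitoring API, and the same loop shape works for memory, storage, and spend-per-service reviews.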
Conclusion
Cloud migration presents immense opportunities for businesses in KSA, but it also comes with challenges that require careful planning and execution. By addressing security concerns, complying with regulatory frameworks, and leveraging advanced tools and practices, organizations can achieve a smooth and secure transition to the cloud.
At Centre Systems Group, we specialize in providing end-to-end cloud migration services tailored to the unique needs of KSA businesses. Contact us today to ensure a secure and compliant journey to the cloud.
Source Url: https://centresystemsgroup.net/blog/cloud-migration-challenges-in-ksa-overcoming-security-and-compliance-issues/
Text
How to Manage Your Digital Footprint
In today’s interconnected world, our online presence is more significant than ever. Every click, share, and post contributes to our digital footprint, shaping how we are perceived both personally and professionally. This comprehensive guide on how to manage your digital footprint aims to equip you with the knowledge and tools necessary to take control of your online identity, ensuring it reflects…
Video
Data Privacy and Protection
Text
AI and Data Privacy in the Insurance Industry: What You Need to Know
The insurance industry is no stranger to the requirements and challenges that come with data privacy and usage. By nature, those in insurance deal with large amounts of Personal Information (PI): names, phone numbers, Social Security numbers, financial information, health information, and so forth. This widespread use of multiple categories of PI by insurance companies demands that measures be taken to prioritize individuals’ privacy.
Further, in recent months, landmark cases of privacy violations and misuse of AI technology involving financial institutions have alarmed the insurance industry. While there is no doubt that embracing new technology like AI is a requirement to stay profitable and competitive going forward, let’s look at three main considerations for those in the insurance industry to keep in mind when it comes to data privacy: recent landmark cases, applicable regulations and AI governance.
Recent noteworthy cases of privacy violations and AI misuse
One important way to understand the actions of enforcement agencies and anticipate changes in the regulatory landscape is to look at other similar and noteworthy cases of enforcement. In recent months, two cases have stood out.
First is the case of General Motors (G.M.), an American automotive manufacturing company. Investigations by journalist Kashmir Hill found that vehicles made by G.M. were collecting its customers’ data without their knowledge and sharing it with data brokers like LexisNexis, a company that maintains a “Risk Solutions” division catering to the auto insurance industry, which keeps tabs on car accidents and tickets. The data collected and shared by G.M. included detailed driving habits of its customers that influenced their insurance premiums. When questioned, G.M. confirmed that certain information was shared with LexisNexis and other data brokers.
Another significant case is that of health insurance giant Cigna using computer algorithms to reject patient claims en masse. Investigations found that the algorithm spent an average of merely 1.2 seconds on each review, rejecting over 300,000 payment claims in just 2 months in 2023. A class-action lawsuit was filed in federal court in Sacramento, California.
Applicable Regulations and Guidelines
The Gramm-Leach-Bliley Act (GLBA) is a U.S. federal regulation focused on reforming and modernizing the financial services industry. One of the key objectives of this law is consumer protection. It requires financial institutions offering consumers loan services, financial or investment advice, and/or insurance, to fully explain their information-sharing practices to their customers. Such institutions must develop and give notice of their privacy policies to their own customers at least annually.
The ‘Financial Privacy Rule’ under this law requires financial institutions to give customers and consumers the right to opt out of having their information shared with non-affiliated third parties before any such sharing takes place. Further, financial institutions are required to develop and maintain appropriate data security measures.
This law also prohibits pretexting, which is the act of tricking or manipulating an individual into providing non-public information. Under this law, a person may not obtain or attempt to obtain customer information about another person by making a false or fictitious statement or representation to an officer or employee. The GLBA also prohibits a person from knowingly using forged, counterfeit, or fraudulently obtained documents to obtain consumer information.
National Association of Insurance Commissioners (NAIC) Model Bulletin: Use of Artificial Intelligence Systems by Insurers
The NAIC adopted the bulletin in December 2023 as an initial regulatory effort to understand and gain insight into the technology. It outlines guidelines that include governance, risk management and internal controls, and controls regarding the acquisition and/or use of third-party AI systems and data. According to the bulletin, insurers are required to develop and maintain a written program for the responsible use of AI systems. Currently, seven states have adopted these guidelines: Alaska, Connecticut, New Hampshire, Illinois, Vermont, Nevada, and Rhode Island, with others expected to follow suit.
It is important to note that the NAIC outlined the use of AI by insurers in its Strategic Priorities for 2024, which include adopting the Model Bulletin, proposing a framework for monitoring third-party data and predictive models, completing the development of the Cybersecurity Event Response Plan, and enhancing consumer data privacy through the Privacy Protections Working Group.
State privacy laws and financial institutions
In the U.S., there are over 15 individual state privacy laws. Some have only recently been introduced, some go into effect in the next two years, and others like the California Consumer Privacy Act (CCPA) and Virginia Consumer Data Protection Act (VCDPA) are already in effect. Here is where some confusion exists. Most state privacy regulations such as Virginia, Connecticut, Utah, Tennessee, Montana, Florida, Texas, Iowa, and Indiana provide entity exemptions to financial institutions. This means that as regulated entities, these businesses fall outside the scope of these state laws. In other words, if entities are regulated by the GLBA then they are exempt from the above-mentioned state regulations.
Some states, like California and Oregon, have data-level exemptions for consumer financial data regulated by the GLBA. For example, under the CCPA, Personal Information (PI) not subject to the GLBA would fall under the scope of the CCPA. Further, under the CCPA, financial institutions are not exempt from its privacy right-of-action concerning data breaches.
As for the Oregon Consumer Privacy Act (OCPA), only 'financial institutions,' as defined under §706.008 of the Oregon Revised Statutes (ORS), are subject to a full exemption. This definition of a ‘financial institution’ is narrower than that defined by the GLBA. This means that consumer information collected, sold and processed in compliance with the GLBA may still not be exempt under the OCPA. We can expect other states with upcoming privacy laws to have their own takes on how financial institutions’ data is regulated.
AI and Insurance
Developments in Artificial Intelligence (AI) technology have been a game changer for the insurance industry. Generative AI can ingest vast amounts of information and determine the contextual relationship between words and data points. With AI, insurers can automate insurance claims and enhance fraud detection, both of which require the use of PI by AI models. Undoubtedly, the integration of AI has multiple benefits, including enabling precise predictions, handling customer interactions, and increasing accuracy and speed overall. In fact, a recent report by KPMG found that insurance CEOs are actively utilizing AI technology to modernize their organizations, increase efficiency, and streamline their processes. This includes not only claims and fraud detection but also general business uses such as HR, hiring, marketing, and sales, each likely using different models with their own types of data and PI.
However, the insurance industry’s understanding of Generative AI-related risk is still in its infancy. According to Aon's Global Risk Management Survey, AI is likely to become a top-20 risk in the next three years. As Sandeep Dani, Senior Risk Management Leader at KPMG, Canada, puts it: “The Chief Risk Officer now has one of the toughest roles, if not THE toughest role, in an insurance organization.”
In the race to maximize the benefits of AI, consumers’ data privacy cannot take a backseat, especially when it comes to PI and sensitive information. As of 2024, there is no federal AI law in the U.S., and we are only starting to see statewide AI regulations like the Colorado AI Act and the Utah AI Policy Act. Waiting around for regulations is not an effective approach. Instead, proactive AI governance measures can act as a key competitive differentiator for companies, especially in an industry like insurance where consumer trust is a key component.
Here are some things to keep in mind when integrating AI:
Transparency is key: Consumers need to have visibility over their data being used by AI models, including what AI models are being used, how these models use their data, and the purposes for doing so. Especially in the case of insurance, where the outcome of AI models has serious implications, consumers need to be kept in the loop about their data.
Taking Inventory: To ensure accuracy and quality of outputs, it is important to take inventory of and understand the AI systems, the training data sources, the nature of the training data, inputs and outputs, and the other components in play to gain an understanding of the potential threats and risks.
Performing Risk Assessments: Different laws consider different activities as high-risk. For example, biometric identification and surveillance is considered high-risk under the EU AI Act but not under the NIST AI Risk Management Framework. As new AI laws are introduced in the U.S., we can expect the risk-based approach to be adopted by many. Here, it becomes important to understand the jurisdiction, and the kind of data in question, then categorize and rank risks accordingly.
Regular audits and monitoring: Internal reviews will have to be maintained to monitor and evaluate AI systems for errors, issues, and biases in the pre-deployment stage. Regular AI audits will also need to be conducted to check for accuracy, robustness, fairness, and compliance. Post-deployment audits and assessments are just as important to ensure that the systems are functioning as required, and ongoing monitoring helps identify emerging risks or those that may have been missed previously. It is beneficial to assign responsibility to team members to oversee risk management efforts.
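The inventory and risk-assessment steps above can be combined into a simple triage pass over a model inventory. A deliberately simplified sketch: the activity categories and rankings below are illustrative only, not drawn from any statute or framework.

```python
# Hypothetical set of activities a given jurisdiction might treat as
# high-risk (e.g. the EU AI Act flags biometric identification).
HIGH_RISK_ACTIVITIES = {
    "biometric_identification",
    "claims_adjudication",
    "credit_scoring",
}

def triage(system: dict) -> str:
    """Rank an AI system entry from a model inventory as 'high' or
    'standard' risk based on its declared activity and use of PI."""
    if system["activity"] in HIGH_RISK_ACTIVITIES or system["uses_pi"]:
        return "high"
    return "standard"

inventory = [
    {"name": "fraud-detector", "activity": "claims_adjudication", "uses_pi": True},
    {"name": "doc-summarizer", "activity": "internal_search", "uses_pi": False},
]

for system in inventory:
    print(system["name"], triage(system))  # fraud-detector high / doc-summarizer standard
```

A real program would rank risk per jurisdiction (the same activity may be high-risk under the EU AI Act but not under the NIST AI RMF) and route high-risk systems to the deeper audits described above.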
Conclusion
People care about their data and their privacy, and for insurance consumers and customers, trust is paramount. Explainability is the term commonly used for making clear what an AI system is meant to do and how it reaches its outputs. Fostering explainability when governing AI helps stakeholders make informed decisions while protecting privacy, confidentiality, and security. Consumers and customers need to trust the data collection and sharing practices and the AI systems involved. That requires transparency, so they may understand those practices, how their data gets used, the AI systems, and how those systems reach their decisions.
About Us
Meru Data designs, implements, and maintains data strategy across several industries, based on their specific requirements. Our combination of best-in-class data mapping and automated reporting technology, along with decades of expertise in training, data management, AI governance, and law, gives Meru Data the unique advantage of being able to help insurance organizations secure, manage, and monetize their data while preserving customer trust and regulatory compliance.
Text
Data Protection: Legal Safeguards for Your Business
In today’s digital age, data is the lifeblood of most businesses. Customer information, financial records, and intellectual property – all this valuable data resides within your systems. However, with this digital wealth comes a significant responsibility: protecting it from unauthorized access, misuse, or loss. Data breaches can have devastating consequences, damaging your reputation, incurring…
Text
Learning the Tech Toolbox: Step-by-step route to create an online course
Honing, honing, honing. The choice of tools and processes is overwhelming. I know what my objective is, but have to keep intense focus to determine a path that works for me. Platforms to perform various functions keep popping up like mushrooms, and as I explore them, barriers pop up too, such as: You can only record a presentation in Chrome Canva studio Today, I have explored how to…