#protect privacy in big data
Explore tagged Tumblr posts
vastedge330 · 9 months ago
Text
https://www.bloglovin.com/@vastedge/how-to-protect-your-privacy-in-a-big-data
In a world dominated by big data, safeguarding your privacy is crucial. This guide offers practical steps and solutions to protect personal and corporate data, with a focus on compliance, encryption, and best practices for data security.
0 notes
mostlysignssomeportents · 4 months ago
Text
Ad-tech targeting is an existential threat
Tumblr media
I'm on a 20+ city book tour for my new novel PICKS AND SHOVELS. Catch me in TORONTO on SUNDAY (Feb 23) at Another Story Books, and in NYC on WEDNESDAY (26 Feb) with JOHN HODGMAN. More tour dates here.
Tumblr media
The commercial surveillance industry is almost totally unregulated. Data brokers, ad-tech, and everyone in between – they harvest, store, analyze, sell and rent every intimate, sensitive, potentially compromising fact about your life.
Late last year, I testified at a Consumer Financial Protection Bureau hearing about a proposed new rule to kill off data brokers, who are the lynchpin of the industry:
https://pluralistic.net/2023/08/16/the-second-best-time-is-now/#the-point-of-a-system-is-what-it-does
The other witnesses were fascinating – and chilling. There was a lawyer from the AARP who explained how data-brokers would let you target ads to categories like "seniors with dementia." Then there was someone from the Pentagon, discussing how anyone could do an ad-buy targeting "people enlisted in the armed forces who have gambling problems." Sure, I thought, and you don't even need these explicit categories: if you served an ad to "people 25-40 with Ivy League/Big Ten law or political science degrees within 5 miles of Congress," you could serve an ad with a malicious payload to every Congressional staffer.
Now, that's just the data brokers. The real action is in ad-tech, a sector dominated by two giant companies, Meta and Google. These companies claim that they are better than the unregulated data-broker cowboys at the bottom of the food-chain. They say they're responsible wielders of unregulated monopoly surveillance power. Reader, they are not.
Meta has been repeatedly caught offering ad-targeting like "depressed teenagers" (great for your next incel recruiting drive):
https://www.technologyreview.com/2017/05/01/105987/is-facebook-targeting-ads-at-sad-teens/
And Google? They just keep on getting caught with both hands in the creepy commercial surveillance cookie-jar. Today, Wired's Dell Cameron and Dhruv Mehrotra report on a way to use Google to target people with chronic illnesses, people in financial distress, and national security "decision makers":
https://www.wired.com/story/google-dv360-banned-audience-segments-national-security/
Google doesn't offer these categories itself; it just allows data-brokers to assemble them and offer them for sale via Google. Just as it's possible to generate a target of "Congressional staffers" by using location and education data, it's possible to target people with chronic illnesses based on things like whether they regularly travel to clinics that treat HIV, asthma, chronic pain, etc.
Google claims that this violates their policies, and that they have best-of-breed technical measures to prevent this from happening, but when Wired asked how this data-broker was able to sell these audiences – including people in menopause, or with "chronic pain, fibromyalgia, psoriasis, arthritis, high cholesterol, and hypertension" – Google did not reply.
The data broker in the report also sold access to people based on which medications they took (including Ambien), people who abuse opioids or are recovering from opioid addiction, people with endocrine disorders, and "contractors with access to restricted US defense-related technologies."
It's easy to see how these categories could enable blackmail, spear-phishing, scams, malvertising, and many other crimes that threaten individuals, groups, and the nation as a whole. The US Office of Naval Intelligence has already published details of how "anonymous" people targeted by ads can be identified:
https://www.odni.gov/files/ODNI/documents/assessments/ODNI-Declassified-Report-on-CAI-January2022.pdf
The most amazing part is how the 33,000 targeting segments came to public light: an activist just pretended to be an ad buyer, and the data-broker sent him the whole package, no questions asked. Johnny Ryan is a brilliant Irish privacy activist with the Irish Council for Civil Liberties. He created a fake data analytics website for a company that wasn't registered anywhere, then sent out a sales query to a brokerage (the brokerage isn't identified in the piece, to prevent bad actors from using it to attack targeted categories of people).
Foreign states, including China – a favorite boogeyman of the US national security establishment – can buy Google's data and target users through Google's ad-tech stack. In the past, Chinese spies have used malvertising – serving targeted ads loaded with malware – to attack their adversaries. Chinese firms spend billions every year to target ads to Americans:
https://www.nytimes.com/2024/03/06/business/google-meta-temu-shein.html
Google and Meta have no meaningful checks to prevent anyone from establishing a shell company that buys and targets ads with their services, and the data-brokers that feed into those services are even less well-protected against fraud and other malicious acts.
All of this is only possible because Congress has failed to act on privacy since 1988. That's the year that Congress passed the Video Privacy Protection Act, which bans video store clerks from telling the newspapers which VHS cassettes you have at home. That's also the last time Congress passed a federal consumer privacy law:
https://en.wikipedia.org/wiki/Video_Privacy_Protection_Act
The legislative history of the VPPA is telling: it was passed after a newspaper published the leaked video-rental history of a far-right judge named Robert Bork, whom Reagan hoped to elevate to the Supreme Court. Bork failed his Senate confirmation hearings, but not because of his video rentals (he actually had pretty good taste in movies). Rather, it was because he was a Nixonite criminal and virulent loudmouth racist whose record was strewn with the most disgusting nonsense imaginable.
But the leak of Bork's video-rental history gave Congress the cold grue. His video rental history wasn't embarrassing, but it sure seemed like Congress had some stuff in its video-rental records that they didn't want voters finding out about. They beat all land-speed records in making it a crime to tell anyone what kind of movies they (and we) were watching.
And that was it. For 37 years, Congress has completely failed to pass another consumer privacy law. Which is how we got here – to this moment where you can target ads to suicidal teens, gambling-addicted soldiers in Minuteman silos, grannies with Alzheimer's, and every Congressional staffer on the Hill.
Some people think the problem with mass surveillance is a kind of machine-driven, automated mind-control ray. They believe the self-aggrandizing claims of tech bros to have finally perfected the elusive mind-control ray, using big data and machine learning.
But you don't need to accept these outlandish claims – which come from Big Tech's sales literature, wherein they boast to potential advertisers that surveillance ads are devastatingly effective – to understand how and why this is harmful. If you're struggling with opioid addiction and I target an ad to you for a fake cure or rehab center, I haven't brainwashed you – I've just tricked you. We don't have to believe in mind-control to believe that targeted lies can cause unlimited harms.
And those harms are indeed grave. Stein's Law predicts that "anything that can't go on forever eventually stops." Congress's failure on privacy has put us all at risk – including Congress. It's only a matter of time until the commercial surveillance industry is responsible for a massive leak, targeted phishing campaign, or a ghastly national security incident involving Congress. Perhaps then we will get action.
In the meantime, the coalition of people whose problems can be blamed on the failure to update privacy law continues to grow. That coalition includes protesters whose identities were served up to cops, teenagers who were tracked to out-of-state abortion clinics, people of color who were discriminated against in hiring and lending, and anyone who's been harassed with deepfake porn:
https://pluralistic.net/2023/12/06/privacy-first/#but-not-just-privacy
Tumblr media
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/02/20/privacy-first-second-third/#malvertising
Tumblr media
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
526 notes · View notes
tofueatingwokerati · 4 months ago
Text
Tumblr media
The UK no longer has end-to-end encryption, thanks to Keir Starmer’s Labour government reanimating the zombie policy that is the Snoopers’ Charter, first peddled by Theresa May’s Tory government and rejected by the public.
Apple withdrawing end-to-end encrypted backups from the UK "creates a dangerous precedent which authoritarian countries will surely follow".
The UK is now being likened to authoritarian regimes, and this is why Starmer won’t challenge Trump: he is in lockstep with US policies, openly going after the sick, the disabled, pensioners, and the poorest, increasing their hardship rather than taxing the mega rich. US policy is UK policy.
So what does this mean for Apple users in the UK?
All your data in the cloud is no longer secure compared to having ADP enabled, and can be accessed by the government upon request. The GDPR is all but dead in the UK: there are now so many government policies that snoop on us through the back door, with even online news outlets now charging us for access without *cookies enabled (data-farming you while you read, with no option to opt out unless you pay).
I checked with the ICO myself and it is a fully approved practice, despite contradicting the right to consent, which is removed in the process.
If you want a workaround, here are my suggestions:
Cancel your iCloud storage. Your data will stay in the cloud until the renewal date; use that time to back it up locally or to a flash drive.
Tumblr media
Change your iMessage settings to delete audio messages after 2 minutes and permanently delete messages after 30 days.
Alternatively, use a third-party messaging app with a delete-on-read feature and disable Apple iMessage altogether.
If you are tech-savvy you can plug a USB drive or flash drive directly into your router hub (you should have at least one USB slot; some have two) and use FTP to back up over wifi. You can do this from any device; you don’t need a desktop. (See the sketch just after this list.)
Use a VPN service or set one up yourself. If you’re really technical you can use a Raspberry Pi to do this, but you will need to configure it by hand. Think Mr Robot.
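To make the router-backup idea above more concrete, here is a minimal sketch in Python using the standard-library ftplib module. The router address, credentials, and USB mount path are all assumptions for illustration; substitute whatever your router's admin page actually shows. Also note that plain FTP sends credentials and files unencrypted, which is tolerable on your own home network but not across the internet.

```python
# Minimal sketch: back up a local folder to a USB drive attached to a router,
# assuming the router exposes an FTP server. Host, credentials and the
# "/usb-drive" mount point are hypothetical placeholders.
from ftplib import FTP, error_perm
from pathlib import Path

ROUTER_HOST = "192.168.1.1"     # assumption: your router's LAN address
USERNAME = "backup-user"        # hypothetical credentials
PASSWORD = "change-me"
LOCAL_DIR = Path.home() / "Documents"
REMOTE_ROOT = "/usb-drive"      # assumption: where the router mounts the USB stick

with FTP(ROUTER_HOST) as ftp:
    ftp.login(USERNAME, PASSWORD)
    for path in LOCAL_DIR.rglob("*"):
        if not path.is_file():
            continue
        remote = path.relative_to(LOCAL_DIR).as_posix()
        # Recreate the directory structure on the drive, ignoring "already exists" errors.
        parts = remote.split("/")[:-1]
        for i in range(1, len(parts) + 1):
            try:
                ftp.mkd(f"{REMOTE_ROOT}/" + "/".join(parts[:i]))
            except error_perm:
                pass
        with open(path, "rb") as fh:
            ftp.storbinary(f"STOR {REMOTE_ROOT}/{remote}", fh)
```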
This change does not impact sensitive data like medical details, which remain end-to-end encrypted.
If you want to learn more about the sweeping bills being pushed through government and any action you can take, visit Big Brother Watch: https://bigbrotherwatch.org.uk
*If you want to read news articles without paying for the privilege of not handing over your cookie data, simply disable JavaScript within your browser’s settings and refresh the page. Remember to turn it back on when you’re done. Alternatively, disable all cookies, but know this will impact your online experience and access.
17 notes · View notes
boymanmaletheshequel · 4 months ago
Text
As we enter back into an era where technology is used primarily as a weapon against you by the United States and Russian governments (welcome back, 1980s! 😍), let’s go over some tips to consider if you are queer, BIPOC, or otherwise an “enemy from within,” as trump so lovingly and hypocritically put it, for protecting your identity and privacy on the internet so that you can continue to resist this cultural shift safely and resiliently!
• stop posting identifying information on social media like your full name, place of residence, and pictures of yourself or other at-risk people, and go back and delete posts where you’ve already shared such information.
• disable location tracking services and camera and mic access to apps, services, or websites that you know to be compromised by the influence of shady overseers, such as X, Instagram, Facebook, Google, and TikTok. I know it’s convenient to use these but it’s a MAJOR security hazard.
• Turn your phone COMPLETELY off when discussing sensitive subjects with friends or when you are making plans of resistance and gathering your communities, or when you are discussing personal feelings on such subjects with family, friends, or a doctor/therapist. Basically all mobile phones now have live mics that are constantly listening to you.
• Create a new email under a fake name if you haven’t already to use as your primary email address, in order to further protect your true identity.
• use an alternative browser like DuckDuckGo or Brave in place of Google, preferably with a VPN on top of it to further secure your browsing history and to bypass potential government IP blocks of websites.
• if you want to watch YouTube and don’t want a recording of your watch history, log out of your account and watch that way, preferably you would just delete your account entirely but I understand why some might not want to.
• pirate content from streaming services via piracy websites, again so that your watch history can’t be used to make assumptions about you if that data were ever sold to the government.
These are obviously just some of the ways to bypass data collection and to protect your real identity online, but there are many others I’m sure I’m not aware of. Please add to this if you can, and stay safe out there, because anything you say can and will be used against you in the future of this country. But: We have survived this kind of shit before, we can again, if we are careful. Loose lips sink ships, and that goes for the internet as well.
17 notes · View notes
vavuska · 9 months ago
Text
Tumblr media Tumblr media
I thought this one would be just another mobile game developer that uses The Sims 4 content in its (fake) ads to defraud users by making people believe it's their game, when it has completely different gameplay.
Unfortunately, Room Makover by FlyBird Casual Games is much more than this.
Room Makover has plenty of false ads. This time I actually tried the game, because the pictures and videos on Google Play were cute and I decided to give it a try.
I was so naive. It turns out that the whole page on Google Play is full of fake content.
It's just one of those mobile games in which you have to match and remove nails. Every time you pass a level, you will be rewarded with bills that you can use to unlock part of a building under "makeover". You have just three options to choose from for every part of the building or room.
There is nothing creative: no actual building, no cluttering rooms, no decorating.
As always, since I'm a curious little bitch, I decided to dig further into FlyBird Casual Games and... Oh. God. Their site looks even less legit than LUCKY FORTUNE GAMES'!
It is just a void page. There is nothing here. NOTHING. The only two working sections are the privacy policy and the conditions of use, in which they try to convince you that they have no liability for any damages or fraud committed.
The privacy policy is curious, because it mentions the Information Commissioner's Office in the UK and a long list of partners, both for ads and for data collection, which is hugely uncommon for this kind of game.
Sooooo... In this magical list of apps, I noticed a few tech nightmares, which I will explain briefly here:
Aarki is an AI company that builds advertising solutions to drive mobile revenue growth. Translated from bullshit: they use AI to generate fake ads and sell them to shady corps.
Blind Ferret is the big deal here! Not only does it offer digital marketing solutions, data collection, and analytics, but it also pays influencers and does product placement on social media to promote the game and, hear me out, CREATES fake ads too! It's literally written on their site: "Our Creative Services don’t just make things look pretty. Our team uses data to guide us! How do we make brands shine? By turning the arts into a numbers game with top-performing creative content." This includes: Graphic Design, Illustration, 2D Animation, Video Editing and Composition, Copywriting and conceptualizing.
InMobi is a big corp that does native advertising, which means promoted content, collabs with influencers, etc.
IronSource. This one is a fucking cancer. IronSource Ltd. is an Israeli software company that focuses on developing technologies for app monetization and distribution, with its core products focused on the app economy. That would sound harmless, but Samsung uses it in its budget and midrange smartphones to install multiple third-party apps during the set-up process. This platform slips in bloatware under the pretext of recommended apps, leading to app clutter and a reduction in on-board storage space. The only reason it exists on Samsung phones is to download games without your consent, with no way to remove it (there is no installed app to uninstall).
Mintegral is another fucking tech nightmare. Not only does it pose serious threats to your privacy and data, it also uses malicious code to spy on your activity: when you seem to be about to install a mobile app, Mintegral’s software fires off fake clicks on non-existent ads to claim credit for the install and essentially collect a bounty from app publishers who pay ad networks to promote their apps.
Mistplay is one of those "play to earn bucks" apps that I find very, very dangerous, because YOUR data is their revenue.
Tapjoy does ad monetization and also surveys that force users to pick a game from a long list, download it, and play for hours or until some in-game prize is earned. These surveys are rewarded with credits that the user can spend in the mobile game they actually want to play. Tapjoy has a huge market among IMVU users, who need credits to buy pieces of clothing and accessories for their avi.
The other apps do mobile app marketing, using data collection that allows shady corps to target more gullible and naive people to scam. Plus, they also run monetized surveys to earn money while artificially growing the engagement of these shady corps.
Tumblr media Tumblr media Tumblr media
Obviously, there is no user support email listed on their Google Play page, but at least there is a contact email listed on their website: [email protected]
As always, stay safe, and please tell me if you know more about anything above or know the person who created this build first.
Help people to stay safe.
Thank you.
<<< previous: Coloring app uses designs of The Sims 4 builds without the consent of the creators, and other mobile developers steal TS4 speed-build content and claim it's the actual gameplay of their mobile game.
2 notes · View notes
leidensygdom · 4 months ago
Text
So, Discord has added a feature that lets other people "enhance" or "edit" your images with different AI apps. It looks like this:
Tumblr media
Currently, you can't opt out of this at all. But here are a few things you can do in protest.
FOR SERVERS YOU ARE AN ADMIN IN
Go to Roles -> @/everyone roles -> Scroll all the way down to External Apps, and disable it. This won't delete the option, but it will make people receive a private message instead when they use it, protecting your users:
Tumblr media
You should also make it a bannable offense to edit other users' images with AI. Here's how I worded it in my server; feel free to copypaste:
Do not modify other people's images with AI under ANY circumstances, such as with the Discord "enhancement" features, amidst others. This is a bannable offense.
COMPLAIN TO DISCORD
There are a few ways to go about this. First, you can go to https://support.discord.com/hc/en-us/requests/new , select Help and Support -> Feedback/New Feature Request, and write your message, as seen in the screenshot below.
Tumblr media
For the message, here's some points you can bring up:
Concerns about harassment (such as people using this feature to bully others)
Concerns about privacy (concerns on how External Apps may break privacy or handle the data in the images, and how it may break some legislations, such as GDPR)
Concerns about how this may impact minors (these features could be used with pictures of irl minors shared in servers, for deeply nefarious purposes)
BE VERY CLEAR about "I will refuse to buy Nitro and will cancel my subscription if this feature remains as it is", since they only care about fucking money
Word them as you'd like, add onto them as you need. They sometimes filter messages that are copypasted templates, so finding ways to word them on your own is helpful.
ADDING: You WILL NEED to reply to the email you receive afterwards for the message to get sent to an actual human! Otherwise it won't reach anyone.
UNSUBSCRIBE FROM NITRO
This is what they care about the most. Unsubscribe from Nitro. Tell them why you unsubscribed on the way out. DO NOT GIVE THEM MONEY. They're a company. They take actions for profit. If these actions do not get them profit, they will need to backtrack. Mass-unsubscribing from WOTC's D&D Beyond forced them to back down on the OGL; this works.
LEAVE A ONE-STAR REVIEW ON THE APP
This impacts their visibility on the app store. Write why you're leaving the one-star review, too.
_
Regardless of your stance on AI, I think we can agree that having no way for users to opt their pictures out of this is deeply concerning, especially when Discord is often used to share selfies. It's also a good time to remember internet privacy and safety: maybe don't post your photos in big open public servers if you don't want to risk people making edits or modifications of them with AI (or any other way). Once it's posted, it's out of your control.
Anyways, please reblog for visibility. This is a deeply concerning topic!
19K notes · View notes
vengoai · 1 month ago
Text
Amazon. Google. Apple.
For years, they've shaped how we shop, search, and connect. But now?
People are starting to ask the hard questions.
❓ Where does our data go?
🧠 Who’s watching our digital lives?
🔐 What does privacy really mean in the age of Big Tech?
We dive into the rising skepticism around tech giants and what it means for the future of the internet.
VengoAI.com
1 note · View note
alyfoxxxen · 2 months ago
Text
The Tech Companies Fighting To Sell Your Data
0 notes
lord-of-the-noodles · 22 days ago
Text
come on tumblr you love online privacy
Are YOUR Digital Rights Under Attack? | GDPR Reform
youtube
Hey! I know we're all very busy looking at other places right now but this is kinda concerning
248 notes · View notes
jiff01 · 4 months ago
Video
Ad-tech targeting is an existential threat
flickr
Ad-tech targeting is an existential threat by Cory Doctorow Via Flickr: pluralistic.net/2025/02/20/privacy-first-second-third/#ma... A towering figure with the head of HAL 9000 from Stanley Kubrick's '2001: A Space Odyssey,' surmounted by Trump's hair, wearing a tailcoat with a Google logo lapel pin. It peers through a magnifying glass at a distressed, tiny Uncle Sam figure perched in its monstrous palm. Image: Cryteria (modified) commons.wikimedia.org/wiki/File:HAL9000.svg CC BY 3.0 creativecommons.org/licenses/by/3.0/deed.en
0 notes
trainsinanime · 7 months ago
Text
Tumblr media
I've seen a number of people worried and concerned about this language on Ao3's current "agree to these terms of service" page. The short version is:
Don't worry. This isn't anything bad. Checking that box just means you forgive them for being US American.
Long version: This text makes perfect sense if you're familiar with the issues around GDPR and in particular the uncertainty about Privacy Shield and SCCs after Schrems II. But I suspect most people aren't, so let's get into it, with the caveat that this is a Eurocentric (and in particular EU centric) view of this.
The basic outline is that Europeans in the EU have a right to privacy under the EU's General Data Protection Regulation (GDPR), an EU regulation (let's simplify things and call it an EU law) that regulates how various entities, including companies and the government, may acquire, store and process data about you.
The list of what counts as data about you is enormous. It includes things like your name and birthday, but also your email address, your computer's IP address, user names, whatever. If an advertiser could want it, it's on the list.
The general rule is that they can't, unless you give explicit permission, or it's for one of a number of enumerated reasons (not all of which are as clear as would be desirable, but that's another topic). You have a right to request a copy of the data, you have a right to force them to delete their data and so on. It's not quite on the level of constitutional rights, but it is a pretty big deal.
In contrast, the US, home of most of the world's internet companies, has no such right at a federal level. If someone has your data, it is fundamentally theirs. American police, FBI, CIA and so on also have far more rights to request your data than the ones in Europe.
So how can an American website provide services to persons in the EU? Well… Honestly, there's an argument to be made that they can't.
US websites can promise in their terms and conditions that they will keep your data as safe as a European site would. In fact, they have to, unless they start specifically excluding Europeans. The EU even provides Standard Contract Clauses (SCCs) that they can use for this.
However, e.g. Facebook's T&Cs can't bind the US government. Facebook can't promise that it'll keep your data as secure as it is in the EU even if they wanted to (which they absolutely don't), because the US government can get to it easily, and EU citizens can't even sue the US government over it.
Despite the importance that US companies have in Europe, this is not a theoretical concern at all. There have been two successive international agreements between the US and the EU about this, and both were struck down by the EU court as being in violation of EU law, in the Schrems I and Schrems II decisions (named after Max Schrems, an Austrian privacy activist who sued in both cases).
A third international agreement is currently being prepared, and in the meantime the previous agreement (known as "Privacy Shield") remains tentatively in place. The problem is that the US government does not want to offer EU citizens equivalent protection as they have under EU law; they don't even want to offer US citizens these protections. They just love spying on foreigners too much. The previous agreements tried to hide that under flowery language, but couldn't actually solve it. It's unclear and in my opinion unlikely that they'll manage to get a version that survives judicial review this time. Max Schrems is waiting.
So what is a site like Ao3 to do? They're arguably not part of the problem (Max Schrems keeps suing Meta, not the OTW), but they are subject to the rules because they process stuff like your email address.
Their solution is this checkbox. You agree that they can process your data even though they're in the US, and they can't guarantee you that the US government won't spy on you in ways that would be illegal for the government of e.g. Belgium. Is that legal under EU law? …probably as legal as fan fiction in general, I suppose, which is to say let's hope nobody sues to try and find out.
But what's important is that nothing changed, just the language. Ao3 has always stored your user name and email address on servers in the US, subject to whatever the FBI, CIA, NSA and FRA may want to do with it. They're just making it more clear now.
10K notes · View notes
vastedge330 · 10 months ago
Text
Learn essential strategies for securing and protecting your big data in this comprehensive blog post. It covers best practices for data encryption, access control, and threat detection to ensure your valuable information remains safe from breaches and cyber threats. Equip yourself with the knowledge to implement robust data protection measures effectively.
0 notes
jcmarchi · 1 year ago
Text
Nicholas Brackney, Dell: How we leverage a four-pillar AI strategy
New Post has been published on https://thedigitalinsider.com/nicholas-brackney-dell-how-we-leverage-a-four-pillar-ai-strategy/
Dell is deeply embedded in the AI landscape, leveraging a comprehensive four-pillar strategy to integrate the technology across its products and services.
Nicholas Brackney, Senior Consultant in Product Marketing at Dell, discussed the company’s AI initiatives ahead of AI & Big Data Expo North America.
Dell’s AI strategy is structured around four core principles: AI-In, AI-On, AI-For, and AI-With:
“Embedding AI capabilities in our offerings and services drives speed, intelligence, and automation,” Brackney explained. This ensures that AI is a fundamental component of Dell’s offerings.
The company also enables customers to run powerful AI workloads on its comprehensive portfolio of solutions, from desktops to data centres, across clouds, and at the edge.
AI innovation and tooling are applied for Dell’s business to enhance operations and share best practices with customers.
Finally, Dell collaborates with strategic partners within an open AI ecosystem to simplify and enhance the AI experience.
Dell is well-positioned to help customers navigate AI workloads, emphasising choice and adaptability through the various evolutions of emerging technology. Brackney highlighted Dell’s commitment to serving customers from the early stages of AI adoption to achieving AI at scale.
“We’ve always believed in providing choice and have been doing it through the various evolutions of emerging technology, including AI, and understanding the challenges that come with them,” explained Brackney. “We fully leverage our unique operating model to serve customers in the early innings of AI to a future of AI at scale.”
Looking to the future, Dell is particularly excited about the potential of AI PCs.
“We know organisations and their knowledge workers are excited about AI, and they want to fit it into all their workflows,” Brackney said. Dell is focused on integrating AI into software and ensuring it runs efficiently on the right systems, enhancing end-to-end customer journeys in AI.
Ethical concerns in AI deployment are also a priority for Dell. Addressing issues such as deepfakes, transparency, and bias, Brackney emphasised the importance of a shared, secure, and sustainable approach to AI development.
“We believe in a shared, secure, and sustainable approach. By getting the foundations right at their core, we can eliminate some of the greatest risks associated with AI and work to ensure it acts as a force for good,” explains Brackney.
User data privacy in AI-driven products is another critical focus area. Brackney outlined Dell’s strategy of integrating AI with existing security investments without introducing new risks. Dell offers a suite of secure products, comprehensive data protection, advanced cybersecurity features, and global support services to safeguard user data.
On the topic of job displacement due to AI, Brackney underscored that Dell views AI as augmenting human potential rather than replacing it.
“The roles may change but the human element will always be key,” Brackney stated. “At Dell, we encourage our team members to understand, explore, and, where appropriate, use tools based on AI to learn, evolve, and enhance the overall work experience.”
Looking ahead, Brackney envisions a transformative role for AI within Dell and the tech industry. “We see customers in every industry wanting to become leaders in AI because it is critical to their organisation’s innovation, growth, and productivity,” he noted.
Dell aims to support this evolution by providing the necessary architectures, frameworks, and services to assist its customers on this transformative journey.
Dell is a key sponsor of this year’s AI & Big Data Expo. Check out Dell’s keynote presentation ‘From Data Novice to Data Champion – Cultivating Data Literacy Across the Organization’ and swing by Dell’s booth at stand #66 to hear about AI from the company’s experts.
The comprehensive event is co-located with other leading events including Intelligent Automation Conference, BlockX, Digital Transformation Week, and Cyber Security & Cloud Expo. Explore other upcoming enterprise technology events and webinars powered by TechForge here.
Tags: ai, ai & big data expo, ai and big data expo, artificial intelligence, dell, enterprise, ethics
0 notes
rajaniesh · 1 year ago
Text
Unlock Data Governance: Revolutionary Table-Level Access in Modern Platforms
Dive into our latest blog on mastering data governance with Microsoft Fabric & Databricks. Discover key strategies for robust table-level access control and secure your enterprise's data. A must-read for IT pros! #DataGovernance #Security
Tumblr media
View On WordPress
0 notes
ms-demeanor · 2 months ago
Note
thoughts on using library computers to disguise your digital footprint? because if the machine gets wiped when you log out, and the library doesn't keep detailed records of what machine you were using when, then all someone else would have is IP data unconnected to a person and also mixed in with whatever else folks were doing on the library computers
The machine absolutely does not get wiped when you log out and there's very little chance that a library computer will let you fire up Tor. You're better off using a traffic anonymizer than you are trying to use public computers to cover your tracks. The IP address IS the big risk here.
Libraries are generally really good about protecting their patrons' privacy and I respect the hell out of them for that but computers log everything that you do and can be subpoenaed as evidence even if the library wants to protect user privacy.
Also, I love libraries but you should treat every public computer you come across like it has a keylogger installed on it because it might. Your city could have an overzealous city council that has more control than it should over the library board and has taken it upon themselves to add covenanteyes to the library computers. Your library crew could be fantastic but less tech-savvy than is ideal and may not realize it if malware is installed on one of the machines. The library may clear browser history twice a day but the ISP still has a record of where you went and what time you went there. Somebody could have literally plugged a keylogger into a USB port on the back of the machine.
The point of a traffic anonymizer is it hides where the traffic originated; each node knows where the previous hop came from and where the next hop went, but not what came BEFORE the previous hop or what happened after, or how long the chain was, so there is no way to tell if a message originated in the US or Brazil or Vietnam or Sweden. Sending traffic from a library does the opposite of this, and very clearly says "the person who sent this message did so from this geographic area; they sent messages from these five libraries so we know they're probably within X distance of these libraries" which is a hell of a lot easier to look for than "I can't even say what continent these messages originated from."
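As a toy illustration of that layered-hop idea (this is not how Tor is actually implemented, just the concept), here is a short Python sketch using the third-party cryptography library: the sender wraps the message in one layer of encryption per relay, and each relay can only peel off its own layer. The relay names and keys are made up for the example.

```python
# Toy onion-routing sketch: each "relay" can decrypt only its own layer,
# so no single relay sees both the original sender and the final payload.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

relays = ["entry", "middle", "exit"]                    # hypothetical relay names
keys = {name: Fernet.generate_key() for name in relays}

# The sender encrypts for the exit node first, then wraps that in the
# middle node's layer, then the entry node's layer.
payload = b"GET https://example.com/leaked-docs"
for name in reversed(relays):
    payload = Fernet(keys[name]).encrypt(payload)

# Each relay in turn strips exactly one layer and forwards the rest.
for name in relays:
    payload = Fernet(keys[name]).decrypt(payload)
    print(f"{name} relay peeled its layer; {len(payload)} bytes remain")

print(payload)  # only the exit relay ever sees the plaintext request
```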
Let us say that you go to a library to log in to your protonmail account and email a journalist a link to a file that you've saved in cryptpad. You have the link written down so you don't have to go to a secondary site and you just go sit down directly at the computer and log in to protonmail and fire off your email to the journalist. The email is encrypted, so you know the contents of the email are safe. Let's say the browser history gets automatically wiped every time you close it, and you close it as soon as you stand up and walk away. Here's the incriminating information that generated:
IP address where you accessed your protonmail account
Your protonmail email address, the journalist's address, the time you sent the email, the subject line of the email
And here are the people who can be subpoenaed to share some or all of that information with the government:
The Library's ISP
The Library, who may not carefully track users but who do have event logs on the computers and traffic logs on the firewall
Protonmail
IF you only ever logged in to your protonmail account from that ISP one time, and if you've never logged in to your protonmail account anywhere that is close to your house or your job, you may be fine. But if you logged in to your protonmail on your personal cellphone at work so that you could send photos of documents to yourself, there's some data tying that account to a local IP address. If you set up the protonmail account on a whim at a coffee shop, there's some data tying that account to a local IP address. If you get an email back from the journalist and go to another local library to open it, there's some data tying that account to another local IP address.
And that gets narrowed down very quickly. "Who has access to these sensitive and leak-worthy documents through working at this entity who also lives within a 100 mile radius of these three login locations? Is it 50 people? Is it 5 people? Of the 15 people who have access to these sensitive and leak-worthy documents who work at this entity and live within 100 miles of the three login locations, who is likely to be doing the leaking? Do we fire them all? Do we interview them? Do we compare IP addresses that they've used to log in to work remotely and find that two of them have logged in at the coffee shop? Of those two, one has facebook selfies in a maga hat and the other has a less visible online presence. Let's check their traffic history. Did they check tumblr on a lunch break? Maybe once or twice? Maybe a few times? Sure seems like they are pretty dead-set against the administration. Let's double-check the access logs for this information. Let's review security footage. Let's install the monitoring on their workstation."
The thing is, they're not going to catch you leaking and then track down all the data you left behind to confirm it; they're going to see a leak and get a bunch of digital footprints and use that to narrow down suspect pools. They already know that access to the data is limited and will be reviewing prior access and carefully monitoring future access. You are already in their suspect pool by already being one of the people with known access to the data. Adding an IP address that is geographically close to you, even if it isn't your home IP address, to that is not going to make it *harder* to find you, it can only make it easier.
So just use Tor. You're safer using an anonymizer, which you likely can't do on a library computer. Create the leak email address when you're in a Tor browser, and only EVER access that email account from Tor.
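On the practical side, Tor exposes a local SOCKS proxy that ordinary programs can be pointed at. Here's a minimal Python sketch, assuming a Tor client is already running on the default port 9050 (the Tor Browser bundle listens on 9150 instead), that sends a request through it and asks the Tor Project's check service whether the traffic really exited via Tor.

```python
# Minimal sketch: route an HTTP request through a locally running Tor client.
# Requires: pip install requests[socks]   (and a running Tor daemon on port 9050)
import requests

TOR_PROXY = "socks5h://127.0.0.1:9050"   # "socks5h" makes DNS resolve through Tor too

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# check.torproject.org reports whether the request arrived from a Tor exit node.
resp = session.get("https://check.torproject.org/api/ip", timeout=30)
print(resp.json())   # e.g. {"IsTor": true, "IP": "<exit node address>"}
```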
Also I don't mean to jump on you about this, but between the post I've got about why you shouldn't use your work computer to torrent and the safer leaking practices post it's clear that people really don't understand what information they're leaving behind when they use computers and the internet, or how it can be a risk to them.
Accessing burner accounts from a clear IP address means that they're not burner accounts anymore, they're burned.
744 notes · View notes
bisexualbaker · 2 years ago
Text
Why do people keep recommending Dreamwidth as a Tumblr alternative, when Dreamwidth and Tumblr are so different?
To be flat-out honest, it's because Dreamwidth has so many things that Tumblr users say they want, even if it's also lacking a lot of features that Tumblr users have come to love:
Dreamwidth has incredibly lax content hosting rules. I'd say that it's slightly more restrictive than AO3, but only just slightly, and only because AO3's abuse team has been so overwhelmed and over-worked. Otherwise, the hosting policies are pretty similar. You want to go nuts, show nuts? You can do that on Dreamwidth.
In fact, Dreamwidth is so serious about "go nuts, show nuts", it gave up the ability to accept transactions through PayPal in 2009 to protect our ability to do that. (It's also one reason why Dreamwidth doesn't have an app: Dreamwidth will never be beholden to Apple's content rules this way.)
Dreamwidth cares about your privacy; it doesn't sell your data, and barely collects any to begin with. As far as I'm aware, it only collects what it needs to run the site. The owners have also spoken out on behalf of internet privacy many times, and are prepared to put their money where their mouth is.
No ads. Ever. Period. They mean it. Dreamwidth is entirely user funded.
Posts viewed in reverse chronological order; no algorithm, opt-in or otherwise. No algorithm at all. No "For You" or "Suggested" page. You still entirely create and curate your own experience.
The ability to make posts that only your "mutuals", or even only a specific subset of your "mutuals", can see. Want to make a post that's only open to Bonnie, Clyde, Butch, and Cassidy? You can do that! Want to make a post that's only open to Bonnie and Butch, but Clyde and Cassidy can't see shit? You can do that, too!
The owners have forsworn NFTs and the blockchain in general. Not as big a worry now as it was even a year ago, but still good to know!
We are explicitly the customers of Dreamwidth. Dreamwidth wants to make us happy, so any changes they make (and they do make changes) are made with us in mind, and after exploring as many possibilities as they can.
Dreamwidth is very transparent about their policies and changes. If you want to know why they're making a specific change, or keeping or getting rid of a feature, they will tell you. You don't have to find out ten months later that they're locked into a contract to keep it for a year (cough cough Tumblr Live cough cough).
So those are some things that Tumblr users would probably love about Dreamwidth.
Another reason Dreamwidth keeps being recommended is that a significant portion of the Age 30+ crowd spent a lot of earlier fandom years on a site known as LiveJournal. Dreamwidth may not be much like Tumblr, but it started out as a code fork of LiveJournal, so it will be very familiar to anyone who spent any time there. Except better.
Finally, we're recommending Dreamwidth because some of the things that Tumblr users want are just... not going to happen on the web as it is now. Image hosting is the big one for this. Maybe in the future, the price of data will be much cheaper, and Dreamwidth will be able to host as much as we all want for a pittance that a fraction of the userbase will happily pay for everyone, but right now that's just not possible.
Everywhere you want to go that hosts a lot of images will either be running lots of ads, selling your data, or both.
Dreamwidth knows how much it costs to host your data, and has budgeted for that. They are hosting within their means, within our means.
Dreamwidth is the closest thing we may ever get to AO3 as a social media platform. One of the co-owners is from, and still in, fandom; she knows our values, because they are also her values. It may as well be the Blogsite Of Our Own.
5K notes · View notes