# Facebook profile birthdate update without ID
anntyler3 · 5 years ago
Does the App Tiktok Violate Child Privacy Laws?
If you are like a lot of parents, you probably have concerns about your child's personal data being on the Internet.
The concerns get even more serious when your child is old enough (or thinks they are old enough) to start using social media on their own, and you are no longer the gatekeeper. In fact, your tween might know more about social media than you do, and while you probably "get" Facebook and Twitter, newer social media platforms such as TikTok may leave you mystified.
What is TikTok?
TikTok is a social media app that has taken the world by storm, especially among young people. In fact, the app, which allows users to create and share short, creative videos, is most popular among Generation Z, with close to 70% of TikTok's audience between 16 and 24 years old.
Owned by Beijing-based technology company ByteDance, TikTok is a global platform with more than 1.5 billion users throughout the world. Most videos on TikTok are short, between 3 and 60 seconds, and many involve lip-syncing to popular music. TikTok actually started as musical.ly, an app used to produce short lip-sync videos.
Is TikTok Safe?
TikTok has been in the headlines a lot over the past year because of red flags involving security and censorship. In January 2020, the cybersecurity research group Check Point Research raised concerns over several security risks, including vulnerabilities that could let hackers steal personal information from users, delete videos, and post videos. TikTok claims that the issues have all been addressed and that its platform is safe for users.
Lawmakers, too, have raised concerns after TikTok allegedly censored topics considered sensitive by the Chinese Communist Party. In late 2019, U.S. senators called for an investigation into TikTok's censorship practices as well as potential national security risks posed by the app. TikTok has insisted that the Chinese government does not have a say in the app's content and that it intends to cooperate with regulators.
Even so, the U.S. Army banned soldiers from using TikTok in December 2019.
TikTok Faced a Class Action Over Child Privacy
Beyond security flaws, which are commonplace across social media platforms, and censorship concerns, TikTok has also been accused of violating child privacy laws. In late 2019, a group of parents filed a class action lawsuit against TikTok claiming that the app violated children's privacy laws by collecting and exposing the personal data of minors.
The lawsuit, which TikTok settled the day after it was filed, alleged that the app did not put the proper precautions in place to prevent children from using the app. Specifically, the app asked minors under the age of 13 who created accounts to provide personally identifying information such as name, phone number, email address, photo, and bio, and that information was made publicly available.
The lawsuit also alleged that the app collected location data from users, including minors.
Under the Children's Online Privacy Protection Act (COPPA), social media companies cannot collect the data of children under 13 years of age without the express consent of parents or guardians.
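To make that rule concrete, here is a minimal sketch of the kind of gate a platform would need before collecting a young user's data. It is illustrative only; the function and field names are hypothetical, not any platform's actual API.

```python
from datetime import date, timedelta

COPPA_AGE_THRESHOLD = 13  # COPPA covers children under 13

def may_collect_personal_data(birthdate: date, has_parental_consent: bool) -> bool:
    """Return True only if collecting this user's personal data is permissible.

    Hypothetical helper: users 13 and older pass automatically; younger
    users pass only with verifiable parental consent.
    """
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    if age >= COPPA_AGE_THRESHOLD:
        return True
    return has_parental_consent

# A roughly 10-year-old user without parental consent fails the gate:
ten_years_ago = date.today() - timedelta(days=10 * 365)
print(may_collect_personal_data(ten_years_ago, has_parental_consent=False))  # False
```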
Also in 2019, TikTok reached a $5.7 million settlement with the U.S. Federal Trade Commission, which alleged similar COPPA violations.
Laws Protecting Children Online
To say the internet has changed a lot since COPPA was passed in 1998 is an understatement. Some believe that it has changed so much that it requires an update to the law. Recently, a U.S. representative introduced a new bill aimed at doing just that.
The proposed bill, the PRotecting the Information of our Vulnerable Children and Youth (PRIVCY) Act, would extend privacy protections to all young people under the age of 18 and ban targeted advertising to kids under 13. The bill would also remove the Safe Harbor provisions of COPPA, which allow social media companies to self-regulate on these matters, and add provisions that give the FTC more enforcement power.
Another bill aimed at updating COPPA is the Preventing Real Online Threats Endangering Children Today (PROTECT) Kids Act, which was introduced by bipartisan lawmakers in January 2020.
The bill would raise the age of protection under COPPA to 16, give parents the right to delete any personal information that websites have collected about their children, and expand the types of data protected by COPPA.
How to Protect Your Child on TikTok
Common Sense Media, a non-partisan, non-profit that provides advice for parents on safe technology and media use for children, recommends that kids be age 16 or older to use TikTok, citing privacy issues and mature content. 
TikTok allows users 13 and older to use the app, and it requires users under the age of 18 to have the approval of a parent or guardian. There is also a section of the app that is meant for kids under 13 and restricts access to adult content. However, it is easy for kids to just enter a false birthdate to get past these safety measures.
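The weakness is that the gate trusts whatever birthdate is typed in. A minimal sketch of such a self-certified age gate (the names are hypothetical, not TikTok's actual code) shows why it is so easy to bypass:

```python
from datetime import date

def signup_tier(claimed_birthdate: date) -> str:
    """Hypothetical signup check that relies only on a self-reported birthdate."""
    today = date.today()
    age = today.year - claimed_birthdate.year - (
        (today.month, today.day) < (claimed_birthdate.month, claimed_birthdate.day)
    )
    if age < 13:
        return "under-13 section"  # restricted, adult content blocked
    if age < 18:
        return "teen account"      # parent/guardian approval required
    return "full account"

# Nothing stops a 10-year-old from typing an adult birth year:
print(signup_tier(date(1990, 1, 1)))  # "full account"
```

Because nothing in the flow verifies the claim, the restriction is only as strong as the honesty of the person filling in the form.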
For these reasons, Common Sense Media recommends that if parents want to allow tweens or young kids to use the app, the parent should own and run the account so they can monitor what their kids are viewing and sharing.
TikTok also has safety functionality that parents can set up by tapping on the three dots at the top right corner of the user profile. Parents can use TikTok's "Digital Wellbeing" features to limit time spent on the app (Screen Time Management) and limit video content that may be inappropriate (Restricted Mode). These features are password protected, so kids can't just turn them off. 
However, Common Sense Media says TikTok's Restricted Mode is not foolproof. Age-inappropriate videos can sometimes slip through, so parental monitoring is the safest bet.
Related Resources:
Online Safety for Kids (FindLaw's Learn About the Law section)
YouTube, Google Fined for Child Privacy Violations (FindLaw's Common Law blog)
COPPA Now Includes Greater Protections for Kids Online (FindLaw's Technologist blog)
App Maker Violates Kids' Privacy, New Mexico Lawsuit Claims (FindLaw's Law and Daily Life blog)
Source: http://blogs.findlaw.com/law_and_life/2020/02/does-the-app-tiktok-violate-child-privacy-laws.html
technicalsolutions88 · 7 years ago
Facebook and Instagram will more proactively lock the accounts of users its moderators suspect are below the age of 13. Its former policy was to investigate accounts only if they were reported specifically for being potentially underage. But Facebook confirmed to TechCrunch that an 'operational' change to its policy for reviewers made this week will see them lock the accounts of any underage user they come across, even if the account was reported for something else, such as objectionable content, or is otherwise discovered by reviewers. Facebook will require these users to provide proof that they're over 13, such as a government-issued photo ID, to regain access. The problem stems from Facebook not requiring any proof of age upon signup.
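In pseudocode terms, the operational change moves the underage check out of the report-specific path and into every review. A rough sketch of the described logic, with entirely hypothetical names:

```python
from dataclasses import dataclass

@dataclass
class Account:
    user_id: int
    locked: bool = False

def review_report(account: Account, report_reason: str, appears_underage: bool) -> None:
    """Hypothetical review flow reflecting the described policy change.

    Old policy: act on age only when the report itself was 'underage'.
    New policy: lock any account the reviewer suspects is under 13,
    whatever the report was for, and require proof of age to unlock.
    """
    handle_reported_issue(account, report_reason)  # e.g. objectionable content
    if appears_underage:                           # new: checked on every review
        account.locked = True
        request_proof_of_age(account)              # e.g. government-issued photo ID

def handle_reported_issue(account: Account, reason: str) -> None:
    pass  # placeholder for the normal moderation outcome

def request_proof_of_age(account: Account) -> None:
    pass  # placeholder: prompt the user to submit ID before regaining access
```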
Facebook Messenger Kids is purposefully aimed at users under age 13
A tougher stance here could reduce Facebook and Instagram's user counts and advertising revenue. The apps' formerly more hands-off approach allowed them to hook young users so that by the time they turned 13, they had already invested in building a social graph and history of content that tethers them to the Facebook corporation. While Facebook has lost cachet with the youth over time and as their parents joined, Instagram is still wildly popular with them and likely counts many tweens or even younger children as users.
The change comes in response to an undercover documentary report by the UK's Channel 4 and Firecrest Films that saw a journalist become a Facebook content reviewer through a third-party firm called CPL Resources in Dublin, Ireland. A reviewer there claims they were instructed to ignore users who appeared under 13, saying "We have to have an admission that the person is underage. If not, we just like pretend that we are blind and that we don't know what underage looks like." The report also outlined how far-right political groups are subject to different thresholds for deletion than other Pages or accounts if they post hateful content in violation of Facebook's policies.
In response, Facebook published a blog post on July 16th claiming that high-profile Pages and registered political groups may receive a second layer of review from Facebook employees. But in an update on July 17th, Facebook noted that "Since the program, we have been working to update the guidance for reviewers to put a hold on any account they encounter if they have a strong indication it is underage, even if the report was for something else."
Now a Facebook spokesperson confirms to TechCrunch that this is a change to how reviewers are trained to enforce its age policy for both Facebook and Instagram. Facebook prohibits users under 13 in order to comply with the US Children's Online Privacy Protection Act, which requires parental consent to collect data about children. The change could see more underage users have their accounts terminated. That might in turn reduce the site's utility for their friends over or under age 13, making them less engaged with the social network.
The news comes in contrast to Facebook purposefully trying to attract underage users through its Messenger Kids app, which lets children ages 6 to 12 chat with contacts approved by their parents and which today expanded to Mexico after launching in the US, Canada, and Peru. With one hand, Facebook is trying to make under-13 users dependent on the social network while pushing them away with the other.
Child Signups Lead to Problems As Users Age
A high-ranking source who worked at Facebook in its early days previously told me that one repercussion of a hands-off approach to policing underage users was that as some got older, Facebook would wrongly believe they were over 18 or over 21.
That’s problematic because it could make minors improperly eligible to see ads for alcohol, real money gambling, loans, or subscription services. They’d also be able to see potentially offensive content such as graphic violence that only appears to users over 18 and is hidden behind a warning interstitial. Facebook might also expose their contact info, school, and birthday in public search results, which it hides for users under 18.
Users who request to change their birthdate may have their accounts suspended, deterring users from coming clean about their real age. A Facebook spokesperson confirmed that in the US, Canada, and EU, if a user listed as over 18 tries to change their age to be under 18 or vice versa, they would be prompted to provide proof of age.
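Taken together, these rules amount to simple age-threshold checks. A minimal sketch of the ad gating and the boundary-crossing prompt described above (hypothetical names and thresholds, not Facebook's actual API; real minimum ages vary by category and jurisdiction):

```python
# Hypothetical per-category minimum ages for sensitive advertising.
AD_CATEGORY_MIN_AGE = {
    "alcohol": 21,
    "real-money gambling": 18,
    "loans": 18,
    "subscription services": 18,
}

def allowed_ad_categories(age: int) -> set:
    """Return the sensitive ad categories this user is old enough to see."""
    return {cat for cat, min_age in AD_CATEGORY_MIN_AGE.items() if age >= min_age}

def birthdate_change_requires_proof(current_age: int, new_age: int) -> bool:
    """Hypothetical check mirroring the described flow: changing one's age
    across the 18-year boundary, in either direction, triggers a
    proof-of-age prompt."""
    return (current_age >= 18) != (new_age >= 18)

# A user recorded as 19 who tries to change their age to 15 gets prompted for ID:
print(birthdate_change_requires_proof(19, 15))  # True
```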
Facebook might be wise to offer an amnesty period to users who want to correct their age without having their accounts suspended. Getting friends to confirm friend requests and building up a profile takes time and social capital that formerly underage users who are now actually over 13 might not want to risk just to be able to display their accurate birthdate. If the company wants to correct the problem, it may need to offer a temporary consequence-free method for users to correct their age. It could then promote this option to its youngest users or to those its algorithms suggest might be under 13 based on their connections.
Facebook doesn’t put any real roadblock to signup in front of underage users beyond a self-certification that they are of age, likely to keep it easy to join the social network and grow its business. It’s understandable that some 9- or 11-year-olds would lie to gain access. Blindly believing self-certifications led to the Cambridge Analytica scandal, as the data research firm promised Facebook it had deleted surreptitiously collected user data, but Facebook failed to verify that.
There are plenty of other apps that flout COPPA by making it easy for underage children to sign up. Lip-syncing app Musical.ly is particularly notorious for featuring girls under 13 dancing provocatively to modern pop songs in front of audiences of millions, which worryingly include adults. The company's CEO Alex Zhu angrily denied that it violates COPPA when I confronted him with evidence at TechCrunch Disrupt London in 2016.
Facebook’s Reckoning
The increased scrutiny brought on by the Cambridge Analytica debacle, Russian election interference, screentime addiction, lack of protections against fake news, and lax policy towards conspiracy theorists and dangerous content has triggered a reckoning for Facebook.
Yesterday Facebook announced a content moderation policy update, telling TechCrunch "There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months." That comes in response to false rumors spreading through WhatsApp leading to lynch mobs murdering people in countries like India. The policy could impact conspiracy theorists and publications spreading false news on Facebook, some of which claim to be practicing free speech.
Across safety, privacy, and truth, Facebook will have to draw the line on how proactively to police its social network. It’s left trying to balance its mission to connect the world, its business that thrives on maximizing user counts and engagement, its brand as a family-friendly utility, its responsibility to protect society and democracy from misinformation, and its values that endorse free speech and a voice for everyone. Something’s got to give.
Source: Social – TechCrunch, https://ift.tt/2uApDNX (original content from https://techcrunch.com)