Social Media Privacy Statistics Every UK User Needs to Know

Updated by: Esraa Mahmoud
Reviewed by: Fatma Mohamed

Social media privacy statistics paint a stark picture: trust in platforms is declining, data breaches are multiplying, and millions of UK users are only now realising how much personal information they’ve handed over. If you manage a business’s online presence (or your own), these numbers are worth understanding before your next post goes live.

This guide consolidates the most current global and UK-specific data on social media privacy, covering user sentiment, platform risks, AI data harvesting, and the practical steps businesses and individuals can take to reduce their exposure.

Key Findings at a Glance

Before diving into the details, here are the headline figures:

  • Approximately 79% of adults globally say they are concerned about how companies use their data (Pew Research Center, 2023)
  • 72% of UK adults report being “very concerned” or “somewhat concerned” about how social media platforms handle their personal information (YouGov, 2023)
  • 65% of UK internet users are concerned about being tracked online, with social media platforms cited as the primary source (ONS, 2023)
  • 41% of UK internet users reduced their social media usage in the past year due to privacy concerns (IAB UK, 2024)
  • Meta received over 3.7 million data subject access requests across the EU and UK in 2023 alone (Meta Transparency Report)
  • Only 9% of social media users say they read privacy policies carefully before agreeing to them (Deloitte Digital Consumer Trends, 2024)

The New Frontier: Social Media, Privacy, and AI Training

The privacy conversation shifted significantly in 2023 and 2024 when several major platforms updated their terms to allow user-generated content to be used for AI model training. This caught many users off guard.

Meta announced in June 2023 that it would use public posts, photos, and comments from Facebook and Instagram to train its generative AI systems. A subsequent opt-out window was introduced for UK and EU users under GDPR pressure, but research by the UK’s Information Commissioner’s Office (ICO) found that the opt-out process was deliberately difficult to locate.

X (formerly Twitter) similarly updated its privacy policy in September 2023 to allow user data to be used for training Grok, its proprietary AI model. Unlike Meta, X did not initially offer a straightforward opt-out for existing users.

The scale of the concern is significant. A 2024 survey by the Open Rights Group found that 68% of UK social media users were unaware their content could be used for AI training until they were told. Among those who became aware, 81% said they would have opted out if the process had been clearer.

For businesses managing brand accounts, this creates a specific concern: content published on these platforms, including product imagery, customer interactions, and service descriptions, may now feed into third-party AI systems without explicit consent.

UK-Specific Privacy Rules: GDPR and the Online Safety Act

The UK sits in a unique regulatory position post-Brexit. It retains the UK GDPR (a domestically enacted version of the EU framework) while simultaneously diverging from Brussels in how enforcement is prioritised.

The ICO’s 2023 to 2024 annual report recorded 4,194 data breach reports from the information and communications sector, a 16% increase on the previous year. Social media-related incidents, including unauthorised access to accounts, scraping attacks, and third-party app breaches, accounted for a disproportionate share of complaints.

The Online Safety Act 2023 introduced new obligations for platforms operating in the UK, including requirements around user safety, illegal content removal, and greater transparency about algorithmic systems. However, the Act’s privacy provisions remain secondary to its online harm focus, leaving gaps in how user data is protected.

“Right to be forgotten” requests to social platforms increased by 22% year-on-year in the UK between 2022 and 2024, according to ICO records. Despite this, many users report difficulty enforcing these rights in practice, particularly on platforms headquartered outside the UK.

For SMEs operating in Northern Ireland, the cross-border complexity adds a further layer. Businesses that market into the Republic of Ireland are subject to Irish DPC (Data Protection Commission) enforcement under EU GDPR, while their UK operations fall under ICO jurisdiction. This dual regulatory environment requires careful handling of social media data practices.

Platform Breakdown: Who Is Most and Least Trusted?

| Platform | Notable Breach History | AI Training Opt-Out | Third-Party Data Sharing | Transparency Rating |
|---|---|---|---|---|
| Meta (Facebook/Instagram) | Cambridge Analytica (2018), 533M records scraped (2021) | Available but hard to find | Extensive | Moderate |
| TikTok | US/EU data access by ByteDance employees (2022) | Limited | Significant | Low |
| X (formerly Twitter) | 5.4M accounts exposed (2022), Grok training (2023) | Limited | High | Low |
| LinkedIn | 700M records scraped (2021) | Available | Moderate | Moderate to High |
| Snapchat | SnapLion law enforcement tool controversy (ongoing) | Limited | Moderate | Low |

Meta: The Largest Data Footprint

Meta collects data across Facebook, Instagram, WhatsApp, and Threads, building individual profiles that can include browsing history outside the platform through its off-Facebook activity tracking. As of 2024, Meta’s advertising platform can target users across more than 2,000 behavioural and demographic categories.

The ICO fined Facebook £500,000 in 2018 following its investigation into the Cambridge Analytica scandal, the maximum penalty available under the pre-GDPR Data Protection Act 1998 and widely regarded as insufficient given the scale of the breach.

TikTok: Data Sovereignty Concerns

TikTok’s ownership by Beijing-based ByteDance has made it the subject of ongoing scrutiny from the UK, US, and EU regulators. In 2022, TikTok admitted that employees in China had accessed the data of European and American users, including journalists.

The UK government banned TikTok from government devices in March 2023, though the platform remains freely available to the public. Despite this, 43% of UK adults aged 18 to 34 use TikTok weekly, according to Ofcom’s Online Nation 2024 report.

LinkedIn: The Professional Privacy Paradox

LinkedIn occupies a unique position. Users willingly share employment history, skills, and professional connections; this data has significant commercial value. In 2021, scraped data from 700 million LinkedIn profiles (representing approximately 93% of its user base at the time) was posted for sale online.

LinkedIn does not permit opt-outs from its data-sharing with Microsoft (its parent company) for product improvement purposes, a distinction that many business users are unaware of.

Consumer Sentiments and Privacy Behaviours

Understanding how people actually behave, rather than what they say they believe, is where privacy research gets most interesting.

The Privacy Paradox describes the well-documented gap between stated concern and actual behaviour. Research from the Oxford Internet Institute (2023) found that 76% of UK users say privacy is important to them, yet only 14% have changed their privacy settings in the past 12 months, and fewer than 1 in 10 have read a social media privacy policy in full.

Contributing factors include privacy settings that platforms deliberately bury in layered menus, a sense of fatalism (“my data is already out there”), and the perceived cost of opting out (losing access to social features or reduced content relevance).

  • 32% of UK adults have deleted at least one social media account due to privacy concerns (Ofcom, 2024)
  • Only 28% of UK users enable two-factor authentication on their primary social media accounts (NCSC Cyber Survey, 2023)
  • 53% of users accept cookie consent banners without reading them (Deloitte, 2024)

For businesses that manage customer communities or run social media advertising, these figures carry a practical implication: a growing segment of your audience is actively reducing their platform engagement, and targeted advertising based on behavioural data faces increasing resistance.

“The businesses we work with across Northern Ireland are increasingly asking us how to build direct relationships with their audience, through email lists and owned communities, rather than relying entirely on platforms they can’t control,” says Ciaran Connolly, founder of ProfileTree. “That shift isn’t just a marketing preference; it’s a response to genuine concerns about where social media data goes.”

The Cost of Privacy: Scams and Identity Theft Statistics

The financial consequences of social media data exposure are measurable and growing.

Action Fraud, the UK’s national fraud reporting centre, recorded 33,000 social media account takeover reports in 2023, a 27% year-on-year increase. Account takeovers are typically the precursor to identity fraud, as access to a social profile provides enough personal context to impersonate the user or access connected services.

The average financial loss per victim of social media-enabled identity fraud in the UK was £1,200 in 2023 (Cifas, National Fraud Database 2024). However, the reputational and emotional costs for businesses are harder to quantify: a compromised business account can permanently damage customer trust built over years.

Social media scraping (the automated collection of publicly available data) remains a persistent threat. Scraped data (names, locations, employer information, relationship details) is frequently combined with data from other breaches to create detailed profiles used in phishing attacks. The ICO confirmed in 2024 that scraping of publicly accessible social media data may still constitute a breach of UK GDPR if the scraping is done without a lawful basis.

How Businesses Can Reduce Their Social Media Privacy Risk

For SMEs managing their digital presence, several practical steps can reduce exposure without requiring a full retreat from social media.

Audit your public information. Search your business name and personal name across platforms and review what is publicly visible. Many accounts default to public visibility in ways owners are unaware of.

Separate personal and business accounts. Personal accounts linked to business pages create a data bridge that expands your exposure. Keep administrative roles tied to business-specific email addresses.

Review third-party app permissions. Most social media accounts accumulate connected apps over time, many of which retain access long after they’re actively used. Review and revoke these regularly.

Understand your advertising data footprint. If you run paid social campaigns, review your ad account’s audience data settings. Meta’s Ad Preferences tool, for example, lets you audit the data categories used to target your ads; the same data informs how your business page is profiled.

Train your team. The majority of social media account takeovers begin with phishing, not technical exploits. Staff who manage brand accounts need basic security awareness training, including recognising credential-harvesting attempts disguised as platform notifications.

ProfileTree’s digital training programmes cover social media security as part of broader digital literacy sessions for SMEs. Getting this right internally is far cheaper than managing the fallout from a compromised account.

What’s Next for Social Media Privacy

Several trends are reshaping social media privacy going into 2027 and beyond.

Decentralised platforms continue to grow. Bluesky surpassed 30 million users by early 2026, and Mastodon maintains a stable base of privacy-conscious users drawn away from X’s repeated policy changes. These platforms operate on open protocols without centralised data collection, though they remain a small fraction of mainstream platform reach.

Regulatory pressure on AI training data is already intensifying. The EU AI Act, which came into force in 2024, includes provisions on training data provenance that are now being actively enforced for platforms operating in Europe. The ICO confirmed in early 2026 that AI training data practices are a current enforcement priority, with several investigations ongoing.

Zero-party data strategies are becoming standard for forward-thinking businesses. As platform privacy changes continue to reduce the reliability of behavioural advertising, businesses building direct data relationships with their customers, through email, loyalty programmes, and first-party tools, are finding themselves at a structural advantage over those still dependent on social platform targeting.

Client Perspectives

Joanne McMillan: “I recently completed mentoring sessions with ProfileTree and found the experience extremely valuable. The guidance was knowledgeable, practical, and clearly tailored to my business needs. The sessions on social media and web design were particularly helpful.”

Suzanne Cromie: “Gabby’s fun and light-hearted approach has been a much-needed tonic, especially in areas of online business that can sometimes feel heavy or overwhelming. We covered social media, SEO, and much more. I would highly recommend both Gabby and ProfileTree.”

John Callaghan: “Excellent experience with ProfileTree Web Design. Gabbi guided me through the web design process clearly and helped me understand and use AI tools effectively. Professional, supportive, and highly recommended.”

Conclusion

Social media privacy statistics consistently show a population that is concerned but largely inactive. The gap between awareness and action is not a personal failing; it is the product of platforms deliberately designed to make data collection easy and opt-out difficult. For UK businesses, this matters doubly: your own data is at risk, and so is the trust of the customers you’re trying to reach on these platforms.

Building a digital strategy that accounts for privacy, one that owns your audience data where possible, trains your team, and works within the regulatory environment you operate in, is no longer optional. It is the baseline for responsible digital marketing.

Frequently Asked Questions

These are the questions UK users and business owners ask most about social media privacy — with straight answers based on current data and regulations.

What percentage of people are concerned about social media privacy?

Approximately 72% of UK adults are “very concerned” or “somewhat concerned” about how social media platforms handle personal data, according to a 2023 YouGov survey.

Which social media platform has the most privacy issues?

TikTok and Meta consistently receive the lowest transparency ratings. TikTok carries specific data sovereignty risks due to its Chinese parent company; Meta has the largest historical record of large-scale breaches.

Can social media platforms use my photos for AI training?

Yes, under current terms, Meta and X can use public posts and images for AI training. UK and EU users can request an opt-out, but the process is not prominently advertised.

How many UK users have experienced a social media hack?

Action Fraud recorded 33,000 social media account takeover reports in the UK in 2023, a 27% increase year-on-year.

Is TikTok more of a privacy risk than Facebook?

They carry different risks. TikTok’s risk is primarily about government data access through its Chinese ownership structure; Meta’s risk is about the sheer volume of commercial data collection and third-party sharing.

Does deleting a social media account delete all my data?

Not immediately. Most platforms retain your data for 30 to 90 days after account deletion, and some data held by third-party advertisers may persist beyond that period.
