Explore GDPR breach examples, real cases, fines, and lessons to protect personal data and ensure compliance.

GDPR Breach Examples: Real‑World Cases, Key Lessons & What You Can Learn

The General Data Protection Regulation (GDPR) has been in force since 25 May 2018 and represents one of the most significant shifts in data‑protection law in Europe. Organisations that process personal data of EU residents must comply with strict requirements. When they fail, the consequences can be serious. In this article we explore a number of GDPR breach examples, summarise the facts, look at common patterns of failure, and draw lessons that any organisation (and you personally) can use.


What counts as a GDPR breach?

A “breach” under GDPR covers more than just a hack or cyber‑attack. According to GDPR itself and guidance from supervisory authorities, it includes:

  • A personal data breach, meaning a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorised disclosure of or access to personal data.

  • Failure to comply with substantive obligations (e.g., lawful basis, consent, transparency, data subject rights) may also trigger enforcement even if no external hack occurred.

  • Controllers must report a personal data breach to the supervisory authority (Article 33 GDPR) without undue delay and, where feasible, within 72 hours of becoming aware of it; processors must notify the controller without undue delay.

  • Organisations must demonstrate they have implemented “appropriate technical and organisational measures” (Articles 5(1)(f) and 32).

In other words: a breach isn’t only when a hacker gets in – it can be when you fail to look after data properly, fail to be transparent, fail to secure it, or fail to act when you know something is wrong. One article notes that the most frequently referenced GDPR articles in enforcement decisions relate to “general principles, lawfulness and information security”.


Why the UK context matters

Although the UK has left the EU, the UK GDPR remains closely aligned with the EU regime and is enforced by the Information Commissioner’s Office (ICO). Many of the major breach cases below are UK‑based or were enforced by the ICO; the underlying principles apply across the EU.


Table 1: Selected GDPR Breach Examples

| # | Organisation | What happened | Data affected / scale | GDPR violation(s) | Penalty or outcome |
|---|---|---|---|---|---|
| 1 | British Airways plc (UK) | Between June and September 2018, attackers diverted website traffic to a fraudulent site and harvested customer data. | ~429,612 customers and staff; name, address, card number and CVV for ~244,000; card and CVV for a further ~77,000; card numbers only for ~108,000. | Inadequate technical and organisational security measures (Articles 5(1)(f), 32). | Notice of intent to fine £183.39m; final fine reduced to £20m. |
| 2 | Marriott International, Inc. (via Starwood) | Starwood’s reservation system was compromised in 2014; Marriott acquired Starwood in 2016 and discovered the breach in 2018. | ~339 million guest records globally; ~30 million relating to the EEA; ~7 million UK residents. | Inadequate security; insufficient due diligence in the acquisition; failure to detect the breach sooner. | ICO originally intended a £99.2m fine; final fine £18.4m. |
| 3 | Google LLC (fined by CNIL, France) | Lack of transparency and valid consent in processing for personalised advertising. | Not a hack but a regulatory sanction affecting users of Google services in France. | Breach of transparency and lawful-basis requirements (Articles 5, 6, 7). | €50m fine by CNIL (2019). |
| 4 | Clearview AI (facial-recognition firm) | Scraped billions of images of EU citizens without consent and processed biometric data. | Billions of images globally; sanctions in multiple member states. | Unlawful processing of biometric data; lack of consent; breach of transparency obligations (Articles 5, 6, 9). | Fines of ~€20m each in Italy, France and Greece. |
| 5 | Vastaamo Oy (Finland) | A psychotherapy centre’s database was hacked; sensitive health data was exposed and patients were extorted. | ~30,000 extortion victims; severe harm to individuals and to the organisation’s reputation. | Inadequate security; no encryption; no monitoring of privileged accounts. | ~€608,000 fine by the Finnish DPA. |

Why these examples matter

  • They show variety: from large scale consumer breaches (BA, Marriott), to tech‑giant regulatory consent issues (Google), to niche but highly sensitive breaches (Vastaamo).

  • They illustrate different types of violation: security failures, insufficient consent/transparency, and biometric processing without consent.

  • They highlight the scale and seriousness: millions or hundreds of millions of records, billions of images, sensitive categories of data (health, biometrics).

  • They show the regulatory consequences: large fines, reputational harm, forensic investigations.

  • They provide lessons for organisations of any size.


Deeper dive: Lessons from each example

1. British Airways (BA)

Facts & timeline: Between June and September 2018, an attacker gained entry via a compromised remote‑access gateway and diverted customer traffic from the BA website/app to a fraudulent site, where payment and booking data was harvested. The breach affected ~429,612 individuals: ~244,000 had their name, address, full card number and CVV compromised; a further ~77,000 had card and CVV details taken; and ~108,000 had card numbers only.
Why it breached GDPR: The ICO found BA’s security arrangements inadequate: payment systems were not properly segregated, and a vulnerable third‑party script went unpatched for an extended period.
Outcome: Proposed fine of £183.39m (1.5% of global turnover), reduced to a final penalty of £20m in light of mitigation and the Covid‑19 context.
Key take‑aways:

  • Basic security hygiene matters (e.g., prompt patching of known vulnerabilities).

  • Payment data attracts particular attention – full CVV numbers are higher risk.

  • Large numbers of affected people greatly increase severity under Article 83 factors.

  • Cooperating after the event can mitigate the penalty, but it will not avert regulatory scrutiny.

2. Marriott International via Starwood

Facts & timeline: Starwood systems were compromised in 2014; Marriott acquired Starwood in 2016; breach persisted undetected until 2018. Data of ~339m guests, ~30m EEA, ~7m UK. Passport numbers (encrypted & unencrypted), card details, arrival/departure info.


Why the breach: The ICO said Marriott had failed to secure the data they acquired (due diligence fail), failed to monitor privileged access and did not encrypt certain data (passport numbers).
Outcome: Originally intended £99.2m fine; final fine £18.4m.
Key take‑aways:

  • Mergers & acquisitions carry data‑protection risks: you inherit legacy systems, potential vulnerabilities.

  • Encryption of sensitive data (passport numbers, payment cards) is increasingly expected.

  • Monitoring privileged accounts and databases is critical – “who has elevated access” is a major vulnerability.

  • The period over which the breach persisted (2014‑2018) exacerbated the severity.

3. Google / CNIL (France)

Facts & timeline: In 2019, France’s CNIL fined Google €50m for lack of transparency and valid consent in how user data was processed for personalised advertising.
Why the breach: Google’s consent banner/process was not clear, details of data collection and processing were hidden behind multiple clicks, and users couldn’t easily understand how their personal data was used. Principles of GDPR (Articles 5, 6, 7) were breached.
Key take‑aways:

  • Not all breaches are hacking events. Non‑compliance with consent or transparency obligations is still a GDPR breach.

  • Even large tech firms can get significant fines for non‑technical compliance issues.

  • Data‑controllers must make consent meaningful: simple language, not hidden in long legal documents.

4. Clearview AI

Facts & timeline: Collected billions of images from web, built biometric templates, processed data of EU individuals without their consent. Multiple EU authorities fined them (e.g., Italy/France/Greece) for breaches of Articles 5, 6, 9, 12, 13, 14.
Why the breach: Biometric data is a “special category” under GDPR (Article 9). Clearview lacked explicit consent, failed to provide transparent information, and processed highly intrusive data.
Key take‑aways:

  • Special categories of personal data (biometrics, health, religion, ethnicity) trigger heightened obligations.

  • Gathering data at scale from public sources does not exempt you from GDPR obligations if EU‑residents are involved.

  • Data‑controllers should assess whether their processing involves special categories and whether they have the lawful basis (for example explicit consent) or necessity for reasons of substantial public interest.

5. Vastaamo (Finland)

Facts: A psychotherapy centre’s database was hacked; sensitive health‑ and therapy‑data of ~30,000 patients was exposed; criminals extorted patients, publishing files and demanding payment. The Finnish DPA fined the company ~€608,000.
Why the breach: The centre had inadequate security: root account had no password, data was unencrypted, no strong monitoring.
Key take‑aways:

  • Health/therapy data is highly sensitive – breach consequences are extreme for individuals.

  • Small and mid‑sized organisations are not exempt: high‑risk failures are not confined to large corporations.

  • Legal/regulatory risk is accompanied by direct harm to individuals: this is more than reputational damage.


Patterns & common risk‑factors

From the examples, we can identify several recurring themes.

Table 2: Common risk‑factors in GDPR breach cases

| Risk factor | Explanation |
|---|---|
| Inadequate technical safeguards | Failure to encrypt data, monitor privileged accounts, patch vulnerabilities or segregate systems (BA, Marriott, Vastaamo). |
| Legacy systems / acquisitions | Acquired entities bring inherited vulnerabilities (Marriott/Starwood). |
| Poor due diligence | Failure to assess data risk when acquiring another company or subsystem (Marriott). |
| Scale of data processed | Large volumes of personal or highly sensitive data magnify regulatory and reputational risk. |
| Special categories of data | Biometric data (Clearview) and health/therapy data (Vastaamo) attract increased regulatory scrutiny. |
| Consent/transparency failures | Processing without a clear lawful basis or with opaque user notices (Google). |
| Delay in detection/notification | A longer gap between breach start and detection increases severity and may violate Articles 33/34. |
| Third‑party/processor risk | Vulnerable third‑party scripts or access gateways (the BA fraudulent-site exploit). |
| Weak organisational culture or resourcing | The “people and process” side of security is often weaker than the technology side. |

Why these matter under GDPR

Under Article 83(2) factors for imposing administrative fines include: nature, gravity and duration of infringement; number of data‑subjects affected; damage suffered by data‑subjects; intentional or negligent character; categories of personal data concerned; and any mitigating or aggravating behaviour. Hence the patterns listed above often lead to higher fines or more serious actions.


Broader statistics & trends

  • One source lists the “Top 10 biggest GDPR fines” and notes that in November 2022, Meta Platforms Ireland Ltd. was fined €265 million over data‑scraping issues.

  • Another commentary notes that although fines grab headlines, only a small minority of DPA investigations end in a fine; one widely circulated estimate puts the figure at just 1.3% of cases.

  • Research on enforcement shows that Articles relating to “general principles, lawfulness, information security” are the most frequently referenced in decisions.

  • A blog of “Top 10 GDPR violations and incidents of 2018” lists examples such as a German social/dating site storing plaintext passwords for ~1.87 m users and a Portuguese hospital fined €400k for giving too many staff access to patient records.

These trends show that while many organisations get away without large fines, the risk is real and the high‑profile cases show what can happen if things go wrong.


Table 3: What the law requires – quick checklist

| GDPR requirement | What it means in practice | Why it matters |
|---|---|---|
| Lawful basis for processing (Art 6) | Identify which legal basis (consent, contract, legal obligation, vital interests, public task, legitimate interests) applies. | Processing without a lawful basis is unlawful. |
| Special categories (Art 9) | Sensitive data (biometrics, health, race, religion) requires an additional lawful basis and higher standards. | Heightened risk and higher scrutiny. |
| Transparency and information (Arts 12–14) | Provide clear information to data subjects about how you process their data. | Lack of transparency is a violation even without a hack (see the Google case). |
| Data protection by design and default (Art 25) | Build privacy into your systems and minimise the data processed. | A preventive measure that reduces breach likelihood. |
| Security of processing (Art 32) | Implement appropriate technical and organisational safeguards (encryption, access controls, resilience). | Often the failing point in breach cases (see BA / Marriott). |
| Breach notification (Arts 33–34) | Report breaches to the supervisory authority within 72 hours (where feasible) and notify data subjects where there is high risk. | Failure to notify is itself a breach. |
| Accountability (Art 5(2)) | Be able to demonstrate compliance (records, policies, audits). | Regulatory investigations assess whether you were accountable. |

Why “GDPR breach examples” matter for you & for organisations

  1. Awareness – Seeing real‑world cases helps translate abstract legal obligations into concrete risks.

  2. Benchmarking – Large organisations with huge budgets are still failing; so smaller organisations must pay attention.

  3. Risk reduction – By understanding what tends to go wrong, you can prioritise your own actions (e.g., encryption, third‑party vetting, incident response).

  4. Personal rights – If you are a data subject, knowing typical breaches helps you assess your own exposure and what you can do (e.g., request access, monitor for misuse).

  5. Business continuity – The fallout from a GDPR breach isn’t only the fine: reputational damage, cost of remediation, regulatory oversight, potential lawsuits from individuals (Article 82 allows compensation for damage).


More detailed lessons & actionable advice

A. Incident response & preparation

  • Have an incident response plan that includes detection, escalation, containment, notification to regulator (within 72 hours) and notification to data‑subjects if required.

  • Implement logging/monitoring: privileged access logs, database access monitoring, anomaly detection. The Marriott case emphasised this as a key weakness.

  • Test your environment regularly (penetration tests, vulnerability scans). In the BA case, a Magecart‑style attack exploited a third‑party script that had gone unpatched for years. (Wired)
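The privileged‑access monitoring recommended above can start very simply. The following sketch flags privileged‑account activity outside business hours; the log format, account names and thresholds are hypothetical, and a real deployment would feed a SIEM rather than print alerts.

```python
from datetime import datetime

# Hypothetical audit-log entries: (ISO timestamp, account, action).
AUDIT_LOG = [
    ("2024-03-01T03:12:00", "admin_db", "SELECT * FROM patients"),
    ("2024-03-01T10:05:00", "admin_db", "UPDATE bookings"),
    ("2024-03-01T23:47:00", "root",     "DROP TABLE audit"),
]

PRIVILEGED_ACCOUNTS = {"admin_db", "root"}

def off_hours_privileged_access(log, start_hour=8, end_hour=18):
    """Return privileged-account actions outside business hours."""
    flagged = []
    for ts, account, action in log:
        hour = datetime.fromisoformat(ts).hour
        if account in PRIVILEGED_ACCOUNTS and not (start_hour <= hour < end_hour):
            flagged.append((ts, account, action))
    return flagged

alerts = off_hours_privileged_access(AUDIT_LOG)
for ts, account, action in alerts:
    print(f"ALERT: {account} at {ts}: {action}")
```

Even a heuristic this crude would have surfaced the kind of unmonitored privileged activity the Marriott and Vastaamo investigations criticised; the point is to have some automated review of elevated access, then refine it.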

B. Third‑party & acquisition risk

  • Map your supply‑chain: any processor you use, any acquired business you integrate. Are you clear on their security posture?

  • On mergers/acquisitions, conduct data‑protection due diligence: What systems are you acquiring? Are they secure? Who has access? The Marriott case made this an enforcement focus.

  • For third‑party scripts (e.g., analytics, ad tags) or remote access gateways, ensure you’ve got controls, segmentation, multi‑factor authentication, and regular reviews.

C. Data minimisation, encryption & privilege control

  • Only collect what you need. Data minimisation helps reduce risk and regulatory exposure.

  • Encrypt sensitive personal data (cards, passports, health records). The Marriott case specifically found failure to encrypt passport numbers to be a key breach factor.

  • Limit privileged access: admin accounts, root accounts and remote access should be strictly controlled. The Vastaamo investigation found a root account with no password.

  • Regularly review dormant accounts, enforce separation of duties, and disable access when it is no longer needed.
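One practical minimisation technique is pseudonymisation: replacing direct identifiers with a keyed hash so that raw values never sit in analytics or reporting databases. This is an illustrative stdlib sketch (the field names and key handling are assumptions); note that pseudonymised data is still personal data under GDPR, it merely lowers the risk.

```python
import hashlib
import hmac

# Hypothetical secret key: in production, hold it in a key-management
# system, never alongside the pseudonymised data.
PSEUDONYMISATION_KEY = b"example-key-do-not-use-in-production"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. a passport number) with a
    keyed SHA-256 hash, so downstream datasets never hold the raw value."""
    return hmac.new(PSEUDONYMISATION_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"passport_no": "X1234567", "arrival": "2024-05-01"}
safe_record = {**record, "passport_no": pseudonymise(record["passport_no"])}
```

An HMAC (rather than a plain hash) is used so that an attacker who obtains the dataset cannot simply re-hash candidate passport numbers to reverse the mapping without also stealing the key. For data that must be recoverable, proper encryption with managed keys is the appropriate control instead.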

D. Consent, transparency, lawful basis

  • Make sure your privacy notices are clear, simple and tell the data‑subject exactly what data you process, for what purpose, with what rights.

  • For personalised advertising, profiling, or automated decisions (Article 22) make sure you disclose and offer opt‑out/consent as required. The Google and Clearview examples show non‑technical compliance is as important as security.

  • If you use “legitimate interests” as legal basis, document your balancing test and consider whether a less intrusive basis (consent) is more appropriate for large‑scale or sensitive data processing.

E. Training, culture & accountability

  • Security and data protection must be seen as board‑level issues, not just IT‑issues. Many breach cases cite organisational failures.

  • Provide staff training on data‑protection obligations, breach recognition, phishing awareness, third‑party risk.

  • Maintain records of processing (Article 30), data‑protection impact assessments (DPIAs) for high‑risk processing, and privacy audits.

F. After a breach: remediation & notification

  • Once a breach is detected, act quickly to contain it, remediate vulnerabilities, assess data‑subjects affected, and determine whether notification is required.

  • Notify supervisory authority (e.g., ICO) within 72 hours unless unlikely to result in risk to individuals. Then notify data‑subjects if high risk.

  • Communicate clearly and promptly with affected individuals about what data was affected, what you’re doing, how they can protect themselves. This also limits reputational damage.

  • Review and update policies/systems. Lessons learned should inform improvement.
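The 72‑hour notification window is easy to miscalculate under pressure, partly because the clock runs from awareness of the breach, not from the breach itself. A minimal sketch of the deadline arithmetic (times are illustrative):

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)  # Article 33(1) GDPR

def notification_deadline(awareness_time: datetime) -> datetime:
    """The 72-hour clock starts when the controller becomes *aware*
    of the breach, not when the breach actually occurred."""
    return awareness_time + NOTIFICATION_WINDOW

aware = datetime(2024, 5, 1, 9, 30, tzinfo=timezone.utc)
deadline = notification_deadline(aware)
print(deadline.isoformat())  # 2024-05-04T09:30:00+00:00
```

Using timezone‑aware timestamps avoids off‑by‑an‑hour errors around daylight‑saving changes; an incident‑response runbook should record the awareness time explicitly for exactly this calculation.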


Table 4: What individuals (data‑subjects) can do after a breach

| Action | Why it matters |
|---|---|
| Request access to your personal data (Subject Access Request) | Check what data the organisation holds on you and how it was processed. |
| Monitor your credit, cards and passwords | If payment or identity data was included, you may be at increased risk of fraud. |
| Change passwords / enable multi‑factor authentication (MFA) | If credentials were compromised, act quickly to protect other accounts. |
| Be alert for phishing | Breach victims are often targeted with phishing or social‑engineering attempts that exploit the event. |
| Check for proper notification | Organisations should notify you and explain how to protect yourself; if they do not, consider a complaint to the regulator. |
| Educate yourself on your rights | Under GDPR you have rights to erasure, rectification, restriction of processing, data portability, and more. |

Why “GDPR breach examples” keep increasing

  • The digital economy continues to grow; more personal data is collected, processed and stored. With scale comes risk.

  • Attackers keep innovating: large‑scale scraping, credential stuffing, supply‑chain attacks (third‑party scripts).

  • Regulators are becoming more experienced, and enforcement mechanisms are becoming stronger; cases set precedents.

  • Data protection awareness is higher: organisations now often face regulatory, reputational and class‑action risks simultaneously.

  • New technologies bring new risks: biometric systems, AI‑driven profiling, large‑scale behaviour tracking (Clearview example).


Practical checklist for organisations

Before an incident occurs, here’s a ready‑to‑use checklist:

  1. Map all personal data you hold (where, why, who can access it, retention period).

  2. Identify special category data and higher‑risk processing (biometrics, health, children, profiling).

  3. Review lawful basis for each processing activity (consent, contract, legitimate interests etc.).

  4. Review privacy notice and transparency information – is it clear, accessible, up to date?

  5. Conduct Data Protection Impact Assessments (DPIAs) for high‑risk processing.

  6. Review technical security: encryption, access control, monitoring, patching, third‑party scripts, remote access.

  7. Review organisational safeguards: training, incident response plan, policies, vendor management, governance.

  8. Ensure you have appropriate written contracts with processors, and you (as controller) remain accountable.

  9. Test your breach response: tabletop exercises, drills, escalation paths.

  10. Monitor changes: mergers, acquisitions, new technologies, new jurisdictions (international transfers).

  11. After any incident: log timeline, act quickly, notify authority/data‑subjects if required, review what went wrong and update controls.
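Checklist items 1–5 amount to maintaining a record of processing activities (Article 30) and triaging it for DPIA candidates. A minimal, illustrative register is sketched below; the field names and the DPIA heuristic are assumptions for the example, not a prescribed schema.

```python
# An illustrative record-of-processing register (Article 30 GDPR).
# Field names are assumptions for this sketch, not a mandated format.
PROCESSING_REGISTER = [
    {
        "activity": "customer_bookings",
        "purpose": "fulfil travel contracts",
        "lawful_basis": "contract",       # Article 6(1)(b)
        "special_category": False,
        "data_items": ["name", "email", "payment_card"],
        "who_can_access": ["booking_team", "finance"],
        "retention_days": 2555,
    },
    {
        "activity": "wellness_profiles",
        "purpose": "personalised health tips",
        "lawful_basis": "consent",        # Articles 6(1)(a), 9(2)(a)
        "special_category": True,         # health data
        "data_items": ["name", "health_notes"],
        "who_can_access": ["wellness_team"],
        "retention_days": 365,
    },
]

def needs_dpia(entry):
    """Simplified heuristic: special-category data is a strong signal
    that a Data Protection Impact Assessment is required."""
    return entry["special_category"]

for entry in PROCESSING_REGISTER:
    if needs_dpia(entry):
        print(f"DPIA candidate: {entry['activity']}")
```

Even a flat register like this answers the questions regulators ask first: what data you hold, why, on what basis, who can reach it, and how long you keep it. Real triage would also weigh scale, profiling and vulnerable data subjects.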


Final thoughts

The headline‑grabbing fines (like those against British Airways or Marriott) emphasise the financial risk. But the more fundamental risk is to the individuals whose personal data is exposed or misused. The GDPR is designed not just as a regulatory lever but as a safeguard for individuals’ fundamental rights.

When you search for “GDPR breach examples”, what you find are cautionary tales: big firms, big budgets, but still failing to operationalise data‑protection properly. The lesson for all organisations – large or small – is that compliance is not only a legal obligation but good business practice.

Whether you are an organisation looking to strengthen your data‑protection posture, or an individual wanting to understand how breaches happen and what your rights are, these examples show that the risk is real, the consequences are significant, and the good‑practice call‑to‑action is urgent.