You Probably Need a DPIA. Here’s How to Do One Without Losing Your Mind.
Data Protection Impact Assessments sound like bureaucratic paperwork. They’re not. Done right, a DPIA is the single most practical tool for identifying privacy risks before they become compliance problems.
GDPR Article 35 requires a DPIA whenever processing is “likely to result in a high risk to the rights and freedoms of natural persons.” That’s deliberately vague. In practice, it catches more processing activities than most teams realize.
Building a new customer-facing AI feature? DPIA. Processing biometric data? DPIA. Large-scale profiling or monitoring? DPIA. Combining datasets from multiple sources to create user profiles? Probably a DPIA.
The Digital Omnibus proposal aims to harmonize DPIA requirements across the EU with unified trigger lists from the EDPB. That simplification is coming, but it hasn’t landed yet. Right now, you need to know when and how to conduct one.
When Is a DPIA Required?
The GDPR mandates DPIAs for processing that’s likely to create high risk. The European Data Protection Board (EDPB) and national data protection authorities provide trigger lists. If your processing hits any two of these nine criteria, you probably need a DPIA:
- Evaluation or scoring (including profiling and predicting)
- Automated decision-making with legal or similarly significant effects
- Systematic monitoring (surveillance of public areas, employee tracking)
- Processing of sensitive data or data of a highly personal nature
- Large-scale processing
- Matching or combining datasets
- Data concerning vulnerable individuals (children, employees, patients)
- Innovative use of technology (AI, IoT, biometrics)
- Processing that prevents individuals from exercising a right or using a service
Two out of nine? You’re doing a DPIA. And even if you only meet one criterion, it’s worth documenting your rationale for not conducting one.
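The two-of-nine screening above can be sketched as a simple checklist. This is a minimal illustration, not a legal test: the criterion names paraphrase the list, and the thresholds follow the EDPB rule of thumb.

```python
# Illustrative sketch of the EDPB two-of-nine screening rule.
# Criterion names paraphrase the nine triggers listed above.

EDPB_CRITERIA = {
    "evaluation_or_scoring",
    "automated_decision_making",
    "systematic_monitoring",
    "sensitive_data",
    "large_scale_processing",
    "matching_datasets",
    "vulnerable_individuals",
    "innovative_technology",
    "blocks_rights_or_services",
}

def dpia_screening(triggered: set[str]) -> str:
    """Return a rough screening outcome from the set of triggered criteria."""
    unknown = triggered - EDPB_CRITERIA
    if unknown:
        raise ValueError(f"unknown criteria: {unknown}")
    if len(triggered) >= 2:
        return "DPIA required"
    if len(triggered) == 1:
        return "DPIA likely not required - document rationale"
    return "No DPIA indicated"

# Example: a customer analytics platform profiling users across datasets
print(dpia_screening({"evaluation_or_scoring", "matching_datasets"}))
# -> DPIA required
```

The point of encoding this at all is the audit trail: the set of triggered criteria, not just the yes/no answer, is what you document.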
The DPIA Process: Seven Steps
Step 1: Describe the processing
Be specific. Not “we collect user data.” Instead: “We collect email addresses, browsing history, and purchase records from registered users via our web application and mobile app. This data is processed in AWS eu-central-1 and shared with our email marketing provider (Mailchimp) for personalized product recommendations.”
Include: data types collected, data sources, purposes of processing, data recipients, retention periods, and technical systems involved.
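One way to keep Step 1 specific is to capture those fields as a structured record. The field names below mirror the checklist above; the example values and the retention period are illustrative assumptions, not a mandated format.

```python
# Illustrative structure for the Step 1 processing description.
from dataclasses import dataclass

@dataclass
class ProcessingDescription:
    name: str
    data_types: list[str]
    sources: list[str]
    purposes: list[str]
    recipients: list[str]
    retention: str          # quantify this; vague retention is a red flag
    systems: list[str]

desc = ProcessingDescription(
    name="Personalized product recommendations",
    data_types=["email address", "browsing history", "purchase records"],
    sources=["web application", "mobile app"],
    purposes=["personalized product recommendations"],
    recipients=["Mailchimp (email marketing)"],
    retention="24 months after last activity",  # assumed for illustration
    systems=["AWS eu-central-1"],
)
```

A record like this also doubles as input to your Article 30 register, so the description isn’t written twice.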
Step 2: Assess necessity and proportionality
Why do you need this processing? Can you achieve the same purpose with less data or less intrusive methods?
This is where most teams get uncomfortable. Your marketing department wants to track everything. The DPIA forces the question: do you actually need all of that? Often the answer is no.
Document your legal basis for each processing purpose: consent, legitimate interest, or contract performance. If you’re relying on legitimate interest, document the balancing test.
Step 3: Identify the risks
What could go wrong for the individuals whose data you’re processing? Think beyond data breaches. Consider:
- Unauthorized access to personal data
- Data used for purposes the individual didn’t expect
- Inaccurate data leading to wrong decisions
- Individuals unable to access, correct, or delete their data
- Discrimination based on automated profiling
- Chilling effects on behavior due to surveillance
Rate each risk by likelihood and severity. High likelihood and high severity? That’s your priority.
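The likelihood-and-severity rating can be made explicit with a small scoring matrix. The 1–3 scales and the priority threshold here are illustrative assumptions, not regulatory values; what matters is that you apply the same scale consistently.

```python
# Minimal likelihood x severity rating, as sketched in Step 3.
# The 1-3 scales and thresholds are illustrative assumptions.

LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_score(likelihood: str, severity: str) -> int:
    return LEVELS[likelihood] * LEVELS[severity]

def triage(likelihood: str, severity: str) -> str:
    score = risk_score(likelihood, severity)
    if score >= 6:        # e.g. high/high (9) or high/medium (6)
        return "priority"
    if score >= 3:
        return "mitigate"
    return "accept"

print(triage("high", "high"))    # -> priority
print(triage("low", "medium"))   # -> accept
```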
Step 4: Identify mitigation measures
For each risk, define concrete measures to reduce it. Technical measures: encryption, access controls, anonymization, data minimization. Organizational measures: training, policies, access reviews, incident response procedures.
The goal isn’t zero risk. It’s proportionate risk reduction. If you’re processing millions of health records, your mitigation bar is higher than if you’re processing business email addresses.
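A simple risk-to-mitigation register makes it easy to spot risks that still lack any measure, which is the most common gap regulators find. The risk names and measures below are illustrative.

```python
# Illustrative Step 4 risk register: each risk maps to technical and
# organizational measures. Entries without measures get flagged.

register = {
    "unauthorized access": {
        "technical": ["encryption at rest", "role-based access control"],
        "organizational": ["quarterly access reviews"],
    },
    "unexpected secondary use": {
        "technical": ["purpose-bound pipelines", "data minimization"],
        "organizational": ["privacy training"],
    },
    "inaccurate profiling decisions": {},  # no measures defined yet
}

def unmitigated(register: dict) -> list[str]:
    """List risks that still lack any mitigation measure."""
    return [risk for risk, measures in register.items()
            if not any(measures.values())]

print(unmitigated(register))  # -> ['inaccurate profiling decisions']
```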
Step 5: Consult your DPO
If you have a Data Protection Officer, involve them throughout the process. Not just for sign-off at the end. Their input during risk assessment and mitigation planning is where the value lives.
If you don’t have a DPO, consult someone with data protection expertise. An external DPO service, a privacy consultant, or legal counsel.
Step 6: Document and record
The DPIA document should include: processing description, necessity assessment, risk assessment, mitigation measures, DPO consultation outcome, and approval decision.
Keep it as a living document. Update it when the processing changes, when new risks emerge, or at least every three years (per Article 29 Working Party guidance).
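The review cadence can be enforced mechanically rather than remembered. This sketch assumes an immediate review on any processing change and a three-year maximum interval otherwise, per the guidance cited above; the 365-day year is a simplification.

```python
# Sketch of the Step 6 review cadence for a living DPIA document.
# Assumes review on change, otherwise at most every three years.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=3 * 365)

def next_review(last_reviewed: date, processing_changed: bool) -> date:
    """Review immediately when processing changes, else within three years."""
    if processing_changed:
        return date.today()
    return last_reviewed + REVIEW_INTERVAL
```

Wiring this into a quarterly compliance check is usually enough to keep the document from going stale.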
Step 7: Prior consultation (if needed)
If your DPIA concludes that residual risk remains high despite mitigation, you must consult your supervisory authority before proceeding. In Germany, that’s the relevant Landesdatenschutzbehörde (state data protection authority).
This is rare. If you’ve done a thorough job on mitigation, you should be able to reduce residual risk to acceptable levels. But the mechanism exists for cases where the processing is necessary despite significant risk.
Common DPIA Triggers for Software Teams
AI and machine learning. Any AI system that processes personal data for predictions, recommendations, or automated decisions. Especially relevant with the EU AI Act intersecting GDPR. High-risk AI systems under the AI Act will almost always trigger a DPIA under GDPR.
Customer analytics platforms. Combining purchase history, browsing behavior, demographics, and engagement data to build user profiles. That’s evaluation, scoring, and matching datasets. Three criteria hit.
Employee monitoring tools. Screen recording, keystroke logging, location tracking, productivity scoring. Systematic monitoring of vulnerable individuals (employees are considered vulnerable in GDPR terms). High-risk territory.
IoT and connected products. Smart devices that collect data continuously in private spaces. Systematic monitoring plus innovative technology.
Health and wellness applications. Processing health data, even self-reported, at any meaningful scale.
What a Good DPIA Looks Like
Short answer: specific, honest, and actionable.
A bad DPIA reads like a template with blanks filled in. “We have implemented appropriate security measures.” That tells a regulator nothing.
A good DPIA reads like an engineering document.
“User personal data is encrypted at rest using AES-256 with customer-managed keys in AWS KMS. Access is restricted to three named administrators via IAM roles with MFA. All access events are logged to CloudTrail with a 365-day retention.”
Specific. Named technology. Quantified retention. That’s what regulators want to see: name the technology, describe the architecture, and quantify wherever you can.
The Digital Omnibus Changes
The proposed Digital Omnibus regulation would harmonize DPIA requirements across the EU. The EDPB would compile unified lists of processing activities that do or don’t require a DPIA, plus a standard template and methodology. Once approved, these EU-wide lists would supersede national lists.
This is good news. Currently, each EU member state has its own DPIA trigger list, creating inconsistency. A German trigger list can differ from a French one. Harmonization means one set of rules.
The proposal is still moving through Parliament (feedback period runs until March 2026). Don’t wait for it. Conduct DPIAs based on current requirements. If harmonization simplifies things later, you’ll already be ahead.
For the broader regulatory context, see our pillar guide on EU compliance for software teams. For the architecture patterns that make DPIA compliance easier, read Building GDPR-Compliant Software Architecture. And if AI systems trigger your DPIA, our EU AI Act guide covers the intersection.
Need help conducting a DPIA for your software project? Let’s work through it together. We help teams identify privacy risks early and design systems that comply by default.