GDPR and AI for the German mid-market: what really matters.
EU servers, DPA, EU-based LLMs. What mid-market companies really need to look at when adopting AI, without getting dazzled by certificate lists.
By Florian Wessling
The question that comes up in every other first call: “How do you handle data protection?” The question is fair. But the answers most mid-market companies hear at AI conferences are marketing noise. Long certificate lists, ISO logos, compliance theatre.
What actually matters comes down to four points. Get them right and you have data protection under control. Skip them and an audit will hurt.
1. Who runs the model
For every AI solution you have to be able to answer: in which data centre does the model run, who has technical access, in which country is the provider located, and which legal jurisdiction governs them.
The major providers offer clear setups today: OpenAI provides EU data residency, Anthropic has an EU variant, Google offers Vertex AI in European regions, and Mistral is based in France. What you don’t want: models running as a shadow service over US endpoints, without your contract reflecting that.
2. Data Processing Agreement (DPA)
As soon as personal data flows through an AI system, you need a DPA with the provider and with every intermediate processor. This is not optional. It’s Article 28 of the GDPR.
The simple test: if you don’t have a signed DPA in your hands within two business days, the solution isn’t production-ready for the mid-market. Reputable providers have a standard DPA template publicly available. With White Fox, you receive the DPA together with the offer, not after the setup contract.
3. Which personal data really has to go in
Most AI setups we see at competitors push significantly more personal data into the model than necessary. A Service Machine answering FAQ requests doesn’t need the customer’s full name. A Sales Machine qualifying leads can work with a pseudonymous lead ID and only enrich the full name at CRM handoff.
Data minimisation is the right tool. Before every setup we go through a list with you: which fields are needed for the task, which aren’t, which are merely “nice to have”. Anything that’s just “nice to have” gets dropped from the data flow.
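The pseudonymous-lead-ID idea above can be sketched in a few lines of Python. This is an illustrative example, not White Fox's actual implementation: the field names, the salted-hash scheme, and the `pseudonymize_lead` helper are all assumptions made for the sketch.

```python
import hashlib

# Fields the AI task actually needs; everything else is dropped
# before the record leaves the CRM boundary. (Illustrative list.)
ALLOWED_FIELDS = {"inquiry_text", "product_interest", "company_size"}

def pseudonymize_lead(lead: dict, secret_salt: str) -> dict:
    """Reduce a lead record to the allowed fields and replace the
    identity with a stable pseudonymous ID."""
    # Same lead always maps to the same ID, so the CRM can re-link
    # results at handoff, but name and email never reach the model.
    raw = f"{secret_salt}:{lead['email']}".encode()
    lead_id = hashlib.sha256(raw).hexdigest()[:16]

    minimized = {k: v for k, v in lead.items() if k in ALLOWED_FIELDS}
    minimized["lead_id"] = lead_id
    return minimized
```

The salt stays inside your system; without it, the ID cannot be reversed into an email address, and the CRM can still enrich the full name at handoff by recomputing the same ID.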
4. What you can hand to your data protection officer
A Data Protection Impact Assessment (DPIA) is not mandatory for every AI application. But it is sensible for every one that processes customer data. And it is the most honest way to think the system through yourself: what happens here, what risks does it carry, and how do we mitigate them?
At White Fox Automations the DPIA template is part of every setup. You don’t have to write a Word document alone. You receive a pre-filled template that you finalise with your data protection officer.
What gets sold as important but is secondary
ISO 27001 as a provider certificate. Nice to have. Says nothing about the specific AI application in your business. ISO 27001 is a statement about the provider’s information security management system, not about the GDPR fitness of your particular use case.
On-premise as a silver bullet. Sounds secure. It is only secure if your IT actually patches the models, runs monitoring, and ships updates. When in doubt, a trustworthy EU host is safer than a local server that hasn’t seen a security update in eighteen months.
EU AI Act. Important, but most mid-market AI applications fall into the low-risk class. If you are not building a high-risk application (no candidate screening, no credit decisions, no medical diagnosis), the obligations are mostly transparency and documentation requirements, which we already meet in our setups.
How we put this into practice
Three things are the same in every White Fox setup.
EU servers. Your application runs on Mittwald in Germany. Vector stores, where used, stay in the EU. LLMs only with EU contracts.
DPA as standard. You receive the DPA with the setup offer. Not when you ask. On the first contact.
Data flow diagram. A one-page diagram showing which data moves where, where it is stored, who has access, and when it is deleted. The diagram is part of the DPIA and part of your documentation.
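The substance of such a diagram can also be kept as a machine-readable record next to the DPIA. The structure below is a sketch with invented example values (systems, access lists, and retention periods are illustrative, not a real customer setup):

```python
# Illustrative data-flow inventory: what moves where, where it is
# stored, who has access, and when it is deleted.
DATA_FLOWS = [
    {
        "data": "support request text",
        "from": "contact form",
        "to": "LLM (EU region)",
        "storage": "vector store, EU data centre",
        "access": ["support team", "provider ops (contractual)"],
        "retention_days": 90,
    },
    {
        "data": "pseudonymous lead ID",
        "from": "lead qualification",
        "to": "CRM",
        "storage": "CRM, Germany",
        "access": ["sales team"],
        "retention_days": 365,
    },
]

def overdue(flows: list, age_days: int) -> list:
    """Flows whose retention period a record of this age exceeds."""
    return [f for f in flows if age_days > f["retention_days"]]
```

A record like this makes the deletion deadlines checkable instead of decorative: a nightly job can flag every flow whose retention period has run out.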
Where the line is
If an AI provider tells you “we’ll handle data protection later”, turn around and walk. If they tell you “GDPR is overrated”, turn around and run.
What you want is a provider who, in the first call, brings up DPA, data flows, and retention periods on their own, because they know that mid-market trust is built or lost at exactly this point.
We bring it up on our own. You’re welcome to test us.