These or similar thoughts are probably familiar to many in their professional lives. But let's be honest – can you simply hand over all your company's internal information to an AI for support?
Anyone who remembers their internal compliance training knows the answer quickly: No – that shouldn't go to ChatGPT & Co. Yet it happens anyway, as current figures show: Around half of all employees use unapproved AI tools.1
And here's where the tension begins. We all want to do our jobs as well as possible, but of course we can't put trade secrets into just any cloud.
The problem nobody talks about
I'm firmly convinced there's a massive problem with shadow AI: employees who aren't officially allowed to use AI but still want to deliver perfect results. Confidential data ends up in the cloud – without anyone noticing.
Shadow AI isn't an exception – it's the logical consequence of the lack of official solutions.
In normal office environments, this is already problematic. In regulated industries, it's a real risk. In 2023, it became public that Samsung employees had accidentally pasted confidential source code into ChatGPT.2 The response: An internal AI ban.
The regulatory reality
Let's talk about concrete requirements. In critical infrastructure sectors – energy, water, healthcare, transportation – regulations like Germany's IT Security Act 2.0 demand "appropriate organizational and technical measures according to the state of the art." Audits every two years. Attack detection systems. Documentation requirements that turn cloud dependencies into audit risks.
In Pharma and Life Sciences, 21 CFR Part 11, EU-GMP Annex 11 and GAMP 5 apply. Validated systems, continuous data integrity, audit trails for every change. When AI is used in batch release or quality decisions, it must be documented, validated, and explainable in case of errors.
In the Defense sector, the protection of classified information and national security interests come into play. Many organizations explicitly require air-gap deployments. Classified data simply cannot be stored in a data center that isn't under government control.
In short: Cloud AI collides with fundamental principles here.
Why cloud AI fails in these environments
The problem isn't that cloud AI is bad. The problem is: Once data is in the cloud, you've given up control.
Add to that the current political uncertainty. US providers are subject to the CLOUD Act – American authorities can demand access to data even when it's stored on European servers. After the Schrems II ruling by the European Court of Justice, the legal situation is fragile anyway. And anyone following current political developments in the USA is rightly asking: How reliable are tech giants' promises about data sovereignty?
For regulated industries, this means: Any dependency on cloud AI is a risk that needs to be explained in the next audit.
In the Defense sector, there's increasing talk of "Sovereign AI" – AI components that must run within sovereign infrastructures. Export controls, classification requirements, NATO standards. Air-gap isn't an option here – it's a prerequisite.
Air-gap as a design principle
The good news: There's another way. On-premise LLM platforms offer a counter-model to the cloud default. Models run within your own firewall or completely offline, integrated into existing security policies and encryption standards.
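To make this concrete, here is a minimal sketch of how an application might talk to a locally hosted open-source model through an OpenAI-compatible endpoint, as exposed for example by llama.cpp's server or Ollama. The URL, port, and model name are assumptions that depend on the local setup; the point is that no request ever leaves the corporate network.

```python
# Minimal sketch: querying a locally hosted open-source model through an
# OpenAI-compatible endpoint (e.g. llama.cpp's server or Ollama on localhost).
# The URL, port, and model name below are assumptions -- adjust to your setup.
import requests

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumed local server
MODEL_NAME = "mistral"  # assumed locally installed model

def ask_local_llm(prompt: str) -> str:
    """Send a prompt to the local model and return its answer."""
    response = requests.post(
        LOCAL_ENDPOINT,
        json={
            "model": MODEL_NAME,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=120,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # The request and the answer never cross the corporate network boundary.
    print(ask_local_llm("Summarize our data classification policy in three bullet points."))
```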
For critical infrastructure operators, this deployment model means: AI systems can be integrated into the existing ISMS and regulatory scope. ISO 27001 controls, industry-specific security standards, and regulatory requirements can be applied consistently.
For Pharma, a local stack enables implementation of Part 11 and Annex 11 with classical computer system validation. IQ/OQ/PQ, test documentation, audit trails for every AI function – all achievable when the system is under your own control.
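As an illustration of what an "audit trail for every AI function" can look like at the code level, here is a minimal sketch of an append-only log for AI interactions. The field names and log location are hypothetical, and this is not a validated Part 11 implementation – just a starting point under those assumptions.

```python
# Minimal sketch of an append-only audit trail for AI calls. Field names and
# the log location are illustrative, not a validated Part 11 implementation.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

AUDIT_LOG = Path("/var/log/ai/audit_trail.jsonl")  # assumed log location

def log_ai_call(user: str, model: str, prompt: str, response: str) -> None:
    """Append one record per AI interaction: who asked what, when, with which model."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        # Store hashes rather than raw content if the prompt itself is sensitive.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "response_sha256": hashlib.sha256(response.encode()).hexdigest(),
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```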
What we built from this
Precisely this design principle – local, controllable AI – is the foundation of our solutions. At BRUCHMANN [TEC] INNOVATION, we put it into practice:
FORTRESS OFFICE is an office suite with integrated local AI. Edit documents, manage emails, coordinate projects – while using an AI that never leaves the corporate network.
The business benefit: Instead of banning AI and risking shadow AI, you give your employees a secure, auditable alternative. Security analyses recommend exactly this: provide secure tools instead of issuing bans.3 No policy exceptions, no discussions with auditors.
Steroids is our local coding automation platform. AI-assisted software development, completely offline. Code, requirements, and tests stay within the customer's network. The AI runs on dedicated hardware and can be embedded in security concepts for critical infrastructure, GxP, and classified environments.
Not "AI despite air-gap," but "air-gap as a prerequisite for responsible AI."
What this means in practice
The technology for local AI is here. Open-source models like Llama, Mistral, or Qwen are good enough for production use. Hardware is becoming affordable – a server with 32 GB of RAM is sufficient for many use cases. With our products Steroids and FORTRESS OFFICE, not every workstation needs to meet these requirements – only the internal company server.
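A rough back-of-the-envelope calculation shows why 32 GB can be enough: a 4-bit quantized model needs on the order of half a byte per parameter plus runtime overhead. The figures below are rules of thumb, not vendor specifications.

```python
# Rough RAM estimate for loading a quantized open-source model.
# bits_per_weight and overhead_factor are rules of thumb, not exact values.
def estimated_ram_gb(params_billion: float, bits_per_weight: int = 4,
                     overhead_factor: float = 1.3) -> float:
    """Approximate RAM in GB needed to hold a quantized model in memory."""
    bytes_total = params_billion * 1e9 * (bits_per_weight / 8)
    return bytes_total * overhead_factor / 1e9

for size in (7, 13, 32):
    print(f"{size}B model @ 4-bit: ~{estimated_ram_gb(size):.1f} GB RAM")
# A 7B model lands around 4-5 GB, a 13B model around 8-9 GB --
# both fit comfortably on a single server with 32 GB of RAM.
```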
What has been missing until now is software that brings the two together: office productivity and local AI, without cloud dependency, designed for regulated environments.
We're changing that right now.
1 SecurityWeek: "Study Finds 50% of Workers Use Unapproved AI Tools" (2024)
2 Cyberhaven: "11% of data employees paste into ChatGPT is confidential" (2023)
3 AWS Whitepaper: "Shadow generative AI" (2024)