
Sovereign AI in the UK: why your provider's internal culture is part of the product

Beyond the infrastructure: what actually makes a conversational AI system trustworthy, and how to evaluate your provider with criteria that most of the market is not yet using.

Daniele Dibitonto · 06 2026


What is sovereign AI?

Sovereign AI — sometimes referred to as private AI or on-premise AI — is a model of artificial intelligence in which the client's data remains under the client's control, processed within infrastructure governed by the legal jurisdiction the client operates in. Sovereign AI stands in contrast to the pay-per-use model of the major generalist providers, where data is routinely transferred across jurisdictions. It guarantees data sovereignty, regulatory compliance and predictable operational costs.

In practice, sovereign AI means designing systems — including conversational ones — within controlled environments (on-premise, private cloud, or hybrid clouds with sufficient contractual and technical guarantees), with access policies, traceability and auditing arrangements that allow the organisation to demonstrate where each piece of data sits, who processes it, and under which legal framework.

So far, nothing that a business already evaluating the market will not know. What follows is what most providers leave unsaid.

Why enterprise AI is changing in the UK

The UK enterprise AI market has shifted notably over the past eighteen months. Three forces have reshaped how organisations evaluate providers.

First, the regulatory landscape has matured. The work of the AI Safety Institute, the strengthened ICO guidance on AI and data protection, and the AI Opportunities Action Plan have collectively raised the bar on what compliance, transparency and accountability look like for AI systems serving UK organisations. Public-sector procurement, in particular, has moved from "experimental" to "structured" in terms of AI vendor requirements.

Second, the conversation has moved from responsible AI as a value statement to responsible AI as a contractual requirement. UK enterprises — especially in financial services, the NHS, and central government — are now writing into their RFPs explicit clauses on bias mitigation, explainability, audit trails and language inclusivity of conversational systems. Vendors who treated these as marketing language are finding themselves unable to answer.

Third, the post-Brexit reality has created a specific commercial dynamic: UK businesses serving European customers, processing EU residents' data, or operating subsidiaries across the EU now need providers structured for both UK and EU compliance simultaneously. This has revalued the position of European AI providers in the UK market.

Against this backdrop, choosing an AI partner has become a more layered exercise than it was even twelve months ago. The technical specifications still matter. But they no longer separate good providers from problematic ones — because, on a datasheet, everyone now claims the same things.

The five commercial pillars of glacom® AI

At glacom® AI we have built our offering around five pillars that define our market positioning:

  • Fixed costs. A predictable commercial model, an alternative to the pay-per-use logic that characterises most generalist providers.
  • Sovereign AI. Data sovereignty, solutions that do not exfiltrate client information to public models or unaccountable jurisdictions.
  • Sustainability (Green). Concrete attention to the energy and environmental impact of the systems we develop, grounded in certified, verifiable infrastructure.
  • Algorithmic transparency. Explainability of decision-making processes and auditability of the systems — the technical core of what is increasingly called explainable AI.
  • Inclusive language and behaviour. Conversational systems designed to be respectful, equitable and accessible to diverse audiences.

These are the pillars that any serious European AI provider should cover, but which the market in practice has not yet made standard. Especially the fifth, which deserves a closer look — because it is structurally different from the other four.

The fifth pillar: why it is structurally different

Unlike the other four pillars, inclusive language and behaviour cannot be solved by a choice of technical architecture or technology stack. You can buy a more efficient GPU cluster. You can contract data centres with green certification. You can adopt industry-standard explainability frameworks. But you cannot retroactively buy a respectful internal culture.

A conversational system inevitably inherits the linguistic patterns, the asymmetries and the cultural practices of the people and the teams that design it. If everyday sexism is tolerated within the development teams, if accessibility is dismissed as a "minor" issue, or if people are treated badly because of their origin, age, appearance or way of speaking — these patterns will find their way, subtly but technically, into the code, the datasets, the prompts, and ultimately the behaviour of the assistant.

This is not a hypothesis. It is a technical consequence well documented in the algorithmic bias literature of recent years, and it is exactly what the UK debate on responsible AI and AI bias has been increasingly focused on since 2024.

The internal culture of the company is part of the product. Especially when the product is a conversational system.

The product begins before the product

When a business evaluates a sovereign AI provider, it looks — quite rightly — at the standard metrics: response accuracy, latency, ability to handle complex intents, data sovereignty guarantees, service levels. These are necessary metrics.

There is, however, a less obvious but more discriminating question: how does the company that built that system treat its people?

It seems out of scope. It is not. A conversational system that will speak every day to hundreds or thousands of end users — customers, citizens, patients, employees — is a product whose relational quality depends directly on how people speak inside the organisation that built it.

This is particularly relevant in the sectors where sovereign AI is gaining traction fastest in the UK:

  • Financial services, where customer experience is a competitive factor and FCA scrutiny is rising.
  • The NHS and healthcare, where patients must feel heard and dignified treatment is a clinical issue, not a marketing one.
  • Central government and the public sector, where citizens expect respectful, accessible interactions and procurement frameworks are increasingly explicit on this.
  • Retail and e-commerce, where the conversational experience drives conversion.
  • The legal sector, where language is the raw material.

In all these sectors, a technically perfect but relationally inadequate system is a commercial failure — and, increasingly, a reputational risk.

Verifiable sustainability: the role of infrastructure

Talking about sustainable AI has become a rhetorical exercise in which almost every provider in the sector engages, with varying degrees of conviction. What distinguishes a statement of intent from a real commitment is the certifications of the infrastructure on which the system runs — because the environmental footprint of an AI system is determined less by the models than by the data centres that host them.

At glacom® AI the systems run on the infrastructure of Seeweb, a European cloud provider with sites in Milan, Frosinone, Lugano, Zurich and Sofia. The choice is not incidental: Seeweb is one of the few European providers that combines data sovereignty and certified sustainability in a single proposition.

The verifiable data points behind glacom® AI's green pillar, thanks to this partnership, are concrete:

  • 100% energy from certified renewable sources, with no after-the-fact offsets.
  • ISO 14001 environmental certification; Seeweb was among the first providers in the European cloud sector to adopt it.
  • PUE (Power Usage Effectiveness) of 1.2 in the most recent server farms — a value that places the infrastructure among the most efficient in the European market.
  • Signatory of the Climate Neutral Data Centre Pact, the European commitment to achieve climate neutrality in data centres by 2030.
  • Supporter of The Green Web Foundation, which allows independent public verification of the green nature of the hosting.
  • Dedicated Cloud GPU services for AI and machine learning designed with explicit energy-efficiency criteria.
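To make the PUE figure concrete, here is a minimal sketch of the arithmetic it implies; the rack load and the comparison PUE of 1.6 below are hypothetical illustrations, not Seeweb figures.

```python
# PUE = total facility energy / IT equipment energy.
# A PUE of 1.2 means every kWh of compute carries a further
# 0.2 kWh of facility overhead (cooling, power distribution).

def overhead_kwh(it_load_kwh: float, pue: float) -> float:
    """Facility overhead energy implied by a given PUE."""
    return it_load_kwh * (pue - 1.0)

# Hypothetical workload: a rack drawing a constant 10 kW for a year.
it_kwh = 10 * 24 * 365  # 87,600 kWh of IT energy

print(round(overhead_kwh(it_kwh, 1.2)))  # 17520 kWh at PUE 1.2
print(round(overhead_kwh(it_kwh, 1.6)))  # 52560 kWh at a less efficient site
```

The gap between those two numbers is the point: at scale, the facility's efficiency matters as much as the workload itself.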

This matters to us for two complementary reasons. The first is environmental: generating a single piece of text or image with AI can consume, according to recent estimates, the same energy as a full smartphone charge. If a business processes thousands of conversational interactions per day, the cumulative impact becomes significant — and the choice of infrastructure stops being a technical detail and becomes an environmental decision.
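The scale of that cumulative impact is easy to sketch. The per-interaction figure below simply mirrors the smartphone-charge estimate above, and the daily volume is a hypothetical workload, so treat the result as an order of magnitude, not a measurement.

```python
# Order-of-magnitude estimate of annual energy for a conversational
# workload. Both constants are illustrative assumptions.
KWH_PER_INTERACTION = 0.012   # roughly one full smartphone charge
INTERACTIONS_PER_DAY = 5_000  # hypothetical enterprise volume

annual_kwh = KWH_PER_INTERACTION * INTERACTIONS_PER_DAY * 365
print(f"{annual_kwh:,.0f} kWh per year")  # 21,900 kWh per year
```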

The second reason is commercial. Any client business that needs to report Scope 3 emissions — increasingly common under both the EU CSRD framework and the UK's own sustainability reporting expectations — needs to be able to declare, with verifiable certifications, that its technology providers operate on sustainable infrastructure. Running on Seeweb allows glacom® AI to give clients this documentation without resorting to generic offsets or back-of-envelope estimates.

European sovereign AI is not just software. It is also the infrastructure on which that software runs — and the certifications that infrastructure delivers to the end client.

Certified quality: ISO 9001 and management systems

The quality of an AI provider is not measured only by the quality of the models it ships. It is also measured by the consistency of the processes through which it ships them: how a project is managed, how the client is supported, how decisions are documented, how changes are traced.

The glacom® quality management system is certified ISO 9001:2015 for the consulting and training services the company provides to its clients — the part of the work in direct contact with the client throughout the entire lifecycle of a project, from initial analysis to system handover. The certification is issued by an accredited body and is renewed annually with independent audit.

It is the operational guarantee that "how we work" does not depend on whoever happens to be running the project at a given moment, but on a documented, auditable process.

What we are doing, and why we are saying so

At glacom® we have recently launched a structured organisational development programme that works explicitly on the coherence between internal culture and product quality. We are not telling this story for corporate communications purposes, but because we believe that anyone choosing a sovereign AI provider has the right to know what stands behind the product.

For this programme we have engaged the services of Reeducant les Violències, SCCL, a Barcelona-based co-operative specialising in organisational development, inclusive leadership and the design of conversational systems with inclusive language and behaviour. The programme covers all three operational departments of glacom® — AI, software and grant-funded finance — and reaches the entire international corporate structure of the company, present in seven countries.

It works on four operational levels:

  • Operational coordination and clarity across departments, because a disoriented team produces confused systems. Product traceability begins with the traceability of internal decisions.
  • Documented and traceable internal communication, to consolidate decisions rather than disperse them. Data governance inherits from language governance.
  • A structured protocol for managing situations of tension, both internal and with clients — because working with respect is the prerequisite, not the bonus, of good service.
  • Explicit work on the dimensions of inclusion, not as a list of good intentions but as declared, verifiable operational practices.

None of this is done because it looks good on a corporate page. It is done because it is the technical condition for the fifth pillar — inclusive language and behaviour in our systems — to be a real asset and not a marketing claim.

The dimensions of inclusion we work on

The programme identifies ten operational dimensions of inclusion that affect both the internal climate of the company and the quality of language in the systems we design:

  • Transphobia. Rejection of or discrimination against trans people.
  • Aporophobia. Rejection of or contempt for people in poverty.
  • Classism. Discrimination based on social class.
  • Ableism. Discrimination against disabled people — including invisible disabilities and neurodivergence.
  • Homophobia. Rejection of LGB people.
  • Racism. Discrimination based on racial or ethnic origin.
  • Xenophobia. Rejection of foreign nationals.
  • Lookism. Discrimination based on physical appearance.
  • Fatphobia. Rejection or stigmatisation of bodies that do not conform to dominant standards.
  • Sexism. Discrimination or attitude of superiority based on gender.

For a company with personnel in seven countries and clients operating in sectors as different as financial services, healthcare, public administration and e-commerce, this is not a philosophical list. It is a technical condition: each of these dimensions, if not addressed internally, ends up reproduced in the language of the conversational system that ships to market.

A conversational system is inclusive only if those who build it work in an inclusive environment. There is no technical shortcut.

A concrete consequence: the right to respect, in every direction

One of the operational practices the programme formalises is the right, for any person in the company, to interrupt an interaction that adopts offensive tones or disrespectful treatment — towards them or towards their colleagues — wherever that treatment comes from.

It looks like an HR detail. It is not.

A team operating in an environment where respect is guaranteed in every direction produces more careful systems: more rigorous documentation, more attentive code reviews, design decisions taken in construction rather than in defence. Protecting the internal climate is therefore a direct component of the quality of the product that ships.

Nor is this practice a loss for the commercial function: over time, the sales portfolio fills with motivated clients, satisfied with the work they receive and respectful of the people they interact with. And, collectively, glacom® learns to recognise the patterns of problematic clients before signing, building a pre-sales qualification capability that benefits the entire commercial team.

Working with respect is not a concession. It is a quality choice we actively protect, because we know how it translates into code.

Compliance: UK GDPR, responsible AI, the European angle

Inclusion is the ethical and cultural dimension of our positioning. There is, however, a complementary legal dimension, increasingly decisive for anyone developing sovereign AI in the UK and Europe.

In the United Kingdom the regulatory framework for AI is intentionally pro-innovation and sector-led: UK GDPR, ICO guidance on AI and data protection, the work of the AI Safety Institute, and sector-specific oversight from the FCA in financial services and the MHRA in healthcare. The principles of responsible AI and explainable AI — once academic, now contractual — sit at the centre of how UK organisations evaluate providers. In the European Union, the EU AI Act adds a horizontal framework, with binding obligations on high-risk systems, transparency, human oversight and protection of fundamental rights.

Glacom® operates across both spaces. For UK clients this matters in two distinct ways. For systems serving UK-only audiences, compliance is governed by UK frameworks — UK GDPR, ICO guidance, the relevant sectoral regulator. For UK clients with European exposure (subsidiaries, customers, processing of EU residents' data), the EU AI Act applies in addition, and operating with a provider already structured for EU compliance removes a significant layer of friction.
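The applicability logic described above can be sketched as a simple decision function. This is an illustrative simplification for scoping conversations, not legal advice, and the function and flag names are hypothetical.

```python
# Which regulatory layers apply, given a client's footprint?
# A deliberate simplification of the distinction drawn above.

def applicable_frameworks(serves_uk: bool, eu_exposure: bool) -> list[str]:
    frameworks: list[str] = []
    if serves_uk:
        frameworks += ["UK GDPR", "ICO AI guidance", "sectoral regulator"]
    if eu_exposure:  # EU subsidiaries, customers, or residents' data
        frameworks += ["EU GDPR", "EU AI Act"]
    return frameworks

print(applicable_frameworks(serves_uk=True, eu_exposure=False))
print(applicable_frameworks(serves_uk=True, eu_exposure=True))
```

The second case is where working with a provider already structured for EU compliance removes friction: the additional layer is inherited rather than bolted on.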

To support both layers we have engaged the services of BE LAW LAB, a law firm with offices in Barcelona and Madrid, specialising in digital regulation, privacy and artificial intelligence. We are structuring with them a complete compliance programme covering the audit of our AI system against the EU AI Act, the implementation of data protection measures arising from the system, the verification of international data transfers, the identification of Fundamental Rights Impact Assessments and Data Protection Impact Assessments, the audit of open-source licences in use, and the legal protection measures of the system itself.

Regulatory compliance, for us, follows the same logic as the ethical programme: first you do things the way they should be done, then you tell the market. It is the natural complement to cultural coherence — not its substitute.

Why a European AI provider matters in post-Brexit Britain

One of the structural shifts of the post-Brexit landscape, often discussed informally but rarely written down, is the increased value for UK businesses of working with European AI providers. The reason is straightforward: many UK companies serve European customers, process European residents' data, or operate subsidiaries across the EU. For all of them, working with an AI provider already structured around EU GDPR, EU AI Act readiness and EU data sovereignty removes a layer of compliance friction that a US-based hyperscaler cannot offer at the same depth.

Glacom® is a European company with corporate presence across seven countries — Italy, Spain, France, the United Kingdom, Estonia, Romania and Germany. We operate in the UK with a registered company and the capacity to deploy GPU-backed AI infrastructure locally where the project requires it, while preserving European data residency and compliance posture for clients with EU exposure.

For a UK CTO, CIO or procurement lead evaluating an AI partner, this is a quietly significant factor. It is not the loudest point on the datasheet, but it is the one that surfaces — repeatedly — the moment the legal and compliance review begins.

How to choose a sovereign AI provider in 2026

For anyone reading this article from the position of a corporate buyer, the point is simple. When evaluating a sovereign AI provider in 2026, it is worth asking these questions too — alongside the usual ones about price, latency and SLAs:

  • Where does the data sit at every stage of the system's lifecycle, and who can access it?
  • What cloud infrastructure do the models run on, and what environmental certifications does that infrastructure hold?
  • Is there a documented compliance plan for the relevant regulatory framework — UK GDPR for UK-only systems, EU AI Act for cross-border ones?
  • What internal practices does the company that builds the system have around responsible AI, AI bias mitigation and the language of the product?
  • How does the provider manage open-source licensing and intellectual property of the system?
  • Is there a declared, traceable coherence between sustainable infrastructure, internal culture, regulatory compliance and the relational quality of the conversational product?

The first two questions are standard in any serious RFP. The others, for the moment, are not. But they are the questions that separate a conversational system that will hold up over time from one that will generate reputational incidents within twenty-four months.

The fourth question, in particular, is the one few providers want to answer — because it forces the internal practice into the open, not just the technical datasheet. It is precisely the question a mature provider should be asking to be asked.

Conclusion: the next cycle of conversational AI

The sovereign AI market in the United Kingdom and in Europe is moving out of its purely technical phase — where differentiation was played out on on-premise deployment, data sovereignty and basic GDPR compliance — and into a new phase, in which differentiation is played out on four simultaneous axes: the sustainable, certified infrastructure on which the system runs, the relational quality of the systems themselves, the strength of internal governance at the provider, and the demonstrable compliance with the emerging regulatory frameworks on both sides of the Channel.

At glacom® AI we have chosen to build our offering around the full five pillars — including the fifth, which most still do not name — and around partnerships with providers that bring verifiable certifications to the table: Seeweb for sustainable, sovereign cloud infrastructure, Reeducant les Violències for organisational development and internal culture, BE LAW LAB for regulatory compliance. We are convinced that the next cycle of conversational AI will not be won on technical benchmarks, where the major generalist players are unreachable. It will be won on the coherence between what is promised and what is built, between the infrastructure chosen and the culture protected, between the commercial story and the daily practice.

On this, we believe, everything is still to be built. And we prefer to start now, declaring where we want to get to and accepting being measured against it.
