
   PERSPECTIVES   

strategic perspective

When scaling AI,
Operating Model matters


Enterprise AI adoption has been increasing steadily. As of 2024, 78% of organizations reported using AI in at least one business function, up from 55% in 2023. With estimated investments exceeding $300 billion, AI adoption is expected to have crossed 80% in 2025.

 

As organizations move beyond experimentation to large-scale production deployments, scaling AI across the enterprise is becoming one of the top priorities for business. In response, many organizations are strengthening their data and AI teams to support this growth.

 

Realizing the full value of AI requires cross-functional governance at the enterprise level. Without it, AI initiatives risk fragmentation, siloed teams, duplicated effort, and regulatory and ethical challenges. The right governance supports agility, mitigates risk, and enables sustainable AI deployments and organizational change with measurable ROI.

“The greatest barrier to AI is not technology—it’s understanding. At Mazda Motors Europe, we’re turning that challenge into an opportunity, empowering every colleague to unlock the value of AI via our Friday is AI Day, Masterchef & Literacy programs.”


Kristel Geerts
Sr Manager – IT as a Service, Mazda

Establishing an AI Center of Excellence provides a strategic framework to drive enterprise-wide AI adoption in a structured and scalable manner. Beyond defining AI strategy and governance, the CoE can play a critical role in managing the AI investment portfolio, overseeing budgets and tracking ROI from AI initiatives. It is responsible for establishing and operating the foundational AI platforms required to deploy and scale AI solutions securely and reliably. In addition, the CoE should drive AI literacy and change management across the organization, ensuring that users are equipped to adopt AI effectively and that workforce restructuring is well prepared. 


We have seen organizations adopt the following CoE models:

Figure 1: AI operating models 


Federated Model

In a federated model, business units independently incubate their own AI teams. AI engineers and data scientists are embedded within product teams, often working part-time on these AI initiatives. They rely on open-source tools or localized platforms to experiment and deliver AI solutions.

 

Because the teams are business-led, they bring strong domain knowledge and deliver solutions that are well aligned to their respective business areas. Decision-making is faster, allowing teams to experiment, iterate quickly, and accelerate delivery.

However, in a federated setup the use cases can become siloed, focusing on productivity improvements within individual business areas. They may not be fully aligned with enterprise-level priorities and rarely lead to large-scale transformational outcomes. There is limited coordination across business units, resulting in minimal sharing of data, models, infrastructure, or best practices. While this model enables speed, it can lead to inconsistent standards and challenges in scaling enterprise-wide.

Centralized Model

In the centralized model, a central AI CoE leads AI initiatives across the organization. The AI CoE owns the AI strategy, manages a portfolio of AI use cases, and maintains responsibility for platforms, data architecture, infrastructure, and deployment environments. As AI adoption expands, it also takes on responsibility for increasing AI literacy across the organization. The AI CoE is likewise responsible for meeting legal, compliance, data privacy, and ethical requirements.

Business units consume AI solutions developed by the CoE or submit requests for new use cases. This structure enables strong governance, standardization, and risk control, making it suitable for organizations operating in highly regulated industries such as insurance, banking, or healthcare. However, it may limit responsiveness to specific business-unit needs and slow innovation at the edge.

Hub-and-Spoke Model

The hub-and-spoke model combines elements of both federated and centralized approaches. A central AI CoE acts as the hub, responsible for establishing shared platforms, infrastructure, governance standards, and model risk and ethics frameworks. 


AI teams (the spokes) are embedded within business units and consist of interdisciplinary teams, typically combining domain experts with key AI roles. These teams develop and manage AI products aligned with business priorities while leveraging the shared capabilities provided by the central hub. The management of the portfolio of AI use cases and related business cases typically sits within each business unit.


Figure 2: Team structure representation 

The AI operating model depicted above is structured as hub-and-spoke. Business-aligned AI Agile Teams are responsible for identifying, prioritizing, and delivering AI use cases. The centralized AI CoE acts as an enabling hub, defining AI strategy, architecture, governance, and standards while also providing shared platforms such as data, MLOps/LLMOps, and agent orchestration capabilities. Enterprise IT provides foundational cloud infrastructure, security, and integration services to ensure reliability and compliance but does not own AI products or use-case delivery.
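To make these boundaries concrete, the sketch below (Python, purely illustrative) encodes the division of responsibilities described above as data. The layer labels and duty lists are assumptions paraphrased from Figure 2, not prescribed terminology.

# Illustrative sketch: the hub-and-spoke split of responsibilities as data.
OPERATING_MODEL = {
    "hub (AI CoE)": [
        "AI strategy, architecture, governance, standards",
        "shared platforms: data, MLOps/LLMOps, agent orchestration",
    ],
    "spokes (business-aligned AI agile teams)": [
        "identify, prioritize, and deliver AI use cases",
        "own the use-case portfolio and business cases per unit",
    ],
    "enterprise IT": [
        "cloud infrastructure, security, integration services",
        "no ownership of AI products or use-case delivery",
    ],
}

for layer, duties in OPERATING_MODEL.items():
    print(f"{layer}:")
    for duty in duties:
        print(f"  - {duty}")

A structure like this can double as a living RACI: if a duty cannot be placed in exactly one layer, the operating model is ambiguous.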


Mazda Motors Europe shifts AI into high gear—delivering impact today and defining advantage for tomorrow


Mazda Motors Europe (MME) is progressively embedding AI across its IT and business operations as part of its Data & AI strategy. MME took a bold step to incubate an AI Lab last year with a charter to encourage research and innovation in AI.


The AI Lab is a centralized hub that engages across all business units to identify, prioritize, and deploy AI use cases. It is complemented by an AI expert team drawn from different streams such as enterprise architecture, security, legal, HR, and data privacy. Together, the group has devised a robust Gen AI policy that ensures full compliance with the EU AI Act and other regulatory frameworks.


It has curated a portfolio of over 50 AI use cases, with approximately 10% already deployed to production. Notably, MME has adopted a pragmatic approach, aiming to leverage the embedded AI capabilities of enterprise platforms such as Salesforce, ServiceNow, Microsoft, Eloqua, and Databricks, accelerating value realization while minimizing custom builds and operational complexity.


Recognizing that poor-quality data can derail even the most well-intended AI initiative, MME has elevated data to the status of a core product. The Enterprise Architect, as a member of the AI Lab, has been pivotal in architecting a forward-looking data strategy.

The AI Lab has been able to rally early AI adopters through these pilots, building internal momentum while widening reach through targeted AI literacy programs.


The AI Lab has successfully delivered “everyday AI” that drives hard-to-measure productivity gains. As the journey matures, the objective is to shift toward “game-changing AI”—transformative capabilities that act as a sustained competitive differentiator. To enable this, MME is evolving the AI Lab from a centralized incubator to a hub-and-spoke model, strengthening business ownership through the introduction of a new AIDA Business Lead role.


While federated models are often the preferred setup for early-stage AI adopters, as they enable speed, experimentation, and close alignment with business teams, they typically face challenges as AI use cases scale. Centralized models are more commonly observed in organizations operating in highly regulated environments or where risk, compliance, and data control are primary concerns.


The hub-and-spoke model represents a pragmatic middle ground. It balances scale, governance, and business alignment by combining centralized platforms and standards with distributed, domain-focused AI teams. As a result, it is frequently adopted by larger enterprises seeking to industrialize AI, while preserving flexibility at the business unit level. 


Importantly, these models should not be viewed as static or mutually exclusive. Many organizations evolve over time, often starting with a federated approach, introducing centralized capabilities as maturity increases, and ultimately converging toward a hub-and-spoke model. The optimal structure depends on factors such as organizational size, regulatory exposure, AI maturity, talent availability, and the strategic importance of AI to the business. 


While the operating model establishes the foundation for AI decision-making, governance, and accountability, it is not sufficient on its own. Sustainable AI success also depends on a clear strategy, the right talent and skills, effective change management, and strong leadership sponsorship to drive adoption.  


References

  1. State of enterprise AI, 2025, Report by OpenAI

  2. The state of AI in 2025: Agents, innovation, and transformation, Report by McKinsey 

  3. AI adoption statistics 2025, Report by fullview.io 


From Pilots to Scaling: The AI roles you need

Unlike previous technology disruptions, AI is all-pervasive: AI use cases can be embedded into almost every workflow and business process across industries. While it is easy to experiment and pilot, it is far more challenging to scale AI at the enterprise level.


Many AI pilots start as technology initiatives with limited business alignment. As a result, the solution remains isolated, outcomes stay anecdotal, and governance is lacking. Breaking through to the next level requires clear ownership, executive commitment, and formal governance.


Scaling AI demands strong strategic alignment and cross-functional orchestration across business, technology, and data teams. It requires a formal operating structure with clearly defined roles and accountabilities, spanning business leaders, IT, HR, legal, and external partners, to ensure successful execution at scale.


This article explores how enterprises can overcome organizational barriers to scaling AI by adopting the right operating model. It examines key roles and responsibilities needed to scale AI responsibly. It also highlights the critical role of an AI Center of Excellence in providing strategic direction and governance as AI initiatives scale.


Figure 1: AI Impact on IT productivity

 

EXECUTION LAYER  

At the core of this model sits the execution layer, responsible for translating business problems into tangible AI solutions. It must include strong representation from both business and technology functions. This layer is best organized around a product-oriented delivery model, where cross-functional teams own AI use cases end-to-end, from ideation through deployment.


Business roles

The business roles are accountable for problem framing, use-case prioritization, and value definition. They play a crucial role in ensuring that AI projects are deeply anchored in business processes and deliver measurable outcomes. They bridge the gap between domain knowledge, customer requirements, and technical execution. Key roles include:


Domain Experts 

Domain Experts possess deep business and functional understanding of the processes within which AI solutions operate. The role is responsible for defining relevant use cases, validating AI outputs, and guiding model design, training, and adoption of the new ways of working.

 

AI Product Manager 

The AI Product Manager is responsible for identifying and prioritizing AI use cases and translating them into a clear product roadmap. The role bridges the gap between business and technical teams, ensuring AI solutions deliver measurable outcomes. 

 

Data Scientists 

Data Scientists process and analyze data and build machine learning (ML) models, developing, training, and fine-tuning them for tasks such as predictive analytics and clustering.
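For a flavor of the role's output, here is a minimal, self-contained sketch of a train-and-evaluate loop. It assumes scikit-learn is available and uses a bundled demo dataset as a stand-in for business data.

# Minimal data-science workflow sketch (assumes scikit-learn is installed).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # stand-in for enterprise data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)  # train on historical data

print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")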


Technical roles

Core technical roles are responsible for the design, development and operation of enterprise AI capabilities. These roles engineer scalable AI systems, build and operate data foundations, and ensure reliable and compliant generative AI and agentic workflows across the enterprise.


AI Engineers 

AI Engineers are responsible for designing, building and deploying production-grade AI and generative AI solutions. The role focuses on engineering scalable AI systems by integrating models, large language models (LLMs), agent frameworks, and AI services into enterprise applications and workflows.  
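As an illustration of what integrating an LLM into an enterprise workflow can look like, the sketch below wraps a model call behind a narrow, testable function. It assumes the OpenAI Python SDK with an API key in the environment; any hosted or self-hosted model endpoint could stand in, and the model name is an assumption.

# Sketch: an LLM call wrapped behind a typed service function.
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def summarize_ticket(ticket_text: str) -> str:
    """Expose the model through a narrow interface the application can test."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name
        messages=[
            {"role": "system", "content": "Summarize IT support tickets in two sentences."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content

print(summarize_ticket("Printer on floor 3 jams on duplex jobs since the firmware update."))

The narrow function boundary is what keeps the AI component testable and swappable as models evolve.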

 

Data Engineers 

Data Engineers are responsible for building data pipelines and data platforms. The role focuses on ingesting and transforming data from multiple sources into data lakes and data warehouses for AI models to consume.
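A minimal sketch of that responsibility follows, assuming pandas with a parquet engine installed; the file paths and column names are hypothetical.

# Ingest-and-transform sketch: source CSV -> curated parquet in the data lake.
import pandas as pd

raw = pd.read_csv("landing/orders.csv", parse_dates=["order_date"])  # ingest from a source system

curated = (
    raw.dropna(subset=["customer_id"])  # basic cleansing
       .assign(order_month=lambda d: d["order_date"].dt.to_period("M").astype(str))
       .groupby(["customer_id", "order_month"], as_index=False)["amount"].sum()
)

curated.to_parquet("lake/curated/orders_monthly.parquet", index=False)  # publish for AI models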


Prompt Engineers 

Prompt Engineers specialize in developing and optimizing prompts for LLMs and Gen AI workflows. The engineer is responsible for providing appropriate context, defining the response format, and embedding guardrails to ensure accurate and compliant outputs.
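The sketch below shows what those three concerns look like in practice: context, response format, and guardrails made explicit in a single template. The wording is illustrative, not a standard.

# Guarded prompt template: context, response format, and guardrails are explicit.
PROMPT_TEMPLATE = """You are an assistant for the claims department.

Context:
{context}

Task: answer the user's question using ONLY the context above.

Response format: JSON with keys "answer" (string) and "confidence" (low|medium|high).

Guardrails:
- If the context does not contain the answer, return {{"answer": "unknown", "confidence": "low"}}.
- Never add personal data that is not already present in the context.
"""

def build_prompt(context: str) -> str:
    return PROMPT_TEMPLATE.format(context=context)

print(build_prompt("Policy X123 covers water damage up to EUR 5,000."))

Keeping templates in code rather than scattered across chat histories lets guardrails be versioned, reviewed, and regression-tested like any other asset.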


 

ENABLING LAYER  

As AI deployments scale across the enterprise, the need for robust governance, compliance, risk management, and security increases significantly. The enabling layer addresses these requirements by establishing common frameworks, architectural standards, security controls and governance mechanisms that support responsible AI adoption.


AI CoE 

The AI CoE provides centralized leadership, governance, and technical direction for enterprise AI initiatives. It is accountable for defining the enterprise AI strategy, managing the AI project and portfolio, establishing the governance framework, and overseeing budgeting and ROI tracking. It also acts as a key decision-making body in the selection of the AI organizational model (for example, foundational vs. embedded). Key roles are:


Chief AI Officer  

The Chief AI Officer is the executive accountable for enterprise AI, defining the AI strategy, governance, and investment. He or she articulates AI’s business value to leadership and owns the roadmap, budget, and ROI tracking.

 

AI CoE Lead  

The AI CoE Lead is the operational leader responsible for the overall functioning of the AI CoE. He or she leads the execution of AI initiatives and is responsible for developing the AI roadmap and policy while collaborating with relevant stakeholders. The CoE Lead also promotes reuse of AI assets, reference architectures, and patterns.

 

AI Architect

The AI Architect designs, builds, and scales AI systems. He or she establishes reference architectures, design standards, and integration patterns to ensure AI solutions are scalable, secure, and compliant.

 

Change Manager (OCM-HR)

The Change Manager is responsible for leading the change management, communication, and training initiatives that support AI adoption. As AI automates business processes, existing roles are likely to evolve, giving way to new or changed roles. Workforce restructuring and rationalization need to be designed and executed carefully. The Change Manager typically comes from the HR department.


The AI CoE also needs to ensure that AI initiatives are deployed responsibly and securely. It must therefore collaborate with legal and security teams to manage AI risk and ensure compliance and regulatory alignment.

 


Legal & ethical
Legal and ethical roles are collectively responsible for ensuring that AI deployments conform to the regulatory and ethical norms of the regions in which they operate. They need to watch for copyright and data privacy violations while also ensuring that regulatory requirements, such as the AI Act in Europe, are met.


Security Officer
The Security Officer is responsible for ensuring that AI systems are designed, deployed, and operated in a secure and compliant manner. The role defines security policies, risk controls, and assurance mechanisms for AI solutions, including data protection, access controls, model security, and third-party risk.

 


Platform Operations 

Platform operations support the foundational technology capabilities required to deploy and operate AI solutions. These roles focus on managing the underlying data, integration, and operational platforms and ensuring reliability, performance, security, and scalability across enterprise AI deployments.

 


ML Operations / AI Platform Engineers

ML Operations Engineers focus on the operational aspects of ML models and LLMs, ensuring that models are deployed, monitored, and maintained effectively. The AI Platform Engineer, in turn, is responsible for building and maintaining the infrastructure for AI tools.
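As a concrete example of the monitoring side, the sketch below flags input drift with a population stability index (PSI); the 0.2 alert threshold and ten bins are common rules of thumb rather than a standard, and the data here is synthetic.

# Monitoring sketch: detect feature drift between training and production data.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two samples of one feature."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a - e) * np.log(a / e)))

train_values = np.random.default_rng(0).normal(0.0, 1.0, 5000)  # training-time feature
live_values = np.random.default_rng(1).normal(0.3, 1.0, 5000)   # shifted production feature

drift = psi(train_values, live_values)
print(f"PSI = {drift:.3f} -> {'alert: consider retraining' if drift > 0.2 else 'ok'}")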


DataOps Engineer
DataOps Engineers are responsible for managing data pipelines and ensuring the availability of high-quality data for AI projects. They focus on tracking data pipeline performance, diagnosing data issues, and maintaining data quality.
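A minimal quality-gate sketch follows, assuming pandas; the checks and column names are illustrative and would normally be driven by data contracts.

# Data-quality gate: block downstream AI consumption when checks fail.
import pandas as pd

def quality_gate(df: pd.DataFrame) -> list[str]:
    issues = []
    if df["customer_id"].isna().any():
        issues.append("null customer_id values")
    if df.duplicated(subset=["order_id"]).any():
        issues.append("duplicate order_id rows")
    if (df["amount"] < 0).any():
        issues.append("negative amounts")
    return issues

batch = pd.DataFrame({"order_id": [1, 1], "customer_id": ["a", None], "amount": [10.0, -5.0]})
problems = quality_gate(batch)
print("gate result:", problems if problems else "all checks passed")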

 

Integration Specialists 

The Integration Specialist is responsible for integrating AI solutions with enterprise applications, data sources, and digital platforms. The role designs and implements integration patterns using APIs, event-driven architectures, and middleware technologies, ensuring seamless interoperability.
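For illustration, the sketch below pushes an AI classification result into a downstream system over REST; the endpoint, payload shape, and token handling are hypothetical.

# Integration sketch: deliver a model output to an enterprise application via API.
import requests

def push_prediction(ticket_id: str, category: str) -> None:
    response = requests.post(
        "https://itsm.example.com/api/tickets/classify",  # hypothetical endpoint
        json={"ticket_id": ticket_id, "predicted_category": category},
        headers={"Authorization": "Bearer <token>"},  # token retrieval elided
        timeout=10,
    )
    response.raise_for_status()  # surface integration failures to the caller

push_prediction("INC-10042", "hardware/printer")

In production the same handoff is often event-driven (for example, publishing to a message topic) so that consuming systems stay decoupled from the AI service.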

 

 

Scaling AI is as much an organizational challenge as it is a technical one. Successful enterprise AI adoption requires sustained business engagement, executive steering, robust governance, and strong change management. Congratulations if you have made it this far into the article, given today’s short attention spans.


Scale is achieved through standardized architectures, reusable data products, prompt libraries, reference patterns, and shared platforms enabled by the AI CoE. Overall, the AI operating model should be viewed as a force multiplier for AI initiatives rather than merely a control or oversight mechanism.
