88% of Organizations Use AI. 6% Are Generating Returns. The Gap Is Structural.
By Forenta Team
Eighty-eight percent of organizations now use AI in at least one business function, up from 78 percent a year earlier. The trend line looks like an adoption success story. The value data says otherwise: only around 6 percent of organizations generate more than 5 percent EBIT impact from their AI use, according to McKinsey’s 2025 State of AI research.¹ The majority are deploying AI without transforming with it.
Source: McKinsey State of AI 2025
The gap is not the model. The models have never been more capable, and access has never been cheaper. The gap is organizational. The largest single predictor of whether an organization ends up in the 6 percent or the 94 percent is whether it redesigned how work actually flows, or simply added AI tools to unchanged processes and unchanged teams.
The adoption curve and the value cliff
Gartner predicted in July 2024 that 30 percent of generative AI projects would be abandoned after proof of concept by the end of 2025.² Subsequent Gartner analysis found the actual abandonment rate reached closer to half of all generative AI initiatives.² The most commonly cited reasons were poor data quality, inadequate risk controls, escalating costs, and unclear business value. None of these are primarily technical problems.
McKinsey’s data sharpens the diagnosis. Among organizations classified as AI high performers, 55 percent report having fundamentally redesigned their workflows to incorporate AI. Among the rest, only 20 percent have done the same. High performers are 2.8 times more likely to have changed how work actually flows, not just which tools their teams use.¹ BCG’s research frames the same finding differently: in successful AI programs, 10 percent of the outcome traces to the algorithm, 20 percent to technology and data, and 70 percent to people and processes.³
Source: BCG AI at Work 2025
Only 6% of organizations are generating real returns from AI. The difference is not the model or the compute budget. It is whether the organization redesigned how work flows and built teams with the right combination of capabilities.
The three capabilities every AI team needs
Every AI product team needs three capabilities present, though not necessarily in three separate people.
The first is practical AI engineering. Not research fluency or benchmark performance, but production experience: keeping a system working reliably with real data, against real edge cases, within the cost constraints the business requires. The signal is a portfolio of deployed systems, not a credential.
The second is product thinking. Someone who can hold on to the question of what problem we are solving, and for whom, through the entire build, including the stretches where technical possibilities pull the team toward elegant solutions to problems no one actually has. AI projects have a specific drift problem. A strong product mind is the correction.
The third, and most consistently absent, is domain expertise. Not knowledge of AI, but knowledge of the field the AI is being applied to. Healthcare AI built without clinical input. Legal AI developed without someone who understands how practitioners actually read documents. Financial models trained by engineers who have never worked in the operational context the data comes from.
The missing capability in most AI teams is not technical. It is the person who deeply understands the domain: the clinician, the compliance lead, the operations manager who knows where the real friction is.
A study by Harvard Business School researchers, published in 2024, captured what happens when domain expertise and AI capability are combined effectively. Working with 791 professionals at Procter & Gamble, they found that AI-augmented teams were three times more likely to produce ideas ranking in the top 10 percent than teams working without AI.⁴ Individual contributors using AI matched the output quality of two-person human teams. The critical condition was not AI capability alone; it was the combination of AI tools with human domain judgment. The AI amplified what the team already understood.
Why small and structured beats large and fast
Research published in Nature by Wu, Wang, and Evans in 2019 analyzed over 65 million papers, patents, and software projects and found a consistent pattern across six decades: smaller teams are disproportionately responsible for disruptive advances. Larger teams produce more total output, but their work tends to extend existing directions rather than create new ones.⁵ The mechanism is coordination cost: more people means more decisions by consensus, more risk-aversion, more pressure toward the incremental.
Wu, Wang & Evans (2019): across 65 million papers, patents, and software projects, smaller teams are consistently more disruptive. Larger teams extend existing work. Coordination cost changes which decisions get made.
Source: Wu, Wang & Evans (2019), Nature — 65M papers, patents, software projects
Applied to AI product teams, the implication is direct. A two- or three-person team with clear roles and shared context outperforms a larger team assembled before the problem is understood. The pressure to staff up after funding closes, or after a board sets a deadline, leads to teams that grow before they have built shared context about what they are actually building. Coordination cost rises before capability does.
The smallest viable AI team for early-stage product work is typically an AI engineer paired with someone who brings both product judgment and domain knowledge, whether that is a founder, a clinical lead, an operations manager, or a compliance specialist. The small size forces fast decisions. The pairing ensures both technical depth and domain relevance are present from the start.
The Dutch context: experience, not ambition
In the Netherlands, 22.7 percent of companies with ten or more employees used at least one form of AI technology in 2024, an increase of nearly 9 percentage points in a single year, according to CBS’s AI Monitor 2024.⁶ Among companies with 500 or more employees, the adoption rate reaches 59 percent. The MKB, the Dutch small and medium-sized business sector, is not behind; it is at an inflection point.
Among Dutch companies not using AI, 74.6 percent cite lack of experience as the primary reason.⁶ Not cost. Not regulation. Not strategic uncertainty. Experience. The barrier is know-how: building a team that can actually use AI productively. The organizations making that decision now need to get the architecture right from the start, because adding tools without changing processes is exactly the path the data consistently identifies as unproductive.
What structured trials reveal
Interviews are a weak diagnostic for AI team fit. They reveal communication polish and how someone performs when they know they are being evaluated. They do not reveal how someone handles ambiguity, whether domain knowledge holds up under real conditions, or whether the collaboration produces output or just activity.
Forenta’s trial framework addresses this directly. A structured trial of two to four weeks, with a defined goal and explicit success criteria, tells you more about team fit than any interview process. You learn whether the domain expert and the engineer share a working model of the problem. You learn whether the product thinker can hold direction when technical possibilities start pulling the team off course. You learn whether the collaboration produces real artifacts, or meetings about meetings.
The trial is also a team diagnostic in a broader sense. An AI engineer who struggles with domain ambiguity in week two will not perform better in week ten. A domain expert who cannot articulate requirements in a form the engineer can work with is a bottleneck that no amount of capability on either side resolves.
What this analysis does not cover
This post focuses on team composition for early-stage AI product work. It does not address AI system design itself, which introduces constraints around data quality, model selection, and evaluation methodology. It does not address the regulatory context for AI in Dutch healthcare, governed by the EU AI Act and sectoral rules from the NZa and IGJ. It does not cover the organizational dynamics of scaling an AI team beyond the initial founding core.
The CBS and McKinsey data reflect broad adoption patterns. The quality of AI programs within those numbers varies considerably, and the statistics do not distinguish between AI used for internal productivity and AI embedded in customer-facing products.
The ceiling is not AI capability
The organizations generating measurable returns from AI are not the ones with the best models. They are the ones that built teams with the right combination of technical depth, product judgment, and domain knowledge, and then changed how work actually flows. The evidence from BCG, McKinsey, and Wu et al. converges on the same point: organizational architecture is the constraint.
For Dutch MKB organizations at the early stage of building AI capability, the first decisions matter most: who is on the team, how the trial is structured, whether domain expertise is present from day one. The tools will keep improving regardless. The team architecture is what you control.
References
1. McKinsey & Company (2025). The state of AI in 2025: Agents, innovation, and transformation. McKinsey Global Institute.
2. Gartner (2024, July 29). Gartner Predicts 30 Percent of Generative AI Projects Will Be Abandoned After Proof of Concept by End of 2025; Gartner (2025). Why Half of GenAI Projects Fail.
3. Boston Consulting Group (2025). AI at Work: Momentum Builds, but Gaps Remain.
4. Dell’Acqua, F. et al. (2024). AI as a team member: Effects on idea generation and innovation. Harvard Business School.
5. Wu, L., Wang, D., & Evans, J. A. (2019). Large teams develop and small teams disrupt science and technology. Nature, 566, 378–382.
6. Centraal Bureau voor de Statistiek (2025). Dutch AI Monitor 2024.