Organizations to pivot to 'Zero-Trust' governance as AI data flood threatens model integrity

SUNDAY, FEBRUARY 08, 2026

Gartner predicts that 50% of firms will adopt zero-trust data policies by 2028 to combat "model collapse" driven by the surge in unverified synthetic data

  • Gartner predicts that 50% of global organizations will adopt a zero-trust data governance posture by 2028 due to the surge in AI-generated content.
  • This pivot is driven by the threat of "model collapse," where AI models trained on unverified synthetic data can inherit errors and biases, compromising their integrity.
  • A zero-trust approach requires implementing rigorous authentication and verification measures to distinguish between reliable human data and potentially flawed synthetic data.
  • Key recommendations for this transition include appointing a dedicated AI Governance Leader and modernizing metadata practices to flag inaccurate AI-generated content.

Half of all global organisations will adopt a "zero-trust" posture for data governance by 2028, according to a new forecast by Gartner, Inc.

The shift comes as unverified AI-generated content begins to saturate data ecosystems, making it increasingly difficult to distinguish between human and synthetic information.

In the report, Wan Fui Chan, Managing Vice President at Gartner, warned that implicit trust in data is no longer a viable strategy.

"As AI-generated data becomes pervasive, a zero-trust posture—establishing rigorous authentication and verification measures—is essential to safeguard business and financial outcomes," Chan stated.

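Gartner does not spell out how such verification should be implemented. As a minimal sketch of the idea, assuming a hypothetical ingestion pipeline in which every approved source signs its records with a per-source key (the record fields, key registry, and HMAC scheme below are illustrative assumptions, not part of the report), trust is established per record rather than presumed:

    # Sketch of a "verify before trust" intake check. The source registry,
    # record fields, and HMAC signing scheme are illustrative assumptions.
    import hashlib
    import hmac
    from dataclasses import dataclass

    # Hypothetical registry of approved sources and their shared signing keys.
    SOURCE_KEYS = {
        "internal-crm": b"crm-signing-key",
        "licensed-news-feed": b"news-feed-signing-key",
    }

    @dataclass
    class DataRecord:
        source: str     # declared origin of the record
        payload: bytes  # the data itself
        signature: str  # hex HMAC-SHA256 of the payload, produced by the source

    def is_trusted(record: DataRecord) -> bool:
        """Admit a record only if its source is registered and its signature
        verifies; unknown or unverifiable records are rejected by default."""
        key = SOURCE_KEYS.get(record.source)
        if key is None:
            return False
        expected = hmac.new(key, record.payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record.signature)

    # An unsigned, web-scraped record never reaches the training corpus.
    scraped = DataRecord(source="unknown-scraper", payload=b"scraped text", signature="")
    print(is_trusted(scraped))  # False

In practice the signature scheme, key management, and source registry would be organisation-specific; the point of the sketch is only that records fail closed when their provenance cannot be verified.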

The threat of 'Model Collapse'

The proliferation of synthetic data poses a significant risk to the reliability of future Large Language Models (LLMs).

Currently, models are trained on vast swathes of web-scraped data, research papers, and code.

However, as AI outputs are increasingly fed back into the training loops of newer models, the industry faces a phenomenon known as "model collapse."

This occurs when AI tools begin to reflect the errors or biases of their predecessors rather than objective reality.

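The mechanism is straightforward to reproduce in a toy setting. The simulation below is purely illustrative and is not drawn from the Gartner report: each "generation" fits a normal distribution to a small sample drawn from the previous generation's fitted model rather than from the original data, and the estimated statistics typically narrow and drift with each pass, a numerical analogue of errors compounding across model generations.

    # Toy illustration of model collapse (not from the Gartner report):
    # each generation is fitted only to samples produced by its predecessor.
    import numpy as np

    rng = np.random.default_rng(0)

    real_data = rng.normal(loc=0.0, scale=1.0, size=1_000)  # "human" data
    mu, sigma = real_data.mean(), real_data.std()           # generation 0 fit

    for generation in range(1, 101):
        synthetic = rng.normal(mu, sigma, size=20)   # small synthetic corpus
        mu, sigma = synthetic.mean(), synthetic.std()
        if generation % 20 == 0:
            print(f"gen {generation:3d}: mean={mu:+.3f}  std={sigma:.3f}")

    # The estimated spread tends to collapse toward zero and the mean drifts
    # away from the original data, so later generations reflect their
    # predecessors' sampling noise rather than the real distribution.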

Despite these risks, corporate appetite for artificial intelligence remains insatiable. Gartner’s 2026 CIO and Technology Executive Survey reveals that 84% of respondents intend to increase funding for Generative AI (GenAI) this year.

The report also highlights an emerging regulatory minefield. While some jurisdictions are expected to enforce strict "AI-free" data mandates, others may adopt more flexible frameworks.

Gartner suggests that success in this evolving landscape will depend on "active metadata management"—the ability to tag, catalogue, and alert organisations when data is stale or requires recertification.

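"Active metadata" here means metadata that triggers action rather than sitting in a static catalogue. As a rough sketch only, with the field names, tags, and 90-day recertification window assumed for illustration rather than taken from the report, a catalogue entry might record provenance and certification status, and a routine check would alert on anything stale or unverified:

    # Rough sketch of an "active metadata" check: catalogue entries carry
    # provenance tags and a certification date, and a periodic job alerts on
    # anything stale, uncertified, or unverified. Field names and the 90-day
    # threshold are illustrative assumptions, not taken from the report.
    from dataclasses import dataclass, field
    from datetime import date, timedelta

    @dataclass
    class CatalogEntry:
        name: str
        provenance: str                     # e.g. "human", "synthetic", "mixed"
        last_certified: date | None = None  # None = never certified
        tags: list[str] = field(default_factory=list)

    RECERTIFY_AFTER = timedelta(days=90)

    def needs_attention(entry: CatalogEntry, today: date) -> bool:
        """Flag entries that are uncertified, overdue for recertification,
        or carrying unverified non-human content."""
        if entry.last_certified is None:
            return True
        if today - entry.last_certified > RECERTIFY_AFTER:
            return True
        return entry.provenance != "human" and "verified" not in entry.tags

    catalog = [
        CatalogEntry("crm_contacts", "human", date(2026, 1, 15), ["verified"]),
        CatalogEntry("scraped_reviews", "mixed"),                      # never certified
        CatalogEntry("llm_summaries", "synthetic", date(2025, 9, 1)),  # stale
    ]

    for entry in catalog:
        if needs_attention(entry, today=date(2026, 2, 8)):
            print(f"ALERT: recertify or quarantine '{entry.name}'")

A production version would feed these alerts into a workflow or ticketing system rather than printing them, but the shape of the check is the same.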

Strategic recommendations

To mitigate the risks of unverified data, Gartner advises organisations to take several immediate steps, including:

  • Appoint an AI Governance Leader: create a dedicated role to oversee zero-trust policies and compliance.
  • Foster Cross-Functional Collaboration: align cybersecurity, data analytics, and ethics teams to conduct comprehensive risk assessments.
  • Modernise Metadata Practices: implement automated systems to identify and flag inaccurate or biased AI-generated content in real time.
  • Adopt Active Metadata: use real-time alerts for stale or unverified data to prevent inaccurate and biased content from reaching critical systems.