AI Trust Results Drop as Adoption Rises in 2026


AI trust results are becoming a defining issue in 2026. A new Quinnipiac University poll highlights a growing contradiction: more Americans are using AI tools daily, yet fewer actually trust what those tools produce. This shift matters not only in the US but also for UK IT professionals navigating similar challenges. Understanding this gap can help teams build better systems and stronger user confidence.

The survey, conducted in March 2026 with nearly 1,400 participants, compared findings with April 2025 data. Adoption clearly increased, with only 27% saying they had never used AI tools, down from 33%. However, trust has not followed the same path. Let’s break down what is happening and why it matters.

AI Trust Results in Latest Poll Findings

The latest data reveals a simple but striking pattern. Around 51% of respondents now use AI for research, while others rely on it for writing, work tasks, and analysis. Despite this, 76% say they trust AI-generated outputs only “rarely” or “sometimes.” Just 21% express strong confidence.

This gap between usage and trust is important. People are clearly willing to experiment with AI, but hesitation appears when accuracy truly matters. According to the Quinnipiac poll release, negative sentiment toward AI has also increased year over year.

AI Trust Results and Rising Adoption Trends

Adoption continues to grow because AI tools offer speed and convenience. Tasks like drafting emails or summarising information are easier than ever. However, increased exposure also reveals limitations more quickly.

About 80% of respondents report being concerned about AI’s future impact. At the same time, enthusiasm remains low: only 6% say they feel very excited about AI. Most people fall into neutral or cautious categories.

This creates a feedback loop: the more people use AI, the more they notice flaws. As a result, AI trust results continue to decline even while adoption climbs.

Why AI Trust Results Are Declining

Several key factors explain the drop in confidence:

  • Job concerns: 70% believe AI will reduce job opportunities, up significantly from last year.
  • Personal risk: 30% of workers fear their own jobs could be replaced.
  • Transparency issues: Two-thirds say companies are not clearly explaining how AI works.
  • Regulation demands: A similar proportion wants stronger government oversight.

Additionally, 55% believe AI may cause more harm than good in everyday life. These concerns are echoed in external research such as the Pew Research AI attitudes report, which shows growing caution among users.

For UK readers, this is not surprising. Similar trends appear in domestic surveys, where trust remains a barrier to wider adoption.

AI Trust Results Across Different Demographics

Age-based differences reveal interesting patterns. Millennials and baby boomers tend to express higher levels of concern, especially about job security. Meanwhile, Gen Z users are the most familiar with AI tools but remain sceptical about long-term impacts.

In fact, 81% of Gen Z respondents expect AI to reduce job opportunities. However, this does not mean rejection. Younger users continue to adopt AI, but with a more critical mindset.

Global surveys from Ipsos and Verasight confirm that trust remains a major barrier, even among frequent users. Overall, AI trust results vary by generation but show consistent hesitation across all groups.

AI Trust Results in Broader Research Context

The Quinnipiac findings align with wider industry research. McKinsey’s 2026 AI Trust Maturity Survey highlights ongoing challenges in governance and strategy. Key risks identified include:

  • Inaccuracy (74%)
  • Cybersecurity threats (72%)

Reports from Deloitte and EY also show that while workplace AI adoption has surged, oversight and control often lag behind. A Verasight study found that 64% of Americans now use AI, yet more than half feel anxious about its broader effects.

In the UK, similar patterns emerge. Public trust in AI remains limited, especially in government and public services. These consistent findings reinforce one conclusion: AI trust results are not keeping pace with rapid technological rollout.

AI Trust Results and Implications for UK IT Teams

For UK IT professionals, the message is clear. The trust gap cannot be ignored. If users do not trust outputs, adoption alone will not deliver value.

Key actions include:

  • Implement validation processes: Ensure AI outputs are reviewed before use in critical tasks.
  • Improve communication: Clearly explain how AI systems work and where data comes from.
  • Monitor regulation: Stay updated on UK AI policies and compliance requirements.
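The validation step above can be sketched as a simple review gate. This is an illustrative sketch only: the `AIOutput` class, the confidence score, and the threshold are assumptions, not part of any specific product or the poll discussed here.

```python
# Minimal sketch of a validation gate for AI output (all names illustrative).
# Outputs below a confidence threshold are routed to a human reviewer
# instead of being used directly in critical tasks.

from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    confidence: float  # assumed score in [0, 1] from the model or a checker

def validate(output: AIOutput, threshold: float = 0.8) -> str:
    """Return 'approved' for high-confidence output, 'needs_review' otherwise."""
    return "approved" if output.confidence >= threshold else "needs_review"

print(validate(AIOutput("Quarterly summary...", confidence=0.93)))  # approved
print(validate(AIOutput("Legal clause draft...", confidence=0.41)))  # needs_review
```

In practice the threshold and the source of the confidence score would depend on the task; the point is that critical outputs never skip the review queue.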

You can also explore related insights in our internal guide on AI Workflow Governance: Responsible AI Policy Framework.

Building trust early can give organisations a competitive advantage and improve long-term adoption.

Practical Steps to Improve AI Trust Results

Improving trust requires consistent effort. Here are practical strategies:

  1. Start with transparency: Show users how AI generates answers and highlight uncertainty.
  2. Focus on training: Educate teams about both capabilities and limitations.
  3. Use human oversight: Combine AI efficiency with human judgment.
  4. Adopt clear standards: Align with industry frameworks for responsible AI use.
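The transparency step above, highlighting uncertainty to users, could be sketched like this. The thresholds and wording are hypothetical assumptions, shown only to make the idea concrete.

```python
# Sketch of surfacing uncertainty to end users (assumed thresholds, illustrative only).
# Rather than presenting every answer identically, each response carries a label
# so users can calibrate how much to trust it.

def label_answer(answer: str, confidence: float) -> str:
    """Prefix an answer with a confidence badge for the end user."""
    if confidence >= 0.85:
        badge = "High confidence"
    elif confidence >= 0.6:
        badge = "Medium confidence - please verify key facts"
    else:
        badge = "Low confidence - human review recommended"
    return f"[{badge}] {answer}"

print(label_answer("The deadline is 31 March.", 0.92))
```

Even a simple label like this addresses the transparency complaint in the poll: users see when the system itself is unsure.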

Research such as ISACA’s AI Pulse Poll shows that knowledge gaps remain a major issue. Addressing these gaps can significantly improve user confidence.

AI Trust Results: Key Takeaways for 2026

The data tells a consistent story. AI usage is rising rapidly, but trust is not following the same trajectory. Users are engaging with tools while remaining cautious about reliability and impact.

For IT professionals, the priorities are clear:

  • Build transparency into every system
  • Communicate openly about limitations
  • Focus on accuracy and accountability

Ultimately, AI trust results will determine whether these tools become essential assets or remain underused.

FAQs

What are AI trust results?
They measure how much users believe and rely on AI-generated outputs. Current data shows trust is lower than adoption rates.

Why are AI trust results decreasing?
Main reasons include concerns about accuracy, job loss, and lack of transparency from companies.

Are these trends relevant to the UK?
Yes. UK surveys show similar concerns, particularly around regulation and responsible AI use.

How can organisations improve AI trust results?
By increasing transparency, adding human oversight, and providing better user education.

Will regulation improve AI trust results?
Stronger rules can help build confidence, especially if they focus on fairness, safety, and accountability.

Enterprise AI Factories Enter Production with NTT DATA


Enterprise AI Factories are reshaping how organisations turn raw data into practical tools. NTT DATA and NVIDIA recently announced a new step forward by bringing these systems into full production. This move helps companies move beyond small AI experiments and start using intelligent systems in everyday operations.

Many organisations have spent years testing artificial intelligence without clear results. The idea behind Enterprise AI Factories is to solve that problem by creating a reliable environment where AI models can be built, tested, and deployed continuously. For businesses across the UK and beyond, this development could finally turn AI from a pilot project into a daily operational tool.

In this guide, we explain how the partnership works, the technology involved, and why it matters for companies aiming to scale their AI strategies.

Understanding Enterprise AI Factories

Enterprise AI Factories work much like a production line for intelligence. Instead of manufacturing physical products, they produce trained AI models and automated systems. The process begins with collecting operational data, which is then processed and used to train machine learning models on specialised hardware.

Once models are trained, they move into deployment where applications use them to automate decisions, analyse information, or support human teams. Because everything happens within one integrated environment, organisations avoid the common delays that appear when systems are built in separate tools.
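The collect, train, deploy flow described above can be pictured as a tiny pipeline. Every function here is a stand-in invented for illustration; the real factory replaces each stage with GPU training and production serving.

```python
# Conceptual sketch of the "factory" pipeline: collect, train, deploy,
# all inside one environment. Each function is a hypothetical stand-in.

def collect(raw):
    """Ingest operational data, dropping missing records."""
    return [r for r in raw if r is not None]

def train(dataset):
    """Stand-in for model training on specialised hardware."""
    return {"model": "v1", "trained_on": len(dataset)}

def deploy(model):
    """Stand-in for serving the trained model to applications."""
    return f"serving {model['model']} (trained on {model['trained_on']} records)"

print(deploy(train(collect([1, None, 2, 3]))))  # serving v1 (trained on 3 records)
```

Because the stages share one environment, output from each step feeds the next without hand-offs between disconnected tools, which is the delay the factory model is designed to remove.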

This structured approach is why Enterprise AI Factories are gaining attention. They allow businesses to repeat the AI development process efficiently while maintaining strong governance, security, and compliance.

If you want to explore broader trends in enterprise AI adoption, you may also find our internal guide helpful: How Businesses Are Scaling Artificial Intelligence.

How Enterprise AI Factories Run on NVIDIA Technology

NTT DATA brings global IT expertise to the partnership by designing and implementing Enterprise AI Factories using NVIDIA technologies. The platform combines high-performance GPU systems, advanced networking, and specialised AI software.

At the centre of this infrastructure are NVIDIA DGX and HGX systems, which provide the computing power needed to train large models quickly. These systems allow companies to process massive datasets without the performance bottlenecks that often slow AI development.

The architecture also supports flexible deployment. Organisations can run Enterprise AI Factories in the cloud, within their own data centres, or at the edge, depending on operational needs. NTT DATA works alongside technology partners such as Dell to ensure smooth integration into existing environments.

Interestingly, NTT DATA is currently the only global IT services provider involved in all three major NVIDIA partner programmes. This level of access helps them deliver cutting-edge infrastructure for businesses looking to scale AI initiatives.

For deeper technical information about the platform, you can review the official announcement from NTT DATA.

Business Advantages of Enterprise AI Factories

One of the biggest challenges companies face is moving AI from experiments into real operations. Enterprise AI Factories address this issue by providing a consistent framework for building and deploying intelligent systems.

First, they significantly reduce development time. Instead of starting from scratch for every project, teams can reuse the infrastructure and workflows already established in the factory environment.

Second, governance becomes easier to maintain. Because the entire AI lifecycle happens in one ecosystem, companies can enforce security rules, data protection policies, and compliance requirements throughout development and deployment.

Another advantage is support for emerging technologies like agentic AI. These systems can take actions automatically based on data and predefined rules. Enterprise AI Factories provide the computing power and structure required to run such advanced models safely.

For organisations under pressure to show measurable returns on AI investments, this approach helps demonstrate results much faster.

Real-World Enterprise AI Factories Examples

Several organisations are already seeing practical benefits from Enterprise AI Factories powered by NVIDIA infrastructure.

A leading cancer research hospital has used this technology to accelerate radiology image analysis. Doctors can process scans faster and test new diagnostic models quickly, which improves research and patient care.

In manufacturing, a global automotive supplier implemented Enterprise AI Factories to simulate production workflows before launching them in real facilities. By testing workloads digitally first, the company reduced downtime and improved production efficiency.

Another example comes from a technology manufacturer that builds advanced batteries. Using Enterprise AI Factories, engineers ran complex 3D simulations of production lines before constructing physical systems. This approach saved significant time and reduced costly errors during the setup phase.

These examples highlight how industries ranging from healthcare to manufacturing can benefit from scalable AI infrastructure.

Technologies Behind Enterprise AI Factories

The infrastructure supporting Enterprise AI Factories combines powerful hardware with advanced software tools designed for AI development.

NVIDIA’s DGX and HGX platforms deliver the computing resources needed for training large models. High-speed networking ensures data flows smoothly between systems without slowing down workloads.

On the software side, NVIDIA AI Enterprise provides essential development tools. For example, NVIDIA NeMo helps developers create advanced AI models capable of understanding language, generating content, or powering intelligent assistants.

Another key component is NVIDIA NIM microservices. These containerised services provide ready-to-use APIs, allowing developers to deploy AI models into applications quickly.
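NIM containers expose OpenAI-compatible HTTP APIs, so a client mostly assembles a standard chat payload. The endpoint URL and model name below are placeholders for whatever a given deployment actually serves, not values from the announcement.

```python
# Hedged sketch of calling a NIM-style microservice. The endpoint URL and
# model name are placeholders; substitute the values your deployment exposes.

import json

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # placeholder URL

def build_chat_request(prompt: str, model: str = "example-model") -> dict:
    """Assemble an OpenAI-style chat payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }

payload = build_chat_request("Summarise this maintenance log.")
print(json.dumps(payload, indent=2))
# To send it: requests.post(NIM_ENDPOINT, json=payload)  # needs a running container
```

The ready-made API is the point: application teams integrate against a familiar interface instead of wiring up model-serving infrastructure themselves.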

NTT DATA packages these technologies into sector-specific solutions. Instead of building everything from the ground up, companies can start with pre-tested frameworks designed for industries such as healthcare, manufacturing, and financial services.

You can learn more about NVIDIA’s AI ecosystem directly from NVIDIA’s official platform overview.

Why Enterprise AI Factories Matter for UK Businesses

For UK organisations, the push toward artificial intelligence continues to grow. However, many companies still struggle to turn experimental AI projects into scalable solutions.

Enterprise AI Factories provide a structured path forward. By combining infrastructure, tools, and deployment processes into one platform, businesses can build reliable AI systems faster while maintaining strong governance.

This approach also aligns with the UK’s broader efforts to expand digital infrastructure and AI innovation across sectors such as finance, healthcare, and advanced manufacturing.

Companies that adopt Enterprise AI Factories early may gain a significant advantage. Instead of experimenting endlessly with small pilots, they can focus on building production-ready systems that improve efficiency, automate tasks, and unlock insights from their data.

FAQs

What are Enterprise AI Factories?

Enterprise AI Factories are integrated environments that allow organisations to build, train, test, and deploy AI models efficiently within one structured platform.

How do Enterprise AI Factories differ from AI pilots?

AI pilots usually focus on experimentation. Enterprise AI Factories provide a repeatable production framework that supports continuous development and real-world deployment.

Which industries benefit most from Enterprise AI Factories?

Healthcare, manufacturing, financial services, and technology companies are currently seeing the biggest advantages from these systems.

What role does NVIDIA play in Enterprise AI Factories?

NVIDIA provides the GPU infrastructure, networking technology, and AI software platforms that power the core computing environment.

Where can businesses learn more about Enterprise AI Factories?

Companies can review the NTT DATA press release or explore NVIDIA’s AI solutions through their official website.

How Enterprise AI Silos Limit Growth and How to Break Them


Enterprise AI silos are at the root of some of the most surprising roadblocks in modern AI adoption, and most leaders don’t realize how deeply the issue runs. Enterprise AI silos shape how data moves, how people work, and how effectively AI models scale. This expanded guide breaks down the challenges, using IBM-inspired insights, real-world examples, and practical fixes that can help any organization move faster with AI.

At a high level, enterprise AI silos form when data becomes trapped inside departments like finance, HR, or marketing, without clear pathways to share or unify it. When information stays locked in systems that don’t communicate, AI can’t form the complete view required for meaningful predictions.

Companies invest heavily in AI tools and automation, but without aligned, accessible data, those investments hit a wall. It’s like building a race car with no racetrack: the machine exists, but it can’t go anywhere.

Why Enterprise AI Silos Slow Down AI Adoption

Many companies face serious roadblocks because their data lives in isolated pockets. Enterprise AI silos turn even small AI initiatives into complicated hunts for missing or inconsistent information. Instead of focusing on model-building, teams spend months fixing data quality.

IBM surveyed 1,700 global data leaders, revealing:

  • 92% agree business outcomes matter most, yet only 29% feel confident tracking the return on their data investments.

  • 81% now “bring AI to the data,” not the other way around—proof that legacy systems slow progress.

  • Fragmented data creates 6–12 month delays in AI initiatives.

  • 74% of unstructured information (emails, docs, PDFs) remains untouched.

  • Governance gaps make data sharing risky or inconsistent.

For source details, review the IBM CDO Study (official link).

The Real-World Impact of Enterprise AI Silos on Performance

Let’s look at a few examples that show what happens when enterprise AI silos interrupt operations.

Medtronic, a global medical technology firm, used AI to automate invoice matching. The result? Processing times dropped from 20 minutes to 8 seconds, and accuracy exceeded 99%. But before this transformation, enterprise AI silos blocked cross-system communication, slowing every effort.

Matrix Renewables, a clean-energy provider, built a centralized data environment and reduced reporting time by 75% while cutting downtime by 10%. Before that, asset data couldn’t be unified—a common roadblock in energy, manufacturing, and logistics.

Across industries, the impact is consistent:

  • Time wasted means missed opportunities.

  • Siloed data leads to duplicated work.

  • AI underperforms when it cannot access full context.

For more examples, explore this article on closing AI data gaps.

Solutions to Break Enterprise AI Silos

The good news? You don’t need to rebuild your entire data architecture overnight. Instead, modern frameworks offer paths to connect information without lifting and shifting massive datasets.

1. Adopt Data Mesh or Data Fabric

Both approaches keep data where it already lives but establish virtual connections. IBM strongly advocates this model to limit complexity.

A data fabric adds a smart access layer over existing systems so AI tools can query information without copying it everywhere. This reduces how often enterprise AI silos interrupt workflows.
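A toy version of that access layer can make the idea concrete. The "systems" below are plain dictionaries and the `DataFabric` class is hypothetical; the point is that the fabric answers queries by reaching into each system on demand rather than copying records into one store.

```python
# Toy sketch of a data-fabric-style access layer (all names hypothetical).
# Each source system keeps its own data; the fabric builds a unified view
# at query time instead of duplicating records.

finance_system = {"cust-1": {"balance": 1200}, "cust-2": {"balance": -50}}
crm_system = {"cust-1": {"name": "Acme Ltd"}, "cust-2": {"name": "Beta plc"}}

class DataFabric:
    def __init__(self, sources: dict):
        self.sources = sources  # name -> lookup, left where it lives

    def query(self, customer_id: str) -> dict:
        """Assemble a cross-system view on demand, without copying data."""
        view = {"id": customer_id}
        for source in self.sources.values():
            view.update(source.get(customer_id, {}))
        return view

fabric = DataFabric({"finance": finance_system, "crm": crm_system})
print(fabric.query("cust-1"))  # {'id': 'cust-1', 'balance': 1200, 'name': 'Acme Ltd'}
```

A real fabric adds authentication, caching, and lineage tracking on top, but the design choice is the same: queries travel to the data, the data stays put.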

2. Create “Data Products”

Data products turn raw information into reusable building blocks, such as a cross-department customer profile or a supply-chain reliability score. This supports:

  • Safe sharing

  • Rapid model development

  • Governance consistency
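A minimal sketch of such a data product, here a cross-department customer profile, might look like the following. The field names and source dictionaries are illustrative assumptions, not a real schema.

```python
# Sketch of a "data product": a governed, reusable view built once and
# shared across teams. Field names and sources are illustrative.

def customer_profile(customer_id: str, crm: dict, billing: dict) -> dict:
    """Combine CRM and billing records into one reusable profile."""
    return {
        "id": customer_id,
        "name": crm.get(customer_id, {}).get("name"),
        "open_invoices": len(billing.get(customer_id, [])),
    }

crm = {"c1": {"name": "Acme Ltd"}}
billing = {"c1": [{"invoice": 101}, {"invoice": 102}]}
print(customer_profile("c1", crm, billing))
```

Because every team consumes the same profile function rather than re-joining raw tables, sharing stays safe and governance rules are applied in one place.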

3. Modernize Tools and Integrations

Start by assessing:

  1. Where silos exist

  2. What systems don’t integrate

  3. Which teams lack access

  4. What governance gaps remain

Then introduce lightweight connectors, virtualized access layers, and collaborative tools.

4. Strengthen Governance With Security

82% of CDOs say data control is essential for reducing risk. Partnering with security teams ensures you open data responsibly without slowing innovation.

Learn more from Charter Global’s take on breaking silos.

Talent and Culture Barriers Caused by Enterprise AI Silos

Hiring and skills shortages are major contributors to slow AI adoption. 77% of data leaders report trouble finding talent—up from 62% the year before. New AI-related roles appear rapidly, and 82% of organizations are hiring for positions that didn’t exist 24 months ago.

This matters because enterprise AI silos often require specialized skills in:

  • Data integration

  • Model operations

  • Governance

  • Cloud architecture

  • API automation

Beyond skills, culture plays a huge role. 80% of leaders say open data access speeds decision-making and innovation.

Shifting culture happens through:

  • Internal workshops

  • Team-to-team collaboration

  • Sharing success stories

  • Tracking adoption of data tools by non-technical staff

Breaking silos requires people to change how they think, not just how they work.

Governance & Security Issues Linked to Enterprise AI Silos

Increasing access to data requires stronger safeguards. Enterprise AI silos often emerge from old governance rules that limit sharing, but breaking them must be done thoughtfully.

Key considerations:

  • CDOs and CISOs should partner on governance frameworks.

  • Policies must protect sensitive data without restricting innovation.

  • AI agents (used by 83% of surveyed companies) must be trained on reliable, unified information.

Governance isn’t a blocker; it’s an enabler when done well.

For deeper exploration, see The Information’s analysis:
https://www.theinformation.com/articles/ai-breaking-data-silos

Conclusion: Overcoming Enterprise AI Silos for Future Growth

We’ve explored how enterprise AI silos create delays, raise costs, and block AI innovation. Companies that address these barriers with data fabric, stronger governance, cultural change, and talent development see real wins—like Medtronic’s 8-second invoice matching.

Organizations ready to scale AI must ask:
What is one small action we can take today to unlock our data?

Share your insights; we’d love to hear where you are on your AI journey.

FAQ

What are enterprise AI silos, and why do they matter?

They are isolated data environments within a company. AI relies on complete and consistent data, so silos slow model training and limit accuracy.

How can companies identify enterprise AI silos?

Look for long data prep cycles, inconsistent reporting, or teams unable to access critical information.

What fixes help eliminate enterprise AI silos?

Mapping data, using data fabric, adopting common governance, and encouraging sharing across teams.

Does IBM offer tools to reduce enterprise AI silos?

Yes, approaches like data fabric, data products, and platforms like watsonx help unify data and speed AI use cases.

How does talent shortage relate to enterprise AI silos?

Companies lack specialized skills to integrate data and build scalable models. Upskilling and hiring are essential.
