Can employees learn to trust an AI boss?
workplace-culture

Most employees embrace AI tools but hesitate to trust AI as their boss, highlighting the need for human accountability in AI adoption.

August 15, 2025
5 min read
Sheryl Estrada

Can Employees Learn to Trust an AI Boss?

AI agents are increasingly integrated into the workplace, but employees remain cautious about reporting to a digital boss. More than 80% of organizations are expanding their use of AI agents, according to a recent report by Fortune 500 company Workday. While 75% of workers feel comfortable collaborating with AI agents, only 30% are comfortable being managed by one.

Employees generally accept AI as a tool but resist viewing AI as a decision maker to whom they must answer. Nearly half (48%) of respondents expressed concern that AI agents will increase pressure on employees to work faster. The findings come from a survey of 2,950 full-time decision makers and software implementation leaders across North America, APAC, and EMEA.

Trust remains a significant barrier. Over 25% of respondents believe AI agents are overhyped. Kathy Pham, vice president of AI at Workday, emphasizes that "building trust means being intentional in how AI is used and keeping people at the center of every decision." The research shows trust in AI agents grows with increased use: only 36% of those just beginning to explore AI agents trust their organizations to use them responsibly, compared with 95% among those further along the adoption curve.

Workday's report highlights the importance of maintaining human accountability in AI decision-making. At the Fortune Brainstorm AI Singapore conference, Sapna Chadha, VP for Southeast Asia and South Asia Frontier at Google, advised that agentic platforms must clearly communicate actions and request user approval at key decision points. "You wouldn't want to have a system that can do this fully without a human in the loop," Chadha said.

With a shortage of CPAs and finance professionals looming, 76% of finance workers believe AI agents will help fill the gap, and only 12% worry about job loss. AI's top uses in finance include forecasting and budgeting, financial reporting, and fraud detection. Gen Z workers are especially optimistic: 70% are interested in working for companies that invest in AI agents.

The report recommends that leaders refine performance through human ingenuity, prioritize tools and training, and design roles that unlock purpose, not just productivity. Will employees ever be comfortable calling an AI agent their boss? When it comes to AI, "never say never," but human oversight and trust-building remain essential.
Article by Sheryl Estrada. Originally published at Fortune on August 15, 2025.

Frequently Asked Questions (FAQ)

What percentage of organizations are expanding their use of AI agents?

A: According to a Workday report, over 80% of organizations are expanding their use of AI agents.

How comfortable are workers with collaborating with AI agents versus being managed by one?

A: While 75% of workers feel comfortable collaborating with AI agents, only 30% are comfortable being managed by one.

What is a primary concern employees have about AI agents in the workplace?

A: Nearly half (48%) of respondents expressed concern that AI agents will increase pressure on employees to work faster.

What is the key to building trust in AI agents?

A: Kathy Pham of Workday emphasizes that building trust means being intentional in how AI is used and keeping people at the center of every decision.

How does trust in AI agents evolve with increased adoption?

A: Trust grows with increased use; only 36% of those just beginning to explore AI agents trust their organizations to use them responsibly, compared to 95% among those further along the adoption curve.

What advice was given regarding AI agent decision-making at the Fortune Brainstorm AI Singapore conference?

A: Sapna Chadha of Google advised that agentic platforms must clearly communicate actions and request user approval at key decision points, highlighting the importance of a human in the loop.

How do finance workers view AI agents in the context of professional shortages?

A: With a shortage of CPAs and finance professionals looming, 76% of finance workers believe AI agents will help fill the gap, and only 12% worry about job loss.

Which demographic is particularly optimistic about working with AI agents?

A: Gen Z workers are especially optimistic, with 70% interested in working for companies that invest in AI agents.

Crypto Market AI's Take

The integration of AI agents into the workplace, particularly in roles traditionally held by humans, presents a complex shift. While AI can streamline processes, increase efficiency, and even fill critical labor gaps in sectors like finance, the human element remains paramount. The reluctance of employees to accept AI as a direct manager underscores the need for careful implementation.

At Crypto Market AI, we believe AI should augment human capabilities, not replace them entirely, especially in decision-making that requires nuanced understanding, ethical judgment, and human oversight. Our platform leverages AI for market intelligence and trading automation, but always within a framework that prioritizes user control and transparency, ensuring that AI acts as a sophisticated tool rather than an unquestioned authority.
