
A world of powerful AI Agents needs a new identity framework

AI agents demand a new identity framework for secure delegation, reputation tracking, and legal clarity in autonomous actions.

August 9, 2025
5 min read
Lu-Hai Liang


OpenAI’s ChatGPT has made a splash around the world, and the technology is advancing rapidly. The growing sophistication of AI tools is creating challenges for the digital world, especially where AI agents are concerned. In a recent Dock Labs webinar, Peter Horadan, CEO of Vouched, discussed these challenges.

An AI agent can be understood as a personal assistant. If you want to book a holiday, for example, the agent can act as a vacation planner. With the latest version of ChatGPT, the agent can take actions on your behalf: opening a browser window, prompting you to fill in sign-in details on a website, and purchasing plane tickets.

However, cybersecurity best practice is to never share your username and password with third parties. Typing credentials into a window the agent controls means ChatGPT obtains a valid session key with that airline. AI agents are also increasingly used at work, where they might prompt users to log in to company information systems, effectively logging in as the user to finance and accounting applications. While ChatGPT performs well at these tasks, Horadan calls this “terrible training” because it normalizes the risky practice of sharing credentials with AI agents.

Today, ChatGPT automates user interactions through screen scraping and browser automation that impersonates individuals. Anthropic’s Model Context Protocol (MCP), released in November 2024, offers a more controlled framework that allows agents to retrieve information or perform actions under strict permissions. However, MCP still lacks essential features for robust identity management.
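To make the contrast concrete, the sketch below compares an agent that holds a user’s raw credentials (and can therefore do anything the user can) with an agent that is limited to tools a server explicitly exposes under declared permissions. The classes and names are illustrative assumptions, not the actual MCP SDK.

```python
# Illustrative contrast: raw credential sharing vs. permissioned tool calls.
# Names (RawCredentialAgent, ToolServer, ...) are hypothetical, not the MCP SDK API.
from dataclasses import dataclass, field


@dataclass
class RawCredentialAgent:
    """Anti-pattern: the agent logs in as the user and can do anything."""
    username: str
    password: str  # whoever holds this effectively holds the whole account

    def act(self, action: str) -> str:
        return f"performed '{action}' as {self.username} with full account access"


@dataclass
class ToolServer:
    """Permissioned pattern: the server exposes only named tools to the agent."""
    allowed_tools: dict = field(default_factory=dict)  # tool name -> callable

    def call(self, tool: str, **kwargs) -> str:
        if tool not in self.allowed_tools:
            raise PermissionError(f"tool '{tool}' is not exposed to this agent")
        return self.allowed_tools[tool](**kwargs)


def search_flights(origin: str, dest: str) -> str:
    return f"flight options {origin} -> {dest}"


legacy = RawCredentialAgent(username="alice@example.com", password="hunter2")
print(legacy.act("change account email"))  # nothing constrains this agent

server = ToolServer(allowed_tools={"search_flights": search_flights})
print(server.call("search_flights", origin="LHR", dest="JFK"))  # permitted tool
# server.call("change_password", new="x")  # would raise PermissionError
```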

Key Challenges and Proposed Solutions

1. Clear Identification and Delegation

Any agent acting on a user’s behalf must be distinctly identified, so that services can tell whether an action was taken by the human or by the software agent. Users may want to delegate specific tasks, such as purchasing tickets, without granting full authority for other activities. This requires mechanisms for distributed authentication and role-based delegation that track exactly which rights a human has granted to an agent. MCP does not currently address these requirements, but they are vital for secure, transparent agent operation.
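To make the idea concrete, the sketch below models a delegation grant that records exactly which rights a user has given an agent, with a scope list and an expiry. The names and fields are illustrative assumptions rather than part of MCP or any published specification.

```python
# Hypothetical delegation grant: ties a human principal to a software agent,
# limited to named scopes and a validity window. Illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone


@dataclass(frozen=True)
class DelegationGrant:
    human_id: str           # the person on whose behalf the agent acts
    agent_id: str           # distinct identifier for the software agent
    scopes: frozenset       # e.g. {"flights:purchase"} but not {"account:admin"}
    expires_at: datetime

    def permits(self, scope: str) -> bool:
        """An action is allowed only if it is in scope and the grant is still live."""
        return scope in self.scopes and datetime.now(timezone.utc) < self.expires_at


grant = DelegationGrant(
    human_id="user:alice",
    agent_id="agent:vacation-planner",
    scopes=frozenset({"flights:search", "flights:purchase"}),
    expires_at=datetime.now(timezone.utc) + timedelta(hours=1),
)
assert grant.permits("flights:purchase")      # explicitly delegated right
assert not grant.permits("payroll:read")      # delegation is scoped, not blanket
```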

2. Reputation Tracking

It is crucial to track the reputation of AI agents. Just as email systems struggled with phishing because they lacked safeguards, the population of autonomous agents will include both trustworthy and malicious actors; scam, fraudster, and hustler agents will emerge. Horadan suggests a reputation framework akin to Yelp for AI agents, enabling platforms to monitor behavior and flag agents that violate user expectations or act maliciously.
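As a minimal illustration of what such a reputation ledger might track, the sketch below aggregates structured reports per agent and flags agents whose violation rate crosses a threshold. The fields and scoring rule are illustrative assumptions, not part of any published framework.

```python
# Hypothetical reputation ledger for agents: counts structured reports per agent
# and flags agents whose violation rate crosses a configurable threshold.
from collections import defaultdict


class AgentReputation:
    def __init__(self, flag_threshold: float = 0.2):
        self.reports = defaultdict(lambda: {"ok": 0, "violation": 0})
        self.flag_threshold = flag_threshold

    def report(self, agent_id: str, violated_expectations: bool) -> None:
        key = "violation" if violated_expectations else "ok"
        self.reports[agent_id][key] += 1

    def is_flagged(self, agent_id: str) -> bool:
        counts = self.reports[agent_id]
        total = counts["ok"] + counts["violation"]
        return total > 0 and counts["violation"] / total >= self.flag_threshold


ledger = AgentReputation()
ledger.report("agent:vacation-planner", violated_expectations=False)
ledger.report("agent:ticket-hustler", violated_expectations=True)
print(ledger.is_flagged("agent:ticket-hustler"))   # True
print(ledger.is_flagged("agent:vacation-planner"))  # False
```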

3. Legal and Contractual Considerations

Traditional checkboxes for terms and conditions assume a conscious human decision. If an agent consents automatically on behalf of a user, the legal validity of such agreements may be questionable. Agents must either prompt for explicit human confirmation or operate under pre-negotiated legal frameworks clearly defining their authority.

Vouched’s Know Your Agent Framework and Identity Extension for MCP

To address these challenges, Vouched has proposed a Know Your Agent framework and an Identity Extension for MCP. Inspired by OAuth 2.0 principles, this specification enables:
  • Durable, scoped authorizations tied to a session key presented by the agent when requesting permitted actions.
  • Clear identification of the agent’s credentials, separate from the user’s identity.
  • A reporting mechanism for service providers to submit structured feedback to an impartial rating authority.
This extension, called MCPI, adds an identity layer to Anthropic’s MCP protocol. Horadan’s presentation elaborates on how MCPI integrates with existing IAM and CIAM systems, and the role of mobile driver’s licenses (mDLs), European Digital Identity (EUDI), and verifiable credentials.
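As a rough sketch of what an OAuth-2.0-style check along these lines might look like, the snippet below has a service look up the durable, scoped authorization behind a session key presented by an agent and verify the agent’s own identifier separately from the user’s identity. The names and data structures are illustrative assumptions, not the MCPI specification itself.

```python
# Hypothetical OAuth-2.0-style check: a session key maps to a durable, scoped
# authorization that names both the user and the agent. Not the MCPI spec.
from dataclasses import dataclass


@dataclass
class ScopedAuthorization:
    user_id: str      # whose rights are being exercised
    agent_id: str     # which agent was granted them
    scopes: set       # what the grant actually covers


# In a real deployment this lookup would be token introspection or credential
# verification; a plain dict keeps the sketch self-contained.
AUTHORIZATIONS = {
    "sess-123": ScopedAuthorization(
        user_id="user:alice",
        agent_id="agent:vacation-planner",
        scopes={"flights:purchase"},
    )
}


def authorize(session_key: str, presented_agent_id: str, action_scope: str) -> bool:
    auth = AUTHORIZATIONS.get(session_key)
    if auth is None:
        return False                        # unknown or revoked session key
    if auth.agent_id != presented_agent_id:
        return False                        # agent identity must match the grant
    return action_scope in auth.scopes      # action must fall within granted scope


print(authorize("sess-123", "agent:vacation-planner", "flights:purchase"))  # True
print(authorize("sess-123", "agent:vacation-planner", "account:delete"))    # False
```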

Digital Identity Rights Framework (DIRF)

A complementary approach comes from a team of researchers from academia and companies including Nokia, Deloitte, and J.P. Morgan, who formulated the “Digital Identity Rights Framework” (DIRF). The framework protects behavioral, biometric, and personality-based digital likeness attributes as generative AI and its products become more widespread. Available on arXiv, DIRF defines 63 enforceable identity-centric controls across nine domains, each categorized as legal, technical, or hybrid. These domains include identity consent, model training governance, traceability, memory drift, and monetization enforcement, and the framework aims to protect individuals from unauthorized use, modeling, and monetization of their digital identity.

Interestingly, DIRF not only protects human identity but also improves AI system performance. Evaluations show that DIRF substantially enhances large language model (LLM) performance across metrics, resulting in greater prompt reliability and execution stability. The authors provide an implementation roadmap and highlight compatibility with AI security layers such as the NIST AI RMF and the OWASP LLM Top 10.
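As a rough illustration of how a catalogue like this could be represented in code, the sketch below models a handful of controls keyed by domain and category. The control IDs and descriptions are illustrative placeholders, not the actual 63 controls defined in the paper.

```python
# Illustrative registry for DIRF-style controls: each control has a domain
# (e.g. identity consent, traceability) and a category (legal/technical/hybrid).
# Entries are placeholders, not the paper's actual controls.
from dataclasses import dataclass
from enum import Enum


class Category(Enum):
    LEGAL = "legal"
    TECHNICAL = "technical"
    HYBRID = "hybrid"


@dataclass(frozen=True)
class Control:
    control_id: str
    domain: str
    category: Category
    description: str


CONTROLS = [
    Control("IC-01", "identity consent", Category.LEGAL,
            "Obtain explicit consent before modeling a person's likeness."),
    Control("TR-01", "traceability", Category.TECHNICAL,
            "Log which identity attributes each generation used."),
    Control("ME-01", "monetization enforcement", Category.HYBRID,
            "Enforce licensing terms when identity-derived outputs are sold."),
]


def by_domain(domain: str) -> list:
    return [c for c in CONTROLS if c.domain == domain]


print([c.control_id for c in by_domain("traceability")])  # ['TR-01']
```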
This article is based on insights from Peter Horadan, CEO of Vouched, and research from the Digital Identity Rights Framework team.

Frequently Asked Questions (FAQ)

Identity and Security for AI Agents

Q: What is an AI Agent and what are its capabilities?
A: An AI Agent is like a personal assistant that can perform tasks on your behalf. Advanced agents, such as newer versions of ChatGPT, can interact with websites, fill in forms, and even make purchases, acting with your delegated authority.

Q: What are the primary security risks associated with AI Agents?
A: A major risk is the normalization of unsafe practices, like sharing credentials with AI agents. This can lead to compromised accounts and data breaches. Screen scraping and browser automation used by some agents also pose impersonation risks.

Q: How can AI Agents be identified securely?
A: A new identity framework is needed to clearly distinguish between human users and AI agents. This involves distinct identification and mechanisms for role-based delegation to manage the specific rights granted to an agent.

Q: What is the "Know Your Agent" framework?
A: Proposed by Vouched, this framework, along with an Identity Extension for MCP (MCPI), aims to provide durable, scoped authorizations for AI agents, separating their credentials from user identities and enabling reputation tracking through structured feedback.

Q: What is the purpose of reputation tracking for AI Agents?
A: Similar to how email systems needed safeguards against phishing, AI agents will include both trustworthy and malicious actors. Reputation frameworks help platforms monitor agent behavior and flag those that act maliciously or violate user expectations.

Q: What are the legal implications of AI Agents acting on behalf of users?
A: Traditional legal agreements (like terms and conditions) assume human consent. When an agent consents automatically, the legal validity of these agreements can be questioned. Agents may need to prompt for explicit human confirmation or operate under pre-defined legal frameworks.

Q: How does the Digital Identity Rights Framework (DIRF) contribute to AI Agent security?
A: DIRF protects digital likeness attributes and defines enforceable identity-centric controls across various domains, including consent and traceability. This not only safeguards human identity but also enhances AI system performance by improving prompt reliability and execution stability.

Crypto Market AI's Take

The rapid advancement of AI agents, exemplified by tools like ChatGPT, presents a significant frontier in digital interaction and automation. However, as highlighted in this article, this evolution brings critical challenges in establishing robust identity frameworks for these powerful agents. At Crypto Market AI, we understand the intricate relationship between AI, automation, and the digital asset space. Our platform leverages cutting-edge AI to provide sophisticated market analysis, trading bots, and personalized financial insights, all built with security and user control in mind. Ensuring the secure and transparent operation of AI agents within the cryptocurrency ecosystem is paramount. This aligns with our commitment to developing AI solutions that are not only innovative but also responsible and trustworthy, aiming to amplify human potential in finance.

More to Read:

  • AI Agents are Broken: Can GPT-5 Fix Them?
  • AI Crypto Scams Surge 456%: Experts Warn No One is Safe
  • Intuit QuickBooks AI Agents: Boosting Business Efficiency

For more details, visit the original article at BiometricUpdate.com.