
These are interesting times for AI and trust. An increasing number of investment firms are using AI agents to review research notes and company presentations. Humans are being asked to hand over increasingly invasive biometric data, such as facial scans, voice samples, and behavioral patterns, just to prove that they are not robots. Once in the wild, this data can be weaponized by AI-powered bots to convincingly spoof real people, defeating the very systems designed to keep them out. That leaves us in a strange new arms race: the more invasive the verification, the greater the damage when the underlying data inevitably leaks. So how do we check who (or what) we are really dealing with?
It is untenable to demand transparency from humans while accepting opacity from machines. Both robots and humans online need better ways to verify their identity. We cannot solve this problem simply by collecting more biometric data, nor by creating centralized registries that amount to honeypots for cybercriminals. Zero-knowledge proofs offer a path forward where both humans and AI can prove their credentials without exposing themselves to exploitation.
Trust deficit blocks progress
The absence of verifiable AI identity creates immediate market risks. When AI agents can impersonate humans, manipulate markets, or execute unauthorized transactions, companies are rightly hesitant to deploy autonomous systems at scale. Consider that LLMs fine-tuned on smaller datasets to improve performance can be 22 times more likely to produce harmful outputs than base models, and that success rates for bypassing security and ethics guardrails (a process known as “jailbreaking”) triple compared to production-ready systems. Without reliable identity verification, every interaction with AI moves one step closer to a potential security breach.
The problem goes beyond keeping malicious actors from deploying rogue agents, because we are not dealing with a single AI interface. The future will bring ever more autonomous AI agents with ever greater capabilities. In such a sea of agents, how do we know what we are dealing with? Even legitimate AI systems need verifiable credentials to participate in the emerging agent-to-agent economy. When an AI trading bot executes a transaction with another bot, both parties need assurance of the other’s identity, authorization, and liability structure.
The human side of this equation is equally broken. Traditional identity verification systems expose users to massive data breaches, too easily enable authoritarian surveillance, and generate billions in revenue for large corporations from the sale of personal information without compensating the people who generate it. People are rightly reluctant to share more personal data, yet regulations demand increasingly invasive verification procedures.
Zero knowledge: the bridge between privacy and responsibility
Zero-knowledge proofs (ZKPs) offer a solution to this seemingly intractable problem. Instead of revealing sensitive information, ZKPs allow entities, whether human or artificial, to prove specific claims without exposing the underlying data. A user can prove that they are over 21 years old without revealing their date of birth. An AI agent can demonstrate that it was trained on ethical data sets without exposing proprietary algorithms. A financial institution can verify that a customer meets regulatory requirements without storing personal information that could be breached.
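To make the "prove a claim without exposing the data" idea concrete, here is a minimal sketch of a Schnorr-style non-interactive proof of knowledge, one of the simplest zero-knowledge constructions. The prover convinces a verifier that it knows a secret `x` behind the public key `y = G^x mod P` without ever revealing `x`. The group parameters below are tiny toy values chosen for readability, not security; production systems use standardized elliptic-curve groups with roughly 256-bit order.

```python
import hashlib
import secrets

# Toy parameters for illustration only -- far too small for real use.
P = 2039   # safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup mod P

def keygen():
    """Prover's secret x and public key y = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)

def challenge(y, t):
    """Fiat-Shamir heuristic: derive the challenge by hashing the transcript."""
    data = f"{G}|{y}|{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x, y):
    """Prove knowledge of x (where y = G^x mod P) without revealing x."""
    r = secrets.randbelow(Q - 1) + 1   # one-time random nonce
    t = pow(G, r, P)                   # commitment
    c = challenge(y, t)
    s = (r + c * x) % Q                # response blends nonce and secret
    return t, s

def verify(y, t, s):
    """Check G^s == t * y^c (mod P); the verifier never sees x."""
    c = challenge(y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

x, y = keygen()
t, s = prove(x, y)
print(verify(y, t, s))               # True: valid proof accepted
print(verify(y, t, (s + 1) % Q))     # False: tampered response rejected
```

The verification works because `G^s = G^(r + c*x) = t * y^c`, yet the response `s` is masked by the random nonce `r`, so it leaks nothing about `x`. The same pattern, generalized to arbitrary statements via zk-SNARKs or zk-STARKs, underlies claims like "over 21" or "trained on audited data."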
For AI agents, ZKPs can enable the deep levels of trust required: we must verify not only technical architecture but also behavioral patterns, legal liability, and social reputation. With ZKPs, these attestations can be anchored in a verifiable trust graph on-chain.
Think of it as a composable identity layer that works across platforms and jurisdictions. That way, when an AI agent presents its credentials, it can demonstrate that its training data meets ethical standards, its results have been audited, and its actions are linked to responsible human entities, all without exposing proprietary information.
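A simplified way to see how such a credential layer could reveal one claim while keeping the rest private is commitment-based selective disclosure: the agent publishes salted hash commitments to all of its attributes, then opens only the attribute a given verifier needs. (Real credential systems use more powerful primitives such as BBS+ signatures or anonymous credentials; the attribute names below are hypothetical examples.)

```python
import hashlib
import secrets

def commit(attributes):
    """Commit to each attribute with a fresh random salt.

    Only the salted hashes are published; the values and salts
    stay with the credential holder.
    """
    salts = {k: secrets.token_hex(16) for k in attributes}
    commitments = {
        k: hashlib.sha256(f"{salts[k]}|{v}".encode()).hexdigest()
        for k, v in attributes.items()
    }
    return commitments, salts

def disclose(attributes, salts, key):
    """Reveal a single attribute along with its salt (the 'opening')."""
    return key, attributes[key], salts[key]

def check(commitments, key, value, salt):
    """Verifier recomputes the hash for the one revealed attribute."""
    digest = hashlib.sha256(f"{salt}|{value}".encode()).hexdigest()
    return commitments.get(key) == digest

# The agent publishes commitments to all of its credential claims...
creds = {
    "training_data_audited": "yes",
    "operator": "Example Corp",       # hypothetical liable human entity
    "model_version": "2.1",
}
commitments, salts = commit(creds)

# ...and later opens just one claim for a specific verifier.
key, value, salt = disclose(creds, salts, "training_data_audited")
print(check(commitments, key, value, salt))  # True; other attributes stay hidden
```

The published commitments are what a trust graph would anchor on-chain: verifiers can confirm any claim the agent chooses to open, while undisclosed attributes remain computationally hidden behind their salted hashes.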
ZKPs could be a complete game-changer, letting us prove who we are without handing over sensitive data, but adoption remains slow. The technology is still a niche, unknown to most users and entangled in regulatory gray areas. To make matters worse, companies that profit from data collection have little incentive to adopt it. That has not stopped more agile identity companies from embracing ZKPs, however, and as regulatory standards emerge and awareness grows, the technology could become the backbone of a new era of trusted AI and digital identity, giving individuals and organizations a way to interact securely and transparently across platforms and borders.
Market implications: unlocking the agent economy
Generative AI could add trillions annually to the global economy, but much of this value remains locked behind identity verification barriers. There are several reasons for this. One is that institutional investors need strong KYC/AML compliance before deploying capital into AI-powered strategies. Another is that enterprises require verifiable agent identities before allowing autonomous systems to access critical infrastructure. And regulators require accountability mechanisms before approving the deployment of AI in sensitive domains.
ZKP-based identity systems address all of these requirements while preserving the privacy and autonomy that make decentralized systems valuable. By allowing selective disclosure, they satisfy regulatory requirements without creating personal data traps. By providing cryptographic verification, they enable trustless interactions between autonomous agents. And by maintaining user control, they align with emerging data protection regulations like GDPR and California privacy laws.
The technology could also help address the growing deepfake crisis. When every piece of content can be cryptographically linked to a verified creator without revealing their identity, we can combat misinformation while still protecting privacy. This is particularly crucial as AI-generated content becomes indistinguishable from human-created material.
The ZK path
Some will argue that any identity system represents a step towards authoritarianism, but no society can function without a way to identify its citizens. Identity verification is already being done at scale, albeit poorly. Every time we upload documents for KYC, undergo facial recognition, or share personal data for age verification, we participate in identity systems that are invasive, insecure, and inefficient.
Zero-knowledge proofs offer a path forward that respects individual privacy while enabling the trust necessary for complex economic interactions. They allow us to build systems where users control their data, verification does not require surveillance, and both humans and AI agents can interact securely without sacrificing autonomy.
