When AI Answers the Phone, Who Holds Authority?
April 14, 2026

Jamie Dimon Just Put Deepfakes on the World’s Agenda. But Detection Alone Is Not Enough.

Tags: Beyond Detection, Deepfakes, Identity, is it human?, is it the right human?, JPMC

Jamie Dimon does not deal in hyperbole. When the CEO of JPMorganChase, one of the largest financial institutions in the world, uses his annual shareholder letter to highlight deepfakes as one of the defining risks of the AI era, the market should pay attention.

That is not a passing observation. It is a warning. And he’s right. But it needs to prompt a more important question that every organization must ask. Not just: can we detect a deepfake?

But rather:

Is it a real human? Is it the right human? And is the integrity of the transaction outcome preserved and immutably bound to that authentic identity?

Because deepfake detection is only one signal. It is not identity assurance.

The Real Problem Is Bigger Than Deepfakes

AI-generated fraud is accelerating at extraordinary speed. Synthetic voices can now impersonate customers, executives, employees, and public figures with a level of realism that would have seemed implausible only a short time ago. Fraudsters are already using cloned voices to social engineer contact centers, bypass legacy authentication controls, and manipulate high-value transactions.

That threat is real. But the market is in danger of oversimplifying the solution.

Much of the conversation today is focused on whether a voice is fake. That matters, of course. But even perfect deepfake detection would not, on its own, solve the broader trust problem.

Why? Because a detected deepfake is only evidence of possible manipulation. It does not prove who the genuine speaker is. It does not prove whether the person interacting with your organization is the authorized individual. And it does not prove that the action taken, the instruction given, or the transaction completed has remained intact from intent to outcome.

That is the real challenge of the AI era. Trust cannot rest on a single signal.

“Real Human” Is Only the First Test

For years, enterprises have approached voice security as a point-in-time exercise. Authenticate someone at the beginning of an interaction, then assume trust for everything that follows.

That model is now obsolete.

In an AI-enabled threat environment, trust must be continuously established and continuously preserved. It is no longer enough to ask whether the caller sounds human or whether they passed an authentication step 30 seconds ago. Security must answer three distinct questions throughout the interaction:

Real Human? Is it human?

Right Human? Is it the right human?

Right Outcome? Is the outcome of the interaction securely and immutably bound to that verified identity and intent?

Most authentication solutions in the market only address part of this chain.

Some focus on liveness. Some focus on fraud signals. Some focus on biometric matching at the start of the interaction. But if they cannot determine whether the speaker is the right person, and if they cannot bind the resulting transaction or instruction to that verified identity in a way that preserves integrity and non-repudiation, then the assurance model remains incomplete.
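To make the gap concrete, here is a minimal sketch of the difference between point-in-time authentication and continuous assurance. Everything in it is hypothetical: the signal names, the thresholds, and the scores are illustrative stand-ins for what real liveness-detection and speaker-verification models would produce, not any vendor’s actual implementation.

```python
from dataclasses import dataclass

# Hypothetical per-chunk signals; a real system would derive these from
# liveness-detection and speaker-verification models, not hard-coded values.
@dataclass
class VoiceSignal:
    liveness_score: float   # 0..1: likelihood the audio is live human speech
    speaker_score: float    # 0..1: match against the enrolled (right) human

def session_trusted(signals, liveness_min=0.9, speaker_min=0.9):
    """Trust holds only if EVERY chunk in the session passes both checks.

    Point-in-time authentication inspects only the opening chunk;
    continuous assurance re-evaluates the whole stream.
    """
    return all(
        s.liveness_score >= liveness_min and s.speaker_score >= speaker_min
        for s in signals
    )

# A session that authenticates cleanly at the start but is hijacked mid-call:
session = [
    VoiceSignal(0.98, 0.97),  # opening chunk: passes a one-time check
    VoiceSignal(0.97, 0.96),
    VoiceSignal(0.40, 0.30),  # injected synthetic audio later in the call
]
print(session_trusted(session[:1]))  # point-in-time view: True
print(session_trusted(session))      # continuous view: False
```

The same session passes a front-door check and fails a continuous one, which is exactly the assurance gap described above.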

Deepfake Detection Does Not Equal Identity Assurance

This distinction is now critical.

A deepfake engine may tell you a voice appears synthetic. Useful, yes. But that alone does not establish who the legitimate person is.

Likewise, a one-time biometric match at the start of a session may indicate the caller resembles an enrolled user. But if the interaction is later manipulated, hijacked, socially engineered, or altered in-flight, that initial authentication event may no longer mean very much.

That is why point-in-time authentication is increasingly inadequate. It was built for a world in which the interaction channel itself was relatively stable. That is no longer the world we live in. Today, attackers can inject, alter, simulate, and influence in real time. Which means that trust must be maintained in real time too.

For banks, this is especially important. But it does not stop at banking. Any organization that relies on voice, digital identity, approvals, authorizations, or remote interactions now faces the same problem.

And yes, that includes even the most sophisticated institutions in the world. No bank, including JPMorganChase, can assume immunity if its control framework does not fully address real-human verification, right-human assurance, and immutable transaction integrity.

Jamie Dimon’s Warning Goes Further Than Most Organizations Realize

Jamie Dimon’s message was that these risks are manageable if organizations prepare. He is right. But preparation has to mean more than bolting a deepfake detector onto an outdated authentication model. Real preparation means building for full identity assurance.

At ValidSoft, we believe that requires a layered approach:

First, determine whether the presented voice is genuine and human. That means identifying synthetic audio, cloned voices, replay attacks, and other artifacts of manipulation in real time.

Second, determine whether it is the right human. Not just a human voice, but the authorized individual associated with the account, instruction, or entitlement in question.

Third, maintain assurance continuously across the interaction. Trust should not expire the moment a user is initially verified. It must persist across the full session.

Fourth, preserve the integrity of the outcome. The instruction, transaction, or approval must be bound to the verified identity and captured in a way that is resistant to dispute, tampering, repudiation, or later ambiguity.

That final point is where many current market narratives still fall short. Because fraud prevention is not just about stopping bad actors at the front door. It is also about ensuring that what happens next remains provably tied to the authenticated party and their actual intent.
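The fourth layer can be sketched in code. This is not ValidSoft’s implementation; it is a simple illustration, using a keyed hash (HMAC) over the full record, of why binding the outcome to the verified identity and intent makes later alteration detectable. The key, function names, and transaction fields are all hypothetical, and true non-repudiation would require asymmetric signatures rather than a shared key.

```python
import hashlib
import hmac
import json

# Hypothetical key for illustration only; in production this would live in
# an HSM or key-management service, never in source code.
BINDING_KEY = b"example-key-do-not-use-in-production"

def bind_outcome(identity: str, intent: str, outcome: dict) -> str:
    """Return a tamper-evident tag binding an outcome to identity and intent.

    Any later change to the identity, the stated intent, or the transaction
    details produces a different tag, so alteration is detectable.
    """
    payload = json.dumps(
        {"identity": identity, "intent": intent, "outcome": outcome},
        sort_keys=True,
    ).encode()
    return hmac.new(BINDING_KEY, payload, hashlib.sha256).hexdigest()

def verify_binding(identity: str, intent: str, outcome: dict, tag: str) -> bool:
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(bind_outcome(identity, intent, outcome), tag)

tag = bind_outcome("customer-123", "approve wire transfer",
                   {"amount": 50_000, "to": "ACME Corp"})
# The unaltered record verifies; an in-flight change to the amount does not.
print(verify_binding("customer-123", "approve wire transfer",
                     {"amount": 50_000, "to": "ACME Corp"}, tag))   # True
print(verify_binding("customer-123", "approve wire transfer",
                     {"amount": 500_000, "to": "ACME Corp"}, tag))  # False
```

The point of the sketch is the chain itself: identity, intent, and outcome are sealed together, so a dispute can be answered with evidence rather than belief.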

The Missing Layer: Identity, Intent, and Immutable Outcome Integrity

This is where the industry needs to evolve. If a customer authorizes a payment, changes account details, approves a transfer, or gives a high-risk instruction, the organization must be able to demonstrate more than “we believed the voice sounded legitimate.”

It must be able to demonstrate that:

The speaker was genuine.

The speaker was the correct authorized party.

The intent was captured correctly.

And the resulting action was immutably bound to that identity and intent.

Without that chain, the organization is still exposed. That exposure may not become obvious until a dispute, fraud claim, regulatory inquiry, litigation event, or reputational incident forces the question: how do you know this was really the right person, and how do you know the outcome was not altered?

That is why deepfake detection on its own is not enough. It is an important input, but only one input. The real requirement is verifiable identity assurance and preserved transaction integrity from beginning to end.

Why This Matters Now

The rapid adoption of AI is changing the threat landscape faster than most enterprises are adapting their controls. Attackers are innovating. Customers are transacting remotely. Voice is becoming a more important interface across banking, contact centers, digital servicing, and AI-assisted interactions.

At the same time, boards and risk committees are waking up to the reality that the historic authentication model is no longer fit for purpose.

The institutions that respond successfully will not be the ones that treat deepfakes as an isolated fraud problem. They will be the ones that understand this is a broader trust and identity-assurance challenge.

The future belongs to organizations that can answer all three questions with confidence:

Is it human?

Is it the right human?

Is the outcome authentic, preserved, and immutably bound to that identity?

This is the new standard.

And any organization that does not build toward it will remain vulnerable, no matter how advanced it believes its current controls to be.

The Bottom Line

Jamie Dimon has helped elevate the issue. That matters. But the next step for the market is to move beyond detection alone.

Deepfake detection is a signal. Identity assurance is the objective. Immutable transaction integrity is the standard that will increasingly define trust.

That is the level of preparation the AI era now demands.