Bot identity is an unsolved problem. The bots causing the most damage don’t look like attackers. They look like your best customers.
When Bots Are Indistinguishable from Legitimate Users
There was a time when bots were easy to define. They were the bad actors: credential stuffers, fake account creators, scripted fraudsters. You built a wall. You blocked them. Done.
That clarity is gone.
Today, bots are also your most efficient operators. They’re booking travel, managing subscriptions, executing payments and interacting with platforms on behalf of real, legitimate users. Businesses have invited automation in, and rightly so. The problem is that adversarial bots arrived at the same time, wearing the same clothes.
According to new research from PYMNTS Intelligence and Trulioo, automated bot traffic now accounts for the majority of online activity. More than 90% of organisations say managing bot traffic is a significant operational challenge, not because bots are universally bad, but because they can no longer be universally trusted.
Not all bots are malicious. Many are legitimate agents acting on behalf of users. The challenge lies in distinguishing between helpful automation and adversarial behaviour. That single ambiguity is reshaping the entire threat landscape.
Traditional Identity Verification Fails Against Modern Bots
Every identity framework your enterprise relies on was built around one assumption: a human is on the other side. KYC, KYB, authentication layers, fraud models, all of it was designed to answer one question: "Is this the right person?"
That question on its own is no longer sufficient.
When a bot initiates a transaction, it can present perfect credentials. It can behave exactly as a legitimate user would, because it has studied how legitimate users behave. Static identity attributes (username, password, device ID) were never designed to distinguish an authorised agent from an adversarial one.
This is why the PYMNTS Intelligence findings hit so hard. During high-value interactions like lending and onboarding, over 63% of firms are already encountering bot-driven threats. And the damage runs in both directions: adversarial bots slip through on one side, while legitimate customers are incorrectly blocked on the other. False-positive rates as high as 3.3% mean real customers are turned away, flagged as suspicious because they triggered the same signals as a bad bot.
The industry is losing nearly $100 billion annually, not just to fraud, but to the friction of a system that cannot tell friend from foe.
The Bot Identity Gap: Why Continuous Identity Assurance Is Now Non-Negotiable
Here is the question at the centre of this challenge, one that most enterprises haven't confronted directly.
In an agentic world, verifying the identity of a user is no longer enough. Systems must not only verify the identity of the principal; they must also ensure that the agent's actions are authorised. The outcome needs to be bound to a verified identity and action/intent.
That’s a fundamentally different problem. A verified credential at login tells you who someone claimed to be at that moment. It tells you nothing about whether the entity now acting on that identity has permission to do so, is operating within expected boundaries, or is the same entity that authenticated in the first place.
A bot can authenticate perfectly using a real user’s credentials and then act in ways that user would never authorise. By the time traditional systems flag the anomaly, the damage is done.
Identity cannot be treated as a static attribute granted once at the door. It must be verified continuously, not just who is present, but what they are doing, and whether that action is authorised, at every step.
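As a rough illustration, here is what that shift looks like in code. This is a minimal sketch, not ValidSoft's implementation or any standard protocol; every name in it (Principal, Delegation, authorise) is hypothetical. The point is where the check sits: at every action, not just at login.

```python
from dataclasses import dataclass

@dataclass
class Principal:
    """The verified human on whose behalf an agent acts."""
    user_id: str
    identity_verified: bool  # e.g. confirmed via a liveness or biometric check

@dataclass
class Action:
    kind: str      # e.g. "booking", "payment"
    amount: float

@dataclass
class Delegation:
    """What the principal has explicitly authorised the agent to do."""
    principal: Principal
    allowed_kinds: set
    max_amount: float

def authorise(delegation: Delegation, action: Action) -> bool:
    """Gate every action with three questions: is there a verified human
    principal, is the action within what they delegated, and does it
    match their authorised intent?"""
    if not delegation.principal.identity_verified:
        return False  # no verified human behind the agent: that's the threat
    if action.kind not in delegation.allowed_kinds:
        return False  # agent acting outside its mandate
    if action.amount > delegation.max_amount:
        return False  # exceeds the scope the human authorised
    return True

alice = Principal("user-123", identity_verified=True)
mandate = Delegation(alice, allowed_kinds={"booking"}, max_amount=500.0)

print(authorise(mandate, Action("booking", 240.0)))  # True: within the mandate
print(authorise(mandate, Action("payment", 240.0)))  # False: outside delegated scope
```

Because the gate runs per action rather than per session, an agent that authenticated legitimately but later drifts outside its mandate is stopped at the first out-of-scope step.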
Is It Human? Is It the Right Human? Is It the Right Outcome?
When a bot acts, it acts on someone’s behalf. That’s the critical point most identity systems miss entirely.
The bot isn’t the principal, the human behind it is. Every action a bot takes should be traceable back to a real, verified, consenting human who authorised it. If you can’t confirm that chain, you don’t have identity assurance. You have credential assurance, and credentials can be stolen, synthetic or compromised.
A legitimate bot always has a verified human behind it. If you can’t find one, you’ve found your threat. If you can’t prove it, you’re exposed.
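One way to make that chain concrete, purely as a hypothetical sketch (not a description of ValidSoft's or any vendor's protocol), is a signed delegation: the verified human signs a statement of what the agent may do, and every downstream system checks that signature before honouring an action. A real deployment would use asymmetric keys or verifiable credentials; the HMAC here just keeps the example self-contained.

```python
import hashlib
import hmac
import json

# Hypothetical: in practice this key would be bound to the principal's
# verified identity enrolment, never a hard-coded secret.
PRINCIPAL_KEY = b"per-user-signing-key"

def sign_delegation(claims: dict) -> str:
    """The human principal authorises an agent by signing its mandate."""
    payload = json.dumps(claims, sort_keys=True).encode()
    return hmac.new(PRINCIPAL_KEY, payload, hashlib.sha256).hexdigest()

def verify_delegation(claims: dict, signature: str) -> bool:
    """Any system the agent touches can trace the chain back to a human."""
    expected = sign_delegation(claims)
    return hmac.compare_digest(expected, signature)

# The agent carries the mandate and its signature with every request.
mandate = {"principal": "user-123", "agent": "travel-bot", "scope": ["booking"]}
token = sign_delegation(mandate)

assert verify_delegation(mandate, token)  # chain intact: verified human found
assert not verify_delegation({**mandate, "scope": ["payment"]}, token)  # tampered
```

If the signature does not verify, there is no provable human behind the action, which is exactly the condition described above: you've found your threat.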
This is precisely where ValidSoft operates. By continuously verifying the human principal behind every interaction, confirming they are real, that they are who they claim to be, and that the action being taken aligns with their authorised intent, ValidSoft closes the gap that every adversarial bot exploits.
Most bot detection tools ask, "Does this traffic pattern look like a bot?" ValidSoft asks, "Is this the right outcome?" The first approach is a cat-and-mouse game; adversarial bots are constantly getting better at mimicking human behaviour. The second is structurally unbeatable: you cannot fake a verified human principal who isn't there.
Not just: is there a valid credential? But: is there a verified human who authorised this action?
That distinction is what separates credential assurance from true identity assurance, and it’s the only question whose answer you can actually trust when bots are in the room.
Ready to close the gap between verified identity and authorised action/intent? Reach out to us!