AI Finds the Cracks Between Login and Payment
Artificial intelligence is rendering traditional digital identity verification toothless.
The next wave of AI-powered cyberattacks is already here, and it is arriving with synthetic faces, voices, devices and behavioral signals that can mimic legitimate users down to keystroke cadence and device posture.
“When you can do fake face, voice and normal behavior in one motion, it tests the processes and can expose gaps in many organizations’ defenses,” Zac Cohen, chief product officer at Trulioo, told PYMNTS during a discussion for the March edition of the “What’s Next In Payments” series, “How Will AI Change Identity?”
“Point solutions will always fail against a multidimensional attack,” Cohen said.
In the escalating contest between AI-powered fraudsters and the companies trying to stop them, the baseline has become everything.
“Fraud will look to exploit the gaps between onboarding, login and transaction monitoring,” Cohen said.
The biggest change companies can make, he added, is shifting from point-in-time verification to continuous, contextual trust: a baseline built atop identity systems that connect signals over time and across teams.
Rise of Multidimensional AI Attacks
Over the past year, identity systems have faced a surge in AI-powered attacks. Attackers are now using automated bots and AI agents that can adapt in real time to probe for weaknesses across voice systems, login flows and behavioral checks.
“The biggest piece is the expanded scope and scale,” Cohen explained. “We’re seeing a lot of automated bots and agents that are infiltrating a wider range of identity processes, whether that’s voice spoofing or otherwise.”
Asked whether fake faces, fake voices or fake “normal” behavior pose the biggest threat, Cohen didn’t hesitate. “It’s funny, I don’t really worry about one more than the other,” he said. “What I worry about is really the ability to use them in concert together.”
Today’s danger is not a deepfaked selfie or a cloned voice in isolation but the synchronized deployment of synthetic signals across multiple checkpoints in a single flow. This reality poses a problem for security stacks built as disconnected layers with one vendor for document verification, another for bot detection, another for behavioral analytics.
These existing defense tools may struggle to respond coherently when adversaries operate across them simultaneously.
From Moment in Time to Context Over Time
For years, digital identity has revolved around discrete gates: onboarding, login, transaction approval. Pass the test, move forward. Fail it, stop. But that architecture assumed a stable adversary and a static identity.
“We used to just block a bot,” Cohen said. “That’s really not the mechanism anymore.”
Today’s bots can be trained to look human in a single session. Meanwhile, real customers sometimes behave in unusual ways. The answer lies in comparing short-term activity with long-term patterns. Fraud appears in the gap between the moment and the baseline.
“The signal that emerges when you start comparing the live interactions against a customer’s longer-term history, that’s when you start catching fraud,” Cohen said.
“Instead of asking, ‘Did you pass that one test?’ the model really shifts to, ‘Does your behavior, device or network look consistent and does it make sense altogether?’” he added, noting that the future of identity-driven security and authorization is likely to lie in layers.
Device binding, behavioral baselining, risk-based step-up authentication and continuous monitoring must work together. The goal of an identity solution shifts from passing one test to establishing ongoing trust.
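The baseline comparison Cohen describes can be illustrated with a minimal sketch: score a live session by how far it drifts from a customer's long-term profile. All names, signals and thresholds here are hypothetical illustrations, not Trulioo's implementation.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Signals observed in a single live session (hypothetical examples)."""
    typing_speed_cpm: float  # keystroke cadence, characters per minute
    device_id: str
    network_asn: str

@dataclass
class CustomerBaseline:
    """Long-term profile accumulated across the customer's history."""
    avg_typing_speed_cpm: float
    known_devices: set
    known_asns: set

def risk_score(live: SessionSignals, baseline: CustomerBaseline) -> float:
    """Compare the live session against the baseline; higher = more anomalous."""
    score = 0.0
    # Behavioral drift: cadence far outside the customer's typical range.
    drift = abs(live.typing_speed_cpm - baseline.avg_typing_speed_cpm)
    if drift > 0.5 * baseline.avg_typing_speed_cpm:
        score += 0.4
    # Device the customer has never used before.
    if live.device_id not in baseline.known_devices:
        score += 0.3
    # Network the customer has never connected from.
    if live.network_asn not in baseline.known_asns:
        score += 0.3
    return score
```

The point of the sketch is that no single check decides the outcome; fraud surfaces in the gap between the moment and the baseline, across layers at once.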
Continuous and Contextual Trust
The rise of AI agents adds another layer of complexity. As consumers begin to use digital assistants to complete transactions, identity systems must verify more than a person’s presence. Identity becomes tied to delegated authority.
“You need to verify not only the human, but the AI agent that’s acting on their behalf,” Cohen said. That includes confirming the agent is registered, cryptographically bound, and operating within permitted boundaries.
This introduces what Cohen called a “know your agent” requirement. Companies must validate the human, the agent, and the alignment between them.
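One way to picture the three legs of that check is a delegation token signed by the user's credential: confirm the agent is registered, the token is cryptographically bound, and the requested action sits inside the permitted scope. This is a simplified sketch under assumed names (the registry, token fields and secret are all hypothetical), not a description of any vendor's protocol.

```python
import hashlib
import hmac
import json

REGISTERED_AGENTS = {"agent-123"}  # hypothetical registry of known AI agents

def verify_delegation(token: dict, signature: str, secret: bytes,
                      requested_action: str) -> bool:
    """Check all three legs of 'know your agent':
    1. the agent is registered,
    2. the delegation is cryptographically bound to the user's secret,
    3. the requested action falls within the permitted boundaries."""
    if token["agent_id"] not in REGISTERED_AGENTS:
        return False
    expected = hmac.new(secret, json.dumps(token, sort_keys=True).encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False
    return requested_action in token["allowed_actions"]
```

A tampered token fails the HMAC check, an unknown agent fails the registry check, and an out-of-scope action fails the boundary check, so validating the human, the agent and the alignment between them happens in one pass.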
His recommendation is to “escalate with precision, not blanket friction.”
“The key is matching that control to the uncertainty level, instead of forcing every customer through the longest path possible,” Cohen said, noting that when risk rises, businesses can increase controls in proportion to the uncertainty. They should also explain why the step-up is happening.
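Escalating with precision rather than blanket friction amounts to mapping the uncertainty level to a proportionate control, and telling the customer why. The tiers and thresholds below are illustrative assumptions, not a prescribed policy.

```python
def step_up(risk: float) -> tuple:
    """Match control strength to uncertainty instead of blanket friction.
    Returns (control, reason shown to the customer)."""
    if risk < 0.3:
        return ("allow", "")  # low uncertainty: no added friction
    if risk < 0.7:
        return ("otp_challenge",
                "We noticed a new device, so we're sending a one-time code.")
    return ("document_recheck",
            "This activity looks unusual, so we need to re-verify your ID.")
```

Only the riskiest sessions get routed down the longest path; everyone else moves through with friction proportional to the doubt.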
Identity verification is changing quickly. AI can undermine defenses, but it can also strengthen them.
Cohen’s own prescription for the year ahead was deceptively simple: continue working to connect context and continuity.
“You can’t do one well without the other,” he said. “You can’t have the right context unless you’re looking continuously over time.”
Companies that can connect context with continuity, and authorization with intent, may be better positioned to navigate a new AI-defined era of digital trust.
The post AI Finds the Cracks Between Login and Payment appeared first on PYMNTS.com.