AI Cometh for the Tax Man


As the IRS confronts cuts and restructuring, fast-moving AI identity fraud is a growing threat. Can the agency meet the challenge? 

The Internal Revenue Service (IRS) is at something of an existential crossroads. More than 22,000 IRS workers have signed the Trump Administration’s deferred resignation offer, which allows employees to go on leave with pay until September 30 in exchange for their departure. The IRS’s acting commissioner, Melanie Krause, joined that list last week following the agency’s announced partnership with Immigration and Customs Enforcement (ICE), which allows ICE to access taxpaying immigrants’ personal information and pursue their deportation.

Amid this marked political shift, IRS Criminal Investigation last month announced the launch of Feedback in Response to Strategic Threat (CI-FIRST), which aims to enhance banks’ capacity to detect and report financial crimes by simplifying subpoena requests and improving data sharing between the agency and financial institutions. In addition to general law-enforcement goals — fighting human trafficking, drug smuggling, and other criminal activity — the IRS says its use of Bank Secrecy Act data helped recover more than $1.4 billion to make crime victims whole between 2022 and 2024, and uncovered $21.1 billion in tax and financial crimes during that period.

In the eyes of Pat Kinsel, founder and CEO of identity-verification company Proof, the IRS is right to invest in new tactics to fight fraud — but the main issue with CI-FIRST is that it relies on suspicious activity reports (SARs), which are fundamentally reactive. Most financial institutions file SARs only after transactions have been processed; that is helpful in an investigation, but doesn’t actually prevent fraud.

“I think we need to change our posture from being default-permissive and trying to catch bad actors to the inverse, and saying that we actually need people to affirmatively prove who they are when they do transactions,” Kinsel told Fintech Nexus.

Proof, formerly called Notarize, has interfaced regularly with lawmakers on both sides of the aisle to emphasize how new technologies can exacerbate financial crime in its various forms. Criminals have used artificial-intelligence (AI) tools to fraudulently apply for refinancing or quitclaim deeds on behalf of unsuspecting homeowners — a serious threat to the stability of homeownership absent new safeguards. Systemically, AI-generated content contributed to $12 billion in US fraud losses in 2023, a figure that, according to the Financial Times, could reach $40 billion by 2027.

“We can all recognize that we can’t live in a world where you can perfectly pretend to be someone else,” Kinsel said. “This presents a huge series of problems.” 

Kinsel proposes the proactive use of identity-verification technology as one way to thwart a subset of financial crime, and believes payments networks like Visa have created modern fraud-prevention frameworks that preserve privacy through tokenized identity systems and other tools.

But these solutions are not without consequences of their own, Kinsel admits. Injecting more rigorous identity-verification steps into the taxpaying process, for example, risks dissuading documented and undocumented immigrants, among other individuals with acute privacy and surveillance concerns, from participating in a range of civic and tax-generating exercises. 

Research by the Institute on Taxation and Economic Policy, for instance, suggests that even a 10% decrease in the number of undocumented immigrants paying their taxes could lead to a $9.5 billion drop in tax revenue; a 41% decline would lead to losses equal to the $40 billion in AI-generated fraud anticipated by 2027.

The use of IRS data and the alleged deployment of AI for immigration enforcement are likely to encounter resistance in the courts — as have a range of immigration-related cases over the past several weeks, threatening a potential constitutional crisis. Even so, existing law requires public services to offer in-person, human-provided verification options, which Kinsel thinks can enable government to meet its “obligation … to serve all people.” Digitized tools, he asserted, let systems devote the brunt of their resources to edge cases, improving accessibility over the long run.

Furthermore, Kinsel suggested private-sector identity solutions have an open lane: the federal government is unlikely to create its own digital identity framework due to states’-rights concerns. Meanwhile, a range of existing regulations already require the capacity to accept digital identities and signatures as a way to safeguard financial processes and fight the kinds of crimes CI-FIRST hopes to thwart — including IAL2 e-signature requirements at the Small Business Administration (SBA). Those laws just haven’t been enforced yet.
