Industry Expertise | Why Most AI Automation Stalls—And a Proven Method to Get Results

“Industry Expertise” is sponsored content produced by AFSA’s Business Partners to provide thought leadership and best practices for AFSA member companies. For more information about this sponsored content opportunity, contact Dan Bucherer.
Fuse: Why most AI automation stalls—and a proven method to get results
The pattern is familiar. A lender purchases an AI document-reading tool. The team validates that it extracts data correctly. Then, nothing changes. Underwriters continue opening every PDF, running the same checklists, performing the same manual reviews they did before the AI existed. One past error (or simple habit) is enough to keep the old process alive.
The result is predictable: more work and cost, not less. The AI runs in the background while the team works around it. The tool adds a step instead of removing one. In fact, an MIT study of over 300 AI initiatives found that about 95% of AI pilots fail to deliver measurable results. For most auto lending executives, that number is not surprising; many have lived it firsthand.
The problem is rarely the technology. Most tools work as advertised. The problem is that AI gets layered on top of a process that was never redesigned to accommodate it. Staff distrust the results based on anecdotal evidence, remain personally responsible for errors, and were never trained on a new workflow. So, the old process survives.
Fuse has spent the past year working on this challenge with auto lenders of all sizes. The method that consistently delivers real operational savings comes down to four steps.
Step 1: Give the AI the same instructions your team follows. Not a simplified version; the real rules, the real exceptions. If your underwriters check that a driver’s license matches the application name and is not expired, the AI should follow the same logic.
Step 2: Train the AI to produce better-than-human results. This means iterating on the prompts and rules until accuracy exceeds your human baseline. With one client, we drove error rates from 4% down to 0.3% over five iterations, well below the human error rate.
Step 3: Remove the manual review from the process. This is the step most organizations skip. If the AI is more accurate than your team, the duplicate check is not safety; it is waste. Remove the document from the processing queue entirely. This is also the most important step for removing personal liability from staff and retraining them on the new workflow.
Step 4: QA results in a separate process. Either set up a second AI to audit 100% of outputs automatically or have a QA team review a 10% sample. This replaces the old in-line review with a structured, independent check.
One mid-sized auto lender processing over 50,000 applications per month had an AI auto-processing rate stuck around 4% for five consecutive months. Six weeks after restructuring the workflow using these four steps, they reached 82%.
The technology did not change. The process did.
If your AI initiative is underperforming, the first question is not whether you need a better tool. It is whether you changed the process around it. The lenders seeing results today are not the ones with the most advanced technology. They are the ones willing to redesign the workflow—and make the leadership decisions that come with it.
Fuse helps auto lenders move AI automation from pilot to production. The company works with auto lenders across the U.S. to redesign lending workflows around AI—delivering measurable efficiency gains in document processing, underwriting, and loan decisioning. To learn more, visit fusefinance.com.
March 11th, 2026