Utilizing Generative AI in Credit Reporting Disputes Management: New Insights after AWS re:Invent

Last week, I attended the AWS re:Invent conference in Las Vegas. It was a great opportunity to hear about the new innovations coming to the AWS ecosystem, and the timing was ideal: Bridgeforce Data Solutions is actively working on a new product, AI Agent Assist, that utilizes new technologies from AWS.

We have already developed prototypes that show significant promise, yet we face considerable challenges and still have much to accomplish. Thankfully, some of the recent AWS product announcements appear well suited to addressing those challenges.

In this post, I want to share how we chose the tools to build AI Agent Assist, some of the challenges we have faced, and our takeaways from re:Invent.

Identifying Use Cases 

The critical first step is identifying the specific use cases where you want Generative AI to assist; this prevents you from falling into the trap of a solution looking for a problem. In developing AI Agent Assist, we started with a solid understanding of our goals. From there, we described those ideas in more detail and mapped out a rough iterative development approach.

Choosing the Technology Platform 

Once we understood our business goals, the next step was to select the appropriate technology platform. Our top priority was to ensure there would be no data leakage into any Large Language Models (LLMs) we might use. This requirement is non-negotiable: model providers have an enormous appetite for data to feed their next model iterations, and we have an absolute obligation to keep our information secure.

Moreover, we aimed to efficiently utilize our own data sources and, in time, data from our clients, and to compare performance and cost across LLMs (noting that those trade-offs can vary by use case).  

This led us to choose Amazon Bedrock, which I think of as middleware (yes, I’m dating myself by using that term) between the dozens of available LLMs and our proprietary information and code.
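To make that concrete, here is a minimal sketch of the pattern in Python using boto3’s Bedrock Runtime Converse API, which puts a single interface in front of many models. The model IDs, prompt, and inference settings below are illustrative, not our production configuration:

```python
import boto3

# Bedrock Runtime exposes a single Converse API across model providers,
# so swapping LLMs is a one-line change. The model IDs below are
# illustrative; check which models are enabled in your account and region.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_IDS = [
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    "amazon.titan-text-express-v1",
]

def ask(model_id: str, question: str) -> str:
    """Send one user turn to the given model and return the text reply."""
    response = bedrock.converse(
        modelId=model_id,
        messages=[{"role": "user", "content": [{"text": question}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

# The same prompt can be run across models to compare quality and cost.
for model_id in MODEL_IDS:
    print(model_id, "->", ask(model_id, "Summarize this dispute in two sentences."))
```

Because the call shape is identical across providers, comparing performance and cost per use case becomes a loop over model IDs rather than a rewrite.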

Working with AWS 

We started off strong with excellent support from the AWS team. We engaged in an exercise that AWS calls an Experience Based Accelerator (EBA), which required several weeks of preparation followed by three days onsite at their East Coast headquarters to kick-start our initiatives. During this three-day session, we built multiple prototypes, had one of our clients join us for feedback, and generated momentum for our ongoing efforts.  

This fast start validated our belief that the use cases we envisioned were achievable and demonstrated that these tools could deliver impressive results. With the right set of instructions, they can significantly enhance the retrieval and presentation of information, doing so much faster and more comprehensively than a human could accomplish alone. However, like any good innovation effort, we faced some challenges.  

Challenges 

Establishing a Hierarchy of AI Agents: Relying on a single AI agent to handle too many tasks could slow down processes or lead to failures. While we could utilize different “action groups” for an agent, we ideally wanted a more structured hierarchy than what was initially available.  
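To illustrate the shape we were after, here is a simplified, framework-agnostic sketch of a supervisor delegating to specialist agents. This is a conceptual illustration, not Bedrock’s agent API; the agent names and routing rule are hypothetical stand-ins:

```python
from typing import Callable, Dict

# A sketch of the hierarchy we wanted: a supervisor classifies each request
# and delegates to one focused specialist, rather than a single agent
# juggling every action group. In practice, each specialist would wrap its
# own LLM-backed agent, and the routing decision would itself be an LLM call.

def dispute_status_agent(request: str) -> str:
    return f"[status agent] looked up the status for: {request}"

def document_summary_agent(request: str) -> str:
    return f"[summary agent] summarized the documents for: {request}"

SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "status": dispute_status_agent,
    "summary": document_summary_agent,
}

def supervisor(request: str) -> str:
    """Route each request to one specialist instead of one overloaded agent."""
    topic = "status" if "status" in request.lower() else "summary"
    return SPECIALISTS[topic](request)

print(supervisor("What is the status of dispute #12345?"))
```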

Realistic Response Times: Waiting for generative AI responses can sometimes feel like waiting for a web page to load in the dial-up era. Some waiting is acceptable when the results are worthwhile, but response times weigh heavily in which use cases we select and how we design for them.
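One common mitigation is streaming: it does not shorten total generation time, but the first tokens arrive almost immediately, so the user starts reading while the model is still writing. Here is a minimal sketch using Bedrock’s ConverseStream API, with an illustrative model ID and prompt:

```python
import boto3

# Streaming improves perceived latency: chunks print as they are generated
# instead of arriving all at once after the full response is complete.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

stream = bedrock.converse_stream(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
    messages=[{"role": "user",
               "content": [{"text": "Summarize this dispute history."}]}],
)

for event in stream["stream"]:
    # Each contentBlockDelta event carries the next chunk of generated text.
    if "contentBlockDelta" in event:
        print(event["contentBlockDelta"]["delta"]["text"], end="", flush=True)
```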

Challenges with LLMs and Structured Data: LLMs often struggle to query structured data sets efficiently and accurately. Their probabilistic nature makes it difficult to achieve the consistency and efficiency we need when an AI agent retrieves information from a data table.
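A pattern that helps here, and one many teams use, is to keep the model out of the data path: the LLM translates the question into SQL, and the query itself runs deterministically against the table. Below is a sketch with an in-memory SQLite table; the schema, data, and the question_to_sql stub are all hypothetical:

```python
import sqlite3

# The LLM maps a question to SQL; the query runs deterministically, so the
# model never has to "read" rows itself and the answer is consistent.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE disputes (id INTEGER, bureau TEXT, status TEXT, opened TEXT)"
)
conn.executemany(
    "INSERT INTO disputes VALUES (?, ?, ?, ?)",
    [(1, "Equifax", "open", "2024-11-01"), (2, "Experian", "closed", "2024-10-15")],
)

def question_to_sql(question: str) -> str:
    """Stand-in for an LLM call that maps a question to SQL. In production,
    validate the generated SQL (read-only, known tables) before running it."""
    return "SELECT COUNT(*) FROM disputes WHERE status = 'open'"

count = conn.execute(question_to_sql("How many disputes are open?")).fetchone()[0]
print(count)  # deterministic: the same question yields the same answer every run
```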

AWS re:Invent  

Attending the AWS re:Invent conference last week reinforced our current efforts and included some positive announcements. Although the use cases and approaches varied among presenters, we found consistent themes that aligned with our internal experiences. After learning about the response times that others were experiencing, I felt better about what we are seeing with our prototypes.  

AWS also announced new features in Bedrock specifically designed to enhance multi-agent collaboration, improve interactions with structured data, and reduce response times. As we embrace these new features in the coming weeks, the challenges we’ve encountered should become easier to address.  

We look forward to further testing the possibilities that generative AI can provide and are excited to bring this new product to market in 2025!  

