Responsibly Using Large Language Models in Auto Lending

The adoption of generative AI and large language models (LLMs) has been truly astonishing. ChatGPT, built by OpenAI, reached over 100 million users in January 2023, making it the fastest-growing consumer product in history. By comparison, TikTok took nine months to reach 100 million users, and Instagram took two and a half years to hit that milestone.

With the excitement surrounding generative AI and LLMs, it’s likely that your company is already experimenting with the technology. And now that LLMs are widely available, auto lenders are seeing viable use cases. LLMs can:

  • improve customer service
  • streamline onboarding
  • support more personalized advertising
  • aid in digitizing documents
  • provide real-time intelligence on the use of ancillary products
  • detect fraud

They can also enhance internal processes like computer programming, model design, and strategic planning. More broadly, LLMs have great potential for quickly synthesizing insights from multiple data sources. 

As an AI-driven technology company serving the auto lending industry, Informed understands this excitement from a deep technical perspective. We are actively engaged in R&D to deliver the extraordinary service improvements these technologies can offer our customers.

Risk and Governance of Large Language Models

We need to temper this excitement, however, with a clear-eyed recognition of the risks. LLM hallucinations are real, and so are the biases built into these models. Just because a model responds to a prompt with an answer does not mean the answer is correct. In a field like auto lending, where decisions have consequential impacts on consumers’ lives, hallucinated or biased answers are unacceptable. As a result, it’s important to develop a framework for their proper use.
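
As a small illustration of one hallucination guardrail in a document-digitization workflow, consider the minimal sketch below. It assumes a hypothetical `extract_fields_with_llm` wrapper around your LLM; any extracted value that cannot be located in the source document is flagged for human review rather than accepted at face value.

```python
# Minimal sketch of a grounding check for LLM document extraction.
# `extract_fields_with_llm` is a hypothetical wrapper around your LLM call
# that returns a dict of field names to extracted values.

def grounded_extract(document_text: str, extract_fields_with_llm) -> dict:
    fields = extract_fields_with_llm(document_text)  # e.g. {"apr": "6.99%"}
    haystack = document_text.lower()
    verified, needs_review = {}, {}
    for name, value in fields.items():
        if value and str(value).lower() in haystack:
            verified[name] = value
        else:
            # Value not found in the source document: possible hallucination,
            # so route it to human review instead of accepting it.
            needs_review[name] = value
    return {"verified": verified, "needs_review": needs_review}
```

A check like this is deliberately conservative: it only accepts values it can trace back to the source document, which trades some automation for reliability.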

Fortunately, while LLMs are new, model governance is not. For well over a decade, the Office of the Comptroller of the Currency (OCC) and the Federal Reserve’s SR 11-7 guidance on model risk management has set the gold standard for assessing the reliability of models, and it remains relevant today.

As SR 11-7 highlights, model risk management still begins with (1) robust model development, implementation, and use; (2) a sound model validation process; and (3) governance. The guidance applies where your organization has some control over the data inputs and constraints governing your LLM. It’s also relevant when reviewing the use of off-the-shelf models like ChatGPT, which could be considered a third-party service provider.

Here is relevant guidance from SR 11-7 to consider if your company (or a third-party vendor) is proposing to deploy an LLM.

  • Understand the model assumptions: What are the key assumptions in the model?
  • Check for bias: Do you understand the data inputs to the LLM? If no documentation on the inputs exists, have you or the vendor tested the outputs for bias?
  • Make sure the model outputs have been and will continue to be tested: How was the model tested? Have the model’s accuracy, stability, and robustness been checked over a range of input values? (See the sketch after this list.)
  • Validate on an ongoing basis: Does the company have an ongoing validation process even after the model is in use? Has your internal team or your vendor supplied testing results showing that the product works as expected?
  • Understand the model constraints: Have the model’s limitations been assessed?
  • Confirm the model works on your data and in your environment: Have you validated the vendor’s model performance using your own data?
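
To make the testing and bias items concrete, here is a minimal sketch of an automated output check. It assumes a hypothetical `model_approve(application)` wrapper around your LLM and a labeled holdout set; it measures accuracy and compares approval rates across groups, a basic parity check meant to flag issues for deeper fair-lending review, not to replace one.

```python
from collections import defaultdict

def evaluate_llm_decisions(model_approve, holdout):
    """Sketch of periodic output testing: accuracy plus an approval-rate
    parity check. `model_approve(application) -> bool` is a hypothetical
    wrapper around the LLM; `holdout` is a labeled test set of dicts with
    "application", "label" (True = should approve), and "group" keys."""
    correct = 0
    by_group = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for row in holdout:
        decision = bool(model_approve(row["application"]))
        correct += decision == row["label"]
        by_group[row["group"]][0] += decision
        by_group[row["group"]][1] += 1
    rates = {g: approved / total for g, (approved, total) in by_group.items()}
    return {
        "accuracy": correct / len(holdout),
        "approval_rates": rates,
        # A large gap in approval rates across groups is a red flag
        # warranting deeper fair-lending analysis.
        "parity_gap": max(rates.values()) - min(rates.values()),
    }
```

Running a check like this on a schedule, and again after any model or prompt change, is one way to operationalize the ongoing-validation item above.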

For a hint of how regulators will look at LLMs, I commend to you the Consumer Financial Protection Bureau (CFPB) guidance on “Black-Box Credit Models Using Complex Algorithms,” issued in May 2022. The CFPB Circular holds that “federal consumer financial protection laws are enforceable, regardless of the technology used by the creditor.” As Director Rohit Chopra stated, “companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions… The law gives every applicant the right to a specific explanation if their application for credit was denied, and that right is not diminished simply because a company uses a complex algorithm that it doesn’t understand.”

What is true for complex deep-learning algorithms is also true for LLMs. Regulators will not accept a lack of understanding of LLMs as an excuse for noncompliance. In some ways, LLMs pose even greater validation challenges because, unlike conventional black-box credit models, they are not trained on data tailored to a specific use case.

In sum, LLMs provide our industry with an extraordinary opportunity. We need to pair that opportunity with responsible model and data governance.

An earlier version of this article appeared on Auto Finance News.

Tom Oscherwitz is Informed’s VP of Legal and Regulatory Advisor.  He has over 25 years of experience as a senior government regulator (CFPB, U.S. Senate) and as a fintech legal executive working at the intersection of consumer data, analytics, and regulatory policy.
