For an industry that prides itself on being sophisticated in its use of data and advanced analytics, adoption of new credit scoring systems is a slow and expensive process. According to VantageScore LLC, three recent, independently conducted surveys (http://www.vantagescore.com/news-story/140) suggest that slightly more than 50% of the lenders interviewed would consider switching from their existing credit scoring system to one proven to be more predictive. Given the importance of properly assessing consumer credit risk, why weren't 100% of the respondents willing to consider switching to a more predictive score?

U.S. credit bureaus continue to innovate but lenders are slow to adopt.

Since the mid-1990s, each of the three largest U.S. credit reporting agencies has introduced credit risk models that outperformed more popular models, yet these models saw little user adoption. Why? One reason is that many lenders had an irrational reluctance to use a different credit model from each CRA. Their reluctance was premised on the fact that the models offered by each CRA were designed using different performance definitions, performance windows, score scaling and development teams, and that multiple CRA- and score-specific decision strategies would be required for automated account management and underwriting decisions. This belief was irrational because the generic credit-bureau-based models they were already using were not the same across each credit bureau. Each model version was built on varying levels of credit information content and quality, different credit characteristic definitions, different performance windows and different good/bad performance relationships. The only commonalities across the models they were using were the name of the model development company and similar, though not identical, score ranges. This irrational reluctance to adopt innovation made for less efficient risk management practices and discouraged credit bureau innovation.

A migration from an emphasis on nominal score values to inherent risk assessment is required.

When generic credit bureau models were first introduced in the U.S. (non-U.S. credit bureaus planning to introduce generic credit scores, please take note), lending strategies were programmed around score values instead of the inherent risk associated with a particular score. For instance, in an account origination scenario, lenders programmed automated underwriting systems to approve applicants with a credit score above a specific value (e.g., 679), rather than on the risk associated with that score (e.g., a probability of default below 25%). As the use of generic credit-bureau-based risk models spread, federal regulatory agencies wrote regulations around score cut-offs tied to the specific credit risk models lenders used, rather than around risk levels. These regulations were mistakenly perceived by lenders as an endorsement of specific credit scoring systems, which created an unintended barrier to the adoption of new, more predictive generic scores. Federal regulatory agencies have since retracted and modified their regulations to emphasize inherent risk assessment instead of specific score values (http://www.vantagescore.com/regulators), and migration to new credit scoring systems has begun, but the process is painstakingly slow and expensive because automated credit risk management systems are still programmed around a specific score threshold, not a credit risk threshold.
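To make the distinction concrete, here is a minimal sketch (in Python, purely for illustration) contrasting the two styles of decision logic. The score-to-risk lookup table, the 680 cut-off and the 25% probability-of-default ceiling are hypothetical figures echoing the examples above, not any bureau's actual scaling.

    # Hypothetical table mapping score bands to an observed probability of default (PD).
    SCORE_BAND_PD = {
        (300, 579): 0.45,
        (580, 679): 0.25,
        (680, 739): 0.10,
        (740, 850): 0.03,
    }

    def approve_by_score(score: int, cutoff: int = 680) -> bool:
        """Score-threshold logic: hard-wired to one model's nominal scale."""
        return score >= cutoff

    def pd_for_score(score: int) -> float:
        """Translate a nominal score into the risk (PD) it represents."""
        for (low, high), pd in SCORE_BAND_PD.items():
            if low <= score <= high:
                return pd
        raise ValueError("score outside supported range")

    def approve_by_risk(pd: float, max_pd: float = 0.25) -> bool:
        """Risk-threshold logic: independent of any model's score range."""
        return pd < max_pd

    score = 700
    print(approve_by_score(score))               # tied to the 300-850 scale
    print(approve_by_risk(pd_for_score(score)))  # survives a change of scoring model

Only the translation step (pd_for_score in this sketch) would need to change when a new or more predictive model is adopted; the risk-threshold policy itself stays put.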

Credit bureaus can take the lead in deemphasizing scores and converting decisions to risk-based thresholds.

Because the practice of basing underwriting and account management strategy logic on arbitrary score values is so entrenched in the U.S., and possibly in other countries with established credit bureaus, there may be little that established credit bureaus can do to migrate lenders toward risk-based score thresholds. It may not be too late, however, for credit bureaus entering a country that does not yet have an established credit bureau or credit-bureau-based scoring system. For instance, in the U.S. during the mid-1990s, Experian (then named TRW) introduced the National Risk Model, which returned a score on a 1-999 range in which the score represented the interval probability of default. Because the National Risk Model returned the probability of default rather than an arbitrary score, lenders that programmed their underwriting and account management strategies around it were better able to adapt to and implement updated versions of the model, or competing models offered by another CRA, because their strategies were keyed to the inherent risk associated with the consumer, not the nominal score value returned on a consumer's credit report. Unfortunately, by the time this model was introduced, most lenders had just invested a great deal of effort to program, and in some instances reprogram, their various decision support platforms to accommodate the first generation of generic credit bureau models, and they were neither interested in nor willing to repeat this arduous exercise to accommodate another model from a single U.S. credit bureau.

Credit bureaus introducing their first credit bureau risk model may want to deviate from the norm

Credit bureaus introducing their first credit-bureau-based scoring system may want to consider a model in which the score represents the probability of default. The benefit to the CRA and to lenders is that, as updated versions of the model are released (which will be required as data assets improve and competition increases), lenders already accustomed to a score based on probability of default will be more open to switching to a new model version than lenders whose strategies are built around a nominal score range. The downside is that it will also be easier for a competing credit bureau to entice lenders to convert by offering a model that reports a similar, directly comparable value.
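As a hedged illustration of why a probability-of-default score lowers switching costs, the Python sketch below assumes two hypothetical model versions that both report risk as a probability of default; the model names, formulas and the 25% ceiling are invented for the example, not real products.

    from typing import Protocol

    class DefaultRiskModel(Protocol):
        def probability_of_default(self, credit_file: dict) -> float: ...

    class ModelV1:
        """First-generation model (toy logic: more delinquencies -> higher PD)."""
        def probability_of_default(self, credit_file: dict) -> float:
            return min(0.95, 0.05 + 0.10 * credit_file.get("delinquencies", 0))

    class ModelV2:
        """A newer, hypothetically more predictive version with the same interface."""
        def probability_of_default(self, credit_file: dict) -> float:
            return min(0.95, 0.03 + 0.08 * credit_file.get("delinquencies", 0)
                       + 0.02 * credit_file.get("recent_inquiries", 0))

    def underwrite(model: DefaultRiskModel, credit_file: dict, max_pd: float = 0.25) -> bool:
        """The policy depends only on PD, so upgrading the model needs no reprogramming."""
        return model.probability_of_default(credit_file) < max_pd

    applicant = {"delinquencies": 1, "recent_inquiries": 2}
    print(underwrite(ModelV1(), applicant))  # same policy...
    print(underwrite(ModelV2(), applicant))  # ...different model version

Had the underwriting rule been keyed to a nominal score cut-off instead, every new model version, or every competing bureau's model, would require re-deriving and re-coding that cut-off across each decision platform.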

Data quality and product innovation ultimately will determine market share

While having predictive models is important, ultimately the meaning of a score will have little consequence in determining a credit bureau's market share. The key to long-term success resides in a credit bureau's ability to expand and improve the quality of its database. As credit bureaus expand their databases to include alternative data, models that return a nominal score will continue to be an impediment to innovation and improved risk evaluation. To paraphrase Rupert Murdoch, 'The (information) world is changing very fast. Big will not beat small anymore. It will be the fast beating the slow.' Those credit bureaus that innovate and can quickly accommodate customer adoption will prosper.

About the Author: Chet Wiermanski is one of BIIA's contributing editors, writing on the subjects of credit scoring and decision systems. He is a Visiting Scholar at the Federal Reserve Bank of Philadelphia, researching new applications of consumer credit report information. Additionally, Chet is Managing Director of Aether Analytics, which specializes in leveraging hidden data sequences and time series components within consumer credit information typically ignored by traditional credit-bureau-based solutions. Previously, Chet was Global Chief Scientist at TransUnion LLC. During his tenure at TransUnion, from July 1997 to February 2012, he held a variety of positions and was responsible for identifying, evaluating and developing new technology platforms involving alternative data sources, predictive modeling, econometric forecasting and related consulting services.