The Problem We Found With Machine Learning


Last week our underwriting team turned down a batch of loans. Too many delinquencies, not enough time in business, debt service coverage too low, loan-to-value (LTV) too high, and cash flow of insufficient quality - all good reasons, all normal conclusions. As we have been chronicling on our blog (HERE, for example), we have been experimenting with artificial intelligence, and last week our machine learning applications turned down their share of loans as well. The interesting part is that they were not the same loans. The disturbing part is that for some turndowns, we have no idea why. Turndowns without a stated reason are a problem and present a major speed bump on our path to utilizing machine learning.

The Difference Between Human and Machine Underwriting

For some of the loans, the machine didn’t care about time in business or LTV. One thing we have learned from our robotic bankers is that many weak predictors of credit, taken together, are often more accurate than a single strong predictor such as time in business or LTV.

When it comes to credit, many weak predictors are often more accurate than a strong predictor
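To make that concrete, here is a minimal sketch, on synthetic data rather than our actual model, of why thirty faint signals can beat one strong one. The feature counts, effect sizes, and scikit-learn setup are all illustrative assumptions.

```python
# Sketch: many weak predictors vs. one strong predictor (synthetic data).
# All names and effect sizes are illustrative assumptions, not our model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# One "strong" signal (think time in business) and 30 "weak" ones
# (think occupancy patterns, daily room rates, local sales velocity).
strong = rng.normal(size=n)
weak = rng.normal(size=(n, 30))

# True default risk leans a little on every weak signal; individually
# faint, collectively they carry more information than the strong one.
risk = 0.8 * strong + weak @ np.full(30, 0.25) + rng.normal(size=n)
default = (risk > np.quantile(risk, 0.85)).astype(int)  # ~15% bad loans

Xs_tr, Xs_te, Xw_tr, Xw_te, y_tr, y_te = train_test_split(
    strong.reshape(-1, 1), weak, default, test_size=0.3, random_state=0)

auc_strong = roc_auc_score(
    y_te, LogisticRegression().fit(Xs_tr, y_tr).predict_proba(Xs_te)[:, 1])
auc_weak = roc_auc_score(
    y_te, LogisticRegression(max_iter=1000).fit(Xw_tr, y_tr).predict_proba(Xw_te)[:, 1])

print(f"AUC, one strong predictor: {auc_strong:.3f}")
print(f"AUC, 30 weak predictors:   {auc_weak:.3f}")
```

On this synthetic setup, the thirty weak predictors post the higher AUC, which is the pattern our robotic bankers kept showing us.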

Now, we are not ready to say which method is more accurate, but that day will come. However, we will say that when that day comes, the machine will be smarter still. The underwriter, we are not so sure. An underwriter might understand that hospitality is risky, but he or she will miss the nuances of how a combination of occupancy patterns and average daily room rates predicts future problems. That is a huge advantage machines have over human underwriting - they not only learn the lessons of history, they internalize them 100%.

Machines not only learn the lessons of history, they internalize them 100%

The problem arises when you ask a machine why it didn’t approve the loan. While a human can tell you, the machine cannot. The truth is that while you train the machine on your data and start off with a known predictive model, the neural network then takes over and keeps learning. What it learns, unfortunately, is not clear. When it comes to machine learning, there is a lack of transparency and accountability. This not only makes credit folks and management uneasy but is likely a non-starter with the regulators, to say nothing of compliance.

The Child Becomes Smarter Than The Parent

Just like a child becomes smarter than their parents, the same thing happens with neural networks. We have no idea how our kid learned what a “Z Bingo U Split” is in football, but he picked it up somewhere, and he was accurate. He couldn’t say where he learned to recognize that play in professional football, which is pretty much what our machine learning credit application said when we asked why it turned down a loan that our credit folks approved.

Luckily, we can feed the same loan back in with one variable changed at a time and see what happens.
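For readers curious what that looks like, below is a rough sketch of the probe. The `model` object, its `score` method, and the field names are hypothetical stand-ins; the technique is simply re-scoring the same application while nudging one input at a time.

```python
# Sketch of the probe: re-score the same loan while bumping one input
# at a time and rank the inputs by how far the score moves.
# `model.score(loan)` is a hypothetical stand-in for a real scoring call.
import copy

loan = {
    "ltv": 0.78,
    "time_in_business_yrs": 4.0,
    "median_income_change": -0.02,
    "property_sale_frequency": 3.1,
    "comp_sales_multiple_change": 0.04,
}

def probe(model, loan, bump=0.10):
    """Return inputs ranked by how much a 10% bump moves the score."""
    base = model.score(loan)          # assumed: higher score = approve
    moves = {}
    for field, value in loan.items():
        tweaked = copy.copy(loan)     # shallow copy is fine for a flat dict
        tweaked[field] = value * (1 + bump)
        moves[field] = model.score(tweaked) - base
    return sorted(moves.items(), key=lambda kv: abs(kv[1]), reverse=True)
```

Run the same loan through a few hundred tweaks like this and the variables doing the real work start to surface, which is how we arrived at the answer below.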

As near as we could tell, our robotic underwriter had learned to pick up on a combination of median income change, frequency of property sales, and the rate of change in comparable sales multiples. That is a combination of three factors that we are fairly confident is not in any bank’s existing credit model. We had no hand in building those factors into the initial model, so we have little explanation of how or why the application arrived at those factors, that combination, or those weightings.

The ironic part is that in the three days it took to reverse-engineer that answer, the model had processed another 30 loans and some 1,500 more variables. The answer we would get today isn’t the same as the answer we got last week.

In our case, we had artificially constrained the universe of knowledge our credit application had access to. Had the application had free access to the internet, there is no telling what information it would have analyzed.

The Solution

Machine learning will surely become more transparent to the extent it can, but, as we learned, it may not matter: every day another 50 nodes develop on the algorithm, and whereas last week a combination of three variables caused the turndown, this week it is a combination of 32. Let the machine run for another 90 days, and in all probability the output of the credit underwriting model will be beyond the realm of human comprehension - mind blown.

Given this aspect of machine learning, we are not sure there is a solution. If we utilized automated underwriting today, we would not be able to tell a loan applicant why they were turned down, or even prove there was no bias in the decision. Regulators look at some common loan variables that can serve as proxies for potential disparate lending practices, but with machine learning, testing the output is less clear. Factors like the presence of broadband, business density, and proximity to a major supermarket are examples of variables the machine could use in ways that inadvertently produce a disproportionate adverse impact on a protected class of people. Even in commercial lending, not having a clear indication of why a borrower was turned down presents, at a minimum, a customer experience problem.
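One screen that can at least test the output is the four-fifths (80%) rule used in disparate impact analysis: flag any group whose approval rate falls below 80% of the reference group’s. A minimal sketch over hypothetical decision logs:

```python
# Four-fifths (80%) rule screen over hypothetical decision logs.
# Group labels and outcomes are made up for illustration.
def approval_rate(decisions):
    """decisions: list of 1 (approved) / 0 (turned down)."""
    return sum(decisions) / len(decisions)

log = {
    "reference_group": [1, 1, 0, 1, 1, 1, 0, 1],
    "protected_class": [1, 0, 0, 1, 0, 1, 0, 0],
}

ref = approval_rate(log["reference_group"])
for group, decisions in log.items():
    ratio = approval_rate(decisions) / ref
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: approved {approval_rate(decisions):.0%}, "
          f"ratio vs. reference {ratio:.2f} -> {flag}")
```

The screen itself is easy; the hard part is that with a self-evolving model, the variables driving any disparity may change before the review is finished.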

Machine learning applications need to become more transparent and need to provide indications of how they are learning and what correlations and weights they are applying. Artificial intelligence has shown great promise in understanding a large array of variables and turning them into actionable data. The time will come to leverage machine learning in credit; it is just not now.
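One hedged sketch of what such an indication could look like is permutation importance: shuffle one input at a time and measure how much the model’s accuracy drops. The feature names below are assumptions on synthetic data, not a production report.

```python
# Sketch: permutation importance as a transparency report for a
# neural-network credit model. Synthetic data; feature names assumed.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5_000, 4))
y = (0.9 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=5_000) > 1).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                      random_state=1).fit(X, y)

names = ["ltv", "median_income_change", "property_sale_frequency",
         "broadband_present"]
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for name, score in sorted(zip(names, result.importances_mean),
                          key=lambda kv: -kv[1]):
    print(f"{name}: {score:.3f}")   # accuracy lost when this input is shuffled
```

Even a report like this only says which inputs the model leans on, not how it combines them, which is exactly the gap we ran into above.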