An updated and expanded 2nd edition (first edition)
Why read this post?
Learn why high-stakes data is essential for building accurate credit-scoring models.
Billions of people lack traditional credit histories, but every single person on the planet has attitudes, beliefs, and behaviors that can be used to predict creditworthiness. Quantifying these human traits is the focus of psychometrics, and the alternative data provided by this technique allows LenddoEFL to greatly expand financial inclusion in its mission to #include1billion.
But there is a catch: to build models that accurately predict default, applicants must complete psychometric assessments while pursuing actual financial products, a so-called “high-stakes” environment. This is because people answer psychometric questions differently when a real loan is on the line (high stakes) than they do in a hypothetical situation with nothing to gain or lose (low stakes).
Despite this fact, psychometric tools are frequently built using low-stakes data. For example, many companies develop psychometric credit scoring tools using volunteers. And many lenders want to validate psychometric credit scoring tools on their clients through back-testing: giving the application to existing clients and comparing scores to their repayment history, again a low-stakes setting.
These approaches are only valid if low-stakes data can be applied to the real world of high-stakes implementation, where access to finance is on the line for applicants. But it turns out that this is not the case. A recent study published by our co-founder Bailey Klinger and academic researchers proved that low-stakes testing has no predictive validity for building and validating psychometric credit scoring models in a real-world, high-stakes situation. The data below shows exactly how applicant responses shift as they move from one environment to another.
To test for differences between low- and high-stakes situations, LenddoEFL gathered psychometric data from two sets of micro-enterprise owners in the same East African country. One group already had their loans (low stakes), and the other group completed a psychometric assessment as part of the loan application process (high stakes).
First, the low-stakes data. The figure below shows the frequency distributions for two of the most important ‘Big 5’ personality dimensions for entrepreneurs, Extraversion and Conscientiousness, as well as for a leading integrity assessment[i].
You can see that when the stakes are high, people are answering the same questions very differently. The distribution of scores on these three personality measures shifts significantly to the right. When something important is at stake, like being accepted or rejected for a loan, people answer differently.
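A shift like the one described above can be quantified with a standardized effect size. The sketch below uses Cohen's d on synthetic score distributions; the sample sizes, means, and the size of the shift are illustrative assumptions, not the study's data:

```python
# Quantifying a rightward shift between low- and high-stakes responses
# using Cohen's d (mean difference in pooled standard deviations).
# All numbers here are synthetic, chosen only to illustrate the idea.
import math
import random

random.seed(1)
low  = [random.gauss(50, 10) for _ in range(1000)]   # low-stakes scores
high = [random.gauss(58, 10) for _ in range(1000)]   # high-stakes: shifted right

def cohens_d(a, b):
    """Standardized difference between the means of samples a and b."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    pooled = math.sqrt((va + vb) / 2)
    return (mb - ma) / pooled

print(f"shift: {cohens_d(low, high):.2f} pooled standard deviations")
```

A d around 0.8 is conventionally considered a large effect, which is the kind of wholesale distribution shift the figures above display.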
How do these differences in low- vs. high-stakes data matter for credit scoring?
To see how these differences affect the predictive value of psychometric credit scoring, we can build two models[ii] to predict default: one trained on responses from applicants who took the assessment in a low-stakes setting, and one trained on responses from applicants in a high-stakes setting. We can then use the Gini coefficient, which measures how well a model rank-orders applicants by riskiness (higher is better), to compare each model’s ability to predict default for its own population and for the opposing one.[iii]
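For readers unfamiliar with the metric, the Gini coefficient used in credit scoring equals 2 × AUC − 1, where AUC is the probability that a randomly chosen defaulter is scored riskier than a randomly chosen repayer. A minimal sketch, using made-up scores and labels rather than any LenddoEFL data:

```python
# Gini coefficient of a risk model's rank-ordering power: Gini = 2*AUC - 1.
# AUC is estimated by comparing every (defaulter, repayer) pair of scores.

def gini(scores, defaults):
    """Gini of risk scores against observed outcomes (1 = default)."""
    pos = [s for s, d in zip(scores, defaults) if d == 1]   # defaulters
    neg = [s for s, d in zip(scores, defaults) if d == 0]   # repayers
    # Fraction of pairs where the defaulter scored riskier (ties count half).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return 2 * auc - 1

# Illustrative example: higher score = riskier applicant.
scores   = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
defaults = [1,   1,   0,   1,   0,   0,   1,   0]
print(round(gini(scores, defaults), 3))  # → 0.5
```

A Gini of 1 means the model ranks every defaulter above every repayer; 0 means it ranks no better than chance.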
These results clearly show a significant change in rank ordering when models built on low-stakes data are applied in high-stakes settings, and vice versa.[iv] Importantly, a psychometric credit-scoring model can indeed achieve reasonable predictive power in a real-world, high-stakes setting, but only when the model was built with high-stakes data.
Think about it like this: when the stakes are high, both less risky and more risky applicants change their answers, but they change them in different ways. That difference, between how low- and high-risk people answer in a high-stakes setting, is exactly what psychometric credit-scoring models use to predict risk.
This also illustrates why we see that a model built on low-stakes data is ineffective in a real-world high-stakes implementation. In the low-stakes setting, the low- and high-risk people aren’t trying to change their answers, because they aren’t concerned with the outcome of the test. Once the stakes are high, however, this pattern changes.
Testing existing loan clients or volunteers has an obvious attraction: speed. You don’t have to bother new loan applicants with additional questions and then wait years for them to repay or default before you have the data to build or validate a score.
Unfortunately, these results clearly show that this shortcut does not work. People change their answers when the stakes are high, so a model built on low-stakes data falls apart when used in the real world. People answer optional surveys with less attention and less strategy than they bring to a high-stakes application, and therefore the only strong foundation for a predictive credit-scoring model is real high-stakes application data and subsequent loan repayment.
Consider an analogy: you can’t predict who is a good driver based on how they play a driving video game, where the outcome is not important. Conversely, someone who does well on a real-world driving test may not perform that well on a video game. Whether it is driving skills or creditworthiness, you must predict the high-stakes context with high-stakes data.
- Psychometric model accuracy is only guaranteed when you collect data in a high-stakes situation (i.e., a real loan application).
- Despite its speed, back-testing a model on existing clients in a low-stakes setting is risky because it might not tell you anything about how the model will work in a real implementation.
- If you want to buy a model from a provider, the first thing you should verify is what kind of data they used to make their model. Was it from a real-world high-stakes implementation similar to your own?
[i] These are indices from widely available commercial psychometrics providers. It is important to note that LenddoEFL no longer uses any of these assessments or dimensions in our assessment, nor any index measures of personality.
[ii] Stepwise logistic regression built on a random 80% of the data and tested on the remaining 20% hold-out sample. An equivalently sized random sample was drawn from the other set (high-stakes data for the low-stakes model, low-stakes data for the high-stakes model) to remove any effect of sample size on the Gini coefficient.
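The build-and-cross-test protocol in notes [ii] and [iii] can be sketched roughly as follows. Everything in this sketch is an illustrative assumption: the data is synthetic and single-feature, a plain gradient-descent logistic fit stands in for the study’s stepwise regression over many items, and the stakes effect is modeled simply as a shifted mean plus a compressed gap between risky and safe answers in the high-stakes population:

```python
import math
import random

random.seed(42)

def simulate(n, shift, sep):
    # Synthetic (answer, default) pairs. `shift` mimics the stakes-driven
    # rightward shift in responses; `sep` is the gap between the average
    # answers of defaulters and repayers.
    data = []
    for _ in range(n):
        default = random.random() < 0.3
        answer = random.gauss(shift + (sep if default else 0.0), 1.0)
        data.append((answer, int(default)))
    return data

def fit_logistic(train, lr=0.5, epochs=300):
    # Plain gradient descent on a one-feature logistic regression
    # (a simplified stand-in for stepwise logistic regression).
    w, b = 0.0, 0.0
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in train:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (p - y) * x
            gb += p - y
        w -= lr * gw / len(train)
        b -= lr * gb / len(train)
    return w, b

def gini(model, data):
    # Gini = 2*AUC - 1, computed over all (defaulter, repayer) score pairs.
    w, b = model
    pos = [w * x + b for x, y in data if y == 1]
    neg = [w * x + b for x, y in data if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return 2.0 * wins / (len(pos) * len(neg)) - 1.0

low = simulate(500, shift=0.0, sep=1.0)   # low-stakes population
high = simulate(500, shift=1.5, sep=0.4)  # high-stakes: shifted, gap compressed

random.shuffle(low)
train, holdout = low[:400], low[400:]          # 80/20 split
opposing = random.sample(high, len(holdout))   # equal-sized opposing sample

model = fit_logistic(train)
print(f"own hold-out Gini: {gini(model, holdout):.2f}")
print(f"opposing Gini:     {gini(model, opposing):.2f}")
```

Under the compressed-gap assumption, the low-stakes model typically rank-orders its own hold-out sample noticeably better than the opposing high-stakes sample, which is the qualitative pattern the study reports.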
[iii] Note that this exercise was restricted to those questions that were present in both the low- and high-stakes testing. It does not represent LenddoEFL’s full set of content and level of predictive power; it is intended only to compare relative predictive power.
[iv] The results also show that, using standard personality items, absolute predictive power is lower in a high-stakes setting than in a low-stakes setting. This is likely because some items can be manipulated when the stakes are high, which makes them less useful there. This lesson has led LenddoEFL to develop a large set of application content that is more resistant to manipulation and has much higher predictive power in high-stakes models. This content forms the backbone of the current LenddoEFL psychometric assessment, all of which is built and tested exclusively with high-stakes application data and subsequent loan repayment/default rather than back-testing.