On-Demand Webinar - Data-Driven Underwriting: Top 5 Myths


To watch the recording in full, click here; otherwise, you can read the transcript below.


Lenora Thomas (LT): Hi everyone, my name is Lenora Thomas, Director of Customer Success at Trust Science. Thanks for joining us today for our webinar:

Data-Driven Underwriting: Top 5 Myths

We’ll debunk myths and provide information on how data can help improve loan performance.

I’ll start by introducing our Myth Busters. I'd like to extend a special welcome to our panelists:

Jeremy Mitchell, VP Partnerships Analytics Leader at Equifax

Jordan Hyde, Founder and CEO of GoDay, and also a Trust Science customer

Mat Lavoie, VP of Data Engineering, Trust Science

Our panel discussion will last for about 30 minutes. We'll debunk the top five myths about what you can and can't do with data, and then we'll open up the webinar for a 15-minute live Q&A.

For those of you who are new to Trust Science: we provide subprime lenders with credit scoring they can count on, integrated into their decision-making process.

We know credit decisioning is challenging without current, up-to-date information on your borrowers, so new ways of working with data are part of the solution.

But there are a lot of myths out there. Today we're helping you separate fact from fiction. So let's get started on the myth debunking.

Myth #1: You can't use social media data for credit decisions; it's not allowed

Jeremy, do you want to kick us off here?

Jeremy Mitchell (JM): The regulations really come down to transparency and a measure of data quality, and to it being reproducible for the consumer to understand what is being leveraged for that credit decision. Now, that really extends to data that's being pulled in on a consumer, not necessarily what's being provided by the consumer to make the decision. Once you get into the area of the consumer consenting to data being provided, there are different rules in play, and social media data is one of those data sources that a consumer can provide to us.

The decision then goes back to the lender: how they're incorporating that data and how they want to use it, and again, being able to explain to the consumer what they used and why they made the decision.

Mat Lavoie (ML): Yeah, I think generally the regulators are quite optimistic about what can happen there. From what the Consumer Financial Protection Bureau has stated, they're actively inquiring into it and, like I said, they're very optimistic about how it can help the millions of people who are credit invisible. And we've found that using some of that social data has been very good at predicting credit risk.

There's a lot of information that people are publishing, and publishing publicly, in a lot of different systems. Again, consent is the important piece here, because there's a lot of material people treat as very private. So just make sure you get the right permissions for whatever you're using.

Jordan Hyde (JH): I'd like to add to that point that we see a lot of publicly available data that is just floating around. LinkedIn is a great example: anyone can Google someone on LinkedIn and see a whole bunch of information about them. So there is some publicly available data as well, specifically relating to social media, that anyone can access, whether with a machine or otherwise.

JM: Yeah, just to tack on to that: in cases where you as a lender are procuring that data outside of the consumer's consent, one of the regulatory requirements is that if a consumer disputes that information, thinks it is incorrect, and so on, they have to have a vehicle to get it corrected. In the consumer-consent case, if they are consenting to the data, there's an inherent implication that they've already verified what they're providing you and believe it to be correct, since they're allowing its usage.

That may not necessarily be the case if you procure that information from a public site without their consent, because they haven't really been afforded the chance to verify that information up front. So if they do dispute some of the information, there has to be a vehicle to get it corrected. In general, most of the data we're talking about is their data: it's what they provide and what they essentially control from a correction perspective. However, it will be a tall task to expect somebody like LinkedIn or Facebook or Google, or anybody for that matter, to maintain an avenue for correcting data the way a financial institution is obligated to do. So just keep that in mind.

LT: Definitely. Those are all great points. Okay, so let's move on to myth number two

Myth #2: You can't get mobile data for use in credit decision-making

Mat, what is your take on this?

ML: Mobile devices nowadays are very individual, and they contain a lot of information that you enter yourself. But there's also information that the device is just capturing in the background. So there are a lot of different kinds of information available there.

And again, we just spoke about consent; that's an important piece of how we collect that information. Given that the device is so individual and so specific to you, it's a great medium and a very simple way to present and deliver that information to another party.

Transparency is very important: how you collected the data and what you're obtaining, things like that. But if you meet those guidelines, I think it's a very useful medium, and there's a lot of good data that can be gathered there. Like the social data, it does also have that consent component before you can access it. The devices even lock down some of those pieces and will prompt you, right on the device, before anything is provided to an application. You see that with many apps nowadays.

Jeremy, what are your thoughts on this?

JM: Yeah, again, I think it falls under the same consumer-consent umbrella. From a mobile data perspective, the thing you have to be careful of is that you want to leverage information the consumer has control over: something they input and manage themselves.

You want to stay away from data that may be collected on the phone that they have very little control over, which may even amount to your perspective of their behavior. Obviously, the important data, the data that people are going to understand and be more likely to approve for use, is data that's indicative of the consumer's behavior: whether that's understanding their identity better, or behaviors that correlate to trustworthiness in repaying a debt, or the other way, where they're untrustworthy. Things that just happen on a mobile device and don't really correlate to the consumer would be hard pressed to be approved or to be useful for making your credit decision.

ML: I think another good point to add is that mobile data can be very useful from a fraud perspective: it would be very difficult for someone trying to replicate an identity, because there's a lot of information there about you and what you're presenting. So it's a good avenue for validating that identity in the first place.

LT: Excellent. Okay, let's jump to myth number three.

Myth #3: Decisions from models built on AI are not explainable

I'm guessing, Mat, that both you and Jeremy have a take on this. Jeremy, do you want to start?

JM: Yeah, sure. I'd probably lead with this: if you've been aware of what's been going on in the industry for a while now, there are many companies out there claiming they have explainable AI. If you take that at face value, you could argue that this myth doesn't really exist anymore, since AI and machine learning models are being claimed to be explainable.

That's happening across many areas, many countries, and many organizations. Specifically at Equifax, we've got a patented methodology with machine learning techniques that are explainable and provide reason codes. We've had that for a few years now, and we're expanding those patents with additional machine learning techniques as we speak. So I would say this myth is pretty well dead.

The thing you have to be careful with is who you're working with and how they're doing it. Understanding that is really key, because there are organizations out there doing this that really don't meet the demands of the regulatory mandates. So you do have to be very careful with how you do it and who is doing it for you.

ML: Right. To Jeremy's point, the fact that a patent even exists shows this is a complex component. A lot of the myth originated from other kinds of AI that really are hard to explain right now, like image recognition, and people think that applies to all machine learning, which isn't the case. But it is something that's difficult to do, and there's not necessarily a perfect solution to it either.

It's really important that when you're building the models, you're keeping this in mind and building with that intention, because you can't just slap explainability onto any model and assume it will generate the right output. So that's something to look out for.

The other thing is that, from our side, we ended up favoring working with the open-source community when looking at explainability. There's a lot of research that's been done there, and personally, for this reason: if something's going to be explainable, you need to know the process that's being used to explain it.

If you have something more black-box and the process isn't revealed, then you're somewhat disguising how you got to those results. At least if something's open source, someone can pull it and know exactly what processes have been used and why it is explainable.
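As a concrete illustration of explainable-by-construction scoring (a sketch, not Trust Science's or Equifax's actual method), the snippet below derives adverse-action reason codes from a simple logistic scorecard, where the score decomposes exactly into per-feature contributions. All feature names, weights, and baselines are hypothetical.

```python
import math

# Hypothetical scorecard: weights learned elsewhere; BASELINE holds a
# reference (e.g., population-average) value for each feature.
WEIGHTS = {"months_at_address": 0.02, "inquiries_6mo": -0.35, "utilization": -1.8}
BASELINE = {"months_at_address": 36, "inquiries_6mo": 1, "utilization": 0.30}
INTERCEPT = 1.0

def score_with_reasons(applicant, top_n=2):
    """Return (probability of repayment, top adverse reason codes).

    Each feature's contribution is weight * (value - baseline), so the
    score decomposes exactly into per-feature terms we can report.
    """
    contributions = {
        f: w * (applicant[f] - BASELINE[f]) for f, w in WEIGHTS.items()
    }
    logit = INTERCEPT + sum(contributions.values())
    prob = 1.0 / (1.0 + math.exp(-logit))
    # Adverse reasons: the features pulling the score down the most.
    reasons = sorted((c, f) for f, c in contributions.items() if c < 0)
    return prob, [f for _, f in reasons[:top_n]]

prob, reasons = score_with_reasons(
    {"months_at_address": 6, "inquiries_6mo": 4, "utilization": 0.95}
)
```

Because each contribution is computed directly rather than approximated after the fact, the reason codes are faithful to the model by construction, which is the property the panelists emphasize.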

JM: Yeah, just to add to that, it is worth noting that different countries have different rules. In the US, they have very strict regulations on what you're supposed to do.

And what you need to be able to explain to a consumer there doesn't necessarily apply in every country. So it's very important to understand what each country requires, because there may be different levels of explainability needed, which then points to different techniques that could be used in different countries.

You're still meeting those requirements, but it allows you a little more latitude in the complexity of the models you build as well.

LT: Okay, that's definitely true. Anything to add?

ML: No. I think that's good.

LT: Excellent. Let's go to myth number four.

Myth #4: The big bureaus are standing still in the data race

Jeremy, as you're coming from a bureau, what's your take on this?

JM: Um, yeah. Again, if you've been observant about what's been going on in the past few years, I think you definitely understand this is not the case at all.

I think, in general, the bureaus are constantly looking for information to fill in the blind spots of what credit data providers offer. If you think about all the different reasons why a consumer may not pay back a credit obligation, there are a number of ways that can happen, and the big players in the space, the bureaus, are going to be constantly on the lookout for data sources that help fill in those gaps.

And you've seen it recently with TU's (TransUnion's) acquisition of Factor Trust, Experian's acquisition of Clarity, and our acquisition of Data X. So we're trying to fill in the low-end subprime spectrum there.

There are partnerships being announced across the three bureaus for consumer-consented data and other data sources. So the bureaus are always going to be marching toward acquiring more data and getting access to more data.

Now, they're going to have a strategy, right? Their strategy is going to be to start with things similar to what they already have, financial-type data, and then broaden outwards.

So they may have a different focus than a smaller organization that sees an opportunity and goes in and takes advantage of it. As well, the title of the myth says "big bureaus," and because these bureaus are big, it does take some time to navigate getting new data sources, especially as you start classifying them as alternative.

They have a lot at stake, so they have to fully vet what the data is and where it's coming from, running it through all their processes. It does take a long time for them to move the needle into these areas, and that's where smaller, singularly focused organizations do have a bit of an advantage.

LT: Okay. And Jordan, as a lender, how do you incorporate bureau data with Trust Science?

JH: Bureau data is very important in certain ways because it gives access to trade lines, collections, all sorts of alternative data that's otherwise very hard to get your hands on as a lender.

You might not know if there are any collection items any other way; a bureau, for example, would have that sort of up-to-date data available. And the fact is, as Jeremy said, being a big bureau means lots of people are reporting in that data. Then you take that data and use it with other sources of data to make credit decisions. One thing we always talk about is that traditional credit scores from the big bureaus are great for some clients and not ideal or customized for others. So how can we take some of the data the big bureaus do have (and I agree, this is a myth; they do have a lot of good data coming in) and analyze it to make a decision on the creditworthiness of an individual, based on pieces of that data we can get from a credit report or from a big bureau?

LT: Okay, so I think that myth is definitely busted. Let's pop over to myth number five.

Myth #5: Scoring and decisioning can only be done once in the underwriting flow, or as a waterfall or overlay to previous activities

So for this one, how about we hear from you, Mat, on how we enable customers, and then also hear from Jordan, whose perspective as a lender and our customer is especially pertinent to this audience.

ML: We were just talking about large amounts of data and different forms of data. In general, you could always be going out and collecting yet another form of data, yet another piece. The bureaus and others have been working on the premise of: okay, here's your score, it's yours, it's pre-calculated, you can look it up, and this is what it is during that period of time.

When you're building machine learning models and trying to make a decision on the loan at that point in time, you can take in each of these different forms of data, and you don't necessarily have to go out and make the next call. Maybe you've gone to the bureau, maybe you've got some social data, maybe you've even decided to get some banking data. Every bit of data you add gives you more information and reduces the error in the risk you're calculating. At that point, the lender could decide: that's a risk I'm willing to accept, and just accept the loan there without making additional calls. So you can really build quite a complex system that looks at each source, and at any point you can evaluate whether it's worth going to get that extra piece of data.
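The stop-early workflow Mat describes, where each added data source refines the risk estimate and shrinks its uncertainty, can be sketched as a loop that halts as soon as the decision is clear. This is an illustrative skeleton only; the source names, deltas, and thresholds are invented.

```python
# Hypothetical data sources, cheapest first; each returns
# (risk_delta, error_reduction): more data refines the risk estimate
# and shrinks its uncertainty band.
SOURCES = [
    ("bureau",  lambda app: (0.30, 0.15)),
    ("social",  lambda app: (-0.05, 0.10)),
    ("banking", lambda app: (0.02, 0.05)),
]

def underwrite(applicant, accept_below=0.35, decline_above=0.60):
    """Sequentially add data sources; stop once the decision is clear."""
    risk, error = 0.50, 0.40  # prior risk estimate and its uncertainty
    for name, fetch in SOURCES:
        delta, err_gain = fetch(applicant)
        risk += delta
        error -= err_gain
        # Decide early if the whole uncertainty band is on one side.
        if risk + error < accept_below:
            return "accept", name
        if risk - error > decline_above:
            return "decline", name
    return ("accept" if risk < accept_below else "decline"), "all sources"

decision, last_source = underwrite({"id": 123})
```

In a real system the fetch functions would be paid API calls, so stopping after the bureau pull when the band is already decisive is exactly the cost saving the panel discusses.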

LT: Definitely, and Jordan, how about you jump in?

JH: Sure, I agree with Mat. The important piece here is that there's no point in collecting all sorts of data on a customer you know you cannot proceed with, for example. Whether you're pulling a big bureau or getting alternative data, if there are some pretty telltale signs you can't proceed, it's way better to know that earlier in the process. I'm not just talking about the knockout rules; those are important too. What I mean is that there could be a combination of factors that would result in a lender not wanting to move forward with an application, and in my opinion it's important to make that assessment as you go. So I look at a combination of factors to decide whether to proceed; perhaps I gain more data, then run it again and make another decision about whether to proceed and get even more data.

Some consumers are easy for a human eye: we look at them and go, of course, this is a no-brainer, I would definitely issue a loan. And then others are much more complicated.

And it's about getting a system in place that uses the right amount of data for each customer, and that is way easier said than done. But it's about bringing all of that together: the knowledge, the waterfall, and the rest of your data, and understanding when you're going to bring which sources in and when you're going to make a final decision.

LT: Definitely. Jeremy, do you have anything you'd like to add to this one?

JM: Yeah, from a bureau perspective, the way this was framed really applies, in the experience I've had with multiple bureaus: the call for data is usually a singular back-and-forth conversation. One time: hey, we need this data, we need these scores and these attributes. Whether or not they need all of it at any particular time, they call for everything they need at that particular point in time.

And it's delivered back. Now, within their underwriting system, they may flow data or scores through at different points and have different knockout rules throughout the process, but that's all being done within their system.

They generally come and pull everything they can from us in that single call, because they don't want to make a separate call for each particular data source; that has an impact on the consumer and on their costs as well. The other part is that the bureaus aren't really set up to hang open in the process. In other words, when different lenders call out for information, like I said, the call comes in, the response goes out, and then the connection is closed.

It doesn't really hang there and go: okay, tell me what you want, I'll give it back to you, and then I'll wait here in case you have another request for a different data source.

It's not really set up to work that way; it's very transactional. So essentially most lenders are calling for everything at once. Now, they may go to multiple companies to pull different data sources, but usually when they go to those companies, specifically the bureaus, they're calling for all the data they might want, we're returning it back, and they're only making that call once.

ML: Right. Maybe to add to that: from our perspective, when we go out and collect some social data, you're searching the web and getting some of it very quickly, but you may need to do some refreshes and updates.

That can take some time, but you may already have collected a bit of information. Should you just hang on to it, or could it be used to generate a score right now?

Perhaps the best decision is to pass it on, then continue your search online for other pieces of information as you collect them. Just keep adding and refining the score, and at each point you're reducing your risk, your error in the calculations. So at some point you can decide and say, yes, it's acceptable risk at this point.

JM: Just to add to that, one of the reasons I think that's a very interesting prospect is that generally, when people want alternative data, whether it's consumer-consented data, social media data, or other sorts of data, they're not looking for reasons to turn consumers down. They probably already have enough reasons to do that.

They're looking for more reasons to approve them, to accept them. So if I can get to an accept decision before I've collected all the data, then I can stop; I can get there and move on. It makes sense to ask: what decision would I make based on the information I have right now? If I can get to a decision I'm very confident is right for the business or right for the consumer, I can stop, or keep collecting information that might add more intelligence to the process. I think that's a very innovative new thought process around how to handle underwriting, given that, like I said, you probably already have plenty of information telling you why you shouldn't approve them.

The goal here is to figure out: what is the information I don't know about that would really tell me I should approve them?

LT: Definitely. Right, we're trying to stay on schedule here. We're at the half-hour mark, and we're going to open up the Q&A portion for our audience. So, do we have some questions coming in?

The first question is,

Q: In your opinion, how much data is enough, and how should you get it?

Who would like to take that on? Jordan, would you maybe like to start with that one?

JH: Yeah, no amount of data is ever enough.

So that's a really hard question. I think it's important to collect as much data as you can, and to understand it, because it might add value in the future.

And I think, my goodness, as much data as you can look at, to help you understand what you're looking at and to help you make a decision, is really important. There are so many pieces you can pull in to learn from as you go. For us, it's been literally trial by fire: we have tried to collect every point of data we're allowed to, to help us make decisions. We collect data at multiple points through the process, and we're always open to ideas, always brainstorming other ways to get more data, even just to inform how to make a decision.

ML: Yeah, and when you're collecting data, you don't necessarily always have to follow the same order. Once you see some piece of information that gives you some insight, you might check a specific source or request something specific from that customer. You can get some insight that way as well.

LT: Jeremy, anything to add on that one?

JM: Yeah, as a data scientist, I would agree with Jordan: you can never have enough data. But what's important is to understand the value that each data source brings to the decision. If I have a data source that's really important in making the decision and provides most of the value, then collecting other data that isn't really going to override what that source is saying is probably not very useful. So it really comes back down to the value each data source provides.

And, to your point, Mat, what order matters, and what incremental value the sources add to each other, to help determine, as I'm collecting data: do I have enough for my first official answer, such that the subsequent added data isn't really going to override my decision?
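One common way to put a number on the "incremental value" Jeremy mentions is to compare a ranking metric such as AUC with and without the added source. The sketch below uses a tiny hand-made portfolio and a stdlib-only AUC; the bureau scores and the social signal are fabricated for illustration.

```python
def auc(scores, labels):
    """Probability a randomly chosen good outranks a random bad (pairwise AUC)."""
    goods = [s for s, y in zip(scores, labels) if y == 1]
    bads = [s for s, y in zip(scores, labels) if y == 0]
    pairs = [(g > b) + 0.5 * (g == b) for g in goods for b in bads]
    return sum(pairs) / len(pairs)

# Toy portfolio: 1 = repaid, 0 = defaulted.
labels = [1, 1, 1, 0, 0, 1, 0, 1]
bureau_score = [0.9, 0.6, 0.7, 0.5, 0.4, 0.55, 0.65, 0.8]      # bureau only
social_signal = [0.2, 0.3, 0.1, -0.2, -0.3, 0.25, -0.25, 0.1]  # added source

base = auc(bureau_score, labels)
combined = auc([b + s for b, s in zip(bureau_score, social_signal)], labels)
lift = combined - base  # incremental value of the social source
```

If `lift` is near zero, the new source is mostly redundant with what you already have, which is exactly Jeremy's criterion for stopping.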

LT: Right. We've got a couple of additional questions.

Q: How do you disclose a turn-down for a social media reason? To give an example: you don't have enough friends on Facebook.

ML: So this is a very common type of question that people ask: well, how are we doing this right now? A lot of the information you get online and in social media, like the LinkedIn example Jordan was talking about, is something people are publicly posting, and you can use that information; anything they're publicly posting becomes fairly easy to work with. It's when you get into the more nuanced things, like "enough friends," that it gets harder.

Even when we build the features, we look at trying to understand what is actually happening behind them, perhaps the stability of certain things they have on their profile.

LinkedIn is a good one, where a person is really promoting themselves; it's almost becoming your resume, and it ties very closely to what they're trying to do in their career. So you don't get too many reasons like "you don't have enough friends"; that turns out to be not very predictive anyway. It's more about those other qualities: people recommending you, other activities, what you're posting or what you're following might be indicative of the type of person. And it's not just financial things you can look at when you're trying to make a credit decision; I think that's maybe the misunderstanding. People often think it has to be financial, but you're looking to see whether this person has the qualities typical of someone who repays well, and I think we're starting to see some of that with social data, which is really interesting.

LT: So next question.

Q: Myth number two talked about mobile data consent, and this gentleman is looking to know if we can explain what mobile data we get.

ML: Yeah, there are a few things. There's a Trust app that you can download, and we use it in a couple of different ways. First off, a mobile device is a really fast way to connect to your social data sources: you already have your logins, so the app can ask, do you want it to access this data, you say yes, and then you can pass that on. It's very fast.

There are other things that can come up where we can do some validation of the data: they're supposed to be living here, so do you actually live in this house, are you present at that location, things like that. The other things that are often asked are about validating who that person is, from some of the data that's been collected there.

And third, their general usage might be something interesting to look at as well: how they're using their device and what they choose to do.

Q: Do you use social data primarily for identification or as an actual credit indicator? Does it apply more to thin files versus derogatory data?

ML: Yeah, we actually do use it as a credit indicator and have found that it provides a lot of value there. We also use it as a way of identification: if certain things don't match, that might be something that needs to be looked at further. I was mentioning fraud before; it's pretty easy to just make up some information, but if the email being used doesn't match the profile, now you've got a disconnect in that information. You start to question it, and if you see more evidence of that, it's at least a flag to look into. And that's useful there.
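The mismatch check Mat describes can be as simple as comparing self-reported application fields against an externally sourced profile and flagging the disagreements. A minimal sketch with made-up field names and values:

```python
def consistency_flags(application, profile):
    """Compare self-reported application fields against an external profile.

    Returns the list of fields that disagree; several disagreements are a
    signal to review the application further, not an automatic decline.
    """
    return [
        field
        for field, value in application.items()
        if field in profile and str(profile[field]).lower() != str(value).lower()
    ]

app = {"name": "Pat Doe", "email": "pat@example.com", "city": "Toronto"}
prof = {"name": "Pat Doe", "email": "pdoe99@example.com", "city": "Calgary"}
flags = consistency_flags(app, prof)
needs_review = len(flags) >= 2
```

Real systems would add fuzzy matching for names and addresses, but the principle of surfacing a disconnect for human review rather than hard-declining is the same.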

Q: If you want to use a solution with machine learning, how do you choose a vendor, and how does that impact explainability? For example, what kind of features, and how many should you have?

Maybe Jordan, as someone who's gone through this process, you can give us a bit of your feedback on that.

JH: The question was how you choose a vendor with regard to gaining data and gaining AI capabilities.


I think there's an important consideration here: there are multiple data sources out there and multiple ways you can get your data.

How you end up crunching that data is up to you, but it needs to be done in a secure fashion with people that you know and trust. At the end of the day, you need to find people who are going to look into your data, analyze it, and help you make decisions.

There are two pieces to this puzzle. One is gaining data. The second is properly analyzing it with someone you can sit across the table from, have a discussion with, and really gain the knowledge you need to make ongoing decisions. We have played with so many things, and we've made a ton of mistakes as we've gone; we've also had a bunch of wins, and the only way we got there is by testing.

I can't really comment on the exact way to choose the right vendor; as business owners, we've all been there, trying to choose between two great options, or a few options you're unsure of. But I would say it's very important to ensure you have a good idea of what you can collect and where your data is coming from, and then work through some analysis with your vendors to see who can really pull out the interesting pieces of that data.

ML: There's a lot of data out there. Some groups have even gone down the route of hiring their own data scientists; you might have your own team working on certain things.

And it's really about the data that's available for use. On the machine learning aspect alone, there are a lot of things you can look at.

But there are also the data implications: are they bringing their own data, or are they just doing the research on yours? You can still work with an outside group and get some benefit; we've seen that.

LT: So this actually ties in quite nicely.

Q: I have a significant amount of historical known data; do I really need to go out and get more data?

ML: So, more data. Okay, here's what it comes down to. You can start with your historical data, get a pretty good lift, and build a model from that.

And if you have enough historical data, and you're asking the right questions, you can actually do quite a bit. You might want guidance to figure out what questions you should be asking, the things you don't know you should be asking. But you can get pretty far. The key is whether there's another data set that would be essentially disjoint, with not a lot of overlap with what you're already gathering.

So, as I was saying before, you get financial data from the bureau. That's not something we see a lot of in our social data, and what we've noticed is that the two sets are largely disjoint. The combination of both of them actually helps generate a better score, because they come from different sources and take different perspectives in looking at the person. So I think there's still benefit, but you can get pretty far with a lot of data.
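Matt's point about two largely disjoint data sets combining into a stronger score can be sketched in a few lines. Everything below is a hypothetical illustration: the field names, weights, and `blended_score` function are assumptions made up for demonstration, not Trust Science's or Equifax's actual features or model.

```python
# Hypothetical sketch: merging disjoint feature sets (bureau financials
# vs. alternative/social signals) into one vector, then scoring it.

def combine_features(bureau: dict, alternative: dict) -> dict:
    """Merge two feature sets; since they are disjoint, no keys collide."""
    return {**bureau, **alternative}

def blended_score(features: dict, weights: dict) -> float:
    """A toy linear score over whichever features are present."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

bureau = {"utilization": 0.45, "months_on_file": 36}
alternative = {"stable_contact_info": 1.0, "verified_income": 1.0}

weights = {
    "utilization": -40.0,         # high revolving utilization lowers the score
    "months_on_file": 0.5,        # longer bureau history raises it
    "stable_contact_info": 10.0,  # alternative-data signals the bureau
    "verified_income": 15.0,      # file cannot see add extra lift
}

features = combine_features(bureau, alternative)
print(blended_score(features, weights))  # 25.0
```

In this toy example the bureau features alone happen to cancel out to a score of 0.0, while the combined vector scores 25.0: the alternative features separate an applicant that the bureau view alone could not, which is the "different perspectives" effect Matt describes.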

LT: Excellent. Jeremy and Jordan, anything to add to that one?

JM: Yeah, I'll take a stab at it from a different perspective, because Matt talked about the data you're using as predictors. I'll talk about the other side of it, which is the amount of data you have in the training set, if you will, for what you're trying to model.

And I think it really comes down to what your objectives are and how you plan on using it. So, if your objective is to build a model and augment the decisions you're currently making, then generally what most financial institutions have, from a historical data perspective, is: I know who I approved, I know who I declined, and for those I booked, I know how they performed. In general, if I just take that data and build a model on it, the issue I will most likely run into is that I'm just going to reiterate the same decisions I've made over and over again, and I won't understand whether there's a swap set I really need to be concerned about: are there people I'm declining that I should be approving, and vice versa?

So going out to get more data on those consumers, about how they might have performed on similar types of loans or accounts, could help you determine whether they actually performed well or badly. The other side of it: let's say you want to create a new product type or go into a new geography. Then the data you have probably isn't going to be sufficient. I need to understand what the effect of that is, how many people perform on it, and whether my data is predictive of how someone's going to perform on this new product.

Or in this new geography. So collecting more data, to sort of round out what you have in your portfolio, is usually a good idea.
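Jeremy's swap-set idea can be made concrete with a short sketch: compare the existing rule-based decision with a new model's decision and list who would "swap in" (previously declined, now approved) or "swap out". The applicant records, field names, and 0.6 cutoff below are invented for illustration; a real analysis would run over a full portfolio with retro performance data.

```python
# Hypothetical swap-set analysis: old rule decisions vs. a new model score.

applicants = [
    {"id": "A", "old_decision": "approve", "model_score": 0.9},
    {"id": "B", "old_decision": "approve", "model_score": 0.4},  # would swap out
    {"id": "C", "old_decision": "decline", "model_score": 0.7},  # would swap in
    {"id": "D", "old_decision": "decline", "model_score": 0.2},
]

CUTOFF = 0.6  # approve when the model score meets or beats this threshold

def swap_sets(records, cutoff=CUTOFF):
    """Return (swap_ins, swap_outs) between the old rules and the new model."""
    swap_in = [r["id"] for r in records
               if r["old_decision"] == "decline" and r["model_score"] >= cutoff]
    swap_out = [r["id"] for r in records
                if r["old_decision"] == "approve" and r["model_score"] < cutoff]
    return swap_in, swap_out

print(swap_sets(applicants))  # (['C'], ['B'])
```

The swap-in group is exactly the population Jeremy describes: you have no performance history on them because you never booked them, which is why outside data on how they performed on similar loans elsewhere is so valuable.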

LT: That's so very true. Great answer.

Okay, do I have enough time for one more question? Okay, so the final question we're going to touch on today is:

Q: Right now we’re automating parts of our underwriting using rules that we’ve developed over about 10 years. Given the explosion in alternative data sources and the significant size of those data sets, how can we actually automate our decisions by leveraging these new data sources?

It's a big question.

ML: Yeah, so there are a lot of ways to look at it. There are a lot of systems I've seen that are very rule-based, and you can go through and build out really good systems that way; humans are pretty good at finding some patterns. But once you start adding all these different bits of data and all these features, our ability can't be matched by what a computer can do. That's why machine learning models exist: they can take advantage of all that data and really classify it. And interestingly enough, the question comes up every now and then of which attributes to use, which combinations of them, and what would give the best output.

And you don't need that many. In the end, the best models don't perform with hundreds and hundreds, or even thousands, of features, and I've heard people mention numbers like that before. And it's like, well, you're not going to have thousands of features in your actual decision.

Think about it this way: with just binary features, it would only take 33 of them to classify every single person in the world uniquely, down to one. And you typically want groups when you're trying to classify people, so you can get a percentage and separate the risk that way. But yeah, you don't need that many. You're really just looking for the optimal features and the right combination of them. And I can't think of a better way to figure out the automation than to throw a machine learning approach at it and let it decide.
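Matt's 33-features claim is easy to check: 33 binary features give 2^33 distinct combinations, which exceeds the world's population of roughly 8 billion people, so in principle each person could land in their own bucket.

```python
# Verify the claim: 2**33 combinations of binary features exceed the
# world's population, so 33 binary features suffice for unique buckets.

import math

WORLD_POPULATION = 8_000_000_000  # approximate

combinations = 2 ** 33
print(combinations)                            # 8589934592
print(combinations > WORLD_POPULATION)         # True
print(math.ceil(math.log2(WORLD_POPULATION)))  # 33 binary features needed
```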

LT: Okay. Jeremy and Jordan, anything to add to that one?

JH: What Matt said is correct. I mean, we can look at lots of data sources all together, but really there are a few main features that end up being the winners. It's also about looking at the alternative data to see if one might fit into that set.

LT: Right. Well, at this point, I think we have just enough time to wrap up. One last slide for you; this wraps up our webinar for today.

Thank you so much for joining us. If you'd like a personalized demo of how we can help you automate your decision process and add data to score your thin-file or credit-invisible borrowers, please reach out to us.


You can follow us on Twitter @trustscience, or you can reach out to us by email at solutions@trustscience.com.

Again, thank you everyone for joining us today and thank you very much to our panelists. We really appreciate your time. Thanks.

What happens when Trust Science goes toe-to-toe with traditional scoring methods? Check out our helpful infographic, Ultimate Underwriting Showdown to find out.

Get Infographic