Thursday, October 28, 2010

Not your typical financial risk model…

I’ve not done a lot of analysis in the finance industry, and my Google searches didn’t yield helpful insights for similar data mining. I’ve just finished a project and would like some feedback. I’m explaining this as a data preparation and analysis approach to solve a specific problem, described as best I can without names or actual data. I also did a lot of presentation work and extra profiling of the segments that isn’t described here. If anyone has relevant words of wisdom, or suggestions for a different approach they would have taken, then please describe it! Otherwise, perhaps this will be helpful to others…


The business problem to solve was generating customer insight for business clients with loans, taking into account each client business’s financial health and business loan repayment risk.

The first thing we concentrated on was tax payments. The data I had access to contained typical finance account monthly summaries (eg. balance at close of month, total $ of transactions etc), but also two years of detailed transactional history of all outgoing and inbound money transfers/payments (including tax payments made by many thousands of businesses). We examined the two years of summary data, and from the transactional history we kept only those money transfers/payments that involved the account number belonging to the tax man.

The core idea was to understand each business’s tax payments over time in order to get an accurate view of their financial health. Obviously this would have great importance in predicting future loan repayments or the likelihood of future financial problems. One main objective was to understand whether tax payment behavior differed significantly between customers, and a secondary consideration was the risk profiles of any subgroups or segments that could be identified.

It was a quick preliminary investigation (less than two weeks’ work), so I tackled the problem very simply to meet deadlines.

For the majority of client businesses, tax payments occur quarterly or monthly, so I first summarized the data to a quarterly aggregation. For example;
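In rough pandas terms it was something like this (all column and file names here are made up for illustration; they are not the actual data, and this wasn’t the tool used);

import pandas as pd

# Hypothetical inputs: a transaction table plus a monthly account summary table
txns = pd.read_csv("transactions.csv", parse_dates=["txn_date"])
monthly = pd.read_csv("monthly_summary.csv", parse_dates=["month_end"])
TAX_ACCOUNT = "XX-XXXX-XX"  # placeholder for the tax man's account number

# Keep only the transfers/payments involving the tax man's account
tax = txns[txns["counterparty_account"] == TAX_ACCOUNT].copy()

# Net tax payments per customer per quarter
tax["quarter"] = tax["txn_date"].dt.to_period("Q")
quarterly = (tax.groupby(["customer_id", "quarter"], as_index=False)
                .agg(tax_paid=("amount", "sum")))

# End-of-quarter balance taken from the last monthly summary in each quarter
monthly["quarter"] = monthly["month_end"].dt.to_period("Q")
eoq = (monthly.sort_values("month_end")
              .groupby(["customer_id", "quarter"], as_index=False)
              .last()[["customer_id", "quarter", "balance"]])

quarterly = quarterly.merge(eoq, on=["customer_id", "quarter"], how="left")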


As you can see above, each customer could have many records (actually it was a maximum of 8, one for each quarter over a two year period), each record showing the account balance at the end of the quarter and the net sum of payments made to (or from!) the tax man.

Then I created two offset copies of Tax Payments, one being the previous record (Lag) and the other being the subsequent record (Lead) like so;
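In the pandas sketch that is just a shift within each customer;

quarterly = quarterly.sort_values(["customer_id", "quarter"])
g = quarterly.groupby("customer_id")["tax_paid"]
quarterly["tax_paid_lag"] = g.shift(1)    # previous quarter's payment
quarterly["tax_paid_lead"] = g.shift(-1)  # following quarter's payment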


I then simply scaled the data so that everything was between 0 and 1 by using;

(X – (minimum of X)) / ((maximum of X) - (minimum of X))

Obviously, where X is one of the variables representing quarterly account balance or tax payments, and the minimum and maximum are taken within each Customer ID.
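Continuing the sketch, the per-customer rescale looks like;

def minmax01(s):
    # Rescale a series to the 0-1 range; a constant series maps to zero
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng else s * 0.0

for col in ["balance", "tax_paid", "tax_paid_lag", "tax_paid_lead"]:
    quarterly[col + "_01"] = quarterly.groupby("customer_id")[col].transform(minmax01)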

For example, the raw data here;

[table of raw quarterly balance and tax payment values omitted]

Got rescaled to;

[table of the same values rescaled to 0-1 within each customer omitted]
I rescaled all the raw balance and tax payment variables this way so that I could later run Pearson’s correlations and k-means clustering, and also easily graph the data on the same axis (directly comparing balance and tax payments). Some business customers had very large account balances, but small tax payments.

For example I could eventually generate a line chart like this, showing a specific business’s relationship between balance (dotted line) and tax payments (bold red line);

[line chart omitted]
I then ran a simple Pearson’s correlation of the variable ‘Balance’ against the three tax payment variables (original, lag, and lead), grouped by Customer ID. This output three correlation scores per customer: one for the original (account balance and tax payments in the same quarter), a second for the correlation between the current account balance and the previous quarter’s tax payments, and a third for the current account balance and the following quarter’s tax payments.
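The grouped correlation step, in the same illustrative pandas sketch;

def corr_with_balance(df):
    # Three Pearson correlations per customer: balance vs original, lag and lead payments
    return pd.Series({
        "corr_orig": df["balance_01"].corr(df["tax_paid_01"]),
        "corr_lag": df["balance_01"].corr(df["tax_paid_lag_01"]),
        "corr_lead": df["balance_01"].corr(df["tax_paid_lead_01"]),
    })

corrs = quarterly.groupby("customer_id").apply(corr_with_balance)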

My thought process was to use the highest correlation score (along with balance and tax payment amounts as described below) to build k-means clusters to segment the customer base. Hopefully the segments would reflect, amongst other things, the strongest relationship between account balance and tax payments.

I joined the correlation outputs to the data and then I flipped/transposed and summarized the data so that each quarter was a new column for balance and tax payments, creating a very wide and summarized data set. For example;
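In the sketch, that transpose is a pivot to one row per customer;

# One row per customer, one column per quarter for balance and tax payments
wide = quarterly.pivot(index="customer_id", columns="quarter",
                       values=["balance_01", "tax_paid_01"])
wide.columns = [f"{var}_{qtr}" for var, qtr in wide.columns]  # flatten the column index
wide = wide.join(corrs)  # attach the three correlation scores per customer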


…also including the correlation, lag, lead and original value variables in the single record per customer…

Now I had a dataset with a nice single record per customer, and I concentrated on representing the growth or decline in tax payments over the two year period. I did this quite simply by converting the raw payments into percentages (of the sum of each customer’s payments over the two years). In some cases a high proportion of a customer’s payments occurred many months ago, which represents a decline in recent quarters.
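Again in illustrative pandas terms (my actual growth/decline variables were a little different, but this is the idea);

# Each quarter's payment as a share of the customer's two-year total
totals = quarterly.groupby("customer_id")["tax_paid"].transform("sum")
quarterly["tax_paid_pct"] = quarterly["tax_paid"] / totals

# One crude growth/decline indicator: the share paid in the most recent four quarters
recent = (quarterly.sort_values("quarter")
                   .groupby("customer_id")
                   .apply(lambda d: d["tax_paid_pct"].tail(4).sum()))
wide["recent_pay_share"] = recent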

I then built a K-means model (sketched below) using inputs such as;

- the highest correlation score (of the three per customer) and categorical encodings of the correlations (eg. ‘negative correlation’ / ‘positive correlation’, ‘lag’ / ‘lead’ etc)
- the transformed payment sums (as percentages of each customer’s two-year total)
- variables representing growth or decline in payments over time.
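A minimal scikit-learn version of that step might look like this, with feature_cols standing in for the full input list above (the categorical encodings would be one-hot encoded first);

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Highest of the three correlation scores per customer
wide["corr_best"] = wide[["corr_orig", "corr_lag", "corr_lead"]].max(axis=1)

feature_cols = ["corr_best", "recent_pay_share"]  # placeholders for the inputs listed above
X = StandardScaler().fit_transform(wide[feature_cols].fillna(0.0))  # k-means is scale sensitive

km = KMeans(n_clusters=5, n_init=10, random_state=42)
wide["segment"] = km.fit_predict(X)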

The segments that were generated have proved to perform very well. Many features of the client businesses that were not used in the segmentation (eg. number of accounts per client, and risk propensity) differed quite clearly between segments.

When I examined the incidence of risk (failure or problems repaying a business loan) over a three month period (after a three month gap) I found some segments had almost double the risk propensity.

Timeline described below;

[timeline diagram omitted: two years of quarterly history used for the segmentation, then a three month gap, then the three month risk observation window]
As you can see, there were a very small number of risk outcomes (just 204 in three months) but each of these is very high value, so any lift in risk prediction is beneficial. I hate working with such small samples, but sometimes you get given lemons….



Say I built five clusters; here’s an example summary of the type of results I managed to get;

[summary table omitted: for each cluster, the client count, ‘% Of Client Count’, ‘% Of Total Risk’, and the ‘Risk Index’]
Where ‘Risk Index’ is simply calculated as;

(‘% Of Total Risk’ – ‘% Of Client Count’ ) / ‘% Of Client Count’
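Or as a quick pandas check, assuming a 0/1 ‘bad_risk’ outcome flag has been joined on per customer;

# Cluster sizes, risk counts, and the risk index as defined above
seg = wide.groupby("segment")["bad_risk"].agg(clients="size", risks="sum")
seg["pct_clients"] = seg["clients"] / seg["clients"].sum()
seg["pct_risk"] = seg["risks"] / seg["risks"].sum()
seg["risk_index"] = (seg["pct_risk"] - seg["pct_clients"]) / seg["pct_clients"]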


So, this is showing that cluster 5 has a 67.91% higher propensity to be a bad risk than the entire base (well, in this analysis…). Conversely, cluster 2 is much less likely (-70%) to be a bad risk than the average customer.

Maybe not your typical financial risk model….





Comments:

Ivan said...

Hi Tim!

Nice to have you back :)

Interesting approach, hope I can use some elements of it in my own analysis of business segments (finding appropriate clusters of business customers vs. how much a telco can afford to offer new business segment customers, bearing in mind the risk of involuntary churn due to missed payments...)

Clark Adams said...

Nice proposal on finance and data analysis. What program or software did you use for your financial risk model? It's very important for business sectors to have tools like these, for them to monitor the ups and downs of their company.

Tim Manns said...

ok, here's the pitch :)
- For the past year I have been employed by SAS as a data mining consultant, using fabulous SAS software (mostly Enterprise Guide, SAS coding and a bit of Enterprise Miner) to solve advanced data analytics problems.

hunter said...

You have switched from SPSS to SAS! Why? Did you decide SPSS is not powerful enough? (If the scripting can be called a language, it is a very weak one.)

Tim Manns said...

Some personal 'home-life' factors were the main reasons for the change.

It also gave me the opportunity to develop some new skills.

It is the analyst that delivers the results, not the tool :) Any competent analyst can use most toolsets to deliver the same results. Of course each has strengths and weaknesses, but I don't believe there are many tools out there with weaknesses so great they prevent data mining projects from being successfully completed.

Anonymous said...

It doesn’t matter whether you decided which segments to use based on statistical analysis or expert judgment; you will need to justify your decision. In fact, until your segmentation decision is justified it is only an idea – a segmentation idea. Expert judgment based segmentation can probably be justified more easily by providing (predominantly) qualitative reasons. A slightly more difficult task is to justify a clustering or decision tree based segmentation decision, as your clients would expect predominantly quantitative evidence to support your decision.

I wrote about this here and here.


Feel free to comment
