This Case demonstrates the basic concepts of MiningMart in a simple KDD application. It is based on sales data provided by a drug store chain (see Business and Data Description). After preprocessing, a Support Vector Machine (SVM) is used to predict the number of sales of a particular product in a particular shop in a given week. The SVM is trained on past sales figures for that item. The final Step in this Case compares the predicted sales with the actual sales and prints the average error.
The data for this Case was provided by a drug store chain. The first input table contains the weekly sales numbers of about 4000 items in 20 shops over 110 weeks (roughly two years). The second input table provides boolean information about the presence of certain bank holidays in each of the 110 weeks, because seasonal events (such as Christmas) influence people's shopping behaviour. This bank holiday information was therefore crucial for a successful prediction of sales figures.
The goal was to predict the number of sales of a particular item in a particular shop in week 110, using the first 105 weeks as training data. The prediction thus had a time horizon of 5 weeks, which is necessary for effective stock planning in the shops. The prediction was performed using an asymmetric loss function which punishes underestimation more than overestimation, because it is more costly if an item runs out of stock than if too many items are ordered (the drug store chain does not sell items like fresh food, which can only be kept in stock for a short time). The prediction was successful in that a small loss was achieved; note, however, that this Case models only the training error, and the test error is slightly higher.
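Such an asymmetric loss can be sketched as follows. This is an illustrative Python sketch, not the Case's actual implementation; the penalty factor of 2.0 is an assumption, since the Case does not state the exact ratio between the costs of under- and overestimation.

```python
def asymmetric_loss(actual, predicted, under_penalty=2.0):
    """Loss that punishes underestimation more than overestimation.

    under_penalty is an illustrative assumption; the Case does not
    state the exact penalty ratio used.
    """
    error = actual - predicted
    if error > 0:              # predicted too few: item may run out of stock
        return under_penalty * error
    return -error              # predicted too many: overstock, a cheaper mistake

def average_loss(actuals, predictions, under_penalty=2.0):
    """Average asymmetric loss over a set of weekly predictions."""
    return sum(asymmetric_loss(a, p, under_penalty)
               for a, p in zip(actuals, predictions)) / len(actuals)
```

With this loss, underestimating weekly sales by two items costs twice as much as overestimating by two, so a learner minimizing it will tend to predict on the high side.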
One interesting method in this Case is the segmentation and parallel further processing of the input table into segments which correspond to a particular item in a particular shop. This means that all following Steps, including the mining Step, are applied to each item-shop pair. This is done automatically, without extra effort by the user; thus, a number of parallel learning tasks are executed from one conceptual model. Another interesting method is the application of a windowing function to the table, which transforms the time-stamped data into a simpler representation: an attribute-value format with a fixed number of attributes, where this number corresponds to the window size. For the data mining Step, a Support Vector Machine (SVM) is used that can optimize an asymmetric loss function, which means here that underestimation of sales figures results in a higher loss than overestimation (see Goals and Results).
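The two preprocessing ideas above, segmentation into item-shop pairs and windowing of each segment's time series, can be sketched in Python. The row layout (item, shop, week, sales) and the function names are assumptions for illustration; they are not MiningMart operator names.

```python
from collections import defaultdict

def segment_by_item_shop(rows):
    """Group weekly sales rows by (item, shop), as the segmentation
    does conceptually. Each row is (item, shop, week, sales); weeks
    are assumed complete so sorting by week orders each segment."""
    segments = defaultdict(list)
    for item, shop, week, sales in sorted(rows, key=lambda r: r[2]):
        segments[(item, shop)].append(sales)
    return segments

def windowing(series, window_size, horizon):
    """Turn one segment's sales series into fixed-width examples:
    each example has window_size past sales as attributes and the
    sales figure horizon weeks after the window as the target."""
    examples = []
    for start in range(len(series) - window_size - horizon + 1):
        attributes = series[start:start + window_size]
        target = series[start + window_size + horizon - 1]
        examples.append((attributes, target))
    return examples
```

Applying `windowing` to every segment returned by `segment_by_item_shop` yields one attribute-value training table per item-shop pair, which is how a separate SVM learning task per pair can be run from a single conceptual model.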
Martin Scholz, University of Dortmund, Computer Science VIII; Email: firstname.lastname@example.org; Web: www-ai.cs.uni-dortmund.de