What does a multilayer perceptron look like in Weka?
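In the Weka GUI, MultilayerPerceptron is drawn as a graph: one input node per attribute, a layer of hidden sigmoid units, and one output node per class value. A minimal sketch of the forward pass such a network computes (the weights below are arbitrary illustrative values, not taken from any trained Weka model):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_w, hidden_b, out_w, out_b):
    """One forward pass: inputs -> hidden sigmoid layer -> sigmoid output."""
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
              for ws, b in zip(hidden_w, hidden_b)]
    return sigmoid(sum(w * h for w, h in zip(out_w, hidden)) + out_b)

# Illustrative network: 3 inputs, 2 hidden units, 1 output (binary class)
hidden_w = [[0.5, -0.2, 0.1], [-0.3, 0.8, 0.4]]
hidden_b = [0.1, -0.1]
out_w = [1.2, -0.7]
out_b = 0.05

score = forward([0.9, 0.2, 0.4], hidden_w, hidden_b, out_w, out_b)
print(round(score, 3))  # a class-membership score between 0 and 1
```

Training in Weka adjusts these weights by backpropagation; the sketch only shows the shape of the computation the diagram represents.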
You must use two different techniques and build models with both: pick a suitable tree building algorithm and also use a multi-layer perceptron.
Describe the different methods you used and the results that you got.
Give a detailed technical description of the techniques and the way the models are represented.
In this section, it is particularly important that the description and the diagrams are your own work. Do not copy (or even paraphrase) from other sources. You must avoid plagiarism.
Describe what hyperparameters may be changed and what effect this has. If you varied the hyperparameters of a model, show how this impacted on the results.
Describe how you split the data for training, validation and testing purposes. Be methodical and record each result. This stage is a little like scientific research – you are carrying out experiments in your search for the best solution.
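The splitting procedure described above can be sketched as follows. The 60/20/20 ratios and the fixed seed are assumptions for illustration; Weka's percentage-split and cross-validation options achieve the same thing inside the tool:

```python
import random

def train_val_test_split(rows, train=0.6, val=0.2, seed=42):
    """Shuffle once, then cut the data into train / validation / test."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)   # fixed seed makes runs repeatable
    n = len(rows)
    n_train = int(n * train)
    n_val = int(n * val)
    return (rows[:n_train],
            rows[n_train:n_train + n_val],
            rows[n_train + n_val:])

data = list(range(100))           # stand-in for the loan records
tr, va, te = train_val_test_split(data)
print(len(tr), len(va), len(te))  # → 60 20 20
```

Shuffling before cutting matters: if the file is sorted by outcome, an unshuffled split would put different class distributions into each partition.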
Once the hyperparameters that best fit this data set had been established and the model built, the test data were used to verify its performance on new data.
For the decision tree the final hyperparameters are:
- No discretisation applied to the numeric attributes
- Minimum of 5 objects per leaf
- Confidence factor 0.33
- Variables used: Employment status, Current debt, Income, Own home, CCJs
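As a rough illustration of what the minimum-objects-per-leaf setting does (a toy splitter on one numeric attribute, not Weka's J48 implementation, and with made-up data): any candidate split that would leave fewer than `min_leaf` instances on either side is rejected, which is how a minimum of 5 keeps the tree from growing tiny leaves that fit noise.

```python
def best_split(values, labels, min_leaf=5):
    """Try every threshold on one numeric attribute; reject splits that
    would leave fewer than min_leaf instances in either branch."""
    def gini(lab):
        n = len(lab)
        if n == 0:
            return 0.0
        p = sum(lab) / n
        return 2 * p * (1 - p)

    rows = sorted(zip(values, labels))
    best = None  # (weighted_gini, threshold)
    for i in range(1, len(rows)):
        left = [l for _, l in rows[:i]]
        right = [l for _, l in rows[i:]]
        if len(left) < min_leaf or len(right) < min_leaf:
            continue  # split rejected: a leaf would be too small
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(rows)
        thr = (rows[i - 1][0] + rows[i][0]) / 2
        if best is None or score < best[0]:
            best = (score, thr)
    return best

# Toy "income" attribute with a loan-approved label
income = [10, 12, 14, 16, 18, 40, 42, 44, 46, 48]
label  = [0,  0,  0,  0,  0,  1,  1,  1,  1,  1]
print(best_split(income, label, min_leaf=5))  # → (0.0, 29.0)
```

The confidence factor plays a different role: in J48 it controls how aggressively the grown tree is pruned back (lower values prune more).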
The percentage of correctly classified instances is 77.8125 %. This result is good, considering that the percentage is close to the one obtained during testing.
An important advantage of the decision tree is that it is a human-readable representation: if someone wants to know why they could not have a loan, it is reasonably easy to read and interpret the model by following the path from root to leaf.
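Explaining a decision amounts to walking exactly one root-to-leaf path and recording the tests along the way. A sketch with a hypothetical two-level tree (the attributes, thresholds, and outcomes below are made up for illustration, not the fitted J48 model):

```python
# Hypothetical tree, stored as nested tuples:
# (attribute, threshold, subtree_if_below, subtree_if_at_or_above) or a leaf label.
tree = ("Income", 20000,
        ("CCJs", 1, "refused", "refused"),
        ("Current debt", 5000, "approved", "refused"))

def explain(tree, applicant):
    """Walk from the root to a leaf, recording each test taken."""
    path = []
    while isinstance(tree, tuple):
        attr, thr, below, at_or_above = tree
        if applicant[attr] < thr:
            path.append(f"{attr} < {thr}")
            tree = below
        else:
            path.append(f"{attr} >= {thr}")
            tree = at_or_above
    return tree, path

decision, path = explain(tree, {"Income": 35000, "Current debt": 9000, "CCJs": 0})
print(decision, "because", " and ".join(path))
# → refused because Income >= 20000 and Current debt >= 5000
```

This is exactly the property the multilayer perceptron lacks: its decision is spread across many real-valued weights, so no comparable path of human-readable tests exists.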