Machine Learning Foundations Course Passed

Yes, today I passed my machine learning certificate course, Machine Learning Foundations: A Case Study Approach, from the University of Washington on Coursera. This course was a great introduction to GraphLab, and the modules across all six weeks were really fun to do. GraphLab allowed me to do regression analysis, classification analysis, sentiment analysis, and general machine learning with easy-to-use APIs. The lecturers, Carlos Guestrin and Emily Fox, were fantastically enthusiastic, which made the course really enjoyable to do. I look forward to rolling this knowledge into my lectures in DBS over the coming months. Hopefully I will also have the time to complete the Specialization and Capstone project on Coursera in the coming months.

Hackathon in Excel / R / Python

Today in the hackathon you can practice and learn some Excel, R, Python, and Fusion Tables to perform data manipulation, data analysis, and graphics.

In R, to set your working directory, use the setwd() function; in Python, use the os.chdir function to achieve the same.
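For example, a minimal Python sketch (the path below is only a placeholder):

```python
import os

os.chdir('/path/to/hackathon/data')  # placeholder path: point this at your own data folder
print(os.getcwd())                   # confirm the working directory has changed
```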

Part A

Hackathon Quiz 23rd October 2016 R

Attempt Some R Questions to practice using R.

Next we can practice reading data sets.

Attached are two files for US baby names in 1900 and 2000.

In the files you'll see that each year is a comma-separated file with three columns: name, sex, and number of births.
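As a sketch, the two files could be read with pandas along these lines; the file names below are assumptions, so substitute whatever the attached files are actually called:

```python
import pandas as pd

cols = ['name', 'sex', 'births']
names_1900 = pd.read_csv('babynames1900.csv', names=cols)  # assumed file name
names_2000 = pd.read_csv('babynames2000.csv', names=cols)  # assumed file name

print(names_1900.head())
print(names_2000['births'].sum())  # total recorded births in 2000
```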

Part B

Hackathon Quiz 23rd October 2016 Baby Names

Attached are two files for US baby names in 1900 and 2000


* Amazon best sellers 2014
* Froud ships 1907

Running an Analysis of Variance

Carrying on from the hypothesis developed in Developing a Research Question, I am trying to ascertain whether there is a statistically significant relationship between the location and the sale price of a house in Ames, Iowa. I have chosen to explore this in Python. The tools used are pandas, numpy, and statsmodels.

Load in the data set and ensure the variables of interest are converted to numbers or categories where necessary. I decided to use ANOVA (Analysis of Variance) to test my hypothesis and Tukey HSD (Tukey Honest Significant Difference) for post-hoc testing.
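A minimal sketch of that step, assuming the Kaggle training file train.csv with its SalePrice and Neighborhood columns and a data frame called houses:

```python
import pandas as pd

houses = pd.read_csv('train.csv')

# response as a number, location as a category
houses['SalePrice'] = pd.to_numeric(houses['SalePrice'], errors='coerce')
houses['Neighborhood'] = houses['Neighborhood'].astype('category')

print(houses['Neighborhood'].nunique())  # how many neighbourhoods are in the data
```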

This tells us that there are 25 neighbourhoods in the dataset.

We can create our ANOVA model with the smf.ols function, using a formula that ties SalePrice (the dependent variable) to Neighborhood (the independent variable) with a tilde. We can then get the model fit using the fit function on the model and use the summary function to get our F-statistic and associated p-value, which we hope will be less than 0.05 so that we can reject our null hypothesis that there is no significant association between neighbourhood and sale price and accept our alternate hypothesis that there is a significant relationship.
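A sketch of that fit with statsmodels, continuing with the houses data frame assumed above:

```python
import statsmodels.formula.api as smf

# SalePrice explained by Neighborhood, treated as a categorical factor
anova_model = smf.ols('SalePrice ~ C(Neighborhood)', data=houses).fit()
print(anova_model.summary())  # the F-statistic and its p-value appear in the summary
```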

We get the output below, which tells us that for 1460 observations with an F-statistic of 71.78 the p-value is 1.56e-225, meaning that the chance of this result arising by chance is vanishingly small (224 zeros after the decimal point followed by 156), so we can safely reject the null hypothesis and accept the alternative hypothesis. Our adjusted R-squared is also .538, so neighbourhood alone explains nearly 54% of the variance in sale price. So our alternative hypothesis, that there IS a significant relationship between sale price and location (neighbourhood), stands.

We know there is a significant relationship between neighbourhood and sale price, but we don't know which neighbourhoods differ; remember we have 25 of them that can each differ from the others. So we must do some post-hoc testing. I will use the Tukey HSD for this investigation.

We can check the reject column below to see whether we should reject the null hypothesis for any pair of neighbourhoods, but with 25 neighbourhoods there are 25*24/2 = 300 pairwise relationships to check, so there is a lot of output. Note we can also output a box plot to help visualise this; see below the data for this output.
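A sketch of the post-hoc test with statsmodels' pairwise_tukeyhsd, again assuming the houses data frame above:

```python
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# compare every pair of neighbourhoods; the reject column flags significant differences
tukey = pairwise_tukeyhsd(endog=houses['SalePrice'],
                          groups=houses['Neighborhood'],
                          alpha=0.05)
print(tukey.summary())
```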

To visualise this we can use the pandas boxplot function, although we probably have to tidy up the labels on the neighborhood (x) axis:
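Something along these lines, rotating the neighbourhood labels so they stay readable:

```python
import matplotlib.pyplot as plt

houses.boxplot(column='SalePrice', by='Neighborhood', rot=90, figsize=(12, 6))
plt.ylabel('SalePrice')
plt.show()
```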

[Figure: box plot of sale price by neighbourhood]

Developing a Research Question

While trying to buy a house in Dublin I realised I had no way of knowing if I was paying a fair price for a house, if I was getting it for a great price, or if I was over-paying. The data scientist in me would like to develop an algorithm, a hypothesis, a research question, so that my decisions are based on sound science and not on gut instinct. So for the last couple of weeks I have been developing algorithms to determine this fair price value. So my research question is:
Is house sales price associated with socio-economic location?

I stumbled upon similar research by Dean De Cock from 2009 determining house prices for Ames, Iowa, so that is the data set that I will use. See the Kaggle page House Prices: Advanced Regression Techniques to get the data.

I would like to study the association between the neighborhood (location) and the house price, to determine whether location influences the sale price and whether the difference in means between different locations is significant.

This dataset has 79 independent variables with sale price being the dependent variable. Initially I am only focusing on one independent variable – the neighborhood, so I can reduce the dataset variables down to two, to simplify the computation my analysis of variance needs to perform.

Now that I have determined I am going to study location, I decide that I might further want to look at bands of house size: not just the house size (square footage) itself, but categories of square footage (less than 1000 square feet, 1000 to 1250, 1250 to 1500, and greater than 1500) to see if there is a variance in the mean among these categories.
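A sketch of how those bands might be built with pandas, assuming the above ground living area sits in the Kaggle column GrLivArea:

```python
import pandas as pd

# band the above ground living area into the four proposed size categories
bins = [0, 1000, 1250, 1500, float('inf')]
labels = ['<1000', '1000-1250', '1250-1500', '>1500']
houses['SizeBand'] = pd.cut(houses['GrLivArea'], bins=bins, labels=labels)

print(houses['SizeBand'].value_counts())  # how many houses fall into each band
```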

I can now take the above ground living space variable (square footage) and add it to my codebook. I will also add any other variables related to square footage for the first floor, second floor, basement, etc.

I then searched Google Scholar, Kaggle, and the DBS library for previous studies in these areas, finding: firstly, a paper from 2001 discussing previous research in Dublin, although it was written in 2001 when a bubble was about to begin and the big property crash of 2008 had not yet been conceived: http://www.sciencedirect.com/science/article/pii/S0264999300000407
Secondly, Dean De Cock's research on house prices in Iowa: http://ww2.amstat.org/publications/jse/v19n3/decock.pdf

Based on my literature review I believe that there might be a statistically significant association between house location (neighborhood) and sales price. Secondly, I believe there will be a statistically significant association between size bands (square footage bands) and sales price. I further believe there might be an interaction effect between location and square footage bands on sales price, which I would like to investigate too.

So I have developed three null hypotheses:
* There is NO association between location and sales price
* There is NO association between bands of square footage and sales price
* There is NO interaction effect in association between location, bands of square footage and sales price.

Running a LASSO Regression Analysis

A lasso regression analysis was conducted to identify a subset of variables from a pool of 79 categorical and quantitative predictor variables that best predicted a quantitative response variable measuring Ames, Iowa house sale price. Categorical predictors included house type, neighbourhood, and zoning type, chosen to improve the interpretability of the selected model with fewer predictors. Quantitative predictor variables included lot area, above ground living area, first floor area, and second floor area. Count scales were used for the number of bathrooms and number of bedrooms. All predictor variables were standardized to have a mean of zero and a standard deviation of one.

The data set was randomly split into a training set that included 70% of the observations (N=1022) and a test set that included 30% of the observations (N=438). The least angle regression algorithm with k=10 fold cross validation was used to estimate the lasso regression model in the training set, and the model was validated using the test set. The change in the cross validation average (mean) squared error at each step was used to identify the best subset of predictor variables.
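A sketch of that procedure with scikit-learn's LassoLarsCV; the predictors, response, and predictor_names objects below are assumptions standing in for the prepared Ames variables:

```python
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LassoLarsCV
from sklearn import preprocessing

# standardize predictors to mean 0 and standard deviation 1
predictors_std = preprocessing.scale(predictors.astype('float64'))

# 70% training / 30% test split
pred_train, pred_test, resp_train, resp_test = train_test_split(
    predictors_std, response, test_size=0.3, random_state=123)

# least angle regression with 10-fold cross-validation
model = LassoLarsCV(cv=10, precompute=False).fit(pred_train, resp_train)

print(dict(zip(predictor_names, model.coef_)))  # retained variables have non-zero coefficients
print(model.score(pred_train, resp_train))      # R-squared on the training set
print(model.score(pred_test, resp_test))        # R-squared on the test set
```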

Figure 1. Change in the validation mean square error at each step:

[Figures: regression coefficient progression and cross-validation mean squared error at each step]

Of the 33 predictor variables, 13 were retained in the selected model. During the estimation process, overall quality, above ground floor space, and garage cars emerged as the three most important variables. These 13 variables accounted for just over 77% of the variance in the training set, and performed even better, at 81%, on the test set.

Wesleyan’s Regression Modeling in Practice – Week 2

Continuing on with the Kaggle data set from House Prices: Advanced Regression Techniques, I plan to make a very simple linear regression model to see if house sale price (the response variable) has a linear relationship with ground floor living area, my primary explanatory variable. Even though there are 80 variables and 1460 observations in this dataset, my hypothesis is that there is a linear relationship between house sale price and the ground floor living area.

The data set, sample, procedure, and methods were detailed in week 1’s post.

There is quite a sizable difference between the mean and median – almost 17000, or just under 10% of our mean.
So we can center the variables as follows:
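A sketch of the centring and the simple regression fit, assuming the SalePrice and GrLivArea columns from the Kaggle file hold the sale price and living area:

```python
import statsmodels.formula.api as smf

# centre the explanatory variable so the intercept reflects an average-sized house
houses['GrLivArea_c'] = houses['GrLivArea'] - houses['GrLivArea'].mean()

# simple linear regression: sale price against the centred living area
reg_model = smf.ols('SalePrice ~ GrLivArea_c', data=houses).fit()
print(reg_model.summary())  # R-squared, F-statistic, and coefficient p-values
```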

[Figures: histogram of sale price and scatter plot of sale price against ground living area]

Looking at the graphs and summary statistics, my hypothesis holds up better than I expected. Remember the null hypothesis (H0) was that there is no linear relationship between house sale price and ground floor living space, and the alternative hypothesis (H1) was that there is a statistically significant relationship. Considering there are 79 explanatory variables and I selected only one to explain the response variable, both my R-squared and adjusted R-squared still come out at .502, so a little over 50% of the variance in sale price is explained with just one explanatory variable.

My p-value of 4.52e-223 is far less than .05, so the model shows a statistically significant linear relationship between sale price and ground floor living area; I can reject my null hypothesis and accept my alternative hypothesis that there is a relationship between house price and ground floor living space. Both the intercept (p-value = 3.61e-05) and the ground floor living space coefficient (p-value = 2e-16) appear to be contributing to the significance, with both p-values at 0.000 to three decimal places and both t-values greater than zero, so it is a positive linear relationship.

From the graph the dataset appears to be skewed on the sale price data; the mean is -1124 from zero (where we'd like it to be), so the data was centered.

I realise I still need to examine the residuals and test for normality (normal or log-normal distribution).

Note the linear regression can also be done in R (with the lm() function):

[Figures: sale price histogram and normal Q-Q plot of sale price]

To improve the performance of my model I now need to look at treating multiple explanatory variables which will be done in next week’s blog post.

Will Mayo Ever Win an All-Ireland? Will Dublin Win 3 in a Row?

On a bulletin board yesterday a Mayo man posed the following questions. Calculate the probabilities of:

  • Mayo winning the All Ireland within the next 65 years
  • Dublin getting three in a row

He will be delighted to know that the probability of Mayo winning an All-Ireland in the next 65 years is almost 100%, no matter what way the data is sliced.

They have won 3 of 131 All-Irelands, so approximately 1 in 44.
They have won 3 of the 15 finals they have appeared in, so 1 in 5 (.2), and they have now been in 8 finals in a row without winning one.
They have been in 5 of the last 15 finals = one in 3 (.33).
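As a rough sketch, if each of those rates is treated as an independent annual win probability p, the chance of at least one win in 65 years is 1 - (1 - p)^65:

```python
# back-of-the-envelope: chance of at least one All-Ireland in 65 years,
# treating each year as an independent trial with win probability p
for p in (3 / 131, 1 / 5, 1 / 3):
    print(f"p = {p:.3f}: chance of a win within 65 years = {1 - (1 - p) ** 65:.4f}")
```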

Which led me on to the Dublin question:
As of today the Dubs getting 3 in a row, without putting much thought into it, should be about 1 in 33: there are 31 counties taking part (Kilkenny doesn't, and they shouldn't be allowed hurl if they don't play football) plus London and New York.

However Dublin only play in Leinster and winning that gets them to the quarter-final – so if they win Leinster then that is 1 in 8.
But they are not guaranteed to win Leinster – they have only won 9 out of the last 10 – so a 90% chance of getting to the last 8.
So 9/10 * 1/8 = 9/80 = 0.1125.
But this seems a bit too low a probability at which to price Dublin to win next year.

From another view, Dublin have won four of the last six = 4/6 = 2/3.

But I s'pose this last calculation ignores the nerves of attempting a three-peat: it is 93 years since Dublin did it. Kerry are the only team to have done it in the last 50 years, and they only did it twice in that time, and it has not been done in the last 30 years. Only 2 teams in the last 30 years have been in a position to do it and both failed, and this included Kerry getting to 6 finals in a row, winning 4 of the 6, and still failing to win 3 in a row.

And now, what odds would I want to place a bet at a bookmakers? Probably 1 in 4 sounds right: if they can beat any two out of Kerry, Mayo, and the Ulster champions, that would win it for them.

Wesleyan’s Machine Learning for Data Analysis Week 2


Week 2's assignment for this Machine Learning for Data Analysis course, delivered by Wesleyan University (Connecticut) in conjunction with Coursera, was to build a random forest to test nonlinear relationships among a series of explanatory variables and a categorical response variable. I continued using Fisher's Iris data set, comprising 3 different types of irises (Setosa, Versicolour, and Virginica) with 4 explanatory variables representing sepal length, sepal width, petal length, and petal width.

Using the Spyder IDE via Anaconda Navigator, I began by importing the necessary Python libraries:
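A sketch of the imports this walkthrough leans on (pandas, scikit-learn, and matplotlib); the snippets below continue from these:

```python
import pandas as pd
import matplotlib.pyplot as plt
import sklearn.metrics
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
```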

Now load our Iris dataset of 150 rows of 5 variables:
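A sketch of the load, using scikit-learn's bundled copy of the Iris data rather than a CSV (an assumption about how the data was sourced):

```python
# load the 150-row Iris data set into a pandas data frame
iris = datasets.load_iris()
iris_df = pd.DataFrame(iris.data, columns=iris.feature_names)
iris_df['species'] = iris.target  # 0 = Setosa, 1 = Versicolour, 2 = Virginica

print(iris_df.shape)  # (150, 5)
```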

Now we begin our modelling and prediction. We define our predictors and target as follows:
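Continuing the sketch, the four measurements are the predictors and the species is the target:

```python
predictors = iris_df[iris.feature_names]  # sepal/petal lengths and widths
targets = iris_df['species']              # categorical response
```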

Next we split our data into our training and test datasets with a 60%, 40% split respectively:
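A sketch with scikit-learn's train_test_split (the random_state value is arbitrary):

```python
# 60% training, 40% test
pred_train, pred_test, tar_train, tar_test = train_test_split(
    predictors, targets, test_size=0.4, random_state=123)

print(pred_train.shape, pred_test.shape)  # (90, 4) and (60, 4)
```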

Training data set of length 90, and test data set of length 60.

Now it is time to build our classification model and we use the random forest classifier class to do this.
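A sketch of the classifier, using 25 trees (an assumed forest size):

```python
# build a random forest with 25 trees and fit it to the training data
classifier = RandomForestClassifier(n_estimators=25)
classifier = classifier.fit(pred_train, tar_train)
```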

Finally we make our predictions on our test data set and verify the accuracy.
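Along these lines:

```python
predictions = classifier.predict(pred_test)

print(sklearn.metrics.confusion_matrix(tar_test, predictions))  # per-class breakdown
print(sklearn.metrics.accuracy_score(tar_test, predictions))    # overall accuracy
```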

Next we figure out the relative importance of each of the attributes by fitting an Extra Trees model to the data:
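A sketch of that step:

```python
# fit an Extra Trees model to the data to score each attribute's importance
extra_trees = ExtraTreesClassifier()
extra_trees.fit(pred_train, tar_train)

# relative importance of sepal length, sepal width, petal length, and petal width
print(dict(zip(iris.feature_names, extra_trees.feature_importances_)))
```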

Finally displaying the performance of the random forest was achieved with the following:
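A sketch of that, growing forests of increasing size and plotting the test accuracy of each (my assumption of what the original plot showed):

```python
# accuracy on the test set as the number of trees grows from 1 to 25
trees = range(1, 26)
accuracy = []
for n in trees:
    clf = RandomForestClassifier(n_estimators=n)
    clf.fit(pred_train, tar_train)
    accuracy.append(sklearn.metrics.accuracy_score(tar_test, clf.predict(pred_test)))

plt.plot(trees, accuracy)
plt.xlabel('Number of trees')
plt.ylabel('Test accuracy')
plt.show()
```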

And the resulting plot was output:

[Figure: random forest performance on the Iris test set as the number of trees grows]

Random forest analysis was performed to evaluate the importance of a series of explanatory variables in predicting a binary or categorical response variable. The following explanatory variables were included as possible contributors to a random forest evaluating the type of iris: petal width, petal length, sepal width, and sepal length.

The explanatory variables with the highest relative importance scores were petal width (42.8%), petal length (40.9%), sepal length (9.6%), and finally sepal width (6.7%). The accuracy of the random forest was 95%, with the subsequent growing of multiple trees rather than a single tree adding little to the overall accuracy of the model, suggesting that interpretation of a single decision tree may be appropriate.

So our model seems to be behaving very well at categorising the iris flowers based on the variables we have available to us.