# Crime and Communities Linear Regression Exercise

This dataset combines socio-economic data from the 1990 US Census, law enforcement data from the 1990 US LEMAS survey, and crime data from the 1995 FBI UCR.

In this exercise, we compare ridge regression, LASSO regression, and elastic net regularization. First we load the data.
```{r}
library(RCurl)
library(glmnet)

## get column names
# specify the URL for the column names and descriptions
names.file.url <- 'http://archive.ics.uci.edu/ml/machine-learning-databases/communities/communities.names'
names.file.lines <- readLines(names.file.url)
# only keep the attribute names, and discard the rest of the lines
names.dirtylines <- grep("@attribute ", names.file.lines, value = TRUE)
# split on spaces and pick the second word
names <- sapply(strsplit(names.dirtylines, " "), "[[", 2)
# drop the names of the first 5 (non-predictive) columns
names <- names[6:length(names)]

## download data and join in names
# specify the URL for the Crime and Communities data CSV
urlfile <- 'http://archive.ics.uci.edu/ml/machine-learning-databases/communities/communities.data'
# download the file
downloaded <- getURL(urlfile, ssl.verifypeer=FALSE)
# treat the text data as a stream so we can read from it
connection <- textConnection(downloaded)
# parse the downloaded data as CSV, treating '?' as a missing value
dataset <- read.csv(connection, header=FALSE, na.strings=c("?"))
# drop the non-predictive columns
dataset <- dataset[ ,6:ncol(dataset)]
# drop rows containing missing values
dataset <- dataset[rowSums(is.na(dataset)) == 0,]
# fix the column names
colnames(dataset) <- names
# preview the first few rows
head(dataset)
```
We now split this dataset into train and test sets (an alternative would be to use cross-validation).

```{r}
## perform train-test split
# the training set gets 75% of the rows
smp_size <- floor(0.75 * nrow(dataset))

## set the seed to make the partition reproducible
set.seed(123)
train_ind <- sample(seq_len(nrow(dataset)), size = smp_size)

df.train <- dataset[train_ind, ]
df.test <- dataset[-train_ind, ]
```
#### Q1. Fit a linear regression model on `df.train`. The goal is to predict `ViolentCrimesPerPop` from the other columns. What is the r-squared on the train data? What about the test data?
We are going to use the `glmnet` package for this exercise (as it is helpful for the next few questions as well). `glmnet` fits an elastic net model by default. Recall that the loss function for elastic net is given by:

$$\underset{\beta}{\operatorname{argmin}} \; {\cal L}(X) + (1 - \alpha) \cdot \lambda \sum_{i=1}^M \beta_i^2 + \alpha \cdot \lambda \sum_{i=1}^M |\beta_i|.$$

Thus, for linear regression, we need to set $\lambda = 0$.
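
For reference, the two special cases used below drop straight out of this formula: $\alpha = 0$ leaves only the squared (ridge) penalty, and $\alpha = 1$ only the absolute-value (lasso) penalty:

$$\alpha = 0: \;\; {\cal L}(X) + \lambda \sum_{i=1}^M \beta_i^2 \quad \text{(ridge)}, \qquad \alpha = 1: \;\; {\cal L}(X) + \lambda \sum_{i=1}^M |\beta_i| \quad \text{(lasso)}.$$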
Note that this package can fit other generalized linear models as well, including logistic regression for classification, Poisson regression for modeling count data, and Cox regression for survival analysis/churn prediction. To fit these other models, change the `family` parameter in the `glmnet` function call.

For details and examples on the package, visit https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
```{r}
labelColIdx <- match("ViolentCrimesPerPop", names)

# lambda = 0 makes the penalty vanish, so this is plain least squares
fit.linreg <- glmnet(as.matrix(df.train[,-labelColIdx]),
                     as.matrix(df.train$ViolentCrimesPerPop),
                     alpha=1, lambda=0, family="gaussian",
                     standardize=FALSE)

fitted.linreg.train <- predict(fit.linreg, newx = data.matrix(df.train[,-labelColIdx]), s=0)
fitted.linreg.test <- predict(fit.linreg, newx = data.matrix(df.test[,-labelColIdx]), s=0)

cat("\nTrain correlation coefficient: ", cor(as.matrix(df.train$ViolentCrimesPerPop), fitted.linreg.train)[1])
cat("\nTest correlation coefficient: ", cor(as.matrix(df.test$ViolentCrimesPerPop), fitted.linreg.test)[1])
```

```
Train correlation coefficient: 0.9281492
Test correlation coefficient: 0.6168479
```
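
The question asks for r-squared, while the code above reports the correlation between predictions and targets. For a least-squares fit with an intercept, the squared correlation on the training data equals $R^2$; out of sample, $R^2$ is usually computed as $1 - \mathrm{SSE}/\mathrm{SST}$ instead. A minimal sketch (the `r_squared` helper is ours, not part of `glmnet`):

```{r}
# out-of-sample R^2 as 1 - SSE/SST (hypothetical helper, not from glmnet)
r_squared <- function(y, yhat) {
  1 - sum((y - yhat)^2) / sum((y - mean(y))^2)
}

cat("\nTrain R-squared: ", r_squared(df.train$ViolentCrimesPerPop, fitted.linreg.train))
cat("\nTest R-squared: ", r_squared(df.test$ViolentCrimesPerPop, fitted.linreg.test))
```

Either way, the large gap between train and test performance points to overfitting, which motivates the regularized fits below.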
#### Q2. Also fit each of a ridge, lasso, and elastic net regression on the same data. Use the function `cv.glmnet` to cross-validate and find the best values of $\lambda$. For elastic net, try a few values of $\alpha$ as well.

##### Hint: `cv.glmnet` does not optimize over $\alpha$. You should use the command `set.seed(k)` for some fixed `k` before each value of $\alpha$ is run. You can do the cross-validation for $\alpha$ manually for the purpose of this exercise. `cv.glmnet` automatically picks appropriate values of $\lambda$ to try.
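
An alternative to reseeding, sketched below, is to fix the fold assignment with `cv.glmnet`'s `foldid` argument, so that every value of $\alpha$ is scored on identical folds (the values of `a` here are illustrative):

```{r}
# assign each training row to one of 10 folds, once, up front
set.seed(123)
foldid <- sample(rep(1:10, length.out = nrow(df.train)))

# every alpha now sees exactly the same folds
for (a in c(0, 0.5, 1)) {
  fit <- cv.glmnet(as.matrix(df.train[,-labelColIdx]),
                   as.matrix(df.train$ViolentCrimesPerPop),
                   foldid=foldid, alpha=a,
                   type.measure="mse", standardize=FALSE)
  cat("\nalpha =", a, " min CV MSE =", min(fit$cvm))
}
```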

#### Q3. Which model performs the best?

#### Ridge Regression
```{r}
fit.ridge <- cv.glmnet(as.matrix(df.train[,-labelColIdx]),
                       as.matrix(df.train$ViolentCrimesPerPop),
                       type.measure="mse", alpha=0, family="gaussian",
                       standardize=FALSE)

fitted.ridge.train <- predict(fit.ridge,
                              newx = data.matrix(df.train[,-labelColIdx]),
                              s="lambda.min")
fitted.ridge.test <- predict(fit.ridge,
                             newx = data.matrix(df.test[,-labelColIdx]),
                             s="lambda.min")

cat("\nTrain correlation coefficient: ", cor(as.matrix(df.train$ViolentCrimesPerPop), fitted.ridge.train)[1])
cat("\nTest correlation coefficient: ", cor(as.matrix(df.test$ViolentCrimesPerPop), fitted.ridge.test)[1])
```

```
Train correlation coefficient: 0.8463421
Test correlation coefficient: 0.7725274
```
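
As a quick diagnostic (not part of the original exercise), `cv.glmnet` objects have a `plot` method that draws the cross-validation error against $\log(\lambda)$, with vertical lines at `lambda.min` and `lambda.1se`:

```{r}
plot(fit.ridge)
```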
#### LASSO
```{r}
fit.lasso <- cv.glmnet(as.matrix(df.train[,-labelColIdx]),
                       as.matrix(df.train$ViolentCrimesPerPop),
                       type.measure="mse", alpha=1, family="gaussian",
                       standardize=FALSE)

fitted.lasso.train <- predict(fit.lasso,
                              newx = data.matrix(df.train[,-labelColIdx]),
                              s="lambda.min")
fitted.lasso.test <- predict(fit.lasso,
                             newx = data.matrix(df.test[,-labelColIdx]),
                             s="lambda.min")

cat("\nTrain correlation coefficient: ", cor(as.matrix(df.train$ViolentCrimesPerPop), fitted.lasso.train)[1])
cat("\nTest correlation coefficient: ", cor(as.matrix(df.test$ViolentCrimesPerPop), fitted.lasso.test)[1])
```

```
Train correlation coefficient: 0.8416111
Test correlation coefficient: 0.7647234
```
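
It is worth recording which penalty the cross-validation selected; `lambda.min` (the error-minimizing value) and `lambda.1se` (the largest $\lambda$ within one standard error of the minimum) are both fields of the fitted `cv.glmnet` object:

```{r}
cat("\nlambda.min =", fit.lasso$lambda.min, " lambda.1se =", fit.lasso$lambda.1se)
```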
#### Elastic Net

Because we are wrapping a grid search over $\alpha$ around `cv.glmnet`'s internal cross-validation over $\lambda$, we split the training data further, holding out a validation set on which to compare the candidate values of $\alpha$.
```{r}
set.seed(1)
smp_size <- floor(0.75 * nrow(df.train))
train_ind <- sample(seq_len(nrow(df.train)), size = smp_size)

df.alphatrain <- df.train[train_ind, ]
df.alphatest <- df.train[-train_ind, ]

# log10(1:10) yields 10 values of alpha in [0, 1], more closely spaced near 1
for (a in log10(seq(1, 10, 1))) {
  # fix the seed so every alpha uses the same CV folds
  set.seed(123)

  cat("\nalpha =", a)
  fit.elastic <- cv.glmnet(as.matrix(df.alphatrain[,-labelColIdx]),
                           as.matrix(df.alphatrain$ViolentCrimesPerPop),
                           type.measure="mse", alpha=a, family="gaussian",
                           standardize=FALSE)

  fitted.elastic.train <- predict(fit.elastic,
                                  newx = data.matrix(df.alphatrain[,-labelColIdx]),
                                  s="lambda.min")
  fitted.elastic.test <- predict(fit.elastic,
                                 newx = data.matrix(df.alphatest[,-labelColIdx]),
                                 s="lambda.min")

  cat("\nTrain correlation coefficient: ", cor(as.matrix(df.alphatrain$ViolentCrimesPerPop),
                                               fitted.elastic.train)[1])
  cat("\nTest correlation coefficient: ", cor(as.matrix(df.alphatest$ViolentCrimesPerPop),
                                              fitted.elastic.test)[1])
}
```
```
alpha = 0
Train correlation coefficient: 0.8482096
Test correlation coefficient: 0.8257798
alpha = 0.30103
Train correlation coefficient: 0.836436
Test correlation coefficient: 0.8146347
alpha = 0.4771213
Train correlation coefficient: 0.837226
Test correlation coefficient: 0.8111497
alpha = 0.60206
Train correlation coefficient: 0.8380694
Test correlation coefficient: 0.8097428
alpha = 0.69897
Train correlation coefficient: 0.8385455
Test correlation coefficient: 0.8087591
alpha = 0.7781513
Train correlation coefficient: 0.837388
Test correlation coefficient: 0.8064273
alpha = 0.845098
Train correlation coefficient: 0.837405
Test correlation coefficient: 0.8061786
alpha = 0.90309
Train correlation coefficient: 0.8360578
Test correlation coefficient: 0.8037225
alpha = 0.9542425
Train correlation coefficient: 0.8344337
Test correlation coefficient: 0.800979
alpha = 1
Train correlation coefficient: 0.8343754
Test correlation coefficient: 0.8017738
```
Looking at the validation correlation coefficients, the elastic net family performs best, which is unsurprising: it contains ridge ($\alpha = 0$) and lasso ($\alpha = 1$) as special cases. Strictly speaking, the single best value in this grid is $\alpha = 0$ (pure ridge); the best value that mixes both penalties is $\alpha = 0.30103$, which we adopt and refit on the full training set.
```{r}
optalpha <- 0.30103
fit.elastic <- cv.glmnet(as.matrix(df.train[,-labelColIdx]),
                         as.matrix(df.train$ViolentCrimesPerPop),
                         type.measure="mse", alpha=optalpha,
                         family="gaussian", standardize=FALSE)
```

#### Q4. Make the following scatterplot:

- Each point corresponds to one predictor in the data
- The x-value is the coefficient of that predictor under OLS regression
- The y-value is the coefficient of that predictor using ridge regularization

#### Q5. Do the same for OLS vs LASSO, and OLS vs elastic net. What do you notice about the magnitudes of the parameters and the numbers of zeros?
```{r}
# use the cross-validated lambda.min for each regularized model
plot(coef(fit.linreg, s=0)[-1], coef(fit.ridge, s="lambda.min")[-1],
     main="OLS vs. Ridge parameters", xlab="OLS", ylab="Ridge", pch=19)
plot(coef(fit.linreg, s=0)[-1], coef(fit.lasso, s="lambda.min")[-1],
     main="OLS vs. LASSO parameters", xlab="OLS", ylab="LASSO", pch=19)
plot(coef(fit.linreg, s=0)[-1], coef(fit.elastic, s="lambda.min")[-1],
     main="OLS vs. Elastic Net parameters", xlab="OLS", ylab="Elastic Net", pch=19)
```
The ridge parameters are much smaller in magnitude than their OLS counterparts. For LASSO, many parameters are exactly zero, yielding a sparse solution; the same holds, to a lesser extent, for the elastic net, as the count below confirms.
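
A minimal sketch counting exact zeros among the fitted coefficients of each model (intercepts dropped; `lambda.min` assumed for the regularized fits):

```{r}
# count coefficients that are exactly zero, excluding the intercept
count_zeros <- function(cf) sum(cf[-1] == 0)

cat("\nOLS zeros:", count_zeros(coef(fit.linreg, s=0)))
cat("\nRidge zeros:", count_zeros(coef(fit.ridge, s="lambda.min")))
cat("\nLASSO zeros:", count_zeros(coef(fit.lasso, s="lambda.min")))
cat("\nElastic net zeros:", count_zeros(coef(fit.elastic, s="lambda.min")))
```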

#### Q6. Make the following scatterplot:

- Each point corresponds to one predictor in the data
- The x-value is the coefficient of that predictor under OLS regression
- The y-value is the correlation coefficient of that predictor with the target

#### Q7. Based on the above plot, what can you say about the interpretation of the regression coefficients? Does the sign of a coefficient imply the direction of the relationship between that variable and the target? What are some potential issues in interpreting the magnitudes?
```{r}
# preallocate a vector to hold each predictor's correlation with the target
corrs <- array(0.0, length(coef(fit.linreg, s=0)[-1]))

cnt <- 0
for (feat in rownames(coef(fit.linreg, s=0))[-1]) {
  cnt <- cnt + 1
  corrs[cnt] <- cor(as.matrix(df.train$ViolentCrimesPerPop), df.train[,feat])[1]
}

plot(coef(fit.linreg, s=0)[-1], corrs, main="Regression coefficients versus correlation coefficients",
     xlab="Linear regression coefficients", ylab="Correlation coefficients", pch=19)
```
We should be very careful in interpreting regression coefficients directly. While a predictor $x_i$ may be positively (negatively) correlated with the target, $\beta_i$ may be negative (positive) in the presence of other predictors that are correlated with $x_i$. Similarly, a small $\beta_i$ does not mean that $x_i$ is uninformative; it may be that correlated variables have 'captured' the information, leaving $x_i$ little to contribute.

Additionally, we have to consider issues of scale: whether we measure distance in meters or feet, the information content does not change, but the regression coefficient rescales accordingly. Thus, before comparing coefficient magnitudes, we should standardize the predictors, for example to zero mean and unit standard deviation, prior to fitting the model.
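
`glmnet` can handle this internally: with `standardize=TRUE` (its default, which this exercise disables throughout) the penalty is applied to standardized predictors while the returned coefficients stay on the original scale. Alternatively, one can standardize explicitly, as in this sketch:

```{r}
# explicit standardization of the predictors (zero mean, unit variance)
X.std <- scale(as.matrix(df.train[,-labelColIdx]))

fit.ridge.std <- cv.glmnet(X.std,
                           as.matrix(df.train$ViolentCrimesPerPop),
                           type.measure="mse", alpha=0, family="gaussian",
                           standardize=FALSE)
```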