%load_ext autoreload
%autoreload 2
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
np.seterr(all='ignore')
from sklearn.datasets import make_blobs
Here is the link to my source code (source.py):
https://github.com/Sallyliubj/Sallyliubj.github.io/blob/main/posts/LogisticRegression%20/source.py
Objective:
This blog consists of four parts:
The implementation and testing of the regular Logistic Regression model.
The implementation and testing of the Stochastic Logistic Regression model.
The addition of momentum to my Stochastic Logistic Regression model.
Experiments with the three methods, including three cases showing how the learning rate, batch size, and momentum affect the convergence of Logistic Regression.
Generate a random data set:
First, I generate a set of data, shown below.
np.random.seed(123)
p_features = 3
X, y = make_blobs(n_samples = 200, n_features = p_features - 1, centers = [(-1, -1), (1, 1)])

fig = plt.scatter(X[:,0], X[:,1], c = y)
xlab = plt.xlabel("Feature 1")
ylab = plt.ylabel("Feature 2")
Part 1. Logistic Regression Algorithm
I implement my simple logistic regression model to fit the data:
from source import LogisticRegression
LR = LogisticRegression()
LR.fit(X, y, alpha = 0.05, max_epochs = 1000)
#LR.w
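For readers who want a feel for what such a fit method does, here is a minimal sketch of full-batch gradient descent on the logistic loss. This is an illustration only, not the code in source.py; the helper names (sigmoid, logistic_loss, fit_sketch) and the padding convention are my own assumptions.

import numpy as np

def sigmoid(z):
    # logistic function, maps real-valued scores into (0, 1)
    return 1 / (1 + np.exp(-z))

def logistic_loss(X_, y, w):
    # empirical risk: average logistic (cross-entropy) loss over the data
    p = sigmoid(X_ @ w)
    return np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p))

def fit_sketch(X, y, alpha = 0.05, max_epochs = 1000):
    # append a constant column so the bias term is folded into w
    X_ = np.append(X, np.ones((X.shape[0], 1)), axis = 1)
    w = np.random.rand(X_.shape[1])
    loss_history = []
    for _ in range(max_epochs):
        # gradient of the average logistic loss with respect to w
        grad = X_.T @ (sigmoid(X_ @ w) - y) / X_.shape[0]
        w -= alpha * grad                    # full-batch gradient step
        loss_history.append(logistic_loss(X_, y, w))
    return w, loss_history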
Testing the simple Logistic Regression method:
I visualize the prediction by drawing a line to separate the data. I also trace the history of logistic loss (empirical risk) to see if it converges at the end.
X_ = LR.pad(X)
loss = LR.loss(X_, y)

fig, axarr = plt.subplots(1, 2)

axarr[0].scatter(X_[:,0], X_[:,1], c = y)
axarr[0].set(xlabel = "Feature 1", ylabel = "Feature 2", title = f"Loss = {loss}")

f1 = np.linspace(-3, 3, 101)
p = axarr[0].plot(f1, (LR.w[2] - f1*LR.w[0])/LR.w[1], color = "black")

axarr[1].plot(LR.loss_history)
axarr[1].set(xlabel = "Iteration number", ylabel = "Loss")
plt.tight_layout()
print(LR.score_history[-5:])
#print(LR.loss_history[-5:])
#print(LR.w)
[0.895, 0.895, 0.895, 0.895, 0.895]
Based on the loss-history plot shown above, gradient descent appears to converge.
Note:
Since the data are not linearly separable, we cannot draw a perfect separating line, and the score (accuracy) will never reach 1.0. In this set of data, the score reaches 0.895 at the end.
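As a side note, the accuracy recorded in score_history is just the fraction of points whose predicted label matches the true label. Here is a minimal sketch, assuming the common convention of predicting label 1 when the predicted probability exceeds 0.5 (the exact convention in source.py may differ):

import numpy as np

def accuracy_sketch(X_, y, w):
    # predicted probability of class 1 under the logistic model
    p = 1 / (1 + np.exp(-X_ @ w))
    # predict 1 when the probability exceeds 0.5, else 0
    y_hat = (p > 0.5).astype(int)
    # fraction of correct predictions; 0.895 corresponds to 179 of the 200 points
    return np.mean(y_hat == y)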
Part 2. Stochastic Gradient Descent Method
I implement the Stochastic Gradient Descent Method and fit the data:
LR2 = LogisticRegression()
LR2.fit_stochastic(X, y, alpha = 0.01, max_epochs = 500, batch_size = 15)
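For intuition, here is a minimal sketch of the kind of minibatch loop a stochastic fit might use: each epoch shuffles the data, splits it into batches of size batch_size, and takes one gradient step per batch, recording the full-data loss once per epoch. This is an illustration under my own naming and padding assumptions, not the code in source.py.

import numpy as np

def fit_stochastic_sketch(X, y, alpha = 0.01, max_epochs = 500, batch_size = 15):
    n = X.shape[0]
    X_ = np.append(X, np.ones((n, 1)), axis = 1)   # fold the bias term into w
    w = np.random.rand(X_.shape[1])
    loss_history = []
    for _ in range(max_epochs):
        order = np.random.permutation(n)           # reshuffle once per epoch
        for batch in np.array_split(order, n // batch_size):
            xb, yb = X_[batch], y[batch]
            # gradient of the logistic loss on this minibatch only
            grad = xb.T @ (1 / (1 + np.exp(-xb @ w)) - yb) / len(batch)
            w -= alpha * grad
        # record the loss on the full data set after each epoch
        p = 1 / (1 + np.exp(-X_ @ w))
        loss_history.append(np.mean(-y * np.log(p) - (1 - y) * np.log(1 - p)))
    return w, loss_history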
Testing the fit_stochastic method for Logistic Regression:
Similar to what I did above, I visualize the corresponding line, and track the value of the loss achieved:
X_ = LR.pad(X)
loss = LR2.loss(X_, y)

fig, axarr = plt.subplots(1, 2)

axarr[0].scatter(X_[:,0], X_[:,1], c = y)
axarr[0].set(xlabel = "Feature 1", ylabel = "Feature 2", title = f"Loss = {loss}")

f1 = np.linspace(-3, 3, 101)
p = axarr[0].plot(f1, (LR2.w[2] - f1*LR2.w[0])/LR2.w[1], color = "black")

axarr[1].plot(LR2.loss_history)
axarr[1].set(xlabel = "Iteration number", ylabel = "Loss")
plt.tight_layout()
print(LR2.loss_history[-5:])
print(LR.w)
[0.21894443974814265, 0.2189453010824046, 0.21894371455305317, 0.21894288134945547, 0.21894209558615926]
[1.55870232 1.94248526 0.03479441]
Based on the loss-history plot shown above, stochastic gradient descent also appears to converge.
Part 3. Momentum
I implement the momentum method for stochastic gradient descent. If the user sets momentum = True, the momentum coefficient is set to 0.8; otherwise it is set to 0, which recovers regular stochastic gradient descent. A sketch of the update rule is shown below.
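Concretely, the momentum update keeps a running "velocity" that blends the previous update direction with the current gradient, so consecutive steps that point the same way reinforce each other. Here is a minimal sketch of one such update, with beta = 0.8 when momentum is on and beta = 0 recovering plain stochastic gradient descent (the names and exact formulation are illustrative, not the code in source.py):

import numpy as np

def momentum_step(w, velocity, grad, alpha, beta):
    # blend the previous direction with the new gradient, then step
    velocity = beta * velocity + alpha * grad
    return w - velocity, velocity

# illustrative usage with made-up values
w = np.zeros(3)
velocity = np.zeros(3)
grad = np.array([0.2, -0.1, 0.05])
w, velocity = momentum_step(w, velocity, grad, alpha = 0.1, beta = 0.8)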
Here I generate a set of data with 10 feature dimensions.
np.random.seed(123)
p_features = 10
X, y = make_blobs(n_samples = 200, n_features = p_features - 1, centers = [(-1, -1), (1, 1)])
Comparing the three fit methods
Here is a plot showing the evolution of the loss function for the three algorithms:
#Fit and plot the graph for stochastic gradient with momentum
LR3 = LogisticRegression()

LR3.fit_stochastic(X, y,
                   max_epochs = 1000,
                   momentum = True,
                   batch_size = 10,
                   alpha = .1)

num_steps = len(LR3.loss_history)
plt.plot(np.arange(num_steps) + 1, LR3.loss_history, label = "stochastic gradient (momentum)")

#Fit and plot the graph for stochastic gradient without momentum
LR4 = LogisticRegression()

LR4.fit_stochastic(X, y,
                   max_epochs = 1000,
                   momentum = False,
                   batch_size = 10,
                   alpha = .1)

num_steps = len(LR4.loss_history)
plt.plot(np.arange(num_steps) + 1, LR4.loss_history, label = "stochastic gradient")

#Fit and plot the graph for standard gradient
LR5 = LogisticRegression()
LR5.fit(X, y, alpha = .05, max_epochs = 2000)

num_steps = len(LR5.loss_history)
plt.plot(np.arange(num_steps) + 1, LR5.loss_history, label = "gradient")

plt.xlabel("iteration")
plt.ylabel("loss")
plt.loglog()
legend = plt.legend()
Based on the graph, stochastic gradient descent with and without momentum tends to converge faster than standard gradient descent, but these stochastic algorithms can "bounce around" near a good solution. Standard gradient descent may need more epochs to find a good solution, but it quickly "settles down" once it does.
Part 4. Perform Experiments
After implementing and testing my Logistic Regression class, I now perform experiments to demonstrate the following phenomena:
Case 1:
A case in which gradient descent does not converge to a minimizer because the learning rate (alpha) is too large:
#Fit and plot the graph for stochastic gradient with momentum
LR3 = LogisticRegression()

LR3.fit_stochastic(X, y,
                   max_epochs = 1000,
                   momentum = True,
                   batch_size = 10,
                   alpha = .9)

num_steps = len(LR3.loss_history)
plt.plot(np.arange(num_steps) + 1, LR3.loss_history, label = "stochastic gradient (momentum)")

#Fit and plot the graph for stochastic gradient without momentum
LR4 = LogisticRegression()

LR4.fit_stochastic(X, y,
                   max_epochs = 1000,
                   momentum = False,
                   batch_size = 10,
                   alpha = .9)

num_steps = len(LR4.loss_history)
plt.plot(np.arange(num_steps) + 1, LR4.loss_history, label = "stochastic gradient")

#Fit and plot the graph for gradient
LR5 = LogisticRegression()
LR5.fit(X, y, alpha = .9, max_epochs = 1000)

num_steps = len(LR5.loss_history)
plt.plot(np.arange(num_steps) + 1, LR5.loss_history, label = "gradient")

plt.xlabel("iteration")
plt.ylabel("loss")
plt.loglog()
legend = plt.legend()
From the graph above, when alpha is set to 0.9, none of the three algorithms converges to a minimizer: the step size is so large that each update overshoots the minimum, so the loss oscillates instead of settling down.
Case 2:
A case in which the choice of batch size influences how quickly the algorithm converges.
#Fit and plot the graph for stochastic gradient with batch_size = 10
LR3 = LogisticRegression()

LR3.fit_stochastic(X, y,
                   max_epochs = 500,
                   momentum = False,
                   batch_size = 10,
                   alpha = .01)

num_steps = len(LR3.loss_history)
plt.plot(np.arange(num_steps) + 1, LR3.loss_history, label = "stochastic gradient batch_size=10")

#Fit and plot the graph for stochastic gradient with batch_size = 30
LR4 = LogisticRegression()

LR4.fit_stochastic(X, y,
                   max_epochs = 500,
                   momentum = False,
                   batch_size = 30,
                   alpha = .01)

num_steps = len(LR4.loss_history)
plt.plot(np.arange(num_steps) + 1, LR4.loss_history, label = "stochastic gradient batch_size=30")

#Fit and plot the graph for stochastic gradient with batch_size = 90
LR5 = LogisticRegression()

LR5.fit_stochastic(X, y,
                   max_epochs = 500,
                   momentum = False,
                   batch_size = 90,
                   alpha = .01)

num_steps = len(LR5.loss_history)
plt.plot(np.arange(num_steps) + 1, LR5.loss_history, label = "stochastic gradient batch_size=90")

plt.xlabel("iteration")
plt.ylabel("loss")
plt.loglog()
legend = plt.legend()
Comparing the three lines with different batch sizes, the green line with batch_size = 90 seems to converge faster than the other two.
However, after repeatedly experimenting with different batch sizes, I noticed that this is not always the case: sometimes the run with a smaller batch size converges more quickly.
After trying different batch sizes, I can conclude that, on average, larger batch sizes tend to converge faster.
Case 3:
A case in which the use of momentum significantly speeds up convergence.
#Fit and plot the graph for stochastic gradient with momentum
LR3 = LogisticRegression()

LR3.fit_stochastic(X, y,
                   max_epochs = 500,
                   momentum = True,
                   batch_size = 40,
                   alpha = .05)

num_steps = len(LR3.loss_history)
plt.plot(np.arange(num_steps) + 1, LR3.loss_history, label = "stochastic gradient (momentum)")

#Fit and plot the graph for stochastic gradient without momentum
LR4 = LogisticRegression()

LR4.fit_stochastic(X, y,
                   max_epochs = 500,
                   momentum = False,
                   batch_size = 40,
                   alpha = .05)

num_steps = len(LR4.loss_history)
plt.plot(np.arange(num_steps) + 1, LR4.loss_history, label = "stochastic gradient")

plt.xlabel("iteration")
plt.ylabel("loss")
plt.loglog()
legend = plt.legend()
Comparing the two lines in the graph, stochastic gradient descent with momentum (blue line) tends to converge faster than the version without momentum.
Conclusion:
Based on these experiments, I find that changing the learning rate, batch size, and momentum significantly affects the performance of my Logistic Regression model.
Therefore, it is important to choose appropriate values of alpha, batch size, and momentum for the model to converge and make accurate predictions.