Logistic Regression with Keras

Logistic regression is a machine learning algorithm for binary classification: the output is one of two choices, such as true/false, 0/1, spam/not spam, or male/female. Despite the word "regression" in its name, it is a classification algorithm, not a regression algorithm.

To derive such an output from the underlying probabilities, it applies the sigmoid function, which squashes any real-valued score into the range 0 to 1. In other words, logistic regression is a linear model with the sigmoid function applied to its output.
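For concreteness, the sigmoid maps a linear score z = w·x + b to 1 / (1 + e^(-z)). Here is a minimal NumPy sketch (the function and the sample scores are illustrative, not part of the original write-up):

import numpy as np

def sigmoid(z):
    # Squash any real-valued score into the (0, 1) range
    return 1.0 / (1.0 + np.exp(-z))

# A score of 0 maps to 0.5; large positive scores approach 1,
# large negative scores approach 0
print(sigmoid(0.0), sigmoid(4.0), sigmoid(-4.0))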

For demonstration purposes, we will use the Banknote Authentication dataset from the UCI Machine Learning Repository.

The first four columns in the dataset are the X input features:

  1. Variance of the wavelet-transformed image (continuous);
  2. Skewness of the wavelet-transformed image (continuous);
  3. Curtosis of the wavelet-transformed image (continuous);
  4. Entropy of the image (continuous).

The last column is Y, which indicates whether a given note is authentic (0) or forged (1).
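As a rough sketch of loading the data (assuming a local copy of the UCI file data_banknote_authentication.txt, which is comma-separated with no header row; the file name and path are assumptions):

import pandas as pd

# Read the banknote data; the file path is assumed
data = pd.read_csv("data_banknote_authentication.txt", header=None)

X = data.iloc[:, 0:4].values   # variance, skewness, curtosis, entropy
Y = data.iloc[:, 4].values     # 0 = authentic, 1 = forged

print(X.shape, Y.shape)        # overall size of the inputs and labels
print(data[4].value_counts())  # how many notes fall in each class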

In order to implement this in Keras, we follow the steps below:

  1. Create X input and Y output;
  2. Create a Sequential model;
  3. Add the first hidden layer, specifying the number of neurons, the number of input variables, and the activation function;
  4. Add the output layer with the sigmoid function;
  5. Compile the model; and
  6. Train the model.

Here is the code. You can play with different hyperparameters to increase accuracy.
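A minimal sketch along those lines is shown below. The hidden-layer size, optimizer, epoch count, batch size, and validation split are assumptions (the 15 epochs, batch size of 100, and 919 training samples are inferred from the sample output further down), so treat this as one plausible configuration rather than the exact original listing:

import pandas as pd
from keras.models import Sequential
from keras.layers import Dense

# 1. Create X input and Y output (assumed local copy of the UCI file)
data = pd.read_csv("data_banknote_authentication.txt", header=None)
X = data.iloc[:, 0:4].values
Y = data.iloc[:, 4].values

# 2. Create a Sequential model
model = Sequential()

# 3. First hidden layer: 8 neurons (an assumed size), 4 input variables, ReLU activation
model.add(Dense(8, input_dim=4, activation='relu'))

# 4. Output layer: a single neuron with the sigmoid activation
model.add(Dense(1, activation='sigmoid'))

# 5. Compile with binary cross-entropy loss, the Adam optimizer, and accuracy as the metric
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

# 6. Train the model; validation_split=0.33 leaves 919 of the 1372 rows for training,
#    matching the "919/919" shown in the sample output
model.fit(X, Y, validation_split=0.33, epochs=15, batch_size=100)

Since logistic regression is just a linear model followed by the sigmoid, the hidden layer could also be dropped and replaced with a single Dense(1, input_dim=4, activation='sigmoid') layer; the extra layer simply gives the network a little more capacity.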

Here is the sample output.

100/919 [==>...........................] - ETA: 0s - loss: 0.5984 - acc: 0.7700
919/919 [==============================] - 0s 14us/step - loss: 0.5483 - acc: 0.7889 - val_loss: 0.7256 - val_acc: 0.4768
Epoch 11/15

100/919 [==>...........................] - ETA: 0s - loss: 0.5178 - acc: 0.7900
919/919 [==============================] - 0s 14us/step - loss: 0.4934 - acc: 0.8118 - val_loss: 0.7121 - val_acc: 0.4768
Epoch 12/15