The Easy Model with Keras, in Google Colab.

by CM


Posted on November 11, 2019



The Goal:

High-level APIs such as Keras have fostered the popularity of machine learning libraries such as TensorFlow. Keras in particular is an open-source neural-network library written in Python. It runs on top of TensorFlow, Theano, or other backends. The idea behind Keras is to enable fast experimentation with neural networks while being user-friendly, easy to use, and modular. In the following, we will use Keras to build a simple neural network that allows us to predict the values of a quadratic function.


Key components are:

Dataset:
We will create our own dataset from scratch.

First, we upgrade to TensorFlow 2.0 via pip (pip is the package installer for Python). Depending on the versions already installed in your environment, some requirements might already be satisfied.

### Upgrade to TensorFlow 2.0
!pip install --upgrade tensorflow
==========================
EXAMPLE OUTPUT
==========================

Collecting tensorflow
  Downloading https://files.pythonhosted.org/packages/46/0f/7bd55361168bb32796b360ad15a25de6966c9c1beb58a8e30c01c8279862/tensorflow-2.0.0-cp36-cp36m-manylinux2010_x86_64.whl (86.3MB)
     |████████████████████████████████| 86.3MB 114kB/s
Collecting tensorboard<2.1.0,>=2.0.0
  Downloading https://files.pythonhosted.org/packages/9b/a6/e8ffa4e2ddb216449d34cfcb825ebb38206bee5c4553d69e7bc8bc2c5d64/tensorboard-2.0.0-py3-none-any.whl (3.8MB)
     |████████████████████████████████| 3.8MB 41.4MB/s
Collecting tensorflow-estimator<2.1.0,>=2.0.0
  Downloading https://files.pythonhosted.org/packages/fc/08/8b927337b7019c374719145d1dceba21a8bb909b93b1ad6f8fb7d22c1ca1/tensorflow_estimator-2.0.1-py2.py3-none-any.whl (449kB)
     |████████████████████████████████| 450kB 46.7MB/s
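
After the installation finishes, it is worth confirming that the runtime actually picked up the new version (in Colab you may need to restart the runtime first). A quick check:

### Check the installed TensorFlow version
import tensorflow as tf
print(tf.__version__)   # should print 2.0.0 after the upgrade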

Second, we import all dependencies (note that these libraries come preinstalled in Colab; if you are using, e.g., a Jupyter notebook on your local machine, make sure to install them first, e.g. via pip).

### Importing all dependencies
import tensorflow as tf

import numpy as np

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

We then create our dataset: two lists with 13 values each, where each y-value is the square of the corresponding x-value. To use them in TensorFlow, we convert them to 1-dimensional tensors.

x_values = [1,2,3,4,5,6,7,8,9,10,11,12,13]
y_values = [1,4,9,16,25,36,49,64,81,100,121,144,169]

print(type(x_values))
print(type(y_values))

x_values = tf.constant(x_values)
y_values = tf.constant(y_values)

print(type(y_values))
print(type(x_values))

We find that we have successfully converted our lists to tensors.

==========================
OUTPUT
==========================

<class 'list'>
<class 'list'>
<class 'tensorflow.python.framework.ops.EagerTensor'>
<class 'tensorflow.python.framework.ops.EagerTensor'>
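
As a side note, the conversion to tensors is not strictly required: Keras also accepts plain NumPy arrays as training data (which is why we imported NumPy above). An equivalent alternative would be:

### Equivalent alternative using NumPy arrays
x_values = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], dtype=np.float32)
y_values = np.array([1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144, 169], dtype=np.float32)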

We now have a look at the shape, length, and rank of the tensor.

print(x_values.shape)
print(len(x_values))
print(tf.rank(x_values))

We find that we have a rank-1 tensor with a length / shape of 13.

(13,)
13
tf.Tensor(1, shape=(), dtype=int32)
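
The rank of a tensor is simply its number of axes, so it can also be read off the length of the shape tuple:

### Rank = number of axes = length of the shape tuple
print(len(x_values.shape))   # -> 1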

Now we can start building our regression model. We will use a Sequential model with a single Dense layer containing one neuron.

### Building the model: one Dense layer with a single neuron
model = Sequential()

model.add(Dense(units=1, input_shape=(1,)))
model.summary()

==========================
OUTPUT
==========================

Model: "sequential_1"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
dense_1 (Dense)              (None, 1)                 2
=================================================================
Total params: 2
Trainable params: 2
Non-trainable params: 0
_________________________________________________________________
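
The two parameters reported in the summary are the single neuron's weight (kernel) and its bias, i.e. y = w·x + b. They can be inspected at any time via get_weights():

### Inspect the layer's weight and bias
weights, bias = model.get_weights()
print(weights.shape)   # (1, 1) - one weight
print(bias.shape)      # (1,)   - one bias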

We then compile our model with the Adam optimizer (a variant of stochastic gradient descent) and mean squared error as the loss function. After compiling, we can immediately start training our model, letting it run through the dataset 500 times (epochs).

model.compile(optimizer="adam", loss="mean_squared_error")
model.fit(x_values, y_values, batch_size=1, epochs=500, verbose=1)

==========================
OUTPUT
==========================

Train on 13 samples
Epoch 1/500
13/13 [==============================] - 0s 15ms/sample - loss: 7996.8052
Epoch 2/500
13/13 [==============================] - 0s 352us/sample - loss: 1016.3798
Epoch 3/500
13/13 [==============================] - 0s 377us/sample - loss: 483.8046
Epoch 4/500
13/13 [==============================] - 0s 169us/sample - loss: 441.0344
Epoch 5/500
13/13 [==============================] - 0s 241us/sample - loss: 435.4903
................
Epoch 495/500
13/13 [==============================] - 0s 270us/sample - loss: 157.7715
Epoch 496/500
13/13 [==============================] - 0s 230us/sample - loss: 157.7385
Epoch 497/500
13/13 [==============================] - 0s 226us/sample - loss: 157.7057
Epoch 498/500
13/13 [==============================] - 0s 279us/sample - loss: 157.6732
Epoch 499/500
13/13 [==============================] - 0s 193us/sample - loss: 157.6411
Epoch 500/500
13/13 [==============================] - 0s 260us/sample - loss: 157.6091
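
If you want to analyse the training run afterwards, note that fit() returns a History object whose history dictionary records the loss per epoch. A variant of the call above:

### Keeping track of the loss per epoch
history = model.fit(x_values, y_values, batch_size=1, epochs=500, verbose=0)
print(history.history["loss"][-1])   # final training loss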

The model is now trained, although it still shows some residual error.

Nevertheless, we can already use it to make predictions. Here, we try to predict the square of 10.

test_tensor = [10]
print(type(test_tensor))

test_tensor = tf.constant(test_tensor)
prediction = model.predict(test_tensor)
print(prediction)

The model predicts a value slightly above the true value of 10² = 100. Keep in mind that a single Dense neuron without an activation function is a linear model: it fits a straight line through the quadratic data, which explains the remaining regression error.


[[104.834]]
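
Since the single Dense neuron is a linear model, its predictions all lie on a straight line; predicting across the input range makes this visible (exact values will vary from run to run):

### Predictions of the linear model across the input range
for x in [1, 5, 10, 13]:
    print(x, model.predict(tf.constant([x]))[0][0])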

We have done it: we have built our first ML model with Keras and used it to approximate the values of a quadratic function.

Leverage TensorFlow and Keras!

#EpicML

