Every day, our computers get a little bit closer to functioning like our minds. The human brain is an organ of nearly unimaginable potential. Imagine what we could do if we could essentially program a brain and have it do our bidding.

Machine learning has been all the rage in recent years as we inch gradually closer to real, functional AI. It plays a vital role in predictive analytics, personalized recommendations, and many other disciplines at the forefront of today's digital world. Even so, machine learning remains tethered to the data we supply it with.

Deep learning is a branch of machine learning that focuses on recreating neural networks in code. TensorFlow is one of the most popular libraries for implementing deep learning, especially for those just starting out with the discipline.

We’ve compiled a brief introduction to TensorFlow to show you how to get started with the library and experiment with deep learning for yourself. We’ll show you how to get up and running with TensorFlow, walk through some rudimentary examples of working with data, and demonstrate how to build your own machine learning model by retraining an existing one.

An Introduction To TensorFlow

TensorFlow gets its name from tensors, which are multidimensional data arrays. Understanding tensors also helps to explain what separates deep learning from machine learning in a more general sense.

To properly understand this concept requires a bit of high-level math. Bear with us for a moment, as the concepts will be much clearer with just a bit of explanation.


A vector is a mathematical object with both a magnitude and a direction. Vectors can be written as a single row or column of numbers (a special form of matrix), and they transform according to certain rules when the coordinate system changes. They turn up constantly in relations such as dot products, cross products, and linear maps.
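These relations are easy to see in code. Here is a minimal NumPy sketch (the array values are arbitrary examples):

```python
import numpy as np

a = np.array([1, 2, 3])
b = np.array([4, 5, 6])

# dot product: sum of element-wise products, yielding a scalar
print(np.dot(a, b))     # 32

# cross product: a vector perpendicular to both inputs
print(np.cross(a, b))   # [-3  6 -3]

# magnitude (Euclidean norm) of a vector
print(np.linalg.norm(b))
```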

Tensors build on a special kind of vector known as a plane vector, which pairs a scalar magnitude with a direction. A scalar might be “6 m/s” or “5 m/sec.” A vector, on the other hand, might be “6 meters north” or “5 m/sec east.” As you might notice, vectors add a direction.

The simplest form of a plane vector is an x,y coordinate. You can just as easily have vectors that deal with three dimensions, via x,y,z coordinates. The point of vectors is to offer an abstraction that lets you work with these quantities without mapping each one out on a graph.

To put it simply, using our previous definition, a vector is a scalar magnitude given a direction. Tensors, on the other hand, are scalar magnitudes with multiple directions.

Scalars can be represented with a single number. Vectors in three-dimensional space can be expressed with a sequence of three numbers. Tensors can be represented with an array of 3^R numbers, where ‘R’ stands for the rank of the tensor. A second-rank tensor, for example, would be represented by nine numbers.
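A quick NumPy sketch makes the rank idea concrete (in three dimensions, a rank-R tensor holds 3^R numbers):

```python
import numpy as np

scalar = np.array(5)                 # rank 0: a single number
vector = np.array([1, 2, 3])         # rank 1: 3 numbers
tensor = np.arange(9).reshape(3, 3)  # rank 2: 3**2 = 9 numbers

for t in (scalar, vector, tensor):
    print(t.ndim, t.size)  # rank, then total number of entries
```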

This all sounds rather technical and abstract. Let’s load up TensorFlow to see how these principles are applied in action.

Installing TensorFlow

TensorFlow supports APIs for Python, C++, Haskell, Java, Go, and Rust. There’s also a third-party package for R. For this tutorial, we’re going to be using Python and Google Colab to run our code. Colab comes with TensorFlow preinstalled, so there’s nothing to install for this tutorial; if you’re working locally instead, pip install tensorflow will get you the library. You can also view and run the completed Notebook yourself or even download the Python file if you want to experiment with the code in your own projects.

Getting Started With TensorFlow

Now, let’s try out some basic functions to test some of these concepts. Start by opening Google Colab and creating a new notebook. We’ve called ours tensorflow-tutorial.

We’re going to create two arrays to demonstrate working with tensors. Input the following into your Google Colab notebook:

import tensorflow as tf

### Initialize two constants ###
x1 = tf.constant([1,2,3,4])
x2 = tf.constant([5,6,7,8])

### Multiply the arrays element-wise ###
result = tf.multiply(x1, x2)

### Print the result ###
print(result)

You should see tf.Tensor([ 5 12 21 32], shape=(4,), dtype=int32) as a result.
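If you want a quick cross-check outside TensorFlow, the same element-wise multiplication can be reproduced with plain NumPy:

```python
import numpy as np

x1 = np.array([1, 2, 3, 4])
x2 = np.array([5, 6, 7, 8])

# element-wise product, matching the tf.multiply result above
print(x1 * x2)  # [ 5 12 21 32]
```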

Now that we’ve seen one instance of TensorFlow working in the abstract, let’s turn our attention to some real-world applications. Let’s start by taking a look at the data we’ll be working with.

Understanding Data In TensorFlow

We’re going to show you how to load data into TensorFlow using tf.data. We’ll be using a directory of images that are available license-free from Google. Each sub-directory contains one class of images.

We’ll start by loading the libraries we’ll need and defining some of the variables. Input the following into your Google Colab notebook after clearing the previous code, or just start a new notebook for this project.

import tensorflow as tf
AUTOTUNE = tf.data.experimental.AUTOTUNE
import IPython.display as display
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import os

Next, you’re going to import the data you’ll be using via pathlib:

import pathlib
data_dir = tf.keras.utils.get_file(origin='https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
                                         fname='flower_photos', untar=True)
data_dir = pathlib.Path(data_dir)

Hit run to initiate a download. Now, insert a new code cell beneath this one. To make sure that everything downloaded as it should, insert the following into the cell and hit run:

image_count = len(list(data_dir.glob('*/*.jpg')))
print(image_count)

The snippet above should spit back 3670. You can also use this next command to see the sub-directories, which are the types of flowers available:

CLASS_NAMES = np.array([item.name for item in data_dir.glob('*') if item.name != "LICENSE.txt"])
CLASS_NAMES

That command will respond with the following array of flower types:

array(['roses', 'tulips', 'daisy', 'sunflowers', 'dandelion'], dtype='<U10')

Knowing that is helpful, but it’d be nice to see some of the images themselves. For that, you can use IPython’s display helper. This snippet will render the first three roses:

roses = list(data_dir.glob('roses/*'))
for image_path in roses[:3]:
    display.display(Image.open(str(image_path)))

Using Keras

Keras is one of the reasons TensorFlow is so popular for machine learning projects. Keras is TensorFlow’s high-level API, designed for humans rather than machines. Keras has proven so popular that it’s now fully integrated into TensorFlow, so you don’t have to load an additional library.

The tf.keras.preprocessing module offers a simple, easy way to load image files into your code. Insert the following into your code cell to initialize a Keras image generator:

# The 1./255 is to convert from uint8 to float32 in range [0,1].
image_generator = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1./255)
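The 1./255 rescale simply multiplies every uint8 pixel value into the [0, 1] float range; a minimal NumPy sketch of the same conversion:

```python
import numpy as np

# uint8 pixel values span 0..255
pixels = np.array([0, 128, 255], dtype=np.uint8)

# rescaling maps them onto floats in the range [0, 1]
scaled = pixels * (1. / 255)
print(scaled.min(), scaled.max())  # 0.0 1.0
```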

Now, define some parameters for the data loader.

BATCH_SIZE = 32
IMG_HEIGHT = 224
IMG_WIDTH = 224
STEPS_PER_EPOCH = np.ceil(image_count/BATCH_SIZE)
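As a sanity check on the np.ceil call: with the 3,670-image dataset and a batch size of 32, one epoch needs 115 steps, because the final, partial batch still counts as a step:

```python
import numpy as np

image_count, batch_size = 3670, 32

# 3670 / 32 = 114.6875, so we round up to cover every image
steps = np.ceil(image_count / batch_size)
print(steps)  # 115.0
```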


train_data_gen = image_generator.flow_from_directory(directory=str(data_dir),
                                                     batch_size=BATCH_SIZE,
                                                     shuffle=True,
                                                     target_size=(IMG_HEIGHT, IMG_WIDTH),
                                                     classes = list(CLASS_NAMES))

def show_batch(image_batch, label_batch):
  plt.figure(figsize=(10,10))
  for n in range(25):
      ax = plt.subplot(5,5,n+1)
      plt.imshow(image_batch[n])
      plt.title(CLASS_NAMES[label_batch[n]==1][0].title())
      plt.axis('off')

image_batch, label_batch = next(train_data_gen)
show_batch(image_batch, label_batch)

Load Using TF.Data

Keras is wonderful, but it can be slow and unwieldy. Here’s another way to load the data into TensorFlow, using the tf.data API:

list_ds = tf.data.Dataset.list_files(str(data_dir/'*/*'))
for f in list_ds.take(5):
  print(f.numpy())

The print(f.numpy()) call tests that things are loading as they should. Running the above code prints five file paths from the dataset.


Now, here are a few short functions that split a file path into an (img, label) pair.

def get_label(file_path):
  # convert the path to a list of path components
  parts = tf.strings.split(file_path, os.path.sep)
  # The second to last is the class-directory
  return parts[-2] == CLASS_NAMES
def decode_img(img):
  # convert the compressed string to a 3D uint8 tensor
  img = tf.image.decode_jpeg(img, channels=3)
  # Use `convert_image_dtype` to convert to floats in the [0,1] range.
  img = tf.image.convert_image_dtype(img, tf.float32)
  # resize the image to the desired size.
  return tf.image.resize(img, [IMG_WIDTH, IMG_HEIGHT])
def process_path(file_path):
  label = get_label(file_path)
  # load the raw data from the file as a string
  img = tf.io.read_file(file_path)
  img = decode_img(img)
  return img, label
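The get_label helper turns a directory name into a boolean one-hot vector by comparing it against every entry in CLASS_NAMES; a minimal NumPy sketch of that comparison:

```python
import numpy as np

CLASS_NAMES = np.array(['roses', 'tulips', 'daisy', 'sunflowers', 'dandelion'])

# comparing one directory name against the whole array broadcasts
# into a boolean one-hot vector
label = CLASS_NAMES == 'daisy'
print(label)  # [False False  True False False]
```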

Now you can create a dataset of the (image, label) pairs:

# Set `num_parallel_calls` so multiple images are loaded/processed in parallel.
labeled_ds = list_ds.map(process_path, num_parallel_calls=AUTOTUNE)
for image, label in labeled_ds.take(1):
  print("Image shape: ", image.numpy().shape)
  print("Label: ", label.numpy())

Training For Machine Learning

Now you’re ready to start training your machine learning project for deep learning. For your dataset to be useful for machine learning, it should be shuffled, sorted into batches, and those batches should be available as quickly as possible.

def prepare_for_training(ds, cache=True, shuffle_buffer_size=1000):
  # This is a small dataset, only load it once, and keep it in memory.
  # use `.cache(filename)` to cache preprocessing work for datasets that don't
  # fit in memory.
  if cache:
    if isinstance(cache, str):
      ds = ds.cache(cache)
    else:
      ds = ds.cache()
  ds = ds.shuffle(buffer_size=shuffle_buffer_size)
  # Repeat forever
  ds = ds.repeat()
  ds = ds.batch(BATCH_SIZE)
  # `prefetch` lets the dataset fetch batches in the background while the model
  # is training.
  ds = ds.prefetch(buffer_size=AUTOTUNE)
  return ds

Then input:

train_ds = prepare_for_training(labeled_ds)
image_batch, label_batch = next(iter(train_ds))
show_batch(image_batch.numpy(), label_batch.numpy())

This should return a table with images from your data, paired with their label.

This free-to-use dataset is a perfect example for machine learning, offering hundreds of varied images for each class.

Retraining An Image Classifier

Now that we’ve got our dataset loaded and classified, it’s time to prepare this data for deep learning. We accomplish this by retraining an existing image classifier machine learning model.

To start, we’re going to load some additional libraries, including TensorFlow Hub, which gives us access to pre-trained models. Input the following into a new cell on Google Colab:

import itertools
import os
import matplotlib.pylab as plt
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
print("TF version:", tf.__version__)
print("Hub version:", hub.__version__)
print("GPU is", "available" if tf.config.list_physical_devices('GPU') else "NOT AVAILABLE")

When you run this, you’ll see the versions of TensorFlow and Hub you’re using, along with whether a GPU is available. Don’t worry if one isn’t; the following code will still work, it’ll just run a bit slower.

Next, we’re going to select the pre-trained model from TensorFlow Hub that we’ll be retraining for our own deep learning needs.

module_selection = ("mobilenet_v2_100_224", 224) #@param ["(\"mobilenet_v2_100_224\", 224)", "(\"inception_v3\", 299)"] {type:"raw", allow-input: true}
handle_base, pixels = module_selection
MODULE_HANDLE ="https://tfhub.dev/google/imagenet/{}/feature_vector/4".format(handle_base)
IMAGE_SIZE = (pixels, pixels)
print("Using {} with input size {}".format(MODULE_HANDLE, IMAGE_SIZE))
BATCH_SIZE = 32 #@param {type:"integer"}

Now, we’re going to point the variable data_dir at the flower dataset we’ll retrain on:

data_dir = tf.keras.utils.get_file(
    'flower_photos',
    'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',
    untar=True)

Then we’ll build the validation and training data generators:

datagen_kwargs = dict(rescale=1./255, validation_split=.20)
dataflow_kwargs = dict(target_size=IMAGE_SIZE, batch_size=BATCH_SIZE,
                       interpolation="bilinear")

valid_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    **datagen_kwargs)
valid_generator = valid_datagen.flow_from_directory(
    data_dir, subset="validation", shuffle=False, **dataflow_kwargs)

do_data_augmentation = False #@param {type:"boolean"}
if do_data_augmentation:
  train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
      rotation_range=40, horizontal_flip=True,
      width_shift_range=0.2, height_shift_range=0.2,
      shear_range=0.2, zoom_range=0.2,
      **datagen_kwargs)
else:
  train_datagen = valid_datagen
train_generator = train_datagen.flow_from_directory(
    data_dir, subset="training", shuffle=True, **dataflow_kwargs)

Now you’re going to add a linear classifier layer on top of the pre-trained feature extractor:

do_fine_tuning = False #@param {type:"boolean"}

Then you’re going to define the machine learning model:

print("Building model with", MODULE_HANDLE)
model = tf.keras.Sequential([
    # Explicitly define the input shape so the model can be properly
    # loaded by the TFLiteConverter
    tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),
    hub.KerasLayer(MODULE_HANDLE, trainable=do_fine_tuning),
    tf.keras.layers.Dropout(rate=0.2),
    tf.keras.layers.Dense(train_generator.num_classes,
                          kernel_regularizer=tf.keras.regularizers.l2(0.0001))
])
model.build((None,)+IMAGE_SIZE+(3,))
model.summary()

Finally, you’re ready to compile, retrain, and save the model:

model.compile(
  optimizer=tf.keras.optimizers.SGD(lr=0.005, momentum=0.9), 
  loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),
  metrics=['accuracy'])

steps_per_epoch = train_generator.samples // train_generator.batch_size
validation_steps = valid_generator.samples // valid_generator.batch_size
hist = model.fit(
    train_generator,
    epochs=5, steps_per_epoch=steps_per_epoch,
    validation_data=valid_generator,
    validation_steps=validation_steps).history

saved_model_path = "/tmp/saved_flowers_model"
tf.saved_model.save(model, saved_model_path)

You now have a working, trained deep learning model for your machine learning project! Remember, training can take a while if you don’t have a GPU available, so be patient. It could take up to 30 minutes or so.

In Summary

To summarize, in this tutorial, we successfully trained a machine learning algorithm to identify different flower types. In the above steps, we imported the TensorFlow libraries and APIs we would need to get running. We then loaded an open database of images as our dataset. Our dataset included 3670 images of flowers, all labeled as either Dandelion, Roses, Daisy, Tulips, or Sunflowers.

We structured the data into batches appropriately, making image and label pairs to use with TensorFlow. We then retrained a machine learning model using an existing image classifier.

The end result is a machine learning model that can now tag images as flower types with a high degree of accuracy. It can also improve over time if fed more data. Using a similar approach, developers could create ML-driven applications for many other objects too!

TensorFlow: Final Thoughts

Using a TensorFlow deep learning model is its own topic, and this tutorial is already rather lengthy. You can read more about it at this TensorFlow tutorial to learn how you can use a retrained deep learning model in your own projects.

Automation, machine learning, and data-driven decisions aren’t going anywhere. If anything, we’re just going to keep producing more data with each passing year. Simultaneously, business demands are only going to keep getting more extreme.

Businesses that want to practice digital disciplines like predictive analytics or personalized marketing will likely have to incorporate machine learning and deep learning at some point. Knowing how to program deep learning models yourself gives you complete control over your applications and data, making machine learning even more powerful.

J. Simpson

J. Simpson lives at the crossroads of logic and creativity. He writes and researches tech-related topics extensively for a wide variety of publications, including Forbes Finds. He is also a graphic designer, journalist, and academic writer, writing on the ways that technology is shaping our society while using the most cutting-edge tools and techniques to aid his path. He lives in Portland, Or.