
Retraining TensorFlow Inception v3 using TensorFlow-Slim (Part 2)

A project log for Elephant AI

a system to prevent human-elephant conflict by detecting elephants using machine vision, and warning humans and/or repelling elephants

Neil K. Sheridan | 04/29/2017 at 19:06 | 1 Comment

In this experiment I will not be using flowers, but elephants! I'm going to use 5 classes of elephants: baby elephants, elephant groups without babies, elephant groups with babies, lone female elephants, and lone male elephants. I'll just start with 100 images for each class, so that's 500 images in the dataset. I'll take 80% (400 images) for training and 20% (100 images) for validation.

Protocol for experiment:

1. Convert the dataset to TensorFlow's native TFRecord format, where each TFRecord file contains TF-Example protocol buffers. First we need to place the images in the following directory structure: "data_dir/label_0/image0.jpeg", "data_dir/label_1/image0.jpeg", etc. Then we can convert using a modified version of build_image_data.py (the original is written by Google and licensed under http://www.apache.org/licenses/LICENSE-2.0). This is the original code on github.
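
Before converting, it's worth sanity-checking the raw directory layout and the per-class image counts. A quick sketch; the directory and class names here are hypothetical:

import os

DATA_DIR = '/tmp/elephants_raw'  # hypothetical directory of raw JPEGs

# Each subdirectory holds one class, e.g. data_dir/lone_male/image0.jpeg
for label in sorted(os.listdir(DATA_DIR)):
    class_dir = os.path.join(DATA_DIR, label)
    if os.path.isdir(class_dir):
        n = len([f for f in os.listdir(class_dir)
                 if f.lower().endswith(('.jpg', '.jpeg'))])
        print('%s: %d images' % (label, n))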

2. So we should have several TFRecord files created now!
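
To check the conversion worked, we can read one record back and inspect its features. A minimal sketch, assuming the feature keys that build_image_data.py writes (image/class/label and image/class/text) and a hypothetical shard filename:

import tensorflow as tf

# Hypothetical path to one of the generated shards
TFRECORD_PATH = '/tmp/elephants/elephants_train_00000-of-00004.tfrecord'

for record in tf.python_io.tf_record_iterator(TFRECORD_PATH):
    example = tf.train.Example.FromString(record)
    features = example.features.feature
    label = features['image/class/label'].int64_list.value[0]
    text = features['image/class/text'].bytes_list.value[0]
    print('label: %d (%s)' % (label, text))
    break  # just inspect the first record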

3. Next we define a TF-Slim Dataset. This stores pointers to the data files, as well as various other pieces of metadata: the class labels, the train/validation split, and how to parse the TF-Example protos. We can then read from it using the TF-Slim DatasetDataProvider:

import tensorflow as tf
from datasets import elephants

slim = tf.contrib.slim

# Directory containing the TFRecord files created in step 1
DATA_DIR = '/tmp/elephants'

# Selects the 'validation' dataset.
dataset = elephants.get_split('validation', DATA_DIR)

# Creates a TF-Slim DataProvider which reads the dataset in the background
# during both training and testing.
provider = slim.dataset_data_provider.DatasetDataProvider(dataset)
[image, label] = provider.get(['image', 'label'])
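
The datasets.elephants module imported above doesn't exist yet; we have to write it to describe our dataset to TF-Slim. Here's a minimal sketch modeled on Slim's flowers.py, assuming shard filenames like elephants_train_*.tfrecord from step 1 and the 400/100 train/validation split from above:

import os
import tensorflow as tf

slim = tf.contrib.slim

_FILE_PATTERN = 'elephants_%s_*.tfrecord'
_SPLITS_TO_SIZES = {'train': 400, 'validation': 100}
_NUM_CLASSES = 5

_ITEMS_TO_DESCRIPTIONS = {
    'image': 'A color image of varying size.',
    'label': 'A single integer between 0 and 4.',
}

def get_split(split_name, dataset_dir, file_pattern=None, reader=None):
    """Returns a slim Dataset for reading the elephant TFRecords."""
    if split_name not in _SPLITS_TO_SIZES:
        raise ValueError('split name %s was not recognized.' % split_name)

    if not file_pattern:
        file_pattern = _FILE_PATTERN
    file_pattern = os.path.join(dataset_dir, file_pattern % split_name)

    if reader is None:
        reader = tf.TFRecordReader

    # How to parse each serialized TF-Example back into tensors
    keys_to_features = {
        'image/encoded': tf.FixedLenFeature((), tf.string, default_value=''),
        'image/format': tf.FixedLenFeature((), tf.string, default_value='jpeg'),
        'image/class/label': tf.FixedLenFeature([], tf.int64),
    }
    items_to_handlers = {
        'image': slim.tfexample_decoder.Image(),
        'label': slim.tfexample_decoder.Tensor('image/class/label'),
    }
    decoder = slim.tfexample_decoder.TFExampleDecoder(
        keys_to_features, items_to_handlers)

    return slim.dataset.Dataset(
        data_sources=file_pattern,
        reader=reader,
        decoder=decoder,
        num_samples=_SPLITS_TO_SIZES[split_name],
        items_to_descriptions=_ITEMS_TO_DESCRIPTIONS,
        num_classes=_NUM_CLASSES)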

4. Download the Inception v3 checkpoint. (I may modify this later to use Inception v4 instead; see https://github.com/tensorflow/models/tree/master/slim.)

$ CHECKPOINT_DIR=/tmp/checkpoints
$ mkdir ${CHECKPOINT_DIR}
$ wget http://download.tensorflow.org/models/inception_v3_2016_08_28.tar.gz
$ tar -xvf inception_v3_2016_08_28.tar.gz
$ mv inception_v3.ckpt ${CHECKPOINT_DIR}
$ rm inception_v3_2016_08_28.tar.gz
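
As a quick sanity check, we can list a few of the variables stored in the downloaded checkpoint. A sketch, assuming the /tmp/checkpoints path used above:

import tensorflow as tf

# Inspect the checkpoint without building the full model
reader = tf.train.NewCheckpointReader('/tmp/checkpoints/inception_v3.ckpt')
var_to_shape = reader.get_variable_to_shape_map()
for name in sorted(var_to_shape)[:10]:
    print(name, var_to_shape[name])
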
5. Now we can retrain from the checkpoint we downloaded using train_image_classifier.py. See https://github.com/tensorflow/models/blob/master/slim/train_image_classifier.py for the code. Here --checkpoint_exclude_scopes stops the final logits layers (which have the wrong number of classes for our dataset) being restored from the ImageNet checkpoint, and --trainable_scopes restricts training to just those newly initialized layers.

$ DATASET_DIR=/tmp/elephants
$ TRAIN_DIR=/tmp/elephants-models/inception_v3
$ CHECKPOINT_PATH=/tmp/checkpoints/inception_v3.ckpt
$ python train_image_classifier.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=elephants \
    --dataset_split_name=train \
    --model_name=inception_v3 \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --checkpoint_exclude_scopes=InceptionV3/Logits,InceptionV3/AuxLogits \
    --trainable_scopes=InceptionV3/Logits,InceptionV3/AuxLogits

6. Next we evaluate performance using eval_image_classifier.py. See https://github.com/tensorflow/models/blob/master/slim/eval_image_classifier.py for the code.

$ CHECKPOINT_FILE=${TRAIN_DIR}  # point at the fine-tuned checkpoints from step 5
$ python eval_image_classifier.py \
    --alsologtostderr \
    --checkpoint_path=${CHECKPOINT_FILE} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=elephants \
    --dataset_split_name=validation \
    --model_name=inception_v3

7. Next, feed in a single image! See https://www.tensorflow.org/versions/master/tutorials/image_recognition, but I haven't got that far yet!
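
When I do get there, I expect it to look something like the sketch below, modeled on the Slim walkthrough notebook. This is untested; IMAGE_PATH is a hypothetical test image, and it assumes the nets and preprocessing modules from the slim repository are on the Python path:

import numpy as np
import tensorflow as tf
from nets import inception
from preprocessing import inception_preprocessing

slim = tf.contrib.slim

IMAGE_PATH = '/tmp/test_elephant.jpg'  # hypothetical test image
TRAIN_DIR = '/tmp/elephants-models/inception_v3'  # checkpoints from step 5
NUM_CLASSES = 5

image_size = inception.inception_v3.default_image_size

with tf.Graph().as_default():
    # Load and preprocess the image the same way as during evaluation
    image_data = tf.gfile.FastGFile(IMAGE_PATH, 'rb').read()
    image = tf.image.decode_jpeg(image_data, channels=3)
    processed = inception_preprocessing.preprocess_image(
        image, image_size, image_size, is_training=False)
    processed = tf.expand_dims(processed, 0)

    # Build the model and restore the fine-tuned weights
    with slim.arg_scope(inception.inception_v3_arg_scope()):
        logits, _ = inception.inception_v3(
            processed, num_classes=NUM_CLASSES, is_training=False)
    probabilities = tf.nn.softmax(logits)

    init_fn = slim.assign_from_checkpoint_fn(
        tf.train.latest_checkpoint(TRAIN_DIR),
        slim.get_model_variables('InceptionV3'))

    with tf.Session() as sess:
        init_fn(sess)
        probs = sess.run(probabilities)[0]
        print('predicted class: %d (p=%.3f)' % (np.argmax(probs), probs.max()))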

Discussions

Gabriele wrote 05/12/2017 at 13:14

Nice experiment. How different is the code to create the dataset from the original one?
