Building TensorFlow Lite on the BBAI Works, but the Examples Are Giving Me the Willies!

Hello,

I built the tflite module natively on the AI, since porting it over is probably out of the question. So anyway, it is built.

I have python3.9 working w/ the TFLite module on the BBAI, but I have come across some issues. I am following along in this Udacity classroom to get example source and ideas relating to TFLite.

Here is the class online: Udacity. Anyway, if you are using TFLite on the BBAI and wondering how to offload compute and work w/ examples, I think this post can help the community at large.

Seth

P.S. Here is the source I am working on right now:


import tensorflow as tf

import pathlib
import numpy as np
import matplotlib.pyplot as plt

# Training data for the line y = 2x - 1.
x = [-1, 0, 1, 2, 3, 4]
y = [-3, -1, 1, 3, 5, 7]

# Create and train a simple one-neuron Keras model.
model = tf.keras.models.Sequential([
    tf.keras.layers.Dense(units=1, input_shape=[1])
])
model.compile(optimizer='sgd', loss='mean_squared_error')
model.fit(x, y, epochs=200, verbose=1)

export_dir = 'saved_model/1'
tf.saved_model.save(model, export_dir)

# Convert the model.
converter = tf.lite.TFLiteConverter.from_saved_model(export_dir)
tflite_model = converter.convert()

tflite_model_file = pathlib.Path('model.tflite')
tflite_model_file.write_bytes(tflite_model)

# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Test the TensorFlow Lite model on random input data.
input_shape = input_details[0]['shape']
inputs, outputs = [], []
for _ in range(100):
  input_data = np.array(np.random.random_sample(input_shape), dtype=np.float32)
  interpreter.set_tensor(input_details[0]['index'], input_data)

  interpreter.invoke()
  tflite_results = interpreter.get_tensor(output_details[0]['index'])

  # Test the TensorFlow model on random input data.
  tf_results = model(tf.constant(input_data))
  output_data = np.array(tf_results)
  
  inputs.append(input_data[0][0])
  outputs.append(output_data[0][0])

plt.plot(inputs, outputs, 'r')
plt.show()


# If running in Colab, offer the converted model as a download; skip elsewhere.
try:
  from google.colab import files
  files.download(str(tflite_model_file))
except ImportError:
  pass

Now…from what I understand so far, I think I need to supply a working, say, Debian distro w/ full TensorFlow on it so this software can ultimately produce the TensorFlow Lite model. Does this seem correct? Anyway…if you are into making the BBAI produce educated guesses w/ tensors, please reply. I am sure we can work on this together or w/ another portion of the community.
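
For the board side of that workflow, here is a minimal sketch of what I think the deployment step looks like: copy model.tflite over and run it w/ the standalone tflite_runtime interpreter instead of full TensorFlow. Treat the package and paths here as my assumptions, not tested output from the BBAI.

#!/usr/bin/python3
# Minimal sketch (my assumption, untested on the BBAI): run the converted
# model.tflite w/ only the tflite_runtime package, no full TensorFlow.
from tflite_runtime.interpreter import Interpreter
import numpy as np

interpreter = Interpreter(model_path='model.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# The trained model approximates y = 2x - 1, so 10.0 should give ~19.
input_data = np.array([[10.0]], dtype=np.float32)
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))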

Hello Again,

So, it seems I may have mistaken the tensor workflow entirely for something it was not. In any light, I need to build TensorFlow on the dev. desktop and then transfer the converted TFLite models, i.e. the part the AI can handle, over to the board.

Now, about offloading to DSPs or EVEs, I am open to suggestions as I will be looking now and in the future for efforts to this issue.

Seth

P.S. I will be reporting back once I can convert my first model into a TFLite model which the BBAI can then handle and run w/ decent quality.

Hello,

I almost forgot to provide building instructions for the BBAI w/ this image:

uname -a: Linux BeagleBone 5.10.59-ti-r22 #1bullseye SMP PREEMPT Wed Sep 29 17:33:28 UTC 2021 armv7l GNU/Linux

cat /etc/dogtag: BeagleBoard.org Debian Bullseye IoT Image 2021-10-02

That exact image and others can be found here: Index of /rootfs/debian-armhf/2021-10-02


git clone https://github.com/tensorflow/tensorflow.git

sudo apt install cmake 

# There may be other deps. you will need, but Debian is smart and will notify you...

mkdir tflite_build && cd tflite_build

cmake ../tensorflow/lite

cmake --build .

# Configure the C API in a fresh build dir; re-running cmake w/ a different
# source dir in the same build dir will clash w/ the existing CMake cache.
cd .. && mkdir tflite_c_build && cd tflite_c_build

cmake ../tensorflow/lite/c

cmake --build .

That should do it. It may take a while b/c the CPU only runs at 1.5 GHz, and there are other factors involved, i.e. other CPU tasks running at the same time.

Also, if you have questions, please refer to this guide: Build TensorFlow Lite with CMake

I found that building natively is painless compared to figuring out what to do w/ the build once cross-compiling finishes.
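
As a quick sanity check that the native C API build produced something usable, here is a hedged sketch using ctypes. The library name libtensorflowlite_c.so and its location in the build dir are what I expect out of the CMake build, so adjust the path if yours differs.

#!/usr/bin/python3
# Hedged sanity check: load the TFLite C library built above and print its
# version string. The .so path is an assumption; adjust it as needed.
import ctypes

lib = ctypes.CDLL('./tflite_c_build/libtensorflowlite_c.so')
lib.TfLiteVersion.restype = ctypes.c_char_p  # const char* TfLiteVersion(void)
print(lib.TfLiteVersion().decode())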

Seth

P.S. I am currently building the full TensorFlow lib. on a Debian machine. I will update shortly in accordance w/ how that goes.

Oh and also, the snippet below does not work for me, as you can tell. It uses an entirely different toolchain from the native build above, i.e. it is the cross-compiling route.


ARMCC_FLAGS="-march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations"
ARMCC_PREFIX=${HOME}/toolchains/gcc-arm-8.3-2019.03-x86_64-arm-linux-gnueabihf/bin/arm-linux-gnueabihf-
cmake -DCMAKE_C_COMPILER=${ARMCC_PREFIX}gcc \
  -DCMAKE_CXX_COMPILER=${ARMCC_PREFIX}g++ \
  -DCMAKE_C_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_CXX_FLAGS="${ARMCC_FLAGS}" \
  -DCMAKE_VERBOSE_MAKEFILE:BOOL=ON \
  -DCMAKE_SYSTEM_NAME=Linux \
  -DCMAKE_SYSTEM_PROCESSOR=armv7 \
  ../tensorflow/lite/

and…

Please remember, if you come across issues, to add this cmake flag: -DTFLITE_ENABLE_XNNPACK=OFF

It is on by default for Linux builds but cannot be used here for a number of reasons, e.g. this target's NEON support does not cover what XNNPACK needs.

Hello,

First and foremost…the build env. for bazel and full TensorFlow (outside of tensorflow-lite) needs to be set up this way, or other complications arise.

bazel --version: bazel 4.2.1

Build Environment: Linux MoreKT 5.4.72-microsoft-standard-WSL2 #1 SMP Wed Oct 28 23:40:43 UTC 2020 x86_64 GNU/Linux, though any Debian distro running Bullseye will do…

Here are the commands needed to build successfully.


sudo apt install apt-transport-https curl gnupg

curl -fsSL https://bazel.build/bazel-release.pub.gpg | gpg --dearmor > bazel.gpg

sudo mv bazel.gpg /etc/apt/trusted.gpg.d/

echo "deb [arch=amd64] https://storage.googleapis.com/bazel-apt stable jdk1.8" | sudo tee /etc/apt/sources.list.d/bazel.list

sudo apt update && sudo apt install bazel

sudo apt update && sudo apt full-upgrade

That is the bazel install that should take place to get the required build items ready. It can also be found here: Installing Bazel on Ubuntu - Bazel main

Also…we can now move to the Build from source  |  TensorFlow guide for doing the build on WSL2 Debian Bullseye. This is NOT using a virtual env.


sudo apt install python3-dev python3-pip

pip install -U --user pip numpy wheel
pip install -U --user keras_preprocessing --no-deps

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow

./configure

There will be a plethora of questions to answer, I think six or fewer. The defaults are all okay unless you have a specific use case in mind. Anyway…I typed N for most or just hit plain ENTER to accept the default.


bazel build //tensorflow/tools/pip_package:build_pip_package

# The above command builds version 2 of tensorflow and it takes a bit.
# I have an older computer with a quad-core processor.
# It compiled about 12,000 files and finished after three hours. Blah...

./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# The above command builds the pip package (.whl) into /tmp/tensorflow_pkg
# Once that finishes, you can go to that dir. and see the .whl file it produced
# Copy its filename.
# It must be installed via pip3 or pip since Bullseye does not support Python2

pip install /tmp/tensorflow_pkg/tensorflow-version-tags.whl

# and where it states /tmp/tensorflow_pkg/tensorflow-version-tags.whl, substitute the actual filename you copied.
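
Once the wheel is installed, a quick smoke test is worth doing before moving anything toward the board. This is just a sketch of the obvious checks, nothing official:

#!/usr/bin/python3
# Quick smoke test of the freshly built wheel (a sketch, nothing official).
import tensorflow as tf

print(tf.__version__)  # should report the version you just built
print(tf.reduce_sum(tf.constant([1.0, 2.0, 3.0])))  # tf.Tensor(6.0, ...)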

Now, everyone can hopefully help out if you build it. I am thinking something is going wrong right now w/ transferring the cmake build from the development desktop to the BBAI.

Seth

Hello and One Last Thing,

I have not gotten this completely configured to handle DSPs or EVEs, but it works on the Debian machine. See here:


#!/usr/bin/python3

# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""label_image for tflite."""

import argparse
import time

import numpy as np
from PIL import Image
import tensorflow as tf


def load_labels(filename):
  with open(filename, 'r') as f:
    return [line.strip() for line in f.readlines()]


if __name__ == '__main__':
  parser = argparse.ArgumentParser()
  parser.add_argument(
      '-i',
      '--image',
      default='birdie.JPG',
      help='image to be classified')
  parser.add_argument(
      '-m',
      '--model_file',
      default='tensors/mobilenet_v1_1.0_224_quant.tflite',
      help='.tflite model to be executed')
  parser.add_argument(
      '-l',
      '--label_file',
      default='tensors/labels_mobilenet_quant_v1_224.txt',
      help='name of file containing labels')
  parser.add_argument(
      '--input_mean',
      default=127.5, type=float,
      help='input_mean')
  parser.add_argument(
      '--input_std',
      default=127.5, type=float,
      help='input standard deviation')
  parser.add_argument(
      '--num_threads', default=None, type=int, help='number of threads')
  args = parser.parse_args()

  interpreter = tf.lite.Interpreter(
      model_path=args.model_file, num_threads=args.num_threads)
  interpreter.allocate_tensors()

  input_details = interpreter.get_input_details()
  output_details = interpreter.get_output_details()

  # check the type of the input tensor
  floating_model = input_details[0]['dtype'] == np.float32

  # NxHxWxC, H:1, W:2
  height = input_details[0]['shape'][1]
  width = input_details[0]['shape'][2]
  img = Image.open(args.image).resize((width, height))

  # add N dim
  input_data = np.expand_dims(img, axis=0)

  if floating_model:
    input_data = (np.float32(input_data) - args.input_mean) / args.input_std

  interpreter.set_tensor(input_details[0]['index'], input_data)

  start_time = time.time()
  interpreter.invoke()
  stop_time = time.time()

  output_data = interpreter.get_tensor(output_details[0]['index'])
  results = np.squeeze(output_data)

  top_k = results.argsort()[-5:][::-1]
  labels = load_labels(args.label_file)
  for i in top_k:
    if floating_model:
      print('{:08.6f}: {}'.format(float(results[i]), labels[i]))
    else:
      print('{:08.6f}: {}'.format(float(results[i] / 255.0), labels[i]))

  print('time: {:.3f}ms'.format((stop_time - start_time) * 1000))

  1. Where the image birdie.JPG is one of my own:

[image: birdie]

  2. Where the outcome on tensorflow in python3.9 is this:

0.592157: limpkin
0.133333: American alligator
0.113725: peacock
0.082353: ruffed grouse
0.011765: quail
time: 76.463ms

and…

  3. Where none was right, but you can see why…

  4. Where this below .zip file contains all the required data for testing, e.g. the .txt label file and the .tflite model file.

wget https://storage.googleapis.com/download.tensorflow.org/models/tflite/mobilenet_v1_1.0_224_quant_and_labels.zip
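
For convenience, here is a hedged sketch of unpacking that .zip into the tensors/ directory the label_image defaults above assume. The directory name is just my convention taken from those defaults, not anything the zip dictates.

#!/usr/bin/python3
# Hedged sketch: extract the downloaded model + labels into tensors/,
# matching the default paths used by the label_image script above.
import pathlib
import zipfile

pathlib.Path('tensors').mkdir(exist_ok=True)
with zipfile.ZipFile('mobilenet_v1_1.0_224_quant_and_labels.zip') as z:
    z.extractall('tensors')
print(sorted(p.name for p in pathlib.Path('tensors').iterdir()))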

Seth

P.S. That should do it. Now…I need to figure out how to add this data, the source, and the build to the BBAI and offload the high volume of memory and data to the slave processors, i.e. the DSPs and EVEs.

Hello and Okay,

Geaux Cajuns and BBBs, i.e. in this case, a BBAI. I got everything installed correctly, and I would like to share the ultimate photo: a Gabor filter done w/ OpenCV, then fed to a tensorflow-lite script to classify what it actually is in its current state.

Sadly, the photo is odd at best, and even the stock models from TensorFlow-Lite cannot describe it well enough. So, w/out further ado…

[image: Bab]

This photo was processed on the AI itself.

Besides that…the commands above are okay, but not for this specific piece of software.


debian@BeagleBone:~/tensors/image_classification$ ./classify.py --filename Bab.PNG --model_path ../mobilenet_v1_1.0_224_quant.tflite --label_path ../labels_mobilenet_quant_v1_224.txt
web site 0.45098039215686275
screwdriver 0.1411764705882353
screen 0.08627450980392157

Above you can see the command given and what transpired. The mobilenet model made an educated guess, and the clearest outcome was "web site" at about 45%. It is just too odd of a photo.

Here is the classify.py file from our command above, another piece of software in Python3.9 to handle tensors and mobilenet data.


#!/usr/bin/python3

# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from tflite_runtime.interpreter import Interpreter
import numpy as np
import argparse
from PIL import Image

parser = argparse.ArgumentParser(description='Image Classification')
parser.add_argument('--filename', type=str, help='Specify the filename', required=True)
parser.add_argument('--model_path', type=str, help='Specify the model path', required=True)
parser.add_argument('--label_path', type=str, help='Specify the label map', required=True)
parser.add_argument('--top_k', type=int, help='How many top results', default=3)

args = parser.parse_args()

filename = args.filename
model_path = args.model_path
label_path = args.label_path
top_k_results = args.top_k

with open(label_path, 'r') as f:
    labels = list(map(str.strip, f.readlines()))

# Load TFLite model and allocate tensors
interpreter = Interpreter(model_path=model_path)
interpreter.allocate_tensors()

# Get input and output tensors.
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read image
img = Image.open(filename).convert('RGB')

# Get input size
input_shape = input_details[0]['shape']
size = input_shape[:2] if len(input_shape) == 3 else input_shape[1:3]

# Preprocess image; PIL wants (width, height) while size is (height, width),
# which is safe here since the model input is square (224x224)
img = img.resize(tuple(size))
img = np.array(img)

# Add a batch dimension
input_data = np.expand_dims(img, axis=0)

# Point the data to be used for testing and run the interpreter
interpreter.set_tensor(input_details[0]['index'], input_data)
interpreter.invoke()

# Obtain results and map them to the classes
predictions = interpreter.get_tensor(output_details[0]['index'])[0]

# Get indices of the top k results
top_k_indices = np.argsort(predictions)[::-1][:top_k_results]

for i in range(top_k_results):
    # Quantized uint8 output: divide by 255.0 to get an approximate score
    print(labels[top_k_indices[i]], predictions[top_k_indices[i]] / 255.0)
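
One note on the /255.0 above: TFLite stores per-tensor quantization parameters, so a more general dequantization looks like the sketch below. This is an add-on of mine, not part of the original classify.py.

# Hedged sketch: dequantize w/ the model's own quantization parameters
# instead of assuming a fixed /255.0 scale; for this mobilenet model the
# scale is roughly 1/255, so the numbers come out close either way.
scale, zero_point = output_details[0]['quantization']
if scale:  # a scale of 0.0 means the tensor is not quantized
    probs = scale * (predictions.astype(np.float32) - zero_point)
else:
    probs = predictions
for i in top_k_indices:
    print(labels[i], probs[i])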

Seth

P.S. As you can tell from the import section of the file, one will need python3-pil and python3-numpy, plus some of the above recommendations for the daring.

Also…to train, classify, and/or make models in TFLite on the BBAI, see here (and the Model Maker sketch after this list):

  1. Image classification  |  TensorFlow Lite
  2. Image classification with TensorFlow Lite Model Maker
  3. Object detection  |  TensorFlow Lite
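
Here is a hedged sketch of what the Model Maker route from link 2 looks like. The flower_photos/ folder is the standard example dataset layout from the TF docs (one subfolder per class); I would run this on the dev. desktop, w/ only the exported model.tflite going to the BBAI.

#!/usr/bin/python3
# Hedged sketch of TFLite Model Maker image classification. Runs on the
# dev. desktop; requires: pip3 install tflite-model-maker
from tflite_model_maker import image_classifier
from tflite_model_maker.image_classifier import DataLoader

# One subfolder per class, images inside, per the TF docs example layout.
data = DataLoader.from_folder('flower_photos/')
train_data, test_data = data.split(0.9)

model = image_classifier.create(train_data)
loss, accuracy = model.evaluate(test_data)

# Writes model.tflite into the current directory.
model.export(export_dir='.')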

and…

pip3 install tflite

tflite · PyPI