B4A Library TensorFlowLite - an experimental machine/deep learning wrapper

TensorFlowLite - an experimental machine/deep learning wrapper for B4A

New version: 0.20 (29/08/2018):
I have updated the library and the sample-models because the first version was based on older code.
I have also attached the updated Java sources.
I have also created my own model to use with the wrapper, and it works really well. I have added links to some good resources/tutorials. See the spoiler for some screenshots taken with my guitar model.
If you used the first version, please update the demo and the B4A-libs.


After a recent gallstone operation, I am now at home for a week or so before it's time to go back to work. So I am using this "free time" to do some fun and interesting stuff with B4A.

I started playing around with TensorFlowLite for Android/B4A and came up with this experimental wrapper based on various examples found on the internet.

First some background (from the TensorFlow website):

What is TensorFlow?
TensorFlow™ is an open source software library for high performance numerical computation. Its flexible architecture allows easy deployment of computation across a variety of platforms (CPUs, GPUs, TPUs), and from desktops to clusters of servers to mobile and edge devices. Originally developed by researchers and engineers from the Google Brain team within Google’s AI organization, it comes with strong support for machine learning and deep learning and the flexible numerical computation core is used across many other scientific domains.

What is TensorflowLite?
TensorFlow was designed to be a good deep learning solution for mobile platforms. As such, TensorFlowLite provides better performance and a small binary size on mobile platforms as well as the ability to leverage hardware acceleration if available on their platforms. In addition, it has many fewer dependencies so it can be built and hosted on simpler, more constrained device scenarios. TensorFlowLite also allows targeting accelerators through the Neural Networks API.

PS: There is also another implementation for mobile, namely TensorFlow Mobile, which currently has more functionality than TensorFlowLite but, as far as I have understood, will eventually be replaced by TensorFlowLite, which has a smaller binary size, fewer dependencies, and better performance.

You can read more here:
https://www.tensorflow.org/

The user scenarios can be numerous. This wrapper (and the demo-app provided by me) lets you take a picture which TensorFlowLite will then analyze to try to figure out what object it is. More than one object may be suggested, so the suggestions are sorted by a confidence score. This works because TensorFlowLite analyzes the image against a predefined model (a sort of classifier or graph) which has been trained to recognize certain objects. In this demo, I am using a very generic sample model created by Google which recognizes various objects (see the attached list).

More importantly, you can create and train your own models, trained specifically to recognize, say, animals or cars, which will give far better results in terms of accuracy. I have also created my own model which recognizes my guitars. I set up what was needed on my Mac and created a model in about 30 minutes. You can find instructions on the TensorFlow website and of course on YouTube. I also recommend the following two Codelabs (tutorials) by Google: tensorflow-for-poets and tensorflow-for-poets-2-tflite-android.

Spoiler screenshots: ballpoint1.jpg, glass1.jpg, mug1.jpg, myguitars1.jpg, 12string.jpg, classic.jpg, starsun.jpg, yamaha.jpg, electric.jpg

How to use the wrapper in your app?
1) The official TensorFlowLite library by Google is being developed continuously, and future releases may not work with my current implementation. Therefore, for this wrapper you will need to download the following version (1.10) from here:
https://tinyurl.com/y9jlc59w
and copy it to your additional library folder.
2) In this wrapper, I am also using the Guava IO Library which you can download from here:
https://github.com/google/guava/releases/download/v26.0/guava-26.0-android.jar
Then copy it to your additional library folder.
3) Download and extract the attached TensorFlowLite library wrapper and its XML-file and copy them to your additional library folder.
4) Finally, you will of course need a model (classifier). In this case, for the demo-app, you need to download the file "assets.zip" from here:
https://www.dropbox.com/s/keuwqr8fys1lx8m/assets.zip?dl=0
and extract its contents and copy the files to your app's assets folder. The easiest way to do this is to add the files using B4A's Files Manager. The demo-app is attached as well. It is basically Erel's Camera2 sample-app which I have stripped down to its bare minimum and extended with the TensorFlowLite functionality.
5) Look at the code in the demo-app and you can see how quick and easy it is to implement this wrapper in your own apps.

The wrapper exposes only two methods, namely:
-Initialize
which initializes TensorFlowLite. It reads the model-file (tflite) and the label-file from the assets-folder. The default input-size is 224.
-recognizeImage
which asks TensorFlowLite/the classifier to recognize a bitmap/image and return the possible results.
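A minimal usage sketch, pieced together from the demo-app and the Initialize call shown later in this thread. The object type name, the meaning of the first (event-name) parameter and the way the results come back are not spelled out here, so treat this as a sketch and check the attached demo-app for the authoritative code:
B4X:
Sub Process_Globals
    'Type name shown here is assumed - check the demo-app for the exact declaration.
    Private tfl As TensorFlowLite
End Sub

Sub Activity_Create(FirstTime As Boolean)
    'The first parameter is assumed to be an event name. 224 is the input-size,
    'followed by the model-file and label-file read from the assets-folder.
    tfl.Initialize("", 224, "graph.lite", "labels.txt")
End Sub

Sub ClassifyBitmap(bmp As Bitmap)
    'Ask the classifier to recognize the bitmap. The possible results are
    'sorted by confidence-score; see the demo-app for how they are returned.
    tfl.recognizeImage(bmp)
End Sub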

Note: the minSdkVersion for the demo-app is 21 because Camera2 requires it. If you don't use Camera2 in your app, then you can probably use a much lower minSdkVersion. It should work with at least minSdkVersion 15, and I read somewhere it might even work with minSdkVersion 4(!), although I haven't tried. You will need to experiment.
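For reference, the minSdkVersion is set in B4A's Manifest Editor; the default entry looks something like this (the values below are just an example to adjust for your own project):
B4X:
'Manifest Editor (Project -> Manifest Editor):
AddManifestText(<uses-sdk android:minSdkVersion="21" android:targetSdkVersion="26"/>)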

Note2: I have included the Java sources too in case someone would like to add/change functionality or keep the wrapper updated and in line with future releases of TensorFlowLite.

Note3: Combined with @JordiCP's excellent wrapper of OpenCV, I think you have a good base to come up with some really nice stuff (although in that case the TensorFlowLite wrapper might need to be customized for your needs).

Ideas for improvements:
-Implement real-time object detection/recognition using the video camera.
-Use cropping to let TensorFlowLite better analyze an image containing multiple objects. Recognition of multiple objects at once is currently not supported by my wrapper.

Please remember that creating libraries and maintaining them takes time, and so does supporting them. Please consider a donation if you use my free libraries, as this will surely help keep me motivated. Thank you!

Enjoy!



 

Attachments

  • B4ATensorFlowLiteLibs.zip
    12.1 KB · Views: 1,004
  • javasrc.zip
    5.1 KB · Views: 887
  • TensorFlowLiteSampleNew.zip
    15.6 KB · Views: 973
  • labels.txt
    11.2 KB · Views: 870

inakigarm

Well-Known Member
Licensed User
Longtime User
New version: 0.20 (29/08/2018):
I have updated the library and the sample-models because the first version was based on older code.
I attach also the updated java-sources.
If you used the first version, please update the demo and the B4A-libs.

I also created my own model to use with the wrapper which works really well. I have put some links to some good resources/tutorials if you want to create a compatible model. Some sample screenshots of my guitar-model are shown in the spoiler in the first post.

Great! Any chance to have a B4J lib? (It seems it supports Java access to their API https://www.tensorflow.org/install/install_java)
 

moster67

Expert
Licensed User
Longtime User
Great! Any chance to have a B4J lib? (It seems it supports Java access to their API https://www.tensorflow.org/install/install_java)
Yes, yesterday I started writing a small wrapper for B4J. It works fine. There are some methods I'd like to add before publishing it, such as dealing with Images (the B4J equivalent of Bitmaps in B4A) in memory. I also want to test it with a model I created myself. Once I have this working, I will publish it.
 

microbox

Active Member
Licensed User
Longtime User
Hi moster67... thank you for this lib. I'm new to this (ML) and I would like to create an app to classify leaves, if this is possible. Any guide that will help me to get started is much appreciated.
 

moster67

Expert
Licensed User
Longtime User
Follow the two tutorials mentioned in the first post.

You need to set up Python on your computer; Mac or Linux is preferable, but it will also work on Windows, although you may need to change some of the parameters you pass to the training scripts.

You can probably record a video of your objects (leaves), i.e. one video for each object. Make sure to record from different angles, in different light conditions, etc. Then you can use ffmpeg to extract single photos of each object and classify them according to the instructions in the tutorial.
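For example, a command along these lines pulls a couple of frames per second out of a clip (the file names and the fps value are just placeholders to adapt):
Code:
ffmpeg -i leaf_video.mp4 -vf fps=2 leaf_frame_%04d.jpg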

Then you train your model, trying to find the right resolution of the photos to use. You need to experiment a lot to avoid overfitting and underfitting; it takes trial and error to get the best results.

Watch tutorials on YouTube and get a book to understand things better.

With patience you will get there. Good luck.
 

microbox

Active Member
Licensed User
Longtime User
@moster67
Hi... Thanks again for the library. The example runs with no problem. I followed the tutorial at the codelab and I have Python and TensorFlow running. My question is: to test the flower photos with your demo application, do I need to remove the existing graph.lite and labels.txt and replace them with the files (same filenames and extensions) produced by tensorflow-for-poets-2 under the android\tflite\app\src\main\assets folder?

I think I'm missing something... it recognises daisy photos only and doesn't recognise roses and other flowers.

regards,
joe
 

moster67

Expert
Licensed User
Longtime User
Add your graph and labels files to your assets-folder. I don't remember exactly, but I think you can name them whatever you want as long as you use the right extensions (i.e. *.lite and *.txt). Then, when you initialize the lib, you pass in the names of the files:
B4X:
ten.Initialize("", 224, "yourgraph.lite", "yourlabels.txt")

If you use my demo-app with different file names, make sure to change the code which verifies whether a file exists in the assets-folder.
At this point, you should be able to remove any previous files (if present).
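The check in the demo-app is something along these lines (not necessarily identical, and the file names here are placeholders):
B4X:
'Verify that the model- and label-files are present in the assets-folder
'(file names are placeholders - use your own).
If File.Exists(File.DirAssets, "yourgraph.lite") = False Or _
   File.Exists(File.DirAssets, "yourlabels.txt") = False Then
    Log("Model or label file is missing from the assets-folder")
Else
    ten.Initialize("", 224, "yourgraph.lite", "yourlabels.txt")
End If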
 

microbox

Active Member
Licensed User
Longtime User
Add your graph and labels files to your assets-folder. I don't remember exactly, but I think you can name them whatever you want as long as you use the right extensions (i.e. *.lite and *.txt). Then, when you initialize the lib, you pass in the names of the files:
B4X:
ten.Initialize("", 224, "yourgraph.lite", "yourlabels.txt")

If you use my demo-app with different file names, make sure to change the code which verifies whether a file exists in the assets-folder.
At this point, you should be able to remove any previous files (if present).
I think I know what I'm missing... the following command does not execute: "tflite_convert --help". This tool is required to convert retrained_graph.pb to a *.lite file. How can I install the tflite converter? BTW, I'm running on a Mac.

thanks,
Joe
 

moster67

Expert
Licensed User
Longtime User
My library allows users to use their graphs on Android, so your question is rather off-topic since your problem seems to be related to setting up the tools needed on your Mac and generating the graphs.

As mentioned earlier, creating the graphs, doing the training while avoiding overfitting/underfitting, and so on takes some time and a lot of trial and error. There are many sources on the internet, such as YouTube tutorials, guides and information on StackOverflow, which can help you. That is the way I did it and eventually I got it right :)

In your case, maybe a dependency was not installed correctly, or maybe the input files in the script(s) are not pointing to the correct file locations, and so on. It's hard for me to tell. You are probably better off asking on StackOverflow, giving as much information as possible about what you have done, and perhaps someone can help you out.

I am sorry I cannot be of any further help right now, partly because I have moved on to other projects which are keeping me really busy, and partly because it was some time ago that I worked on the wrapper and machine learning, so I don't remember everything (I am getting old...).
 

peacemaker

Expert
Licensed User
Longtime User
Hi, all.
Is it possible with this lib to identify objectX among similar object0....object999 that have some image-hashes in a DB?
 

moster67

Expert
Licensed User
Longtime User
I guess you mean through image recognition. If yes, then it should be possible but you need to train your model and test it properly.

The easiest way is to film your individual objects (from different angles and in different light conditions), then label each video (maybe with your image-hashes) and then use ffmpeg to extract the frames. Using the frames, you can train the model. Once you have generated the model with all your existing objects, you can later add new objects when required, repeating part of the training steps.

I wrote some hints earlier in this thread regarding tutorials etc. Be prepared that it will take some time to get good results with the training and to fine-tune.

Good luck.
 

peacemaker

Expert
Licensed User
Longtime User
to get good results with the training and to fine-tune
Thanks. But what would be the probability of correctly recognising an object among almost identical ones? Say, a set of cups with various printings on them.
 

moster67

Expert
Licensed User
Longtime User
Thanks. But what would be the probability of correctly recognising an object among almost identical ones? Say, a set of cups with various printings on them.
As long as there is something which distinguishes them, I think you can expect good results. I know people who have implemented recognition of flowers. The source (video frames) and the training (avoiding overfitting/underfitting) are important. Start practicing and testing and see what results you get.
 

peacemaker

Expert
Licensed User
Longtime User
recognition of flowers

Here, yes, I can imagine a rather noticeable difference between flowers. But what about similar internet-shop items, where the difference is just in some part of the same surface...
And if for this kind the probability is < 99%, I guess such a solution won't be interesting to the customer...
But, it seems, it's an unknown result without a practical test.
 

moster67

Expert
Licensed User
Longtime User
unknown result without practical test
Yep, you need to test. That is part of the learning curve. Trial and error will give you experience and a better understanding of whether a project is viable or not. In this case, there are no shortcuts. Sorry.
 

rkxo

Active Member
Licensed User
Longtime User
Hi, I get an error. Any idea?
OK - the model file is present in the assets-folder
** Activity (main) Resume **
java.lang.NoClassDefFoundError: Failed resolution of: Lorg/tensorflow/lite/Interpreter;
at com.tillekesoft.tensorflowlite.TensorFlowImageClassifier.create(TensorFlowImageClassifier.java:58)
at com.tillekesoft.tensorflowlite.TensorflowWrapper$1.run(TensorflowWrapper.java:47)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.lang.Thread.run(Thread.java:764)
Caused by: java.lang.ClassNotFoundException: Didn't find class "org.tensorflow.lite.Interpreter" on path: DexPathList[[zip file "/system/framework/org.apache.http.legacy.boot.jar", zip file "/data/app/com.tillekesoft-ZLxZoPw0sjbm9tX4UwpIjA==/base.apk"],nativeLibraryDirectories=[/data/app/com.tillekesoft-ZLxZoPw0sjbm9tX4UwpIjA==/lib/x86_64, /data/app/com.tillekesoft-ZLxZoPw0sjbm9tX4UwpIjA==/base.apk!/lib/x86_64, /system/lib64]]
at dalvik.system.BaseDexClassLoader.findClass(BaseDexClassLoader.java:134)
at java.lang.ClassLoader.loadClass(ClassLoader.java:379)
at java.lang.ClassLoader.loadClass(ClassLoader.java:312)
... 5 more
 

ZJP

Active Member
Licensed User
Longtime User
Hi,

Great work! Building models for TensorFlow is still hard, but an exciting new AI tool called Lobe.ai will change all of this very soon. With that tool you can create any AI model you like and export it to TensorFlow.

It's not released yet but it promises a lot!
Done : Lobe AI is available.


 

SJQ

Member
Licensed User
Longtime User
Done : Lobe AI is available.


Unfortunately, Lobe models cause this error:
Internal error: Cannot create interpreter: Op builtin_code out of range: 111. Are you using old TFLite binary with newer model? Registration failed.
 