B4ML-Kit - The Journey
There is a long back-story here, but I will simply describe
what I did without explaining why I chose to do certain
things in a certain way.
This is a proof of concept; hopefully, an inspiration to
someone accustomed to working with Android Studio
or IntelliJ IDEA (which is what AS is built on anyway).
Code for ML-Kit is available as an AS project. I built
the OCR/Barcode/Face Recognition models, as well
as the Translation model. ML-Kit replaces the older
ML Vision and Firebase ML-Kit models. There are (at least)
two important effects: 1) because ML Vision is deprecated,
it will see no new development, and 2) the new ML-Kit
adds languages to its OCR and Translation models
that were not previously available on-device.
It is now possible, for example, to extract text from a Chinese
document and translate it into English, French, or another
supported language.
I wanted to do this using B4A.
Please review the attached images.
There are two apps involved: the ML-Kit model (which needs
to be modified and built) and a B4A "driver".
I chose some German and Chinese text from our forum and
translated it into English, French and Spanish.
Basically, what was involved was passing the text, together
with the source and destination languages, to ML-Kit as an
intent for result. On the ML-Kit side, I modified it to accept
such an intent and return the translation to the B4A caller.
I also massaged ML-Kit so that it would still function normally
when it wasn't launched via that intent.
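To make that concrete, here is a minimal B4A sketch of the driver side. This is an illustration, not my exact code: the action string and the extra names ("text", "source", "target", "translation") are placeholders that must match whatever the modified ML-Kit app actually registers, the Intent type comes from the Phone library, and the StartActivityForResult helper is the usual JavaObject/IOnActivityResult pattern from the forum.

```b4a
Private ion As Object  ' holds the IOnActivityResult event object

' Send the text plus source/destination languages to the modified ML-Kit app.
' Action and extra names are placeholders - match them to your ML-Kit build.
Sub TranslateViaMLKit(Text As String, Src As String, Dst As String)
    Dim i As Intent
    i.Initialize("com.example.mlkit.action.TRANSLATE", "")
    i.PutExtra("text", Text)
    i.PutExtra("source", Src)
    i.PutExtra("target", Dst)
    StartActivityForResult(i)
End Sub

' Standard B4A pattern for calling startActivityForResult through JavaObject
Sub StartActivityForResult(i As Intent)
    Dim jo As JavaObject = GetBA
    ion = jo.CreateEvent("anywheresoftware.b4a.IOnActivityResult", "ion", Null)
    jo.RunMethod("startActivityForResult", Array As Object(ion, i))
End Sub

Sub GetBA As JavaObject
    Dim jo As JavaObject
    Dim cls As String = Me
    cls = cls.SubString("class ".Length)
    jo.InitializeStatic(cls)
    Return jo.GetField("processBA")
End Sub

Sub ion_Event (MethodName As String, Args() As Object) As Object
    ' Args(0) = result code (-1 = RESULT_OK), Args(2) = the returned Intent
    If -1 = Args(0) And Args(2) <> Null Then
        Dim result As JavaObject = Args(2)
        Log(result.RunMethod("getStringExtra", Array As Object("translation")))
    End If
    Return Null
End Sub
```

The modified ML-Kit activity then just reads those extras in onCreate, runs the translation, and calls setResult with the "translation" extra before finishing; if the extras are absent, it falls through to its normal UI.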
For the ML-Kit OCR model, I stripped all the garbage out so
that a scanned Chinese document could pass its result to
the Translation model (and to B4A).
Attachments
- 1.png
- b4mlkit.png
- de-en1.png
- de-en2.png
- zh-en2.png
- zh-en1.png
- de-fr2.png
- de-fr1.png
- zh-es1.png
- zh-es2.png