B4A Library OpenCV 3.x

OpenCV (Open Source Computer Vision Library) is a huge, actively developed framework, written mainly in C++. It is released under a BSD license.
Read more here: http://opencv.org/
OpenCV versions: https://opencv.org/releases/


OpenCV library for B4A: it wraps the official OpenCV 3.x release for Android (not all of it, but about 95%)
  • Feel free to test and use it. You can even donate for it if you find it useful :)
  • License: you can use it for your projects, but you are not allowed to distribute or sell this library. Of course you can distribute apps that use it (remember that OpenCV itself has a BSD license, as stated before)
  • Supported hardware: armeabi-v7a and arm64-v8a
  • Versions
    • 1.04 (2020/05/17)
      • This version wraps the OpenCV 3.4.1 Android release and fixes some bugs of the previous version (mostly some instance methods and some classes that were not exposed).
      • One of the major additions is the DNN module and related classes.
      • Link to the B4A library files: HERE :)
    • 1.00 (2017/09/27)
      • First B4A library wrapper (OpenCv320forB4A V1.00), which replicates about 95% of the official OpenCV 3.20 Java API for Android.
      • (removed link. Use 1.04)

  • Please note that my support will be limited to issues with the wrapper itself, not to helping translate OpenCV code from other languages to B4A.


========================================================================

(There may be some inaccuracies in this post regarding the OpenCV project, since I am relatively new to it and don't know all of its internals. If you find any, please let me know and I will correct them.)

A bit of explanation
There exist 'official' OpenCV wrappers for different languages and platforms. Android is one of them.
The official OpenCV 3.20 for Android API includes a lot of classes, organized in modules. But it does not include "all" the original OpenCV modules (there are other 'experimental', non-free, or platform-specific modules which may be present on other platforms but not for Android). Also, there are build options to "tune" it...

I have played quite a lot with it this last year, with a huge project which I started with inline Java, and also translating examples and testing features. But what I have used is just a small percentage of the exposed classes and methods. So there may be some (let's hope not too many) things to fix.

How to learn OpenCVforB4A
If you have worked before with OpenCV, the learning curve will be easy.
If it was using Java with OpenCV for Android, then it will be immediate, since all the methods have exactly the same syntax (except for initializers, polymorphism, and some special cases where I simply did my best). A small naming sketch follows the list below.
Anyhow, the ways that I can think of are (I will add links later; suggestions such as online tutorials are also welcome):
  • Attached examples.
  • B4A OpenCV Tutorials. I will write a couple of them with what I consider the most important building pieces (for instance the Mat class, which in B4A is OCVMat) and modules
  • Internet examples: there are A LOT of examples out there, written in C++, Java, Python, JavaCV. I would look for examples in the language that is easiest for you to understand and then try to translate them. Some tips (based on my experience):
  • OpenCV syntax has changed across versions. There is an 'old' syntax in which nearly everything started with "cv...". Since version 3.x the organization is 'cleaner' (the project is written in C++ instead of C), and there were major syntax changes.
  • JavaCV: Translating from JavaCV to OpenCV should be quite easy but not always direct. JavaCV uses a mix of the old OpenCV syntax with some of its own, and at the beginning it can be a bit confusing, but then it is also easy.
  • Python: there is a lot of material...
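To illustrate the naming convention, here is a small sketch (the exact spelling is an assumption on my side: OCVMat and OCVImgProc are the wrapper names used in this thread, and it assumes the method and constant names carry over from the Java API unchanged):
B4X:
'Naming sketch only, not a complete program.
'Java (official OpenCV for Android API):
'    Imgproc.cvtColor(srcMat, dstMat, Imgproc.COLOR_RGB2GRAY)
'B4A through this wrapper (assumed mapping: same method name and arguments, classes renamed with the "OCV" prefix):
'    mImgProc.cvtColor(srcMat, dstMat, mImgProc.COLOR_RGB2GRAY)
'where mImgProc is declared As OCVImgProc and srcMat/dstMat are OCVMat objects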

First steps. Prepare for some crashes...
  • In OpenCV nearly everything takes place in native code.
  • When we call a Sub/method/algorithm, it performs some internal checks to see if all the input data is correct. This check is performed on the native side. If something is not correct (wrong OCVMat dimensions, some incoherent parameters, ...) it throws an exception and crashes. If we are lucky, perhaps we see in the log some clues about the check that made it crash (see the small Try/Catch sketch after this list).
  • On the good side, it is very easy to achieve results with OpenCV (check the examples). The real difficult part, as with many other things, is to fine-tune it: OpenCV has a collection of really powerful 'primitive' objects and operations, and really complex algorithms that can do many things. But it is the user who has to glue all of them to achieve the desired results.
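As a guard against that, here is a minimal sketch (sketch only, untested: mFaceDetector, mGray, faces, mSize and mSize2 are assumed to be set up as in the attached FaceDetector example; in many cases the failed check surfaces as a Java exception that Try/Catch can intercept, although a hard native crash still cannot be caught):
B4X:
'Minimal sketch: wrap a native OpenCV call so a failed internal check shows up in the log.
Try
    mFaceDetector.detectMultiScale(mGray, faces, 1.1, 2, 2, mSize, mSize2)
Catch
    Log("OpenCV check failed: " & LastException.Message)   'e.g. wrong Mat type or dimensions
End Try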


(from the previous Beta announcement)
  • IMPORTANT: you must take this into account:
  • OpenCV (the included binary modules) is a free(*) project, but subject to license terms as described here: http://opencv.org/
    • (*): There are some modules in the OpenCV project which are non-free, but here I am referring to the ones included in the library
  • My work (the B4A library) is free to test and use, but you can donate for it :). I'll keep donors updated with "advanced" material and examples
  • If you are interested, please PM me with your mail address and I will send you a link with the library and some basic examples. (be patient if you don't receive it immediately, I'll do it as soon as possible).
  • There is no documentation. In short, the syntax is nearly exactly the same as the OpenCV 3.20 Java API, adding an "OCV" prefix and only the minimum modifications needed to adapt it to B4A. For reference (taking into account the described syntax changes) you can look at http://docs.opencv.org/java/3.1.0/ (which is not the latest one, but the API is nearly the same).
  • It would be preferable if you have worked before with OpenCV and/or can translate examples from Java/C++ and/or simply are interested in it.
  • I recommend starting with the examples and trying to understand what is done. Just experimenting can lead to crash after crash of the native libraries with nearly no useful information, and can be very discouraging.
  • I forgot: the included binaries are for armeabi-v7a and arm64-v8a devices
---------------------------------------------------------------------
Some screenshots taken from the examples
Canny operator - Features2D - Color space conversion (s1.png)
2D-FFT (s2.png)
Color Blob detection (s3.png)
 

Attachments

  • JavaCameraView2.zip (2.7 KB)
  • CameraOpenCvTest7.zip (8.9 KB)
  • BlobDetector5.zip (15.3 KB)
  • FaceDetector8.zip (21 KB)

JordiCP

Expert
Licensed User
Longtime User
Do you mean that the example is not accurate when using the cascadG file? How isn't it accurate? ---> are the coordinates not correct, or does it not detect what you want?
 

jchal

Active Member
Licensed User
Longtime User
cascadG file
it does not detect a human when one passes in front of the camera; it detects many other things but not a human. Why?
 

JordiCP

Expert
Licensed User
Longtime User
There may be different issues: one of them is that the xml file is poorly trained. Later I'll try to find a better trained file and adapt the example to see if it works better.

Also, a bit of background is needed to understand why things don't work as expected. If you don't make the effort to investigate yourself what each function does, and what the parameters mean, I won't be able to assist you anymore.
 

jchal

Active Member
Licensed User
Longtime User
Maybe it is possible to have poorly trained xml files. What I try to do is to understand how it works, therefore I tried different xml files to see if they work.
If you try it yourself you will see what I mean.
My phone locates something in a second, but it is a chair with my shirt on it and it is not a human; the next second it detects the mirror, and it is not a human, the mirror shows a window, not a human.
The other xml file is for detecting fire; well, it detects different things every second.
In both cases, what I think the proper behaviour should be:
for the 1st case (human):
no detection until the camera sees a human
in the 2nd case:
no detection until the camera sees a fire or smoke (I still don't know what the xml file actually targets, is it fire or smoke).
Therefore I am asking the community, which has more experience than me: how can I correct this fault, if possible?
Is it a poor xml, or is it something else?
Also, the idea of the detection is the same no matter if it is a face, car, body or cat. Please correct me if I am wrong.
 

JordiCP

Expert
Licensed User
Longtime User
OK, I'll try to give a little bit of background. Please understand that everything I say here can be found on the internet, and possibly much better explained than I do here. That's why I can't write tutorials about it (and why the only advice I can give is to read and investigate, and never assume that things have to work in a certain way).

Background: there are different types of "detection" techniques in OpenCV, all of them based on some 'properties' (visible or mathematical) of the image or group of images.
  • There are, for instance, detectors based on color, where compact groups of pixels which are 'near' to a given color are detected and then can be classified based on size, shape, ...
  • Other detectors can be based on movement: for instance, if the camera is static you can compute the difference between frames to find moving objects (see the small frame-difference sketch after this list).
  • Then there are some more complex ones, based on cascade classifier files (which is the case for the face detector, human body, ...). They need 2 parts to make them work:
    • On one side, some files (don't ask me how they work internally) that have been trained with hundreds (preferably thousands) of "positive" images and hundreds of negative images of a certain "thing" (an eye, a license plate, a donkey, an apple). If you have trained it for apples, and you have used only green apples, then when you try to detect a red apple it won't succeed. The same goes for faces: if you have only trained it for frontal-looking faces of people without glasses, it will probably fail to detect anything when shown the slightly rotated face of someone with glasses.
    • On the other side, you have the 'detector'. The detector is nothing more than an algorithm that takes the frames as input and, according to some parameters and the cascade classifier file, gives some outputs. These outputs basically mean that it has found enough features in the image which, compared to the file, seem to contain the object it was looking for. The configuration parameters that can affect detector performance are:
B4X:
  mFaceDetector.detectMultiScale(mGray,faces,1.1,2,2,mSize,mSize2)  
  '1st parameter: source Mat
  '2nd parameter: list where the detector will place the detected objects (in this case, faces)
  '3rd parameter: scaleFactor --> the factor by which the algorithm will consecutively reduce the source Mat to find features (the value used here, 1.1, is a typical recommended value)
  '4th parameter: minNeighbours
  '5th parameter: flags
  '6th parameter: minSize --> minimum size of the object to detect. If you know that objects must appear in the image bigger than a certain size, this will help the detector
  '7th parameter: maxSize --> the same for the maximum size. Leave uninitialized if not used.
      • All this information can be found HERE, if you google for opencv + detectmultiscale


  • In short, just think that both the detector and the files are really powerful but very silly, because they will only work based on the information they have. For instance, if the "fire" file has been trained only with fire from a match, but not from a forest fire, perhaps its color (bluish versus reddish, or other) or its shape will be so different that it will say: no, the image you are showing me is not similar enough to the ones I have in my database, so for me it is not 'fire'. So the detector won't detect anything, and you will think that it does not work...
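Going back to the movement-based detection mentioned above, here is a minimal frame-difference sketch (sketch only, untested; it assumes mCore is an OCVCore, mImgProc an OCVImgProc, that absdiff, threshold and countNonZero keep the names and arguments of the OpenCV 3.x Java API, and that prevGray and mGray are the previous and current grayscale frames from the camera examples):
B4X:
'Sketch only: movement detection with a static camera, by differencing consecutive frames.
Dim diff As OCVMat
diff.Initialize                              'assuming OCVMat is created this way, as in the examples
mCore.absdiff(mGray, prevGray, diff)         'per-pixel absolute difference between the two frames
mImgProc.threshold(diff, diff, 25, 255, 0)   'keep only strong changes (type 0 = THRESH_BINARY)
If mCore.countNonZero(diff) > 500 Then       'arbitrary pixel-count threshold: tune it for your scene
    Log("Movement detected")
End If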
 

cramarc

New Member
Licensed User
Longtime User
Hello JordiCP

As a user of B4A for some years, but a newbie in OpenCV,
I would be interested in trying your library for some robot experiments.
I hope my skill and research on the internet, and of course your lib,
will help me to do some recognition in real time on a small robot.
If successful, I hope to combine it with IoT (MQTT).
Please, if the lib is still available, let me know how to download it.

Many thanks
CraMarc (Huyghe Marc)
 

JordiCP

Expert
Licensed User
Longtime User
Sure, please PM me with your mail address.

Are these robots running Android (or sending the acquired images to an Android device)?
 

cramarc

New Member
Licensed User
Longtime User
Hello JordiCP,

Thanks for the fast response.

In fact I will try to use the Android device as an intelligent sensor (internal camera) which does some pre-processing on the captured pictures.
The results should be shared in an "Internet of Things" network (MQTT) where other units (mostly ESP8266) will do
the other jobs. It will be a sort of multiprocessing with a very flexible and wide programming experiment.
It could also be done via a "client-server" network, as the ESPs do have WiFi.
The Android device does not need to activate external hardware directly.
I have already done the first part (driving and control with simple sensors),
but working with a camera will be totally new for me, and I am a little afraid of not having enough know-how of these techniques (maths).

My E-Mail is huyghe_marc@telenet.be

Greets from DEINZE in BELGIUM (EU) and thanks in advance.

CraMarc ( Marc )
 

jchal

Active Member
Licensed User
Longtime User
In the face detector I want to mark an area like a square, or place a line, and every time the face passes the line, or every time it is within the marked area, display a message on screen. How can I do it?
How can I draw the line or the square and tell it to send me a message on screen?
 

stu14t

Active Member
Licensed User
Longtime User
In the face detector I want to mark an area like a square, or place a line, and every time the face passes the line, or every time it is within the marked area, display a message on screen. How can I do it?
How can I draw the line or the square and tell it to send me a message on screen?

I think, to be fair to @JordiCP, he's done a magnificent job in porting this to B4A and now it's over to us, as programmers, to work out for ourselves what is needed to achieve our goals.

Yes, do ask him if it's specifically to do with the lib, but also learn for yourself how to do these things; it is far more rewarding and you learn quicker too.
 

jchal

Active Member
Licensed User
Longtime User
I think, to be fair to @JordiCP, he's done a magnificent job in porting this to B4A and now it's over to us, as programmers, to work out for ourselves what is needed to achieve our goals.

Yes, do ask him if it's specifically to do with the lib, but also learn for yourself how to do these things; it is far more rewarding and you learn quicker too.
Dear stu14t, I understand and agree that JordiCP has done a great job with this lib, but you must bear in mind that different people learn in different ways.
My question to the community was what way I should use in order to do this. I am a learner, you see, not an expert!
 

moster67

Expert
Licensed User
Longtime User
Personally I think the best way is to try to translate the examples you can find on the Internet. You can probably find examples which are similar to what you want to do by googling and/or studying many of the examples available. It's true that most of them are either in C++ or Python but if you examine the code you should be able to translate the code to be used with B4A/B4A OpenCV wrapper.
 

JordiCP

Expert
Licensed User
Longtime User
I agree. The fact is that the OpenCV user base in B4A isn't yet broad enough (and perhaps never will be, since it is quite specific and not trivial) for there to be more examples to learn from, as a "first contact" companion before jumping to the internet.

In the face detector I want to mark an area like a square, or place a line, and every time the face passes the line, or every time it is within the marked area, display a message on screen. How can I do it?
How can I draw the line or the square and tell it to send me a message on screen?
Take a look at the faceDetector5 example (first post in this thread). I show how to draw a rectangle on the screen (the one that bounds the detected face, if any). So this is the way to draw. Similarly, to draw a line, you can use
B4X:
mImgProc.line(....)    '<-- mImgProc has been declared at the beginning of the example as Dim mImgProc As OCVImgProc
Note that most of the 'static' utility methods are in mImgProc (which has type OCVImgProc) and mCore (which has type OCVCore)
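As a sketch of what the full call could look like (the argument list mirrors Imgproc.line from the Java API; mRgba is the Mat shown on screen as in the examples, while pt1, pt2 and lineColor would be an OCVPoint pair and an OCVScalar prepared beforehand, so treat the exact names as assumptions):
B4X:
'Sketch only: draw a 2-pixel-thick line between two points on the displayed Mat.
mImgProc.line(mRgba, pt1, pt2, lineColor, 2)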

You want to draw another rectangle or line on the screen and be alerted when the face crosses the line ---> OpenCV will not do it for you. But you already have all the information, so it is just programming. The detected face is inside a bounding rectangle (the one that we draw in the example). You just have to compare its coordinates (explore the fields of OCVRect) with those of the bigger rect or the line, and if the comparison gives the expected result, you can draw text on the screen with mImgProc.putText (or putText1 or putText2, depending on the parameters).
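A rough sketch of that comparison (assumptions: OCVRect exposes the same x, y, width and height fields as the Java Rect; faceRect is the detected face's rectangle as drawn in the example; zone is the OCVRect you drew yourself; mRgba is the Mat shown on screen; textOrigin (OCVPoint) and textColor (OCVScalar) have been prepared beforehand):
B4X:
'Sketch only: show a message when the centre of the detected face falls inside a user-defined zone.
Dim cx As Int = faceRect.x + faceRect.width / 2
Dim cy As Int = faceRect.y + faceRect.height / 2
Dim insideX As Boolean = (cx >= zone.x) And (cx <= zone.x + zone.width)
Dim insideY As Boolean = (cy >= zone.y) And (cy <= zone.y + zone.height)
If insideX And insideY Then
    'arguments mirror the Java Imgproc.putText(mat, text, origin, fontFace, fontScale, color);
    'use putText1/putText2 if your parameter list matches one of the other overloads
    mImgProc.putText(mRgba, "Face inside the zone!", textOrigin, 0, 1, textColor)
End If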


I know there is not a lot of material, but I spent many hours building the examples to help, so the first thing to do is to try to understand them, since they can answer some questions.
 

barzaghino1

New Member
Licensed User
Longtime User
I'd be interested to see your work

Many thanks for creating this.

PM on the way

Tell me, how much is the donation? Thanks.
 

padvou

Active Member
Licensed User
Longtime User
Hello,
could you please help me recognize a specific shape, for example a rectangle or a circle, in an image?
I'm new to this and I would appreciate any help.
Thank you.
 

JordiCP

Expert
Licensed User
Longtime User
Hi padvou,
You must think of OpenCV as a big 'set' of powerful functions for processing. The problem can be faced in different ways, but it will always work better if it has some 'clues' (as many as you can give it)

For instance:
Is it a preloaded image or real-time input from a camera?
In the case of a circle: is it always filled with a color (and is it always the same color)? What is the minimum and maximum size (relative to the picture dimensions) that it can have?
For a square, nearly the same questions; also, will the square be aligned to the picture edges or can it be in 'perspective'?

The 'optimum' way to program it will depend on the answers to these questions. With these answers (and if you can send an example picture, even better), I (or others) will try to help you. :)
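Just to make it concrete, one possible starting point for the circle case is the Hough circle transform (sketch only, untested; it assumes the wrapper exposes medianBlur and HoughCircles on OCVImgProc with the same arguments as the OpenCV 3.x Java API, and that mGray is the grayscale OCVMat already available in the camera examples):
B4X:
'Sketch only: look for circles of unknown size and get their centres.
Dim circles As OCVMat
circles.Initialize                                   'assuming OCVMat is created this way, as in the examples
mImgProc.medianBlur(mGray, mGray, 5)                 'a little smoothing reduces false circles
'arguments mirror Imgproc.HoughCircles(src, dst, method, dp, minDist, param1, param2, minRadius, maxRadius)
'method 3 = HOUGH_GRADIENT, dp = 1, minDist = 50 px between centres,
'param1 = Canny threshold, param2 = accumulator threshold, radius range 0 = unknown beforehand
mImgProc.HoughCircles(mGray, circles, 3, 1, 50, 100, 30, 0, 0)
'each detected circle is stored in 'circles' as a (centerX, centerY, radius) triplet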
 

padvou

Active Member
Licensed User
Longtime User

Hi Jordi,
(attached image: upload_2017-7-14_15-18-33.png)

This is something like the one I would like to detect in an image loaded in a panel.
Of course I can change it so that the middle circle is black, etc.
Unfortunately, the relative dimensions are not known beforehand. As the camera can move back and forth, the relative dimensions get smaller or larger, but the shape to be detected remains the same.
The goal is to detect the position of the center of the circle in the panel, so that I can take the X and Y coordinates for further use.
 
