Sharing some fun with my 6 DOF robotic arm (RAVA, Robotic Arm Version A) grabbing squishy characters using computer vision and object classification.
Brief description of the application:
- 6 DOF robotic arm, actuated by MG996R analog servos driven by a PCA9685 board, which communicates with a WeMos ESP8266 via I2C;
- WeMos running B4R code implementing the robot commands, kinematic inversion and an MQTT client to communicate with the PC app;
- PC app coded in B4J, implementing the MQTT interface with the WeMos and with the Python macro;
- Python macro implementing OpenCV object identification and applying a TensorFlow/Keras model to the identified objects for classification. Needless to say, both OpenCV and TensorFlow/Keras are incredibly good libraries...
- CNN model trained with 150 pictures of Hulk and 150 of Venom (30% used for validation).
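To give an idea of what driving the MG996R servos through the PCA9685 involves: the board generates 12-bit PWM (4096 ticks per period), so a servo pulse width in microseconds has to be converted to a tick count at the chosen PWM frequency. Below is a minimal Python sketch of that arithmetic; the 500–2500 µs pulse range and the 50 Hz frequency are assumptions (typical hobby-servo values, not taken from the post), and the helper names are mine, not from the actual B4R code.

```python
def angle_to_us(angle_deg: float, min_us: float = 500.0, max_us: float = 2500.0) -> float:
    """Map a 0..180 degree servo angle linearly onto a pulse-width range.
    The 500-2500 us range is an assumed typical value, not from the project."""
    angle_deg = max(0.0, min(180.0, angle_deg))
    return min_us + (max_us - min_us) * angle_deg / 180.0

def us_to_ticks(pulse_us: float, freq_hz: float = 50.0, resolution: int = 4096) -> int:
    """Convert a pulse width in microseconds to a PCA9685 'off' tick count.
    The PCA9685 is a 12-bit PWM controller: 4096 ticks per period."""
    period_us = 1_000_000.0 / freq_hz  # 20,000 us per period at 50 Hz
    return round(pulse_us / period_us * resolution)
```

For example, a 90-degree command maps to a 1500 µs pulse, which at 50 Hz corresponds to roughly 307 of the 4096 ticks.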
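The "kinematic inversion" running on the WeMos is not shown in the post; as a rough illustration of the idea, here is the classic closed-form solution for a planar 2-link arm (law of cosines for the elbow, then the shoulder from the target direction). A real 6 DOF solution is considerably more involved; this is only a sketch of the core geometry, with link lengths and function names of my own choosing.

```python
import math

def ik_2link(x: float, y: float, l1: float, l2: float):
    """Inverse kinematics for a planar 2-link arm (one of the two solutions).
    Returns (shoulder, elbow) joint angles in radians, or None if the
    target (x, y) is out of reach for link lengths l1, l2."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow angle directly.
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        return None  # target unreachable
    theta2 = math.acos(c2)
    # Shoulder angle: direction to target minus the offset caused by the elbow.
    k1 = l1 + l2 * math.cos(theta2)
    k2 = l2 * math.sin(theta2)
    theta1 = math.atan2(y, x) - math.atan2(k2, k1)
    return theta1, theta2
```

A quick sanity check is to run the result back through the forward kinematics (x = l1·cos θ1 + l2·cos(θ1+θ2), and similarly for y) and confirm the target is recovered.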
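On the training side, the 300 pictures with 30% held out for validation works out to 210 training and 90 validation images. A hypothetical helper for that kind of split might look like this (the seeded shuffle and function name are my own illustration, not the author's actual pipeline):

```python
import random

def split_dataset(files, val_fraction=0.30, seed=42):
    """Shuffle a list of image paths and split it into (train, validation).
    A fixed seed keeps the split reproducible between runs."""
    files = list(files)
    rng = random.Random(seed)
    rng.shuffle(files)
    n_val = int(len(files) * val_fraction)
    return files[n_val:], files[:n_val]
```

With the 150 Hulk plus 150 Venom images described above, this yields 210 training and 90 validation samples.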