1. Robot Programming Tools
2. Robotics Academy
3. Drones
4. Visual SLAM
5. DeepLearning
6. Previous projects
We work on several development areas: robot programming tools, learning robotics, drones, SLAM algorithms and DeepLearning. All of them are open for collaboration.
Robot Programming Tools
VisualStates is a tool for programming robot behaviors using automata. It combines a graphical language to specify the states and the transitions with a text language (Python or C++). It generates a ROS node which implements the automaton and displays a GUI at runtime showing the currently active state, for debugging. Take a look at some example videos.
- Pushkal Katara: https://jderobot.org/Club-PushkalKatara
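Conceptually, the node that VisualStates generates runs a loop that checks the active state's transitions and executes its code. The following is a minimal sketch of that idea in plain Python; the class and state names are illustrative, not the tool's actual generated output.

```python
# Minimal sketch of a VisualStates-style automaton (illustrative names,
# not the tool's real generated code).

class State:
    def __init__(self, name, action):
        self.name = name
        self.action = action          # code run while the state is active
        self.transitions = []         # (condition, target_state) pairs

    def add_transition(self, condition, target):
        self.transitions.append((condition, target))

class Automaton:
    def __init__(self, initial):
        self.active = initial

    def step(self, sensors):
        # Fire the first transition whose condition holds, then run the state.
        for condition, target in self.active.transitions:
            if condition(sensors):
                self.active = target
                break
        return self.active.action(sensors)

# Example: a stop/go behavior driven by an obstacle flag.
go = State("go", lambda s: "forward")
stop = State("stop", lambda s: "halt")
go.add_transition(lambda s: s["obstacle"], stop)
stop.add_transition(lambda s: not s["obstacle"], go)

fsm = Automaton(go)
print(fsm.step({"obstacle": False}))  # forward
print(fsm.step({"obstacle": True}))   # halt
```

In the real tool this loop lives inside a ROS node, the conditions read subscribed topics, and the runtime GUI highlights `fsm.active`.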
Adding full compatibility with ROS
The current release of VisualStates only supports publishing and subscribing to topics. We aim to integrate all the communication features of ROS, as well as basic packages that would be useful for behavior development. Within the scope of this project, the following improvements are targeted for a new release of the VisualStates tool:
- Integration of ROS services: behaviors will be able to call ROS services.
- Integration of ROS actionlib: behaviors will be able to call action servers.
- Reading and generating SMACH behaviors from VisualStates, so that existing SMACH behaviors can be imported, modified and regenerated.
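To make the service integration concrete, here is a hedged sketch of how a generated behavior might expose service calls to its states. In the real node a `rospy.ServiceProxy` would back the client; here a plain-Python stub stands in, and all names (`ServiceClient`, `reset_world`) are hypothetical.

```python
# Sketch: exposing ROS service calls to VisualStates states.
# A rospy.ServiceProxy would back `call` in the generated node;
# this stub only illustrates the control flow.

class ServiceClient:
    """Stand-in for rospy.ServiceProxy('/reset_world', Empty)."""
    def __init__(self, handler):
        self._handler = handler

    def call(self, *args):
        return self._handler(*args)

class Behavior:
    def __init__(self, services):
        self.services = services          # name -> ServiceClient

    def on_enter_reset_state(self):
        # A state's entry code can now call a ROS service by name.
        return self.services["reset_world"].call()

behavior = Behavior({"reset_world": ServiceClient(lambda: "world reset")})
print(behavior.on_enter_reset_state())
```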
Library of parameterized automata
Every automaton created using VisualStates can itself be seen as a state and integrated into a larger automaton. Therefore, the user would be able to add previously created behaviors as states. When importing those behaviors, the user would have two options: copying the behavior into the new one, or keeping a reference to the imported automaton, so that if it changes, those changes are also reflected in the new behavior. The idea of this project is to build and support an automata library. There will be a library of predefined behaviors (automata) for coping with common tasks, so the user can simply integrate them as new states in a new automaton without writing any code. In addition, each automaton may accept parameters to fine-tune its behavior. For example, for moving a drone forward there will be a 'moveForward' state, so the user only has to import that state, indicating the desired speed as a parameter.
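A parameterized library behavior could look roughly like the sketch below: the user imports 'moveForward' and only supplies the speed. The class and field names are illustrative guesses at the design, not the library's API.

```python
# Sketch of a parameterized behavior from the proposed automata library.
# Names are illustrative; in the real tool this state would publish
# velocity commands to the drone instead of returning a dict.

class MoveForward:
    def __init__(self, speed):
        self.speed = speed  # parameter fine-tuning the imported behavior

    def run(self):
        return {"vx": self.speed, "vy": 0.0, "vz": 0.0}

# The same library behavior imported twice, as two states of a larger
# automaton, differing only in their parameter:
slow = MoveForward(speed=0.2)
fast = MoveForward(speed=1.0)
print(slow.run(), fast.run())
```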
PyOnArduino: Compiling Python to Arduino language
JdeRobot-Kids is an academic framework for teaching robotics to children in a practical way. It is based on Python: the kids have to program typical robot behaviors, like follow-line, in Python. JdeRobot-Kids is now mostly centered on the mbot robot from Makeblock, both the real robot and the simulated one in Gazebo. Mbot is an Arduino-based robot. Currently the student application runs on a regular computer, which is connected to the onboard Arduino; the Arduino and the PC interact using the Firmata protocol. This approach requires a continuous connection between the robot and the off-board computer. The Arduino is limited in computing power, so it cannot run a Python interpreter. The goal of this project is to "compile" the Python application for the Arduino microprocessor. This way the kid's program can be fully downloaded onto the mbot robot and run completely autonomously. Another possibility is to translate the Python application to C/C++, as gcc/g++ already compiles that for the Arduino microprocessor. Some ideas to explore: the LLVM compiler infrastructure, Cython...
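As a toy illustration of the translate-to-C route, the sketch below uses Python's own `ast` module to turn a tiny arithmetic expression into C source text. A real translator (or Cython/LLVM) must of course cover far more of the language; this only shows the general mechanism.

```python
# Toy Python-to-C expression translator using the ast module.
# Only arithmetic on names and constants is handled.

import ast

C_OPS = {ast.Add: "+", ast.Sub: "-", ast.Mult: "*", ast.Div: "/"}

def expr_to_c(node):
    if isinstance(node, ast.BinOp):
        return "(%s %s %s)" % (expr_to_c(node.left),
                               C_OPS[type(node.op)],
                               expr_to_c(node.right))
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Constant):
        return repr(node.value)
    raise NotImplementedError(type(node).__name__)

def py_expr_to_c(source):
    tree = ast.parse(source, mode="eval")
    return expr_to_c(tree.body)

print(py_expr_to_c("speed * 2 + offset"))  # ((speed * 2) + offset)
```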
VisualCircuit is a tool for programming robot behaviors using a digital-electronics language and its abstractions. In reconfigurable circuits (FPGAs), a hardware description language is used to visually specify the circuit configuration and its behavior. For instance, the open source IceStudio tool uses such a visual language to configure FPGAs. The idea of this project is to explore the use of the same visual language to program reactive robot behaviors. There are blocks (existing circuits) and wires connecting their inputs and outputs. Instead of synthesizing the visual program into an FPGA implementation, the goal is to synthesize it into a Python program. Each block is translated into a thread that runs a transform function at fast iterations: each iteration reads the block's inputs, does some specific processing to compute the right values, and writes them to its outputs. Each wire is translated into a shared variable which the blocks can write to or read from. The expected result is a new tool for programming reactive robot behaviors using a visual language based on blocks and wires.
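The synthesis target described above (block = thread with a transform function, wire = shared variable) can be sketched directly in Python. All names here are illustrative; the real tool would generate code of this shape from the visual program.

```python
# Sketch of the proposed synthesis target: each block becomes a thread
# running its transform function in a loop, each wire a shared variable.

import threading
import time

class Wire:
    """Shared variable connecting block outputs to inputs."""
    def __init__(self, value=0.0):
        self._value = value
        self._lock = threading.Lock()

    def read(self):
        with self._lock:
            return self._value

    def write(self, value):
        with self._lock:
            self._value = value

def block(transform, inputs, outputs, stop, period=0.01):
    """Run `transform` at fast iterations until `stop` is set."""
    def loop():
        while not stop.is_set():
            outputs.write(transform(inputs.read()))
            time.sleep(period)
    return threading.Thread(target=loop)

stop = threading.Event()
sensor, command = Wire(), Wire()
gain = block(lambda x: 2.0 * x, sensor, command, stop)  # a 'gain' block
gain.start()
sensor.write(0.5)
time.sleep(0.1)          # let the block iterate a few times
stop.set()
gain.join()
print(command.read())    # 1.0 after a few iterations
```

In a full program there would be one thread per block and one `Wire` per connection, matching the wires drawn in the visual editor.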
Robotics-Academy is a framework for learning robotics and computer vision, built on the JdeRobot Foundation. It is composed of a collection of cool exercises in Python about robot programming and computer vision. Each exercise includes a Python application that connects to the real or simulated robot and provides a template that the students have to fill in with their code for the robot algorithms.
- Carlos Awadallah (grad): Robotics-Academy
- Pablo Moreno (grad): Robotics-Academy
- Irene Lope (grad): new exercises in Robotics-Academy
- Ignacio Malo (grad): robotic manipulator exercise in Robotics-Academy
- Arsalan Akhter (GSoC-2018): Robotics-Academy
- Hanqing Xie (GSoC-2018): Robotics-Academy
Computer vision exercises
Autonomous cars exercises
Mobile robots exercises
New exercise: fleet of robots for Amazon logistics store
One nice exercise to be included is the navigation of a fleet of robots, with their path planning and coordination. The scenario is an Amazon warehouse, where the fleet of Kiva robots should autonomously move goods from the providers' input bay to the storage location and from there to the output bay. The robot model in Gazebo has to be developed, as well as the Python template node (with its GUI) that will host the student code, and a tentative solution.
The main idea of this project is to introduce OMPL (the Open Motion Planning Library) into JdeRobot-Academy through a new robot navigation exercise. For this task, the student will develop a new exercise and its solutions using different path-planning algorithms for an autonomous wheeled robot or drone that moves through a known scenario in Gazebo.
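The exercise would let students plug in planners like those OMPL provides. As a self-contained stand-in (not OMPL's API), here is the classic A* planner on a small occupancy grid, the kind of baseline a student solution could start from and compare against.

```python
# A* path planning on an occupancy grid (0 = free, 1 = obstacle).
# Illustrative baseline only; the exercise itself targets OMPL planners.

import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    open_set = [(h(start), 0, start, [start])]
    seen = set()
    while open_set:
        _, cost, cell, path = heapq.heappop(open_set)
        if cell == goal:
            return path
        if cell in seen:
            continue
        seen.add(cell)
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nxt = (nr, nc)
                heapq.heappush(open_set,
                               (cost + 1 + h(nxt), cost + 1, nxt, path + [nxt]))
    return None  # no path exists

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
print(path)  # detours around the wall via the right column
```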
Simultaneous Localization and Mapping (SLAM) algorithms play a fundamental role for emerging technologies, such as autonomous cars or augmented reality, providing an accurate localization inside unknown environments. There are many approaches available with different characteristics in terms of accuracy, efficiency and robustness (ORB-SLAM, DSO, SVO, etc), but their results depend on the environment and resources available.
- Elías Barcia, (master): visual SLAM, slam-testbed
- Jianxiong Cai (GSoC-2018) Creating realistic 3D map from online SLAM result
slam-TestBed is a graphic tool to objectively compare different Visual SLAM approaches, evaluating them against several public benchmarks with statistical treatment, in order to compare them in terms of accuracy and efficiency. The main goal of this project is to increase the compatibility of this tool with new benchmarks and SLAM algorithms, so that it becomes a standard tool for evaluating future approaches.
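One standard accuracy statistic such a tool computes is the absolute trajectory error (ATE) between an estimated trajectory and benchmark ground truth. The sketch below shows it as a position RMSE, leaving out the trajectory alignment step a real evaluation would perform first; it is an illustration, not slam-TestBed's implementation.

```python
# Absolute trajectory error (ATE) as RMSE over paired 3D positions.
# Trajectory alignment (e.g. Horn's method) is omitted for brevity.

import math

def ate_rmse(estimated, ground_truth):
    assert len(estimated) == len(ground_truth)
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq) / len(sq))

gt  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
est = [(0.0, 0.1, 0.0), (1.0, -0.1, 0.0), (2.0, 0.1, 0.0)]
print(ate_rmse(est, gt))  # 0.1
```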
The next video shows one of the SLAM algorithms (called ORB-SLAM) that will be evaluated with this tool:
MapGenerator: create realistic 3D maps from SLAM algorithms
SLAM algorithms provide accurate localization inside unknown environments; however, the maps obtained with these techniques are often sparse and meaningless, composed of thousands of 3D points without any relation between them.
The goal of this project is to process the data obtained from SLAM approaches and create a realistic 3D map. The input data will consist of a dense 3D point cloud and a set of frames located in the map. The next video shows one of the SLAM algorithms (called DSO) whose output data will be used to create the 3D map. The expected result of this project is a tool for building realistic 3D maps from a 3D point cloud and frames.
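An early step such a tool would likely need is bucketing the raw point cloud into voxels, from which denser surfaces (e.g. meshes) can later be reconstructed. The sketch below shows that step in pure Python; it is an assumption about the pipeline, not the project's design.

```python
# Bucketing a sparse SLAM point cloud into occupied voxels.

def voxelize(points, voxel_size):
    """Map each 3D point to its voxel index; return the occupied voxels."""
    voxels = set()
    for x, y, z in points:
        voxels.add((int(x // voxel_size),
                    int(y // voxel_size),
                    int(z // voxel_size)))
    return voxels

cloud = [(0.12, 0.03, 0.9), (0.14, 0.06, 0.95), (2.4, 0.0, 0.1)]
print(voxelize(cloud, 0.5))  # two occupied voxels
```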
- Alexandre Rodriguez (master): DeepLearning
- David Pascual (master): Convolutional Pose Machines
- Nuria Oyaga (master): predicting images, learning time sequences
- Vanessa Fernández (master): visual control with DeepLearning
- Pretrained network models
DetectionSuite is a C++ tool under development to test and train different DeepLearning architectures for object detection in images. It accepts several well-known international datasets, such as PASCAL VOC, and allows the comparison of several DeepLearning architectures over exactly the same test data. It computes several objective statistics to measure their performance. Currently it supports YOLO architectures on the Darknet framework.
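A core statistic that detection evaluation rests on is intersection over union (IoU) between a detection and a ground-truth box, used to decide whether a detection counts as a true positive. A minimal version (in Python for brevity, though DetectionSuite itself is C++):

```python
# Intersection over union (IoU) of two axis-aligned boxes (x1, y1, x2, y2).

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7, about 0.143
```

Aggregating these decisions over a dataset at a fixed IoU threshold yields the precision/recall statistics the tool reports.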
Adding support for segmentation, more datasets and more DL frameworks
The goal of this project is to expand the set of supported datasets (ImageNet, COCO...) and supported neural frameworks (Keras, TensorFlow, Caffe...). In addition, several detection architectures should be trained and compared using the new release of the tool.
The expected result is a new release of the DetectionSuite tool, extending the existing functionality for object detection and also covering two new deep learning problems, classification and segmentation, with new statistics for each of them.