
JumBot

Completed in 2022

Robotics, AI, Education

Teaching children about categorization and artificial intelligence without the use of a computer or tablet.


JumBot is a 6-person student collaboration between 2 design engineers and 4 child-development students to create a functional prototype of an educational toy to teach children about machine learning and artificial intelligence.  

Kids can join JumBot at the circus and sort the circus toys into the colorful baskets to find out what category JumBot is thinking of, then create their own categories for JumBot to guess!


Vision and Team Formation:

The JumBot team is a cross-disciplinary collaboration between Child Development and Mechanical Engineering students. Our goal is to create a fun, engaging toy that uses Artificial Intelligence (AI) and Machine Learning (ML) to teach children ages 5-8 about those same topics.

The Child Development team brought to our group the “Big Idea” concept: what do we want the children to walk away understanding? For JumBot, the Big Idea is Categorization, so our success criteria include whether the children can identify the categories JumBot is using to sort objects, whether they can predict which category a new object will be sorted into, and whether they understand that an object might fall into different categories from one level to the next, depending on each level's categorization scheme.

In order for children to learn anything from JumBot, the toy itself must be engaging and fun. A friendly-looking, cute, soft robot circus elephant, with colorful circus lights and engaging movements, draws children in, and the colorful 3D-printed toys allow for open-ended play with or without the robot. Circus theming for the toys and the “game board” helps tie everything together.


Software and Machine Learning:

JumBot uses Python-based keypoint and color-blob identification to narrow down the options for the object being presented, and then uses three Google Teachable Machine models to identify the object within the set of objects of that color.
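As a rough illustration of that first narrowing step, the sketch below uses OpenCV HSV masks to pick the dominant toy color in a frame and return the candidate objects for it. The color thresholds and candidate lists are illustrative assumptions, not JumBot's actual values.

```python
# Minimal sketch of the colour-narrowing step, assuming illustrative HSV
# ranges and candidate sets (JumBot's actual thresholds and object lists differ).
import cv2
import numpy as np

# Hypothetical HSV ranges for the toy colours: (lower bound, upper bound).
COLOR_RANGES = {
    "red":   (np.array([0, 120, 70]),   np.array([10, 255, 255])),
    "green": (np.array([40, 80, 70]),   np.array([80, 255, 255])),
    "blue":  (np.array([100, 120, 70]), np.array([130, 255, 255])),
}

# Hypothetical mapping from colour to the objects that could match it.
CANDIDATES = {
    "red":   ["apple", "clown nose"],
    "green": ["frog", "hat"],
    "blue":  ["elephant", "ball"],
}

def narrow_by_color(frame_bgr):
    """Return (dominant colour, candidate objects) for a camera frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    best_color, best_pixels = None, 0
    for color, (lower, upper) in COLOR_RANGES.items():
        mask = cv2.inRange(hsv, lower, upper)
        pixels = cv2.countNonZero(mask)
        if pixels > best_pixels:
            best_color, best_pixels = color, pixels
    return best_color, CANDIDATES.get(best_color, [])
```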


Once the object is identified, JumBot determines which category the object belongs to based on the level (Level 1: color; Level 2: animal, food, or clothing; Level 3: first letter of the object's name) and reacts according to the movement and color scheme for that category.
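A minimal sketch of that level-based lookup might look like the following; the level keys mirror the scheme described above, but the object entries and the example reaction note are placeholders.

```python
# Minimal sketch of level-based category lookup; the object attributes are
# placeholders, not JumBot's actual data.
OBJECTS = {
    "apple":    {"color": "red",   "type": "food",     "letter": "a"},
    "elephant": {"color": "blue",  "type": "animal",   "letter": "e"},
    "hat":      {"color": "green", "type": "clothing", "letter": "h"},
}

LEVEL_KEYS = {1: "color", 2: "type", 3: "letter"}

def category_for(object_name, level):
    """Return the category an identified object falls into at a given level."""
    return OBJECTS[object_name][LEVEL_KEYS[level]]

# e.g. category_for("apple", 2) -> "food", which would then select the
# movement and light animation associated with the "food" basket.
```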


Hardware - Mechanical:

JumBot uses a soft, stuffed-animal skin over a bespoke 3D-printed frame with 3 degrees of freedom. Three servomotors provide the motion control, while two DC motors attached to wheels mounted on the "circus ring" platform provide an additional degree of freedom.
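Assuming the three servos are driven through the PCA9685 Servo HAT listed under Hardware - Electronic, a movement such as a trunk wave could be sketched with Adafruit's ServoKit library as below; the channel assignments, angles, and timing are illustrative, not JumBot's actual calibration.

```python
# Minimal sketch of driving the servos through the 16-channel PCA9685 Servo HAT
# using Adafruit's ServoKit library; channels and angles are assumptions.
import time
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)   # PCA9685 HAT exposes 16 PWM channels

HEAD, TRUNK, EAR = 0, 1, 2    # hypothetical channel assignments

def wave_trunk():
    """Swing the trunk servo back and forth once, then return to center."""
    for angle in (30, 150, 90):
        kit.servo[TRUNK].angle = angle
        time.sleep(0.4)
```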


Hardware - Electronic:

  • Raspberry Pi Zero / Pi 4

  • PiCamera

  • RPi UPS PowerPack, 3.7V Li-polymer battery

  • 12V AA-battery power pack

  • NeoPixel "smart" LED ring

  • 2× DF9GMS micro servomotors

  • 1× MG995 5V heavy-duty servomotor

  • 2× 12V DC motors

  • PCA9685 Servo HAT

Challenges

The creation of JumBot faced many challenges, especially with regard to hardware and electrical compatibility. Our initial design used an Arduino RP2040 microcontroller to provide control for the servomotors, NeoPixels, and DC motors. However, the 3.3V RP2040 was not able to drive the larger MG995 servomotor due to that motor's higher voltage requirement and large current draw. That drove a switch to the Raspberry Pi Zero, necessitating a complete software rewrite to use the Adafruit Blinka libraries to control the NeoPixels (which are designed for a microcontroller environment).

Because global supply-chain shortages limited the availability of Raspberry Pi hardware, we initially developed on a Pi Zero with a broken camera connector and explored the possibility of using an OpenMV Cam M7 connected to the Raspberry Pi over serial. This approach was eventually rejected for complexity and speed reasons, as well as for weight and size: the Raspberry Pi camera is approximately half the weight of the OpenMV Cam and has a much smaller profile, allowing it to be mounted more easily on the elephant's "ear flap" without disrupting the operation of the small servomotor.
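For context, the Blinka-based NeoPixel control that the rewrite moved to looks roughly like the sketch below; the data pin, pixel count, and brightness are assumptions rather than JumBot's actual configuration.

```python
# Minimal sketch of Blinka/CircuitPython NeoPixel control on the Pi Zero;
# pin, pixel count, brightness, and colours are assumptions.
import board
import neopixel

PIXEL_PIN = board.D18        # hypothetical data pin
NUM_PIXELS = 16              # hypothetical ring size

pixels = neopixel.NeoPixel(PIXEL_PIN, NUM_PIXELS, brightness=0.3,
                           auto_write=False)

def show_category_color(rgb):
    """Light the whole ring in the colour associated with a category."""
    pixels.fill(rgb)
    pixels.show()
```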

Although setting up the visual identification categories with Google Teachable Machine was very easy, porting the resulting TensorFlow models to the Raspberry Pi was very difficult. A host of library and operating system dependencies prevented the model from running on either the Pi Zero or a borrowed Raspberry Pi 4 without significant configuration management. As a result, the computer vision system was not used in our live demonstrations and testing days with the child participants; objects were instead identified by a human teleoperator.
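For reference, once the dependency issues are resolved, running a Teachable Machine TensorFlow Lite export on the Pi would look roughly like the sketch below using tflite_runtime. The file names, input size, and normalization follow Teachable Machine's standard TensorFlow Lite export conventions and are assumptions here, not a record of JumBot's final setup.

```python
# Minimal sketch of classifying a frame with an exported Teachable Machine
# TensorFlow Lite model on the Pi; file names and preprocessing are assumptions.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model_unquant.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

with open("labels.txt") as f:
    labels = [line.strip() for line in f]

def classify(image_rgb_224):
    """Run one 224x224 RGB frame through the model; return (label, score)."""
    x = (np.asarray(image_rgb_224, dtype=np.float32) / 127.5) - 1.0
    interpreter.set_tensor(input_detail["index"], x[np.newaxis, ...])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_detail["index"])[0]
    best = int(np.argmax(scores))
    return labels[best], float(scores[best])
```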


Teleoperation and Contingency Operation

Multiple stages of fallback operation modes were implemented in the JumBot software to allow for testing before all systems were online and interoperating (a minimal dispatch sketch follows the list):


  • Operation Mode 1 (Supervised Mode) assumes all subsystems are online, but allows for operator intervention to identify the object if the computer vision system is insufficiently confident about the categorization.

  • Operation Mode 2 (Teleoperated Mode) does not use the computer vision system, but instead prompts the operator for the level number and the name of the object being shown. JumBot then uses the level selection and categories to select the appropriate response. This mode also allows for training of Level 4 via selected object features.

  • Operation Mode 3 (Test Mode) does not use the computer vision or categorization systems, and simply prompts the teleoperator for a response. This allows for testing of the various hardware and electronic systems and of the animation settings.
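A minimal sketch of how these modes can be dispatched in a single interaction loop is shown below; the prompts, helper names, and confidence threshold are placeholders, not JumBot's actual operator interface.

```python
# Minimal sketch of the fallback-mode dispatch described above.
# Prompts, helper names, and the confidence threshold are placeholders.

def ask_operator(prompt):
    """Console stand-in for the teleoperator interface."""
    return input(prompt).strip()

def run_turn(mode, vision_classify=None):
    """One interaction: choose the level and object via vision, operator, or both."""
    if mode == 3:                                   # Test Mode
        return None, ask_operator("Reaction/animation to play? ")
    level = int(ask_operator("Level number? "))
    if mode == 2 or vision_classify is None:        # Teleoperated Mode
        return level, ask_operator("Object name? ")
    name, score = vision_classify()                 # Supervised Mode
    if score < 0.8:                                 # assumed confidence threshold
        name = ask_operator("Vision unsure - object name? ")
    return level, name
```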

A robustly modular software system was critical to the success of our testing: although not all of the software subsystems could be connected, the teleoperation modes allowed us to test the system, gather feedback on the user interface and product viability in our first round of child user testing, and evaluate the learning objectives in our second round.


Avery Cohen, Tala Khoury, Joshua Fitzgerald, Kat Allen, Sydney Ho, Yuri Aguilar
