This project appears in Make: Vol. 75.

Machine learning is getting lots of attention in the maker community, expanding outward from the realms of academia and industry and making its way into DIY projects. With traditional programming you explicitly tell a computer what it needs to do using code; with machine learning the computer finds its own solution to a problem, based on examples you’ve shown it. You can use machine learning to work with complex datasets that would be very difficult to hard-code, and the computer can find connections you might miss!

How Machine Learning Works

How does machine learning (ML) actually work? Let’s use the classic example: training a machine to recognize the difference between pictures of cats and dogs. Imagine that a small child, with an adult, is looking at a book full of pictures of cats and dogs. Every time the child sees a cat or a dog, the adult points at it and tells them what it is, and every time the child calls a cat a dog, or vice versa, the adult corrects the mistake. Eventually the child learns to recognize the differences between the two animals.

In the same way, machine learning starts off by giving a computer lots of data. The computer looks very carefully at that data and tries to model the patterns. This is called training. There are three ways to approach this process. The picture book example, where the adult is correcting the child’s mistakes, is similar to supervised machine learning. If the child is asked to sort the pages of their book into two piles based on differences that they notice themselves — with no help from an adult — that would be more like unsupervised machine learning. The third style is reinforcement training — the child receives no help from the adult but instead gets a delicious cookie every time they get the answer correct.

Once this training process is complete the child should have a good idea of what a cat is and what a dog is. They will be able to use this knowledge to identify all sorts of cats and dogs — even breeds, colors, shapes, and sizes that were not shown in the picture book. In the same way, once a computer has figured out patterns in one set of data, it can use those patterns to create a model and attempt to recognize and categorize new data. This is called inference.

Moving to Microcontrollers

ML training requires more computing power than a microcontroller can provide. However, increasingly powerful hardware and slimmed-down software now let us run inference on MCUs using models that have already been trained. TinyML is the new field of running ML on microcontrollers, using software technologies such as TensorFlow Lite.

There are huge advantages to running ML models on small, battery-powered boards that don’t need to be connected to the internet. Not only does this make the technology much more accessible, but devices no longer have to send your private data (such as video or sound) to the cloud to be analyzed. This is a big plus for privacy and being able to control your own information.

Bias Beware

Of course, there are things to be cautious about. People like to think computer programs are objective, but machine learning still relies on the data we give it, meaning that our technology is learning from our own, very human, biases.

I spoke to Dominic Pajak from Arduino about the ethics of ML. “When we educate people about these new tools we need to be clear about both their potential and their shortcomings,” Pajak said. “Machine learning relies on training data to determine its behavior, and so bias in the data will skew that behavior. It is still very much our responsibility to avoid these biases.”

Joy Buolamwini’s Algorithmic Justice League is battling bias in AI and ML; they helped persuade Amazon, IBM, and Microsoft to put a hold on facial recognition. Another interesting initiative is Responsible AI Licenses (RAIL), which restrict AI and ML from being used in harmful applications.

Maker-Friendly Machine Learning

There’s also a lot of work to do to make ML more accessible to makers and developers, including creating tools and documentation. Edge Impulse is the exciting new tool in this space, an online development platform that allows makers to easily create their own ML models without needing to understand the complicated details of libraries such as TensorFlow or PyTorch. Edge Impulse lets you use pre-built datasets, train new models in the cloud, explore your data using some very shiny visualization tools, and collect data using your mobile phone.

When choosing a board to use for your ML experiments, be wary of any marketing hype that claims a board has special AI capabilities. Outside of true AI-optimized hardware such as Google TPU or Nvidia Jetson boards, almost all microcontrollers are capable of running TinyML and AI algorithms; it’s just a question of memory and processor speed. I’ve seen the “AI” moniker applied to boards with an ARM Cortex-M4, but really any Cortex-M4 board can run TinyML well.

Shawn Hymel is my go-to expert for machine learning on microcontrollers. His accessible and fun YouTube videos got me going with my own experiments. I asked him what’s on his checklist for an ML microcontroller: “I like to have at least a 32-bit processor running at 80MHz with 50kB of RAM and 100kB of flash to start doing anything useful with machine learning,” Hymel told me. “The specs are obviously negotiable, depending on what you need to do: accelerometer anomaly detection requires less processing power than voice recognition, which in turn requires less processing power than vision object detection.”

Shawn Hymel built this machine-learning Lego Finder by training a model on Edge Impulse, then running TensorFlow Lite on the OpenMV H7 Camera microcontroller.

The two ML-ready boards I’ve had the most fun with are Adafruit’s EdgeBadge and the Arduino Nano 33 BLE Sense. The EdgeBadge is a credit card-sized badge that supports TensorFlow Lite. It has all the bells and whistles you could hope for: an onboard microphone, a color TFT display, an accelerometer, a light sensor, a buzzer and, of course, NeoPixel blinkenlights.

Here I’ll take you through the basics of how to sense gestures with an Arduino Nano 33 BLE Sense using TinyML. I also recommend Andrew Ng’s videos on Coursera and Shawn Hymel’s excellent YouTube series on machine learning. You can take a TinyML course on edX that features Pete Warden from Google, who coined the term and also authored a fantastic book on the subject. For keeping up to date on the latest developments, seek out Alasdair Allan’s interesting and informative articles and blog posts on embedded ML; you can find him on Twitter and Medium @aallan.

Arduino’s Nano 33 BLE Sense board crams a Cortex-M4 MCU and tons of sensors to support machine learning into a tiny, wearable form factor.

Project: Recognize Gestures Using Machine Learning on an Arduino

Figure A

Since I got my hands on the Arduino Nano 33 BLE Sense at Maker Faire Rome last year, this little board has fast become one of my favorite Arduino options (Figure A). It uses a high-performance Cortex-M4 microcontroller — great for TinyML — and loads of onboard sensors including motion, color, proximity, and a microphone. The cute little Nano form factor means it works really well for compact or wearable projects too. Let’s take a look at how to use this board with TinyML to recognize gestures.

TinyML in the Wild: Machine Learning Projects From the Community

CORVID-19: Crow Paparazzo
by Stephanie Nemeth

The longing for interaction during the Covid-19 lockdown inspired Stephanie Nemeth, a software engineer at GitHub, to create a project that would capture images of the friendly crow that regularly visited her window and then share them with the world. Her project uses a Raspberry Pi 4, a PIR sensor, a Pi camera, Node.js, and TensorFlow.js. She used Google’s Teachable Machine to train an image classification model in the browser on photos of the crow; in the beginning, she had to keep retraining the model with newly captured photos so it could reliably recognize the bird. A PIR sensor detects motion and triggers the Pi camera. The resulting photos are then run through the trained model and, if the friendly corvid is recognized, tweeted to the crow’s account using the Twitter API. Find the crow on Twitter @orvillethecrow or find Stephanie @stephaniecodes.

Nintendo Voice Hack
by Shawn Hymel

Embedded engineer and content creator Shawn Hymel imagined a new style of video game controller that requires players to yell directions or names of special moves. He modified a Super Nintendo (SNES Classic Edition) controller to respond to the famous Street Fighter II phrase hadouken. As a proof of concept, he trained a neural network to recognize the spoken word “hadouken!”, then loaded the trained model onto an Adafruit Feather STM32F405 Express, which uses TensorFlow Lite for Microcontrollers to listen for the keyword via a MEMS microphone. Upon hearing the keyword, the controller automatically presses its buttons in the pattern necessary to perform the move in the video game, rewarding the player with a bright ball of energy without the need to remember the exact button combination. Shawn has a highly entertaining series of videos explaining how he made this project — and other machine learning projects — on Digi-Key’s YouTube channel. You can also find him on Twitter @ShawnHymel.

Constellation Dress
by Kitty Yeung

Kitty Yeung is a physicist who works in quantum computing at Microsoft. She also makes beautiful science-inspired clothing and accessories, including a dress that uses ML to recognize her gestures and display corresponding star constellations. She uses a pattern-matching engine, an accelerometer, an Arduino 101, and LEDs arranged in configurations of four constellations: the Big Dipper (Ursa Major), Cassiopeia, Cygnus, and Orion. Yeung trained the pattern-matching engine to memorize her gestures, detected by the accelerometer, and map them to the four constellations. To learn about the project or see more of her handmade and 3D-printed clothing, check out her website kittyyeung.com or find her on Twitter @KittyArtPhysics.

Worm Bot
by Nathan Griffith

Artificial neural networks can be taught to navigate a variety of problems, but the Nematoduino robot takes a different approach: emulating nature. Using a small-footprint emulation of the C. elegans nematode’s nervous system, this project aims to provide a framework for creating simple, organically derived “bump-and-turn” robots on a variety of low-cost Arduino-compatible boards. Nematoduino was created by astrophysicist and tinkerer Nathan Griffith, using research by the OpenWorm project and an earlier Python-based implementation as a starting point. You can find a write-up of his wriggly robot project on the Arduino Project Hub by searching for “nematoduino,” and you can find Griffith on Twitter @ComradeRobot.


Project Steps

Set up the board

As usual, you’ll start off by installing the board in your Arduino IDE. Go to Tools → Board → Boards Manager and search for Nano 33 BLE. Install the package labeled Arduino nRF528x Boards. Next you need to install two libraries. Go to Tools → Manage Libraries and install Arduino_TensorFlowLite and Arduino_LSM9DS1.

Connect your Arduino Nano 33 BLE Sense to your computer using a micro USB cable, then go to Tools → Board and select Arduino Nano 33 BLE. Next, go to Tools → Port and make sure your board is selected there too.

Check out the sensors

We’re going to be sensing gestures, so let’s run some example code to try out the motion sensors on our board. Earlier you installed a library for the LSM9DS1, which is a motion sensor module with nine degrees of freedom (9-DOF), made up of a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer. This means it can detect three key aspects of movement: acceleration, angular velocity, and heading. Go to File → Examples → Arduino_LSM9DS1 and choose one of the three example sketches to try out. Upload it to your board using Sketch → Upload. Take a look at the values from your motion sensor using Tools → Serial Plotter or Serial Monitor, then give your board a good wiggle to watch the values change. Here’s what my gyroscope test data looked like on the Serial Plotter (Figure B).
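If you’d rather not dig through the examples, here’s a minimal sketch, assuming only the Arduino_LSM9DS1 library, that streams gyroscope readings so you can watch them on the Serial Plotter:

// Minimal sketch: stream gyroscope readings from the LSM9DS1 to the
// Serial Plotter, one comma-separated line per sample.
#include <Arduino_LSM9DS1.h>

void setup() {
  Serial.begin(9600);
  while (!Serial);              // wait for the Serial Monitor/Plotter to open

  if (!IMU.begin()) {           // initialize the LSM9DS1
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
  Serial.println("gX,gY,gZ");   // header so the plotter labels the three traces
}

void loop() {
  float x, y, z;
  if (IMU.gyroscopeAvailable()) {
    IMU.readGyroscope(x, y, z); // degrees per second on each axis
    Serial.print(x); Serial.print(',');
    Serial.print(y); Serial.print(',');
    Serial.println(z);
  }
}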

If you’re new to 9-axis sensors, spend some time playing with the three example sketches to get comfortable with how the accelerometer, gyroscope, and magnetometer work. Each sensor gives us lots of valuable information on its own, but to get the most accurate picture of what our board is up to we’re going to use readings from two of them: the accelerometer and the gyroscope. Also, because we’re going to use this data to train a machine learning model, we need to be able to capture those readings in the right format. Luckily for us, the lovely people at Arduino have already written some code to help us do just that. Let’s take a look at their example code (Figure C).

This code reads the data from the accelerometer and the gyroscope and prints the values in a CSV format, with a header row that will help train our model. Open up your web browser and navigate to Arduino’s TensorFlow Lite tutorials on GitHub to find the sketch IMU_Capture.ino that I’m using here, sketched in condensed form below. Send the code to your board using Sketch → Upload, then go to Tools → Serial Plotter.
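To give you a feel for what the sketch is doing before you open it, here is my own condensed approximation of the same idea (the threshold and sample count are illustrative; use the values in Arduino’s original IMU_Capture.ino): wait for a burst of motion, then record a fixed window of accelerometer and gyroscope samples as CSV.

// Condensed approximation of Arduino's IMU_Capture.ino: when significant
// motion is detected, record a fixed number of accelerometer + gyroscope
// samples and print them as CSV rows suitable for training.
#include <Arduino_LSM9DS1.h>

const float accelerationThreshold = 2.5;  // total acceleration (in g) that counts as "a gesture started"
const int numSamples = 119;               // samples captured per gesture window

int samplesRead = numSamples;

void setup() {
  Serial.begin(9600);
  while (!Serial);
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
  Serial.println("aX,aY,aZ,gX,gY,gZ");    // CSV header used later by the training code
}

void loop() {
  float aX, aY, aZ, gX, gY, gZ;

  // wait until the combined acceleration exceeds the threshold
  while (samplesRead == numSamples) {
    if (IMU.accelerationAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      if (fabs(aX) + fabs(aY) + fabs(aZ) >= accelerationThreshold) {
        samplesRead = 0;
        break;
      }
    }
  }

  // record one window of motion data
  while (samplesRead < numSamples) {
    if (IMU.accelerationAvailable() && IMU.gyroscopeAvailable()) {
      IMU.readAcceleration(aX, aY, aZ);
      IMU.readGyroscope(gX, gY, gZ);
      samplesRead++;
      Serial.print(aX); Serial.print(',');
      Serial.print(aY); Serial.print(',');
      Serial.print(aZ); Serial.print(',');
      Serial.print(gX); Serial.print(',');
      Serial.print(gY); Serial.print(',');
      Serial.println(gZ);
      if (samplesRead == numSamples) {
        Serial.println();                  // blank line between captured gestures
      }
    }
  }
}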

Capture some gestures

It’s time to play with gestures! Take the board in your hand and try different gestures to see what the data looks like. For my first gesture, I punched my fist forward four times, which gave me a very satisfying visualization of the data coming from my motion sensors (Figure D). For my second gesture, I tried a Karate Kid-style “wax on, wax off” motion which gave me a less dramatic but still recognizable pattern (Figure E).

Spend some time finding a couple of gestures you really enjoy. Let your imagination run wild! You can strap the board to your leg if you fancy experimenting with machine learning and yoga poses, or onto your wrist if you’re more into tennis, or even onto your head if you’ve ever wondered what it’s like to be a giant joystick (haven’t we all?).

Figure F


Once you’ve chosen your gestures, you can start gathering the data needed to train your model. Close the Serial Plotter, press the reset button on the board, and open up the Serial Monitor. Do your first gesture (in my case, a punch) at least ten times. You should see a whole load of data print to your Serial Monitor (Figure F).

Copy all the data in the Serial Monitor into a plain text file and name it punch.csv. (You don’t have to use a punch as your gesture here, but the ready-made training code from Arduino that we’re about to use expects files named punch.csv and flex.csv.) Clear the output on your Serial Monitor and press reset on the board again, then repeat the process for your second gesture. This time, call your file flex.csv. You’re ready to train your model.

Train your machine learning model

Now it’s time to train the machine learning model in a web browser. Head to the TinyML on Arduino gesture recognition tutorial, which will open a Google Colab notebook, to get started (Figure G).

On the Google Colab page you’ll see a panel on the left-hand side containing icons and menus for Table of Contents, Code Snippets, and Files. In the main work area of the page you’ll find a step-by-step tutorial with headers and code “cells” that you can run by pressing the Play icon.

Start off by running the Setup Python Environment cell, then upload punch.csv and flex.csv to the Files section. You can then run the cells that graph your data and prepare it for training, followed by the cells that build and train your neural network. Finally, run the cell called Encode the Model in an Arduino Header and download the resulting model.h file.

Run your model on the Arduino

Figure H

To run the model trained on your data, you need to add it into a sketch on the Arduino. In your web browser, navigate back to GitHub to download the IMU_Classifier.ino code I’m using here. Open up the sketch in your IDE (Figure H), then create a new tab by clicking the down arrow in the top right corner and selecting New Tab. Call this tab model.h and paste in the contents of your downloaded model.h file.
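If you’re curious what you just pasted in, model.h is simply your trained TensorFlow Lite model serialized as a C byte array, so the sketch can keep it in the board’s flash and load it without any filesystem. It looks roughly like this (the bytes shown are placeholders; your generated file will contain thousands of them):

// model.h, as generated by the Colab notebook: the trained TensorFlow Lite
// model stored as a byte array. The values below are placeholders only.
const unsigned char model[] = {
  0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33,  // ...thousands more bytes...
};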

Figure I

Compile and run the sketch, then open the Serial Monitor (Figure I) and try out your gestures. You should see punch and flex followed by a number between 0 and 1. These numbers tell you what degree of confidence your model has in categorizing what you’re doing as one or the other of your trained gestures, with 0 being low confidence and 1 being high confidence.
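Where do those numbers come from? The classifier sketch loads your model with TensorFlow Lite for Microcontrollers, copies a window of scaled IMU samples into the model’s input tensor, runs inference, and reads one confidence value per gesture from the output tensor. Here is a trimmed-down sketch of that wiring, based on my reading of IMU_Classifier.ino; exact names and includes may differ slightly between library versions, and the sample-collection code is omitted.

// Trimmed-down view of the inference side of the gesture classifier,
// assuming the Arduino_TensorFlowLite library and your generated model.h.
#include <TensorFlowLite.h>
#include <tensorflow/lite/micro/all_ops_resolver.h>
#include <tensorflow/lite/micro/micro_error_reporter.h>
#include <tensorflow/lite/micro/micro_interpreter.h>
#include <tensorflow/lite/schema/schema_generated.h>

#include "model.h"  // the header you created in the previous step

// gesture names, in the same order they were trained
const char* GESTURES[] = { "punch", "flex" };
#define NUM_GESTURES (sizeof(GESTURES) / sizeof(GESTURES[0]))

// scratch memory the interpreter uses for input, output, and intermediate tensors
constexpr int tensorArenaSize = 8 * 1024;
byte tensorArena[tensorArenaSize] __attribute__((aligned(16)));

tflite::MicroErrorReporter tflErrorReporter;
tflite::AllOpsResolver tflOpsResolver;
const tflite::Model* tflModel = nullptr;
tflite::MicroInterpreter* tflInterpreter = nullptr;
TfLiteTensor* tflInputTensor = nullptr;
TfLiteTensor* tflOutputTensor = nullptr;

void setup() {
  Serial.begin(9600);
  while (!Serial);

  tflModel = tflite::GetModel(model);  // "model" is the byte array in model.h
  tflInterpreter = new tflite::MicroInterpreter(
      tflModel, tflOpsResolver, tensorArena, tensorArenaSize, &tflErrorReporter);
  tflInterpreter->AllocateTensors();
  tflInputTensor = tflInterpreter->input(0);
  tflOutputTensor = tflInterpreter->output(0);
}

void loop() {
  // ...capture one window of accelerometer + gyroscope samples, scale them,
  // and copy them into tflInputTensor->data.f[] as the full sketch does...

  if (tflInterpreter->Invoke() != kTfLiteOk) {
    Serial.println("Invoke failed!");
    while (1);
  }

  // one confidence value per gesture, each between 0 and 1
  for (int i = 0; i < NUM_GESTURES; i++) {
    Serial.print(GESTURES[i]);
    Serial.print(": ");
    Serial.println(tflOutputTensor->data.f[i], 6);
  }
}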

Punch Above Your Weight

If it’s all working, you should feel very proud of yourself. You have managed to train and run a machine learning model on an Arduino board — an awesome new skill.

These outputs can be used for all sorts of fun applications. The Arduino team used their model to type emojis with gestures, but you can use any outputs you like: servomotors, blinky lights, noisy alarm systems, or sending secret signals to another device.
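As a quick, hypothetical example, you could add a few lines after the classifier’s print loop to flash the Nano’s built-in LED whenever the punch confidence crosses a threshold (the 0.8 value here is arbitrary; tune it to your own gestures):

// Hypothetical addition to the end of the classifier's loop(): treat a punch
// confidence above 0.8 as a detection and flash the built-in LED.
// Remember to call pinMode(LED_BUILTIN, OUTPUT); in setup().
const float PUNCH_THRESHOLD = 0.8;
const int PUNCH_INDEX = 0;  // index of "punch" in the GESTURES[] array

if (tflOutputTensor->data.f[PUNCH_INDEX] > PUNCH_THRESHOLD) {
  digitalWrite(LED_BUILTIN, HIGH);
  delay(200);
  digitalWrite(LED_BUILTIN, LOW);
}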

Now that you’ve got to grips with the basic process, you can also replicate this project with sets of data from different sensors. For your next Arduino and TinyML project I’d highly recommend experimenting with the tools from Edge Impulse to help you get creative with machine learning.