Hands On with ROS 2: Nodes, Topics, and Services


Featured illustration by Josh Ellingson CC BY-NC 2.0

This article appeared in Make: Vol. 95. Subscribe to Make: for more great articles.

You’ve been hired to work on a large humanoid robot along with a dozen other engineers. Your job is to develop the arms and hands: moving them to specific locations, picking up objects, shaking human hands, and so on. You do not have direct sensor input, as that’s another engineer’s job. You also need to work with the locomotion team, as the arms need to move to help balance the robot. Everyone meets one day to figure out how all this will work together.

In the conference room, on a giant whiteboard, you map out all the components and how they’ll interact. You need the cameras and motion sensors in the head to send raw data to the main processing core, which computes motions for the arms and legs. But those positions also affect the stability of the robot, so positional data needs to be sent back to the processing core.

How will you accomplish all this communication? Will you need to develop a unique scheme for each component?

The Robot Operating System

Up until the late 2000s, roboticists struggled with these exact concepts and would often re-create messaging schemes from scratch for every new robot. “Reinventing the wheel” of messaging and underlying frameworks ended up consuming more time than actually building the robot!

In 2006, two Ph.D. students at Stanford’s Salisbury Robotics Lab, Eric Berger and Keenan Wyrobek, set out to solve this problem. They created the Robot Operating System (ROS) to standardize communication among various robot components. Scott Hassan, founder of the Willow Garage incubator, took notice and invited Berger and Wyrobek to continue their work in Willow Garage’s program.

Over the next 3 years, the team built the PR2 robot, a successor to the PR1 begun at Stanford, and fleshed out ROS to act as the underlying software framework for the PR2.

The PR2 is an advanced research robot capable of navigating human environments and interacting with objects (seen here at Maker Faire Bay Area in 2011).
Photo by Timothy Vollmer CC BY 2.0

ROS is an open-source robotics middleware framework and collection of libraries. It is not a true “operating system” like Windows, macOS, or Linux, as it cannot control hardware directly and does not have a kernel for handling processes and memory allocation. Rather, it is built on top of an operating system (usually Linux), handles multiprocessing communication, and offers a collection of computational libraries, such as the Transform Library 2 (TF2) for handling coordinate frame transformations.

Because ROS requires a full operating system and assumes you’re working in a multiprocessing environment, it is not well suited for simple, single-purpose robotics, like basic vacuum robots or maze solvers. Rather, scalability is the deciding factor: when you have multiple, complex components that need to operate together, ROS can save you hundreds of hours of work and frustration.

While ROS is used across academia for research, it also has been adopted by industry for real, commercial robots. Examples include some of the Amazon warehouse robots, Avidbots commercial-grade cleaners, and Omron’s TM manipulator arms.

If you’re looking to build a large, complex helper bot for household chores, improve your robotic programming skills for a job, or simply see what the hype is about, we’ll walk you through installing ROS and creating simple communication examples using topics and services.

Install ROS Docker Image

The first iteration of ROS had some technical limitations in the underlying messaging layers, so the team created ROS 2, which began life in 2014. ROS 1 reached end-of-life status on May 31, 2025, which means it will no longer receive updates or support. ROS 2 has fully replaced it.

About once a year the ROS team releases a new distribution, which is a versioned set of ROS packages, much like Linux distributions. Each release is given a whimsical, alliterative name featuring a turtle and progressing through the alphabet. The latest release, Kilted Kaiju, came out in May 2025, but we will stick to Jazzy Jalisco, which has long-term support until 2029.

Each ROS distribution is pinned to a very particular version of an operating system to ensure that all of its underlying libraries work properly. Jazzy Jalisco’s officially supported operating systems are Ubuntu 24.04 and Windows 10 (with Visual Studio 2019).

For a real robot that communicates with motors and sensors, you likely want Ubuntu installed on a small laptop or single-board computer (e.g. Raspberry Pi). For this tutorial, I will demonstrate a few ROS principles using a premade Docker image. This image pins various package versions and works across all the major operating systems (macOS, Windows, and most Linux distributions).

XRP robot modified to hold a Raspberry Pi and battery, from my Digi-Key video series “Intro to ROS.” Photo by Shawn Hymel/Digi-Key

If you do not have it already installed on your host computer, head to docker.com, download Docker Desktop, and run the installer. Accept all the defaults.

Next, get the ROS Docker image and example repository. Navigate to the GitHub repository, click Code, and click Download ZIP. Unzip the archive somewhere on your computer.

Open a command line terminal (e.g. zsh, bash, PowerShell), navigate to the project directory, and build the image:

cd introduction-to-ros/

docker build -t env-ros2 .

Wait while the Docker image builds. It is rather large, as it contains a full instance of Ubuntu 24.04 with a graphical interface.

Once that finishes, run the image with one of the following commands, depending on your operating system.

For macOS or Linux:

docker run --rm -it -e PUID=$(id -u) -e PGID=$(id -g) -p 22002:22 -p 3000:3000 -v "${PWD}/workspace:/config/workspace" env-ros2

For Windows (PowerShell):

docker run --rm -it -e PUID=$(wsl id -u) -e PGID=$(wsl id -g) -p 22002:22 -p 3000:3000 -v "${PWD}\workspace:/config/workspace" env-ros2

If everything works, you should see the Xvnc KasmVNC welcome message. You can ignore the keysym and mieq warnings as well as the xkbcomp error message.

Open a browser on your host computer and navigate to http://localhost:3000. You should be presented with a full Ubuntu desktop.

This env-ros2 Docker image is based on the XFCE Ubuntu webtop image maintained by the LinuxServer.io group.

Topics: Publish and Subscribe

In ROS 2, applications are divided up into a series of nodes, which are independent processes that handle specific tasks, such as reading sensor data, processing algorithms, or driving motors. Each node runs separately in its own runtime environment and can communicate with other nodes using a few basic techniques.

The first communication method is the topic, which relies on a publish/subscribe messaging model. A publisher can send data to a named topic, and the underlying ROS system will handle delivering that message to any node subscribed to that topic.

Nodes can contain one or more publishers, which push messages out over a named topic. They can also contain one or more subscribers, which, if subscribed to the same topic, will each receive a copy of the message.
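The decoupling that topics provide can be sketched in a few lines of plain Python. This is only an analogy to illustrate the model; the Broker class and its method names are made up for this sketch and are not part of the rclpy API:

```python
# Minimal publish/subscribe sketch: publishers and subscribers share
# only a topic name, never a direct reference to each other.
class Broker:
    def __init__(self):
        self._subscribers = {}  # topic name -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber on this topic.
        for callback in self._subscribers.get(topic, []):
            callback(message)

broker = Broker()
received = []
broker.subscribe("my_topic", received.append)  # one subscriber
broker.subscribe("my_topic", print)            # a second subscriber
broker.publish("my_topic", "Hello world: 0")   # the publisher
```

In ROS 2, the "broker" role is played by the underlying middleware, so no single process has to know about all the others.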

Nodes in ROS are independent runtime processes, which essentially means they’re separate programs that can be written in one of several supported programming languages. Out of the box, ROS 2 supports Python and C++, but you can write nodes in other community-supported languages like Ada, C, Java, .NET (e.g. C#), Node.js (JavaScript), Rust, and Flutter (Dart). The beauty of ROS is that nodes written in one language can communicate with nodes written in other languages!

In general, you’ll find C++ used for low-level drivers and processes that require fast execution. Python nodes, on the other hand, offer faster development time with some runtime overhead, which makes them great for prototyping and working with complex vision processing (e.g. OpenCV) and machine learning frameworks (e.g. PyTorch, TensorFlow).

ROS relies heavily on object-oriented programming principles. Individual nodes are written as subclasses of the Node class, inheriting properties from it as defined by ROS. Publishers and subscribers are object instances within these nodes. As a result, your custom node can have any number of publishers and subscribers.

In our example, we will create a simple subscriber in a node that listens on the my_topic topic and prints to the console whatever it hears over that channel. We will then create a simple publisher in another node that transmits “Hello world” followed by a count value to that topic twice per second.

Create a ROS Package

Double-click the VS Code ROS2 icon on the left of the desktop. This opens an instance of VS Code in the Docker container preconfigured with various extensions, and it automatically enables the ROS 2 development environment.

Click View → Terminal to open a terminal pane at the bottom of VS Code.

Navigate into the src/ directory in the workspace/ folder. Note that we mounted the workspace/ directory from the host computer’s copy of the introduction-to-ros repository. That gives you access to all the code from the repository, and any changes you make to files in the workspace/ directory will be saved on your host computer. If you change anything in the container outside of that directory, it will be lost, as the container is completely deleted when you exit!

In ROS 2, a workspace is a directory where you store and build ROS packages; the workspace/ folder here is one. A package is the fundamental unit of code organization in ROS 2. You will write one or more nodes (parts of your robot application) in a package in the src/ directory of the workspace. When you build your nodes (in a package), any required libraries, artifacts, and executables end up in the install/ directory in the workspace. ROS 2 uses the build/ and log/ directories for intermediate build artifacts and log files, respectively.
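The directory roles described above can be summarized in a sketch of the workspace layout (directory names from this tutorial; exact contents will vary):

```
workspace/
├── src/        # package source code (e.g. my_first_pkg/)
├── build/      # intermediate build artifacts
├── install/    # built executables, libraries, and setup scripts
└── log/        # build and run logs
```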

From the workspace/ directory, navigate into the src/ folder and create a package. Note that the Docker image mounts the workspace/ directory (on your host computer) to /config/workspace/ in the container.

cd /config/workspace/src

ros2 pkg create --build-type ament_python my_first_pkg

This will create a directory named my_first_pkg/ in workspace/src/. You might need to click the Refresh Explorer button at the top of VS Code to see the folder show up. The my_first_pkg/ folder will contain a basic template for creating nodes. Note that we used ament_python as the build type, which tells ROS 2 that we intend to use Python to create nodes in this package.

We will write our source code in my_first_pkg/my_first_pkg/. The other files in my_first_pkg/ help ROS understand how to build and install our package.

The workspace contains a number of other example packages from the GitHub repository, such as my_bringup and my_cpp_pkg. You are welcome to explore those to see how to implement other nodes and features in ROS.

Create Publisher and Subscriber Nodes

Create a new file named my_publisher.py in my_first_pkg/my_first_pkg/ and open it in VS Code:

code my_first_pkg/my_first_pkg/my_publisher.py

Open a web browser and navigate to bit.ly/41TlYuj to get the publisher Python code.

Copy the code from that GitHub page into your my_publisher.py document. Feel free to read through the comments.

Notice that we are creating a subclass named MinimalPublisher from the ROS-provided Node class, which gives us access to all the data and methods in Node. Inside our MinimalPublisher, we create a new publisher object with self.create_publisher(), which is a method in the Node class.

We also create a new timer object and set it to call the _timer_callback() method every 0.5 seconds. In that method, we construct a “Hello world: ” string followed by a counter number that we increase each time _timer_callback() executes.

In our main() entrypoint, we initialize rclpy, which is the ROS Client Library (RCL) for Python. This gives us access to the Node class and other ROS functionality in our code. We create an instance of our MinimalPublisher() class and then tell it to spin(), which just lets our node run endlessly.

If our program crashes or we manually exit (e.g. with Ctrl+C), we perform some cleanup to ensure that our node is gracefully removed and rclpy is shut down. The last two lines are common Python practice: if the current file is run as the main application, Python assigns the string '__main__' to the internal variable __name__. If this is the case, then we tell Python to run our main() function.
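The overall shape of the publisher can be mocked without a ROS install. In this sketch, FakeNode stands in for rclpy's Node class and a plain list stands in for the publisher object; only the timer-callback and counter pattern mirrors the real code:

```python
# Sketch of the publisher's structure with the ROS parts stubbed out.
class FakeNode:  # stands in for rclpy.node.Node
    def __init__(self, name):
        self.name = name

class MinimalPublisher(FakeNode):
    def __init__(self):
        super().__init__("minimal_publisher")
        self._count = 0
        self.sent = []  # stands in for the publisher's publish() calls

    def _timer_callback(self):
        # In the real node, a timer fires this every 0.5 seconds.
        msg = f"Hello world: {self._count}"
        self.sent.append(msg)
        self._count += 1

node = MinimalPublisher()
for _ in range(3):  # simulate three timer ticks
    node._timer_callback()
print(node.sent)  # ['Hello world: 0', 'Hello world: 1', 'Hello world: 2']
```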

Save your code. Create a new file named my_subscriber.py in my_first_pkg/my_first_pkg/ and open it in VS Code:

code my_first_pkg/my_first_pkg/my_subscriber.py

Open a web browser and navigate to bit.ly/3JqL0ut to get the subscriber Python code.

Copy the code from that GitHub page into your my_subscriber.py document.

Similar to our publisher, we create a subclass from Node named MinimalSubscriber. In that node, we instantiate a subscription object with a callback. The callback is the method _listener_callback() that gets called whenever a message appears on the topic we subscribe to (‘my_topic’). The message is passed into the method as the msg parameter. In our callback, we simply print that message to the screen.

As with our previous program’s main(), we initialize rclpy, instantiate our node subclass, and let it run. We catch any crashes or exit conditions and gracefully shut everything down.

Don’t forget to save your work!

Build the Package

Before we can build our package, we need to tell ROS about our nodes and list any dependencies. Open my_first_pkg/package.xml, which was created when we generated the package template. This file is the package manifest that lists any metadata and dependencies for the ROS package. You can give your package a unique name and keep track of the version here. The only line we need to add tells ROS that we are using the rclpy package as a dependency in our package. Just after the <license> line, add the following:

<license>TODO: License declaration</license>
<depend>rclpy</depend>
<test_depend>ament_copyright</test_depend>

We add this line as our package requires rclpy to build and run, as noted by the import rclpy line in our publisher and subscriber code. While you can import any Python packages or libraries you might have installed on your system in your node source code, you usually want to list imported ROS packages in package.xml. This helps the ROS build systems and runtime know what ROS packages to use with your package. Save this file.

Next, we need to tell the ROS build system which source files it should build and install. Open my_first_pkg/setup.py, which was also created on package template generation. The entry_points parameter lists all of the possible entry points for the executables in the package. It allows us to list functions in our code that act as entry points for new applications or processes.

Add the following to the console_scripts key:

entry_points={
    'console_scripts': [
        'my_publisher = my_first_pkg.my_publisher:main',
        'my_subscriber = my_first_pkg.my_subscriber:main',
    ],
},

Save this file. We’re finally ready to build! In the terminal, navigate back to the workspace directory and use the colcon build command to build just our package, which we select with the --packages-select parameter. Colcon is the build tool for ROS 2 and helps to manage the workspace.

cd /config/workspace/

colcon build --packages-select my_first_pkg

Your package should build without any errors.

Run Your Publisher and Subscriber

Now it’s time to test your code. Click on the Applications menu in the top-left of the container window, and click Terminal Emulator. Repeat this two more times to get three terminal windows. You are welcome to use the desktop icons in the top-right of the container window to work on a clean desktop.

We need two of the terminal windows to run our publisher and subscriber as two separate processes. We’ll use the third window to examine a graph of how our nodes are communicating.

In the first terminal, source the new package environment and run the node. The Docker image is configured to initialize the global ROS environment, so ROS commands and built-in packages work in these terminal windows. However, ROS does not yet know about our new package, so every time you open a new terminal window, you need to source the environment for the workspace you wish to use, which includes all the packages built in that workspace. We do this by running an automatically generated bash script in our workspace.

From there, run our custom publisher node using the ros2 run command:

cd /config/workspace/

source install/setup.bash

ros2 run my_first_pkg my_publisher

This should start running the publisher in the first terminal.

In the second terminal, source the workspace setup.bash script again and run the subscriber:

cd /config/workspace/

source install/setup.bash

ros2 run my_first_pkg my_subscriber

You should see the “Hello world: ” string followed by a counter appear in both terminals. The publisher prints its message to the console for debugging, and the subscriber prints out any messages it receives on the my_topic topic.

Finally, in the third terminal, run the RQt graph application, which gives you a visualization of how your two nodes are communicating:

rqt_graph

RQt is a graphical interface that helps users visualize and debug their ROS 2 applications. The RQt graph shows nodes and topics, which can be extremely helpful as you start running dozens or hundreds of nodes in your robot application. RQt contains many other useful tools, but we’ll stick to the RQt graph for now. Note that you might need to press the Refresh button in the top-left of the window to get the nodes to appear.

Press Ctrl+C in each of the terminals or close them all to stop the nodes.

Clients and Servers, Requests and Responses

The publish/subscribe model works well when you need to regularly transmit data out on a particular subject, such as sensor readings, but it doesn’t work well when you need one node to make a request to another node. For this, ROS has another method for handling messaging between nodes: services.

A service follows a client/server model where one node acts as a server, waiting to receive incoming requests. Another node acts as a client and sends a request to that particular server and waits for a response. This pattern works well in instances where you want to set a parameter, trigger an action, or receive a one-time update from another node. For example, your computation node might request that the motion node moves the robot forward by some amount.

As with topics, nodes can contain one or more servers, which broadcast their ability to respond to requests via a particular service. Nodes can also instantiate clients, which are used to send requests to the servers.
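The request/response pattern can also be sketched in plain Python. Again, this is an analogy to show the flow of a service call, not the rclpy service API; the registry and function names here are invented for the sketch:

```python
# A service registry: servers register a handler under a service name,
# and clients look up the handler by name and call it with a request.
services = {}

def create_service(name, handler):
    services[name] = handler          # server side: advertise the service

def call_service(name, request):
    return services[name](request)    # client side: send request, get response

# Server: handle an "add_ints" request by summing the two fields.
create_service("add_ints", lambda req: req["a"] + req["b"])

# Client: build a request, send it, and use the response.
response = call_service("add_ints", {"a": 2, "b": 3})
print(response)  # 5
```

In real ROS 2, the call crosses process boundaries and is asynchronous, which is why the client code uses a future rather than a plain return value.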

Create Server and Client Nodes

In my_first_pkg/my_first_pkg/, create a new file, my_server.py:

cd /config/workspace/src/

code my_first_pkg/my_first_pkg/my_server.py

Copy the code found at bit.ly/4fSWUcT and paste it into that file.

Here, we create another subclass node named MinimalServer. In that node, we instantiate a service named add_ints, which turns the node into a server. We must specify an interface for the service so that we know what kind of data it contains. In this case, we specify AddTwoInts as the interface, which was imported at the top of the file in the from example_interfaces.srv... line. ROS 2 contains a number of example interfaces, but you can also define your own. Finally, we attach the callback method _server_callback() to the service.

Whenever a request comes in for that named service, _server_callback() is called. The request is stored in the req parameter. This service works only with the AddTwoInts interface, so we know that the request will have two fields: a and b, each containing an integer. We add the two integers together, store the sum in the sum field of the response, print a message to the console for debugging, and then return the response object. The ROS 2 framework will handle the underlying details of delivering the response message back to the client. Save your file.
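The callback's logic is easy to model with plain Python objects standing in for the request and response. The dataclasses below are stand-ins for illustration, not the generated AddTwoInts message types:

```python
from dataclasses import dataclass

@dataclass
class Request:   # stands in for AddTwoInts.Request (fields a and b)
    a: int
    b: int

@dataclass
class Response:  # stands in for AddTwoInts.Response (field sum)
    sum: int = 0

def server_callback(req, resp):
    # Same logic as the node's callback: add the ints, fill in the
    # response, print a debug message, and return the response.
    resp.sum = req.a + req.b
    print(f"Request: {req.a} + {req.b} = {resp.sum}")
    return resp

result = server_callback(Request(a=4, b=7), Response())
print(result.sum)  # 11
```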

Now let’s create our client. In my_first_pkg/my_first_pkg/, create my_client.py:

code my_first_pkg/my_first_pkg/my_client.py

Copy the code found at bit.ly/47bipmK and paste it into that file.

In our client node, we create a client object and give it the interface, AddTwoInts, and the name of the service, ‘add_ints’. We also create a timer object, much like we did for the publisher node. In the callback method for the timer, we fill out the request, which adheres to the AddTwoInts interface. We then send the request message to the server and assign the result to a future. We repeat this process every 2 seconds.

A future is an object that acts as a placeholder for a result that will be available later. We add a callback to that future, which gets executed when this node receives the response from the server. At that point, the future is complete, and we can access the value within. The response callback simply prints the resulting sum to the console. Don’t forget to save your work!
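Python's standard library has its own future type that behaves the same way, and rclpy's future is analogous. A quick demonstration using concurrent.futures.Future shows the placeholder-then-callback pattern:

```python
from concurrent.futures import Future

results = []

def response_callback(future):
    # Runs once the result is available, like our client's callback.
    results.append(future.result())

future = Future()  # placeholder for a result that arrives later
future.add_done_callback(response_callback)

# Nothing has arrived yet...
assert not future.done()

# ...until the "server" completes the future with a response.
future.set_result(11)
print(results)  # [11]
```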

As we did with the publisher and subscriber examples, we need to tell the ROS 2 build system about our new nodes. Add the following to my_first_pkg/setup.py:

entry_points={
    'console_scripts': [
        'my_publisher = my_first_pkg.my_publisher:main',
        'my_subscriber = my_first_pkg.my_subscriber:main',
        'my_client = my_first_pkg.my_client:main',
        'my_server = my_first_pkg.my_server:main',
    ],
},

Rebuild your package:

cd /config/workspace/

colcon build --packages-select my_first_pkg

The package should build without any errors.

Run Your Server and Client

As you did with the publisher and subscriber demo, open three terminal windows. In the first window, source your workspace environment and run the server node:

cd /config/workspace/

source install/setup.bash

ros2 run my_first_pkg my_server

In the second window, source the workspace environment again and run the client node:

cd /config/workspace/

source install/setup.bash

ros2 run my_first_pkg my_client

You should see the server terminal receive the request from the client containing two random integers for a and b, each between 0 and 10. The server sends the response back to the client, which prints the sum to the terminal.

You are welcome to run rqt_graph in the third terminal, but you will only see the nodes — no service interface or lines connecting them as you saw with topics.

Mighty Middleware

It might seem odd that we spent all this time just getting some pieces of software to talk to each other, but these concepts form the basis for ROS. Remember, ROS is a middleware messaging layer at its core, not a collection of robot drivers or sensor libraries. ROS solves an important challenge by helping developers scale software for large, complex robots, though it carries too much overhead to be useful for smaller robotics projects.

That being said, I hope this brief introduction satisfied your curiosity about ROS. If you’d like to learn more, check out the examples and getting started video series. Beyond the messaging system we just looked at, ROS ships with a number of libraries, like TF2, diagnostic tools, and visualizers to help you build, test, and deploy your robot software. Topics and services are just the beginning; ROS scales up to handle extremely complex designs and is currently used in many commercial robots found around the world!



Shawn Hymel

Shawn Hymel is a course creator and instructor for edge AI and embedded systems that inspire and teach developers of all skill levels. Based in Lafayette, Colorado, he can be found giving talks, running workshops, and swing dancing in his free time.



Nailing the Details: Scale Model Surfaces


Metal panels on full-sized aircraft are attached in a variety of ways. Some panels butt against each other with no space between. Others, like the Mustang, have noticeable gaps exposing chromate primer below. Racing planes and sailplanes tend to have putty between the panels and over flush rivets, making them nearly invisible. Major stress junctions require overlapping panels. Panels are usually outlined with flush rivets or round-head rivets, but some modern airplanes have panels glued in place.

Cal Branton’s quarter-scale FW 190A

Fabric surfaces are typically reinforced with stitches around the stringers or ribs and covered with cloth tape. Rib stitches on wings have to be strong enough to support the weight of the aircraft. Fabric on fuselages and other non-lifting surfaces generally don’t require stitches but they may be smoothed and reinforced with a layer of tape over the stringers. Spacing between stitches varies from 1 to 3 inches on the full-sized aircraft, depending on expected air speed.

For best results, examine close-up photos of the airplane being modeled to determine the type of surface detail—flush or round rivets, overlapped or abutted panel lines, pinked-edge tape or straight, etc. If not available, seek pictures of a similar aircraft. Best of all, visit your local air museum. See and feel the surface. If the museum guard asks why you are fondling his airplane, tell him you are a scale modeler. He will understand.

Panel lines

After covering the surface with glass cloth or other material, lay down one coat of sandable primer. Sand off almost all of it, leaving a smooth surface.

Photography by David P. Anderson

Using scale three-views and photos as references, draw panel lines on the surface with a flexible straight edge and a fiber-tip pen.

Lay one-inch wide masking tape beside each panel line and then lay 1/32” Chartpak or other masking tape on the panel line. Snug the tape against the wider masking tape for a straight line. Remove the wider tape, leaving only the Chartpak tape.

Apply another coat of primer with a mini-roller. Sand off almost all of it, leaving the Chartpak tape exposed.

Remove the Chartpak tape. Ball up and remove any remaining Chartpak adhesive from the panel lines with a rubber squeegee. Sand the edges very lightly to remove flashing.

For overlapped panels, apply one or more layers of regular masking tape and apply primer over one edge of the tape. When dry, remove the masking tape and lightly sand the edge.

Sometimes prominent hatches, such as gun covers, require a surface too thick to be implemented with primer. In this case, mask off the hatch with more than one layer of masking tape, apply a thin layer of Bondo or other automobile body putty with a squeegee or credit card, and sand to the tape. Remove the tape and lightly sand the edges.

Rivets

Round-head domed rivets appear on highly stressed surfaces such as wing fillets and landing gear doors. They are stronger than flush rivets but create more drag. Real aluminum miniature rivets are available from MicroFasteners. Just drill holes and glue them in.

Flush rivets and smaller round-head rivets can be simulated with yellow glue, thinned with about 20% water. Rivet tape can be purchased from Scale Model Products.

Attach the rivet tape. Apply a 20% diluted solution of white or yellow glue. For flush rivets, squeegee off the excess with a credit card. For domed rivets, place a drop of glue over each hole and don’t squeegee. Remove the tape immediately while the glue is wet. The rivets will appear to be too large at first but they will shrink as the glue sets.

Pop rivets have a hole in their centers. If the glue is diluted even more and just right, it will wrinkle slightly when drying, creating a pop-rivet look. This requires some experimentation to get right—try it on a scrap surface first. Rivet tape can be reused but only once or twice.

Small hatches can be made from aluminum Duck Tape. Cut to size and apply. Wet-sand with 600-grit sandpaper to remove high spots. Flush rivets can be embossed into the aluminum with a sharpened brass tube. Run a #11 X-acto blade around the inside edge of a brass tube to sharpen its edge, press the tube into the duct tape, and rock the tube in a small circular motion. Flush rivets can also be embossed into primer by this method if the primer is thick enough.

Stitches

If you are a balding middle-aged man purchasing 15 rolls of Scotch Hair Set Tape at the drugstore or beauty shop and you are asked what you are using it for and you say you are building a quarter-scale Stearman, the clerk won’t understand unless of course the clerk is also a scale modeler. Then you will receive a knowing grin of understanding.

Begin by making a rib-stitching tool. Squash the end of a ¼” aluminum tube with pliers. Don’t completely close the end. Leave a thin slit. Prepare a solution of white glue diluted about 20% with water in a bottle to a depth of about one inch. Dip the tube into the bottle, squashed end first, pause, then place your thumb over the open end. Remove the tube from the bottle and wipe the outer surface with a paper towel.

Lay a ruler next to a rib and touch the tip of the glue pen to the surface, rocking slightly side-to-side. Repeat, moving the pen by a measured amount. The glue pen will make 10-20 stitches before refilling. If you make a mistake, remove the glue with a damp towel and try again. The stitches will look too big at first but they will shrink as the glue dries.

Scotch Hair Set tape is about right for quarter scale. Apply it or other pinking tape (see References) over the rib stitches. Apply nitrate dope to Scotch Hair Set tape to secure the tape and to raise the fibers. Sand smooth.

The Focke-Wulf Ta 152H and other fast warbirds typically have stitched fabric only on their control surfaces, but many classic airplanes such as the Howard Pete also have fabric-covered fuselages in which tape without stitches is used over fuselage stringers.

The Hawker Hurricane is typical of airplanes having fabric-covered fuselages. They tend to have closely spaced stringers without stitches or visible taping. These stringers can be simulated by placing 1/32” or 1/16” 3M masking tape on the fuselage sheeting before covering with fiberglass cloth. The result is only a slightly bulging line.

Phillips pan-head and flat-head screws appear on the Grumman Lynx and other modern airplanes. The best way to simulate screw heads is to use real screws. Miniature screws in a large variety of styles and sizes are available from MicroFasteners.


Reprinted from the book Model Airplane Construction Techniques With Emphasis on Giant Scale.

For more scale how-to articles and free plans, visit www.mnbigbirds.com.

Good building!

References:

http://www.pink-it.net

http://www.personainternet.com/scaleribstitch

http://www.microfasteners.com

http://rcscaleproducts.com/products-page/finishing/rivet-tape-16th-15th-and-14-scale

David P. Andersen

David P. Andersen has been building model airplanes since the 1940s. He has published 20 scale model airplane designs. He was recently inducted into the Academy of Model Aeronautics Hall of Fame. His book, The Cello Maker And Other Stories of Creative People, is available from Barnes&Noble.



Skiing on Jet Power

Drones & Vehicles Rockets
Jet-Powered Skis

The first time I punched the throttle, there was no explosive launch, no violent shove. Instead, I just started sliding forward smoothly, gaining speed faster and faster. Before I knew it, I was flying past everyone around me. I had strapped 16 kilowatts of electric jet thrust to my arms, and it was exhilarating! This project had lived in my head for years, and I had just made it a reality.

Like many makers, I grew up watching inventors like Colin Furze strap jet engines to all sorts of things, pushing ideas to their absurd limits. Maybe this is why I have always had a fascination with going fast. I’m sure it’s a common thought for a lot of makers: how to take something normal and push it to the extreme. Well, that was me with skiing. As soon as I had enough confidence to go fast downhill, there was always this lingering thought in the back of my mind that I could make something to push the boundary of what it meant to go fast on skis. I sat on that thought for years, wondering if it was possible and how I would approach it. Jet engines always seemed like the logical choice, but I always ended up thinking it would be too impractical. Thinking something isn’t practical doesn’t answer the question of whether it is possible, though, so one day I decided to put pen to paper and see if it could be done. Time to do some –

Research

All projects start with research, and this one is no different. The search began with JetCat micro jets. These are true dead-dinosaur-powered metallic thrust producers. However, when looking into the feasibility, it was clear it just would not work out. Mounted to my skis, close to the ground, the engines would be ingesting a lot of foreign objects, and they would be too hot to have close to my body. The auxiliary systems also seemed like a lot to work around. Fuel lines would have to be routed across my body, as well as high-voltage wires for the ignition system, all exposed and close to the ground, just waiting to be snagged on something. Did you know that snow is just frozen water? Did you know that exposed high-voltage wires like to spark on conductive things like water? Did you know that a spark can ignite flammable liquids leaking out of a snagged fuel line? Yeah, a lot of complexity, and complexity is synonymous with a safety nightmare. So I kept looking for another solution.

I have seen electric ducted fans (EDFs from now on) used on projects. If you aren’t familiar, they are basically really powerful drone motors in a tube, with a high-blade-count propeller for moving lots of air. They are cool, but they never wowed me… they just never seemed to stack up with my past experiences of what a real jet engine could do. That being said, EDFs come in a wide range of sizes and power levels, and after looking at specs online, it became apparent that my opinion of them was tainted by people choosing smaller EDFs and commercially available batteries with limited output for their projects.

EDFs would be great if they could work out. They remove a lot of complexity, and since I don’t have to worry about heat, I can mount them to my arms. Keeping the EDFs away from the ground solves the foreign-object-ingestion problem, and the arm mounting also means that I don’t have to “re-learn” how to ski. With the thrust acting on the skis, turning or stopping might require a change in technique, meaning my existing muscle memory would fail me. However, with the thrust acting on my upper body, the mechanics of skiing shouldn’t change, and that’s a huge benefit that can’t be ignored.

I built a spreadsheet of all the largest 120mm-diameter EDFs. Any bigger and you get into engines meant for gliders that carry people, which means the cost jumps by quite a bit due to certifications. I quickly narrowed it down to just a few choices based on two main criteria: thrust per dollar and thrust per watt. Both are important metrics. A motor with good thrust per dollar but bad thrust per watt means a bigger and more expensive battery, which will probably offset any cost benefit the motor itself has.

In the end, I settled on the Ejets Jetfan-120 ECO with a HET 800-74-590 motor, an EDF that produces 20.5 lbs (9.3kg) of thrust at 53.9 volts and 124 amps (about 6.7kW). Doing the math with the peak voltage of the battery I planned to build, that changes to 58.8V and 135A, a whopping 8kW of power per motor! That’s just shy of 11 horsepower per EDF! In other words, the equivalent of having a Honda Grom motorcycle under each of my arms as far as power output is concerned. Now we are talking! This seems like it just might work!
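For anyone who wants to check those figures, the conversion is just volts times amps:

```python
# Quick check of the per-motor power figures above.
volts, amps = 58.8, 135.0
watts = volts * amps       # electrical power in
hp = watts / 745.7         # 1 hp = 745.7 W

print(f"{watts / 1000:.1f} kW per motor, {hp:.1f} hp")
```

That comes out to just under 8kW and about 10.6 horsepower per motor.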

I did some quick math in my notebook to check the validity of my intuition. I found some coefficients of friction for skis on snow by searching research papers online. I then plugged the range of values I found into two equations. I wanted to see what my velocity would be after 10 seconds, as well as the holding angle where the EDFs perfectly counter the force of gravity. The reasoning here is that any mistake in one equation should become apparent if the answer to the second one didn’t seem to match. With the velocity at 10 seconds, I got 21 mph for an average friction, and 32 mph for a best case. Similarly with the holding angle, I got 5.7 degrees average, and 8.6 degrees best case. This passed the sniff test for me.
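For the curious, here’s a sketch of that notebook math. The mass and friction values below are assumptions plugged in for illustration, not the exact numbers from my notebook, so the outputs only land in the same ballpark:

```python
import math

# All inputs here are illustrative assumptions.
G = 9.81                  # gravity, m/s^2
THRUST_N = 2 * 9.3 * G    # two EDFs at ~9.3 kgf each, in newtons
MASS_KG = 120.0           # skier + gear + battery (assumed)
MU = 0.05                 # assumed friction coefficient, skis on snow

def speed_after(t_s):
    """Speed (m/s) after t seconds on flat snow, starting from rest."""
    a = THRUST_N / MASS_KG - MU * G   # net acceleration
    return a * t_s

def holding_angle_deg():
    """Slope angle where thrust alone balances gravity (friction ignored)."""
    return math.degrees(math.asin(THRUST_N / (MASS_KG * G)))

print(f"Speed after 10 s: {speed_after(10) * 2.23694:.1f} mph")
print(f"Holding angle: {holding_angle_deg():.1f} deg")
```

With these assumed inputs it works out to roughly 23 mph after 10 seconds and a holding angle near 9 degrees, the same neighborhood as the notebook figures.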

As a final check, I turned to the online maker community and asked them to fact-check my work. It had been a handful of years since I had graduated university, so I was a bit worried that my physics was a bit rusty. Thankfully, someone realized I was mad enough to do this project, and it wasn’t just me asking for homework help. They confirmed my math, which gave me enough confidence to start ordering parts. Big shout out to the online maker community. It really does take a village, even when it looks like it’s just one person building something cool.

More powah!

The first challenge was testing all the parts after I got them. It’s the small things you don’t think about, but how do you test a ~60V motor that pulls 8kW? I have a lab power supply, it can output 60V. You know what it can’t output though? Yep, 8kW.

So, the first thing I tackled was the battery. There were three main battery chemistries I wanted to look at: Li-ion, LiPo, and LiFePO4.

Chances are you are familiar with lithium-ion (Li-ion) batteries already. These batteries power your laptop, power tools, and even electric cars. They win out on energy density, or the amount of energy you can store per unit of volume or weight.

The next contender is LiPo, the battery most commonly used on drones and RC aircraft. LiPo batteries win when it comes to high-output applications. They can dump way more power at once than a lithium-ion battery, but they don’t hold as much total energy. A way to think of this is that our Li-ion battery is a 2L bottle of water with a small hole in the cap, while the LiPo is a normal water bottle with the cap fully removed. The smaller LiPo bottle can dump more water at once, but it has less total water than its 2L Li-ion counterpart. This would probably be the battery to go with for a quick YouTube video, but I wanted these to be usable for a full day of skiing. Oh, also, LiPo batteries seem to light on fire a lot easier than Li-ion, which is something I would like to avoid.

Last, we have LiFePO4. These are what you might find in a home power wall, or at a solar farm for energy storage. They can hold a lot of energy, but they end up being heavier than Li-ion for the same amount of energy stored. They also have a slightly lower cell voltage, so I would be leaving some performance on the table. The main reason I was looking at them is that the cell I was considering (the Headway 38120) has screw terminals, so building a battery pack would have been a lot easier with them compared to spot welding.

In the end, I went with 18650 Li-ion batteries, Samsung 25R cells to be exact. I settled on a 14s14p battery configuration. This means the assembled battery has blocks of 14 cells in parallel, and then 14 of those blocks linked together in series: a total of 196 18650 cells. The final battery should have about 10 minutes of full-throttle runtime, which, given the above speed calculations, should be plenty! To be honest, in retrospect, I wish I had pushed the batteries a bit past their rated amperage output and gone with a 14s12p configuration. The 10-minute runtime was plenty, but the battery weight is just borderline bearable, so shaving 28 cells off the battery would probably have made it just a bit more comfortable to wear.
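The sizing arithmetic is simple enough to sketch. The per-motor current here is the peak figure from earlier; real draw is lower once the pack voltage sags and you aren’t pinned at full throttle, which is part of why the practical runtime stretches toward 10 minutes:

```python
CELL_AH = 2.5        # Samsung 25R rated capacity
CELL_MAX_A = 20.0    # Samsung 25R rated continuous discharge
P, S = 14, 14        # cells in parallel per block, blocks in series

pack_ah = P * CELL_AH           # amp-hours available
pack_max_a = P * CELL_MAX_A     # amps the pack can deliver continuously
draw_a = 2 * 135.0              # two EDFs near peak draw (assumed)

minutes = pack_ah / draw_a * 60
print(f"{pack_ah:.0f} Ah pack, {pack_max_a:.0f} A capable, "
      f"~{minutes:.0f} min at constant full throttle")
```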

Contrary to the LiFePO4 batteries with their screw terminals, the 18650s required spot welding. The thickest nickel strip I could find was rated for only 10 amps. Unless my math was wrong, a 30-second burst at my currents would heat the nickel up by hundreds of degrees C, given its measured resistance. I did find some nickel-coated copper strip, but could not get it to spot weld with any spot welder I tried. Instead, I found that you can get flat braided copper wire (sold as copper drain cable for grounding electronics) and spot weld that to the nickel strip, both to lower the resistance and to act as a heat sink. This worked wonderfully, but was very time-consuming (hence the mention of the screw-terminal batteries). I haven’t seen it done before, so I thought I would throw it out there for anyone else trying to make really high-output batteries.
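To see why bare nickel strip is a problem, here’s a rough Joule-heating estimate. The strip dimensions and the way the current splits are assumptions for illustration, and it ignores cooling entirely, but it shows how quickly I²R losses add up:

```python
RHO_NI = 7.0e-8      # nickel resistivity, ohm*m (approximate)
DENSITY = 8900.0     # nickel density, kg/m^3
C_NI = 444.0         # nickel specific heat, J/(kg*K)

length = 0.02        # 2 cm of strip in the current path (assumed)
width = 0.008        # 8 mm wide strip (assumed)
thick = 0.00015      # 0.15 mm thick (assumed)
amps = 27.0          # assuming the load splits across ten parallel strips

area = width * thick
resistance = RHO_NI * length / area      # ohms of this strip section
power = amps**2 * resistance             # watts dissipated in the strip
mass = length * area * DENSITY           # kg of nickel being heated
rise_per_s = power / (mass * C_NI)       # deg C per second, no cooling

print(f"~{rise_per_s * 30:.0f} C temperature rise in a 30 s burst")
```

Even with the load shared across several strips, you land in the hundreds of degrees, right in line with the estimate above.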

Also, spot welders heat up FAST. I ended up using two cheap spot welders so that I could trade off and let one cool down while not interrupting the spot welding flow. Don’t underestimate the benefit of some additional equipment for large builds like this!

Anyway, with the battery built, I could actually power the EDFs up and get to testing!

It’s all just LEGOs. Very powerful LEGOs.

The actual build is fairly simple. The battery has an off-the-shelf battery management system. This connects to an off-the-shelf electronic speed controller (a Flyfun 160A V5 ESC, the cheapest one I could find). That could talk to an off-the-shelf RC receiver, though that’s not what I did. Anyway, my point is that from here on it was smooth sailing, and it’s incredible how plug-and-play most things are nowadays.

I wanted the controls of the project to be integrated, and not use a separate RC aircraft remote. Because of this, I deviated from RC airplane land and moved into the more comfortable (for me) microcontroller territory. My one complaint is that RC aircraft parts often speak RC aircraft lingo in their docs, and don’t have details you might want if you are DIYing your control scheme. For instance, the PWM frequency and value range for the throttle were not listed anywhere online or in the physical manual the speed controller came with. This took a bit of time to figure out, but eventually I realized that these speed controllers expect you to calibrate them every time you turn them on. You input max throttle on power-up, it beeps, you back off to zero throttle, it beeps again, and then it will actually turn on and start processing throttle commands. I found this out the hard way when my testing caused the ESC to start ramping up the EDF’s speed while on my workbench. The EDF ate the paper manual I had open in front of it, filling my room with an ultra-fine paper dust that took hours to fully settle out of the air! Luckily, as mentioned, the manual didn’t have any useful info for microcontroller-related integration anyway, so nothing important was lost. I did, however, get the opportunity to learn how paper both smells and tastes. Anyway, with the ESP32 microcontroller now able to control the speed of the EDF, I just needed some way to tell the ESP32 how much throttle to command.

I went with a knock-off Wii nunchuck controller. I absolutely love them as far as projects go. They are cheap, fit very comfortably in the hand, have a built-in accelerometer, and speak I2C. If you don’t want to do the I2C handling yourself, there’s even an ESP32 library specific to the Wii nunchuck that handles it all for you! Just call wii_i2c_decode_nunchuk(data, &state); in a loop, and then use state.y to get the value of the joystick Y axis. This is mapped to the PWM range that will be output to the speed controller. In total, the code for this project came out to 60 lines, most of which are comments or additional things like using the trigger on the nunchuck as a safety switch you have to hold down for the ESP32 to send the throttle command. If you go with an ESP32 plus a Wii nunchuck, check out https://github.com/moefh/esp32-wii-nunchuk – I had no issue with it, and it made the integration of the nunchuck a breeze!
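At its core, the firmware is just a mapping from the joystick to a standard RC pulse width, with the trigger acting as a dead-man switch. Here’s a Python sketch of that logic (the real version runs as C++ on the ESP32, and the 0–255 stick range is an assumption for a typical nunchuck):

```python
PWM_MIN_US = 1000    # standard RC pulse width for zero throttle
PWM_MAX_US = 2000    # standard RC pulse width for full throttle
STICK_MIN, STICK_MAX = 0, 255   # nunchuck Y-axis range (assumed)

def throttle_pulse_us(stick_y, trigger_held):
    """Map the stick Y axis to an ESC pulse width, gated by the trigger."""
    if not trigger_held:
        return PWM_MIN_US    # dead-man switch: no trigger, no thrust
    y = max(STICK_MIN, min(STICK_MAX, stick_y))   # clamp to valid range
    span = PWM_MAX_US - PWM_MIN_US
    return PWM_MIN_US + span * (y - STICK_MIN) // (STICK_MAX - STICK_MIN)

print(throttle_pulse_us(255, True))    # full stick -> 2000
print(throttle_pulse_us(0, True))      # stick at rest -> 1000
print(throttle_pulse_us(255, False))   # trigger released -> 1000
```

Clamping the input and defaulting to zero throttle whenever the trigger is released means a dropped controller or a glitched reading can’t command thrust.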

Let’s get to the nuts and bolts of it!

Similar to the electrical and control side, the mechanical parts of the project are all very straightforward. While working on the project, I talked to another maker you might be more familiar with, James from Hacksmith Industries. He mentioned not using 3D prints for any crucial connections between the EDF and my body. I felt stupid for not coming to that conclusion myself, as I had planned to 3D print the bracket, but it makes sense. I threw a test 3D print in the freezer to see how bad it would have been, and I can confirm that James was correct. Looking at it the wrong way caused it to break, so even a minor bump or fall could have been disastrous.

While I could have laminated some plywood together and cut a bracket out of that, I decided that having wood on the project just didn’t fit the theme I was going for, so I opted to make as much as I could out of aluminum.

The main backbone of the project is some extruded 3030 T-slot aluminum. The EDF mounts to that with a milled aluminum mount. In reality, using a service like JLCCNC would have been smart, but I am trying to grow my maker skill set and chose to mill the parts myself. The aluminum extrusion was drilled out and tapped on the end for an eye-bolt to be installed, which then attaches to a climbing harness via a locking carabiner. This acts as the main path of force, and is using components that are all way overbuilt for the ~20 lbs of force I expect them to see. This constrains the EDFs to my body, but it does not constrain their motion. At this point, if I powered them up, they would be flying around in circles chewing up anything they could suck in, while attached to my hips!

Because of this, I needed some way to attach them to my arms. More specifically, an arm mount that would keep them collinear with my arms. The only thing I found that seemed like a good arm mount was a range-of-motion brace meant for physical therapy. Sadly, these were on the more expensive side, and I ultimately ended up destroying the fancy range-of-motion mechanism to get just the lower arm portion of the brace. That said, it used aluminum for the bulk of the construction, and some robust-feeling plastic for the Velcro arm loops. Good enough for dealing with secondary side-to-side forces!

I whipped up some quick L brackets to attach the range-of-motion brace to the T-slot aluminum, and the only thing left was mounting the electronics. This was fairly straightforward. I made some small 3D-printed enclosures, put some JST connectors on prototyping PCBs, and stuck the electronics anywhere I could find space. A few thin wires for signal, a few chunky ones for power, and everything was tested and working.

Jet skis (but for snow)

Well, with everything built and tested, the only thing left to do was head to the mountain! It’s one thing to test the thrust standing on solid ground in normal shoes, but testing it standing on snow with slippery sticks under your feet is a whole ’nother experience.

Being the builder and knowing the numbers, I immediately had a sense of respect for the jets as soon as I turned the battery on and heard the ESCs chirp to life. Giving it a quick test fire, I instantly felt myself slide forward. It wasn’t an abrupt shove or anything that would scare you, but your brain understands the power involved, even if you aren’t thinking about the 250 amps being delivered to your arms. I would say it elicits a mixture of excitement and fear… the same feeling you get as you begin to lower the restraints on a roller coaster, or before jumping from a high diving board. As for the thrust, it’s an assertive but gentle acceleration that just continues to build. It doesn’t take a long burst of throttle to understand that staying on it for more than a few seconds would have you going pretty fast.

While I didn’t have someone out with a radar gun, I can confirm that the ballpark measurements done in the notebook are fairly close to real life. As I mentioned, I do have a proclivity for speed. I have gotten up to around 68 mph going downhill on skis before (without the jet skis), and while the conditions were nowhere good enough to reach that kind of speed safely, I could immediately tell that a long enough burst of throttle, even on a more gradual green or blue slope (beginner to intermediate-class runs) could have broken my own personal speed record; a record I set on a black (expert) run that was perfectly groomed for a race that was happening off to the side.

Another fun takeaway from trying out the project was that I definitely could go uphill with them. I suppose the slushy late-season snow I was skiing on had a higher static coefficient of friction, so getting uphill required a bit of a “running start”. More accurately, a jet-powered start where I used the EDFs to get some speed on the flat leading into the slope. But as long as I was moving, they kept carrying me uphill. Even on the steeper uphill sections where people had to turn and sidestep, I was able to become nearly weightless with the assistance of the thrust, and walk up with no real effort.

I also tested towing a second person. I attached a long strap to my backpack, and my safety buddy I was with was able to grab on and get towed along flat sections of the mountain. I’m sure he ate more than his daily recommended amount of atmosphere, sitting in the jet wash and all, but he seemed pretty happy that he didn’t have to put in effort to cross long flat areas (cat tracks, for those that know).

So, all in all, I think the project was a success. It did end up being pretty tiring having a 40-pound battery on my back all day, so I do think that a slightly smaller battery would have been better. The 14p configuration hardly heated up even with extended use, so pushing the cells a bit harder with a 12p configuration would probably have been fine. But hey – you don’t know until you try!

I had a blast bringing the idea to life and using it. Ultimately though, it’s not going to replace normal skiing. It’s heavy, it’s loud, and that makes it very impractical. But that was never the point. This was an impractical idea that lived in my head for years. I just needed to know if it could be done.


Now I know.

I also happened to learn that sixteen kilowatts of thrust feels incredible. I would consider that a win.

Alex / Methodical Maker

Just a guy documenting his projects with woodworking, welding, fabricating, 3D printing, homelab, and hacking. More at @Methodical Maker on YouTube.



Innovative Design and Collaboration at Game On.

Education Fun & Games Maker Faire Technology
Innovative Design and Collaboration at Game On.

In the world of experiential entertainment, few concepts have captured the imagination quite like escape rooms. But James Hopkin and Eric Mittler, the creative minds behind Game On in Berkeley, California, are pushing the boundaries of what these immersive environments can be. Rather than static, one-time experiences, Game On offers a cooperative challenge center that thrives on evolution, iteration, and the liberating power of failure.

At the heart of Game On’s design philosophy is a simple but powerful idea: puzzles should never be truly finished. Hopkin and Mittler draw inspiration from an eclectic mix of sources — museums, engineering hacks, and classic games — to build interactive, modular environments that grow smarter and more engaging over time. Their rooms are deliberately designed for fast-paced trial and error, encouraging teams to communicate openly, collaborate under pressure, and embrace failure not as a setback, but as an essential step toward success.

What sets Game On apart from traditional escape rooms is its commitment to rapid prototyping. Rather than locking in a design and walking away, Hopkin and Mittler treat every guest visit as a live testing session. Player behavior, unexpected solutions, and moments of confusion all feed directly back into the design process. This iterative loop allows them to refine mechanics, re-balance difficulty, and introduce fresh challenges with remarkable speed. Their modular approach to puzzle construction is equally significant. By building systems with interchangeable components, the duo can swap out elements, reconfigure interactions, and introduce new variables without overhauling an entire room. This flexibility means that returning players encounter a genuinely evolving experience, while new players benefit from designs sharpened by hundreds of hours of real-world feedback.

Hopkin and Mittler are also candid about the lessons failure has taught them. Early designs that seemed brilliant on paper often collapsed under the weight of real player interaction. Instead of treating these moments as defeats, they view them as critical data points. A puzzle that frustrates players in the wrong way signals a communication breakdown in the design itself, and that breakdown becomes an opportunity to rebuild something better. Their talk emphasizes that great game design is less about genius inspiration and more about humble observation. Watching how people move through a space, where they get stuck, what makes them laugh, and what sparks genuine teamwork reveals truths that no amount of theoretical planning can uncover. The result is a design culture that values curiosity, flexibility, and continuous improvement above all else.

Watch the full video from Maker Faire Bay Area here.

Gillian Mutti

Gillian Mutti serves as the Director of Marketing for Make: and also holds the role of Co-Producer for Maker Faire.



Getting the Shot: A Conversation with Cinematographer John Brown on Hacking Macro Photography and the Secrets of Bees of All Kinds

Craft & Design Maker News Photography & Video
Getting the Shot: A Conversation with Cinematographer John Brown on Hacking Macro Photography and the Secrets of Bees of All Kinds

We recently had the chance to sit down with cinematographer John Brown to talk about his incredible macro work, the challenges of filming tiny creatures, and how technology is shaping the future of natural history filmmaking. John has been making films for about 30 years, and his soon-to-be-released project is James Cameron’s Secrets of the Bees.

National Geographic Explorer Bertie Gregory immerses viewers in the remarkable lives of bees, some of the most vital creatures on Earth. Over the course of three years, specialized cameras offered an unprecedented look inside a single hive, unveiling its hidden dynamics. With over 20,000 bee species pollinating a third of the world’s food supply, this series reveals their stunning architectural abilities and intelligence, unlocking the secrets of their world.

SCIENTIFIC FIRSTS 

  • The world’s first shot of broomstick bees in flight. 
  • The world’s first footage of a vulture bee nest. 
  • The world’s first footage of varroa mites invading a honeybee hive, and of the bees defending themselves

BESPOKE BUILDS

  • Custom hives and sets
  • Hacked and DIY camera lenses and mounts

Here is our conversation, preserved in John’s own words.

Gillian Mutti: Lovely to meet you, John. Thank you for giving me some time, I really appreciate it.

John Brown: Pleasure.

Gillian Mutti: So, I do have to ask, I’ve watched the sneak peek episodes and, if you don’t mind, we’ll jump right into it. I am clearly amazed by the cinematography in this, and the shots you were able to capture. How were you able to accomplish that?

John Brown: So, this kind of filming is something I’ve done for years. I mean, I’ve been making films for about 30 years, and a lot of it has been this very detailed macro photography. So we use a variety of different tools. The main lens we used to film this program was a thing called a probe lens, or a scope lens, which is very long and thin. Almost like a medical endoscope, but a bit thicker, and it just means you can get the viewer right in between, you know, the frames within a beehive, or right down at ground level, so it gives you a very intimate perspective on the insect world.

Gillian Mutti: Does the equipment give off vibration or disrupt the bees?

John Brown: No, no, not at all. So, the bees, I mean, they’re very tolerant of disturbance. I should caveat that; it depends on the species. Honeybees like very tight spaces, and they like it dark, and they like to be squished together, which is obviously not ideal for filming. So if you’re filming honeybees, you need to make the spaces within the colony a bit bigger and put some light in there, and it can take them a while to get used to that. So if you’re changing what they’re used to significantly, then they might take a while to adjust. But if you’re filming something outdoors, where they’re used to being in natural light, then you don’t really disturb them with the presence of the camera at all. The camera doesn’t really vibrate, there’s no sort of movement, and you’re going to be moving the camera in a very slow, precise way anyway.

Gillian Mutti: I don’t know if it was a little serendipitous, but yesterday I was at a colleague’s winery, and there was a swarm of bees in one of the trees. Our founder called someone to come collect them, and they were so docile. When somebody came out, the woman shook the branch the swarm was in, and they followed the queen into a box.

John Brown: I love honeybee swarms, I mean, I’ve kept bees for half my life, so I’m very familiar.

They’re just delightful when they’re swarming, because they’re not grumpy; they’ve got nothing to defend, they’re just completely focused on keeping the queen safe. So you can, you know, you can pick up a swarm.

Gillian Mutti: It was really quite magical to see. On that note, what was a challenge, or something you came up against, that you didn’t think you were going to have to solve for when shooting?

John Brown: I think probably the hardest shoot, I spent 5 weeks in the Amazon in Ecuador, doing two different sequences for the first film, one about fire bees, and one about vulture bees. And they were both extremely difficult, because the bees themselves were absolutely tiny; each was about the same size as a honeybee’s head. So you’re working with a subject which is absolutely minuscule. And they actually were quite sensitive, so you needed to be very calm around them, and particularly the fire bees were doing their thing on these very thin branches about 6 foot off the ground, so even just rigging the camera around them without disturbing them, and clamping everything off so it doesn’t blow in the wind. Then trying to tell a story of this tiny, tiny little bee, and make it characterful and interesting and appealing. That was really difficult. And plus, it was a very remote place, and it kept raining the whole time, and it was just one of those shoots that you were quite glad to get home from at the end.

Another constant challenge was the size. I mean, often it’s the fact that you’re trying to create images that the audience will fall in love with, but your subject is tiny. And just the laws of physics mean that it’s not like photographing a human face. You’re photographing something that’s two or three millimeters across, but you want to give it all the character and sort of engaging qualities of a human face. I mean, you still want audiences to look at this little animal and engage with it.

Gillian Mutti: Yes, well, from what I’ve seen in my fortunate sneak peek, you definitely did accomplish that; well done. And how did you get into cinematography?

John Brown: I’ve always been obsessed with natural history, so as a kid, I would just draw pictures of animals. I was obsessed with drawing pictures of animals, and when I could afford a camera, I bought a camera. Then I did a biology degree at university here in England, but always wanted to make films, and it just so happened that there was a film company nearby. I went to Oxford University, and there was a very famous film company there called Oxford Scientific Films, so I kind of started working for them during my university holidays. And just slowly got to know people. I was super shy, I never said anything to anyone, but slowly, slowly got to learn how to work cameras and so on. There are no set routes into this industry. Everyone’s got their own different story.

Gillian Mutti: For sure. As technology has evolved throughout your time, what has been the biggest change with the technology you’re using, or do you find that it stayed pretty true in your field?

John Brown: Yeah, no, that’s a really good question. I mean, I think, in my career, because I’ve been doing it for a long time, I started shooting on 16mm film. I was that generation that grew up shooting with 400-foot reels of 16mm film, and you couldn’t see what you’d shot until you got back home and got it developed and all that kind of thing. So, there was a huge change when we switched from film to digital, and then you could see what you’d filmed, and you weren’t nervous about whether you got the shot or not, because you could just check.

So that transition from film to digital was significant. Our cameras are now also much more light-sensitive, which makes a big difference, particularly when you’re filming small subjects, where you typically need a lot of light to get a good-quality image.

John Brown: The lenses haven’t changed that much, so some of my favorite lenses for filming macro are old microscope lenses from the 1970s. It’s a mixture: some technology, particularly to do with lenses and the glass, hasn’t changed much, but camera technology has evolved quite remarkably, and the big change was from film to digital. I used to go away, you know, you might go away for 4 weeks to the Amazon, say, and you would take a suitcase full of film stock, 40 or 50 rolls, and you didn’t know whether it all got messed up and fogged at the airport on your way out. So you could have spent a month filming the most beautiful stuff and get home and find the whole thing was ruined.

Gillian Mutti: Yeah, unfortunately, I know that a little bit too well, my father is a photographer and he actually has done quite a bit of stuff with National Geographic.  We had a dark room in the house I grew up in, and you didn’t want to be around when the film went… went sour coming back from location.

John Brown: Oh, Godness. So now we just have a much better idea of what we’re getting. On a day-to-day basis, it means we can start editing in the field, and that is something that’s search,

It’s such a luxury to start editing early, particularly with these kinds of sequences, where they’re complicated stories. You’re kind of piecing them together like a jigsaw puzzle, and if you can start viewing what you shot every night, starting to edit a sequence, then you kind of know whether you’re building the story that you’re looking for.

Gillian Mutti: Does the story ever shift on-site?

John Brown: I would say all the time, and I think that’s probably one of the skills that… I think to be light on your feet as a filmmaker is probably one of the most important qualities. It’s good to go out there with a plan, but nearly always the plan changes, and often for the better, you know?

I was gonna say, one of the worst things you can do is stay committed to a plan when it’s not quite working. I think you just need to read the room and see… I mean, the animals kind of tell you where they want the story to go, so it’s a question of sort of being responsive to them and changing the story to sort of suit what they’re doing, basically.

Gillian Mutti: And I think that’s kind of the magic of it a little bit. I understand you have your plan, but when plans don’t go the way you expect, it keeps you and your creativity on your toes.

John Brown: Oh, I love it. I think it’s one of the most exciting things, actually, is to sort of feel like no matter how much preparation you do, you’re still telling the story of something that you know, the animal’s gonna do what it’s going to do. So it’s a question of being prepared to sort of, you know, go with the flow of what’s in front of you.

Gillian Mutti: Well, that’s wonderful. Thank you so much for this. I really appreciate it. Thank you very much for your time.

John Brown: No, thanks to you!

Watch the Secrets of the Bees

Unscripted Series and Specials, Premieres March 31 at 8/7c on National Geographic and National Geographic WILD and Streams Next Day on Disney+ and Hulu.


Gillian Mutti serves as the Director of Marketing for Make: and also holds the role of Co-Producer for Maker Faire.

View more articles by Gillian Mutti
Discuss this article with the rest of the community on our Discord server!


Flashforge AD5X Is a Best-Selling Printer for a Reason


Manufacturer: Flashforge

Price: $550

Link: flashforge.com/products/flashforge-ad5x-3d-printer

Cover of Make Volume 96. Headline is "Change it Up!" 3D printers Snapmaker U1 and Prusa XL are on the cover.
This article appeared in Make: Vol. 96. Subscribe to Make: for the latest articles.

“Multicolor Is Hot!” proclaimed the cover of Make: Volume 88, highlighting the Bambu Lab X1 Carbon AMS system that finally broke multi-material printing through to the masses. But with its original price of almost $1,500, a large swath of the maker audience was excluded from the multi-material mania by cost. That’s why I was excited to learn about the Flashforge AD5X at RAPID + TCT 2025 in Detroit (you can read my recap) — especially when I learned it would cost only about $500.

Without really trying, I can find the AD5X online discounted by $100, $150, or more, which makes this machine incredibly accessible and explains why, as I type this, it’s the #1 best-selling 3D printer on Amazon. So, what do you get for your money? For starters, a robust, compact, solid machine, with all four filament reels on one side. It’s not enclosed (though a $49 enclosure kit exists), and it prints PLA, TPU, and PETG out of the box, and even materials like PLA-CF thanks to a hardened steel nozzle. Like most new machines, the AD5X uses CoreXY motion, enabling impressive print speeds of up to 600mm/s.

Setup is a breeze compared to some other multi-material devices — just mount the display, attach the spool holders, and install the Intelligent Filament System (IFS) module. Flashforge’s multi-material IFS system is mechanically simple and easy to use, with automatic filament loading, cutting, and unloading, and, when printing in a single color, the ability to switch spools seamlessly as they run out. A meager allocation of filament is included with the machine to give you a taste of four-color printing, but you’ll need to buy more almost immediately. Multi-material printing works well, though the waste outlet is located in the back of the machine, so you’ll want to download the “AD5X Poop Chute” from Flashforge’s wiki to route waste for easier retrieval.

Flashforge AD5X “Poop Chute”

Material waste is on par or better than similar filament-swapping devices (though compared to the Snapmaker U1 almost every other machine feels wasteful). The AD5X has many features we take for granted these days, such as auto bed leveling, and even some that other machines in this class might lack, like an Ethernet port and quick-change nozzle. At this price though, some of the mod cons, such as a camera and LED lighting, are optional add-ons. The resistive touchscreen feels a little outmoded after experiencing the capacitive screens found on other new machines, and the OS/GUI, while relatively simple to use, can sometimes be unintuitive.

Print quality is excellent, although the print volume feels slightly claustrophobic at 220mm×220mm×200mm. Flashforge’s slicer is based on Orca, and uses its color painting feature to easily assign filament colors to prints. The Flash Maker smartphone app is also helpful for monitoring prints (even more so if you add the optional camera!).

The Flashforge AD5X is an excellent first multi-material printer, or even a first 3D printer, period. Catch it on sale and consider adding the $40 camera and $15 LED light strip, and you’ll have a fast, capable, easy-to-use multi-material machine for not much over $400. I have really come to appreciate and enjoy this machine over the past few months of testing — it just works, and it does everything I need it to, for the price of a single-filament machine!



David bought his first Arduino in 2007 as part of a Roomba hacking project. Since then, he has been obsessed with writing code that you can touch. David fell in love with the original Pebble smartwatch, and even more so with its successor, which allowed him to combine the beloved wearable with his passion for hardware hacking via its smartstrap functionality. Unable to part with his smartwatch sweetheart, David wrote a love letter to the Pebble community, which blossomed into Rebble, the service that keeps Pebbles ticking today, despite the company's demise in 2016. When he's not hacking on wearables, David can probably be found building a companion bot, experimenting with machine learning, growing his ever-increasing collection of dev boards, or hacking on DOS-based palmtops from the 90s.

Find David on Mastodon at @ishotjr@chaos.social or these other places.

View more articles by David Groom


Review: Liene PixCut S1 Sticker Printer


Manufacturer: Liene

Price as tested: $299

Link: liene-life.com/products/pixcut-s1-photo-sticker-printer-and-cutter

At a previous Maker Faire, I shared a booth with someone who brought a thermal label printer. They churned out designs during the event as inspiration struck, and sold a bunch of stickers that hadn’t existed before the show. It worked out great, and that was just printing in black and white.

The PixCut S1 is perfect for that scenario. It’s a portable sticker maker that prints and cuts out 4″×7″ sheets (or 4″×6″ photo sheets). It’s compact if you’re comparing it to an average desktop printer or vinyl cutter. And it has a built-in slot to store the feed tray loaded with material during transit. You’ll need to store the AC adapter separately.

Since it’s imagined as a portable-first machine, the app is currently mobile-only. I downloaded a desktop version from the manufacturer, but was unable to actually upload a print. That’s a shame, since I found the desktop design features easier to use, and it doesn’t require an internet connection to work (often finicky at conventions and craft fairs). Fortunately, both versions of the app support importing saved images, and the auto-background-removal feature worked well. There were also some initial connection issues, but these thankfully improved over time.

Printing in progress.
Screenshot by Sam Freeman

The stickers print one color at a time, and they look great. The S1 uses thermal dye sublimation, and you can see the depth of color on the page. Liene also included non-adhesive photo stock. While there was some banding, colors were clear and vibrant.

There are comments online about the adhesive failing, and these stickers did feel a little easier to remove than others, though not enough to cause problems. But just to be sure I ran a few tests, starting with a regulation orbital sander. I gave them a few light passes with 220 grit, and while the ink unsurprisingly came off, the backing held up.

Before sanding.
After sanding.

The TEST stickers were printed by the PixCut S1. The cursor was heavy-duty vinyl that I’m surprised someone gave away.

I also ran some samples through the dishwasher on high-heat. The edges lifted a bit during the cycle and I lost one of the small ones, but overall they fared better than the control stickers from my collection drawer. While the heavy-duty vinyl Makey held fast, the “I Voted” sticker and number label disappeared entirely.

Before dishwashing.
After dishwashing.

Left, Pixcut. Right, random samples.

Finally, and somewhat more practically, I stuck some up outside to see how they’d hold up against the elements. Over six weeks, they went through rain, hail, and direct sunlight, with temperatures from 30°F to 89°F. So far they still look great.

Test stickers on my phone (kept in my pocket) stayed put throughout reviewing. Another sample on my boot was still there after a jog around the block.

The printer isn’t cheap. For the cost you could order 1,000 small or 300 large custom stickers. There’s also a slight bump where the start and end of the cuts overlap, though it’s easy to miss.

The app includes ducky clipart for the Sports Racers. Slight bump on bottom right.

I didn’t mind this when I held my first batch, but side-by-side the notched kiss-cut style looks cheaper than die-cut products.

But again, this machine is designed with portability in mind, and in that niche there are few competitors. The image quality is great, and the stickers held up well in normal conditions. If you attend in-person events and want a way to quickly customize stickers for customers, or always get the perfect idea too late, the PixCut S1 is a great fit.


Sam Freeman is an Online Editor at Make. He builds interactive art, collects retro tech, and tries to get robots to make things for him. Learn more at samtastic.co, or on socials @samdiyfreeman.

View more articles by Sam Freeman
