I spent much of my childhood dismantling toys and gadgets and cobbling them back together in interesting ways. One proud example combined a slot car, a one-function wireless remote, a 9-volt battery, and a few fabricated gears and bits to create (in my mind, in the early 80s) the world’s smallest remote control car.
The 2-inch vehicle was top-heavy and had too much torque, but it accelerated violently to the right every time I pressed the remote button — it worked! — until it finally tore itself apart, like a tiny top-fuel dragster. In my mind it was a success, and it sparked my lifelong interest in interactive, kinetic projects.
In the early 90s, I began building a wireless, rolling, remote control robot with a two-way video chat setup positioned at eye level, which enabled real-time, face-to-face communication. I found most of the materials by dumpster diving or at garage sales: a motorized wheelchair, a few old video cameras, a wireless baby monitor, and some R/C toys. Separately these were junk, but combined they became an interactive sculpture that allowed me to see, hear, chat, and move through a remote location with complete autonomy.
I could “become” my creation, temporarily merging my own identity into that of this machine/human hybrid. I named the robot SPARC-I, a rough acronym for Self-Portrait Artifact — Roving Chassis, or Sparky for short.
Sparky Works the Room
I originally made Sparky to explore the boundaries of the body and how our identities change when filtered through technology, topics that have recently become hot in our age of online profiles and avatars.
I’ve been upgrading Sparky ever since, as newer technologies have become available. Over the years, the Sparky experience has developed into what I call Autonomous Telepresence, an experience combining remote sensing and locomotion, web video, social networking, and human interaction.
It’s interesting to watch Sparky “work the room” at an art opening or cocktail party. At first, people are drawn to the robot as a techno-spectacle.
But it’s remarkable how quickly people forget the machine and interact with the remote person, joking, flirting, or having long, deep conversations as if there were nothing unusual. So Sparky has informed my sense of body, self, and identity, but it has also guided me toward insights and decisions in an area I never expected.
The original Sparky had severe limitations, such as bad audio quality and a broadcasting range of just a few meters, but I improved its performance over the years by swapping in new technologies, including better audiovisual transmission components, radio control upgrades, and fresh batteries.
The biggest upgrade was the decision to leave the old-school analog AV components behind and make the leap to digital. Using the power of wi-fi and the internet, Sparky became truly remote, enabling real-time chat and control from virtually anywhere in the world.
During the mid- to late-90s tech boom in the San Francisco Bay Area, Sparky became a party machine. What tech-savvy startup wouldn’t want a robot at their launch party? Sparky would mingle and schmooze for hours on a full battery charge, while I worked behind the curtain.
We made appearances at (and crashed) parties for the San Francisco Museum of Modern Art, San Jose Museum of Art, Burning Man, the E3 Media and Business Summit, Industrial Light and Magic, Intel, and others. Sparky was even, briefly, the singer and leader of a jazz quartet.
To Productize or Not?
Back then, it seemed everyone was getting VC funding for new tech businesses based purely on speculation. Meanwhile, I had created a proven, one-of-a-kind prototype that seemed to have commercial potential. Sparky’s success in such varied professional and social settings inspired me to consider its potential in a wider range of environments, from facilitating distance learning to working as a museum tour guide.
I researched the nascent mobile telepresence market and discovered that several Sparky-like devices were already for sale or soon to hit the market, ranging from children’s toys to the $100,000-plus hospital-bots that appear on ER.
Given this commercial environment, I understood that it would take a lot of effort to define Sparky as unique, and then to raise the venture capital needed to design, manufacture, and protect it as intellectual property.
I wrote a business plan and met with potential investors to pursue Sparky’s commercial development. In the meantime, the Sparky upgrades continued. In 2006, John Celenza and I built Sparky 2 (see page 53), and I had a great time using it to telepresently cruise the exhibit floor at the first Maker Faire. Sparky 2 also appeared on the History Channel’s Modern Marvels as a possible “future of the telephone,” and I demonstrated it at technology and entertainment industry conferences.
I weighed the pros and cons of going commercial. John and I could freeze development, treat Sparky’s design as a trade secret, and attempt to “productize” it under the distraction and meddling of investors. This did not sound fun or interesting to me.
On the other hand, without investment, we could keep experimenting, trying new video chat clients, motor-control schemes, and other Sparky-relevant technologies as they improved and became cheaper. We could bounce from one technology to the next, adapting what worked for our one-of-a-kind creation, and enjoying the journey free from investor expectations.
This offered a far more appealing path, and with this insight, I realized two things:
One, I’m a maker first, and a business guy second.
Two, Sparky and Autonomous Telepresence are not defined by hardware, software, or other technologies, which change and get upgraded too frequently to define anything. Instead, Sparky is defined by the unique experience it offers, which has remained consistent over its years of evolution: an opportunity to become a hybrid identity that can have intimate, face-to-face interactions and move freely in a remote location.
At New York’s American International Toy Fair in early 2008, I saw a cheap mobile telepresence toy, clearly not designed for hackability, and that clinched it. I decided to share Sparky 2 as an open source DIY project, based on the MAKE Controller board and the components I had lying around.
I co-developed the DIY version of Sparky 2 with my longtime friend and programmer extraordinaire, John Celenza. It uses the MAKE Controller connected to an onboard Mac Mini, which transmits audio, video, and motor control data over a wi-fi network.
We have several software versions, including a lag-free one that requires a web server to connect robot and controller, and another one that sends the motor-control data through Skype. We’ve also developed a small iPhone patch that allows it to connect in any location, even without wi-fi. If the “Jesus phone” can connect, so can DIY Sparky.
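To give a feel for what the motor-control side of such a link involves, here is a minimal sketch of sending wheel-speed commands over a local network. This is not the actual Sparky software; the message format, port, and function names are all hypothetical, and a real build would forward the received values to the controller board.

```python
import json
import socket

# Hypothetical sketch of a Sparky-style motor-control link (not the real code).
# The operator side sends left/right wheel speeds as a small JSON packet; the
# robot side decodes them and would pass each pair on to the motor controller.

ROBOT_ADDR = ("127.0.0.1", 9999)  # robot's wi-fi address (loopback for demo)

def send_drive_command(sock, left, right):
    """Encode a drive command as JSON and send it over UDP."""
    msg = json.dumps({"left": left, "right": right}).encode()
    sock.sendto(msg, ROBOT_ADDR)

def parse_drive_command(data):
    """Decode a received packet back into (left, right) wheel speeds."""
    cmd = json.loads(data.decode())
    return cmd["left"], cmd["right"]

if __name__ == "__main__":
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(ROBOT_ADDR)

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_drive_command(sender, 0.5, -0.5)  # opposite wheel speeds: spin in place

    data, _ = receiver.recvfrom(1024)
    print(parse_drive_command(data))
```

The Skype-based version works the same way in spirit, except the command bytes ride along the chat connection instead of a raw socket.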
The DIY Sparky shown here was built using scrap components I had available, including Vex kit components for the chassis and the motors, an old Mac Mini, an unused LCD monitor, an iSight webcam, a 12V scooter battery, and an AC inverter. Your version will be different, based on whatever you have lying around.
The only part I purchased was the MAKE Controller board — I could have gone with the cheaper and better-known Arduino, but the MAKE board offered additional helpful features, like its 4 plug-and-play servo connectors. The MAKE board also has numerous digital and analog inputs and outputs, which give DIY Sparky plenty of room to grow new appendages, like movable gripper arms, sensors, and bumpers.
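For an add-on appendage like a camera pan servo, the glue code mostly amounts to mapping a desired angle onto whatever position range the board expects. As a rough, hypothetical sketch (the 0–1023 range is an assumption; check your own board's documentation):

```python
# Hypothetical sketch: map a desired angle to an integer servo position value
# for an add-on appendage (e.g. a camera pan servo). The 0-1023 output range
# is an assumption, not a documented MAKE Controller value.

def angle_to_position(angle_deg, min_deg=0, max_deg=180, resolution=1023):
    """Map an angle in [min_deg, max_deg] to an integer position value."""
    angle_deg = max(min_deg, min(max_deg, angle_deg))  # clamp out-of-range input
    span = max_deg - min_deg
    return round((angle_deg - min_deg) / span * resolution)

print(angle_to_position(90))   # midpoint of travel
print(angle_to_position(200))  # out of range, clamped to max
```

Clamping the input keeps a buggy or laggy remote command from slamming the servo past its mechanical limits.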
See more of Sparky at makezine.com/16/sparky.
One experimental Sparky design addresses a problem common to all webcam-based video chat applications: the annoying lack of eye contact.
During a video chat, we tend to look at the image of the other person on our monitor rather than into the webcam mounted above or beside it. Because of this, we seem to gaze over the shoulder of the person we’re chatting with, which fails to replicate the intimacy of face-to-face communication.
The new Sparky design takes a cue from the way teleprompters work. For both Sparky’s “head” and the remote operator’s webcam, an angled piece of one-way glass reflects the video image of the other person directly toward viewers. Meanwhile, a video camera placed behind the glass captures the face(s) looking at the screen, creating a simulation of eye contact.
My experience has shown that this illusion of eye contact is effective. It encourages users to talk more normally, look into each other’s eyes, and forget that there may be thousands of miles physically separating them.