With New Board, Nvidia Makes It Easier to Build ‘Thinking Machines’

NVIDIA Jetson TX1
Kespry wants to use the Jetson TX1 to make its commercial drones more ‘perceptive.’

Nestled in Silicon Valley, just down the road from Facebook, Paul Doersch runs a company that makes drones. But you don’t operate his quadcopters with a joystick. In fact, you don’t operate them at all.

You just give them a few simple instructions from an iPad app, and then off they go, to survey 100-acre quarries and other industrial sites, taking hundreds of photos and stitching them together to generate 3D visualizations accurate enough to measure how many tons of gravel, say, are stockpiled at a given spot.

Now Doersch’s Kespry Inc. (the name derived from combining kestrel and osprey) is prototyping an even brainier drone, one able to process 15,000 images a minute, onboard, to distinguish pickups, dump trucks, dozers, and excavators. Not just to see them with cameras, but to “perceive” them, with all the understanding and interpretation that comes with perception itself.

With the advances inside the Jetson TX1, NVIDIA is also paying heed to those ‘doing those mind-blowing things you see at the Maker Faire.’

As gee-whiz as that sounds, this may be just as amazing: He’s doing it with the just-debuted NVIDIA Jetson TX1 Developer Kit, a tiny, energy-efficient board that operates at super-computing speeds — for $599, well within reach of Makers with ambitious and potentially commercial projects in mind.

As it does that, the diminutive NVIDIA Jetson TX1 is making something far bigger accessible, too — a pod of interrelated artificial intelligence (AI) advancements that promise to spur individual developers, inventors, and entrepreneurs to create who-knows-what by way of other super-smart “autonomous” devices and Internet-of-Things endpoints.

Advances Inside ‘Autonomous Devices’

So it’s worthwhile to start understanding these advancements now, because you’ll be hearing more about them even if you’re watching from the sidelines. They are:

  • Neural networks, a data processing approach inspired by the human brain, are made of nodes (neurons) connected to one another and organized in layers. Neural networks teach computers how to solve perceptual problems, such as computer vision, speech recognition, and natural language processing. (A brief code sketch of these ideas follows this list.)
  • Deep learning describes the process of using several layers in a neural network to teach a computer how to “learn” over time. Researchers at companies like Google and Facebook use deep learning to train computers to recognize photos and respond to everyday human language. It can be used for everything from simple tasks, like recognizing your face, to more complex ones, like teaching a robot to screw a cap onto a bottle. It’s also being used to help cars “learn” to drive autonomously.
  • Visual computing, a term NVIDIA uses to describe everything it does, from enabling detailed computer, gaming, and 3D graphics to providing the fast, efficient graphics processing units (GPUs) behind autonomous devices and even data centers. When it comes to making, however, it’s about putting images through a processor fast enough for a device to “see” and respond to its own, even suddenly changing, surroundings, as a drone must do if it’s going to safely deliver an Amazon package to your high-rise apartment patio.
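
To make those ideas concrete, here is a minimal, illustrative sketch in Python of a tiny neural network performing one “forward pass.” The layer sizes, weights, and input values are invented for illustration; this is a toy rendering of the concept, not code that Kespry or NVIDIA actually run.

    import numpy as np

    # A toy two-layer neural network: 4 input values -> 8 hidden "neurons" -> 3 classes.
    # The weights are random placeholders; deep learning is the process of adjusting
    # them over many training examples until the outputs become meaningful.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

    def forward(x):
        """One forward pass: each layer is a weighted sum followed by a nonlinearity."""
        hidden = np.maximum(0, x @ W1 + b1)      # ReLU activation in the hidden layer
        logits = hidden @ W2 + b2
        exp = np.exp(logits - logits.max())      # softmax turns raw scores into probabilities
        return exp / exp.sum()

    features = np.array([0.2, -1.3, 0.7, 0.05])  # stand-in for features extracted from an image
    print(forward(features))                     # three probabilities that sum to 1

A real vision network like those running on a Jetson board stacks dozens of such layers and millions of weights, but the layered weighted-sum idea is the same.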

By the way, machines have long been able to perceive, and to learn. But we humans are the gold standard. Scientists say that even amid the mayhem around him, an NFL linebacker gets it right 94.9% of the time: the ball, the quarterback, the blockers. So do you in everyday life. Five years ago, machines could get it right only 72% of the time. But just this year, computers set a new benchmark. They surpassed even human recognition capabilities, seeing and classifying images properly with 95.1% accuracy.

Because of the technology’s commercial potential, more progress will be made, and fast. Soon, machines will hit near perfection, which makes for striking possibilities, such as picking an identified terrorist out of a dense crowd.

1 Trillion Operations Per Second

This is the real promise, to everyone, of the NVIDIA Jetson TX1, which succeeds the company’s 18-month-old Jetson TK1. (The NVIDIA Jetson TK1 development kit recently went on sale to the Maker community for just $99.)

NVIDIA Jetson TX1
The Jetson TX1 is a credit card-sized supercomputer.

The Jetson TX1 outpaces its older sibling, and by a lot. It’s two to three times faster, capable of processing up to 1 teraflop. How fast is that? Consider: FLOPS stands for floating-point operations per second, and “tera” means a trillion. So, yes, the tiny Jetson TX1 can perform a trillion floating-point operations every second. It isn’t hyperbole to call it a supercomputer the size of a credit card.

And the Jetson TX1 delivers that horsepower with energy efficiency. NVIDIA says it can process 258 images for deep learning every second on less than 10 watts of electricity, about what it takes to power a night-light.
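
For a rough sense of what that efficiency means, here is a back-of-the-envelope calculation in Python using only the two figures quoted above; the per-image numbers it derives are estimates, not NVIDIA specifications.

    # Back-of-the-envelope efficiency math from the figures quoted above.
    # The derived values are rough estimates, not NVIDIA specifications.
    images_per_second = 258      # deep-learning images processed each second
    power_watts = 10             # stated upper bound on power draw

    joules_per_image = power_watts / images_per_second   # worst-case energy per image
    images_per_watt = images_per_second / power_watts    # throughput per watt

    print(f"<= {joules_per_image * 1000:.0f} millijoules per image")   # about 39 mJ
    print(f">= {images_per_watt:.1f} images per second per watt")      # about 25.8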

For autonomous devices, energy efficiency means more than just extended battery life. The cooler a processor runs, especially an onboard or “embedded” one, the less it needs fans and heat sinks that would make a drone, for example, bigger and heavier with all the attendant disadvantages.

The Jetson TX1 can process 258 images every second with the power needed for a night-light.

The Jetson TX1 packs its firepower into a small footprint. It arrives as a compact module, another testament to its utility aboard autonomous devices. And even though it will be used for more exotic projects, it still sports general-purpose capabilities its predecessor didn’t.

For example, it comes with Wi-Fi and Bluetooth, along with interfaces that make connecting peripherals as easy as it is on a regular laptop, an Arduino, or a Raspberry Pi.

It’s a nod to the broader markets NVIDIA wants to target. Jesse Clayton, the product manager who marshaled the Jetson TX1 to its November debut, says so himself. While the Jetson TX1 is bound to be adopted first by advanced, commercially oriented developers, NVIDIA is also paying heed to the folks “doing those mind-blowing things you see at the Maker Faire.”

The Jetson TX1 is available for pre-order now through Amazon, NewEgg, Micro Center and NVIDIA as a $599 developer kit, with an education discount to $299. The module itself will be available next year for $299 each for orders of 1,000 or more.

Affordable Special-Purpose Satellite

Besides Kespry’s proof-of-concept, NVIDIA says the Jetson TK1 Developer Kit is already being used in other similarly sophisticated prototypes. Percepto, an Israeli startup backed by Shark Tank’s own Mark Cuban, is using it to develop a retrofit that turns low-cost drones into self-navigating ones.

MIT students are using it in small, self-driving race cars that they want to go as fast as 20 mph. As Make: reported, the Jibo, a social robot for home use, is powered by one. And Spain’s Herta Security is using it for real-time facial recognition and biometrics.

And Doersch, who is 27, continues to dream big. He sees the day when onboard supercomputers like the NVIDIA Jetson TX1 will power a fleet of drones that can constantly watch vast industrial sites as an airborne distributed computing system, with each drone crunching data onboard then cost-effectively streaming it to the cloud, where the data can be used for a variety of purposes.

Streaming that much video from a drone just isn’t feasible. There’s not enough bandwidth. Even if there were, it would carry impractically enormous costs.

Instead, his drone fleet would be like giving a company its own affordable real-time satellite — its own eyes in the sky — allowing it not just to track vehicles, but to move them when and where they can operate more effectively, fuel-efficiently, and safely.

State-of-the-art autonomous devices like Kespry’s drones will continue to make what was previously impractical possible, and far more widespread. And now it’s becoming ever more practical and affordable for Makers to create their own super-smart projects, too.

Patrick Houston is a veteran technology editor and online publishing executive. He is a former editor-in-chief of CNET, and he led the team that launched Yahoo Tech. He is also a media entrepreneur who believes the Maker Movement is at the cutting edge of a new economic epoch that will thrive on inventors, startups, and the 'micro-enterprise.'
