Nvidia’s 2017 GPU Technology Conference was a whirlwind of technological marvels. The keynote was especially interesting: a non-stop barrage of everything Nvidia has in store, from smart cars to a new breed of intelligent A.I. The keynote speaker, Nvidia’s President and CEO Jensen Huang, was full of life and energy as he paced the stage and interacted with industry leaders from companies like Amazon and Microsoft, as well as fellow coworkers speaking from a fully functional holodeck.
The holodeck demonstration ran into a few technical issues at the start, but that did nothing to diminish just how cool it was to see. Christian von Koenigsegg, creator of one of the world’s fastest supercars, the Koenigsegg Regera, took the audience on a virtual tour of the vehicle. The holodeck displayed an exact 3D replica of the Regera, from its beautiful exterior to its powerful engine.
An x-ray feature of the holodeck offered glimpses inside the car, including its engine and computer systems. The holodeck could also explode the car into its individual parts. Both features could change how mechanics-in-training interact with unfamiliar vehicles.
Koenigsegg even showcased how objects in the holodeck are “solid.” His hands could not pass through the parts of the car, just as in the real world. He even got into the car, started the engine, gripped the steering wheel, and drove around. Even from outside the holodeck, I could feel the power of the Regera emanating from the hologram. It looked so cool. If the holodeck can feel as amazing as it looks, VR gaming might be facing a major upgrade in the near future.
The New Tesla V100
The headline announcement of the keynote was the Tesla V100, the first GPU built on Nvidia’s new Volta architecture. Huang described a massive chip: 21 billion transistors on an 815 mm² die, with 5,120 CUDA cores and 640 of Nvidia’s new Tensor Cores built specifically to accelerate deep learning.
Square Enix Takes the Stage
Square Enix was one of the last companies I expected to see during Huang’s keynote. They showcased what a trailer for their animated film Kingsglaive: Final Fantasy XV, as well as their video game character models and environments, would look like rendered with Nvidia’s new Volta GPU. I have never seen graphics that good. For the first 10 to 15 seconds of the trailer, I honestly could not even tell I was watching CGI. Everything looked that lifelike.
I was able to experience this new hardware first-hand on the exhibit floor later that day. I got to play Mass Effect: Andromeda with graphics I had never seen before. I could see individual pieces of code dance across Ryder’s wrist whenever he used his omni-tool. It was stunningly beautiful, and I wish I had photos that captured how good it looked, but I was way too busy exploring Andromeda and wishing I could play the whole game right then and there. When it comes out, Volta will be a must for any gamer looking to build a gaming PC.
The End of Our Galaxy
Speaking of Andromeda: did you know that (assuming the human race is still around by then) the Andromeda Galaxy is going to crash into the Milky Way in about 5.3 billion years and kill us all?
Using Volta, Nvidia was able to model all of the stars in both our galaxy and the nearby (as in 2.5 million light-years away) Andromeda Galaxy. A simulation then mapped how the two galaxies would interact over the next several billion years. It turns out the two rotate around each other, slowly drawing closer, until their cores are near enough to cause gravitational fluctuations that send the outlying stars into random tailspins. Ultimately, the cores collide and the two galaxies collapse inward before springing apart in a stunning explosion of light. Our solar system does not make it.
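For the curious, the heart of a galaxy simulation like this is surprisingly simple to sketch. The following is a toy illustration, not Nvidia's actual code: every star feels the gravitational pull of every other star, and each small time step nudges its velocity and position forward. The units and constant here are invented simulation units.

```python
G = 1.0  # gravitational constant in toy simulation units (an assumption)

def accelerations(positions, masses):
    """Pairwise gravitational acceleration on each 2D body (O(n^2))."""
    accs = []
    for i, (xi, yi) in enumerate(positions):
        ax = ay = 0.0
        for j, (xj, yj) in enumerate(positions):
            if i == j:
                continue
            dx, dy = xj - xi, yj - yi
            r = (dx * dx + dy * dy) ** 0.5
            # Newtonian gravity: pull toward body j, scaled by its mass.
            ax += G * masses[j] * dx / r**3
            ay += G * masses[j] * dy / r**3
        accs.append((ax, ay))
    return accs

def step(positions, velocities, masses, dt):
    """Advance every body one time step (semi-implicit Euler for clarity)."""
    accs = accelerations(positions, masses)
    velocities = [(vx + ax * dt, vy + ay * dt)
                  for (vx, vy), (ax, ay) in zip(velocities, accs)]
    positions = [(x + vx * dt, y + vy * dt)
                 for (x, y), (vx, vy) in zip(positions, velocities)]
    return positions, velocities
```

Repeat `step` billions of times over hundreds of billions of stars and you have the keynote demo; the all-pairs loop is exactly the kind of embarrassingly parallel arithmetic a GPU like Volta chews through.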
From It to She
Matt Wood (General Manager of Deep Learning and AI at Amazon Web Services) and Jason Zander (Corporate Vice President of Microsoft Azure) both came on stage to talk about their companies’ A.I. services (Alexa and Cortana, respectively) and how they hope to integrate Nvidia’s Volta into those services. Doing so should make both Alexa and Cortana smarter, faster, and more personable.
Nvidia also wants to change how smart cars drive themselves. Its A.I. car platform, called Drive PX, works in three different ways to offer drivers the best experience. The work impressed Toyota, which announced it will use Drive PX in its upcoming autonomous vehicles.
First, the A.I. notes which streets you take and marks road signs, construction markers, street lights, and pedestrians. It learns how to drive while it is being driven, which feeds the other two functions.
Second, once the A.I. has learned what it needs to know about a route, it will ask the driver whether they would like to let the car take over. Drive to work a few times, for example, and by the end of the week the A.I. should know your daily commute and be able to drive it for you.
Third, whether the driver is in control or the car is in co-pilot mode, the A.I. keeps a constant lookout for dangers the driver cannot see or predict. If it senses another car is about to speed through a red light, it will stop the driver from proceeding through the intersection.
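The three behaviors above can be sketched as a tiny hypothetical model. To be clear, this is not Nvidia's actual Drive PX API; the class, threshold, and method names are all invented for illustration of how learning, co-pilot, and guardian modes fit together.

```python
FAMILIARITY_THRESHOLD = 5  # trips before the A.I. offers to drive (assumed)

class SmartCarAI:
    """Toy stand-in for the three Drive PX behaviors described above."""

    def __init__(self):
        self.trips = {}  # route name -> number of times driven

    def complete_trip(self, route):
        """Learning: record every trip taken on a route."""
        self.trips[route] = self.trips.get(route, 0) + 1

    def offers_takeover(self, route):
        """Co-pilot: offer to drive once the route is familiar enough."""
        return self.trips.get(route, 0) >= FAMILIARITY_THRESHOLD

    def guardian_action(self, hazard_detected):
        """Guardian: brake on hazards, no matter who is driving."""
        return "brake" if hazard_detected else "proceed"
```

Drive your commute five times and `offers_takeover("commute")` flips to true, while `guardian_action` stays active regardless of who is behind the wheel.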
Teaching a Robot to Think
Named after both the scientist and the A.I., ISAAC is Nvidia’s new robot simulator that will train the artificial brains for the robots of the future. ISAAC trains hundreds of A.I. brains in an artificial universe that obeys all the same laws as ours. Once one of the brains is able to accomplish a task, it is copied into hundreds more bodies and the process continues.
The brains that do not quite make the cut get deleted, ensuring that the final product is the fastest, most intuitive learner. It was adorable (and a little terrifying) to watch a robot learn how to play hockey and golf in much the same way an actual human child would. The similarities were a little too uncanny.
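The train-rank-delete-copy loop described above resembles classic evolutionary selection, and a toy version is easy to sketch. This is emphatically not ISAAC itself: here each "brain" is just a number, "practice" is a random mutation, and the target skill and scoring are invented for illustration.

```python
import random

TARGET = 10.0  # the skill level that counts as mastering the task (assumed)

def mutate(brain):
    """One round of practice: the brain changes a little, at random."""
    return brain + random.uniform(-1.0, 1.0)

def train_population(population, generations, keep=0.5):
    """Train every brain, delete the weakest, copy the strongest, repeat."""
    size = len(population)
    for _ in range(generations):
        population = [mutate(b) for b in population]
        # Rank by how close each brain got to the target skill.
        population.sort(key=lambda b: abs(b - TARGET))
        survivors = population[: max(1, int(size * keep))]
        # Copy survivors back up to the original population size.
        population = (survivors * (size // len(survivors) + 1))[:size]
    return population
```

After a few dozen generations the best brain in the population converges near the target skill, the same way ISAAC's surviving brains converge on playing hockey or golf.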
Speaking of brains, I got to meet some amazing minds on the exhibit floor after the keynote presentation. There were professional makers and creators from almost every industry, from medical to agricultural and everything in between.
Mapping the Brain
The company qure.ai, from India, was developing a system with Nvidia’s tech that can flag abnormalities in brain scans in a matter of minutes, significantly faster than existing methods. Getting an accurate read on the brain’s condition as quickly as possible after a concussion or other head injury is critical.
Real-Time Graphics for Film Production
It is possible to render complex datasets in real time with the Nvidia Quadro GP100, so much so that Pixar uses Nvidia’s tech to walk around its own worlds and environments when discussing scenes and brainstorming plot scenarios. The writers can move about the world and find inspiration in the artists’ work.
This type of program has other uses as well. An architect could walk through every inch of a building they are designing, and an engineer could find a cheaper means than a holodeck of moving among the parts of a car or airplane.
Your Digital Double
Ever wish you could be in a video game or animated film? Another World Studios uses Nvidia’s technology to map people’s faces into a database where developers and directors can “scout” for talent. If your face is ever picked for a project, you get paid royalties. No acting experience required; the animators will take care of putting your face through its paces.