Folks have been speculating about Google’s foray into wearable computing for some time now. The wait is over. Google has released details about Project Glass, its super-secret augmented reality project that packs a rich visual interface inside a sleek wrap-around heads-up display unit. Google will be testing the devices with employees in public, but has yet to hint at when they’ll hit the shelves.
Not having to reach for a device to take a picture is one notable feature of Project Glass. How would you use a device like this to interact with the world around you? Let us know in the comments below.
74 thoughts on “Project Glass: In Your Face, Out of the Way”
It’s a good way to keep tabs on your friend’s energy levels, too.
But what will it do if it’s OVER 9000?
You’ll have to crush it.
Could watch how-to videos and instructions without looking away from the project.
It looks like it monitors everything you are looking at and hearing. This thing is so scary, in so many ways. Just wait until it monitors your temperature and alpha rhythms, too…
I’ll pass, Google.
I understand your reticence, but why not embrace the positive aspects of this new paradigm while trying to find ways to mitigate the privacy implications? I would be surprised if this system didn’t come with tweak-able privacy levels and settings. That would just be stupid of Google, and they want this to succeed.
Looking for that lost word during conversation, finding my way while walking. Recording that great idea, linking GPS with the photo. And the historic context.
Getting directions as I hike, linking GPS and photo with the historical context, looking for that lost word during a conversation. Looking up context during a guided tour. Finding a business – site recognition and address.
I would use it to tone down my hotness.
I have been dreaming of something like this for a long time. Constant video/audio buffering so when you see something really cool or hear something you want to remember you can just save it.
Just try and count the times you have said to yourself, wow that was amazing! I wish I could have recorded that.
There was this thing I saw a year or two ago, Looxcie. It is like a Bluetooth earpiece, but it has audio and video running. It is always running a buffer, and when you press the button it will save the last 30 seconds of audio and video as a file. So when one of those moments happens like you described, you’d just push the button.
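The always-on buffer described above is just a ring buffer: new frames push old ones out, and a button press snapshots whatever is left. A minimal sketch (class and parameter names are hypothetical, not any real Looxcie or Glass API):

```python
from collections import deque

class ClipBuffer:
    """Keep only the most recent `seconds` of frames; save on demand."""

    def __init__(self, seconds=30, fps=30):
        # deque with maxlen drops the oldest frame automatically
        self.frames = deque(maxlen=seconds * fps)

    def record(self, frame):
        self.frames.append(frame)  # called once per captured frame

    def save_clip(self):
        # "Button press": snapshot the buffer as a list, oldest first
        return list(self.frames)

buf = ClipBuffer(seconds=2, fps=3)   # tiny buffer for demonstration
for i in range(10):                  # simulate 10 frames arriving
    buf.record(i)
clip = buf.save_clip()
print(clip)  # only the last 2*3 = 6 frames survive: [4, 5, 6, 7, 8, 9]
```

Memory cost is fixed (seconds × fps frames), which is why a wearable can afford to run it continuously.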
What about those of us who already wear glasses?
That frame is hideous, so I hope there will be other options. Particularly since I’ll have to have one. Hopefully they’ll do an open framework so I can get a Borderlands style HUD…
Worst case I’ll have to hack it onto some sunglasses.
Welcome aboard, Google. A presentation at TED has shown this kind of interface for months now.
Cool but did any of that information, all of which is available on a smart phone that you keep in your pocket, make his day any better? Consider the issues that we are already aware of:
1) Voice command – currently present in Siri for iPhone. People generally don’t like talking to their phones in public, and voice recognition is hit and miss. Why is talking to a HUD going to be better?
2) In-scene navigation information – research in using maps for soldier HUDs has shown that a HUD is not an ideal method of displaying a map. The screen is too small to provide meaningful navigational landmarks. A touch display is better suited because of the ease of scrolling and zooming, which again comes back to the smart phone. Having said that, the waypoint cues would be useful as long as they didn’t interfere with your field of view. Humans receive most of their navigational and spatial awareness cues through vision. It’s very easy to get lost in the data and miss important cues like that big truck behind the map. Good thing he wasn’t crossing the street when looking at that. Or driving.
3) Video calls – the caller can only see what you’re seeing, not you. Most high-end smartphones these days have two cameras which you can switch between during a video chat.
4) Always-on camera – this may raise privacy issues. You can tell when someone is taking a picture right now because they point a sensor at you. If everyone had one of these, you would never know when someone is taking a picture.
This product may have use for those who can’t use a smart phone for whatever reason (disability or occupation). For the rest of us, I didn’t see anything here that isn’t already better provided in a smartphone. In my opinion HUDs are a neat tech demo with limited real world practicality.
I would use it to learn and practice yoyo tricks at the same time and record first-person parkour videos.
[…] Project Glass: In Your Face, Out of the Way (makezine.com) […]
Wow… as an actor.. I would never forget another line ;-)
I’d add the x-ray specs from the back of comic books. That would make it complete!
oh sorry it only reads up to 8999
Several things come to mind.
1) AR for DIY/Repair: I had to work on my car the other day and it would have been helpful to have the repair procedure overlaid in my field of vision. Take it up a notch and have an AR tag placed somewhere so that it could point to what bolt or part you needed to address next.
2) Voice recognition via skull vibration: One of the issues with voice recognition is dealing with and filtering out sound that is not the user’s voice. However, if a device is sitting on a user’s skull (temples of glasses/bridge of nose), it could use the vibration patterns that it feels/hears from the user’s skull while they speak. That could be used as a pass filter for the audio microphone, like a Jawbone headset.
3) Collecting research: When looking over a book you could highlight and capture data by tracing the line with your finger. The camera snaps a photo and then watches what you trace and highlights it in the photo.
4) Visceral teaching: Imagine being able to visit historical sites and see a visual overlay of what ancient structures looked like. Visit the site of a battle and be humbled by watching it unfold over the landscape around you. Take a city tour that takes you from its founding to structures being planned for the future. Walk around AR ‘holograms’ of data visualization. (atomic structures, DNA, Cells, environmental change)
5) Safety: This would be 100% optional, but imagine being able to call 911 and stream your GPS location with live video to emergency officials. First on the scene of a car accident? An online EMT talks you through life-saving first aid while seeing what you see. If someone approaches you demanding your wallet or purse, they are on camera, with a full video record of the event. Even seeing someone with one of these on could become a crime deterrent; criminals would look for another mark. Working in a dangerous place? Have safety information overlaid to keep you from getting into areas you shouldn’t be.
I just want one that I can call Jarvis and has the voice of Paul Bettany, just like in IronMan. :-)
Use it to upload evidence of cops gone rogue.
1) Voice command
They have a social barrier to overcome with talking to devices. That should come down as adoption rises, and adoption of the device will give Google more incentive to improve voice recognition, just as Siri is driving voice-recognition development at Apple.
2) In-scene navigation information
I agree about overuse of HUD maps, but the map shown was only temporary; simpler cues, like a turn-left reminder, were given while navigating. Simple visual and auditory cues are effective, and these notifications can also be more passive. It’s an interface design issue to be tackled by the software engineers as they hone the system. Google also has impressive street-level matching for images, so landmark-based navigation may not be too far off.
3) Video calls
That’s a form-factor issue. The purpose of communication through the device is not one-to-one video chat; the device is meant to simplify communication. Not all conversations on a new smartphone are video, and taking the conversation down to audio would be less distracting while in motion.
4) Always on camera
As personal recording devices such as smartphones, AVR devices, tablets, and the like become more pervasive, this will be an ever more important social issue. The simplest solution is social contract: “Hey, do you mind taking/turning that off for a second?” (Well, as long as the other people are nice about that request.)
— tl;dr —
The practicality here lies in the fact you’ll adapt to having the device on. It’s no longer something you pull out, log in to, and pull up. It’s just there. The major issue will be the design of ambient interfaces: keeping information in the periphery until it is wanted, and making notifications unobtrusive rather than a distraction. That will require wide adoption of the form factor and focus from communities of makers, hackers, and engineers to make the experience of using these devices as enjoyable and useful as possible. Making the device available to the masses is the first step.
— wtl;rdr —
A request for the engineers at Google: Make the Google Glass API open! Let us Beta! Let us develop! Let us play! Invest us in your augmented vision and we’ll return the favor.
The killer app for me would be if it could do facial recognition and show me the name of the person I’m speaking to – I’m terrible with names!
Me too, aways have been.
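The name-recall idea above is, at its core, a nearest-neighbor lookup: compare a face "embedding" from the camera against a small gallery of known people. A toy sketch with made-up vectors and names (a real system would get embeddings from a face-recognition model):

```python
import math

# Hypothetical gallery of known faces -> embedding vectors
GALLERY = {
    "Ada":   [0.9, 0.1, 0.0],
    "Grace": [0.1, 0.9, 0.2],
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def whos_this(embedding, min_sim=0.8):
    """Return the best-matching name, or 'unknown' below the threshold."""
    name, sim = max(((n, cosine(embedding, v)) for n, v in GALLERY.items()),
                    key=lambda t: t[1])
    return name if sim >= min_sim else "unknown"

print(whos_this([0.88, 0.12, 0.05]))  # close to Ada's vector -> "Ada"
print(whos_this([0.0, 0.0, 1.0]))     # matches nobody well -> "unknown"
```

The `min_sim` threshold is what keeps the HUD from confidently mislabeling strangers, which matters more here than raw accuracy.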
The video looks like the HUD extends all the way across the field of vision, but in the images of people wearing it it seems to only cover the top 1/3 or so.
Try sticking your finger at about the same position as the display relative to your eye. It seems to take up about half of my vision if I am looking straight ahead, and more if I focus on it.
I believe a friend of mine said it best.. awesome-sauce. I wonder what kind of eye fatigue this would cause.. but I assume no worse than looking at a computer screen or cellphone all day. Just give me one that uses e-ink.. I’m old fashioned that way.
[…] may be some time before we start to see mainstream adoption of Project Glass-like augmented reality, but in the meantime, that’s not stopping folks like AR hacker Will Powell […]
I use a HUD for my flying job. I love it, and I don’t have any issues with eye strain. What will be really nice is when software can do a good job at object recognition. Say, for example, you want to move a big, heavy, antiquated window air conditioner out of your shed: with a single voice command, and without even looking at the data plate, the HUD searches the internet and returns the weight of the object. It is way heavy – call the neighbor! It could also feed you information based on anticipated requests, such as the already-mentioned case of people’s names, etc.
If we assume it has an accelerometer, it could be used to track allergies… it’s just another sensor on your head. Each person probably has a ‘signature’ sneeze or cough that could be detected. Google could pool the data to find out if everyone in your neighborhood was sneezing at the same time (presumably because of pollen or a dust storm). If there was a ‘wave’ of pollen in your neighborhood you might get a warning… just like tracking live traffic in a GPS. Perhaps the snap of the head could even be used as a power source for recharging the glasses.
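The sneeze-signature idea boils down to spike detection on the accelerometer's magnitude stream. A crude rising-edge detector sketch; the 2.5 g threshold and the readings are invented for illustration, not from any real Glass sensor spec:

```python
def count_sneeze_spikes(accel_g, threshold=2.5):
    """Count rising edges: samples that cross `threshold` upward,
    so one sustained spike counts as a single event."""
    count = 0
    prev = 0.0
    for g in accel_g:
        if g >= threshold and prev < threshold:
            count += 1
        prev = g
    return count

# Simulated magnitude readings (in g) with two head-snap events:
readings = [1.0, 1.1, 3.2, 3.0, 1.0, 0.9, 2.8, 1.0]
print(count_sneeze_spikes(readings))  # -> 2
```

Pooling those per-wearer counts by neighborhood is then the same aggregation trick GPS services already use for live traffic.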
[…] Glass, interactive, voice-controlled “augmented reality smart glasses.” We covered it here and here. It sounds far out, but in a blog post written for GigaOM, Mindshare’s Paul […]
[…] Project Glass: In Your Face, Out of the Way […]
Comments are closed.