You all say Oculus Rift, but why not Google Glass?

Same question as in the topic title: Google Glass can have the same functionality, so why a helmet instead of glasses?

Because it can't have "the same functionality."

The Rift is a VR device. Glass was a relatively limited (compared to current offerings) AR device. And it's dead.

^^ That. It's AR vs VR.
In some ways, AR will be the VR killer, when it's good enough.
At the moment though, AR systems have terrible FOV, terrible resolution, terrible cost, etc -- when compared with current VR systems.

And it's dead.

Not dead, just back in stealth mode.
It's Microsoft's turn to bring out a prototype (HoloLens) and you can be sure that Google will be back in the game when/if they succeed in bringing it to market.

Google has been investing in Magic Leap, so I guess they will be using that AR technology sooner or later. That said, there is lots of room for many different types of interaction models - not just headsets. You can do AR on a smartphone relatively easily, just like Pokémon...

Anyway, what's the magic behind creating such goggles (I mean creating, let's say, an OLED display on glass/plastic)? How do they do it?

Edit:

I ask because I would love to try to make one. At the very least I could have a lightweight, almost invisible interactive headset, of course for game dev. But first I need to know how they do it so I can compare my ideas to the real thing. Don't ask me what I want to do with it; I can only say that it will require positioning devices, but not GPS.

I don't know how the HoloLens screen projection system works. I suspect that they have three very thin layers of glass, one each for the red, green, and blue channels, and that they rely on the additive properties of light to render a colored image.

Rendering an image is the first of many big challenges you'd face if you're going to make your own AR device. Remember that augmented reality is... augmenting reality! That means you have to know a bunch of things about reality before you can augment it! You need to know where things are in real-world space, so you'll need two cameras that are constantly capturing images and feeding them to a processor, which then tries to gather depth information to create a "z-buffer" for the real world.

Why do you need this? For occlusion, of course. If you have a ball in augmented reality and it rolls behind the couch, your device has to know that the ball's 3D position isn't visible anymore because it is being occluded by the couch. But maybe a portion of the ball is still visible? So you'd have to do some fancy per-pixel logic against the ball mesh to see whether its depth is less than or greater than the real-world depth buffer.

Then you've also got to map out the shape of the objects in the real world so that you can have collisions with them. Your ball should bounce off of the couch rather than phasing right through it, and for that to happen you have to spend time processing the environment around you to generate triangulated collision meshes.
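
The per-pixel occlusion test described above might look roughly like this (a toy sketch in Python/NumPy; the array names and the idea of comparing a rendered depth buffer against a camera-derived real-world depth buffer are my assumptions about the general technique, not how the HoloLens actually implements it):

    import numpy as np

    def composite_with_occlusion(virtual_rgb, virtual_depth, real_depth):
        # virtual_rgb   : (H, W, 3) rendered color of the virtual scene
        # virtual_depth : (H, W)    depth of each virtual pixel, in meters
        # real_depth    : (H, W)    depth of the real world from the cameras
        #
        # A virtual pixel is kept only where it is closer to the viewer than
        # the real surface at that pixel (e.g. the part of the ball that
        # rolled behind the couch fails this test and is not drawn).
        visible = virtual_depth < real_depth
        out = np.zeros_like(virtual_rgb)
        out[visible] = virtual_rgb[visible]
        return out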

This is computationally expensive stuff, so the "magic" of the HoloLens is that you're really wearing a standalone computer on your head which has enough processing power and battery life to do this decently for a reasonable amount of time. It's really quite incredible if you think about how much computational power you'd need to do this, and they've shrunk it down to the size of a headband. You also have a microphone, so you can issue voice commands to the system, which is pretty smart at interpreting them (I've never tried that feature).

Anyways, trying to compare the HoloLens to the Oculus Rift is like comparing boats to cars and complaining that your boat can't drive on the highway or your car doesn't float. They are fundamentally very different...

I suspect that they have three very thin layers of glass, one each for the red, green, and blue channels, and that they rely on the additive properties of light to render a colored image.

Sounds odd to me. I don't think the additive properties of light have anything to do with this. It probably has a full-spectrum (white) light source which would then be filtered through the three layers: an aqua layer to filter unwanted red, a yellow layer to filter unwanted blue, and a violet layer to filter unwanted green. A glass layer cannot magically add a red tone to light, AFAIK.

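For what it's worth, the additive vs. subtractive models being debated here look like this in toy form (a minimal sketch with made-up 0..1 RGB channel values, just to show the arithmetic of the two models; it says nothing about the actual HoloLens optics):

    # Additive: three separate light sources simply sum at the eye.
    red_src, green_src, blue_src = (1, 0, 0), (0, 1, 0), (0, 0, 1)
    additive = tuple(r + g + b for r, g, b in zip(red_src, green_src, blue_src))
    # additive == (1, 1, 1)  ->  white

    # Subtractive: colored layers attenuate (multiply) a white backlight.
    white   = (1, 1, 1)
    cyan    = (0, 1, 1)   # the "aqua" layer above: removes red
    yellow  = (1, 1, 0)   # removes blue
    magenta = (1, 0, 1)   # the "violet" layer: removes green
    subtractive = tuple(w * c * y * m
                        for w, c, y, m in zip(white, cyan, yellow, magenta))
    # subtractive == (0, 0, 0)  ->  stacking all three fixed filters passes nothing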

I think you're both talking about two different aspects of the system. Forgive the ascii art -- sounds like slayemin is talking about how the projection gets reflected into your eye (right), and you're talking about where the projection comes from (top).

   Green Source  ___
     Red Source  __ \
    Blue Source  _ \ \
                  \ \ \ 
                  L L L
                  a a a
  Eye->           y y y
                  e e e
                  r r r
                  1 2 3

Hmm, you're probably right. Cute diagram, BTW :)

Google should sell Glass again without its features, giving us the ability to program the GPU on it, if there is one. The only reason I'm asking is that they stopped selling it, plus they got the functionality wrong. I'd rather not say what I could make with this if it only had the ability to render with the GPU, use its gyro, and connect to an external radio positioning device, but it's my dream; too bad I don't know how to make such tiny things. My original second question was how they build such layers, not how they are designed.

This topic is closed to new replies.
