John Carmack joins Oculus as CTO


you need to use a "special SDK" to use it. In other words, it is the same shit as all those hobby engineer products.

Yes, the game needs to specifically be built to render at the appropriate FOV, draw in stereo with the appropriate eye separation, etc... and needs to implement head tracking.
As swiftcoder mentions above, there are already a lot of community hacks/mods out there for existing games. Unfortunately, these will always be quite poor quality compared to a real integration (unless games allow mods to completely re-write their rendering pipeline and/or camera code...).

E.g. many simply re-route head-tracked movements to be interpreted as mouse movements, which is quite poor compared to an actual mouse + head-tracking camera set-up. In most head-tracked games I've played, your existing input devices control the absolute orientation of the camera/player/vehicle, while head-tracking is a separate, relative orientation applied on top.
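A rough sketch of that last idea, purely for illustration -- all of the names here are made up, and a real engine would compose quaternions rather than adding Euler angles, but the structure is the same: the mouse moves an absolute orientation, and the tracker pose is layered on top each frame.

```c
#include <stdio.h>

typedef struct { float yaw, pitch, roll; } Euler;

/* Mouse input drives the absolute body/vehicle orientation... */
static Euler update_body(Euler body, float mouse_dx, float mouse_dy)
{
    const float sensitivity = 0.1f;   /* arbitrary for this sketch */
    body.yaw   += mouse_dx * sensitivity;
    body.pitch += mouse_dy * sensitivity;
    return body;
}

/* ...while the head-tracker pose is a separate, relative offset on top. */
static Euler compose_view(Euler body, Euler head)
{
    Euler view = { body.yaw + head.yaw,
                   body.pitch + head.pitch,
                   body.roll + head.roll };
    return view;
}

int main(void)
{
    Euler body = { 0.0f, 0.0f, 0.0f };
    Euler head = { 5.0f, -2.0f, 0.0f };        /* fake tracker sample */
    body = update_body(body, 20.0f, 0.0f);     /* fake mouse movement */
    Euler view = compose_view(body, head);
    printf("camera yaw=%.1f pitch=%.1f roll=%.1f\n", view.yaw, view.pitch, view.roll);
    return 0;
}
```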

the Rift or headtracking (perhaps via the Kinect) would add great immersion to games without much work for developers and without much extra processing required.

I've been using a TrackIR5 for years, and yes, the added immersion is beyond words... The stereo vision and wide FOV of the Oculus are another giant leap on top (but as I've reported in other threads, it's too far into the uncanny valley for my brain, and I get sick if I use it for more than 15-30 minutes. One hour of TF2 gave me over 12 hours of nausea when I tried to "push through" the warning symptoms at about 30 minutes).

As Samoth points out above though, developers need to make use of a specific SDK in order to support these kinds of peripherals, so not many games actually do support them -- the Arma FPS series (aka "DayZ"), some flight sims, and some racing games (Codemasters series is good) have head-tracking support, but it's not mainstream.

What I'd really like to see is a "de facto standard for devices that give 6DOF values" (with a single easy, usable API), so at least every mainstream program supports them, and supports them properly, and in the same way.

FreeTrack is already trying to do that for head tracking. Companies like NaturalPoint are fighting against them though... Hopefully Oculus cooperates with them.

Anyway, still hardly any devs support head-tracking... There's also facetracknoir, which doesn't require any peripherals besides a webcam (but is obviously less precise).

An API that supports all of the above would indeed be beneficial.
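Purely to illustrate what such a "single easy, usable API" might look like from the application side -- none of these names exist in any real SDK; it's just a hypothetical sketch of a vendor-neutral 6DOF interface:

```c
/* Hypothetical vendor-neutral 6DOF tracking interface (illustrative only). */
typedef struct {
    float x, y, z;          /* position in metres, relative to a neutral pose */
    float yaw, pitch, roll; /* orientation in degrees */
} SixDofPose;

typedef struct SixDofDevice SixDofDevice;   /* opaque device handle */

int           sixdof_device_count(void);    /* TrackIR, FreeTrack, Rift, webcam, ... */
SixDofDevice *sixdof_open(int index);
int           sixdof_poll(SixDofDevice *dev, SixDofPose *out_pose); /* non-blocking; 0 = no new sample */
void          sixdof_close(SixDofDevice *dev);
```

A game would then just poll whatever device happens to be plugged in, without caring which vendor made it.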


FreeTrack is already trying to do that for head tracking.

The oculus SDK isn't that super special, you just need to provide a split-screen image to it

That's perfect then, because OpenGL has had GL_FRONT_LEFT and GL_FRONT_RIGHT buffers for about two decades.

Only no IHV bothers to support them; instead, a lot of hacks and shit are put in place. Now what I'd really like to see is that what's already standardized -- and, in theory, working -- is used, instead of reinventing the wheel for every little piece of hardware.

In other words, get away from what we had in the 1980s, when you had to write separate versions for every different little bit of hardware.
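For reference, the standardized path in question is about this much code -- a minimal sketch, assuming the driver actually hands out a stereo-capable context (e.g. via PFD_STEREO or GLX_STEREO), with draw_scene() standing in for the application's own renderer:

```c
#include <GL/gl.h>

extern void draw_scene(float eye_offset);   /* placeholder for the app's rendering */

void render_stereo_frame(float eye_separation)
{
    /* Left eye into the standardized left back buffer... */
    glDrawBuffer(GL_BACK_LEFT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene(-0.5f * eye_separation);

    /* ...right eye into the right back buffer... */
    glDrawBuffer(GL_BACK_RIGHT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    draw_scene(+0.5f * eye_separation);

    /* ...then one SwapBuffers/glXSwapBuffers presents both eyes at once. */
}
```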


Only no IHV bothers to support them, instead a lot of hacks and shit are put in place. Now what I'd really like to see is that what's already standardized and -- in theory working -- is used, instead of reinventing the wheel for every little piece of hardware.

Nvidia has supported them for quite some time... on their Quadro series. As with other "advanced" OpenGL features like... any form of readback, they decided that you need a $3000 graphics board to be allowed to use them (in the case of readback it is part of core, so you get an unnecessarily slow version instead).

FreeTrack is already trying to do that for head tracking.

The oculus SDK isn't that super special, you just need to provide a split-screen image to it

That's perfect then, because OpenGL has had GL_FRONT_LEFT and GL_FRONT_RIGHT buffers for about two decades.

Only no IHV bothers to support them; instead, a lot of hacks and shit are put in place. Now what I'd really like to see is that what's already standardized -- and, in theory, working -- is used, instead of reinventing the wheel for every little piece of hardware.

In other words, get away from what we had in the 1980s, when you had to write separate versions for every different little bit of hardware.

but, apparently, why do that when you can have a library dependency, render the scene twice, deal with a bunch of device variables (to render correct output), warp the images via a shader, ...?
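for anyone wondering what that warp step looks like: each eye is rendered to an offscreen texture, then a full-screen pass distorts it to cancel the lenses. roughly along these lines -- this is illustrative only, not the actual SDK shader, and the lens center / k coefficients are exactly the per-device variables mentioned above:

```c
/* Fragment shader for the per-eye barrel-distortion pass, kept as a C string
 * since it would be compiled at runtime with glShaderSource/glCompileShader. */
static const char *warp_fragment_src =
    "#version 120\n"
    "uniform sampler2D eye_tex;    /* the eye's offscreen render */\n"
    "uniform vec2 lens_center;     /* per-eye, from device config */\n"
    "uniform vec4 k;               /* distortion coefficients k0..k3 */\n"
    "varying vec2 uv;\n"
    "void main() {\n"
    "    vec2 theta = uv - lens_center;\n"
    "    float r2 = dot(theta, theta);\n"
    "    vec2 warped = lens_center + theta * (k.x + r2*(k.y + r2*(k.z + r2*k.w)));\n"
    "    gl_FragColor = texture2D(eye_tex, warped);\n"
    "}\n";
```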

in the time I've had to mess with it, I tweaked my stereo-3D rendering to work with the Rift, but the initial results weren't very good.

my engine thus far has been doing 3D mostly as a trick based on parallax warping, but this leaves a tradeoff between a weak 3D effect and bad artifacts along edges.

I may at this rate end up having to switch to a cheaper rendering strategy (to allow higher framerates and full stereo rendering).

the bigger problems at the moment, I think, are the low resolution of the current devkits (pretty obvious when using it, *1), and their tendency to quickly cause pretty bad levels of motion sickness (or "simulator sickness", either way...).

something I have also partly hoped for a few times is some sort of camera for it, to allow it to be used for augmented-reality stuff, and also to avoid having to take it off and put it back on to deal with external stuff.

IOW: capture video on what is essentially a stereo-webcam, which can either be fed back directly into the HMD (via a "bypass button" or similar), or could be processed by the game for some effect (reality+HUD+3D objects, ...). sort of like some games for cell-phones or the 3DS, or maybe something more useful.

*1: not sure if this is due to the actual screen resolution, or just that much of one's field of vision is mostly confined to a fairly small area of pixels.

at present, there are a lot of fairly obvious pixels, making it almost look like one is playing a game in 320x240 mode.

IOW: 1280x800 means 2x 640x800; of this, ~640x640 is usable (top/bottom pixels are mostly out-of-view), and with the software + IPD thing, it looks like probably only ~400x400 is really usable.

compared with 320x240, it isn't a *huge* difference quality-wise.

much preferable would be ~ 1280x1280 for each eye, meaning a roughly 2560x1600 or 2560x1440 screen.

the 1920x1080 panel they are planning for the consumer release should at least be an improvement (giving 960x1080 per eye).
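just to put the per-eye numbers side by side (the ~640x640 and ~400x400 figures above are eyeball estimates; the per-eye split itself is simply the panel width divided by two):

```c
#include <stdio.h>

int main(void)
{
    /* panel width x height -> per-eye resolution for a side-by-side split */
    int panels[][2] = { {1280,  800},    /* current devkit            */
                        {1920, 1080},    /* planned consumer panel    */
                        {2560, 1440},    /* "would much prefer"       */
                        {2560, 1600} };  /* "would much prefer"       */
    for (int i = 0; i < 4; ++i)
        printf("%4dx%-4d panel -> %4dx%-4d per eye\n",
               panels[i][0], panels[i][1], panels[i][0] / 2, panels[i][1]);
    return 0;
}
```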

not sure if there is much of a good solution to the motion-sickness issue though...

could be better, but still pretty cool I guess...

[armchair opinion]

I also agree with ranakor - yet another self-contained console would be horrible. We have had, what? Eight of them in the past year and near future? No thanks!
But a new hardware peripheral that works with your existing computer, that'll help encourage other hardware manufacturers to develop similar offerings and lead to lower prices and greater support in games? Yes please!


But isn't that exactly what the Rift is not? Unless I grossly misunderstood what's on Wikipedia and in the company propaganda, you need to use a "special SDK" to use it. In other words, it is the same shit as all those hobby engineer products.


There's no existing general-purpose SDK for these kinds of headsets, so they have to make their own. But just like with gamepads (and videocards, monitors, keyboards and mice, scanners, printers, digital cameras, and other ubiquitous hardware peripherals), for the first five or ten years, every company will be creating their own SDKs for their own products and it'll be a mess for developers.

Eventually, one or more of these things will happen:

A) Microsoft, Intel, Apple, NVidia, or AMD will create a consistent API for developers (because it directly benefits these companies' user satisfaction with the new hardware), and the peripheral companies will choose to support it (because it directly benefits user satisfaction with their hardware).
B) Developers will get fed up, and create an SDK that wraps the four or five SDKs of the different peripheral companies.
C) The companies (two or more) will come to mutual agreement and work together on a standard, and the rest will fall in line.
D) One of the companies will create a standard that, if the others accept it, will end up with a governing body like the ones behind OpenGL, DVD, or Blu-ray.
E) One of the companies will become dominant, and the other companies will design their interfaces to imitate that company's for consistency, or nobody will buy the "buggy" hardware that developers aren't supporting.

It directly benefits everyone involved to create consistency. The reason it'll take ten years before that consistency arises is that every company will duke it out, trying to dominate the field with new features and trying to become the one that writes the standard.

There are many ways it has happened in the past, but IF the market for these products becomes large enough, it will eventually lead to standardization. The market will ensure it.

Of course, others can upset the standard. Microsoft created a uniform standard for gamepads, and then stabbed everyone in the back by making the XBox 360 controller a new standard of its own... but it quickly balances out by other gamepad makers taking path [E] above.

3Dconnexion's SpaceNavigator comes to mind here. I bought one of these back when they were "big hype" because it's so awesome, such a great help for modelling, and many programs including Blender support it. It turned out to be good for next-to-nothing, and Blender didn't support it at all (eventually a homebrew developer snapshot did). In the meantime, like 3 years or so later, it is indeed supported, but even so, it's more of a nuisance than a help. Using the numpad to align and the mouse to dolly is a hundred times faster and easier, and zooming doesn't work very well either way in any case.

The key is that the market has to become big enough. If it stays a small market, then the companies making the product have to fight an uphill battle to get developer support. Developers will support it if enough devices are sold, and enough devices will get sold if developers support it. It's kind of a catch-22 you have to break through, either by throwing money at developers (paying them to support it in its infancy), or by making it so easy for developers that they can throw just a day or two of work at getting it working, or by making the peripheral "just work" out of the box by imitating another device type like a keyboard, mouse, monitor, or printer.

The first 5-10 years will definitely be a mess, unless the hardware manufacturers decide to work together straight out of the gate - which is possible, but uncommon.

[/armchair opinion]
