OpenGL Virtual Reality for Dummies


Recommended Posts

Out of curiosity (for now), what does it take to get your 3D game into VR? I'm not planning to actually do it any time soon, but I'd just like to know briefly:

 

* What kind of 3D (buffer) data does a VR kit minimally need to start rolling?

I would think it needs a depth buffer to begin with. Is that just the ordinary depth buffer most apps already have for various techniques, or does it need to be spiced up with additional data? Or maybe something entirely different?

 

* Can you turn any 3D program into VR? Or does it really require specific techniques from the ground up?

* So I have a 3D (first-person) game made in OpenGL (4.5); can that eventually be transferred to VR?

* There are a couple of kits out there now (Oculus, PlayStation VR, ...). I guess they come with their own SDKs. Do they roughly work the same (can I swap easily between them), or are these SDKs big and hard to master?

* Controls (with your head) - I guess that is really bound to whatever SDK the VR kit brings with it, right? Or is this also standardized, straightforward stuff?

* For artists, is there anything that needs to be changed or tweaked in their workflow (creating 3D props, textures, normal maps, ...)?

* For design / game rules, would you need to alter things like motion speed or the size of your rooms? Or can it be mapped pretty much one-to-one from a non-VR setting?

* Audio - anything that needs adjustments here, aside from being as good in 3D/stereo as possible?

* Performance. Being close to your eyes, would you need larger resolutions? Anything else that slows down performance?

* Your personal experience - easy, hard? Maybe easy to set up, but hard to make it work *well*?

 

And excuse me if some of these are stupid questions. The only VR experience I have was watching Jurassic Park pictures with those green/red glasses back in the nineties!


I can only think of two special features:
* Fisheye lens: higher resolution / more detail / better AA at the center. A mipmapped framebuffer, where the high-res level only covers the central region.

* Motion sickness: read out the head orientation, do a last render pass to rotate the view, and FreeSync it out. It would even be desirable to incorporate the rotation as a scanline rendering algorithm into the buffer read-out circuit. You know: orientation data goes from the VR set into the framebuffer address generator, the right pixel is fetched, and the values are pushed to the LCD. Did I write LCD? LCD is slow. OLED? 120 FPS!

So basically the graphics card has to support VR. Nothing a game programmer can do or not do.
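
To make that "last render pass to rotate the view" idea a bit more concrete, here is a minimal sketch of such a reorientation ("timewarp") pass, assuming a GLM-based C++ host and an existing fullscreen-quad pipeline. The uniform names and the ndcToViewRay mapping are illustrative assumptions, not any headset vendor's actual implementation:

    // Sketch only: re-project the finished frame by the head rotation that
    // happened between render time and scan-out (uniform names are made up).
    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    const char* kTimewarpFrag = R"(
    #version 450 core
    in vec2 uv;
    out vec4 color;
    uniform sampler2D renderedFrame;
    uniform mat3 deltaRotation;  // rotation since the frame was rendered
    uniform mat3 ndcToViewRay;   // assumed to turn NDC coords into a view ray
    void main() {
        vec3 ray = ndcToViewRay * vec3(uv * 2.0 - 1.0, 1.0);
        vec3 warped = deltaRotation * ray;          // rotate the view ray
        vec2 warpedUv = (warped.xy / warped.z) * 0.5 + 0.5;
        color = texture(renderedFrame, warpedUv);   // sample the old frame
    })";

    // CPU side: delta between the pose the frame was rendered with and the
    // freshest pose sampled just before presenting.
    // (The exact inverse/multiplication order depends on your conventions.)
    glm::mat3 computeDeltaRotation(const glm::quat& renderPose,
                                   const glm::quat& latestPose) {
        return glm::mat3_cast(glm::inverse(latestPose) * renderPose);
    }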


>> Nothing a game programmer can do or not do

That almost sounds too good to be true :D

I can understand the fisheye. But why lower quality at the outer regions? Is that to emulate "out of focus"? But wouldn't your eyes already be doing that - I mean your own real eyes? Or is it more just to avoid overwhelming your head with too many visuals all around?


I have no VR experience beyond reading some things.

 

"Special" techniques I've heard of are foveated rendering, and asynchronous time warp. (temporal reprojection)

 

Also, I remember when NVIDIA's Pascal was announced they had a feature called "Simultaneous Multi-Projection", which reuses geometry between the projections for the left and right eyes, and more... but I can't describe the "more" portion beyond saying it uses multiple projections per eye... google it.

 

Edit - here's SMP: http://www.anandtech.com/show/10325/the-nvidia-geforce-gtx-1080-and-1070-founders-edition-review/11

 

>> * Performance. Being close to your eyes, would you need larger resolutions? Anything else that slows down performance?

I've read that with the current resolution of VR (slightly greater than 1080p) you can make out individual pixels. But you need a minimum of 90 FPS with no dips, from what I've heard. Otherwise, when you move your head fast there is severe lag that IIRC (again, from what I've read) causes nausea/dizziness.

Edited by Infinisearch


From 3D graphics and shaders to anti-vomit code. I can imagine poor performance, crazy lighting effects, or certain environment settings are sickening indeed.

But seriously, from what I read the biggest challenge is keeping up speed, which is quite hard when looking at my own program that barely reaches 60 FPS at a 1600 x 900 resolution. Certainly with big particles flying around, things can get smudgy. And I guess even some AAA engines/titles suffer the same.

 

Gazing through the SMP article you posted, it seems you have to render everything twice (with a small offset, like your own eyes have). But techniques like "Simultaneous Multi-Projection" save you from having to push geometry twice, and the fisheye approach avoids having to render everything at full resolution. But are those steps automagically done for you by the video card, or do you still have to teach your GPU a lesson?
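
For what it's worth, the brute-force "everything twice with a small offset" loop might look roughly like the sketch below, assuming GLM on the C++ side. bindEyeFramebuffer and renderScene are hypothetical stand-ins for your engine, not any SDK's API, and real SDKs also hand you per-eye (usually asymmetric) projection matrices:

    // Sketch: brute-force stereo - one pass per eye, view shifted sideways.
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>

    void bindEyeFramebuffer(int eye);            // hypothetical: bind per-eye FBO
    void renderScene(const glm::mat4& viewProj); // hypothetical: draw the world

    void renderStereo(const glm::mat4& headView, float ipd /* ~0.064 m */) {
        for (int eye = 0; eye < 2; ++eye) {
            // Shift each eye half the interpupillary distance to the side
            // (sign convention depends on your view-matrix handedness).
            float x = (eye == 0 ? 0.5f : -0.5f) * ipd;
            glm::mat4 eyeView =
                glm::translate(glm::mat4(1.0f), glm::vec3(x, 0.0f, 0.0f)) * headView;
            glm::mat4 eyeProj =
                glm::perspective(glm::radians(95.0f), 1.0f, 0.1f, 1000.0f);
            bindEyeFramebuffer(eye);
            renderScene(eyeProj * eyeView);
        }
    }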


@MJP, does the nausea have more to do with the latency, the framerate, or both?

He mentioned using the sticks for looking/movement -- even with amazing framerate/latency, this feels really weird. If the VR camera rotates, but your head doesn't rotate, your brain gets upset. Your inner ear is telling your brain that you didn't move your head, but your eyes are telling it that you did move your head, so your brain decides that you've eaten poison and starts trying to get it out of your stomach...

 

FWIW though, everyone decided that using a stick to move was a really bad idea early on, hence all the VR games with teleportation now... but I've played a lot of Onward, which uses a movement stick (plus room scale movement) and I find it to be completely fine. 

So personally, I don't mind having a thumbstick for movement (as long as the movement speed is quite slow), but using a thumbstick to look around is still a terrible idea.

 

The latter point can cause problems for the Oculus Rift, because out of the box it is a "front-facing VR" experience. It's not designed for the player to be facing backwards (without forking out extra $$$ for a 3rd tracking camera)... so a well-designed Oculus VR game should use gameplay that doesn't require the player to turn around... Some games work around this by having a "turn 180º button", which fades to black, rotates you, and then fades back in. This is a little less disorienting than giving you a turn thumbstick.
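
As an illustration of that fade-rotate-fade trick, here's a tiny hedged sketch; the durations and the playSpaceYaw/fadeAlpha hooks are invented for the example, not taken from any shipped game:

    // Sketch of a "turn 180°" button: fade to black, rotate the play space
    // while the screen is black, fade back in. Timings are made up.
    struct SnapTurn {
        float t = -1.0f;           // < 0 means inactive
        bool  rotated = false;
        void trigger() { t = 0.0f; rotated = false; }
        void update(float dt, float& playSpaceYaw, float& fadeAlpha) {
            if (t < 0.0f) return;
            t += dt;
            if (t < 0.15f) {                       // fading out
                fadeAlpha = t / 0.15f;             // 1.0 == fully black
            } else if (t < 0.30f) {                // rotate once, fade back in
                if (!rotated) { playSpaceYaw += 3.14159265f; rotated = true; }
                fadeAlpha = 1.0f - (t - 0.15f) / 0.15f;
            } else {                               // done
                fadeAlpha = 0.0f;
                t = -1.0f;
            }
        }
    };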

 

 

A side note to add to the above -- the SteamVR/OpenVR SDK isn't technically tied to the Vive. SteamVR is meant to be an open software platform that works with all hardware - currently they support HTC Vive, Oculus Rift and OSVR.

Oculus also has their own SDK that you can use instead of OpenVR, which is tied to the Oculus hardware.
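
To give a feel for what "using the SDK" means in practice, a minimal OpenVR start-up might look roughly like this; the calls below are the real OpenVR entry points, but treat the snippet as an untested sketch rather than a recipe:

    // Minimal OpenVR initialization sketch.
    #include <openvr.h>
    #include <cstdio>

    int main() {
        vr::EVRInitError err = vr::VRInitError_None;
        vr::IVRSystem* hmd = vr::VR_Init(&err, vr::VRApplication_Scene);
        if (err != vr::VRInitError_None) {
            std::printf("OpenVR init failed: %s\n",
                        vr::VR_GetVRInitErrorAsEnglishDescription(err));
            return 1;
        }
        uint32_t w = 0, h = 0;
        hmd->GetRecommendedRenderTargetSize(&w, &h); // per-eye resolution
        std::printf("Render each eye at %u x %u\n", (unsigned)w, (unsigned)h);
        vr::VR_Shutdown();
        return 0;
    }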


MJP covered almost everything nicely; just thought I'd mention:

For distant objects the parallax/view warp can be pretty much nonexistent. If you've got the time, something nice can be to render one eye fully (full color/depth buffer etc.) and for the other eye only render nearby objects - stuff close enough to have visible parallax. You can then reproject all the pixels beyond that distance from one eye to the other and save the draw calls/models/etc.
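
A hedged sketch of how that reprojection could look as a fragment pass, assuming the left eye is rendered fully first. Every uniform name here is an assumption about your renderer, and the disparity math is the simple pinhole version:

    // Sketch: fill the right eye's far pixels by shifting the left eye's
    // image by the (tiny) stereo disparity instead of re-drawing them.
    const char* kFarReprojectFrag = R"(
    #version 450 core
    in vec2 uv;
    out vec4 color;
    uniform sampler2D leftColor;   // finished left-eye image
    uniform sampler2D leftDepth;   // its depth buffer
    uniform float nearZ, farZ;     // projection planes, to linearize depth
    uniform float ipd;             // eye separation, view-space units
    uniform float focalPx;         // focal length in pixels
    uniform float reuseBeyond;     // only reuse pixels farther than this
    void main() {
        float d = texture(leftDepth, uv).r;
        float z = nearZ * farZ / (farZ - d * (farZ - nearZ)); // linear depth
        if (z < reuseBeyond) discard; // near geometry gets drawn for real
        // Horizontal disparity falls off with distance, so far pixels
        // barely move between the eyes.
        float shift = (ipd * focalPx / z) / float(textureSize(leftColor, 0).x);
        color = texture(leftColor, vec2(uv.x + shift, uv.y));
    })";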

Also, positional audio is very nice, and so is a lot more audio detail to go with it. In VR it's definitely more noticeable when, say, you drop a physics object and it doesn't make a sound than when that happens in a normal game.


>> I'll tell you right now that if you port an FPS to VR that uses analog sticks for movement and rotation, you're going to have some very nauseous players. There are some players who can handle more extreme situations without discomfort, but I personally am of the belief that as VR developers we have a responsibility to make comfort a top priority. It's already a small, niche market, and we're never going to expand past that if we make the average user want to throw up when they play our games.

>> FWIW though, everyone decided that using a stick to move was a really bad idea early on, hence all the VR games with teleportation now... but I've played a lot of Onward, which uses a movement stick (plus room-scale movement) and I find it to be completely fine. So personally, I don't mind having a thumbstick for movement (as long as the movement speed is quite slow), but using a thumbstick to look around is still a terrible idea.


How is it on PC?

I don't have any VR experience, but I don't want to play a game with teleporting (or worse) if I don't have accurate control of the viewing direction.
The FPS way of moving and looking around is the most important progress in games we've ever had - VR is not worth giving it up for.
I'm not going to pay a lot of money just to play rail shooters or something, and I guess I'm not alone; the same is true for a lot of core gamers.
Not average gamers, but maybe those who would be more willing to invest in VR and accept initial motion sickness for better games.

Personally I think VR is going to fail because of this:
Fancy controllers (too expensive, need exclusive games - just start with the headset and wait until there is a market for those things)
Comfortable games (like going back to the age of interactive movies on CD-ROM? Where are the 'real' games?)
Too-high resolutions (too expensive and limiting - try an optical low-pass filter to hide the pixels)



So what I suggest for PC games is simply:

Don't change controls. Keep mouse look and just add the additional head rotation to that (see the sketch after these points). Use a gamepad or keyboard for motion.

Don't create VR-only games; instead, make more games compatible with VR. Let the users decide if they like it, or get sick the first hour but feel better the next day.
(I played Trackmania every day for years, stopped for some months, got back, and there was extreme motion sickness just from the regular monitor - for both me and my girlfriend. After some days we got used to it again; no more sickness.)

Don't try to invent a new genre just for VR - that will happen automatically with time. Instead, again, make current genres work with VR.

No room tracking, please - I don't want to stumble over my three-year-old or fall out of the window. I just wanna sit lazily in my chair as always and see virtual things in 3D.
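
Regarding the "keep mouse look, add head rotation" point above, a minimal sketch with GLM; the composition order is a design choice, and hmdOrientation is assumed to come as a quaternion from whatever SDK you use:

    #include <glm/glm.hpp>
    #include <glm/gtc/quaternion.hpp>

    // Mouse steers the body/aim yaw and pitch as usual; the headset's
    // orientation is composed on top so the player can look around freely.
    glm::quat cameraOrientation(float mouseYaw, float mousePitch,
                                const glm::quat& hmdOrientation) {
        glm::quat yaw   = glm::angleAxis(mouseYaw,   glm::vec3(0.0f, 1.0f, 0.0f));
        glm::quat pitch = glm::angleAxis(mousePitch, glm::vec3(1.0f, 0.0f, 0.0f));
        return yaw * pitch * hmdOrientation;
    }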



Maybe I'd change my mind if I had real VR experience (I did render two views side by side on screen and crossed my eyes - it works! :) ), but I would like to hear what you guys personally think about it. (Keep in mind that if this approach can work, it would have worked with far less investment and the $400 glasses everyone wanted.)


And a more technical question: did you try to calculate specular only once, from a point midway between the eyes? (Thinking of object-space shading.)
