Member Since 16 Jan 2012

Posts I've Made

In Topic: Understanding the difference in how I should be drawing lots of objects

22 March 2016 - 06:24 AM

If you don't need to read the VBO back on the CPU side, and you don't have to update many different parts of it, it would probably be faster to just push updates to the VBO with GL.BufferSubData. That way the driver doesn't have to retrieve the data back from the GPU: mapping the buffer can make the driver copy the specified part of the VBO from VRAM into RAM, and then transfer the whole thing back to VRAM on unmap. With BufferSubData nothing is ever read back from the GPU, and only the range specified in the call is updated.


But all of the methods suggested here are valid, and the only way to know which one performs best is to profile them with your actual use case. When doing that, you should probably look into a high-precision timer and time individual frames. Keep a list of, say, the last 600 frames and you can get the shortest, the longest and the average. Measuring performance in milliseconds (or even nanoseconds, if you want) matters, because it gives a much better idea of how far your performance drops than frames per second does. Keeping a record of individual frames also lets you see the longest frames, which helps you find stutters and such more easily.

All you really have to worry about is your frame time exceeding roughly 16 milliseconds; beyond that you can no longer guarantee 60 fps and you might need to optimize. If you are targeting a different frame rate (for VR I think you need over 90 fps, and for mobile 30 should be enough), you can calculate the millisecond budget as 1000/fps.
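As a sketch of the bookkeeping described above (the class and its names are my own, not from any particular engine): `time.perf_counter()` serves as the high-precision timer, a `deque` keeps the most recent 600 frames, and the millisecond budget is derived from the target fps.

```python
import time
from collections import deque

class FrameStats:
    """Keeps the most recent frame times (in milliseconds) and summarizes them."""

    def __init__(self, window=600, target_fps=60):
        self.samples = deque(maxlen=window)   # oldest frames drop off automatically
        self.budget_ms = 1000.0 / target_fps  # e.g. ~16.7 ms for 60 fps
        self._last = None

    def tick(self):
        """Call once per frame; records the time since the previous call."""
        now = time.perf_counter()
        if self._last is not None:
            self.samples.append((now - self._last) * 1000.0)
        self._last = now

    def summary(self):
        """Returns (shortest, longest, average) frame time in milliseconds."""
        return min(self.samples), max(self.samples), sum(self.samples) / len(self.samples)

    def over_budget(self):
        """Frames that exceeded the budget -- candidates when hunting stutters."""
        return [ms for ms in self.samples if ms > self.budget_ms]
```

Calling `tick()` once per frame is all the instrumentation the render loop needs; the summary and over-budget list can then be dumped or drawn as an overlay whenever you want to inspect a run.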

In Topic: C# Color Wheel

11 December 2015 - 02:03 PM

I'm sorry, but I'm really not comfortable divulging the purpose of it. I understand that in some cases it could make it easier to understand the best way to do it, but in this case I've told you all that is necessary. I really want nothing more than a color picker that can restrict the range of colors that a user can pick. What more clarification do I need? The reason is not important, but if anyone has a way to accomplish this, I'm all ears.

The problem here is that there are a few thousand different ways to do this, depending on what you are using to interact with the user. The most common approach would be WinForms, but you could also be using Mono, Unity or pretty much anything else ever conceived to interact with the user. For all we know you might be using a COM interface to make a grid of LEDs go on and off. If you don't have anything at all yet, I suggest you start looking at WinForms tutorials; once you have a window with some interactivity, come back and tell us what you used to make it, and we will happily figure out a way to make a color picker for that. Making a simple color picker from nothing at all is several months of work even for a professional. Even if we picked a library at random and made a ready-to-use color picker for it, it would probably take you a long, long time to integrate it into your project. So help us help you and give us a little more detail.
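Just to show the kind of logic a restricted picker needs once a UI library is chosen, here is a library-agnostic sketch in Python. The hue and value limits are invented for illustration (nothing in the original question specifies them); whatever widget toolkit is used, the idea is the same: convert the user's pick to HSV and snap it into the allowed range.

```python
import colorsys

def clamp_color(r, g, b, hue_range=(0.0, 0.3), min_v=0.2, max_v=0.9):
    """Snap an arbitrary RGB pick (components in 0..1) to the nearest color
    inside an allowed hue range and value range.

    hue_range, min_v and max_v are example restrictions, not anything from
    the original question.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    h = min(max(h, hue_range[0]), hue_range[1])  # clamp hue into the allowed band
    v = min(max(v, min_v), max_v)                # clamp brightness as well
    return colorsys.hsv_to_rgb(h, s, v)
```

A picker built this way can let the user click anywhere on a normal color wheel and simply display (and return) the clamped result, which is usually friendlier than graying out forbidden regions.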

You are right that you don't have to tell us what your project is about, but everyone else is also right that we need more information to be able to help.

In Topic: GLSL not behaving the same between graphics cards

06 November 2015 - 04:26 PM


Do you check for errors when compiling and linking the shaders? Is it failing to link, but you just ignore the failure and try and render with it anyway?

I should have mentioned this in the original post, sorry. I'm not getting any errors while compiling or linking.


Do you use glGetError, glGetProgramInfoLog and glGetShaderInfoLog? Shader errors do not always show up in glGetError. You can also try glValidateProgram. If none of these report anything, then we can start suspecting a driver bug.

In Topic: Terrain Normals : Quick questions about calculating on the fly...

20 August 2015 - 09:37 AM

Games like Magic Carpet or Populous the Beginning come to mind for me


As far as I know those games didn't use normals or anything similar; they just used a predrawn tileset. There simply wasn't any algorithm 20 years ago that could do that in real time, so they faked it with predrawn images.

In Topic: OpenGL ES 2: Scale texture on creation?

24 July 2015 - 08:18 AM

Why do you want to scale it during creation? What is wrong with letting OpenGL scale it for you during drawing? Unless you want to implement your own scaling algorithm, or use a library to get slower but better-quality scaling, you are better off just letting OpenGL do it while drawing.
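For what the "implement your own scaling algorithm" route looks like, here is a minimal sketch (my own illustration, not from the thread) of nearest-neighbour scaling on a row-major pixel list; this is the simplest do-it-yourself scaler, and GL's built-in linear filtering will generally look better than it.

```python
def scale_nearest(pixels, w, h, new_w, new_h):
    """Nearest-neighbour scale of a row-major pixel list from w*h to new_w*new_h.

    Each destination pixel just copies the source pixel whose coordinates map
    closest to it -- fast to write, but blocky compared to linear filtering.
    """
    out = []
    for y in range(new_h):
        src_y = y * h // new_h          # which source row this output row samples
        for x in range(new_w):
            src_x = x * w // new_w      # which source column this output pixel samples
            out.append(pixels[src_y * w + src_x])
    return out
```

Upscaling a 2x2 image to 4x4 with this simply repeats each source pixel in a 2x2 block, which is exactly the blockiness that makes GPU-side filtering the better default.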