

Ubik

Member Since 07 Nov 2002

Posts I've Made

In Topic: Why is glGetShaderiv(); crashing my written program.

13 June 2015 - 09:34 AM

There still seems to be a potential problem in the code. ShaderCompileLOG is defined to be 256 bytes long, but ShaderCompileLENGTH is passed to glGetShaderInfoLog as the maximum length. After the second glGetShaderiv call, ShaderCompileLENGTH contains the actual length of the info log, which may be more than 256 bytes. The buffer given to glGetShaderInfoLog is still only 256 bytes, so OpenGL could end up writing more text into the buffer than it can hold.

 

The easy way to fix this is to pass 256 as the maxLength parameter to glGetShaderInfoLog. Using a named constant is probably a good idea, to make sure the array length and the parameter stay in sync. Of course, if the log is longer than that, you will only get part of it, so vstrakh's suggestion of allocating memory for the log text (ShaderCompileLENGTH bytes, to be exact) is a good idea in that regard too.
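To make the dynamic-allocation variant concrete, here is a minimal sketch; the function and variable names are illustrative, not taken from the original thread:

```cpp
// Minimal sketch: query the actual log length first, then size the
// buffer to match, so glGetShaderInfoLog can never overrun it.
#include <GL/glew.h>
#include <cstdio>
#include <vector>

void PrintShaderInfoLog(GLuint shader)
{
    GLint logLength = 0;
    glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &logLength); // includes the terminating null

    if (logLength > 0)
    {
        std::vector<GLchar> log(logLength);
        glGetShaderInfoLog(shader, logLength, nullptr, log.data());
        std::fprintf(stderr, "Shader info log: %s\n", log.data());
    }
}
```

With the fixed-size variant the same rule applies: the value passed as maxLength must be the size of the buffer you actually allocated, not the length reported by glGetShaderiv.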


In Topic: What are your opinions on DX12/Vulkan/Mantle?

08 June 2015 - 12:47 PM

Apple has announced El Capitan, the upcoming version of OS X. I thought it would be relevant to this thread, as it will bring support for their Metal API to OS X.

 

I have no details, but apparently there have been claims of a 50% improvement in performance and a 40% reduction in CPU usage.


In Topic: Vulkan is Next-Gen OpenGL

20 April 2015 - 06:35 AM

It's nice to notice that the video was recorded on a KDE desktop, hinting that it's running on Linux. I suppose that also implies the demo is running on an Intel GPU. Well, actually it's the video being a LunarG demo that implies the Intel GPU; my understanding is that they wrote the Intel Linux driver.


In Topic: Vertices and indices in the same buffer?

06 April 2015 - 10:35 AM

It seems curious that using big combined buffers in the pre-AZDO world has apparently not really been a thing, considering they could be used to minimize state changes when editing buffer contents, if the "AZDO approach" is to use them as larger memory pools. How hasn't this been a thing earlier? I guess there just isn't that much benefit (with the older GL API at least), when combining buffers of a single usage type already creates large enough blocks of memory, and it keeps things conceptually easier to grasp. Then again, I'm not an advanced enough graphics programmer to draw any strong conclusions.
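For reference, here is a hedged sketch of the kind of combined buffer I mean: vertices and indices sub-allocated from one GL buffer object and addressed by byte offset. The names and sizes are illustrative only, and it assumes (as the thread discussion suggests) that binding one buffer to both targets is allowed:

```cpp
// Sketch only: one buffer object holding vertex data followed by index
// data; drawing then addresses each region by its byte offset.
#include <GL/glew.h>

void UploadCombined(GLuint buffer,
                    const void* vertices, GLsizeiptr vertexBytes,
                    const void* indices, GLsizeiptr indexBytes)
{
    // Allocate one block big enough for both kinds of data.
    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, vertexBytes + indexBytes, nullptr, GL_STATIC_DRAW);

    // Vertices at the start of the buffer, indices right after them.
    glBufferSubData(GL_ARRAY_BUFFER, 0, vertexBytes, vertices);
    glBufferSubData(GL_ARRAY_BUFFER, vertexBytes, indexBytes, indices);
}

// At draw time the same buffer is bound to both targets, and the index
// region is selected via the "pointer" argument of glDrawElements:
//   glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, buffer);
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
//                  reinterpret_cast<const void*>(vertexBytes));
```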

 

Anyway, I think I'll go with a single general Buffer type instead of artificially separated VertexBuffer, UniformBuffer et cetera (the typed approach still entices me, as I'm a little bent on having strict compile-time sanity and validity checks), though mostly from the viewpoint of there being less code in general.
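Something like the following is what I have in mind: a single wrapper whose target is chosen at call time rather than baked into the type. This is just a sketch with hypothetical names, not code from an actual project:

```cpp
// Sketch of a general Buffer type: the GL target is a parameter of the
// operation, not part of the type, so one class covers all usages.
#include <GL/glew.h>
#include <cstddef>

class Buffer
{
public:
    Buffer() { glGenBuffers(1, &handle); }
    ~Buffer() { glDeleteBuffers(1, &handle); }

    Buffer(const Buffer&) = delete;            // owns the GL handle, so no copies
    Buffer& operator=(const Buffer&) = delete;

    void Bind(GLenum target) const { glBindBuffer(target, handle); }

    void Upload(GLenum target, const void* data, std::size_t size)
    {
        glBindBuffer(target, handle);
        glBufferData(target, static_cast<GLsizeiptr>(size), data, GL_STATIC_DRAW);
    }

private:
    GLuint handle = 0;
};
```

The typed alternative would wrap this same class in thin VertexBuffer/UniformBuffer façades to get the compile-time checks back, at the cost of more code.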

 

Thanks for your feedback!


In Topic: Vertices and indices in the same buffer?

04 April 2015 - 01:08 PM

@mhagain: Good point on the new APIs revealing much about the current state of the hardware, and on them being largely agnostic about buffer contents (as far as I can tell too).

 

@vlj: Your wording about it being forbidden by the spec brought up an OpenGL-related thought I've had in the background: the API may permit things that don't actually work reliably outside the common usage patterns.

 

@unbird: Thanks for testing this out!

 

@Promit: I may very well be reading between the lines incorrectly, but I get the impression that mixing different kinds of data in a single buffer (in old or current GL or D3D, anyway) is not something you've done or seen done often enough to call it a common pattern?

