

bluntman

Member Since 29 Aug 2004
Offline Last Active Nov 18 2014 06:43 AM

Topics I've Started

HTML5 for game UI?

26 June 2014 - 06:26 PM

I'm looking for a "poor man's" Scaleform solution, and after discovering Awesomium I am interested in the idea of using HTML5. I'd like to hear from anyone who has done this, and what animation tool they used. Google Web Developer seems rather basic and doesn't support custom components, and Adobe Edge apparently uses jQuery for all animation, resulting in slow and large animations.

http://www.webdesignerdepot.com/2013/10/html5-app-smackdown-which-tool-is-best/

None of these seems ideal.


VAO and GL_ELEMENT_ARRAY_BUFFER.

17 April 2014 - 08:11 PM

The OpenGL wiki has me confused here:

 

 

 

Index buffers

Indexed rendering, as defined above, requires an array of indices; all vertex attributes will use the same index from this index array. The index array is provided by a Buffer Object bound to the GL_ELEMENT_ARRAY_BUFFER binding point. When a buffer is bound to GL_ELEMENT_ARRAY_BUFFER, all rendering commands of the form gl*Draw*Elements* will use indexes from that buffer. Indices can be unsigned bytes, unsigned shorts, or unsigned ints.

The index buffer binding is stored within the VAO. If no VAO is bound, then you cannot bind a buffer object to GL_ELEMENT_ARRAY_BUFFER.

 

You CAN'T bind to GL_ELEMENT_ARRAY_BUFFER unless you have already bound a VAO?! This makes no sense to me, as one would naturally want to bind to GL_ELEMENT_ARRAY_BUFFER when filling the buffer in, without having to worry about binding an associated VAO. Is there another way to fill an index buffer? Does having 0 bound as the VAO count as having a VAO bound?
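To be concrete, what I would hope to be able to do is roughly the following (a sketch only; indexBuffer, indices, indexCount and vao are placeholder names): fill the buffer through a binding target that is context state rather than VAO state, and only attach it as GL_ELEMENT_ARRAY_BUFFER once the VAO is bound.

// Upload index data without any VAO bound. GL_COPY_WRITE_BUFFER (or
// GL_ARRAY_BUFFER) is plain context state, so the VAO doesn't matter here.
GLuint indexBuffer = 0;
glGenBuffers(1, &indexBuffer);
glBindBuffer(GL_COPY_WRITE_BUFFER, indexBuffer);
glBufferData(GL_COPY_WRITE_BUFFER,
             indexCount * sizeof(GLuint),   // assumed: GLuint* indices, GLsizei indexCount
             indices,
             GL_STATIC_DRAW);
glBindBuffer(GL_COPY_WRITE_BUFFER, 0);

// Later, while building the VAO, record the element array binding into it.
glBindVertexArray(vao);                     // assumed: previously created VAO
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuffer);
glBindVertexArray(0);

If that is legal, it would at least let me keep buffer filling separate from VAO setup.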

 


Modernizing my GL based renderer.

29 July 2013 - 05:11 PM

So I came back to my main project after a couple of years away and GL has changed quite a lot! I have familiarized myself with the new 4.4 specs for the API and GLSL, and am now left with questions about best practices.

 

My old engine used interleaved vertex arrays to provide vertex data, which still appears to be possible via glVertexAttribPointer's stride parameter, but it occurred to me that it might be better from an engine design standpoint to use separate buffers. These buffers could be dynamically bound to vertex shader inputs based on interrogation of the shader program, i.e. use consistent naming of vertex shader inputs (e.g. 'vertex', 'normal', 'uv', etc.) and map these to separate attribute buffers in my geometry objects, roughly as in the sketch below. This would essentially make my vertex formats data-driven by the shader, and would allow easy detection of mismatches between the vertex data and the shader's requirements. Good idea, or overkill (or just a total misapprehension!)?
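Sketch of the "interrogate the program" idea (GL 3.3+ style calls; geometry.bufferForAttribute, componentCountFor and componentTypeFor are hypothetical helpers in my engine):

GLint attribCount = 0;
glGetProgramiv(program, GL_ACTIVE_ATTRIBUTES, &attribCount);

glBindVertexArray(vao);
for (GLint i = 0; i < attribCount; ++i)
{
    char   name[128];
    GLint  size = 0;
    GLenum type = 0;
    glGetActiveAttrib(program, i, sizeof(name), nullptr, &size, &type, name);

    GLint location = glGetAttribLocation(program, name);
    if (location < 0)
        continue;                                       // built-in attribute, skip

    GLuint buffer = geometry.bufferForAttribute(name);  // hypothetical lookup by input name
    if (buffer == 0)
    {
        // Vertex data doesn't provide what the shader wants -> easy to report.
        continue;
    }

    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glEnableVertexAttribArray(location);
    // Hypothetical helpers mapping the GLSL type (e.g. GL_FLOAT_VEC3) to (3, GL_FLOAT).
    glVertexAttribPointer(location,
                          componentCountFor(type),
                          componentTypeFor(type),
                          GL_FALSE, 0, nullptr);
}
glBindVertexArray(0);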

 

The second question I have is regarding buffer-backed uniform blocks. I want to use the unsized-array feature (the last member of a block can be an unsized array, with its size defined by your API call) for light specifications, material specifications matched against material IDs (my renderer uses deferred lighting), and cascaded shadow frustum matrices. Is this an appropriate use, or is there a more canonical method? A rough sketch of what I have in mind is below.
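As far as I can tell the unsized trailing array actually belongs to shader storage ("buffer") blocks in GLSL 4.30+ rather than plain uniform blocks, so this sketch assumes that; the Light struct, binding index and all names are placeholders:

// Matching GLSL block would be something like:
//   layout(std430, binding = 0) buffer LightBlock {
//       int   lightCount;
//       Light lights[];            // unsized trailing array
//   };

struct Light { float position[4]; float color[4]; };    // laid out to match std430

GLuint lightBuffer = 0;
glGenBuffers(1, &lightBuffer);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, lightBuffer);
glBufferData(GL_SHADER_STORAGE_BUFFER,
             16 + lightCount * sizeof(Light),           // 16-byte header + array (std430)
             nullptr, GL_DYNAMIC_DRAW);
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 0, sizeof(GLint), &lightCount);   // assumed: GLint lightCount
glBufferSubData(GL_SHADER_STORAGE_BUFFER, 16,
                lightCount * sizeof(Light), lights);    // assumed: Light* lights
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, lightBuffer);                 // binding = 0
glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);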

 

My head is buzzing with new ideas and I haven't even got to the tessellation stages yet (something about awesome water)!

 

 

 

 


Memory Allocator from a specified block of memory.

05 May 2012 - 12:53 PM

Does anybody know of a good allocator that can use a specified block of memory as its pool? All the ones I can find seem to work only on global pools or internally managed pools. I just want to give the allocator a big contiguous block of memory along with its size, and have it return memory allocated from within that block. Alternatively (although it is functionally the same), give it only a pool size and have it return blocks from the pool as offsets into it. I guess you could call it an abstract or virtual allocator...

Even better would be an entire system that will manage pointers into a block, where the block can be specified by me; it would include an allocator and a pointer wrapper that dereferences the allocated object from the specified block.
This (in case you haven't guessed) is to allow me to share memory between processes, and to dynamically create and allocate objects into it. A rough sketch of the simplest version of what I mean is below.
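A minimal sketch (bump allocation only, no freeing; a real version would need a free list, alignment validation and cross-process synchronisation, and all names here are made up):

#include <cstddef>
#include <cstdint>

class BlockAllocator
{
public:
    BlockAllocator(void* block, std::size_t size)
        : base_(static_cast<std::uint8_t*>(block)), size_(size), used_(0) {}

    // Returns an offset into the block (valid in any process mapping it),
    // or SIZE_MAX when the block is exhausted. 'align' must be a power of two.
    std::size_t allocate(std::size_t bytes, std::size_t align = alignof(std::max_align_t))
    {
        std::size_t offset = (used_ + align - 1) & ~(align - 1);
        if (offset + bytes > size_)
            return SIZE_MAX;
        used_ = offset + bytes;
        return offset;
    }

    // Resolve an offset back to a pointer in *this* process's mapping of the block.
    void* resolve(std::size_t offset) const { return base_ + offset; }

private:
    std::uint8_t* base_;
    std::size_t   size_;
    std::size_t   used_;
};

The pointer wrapper would then just store the offset (plus the block base or the allocator) and call resolve() in its dereference operators, so the same handle is meaningful in every process that maps the block.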

Weird depth rendering.

19 May 2011 - 08:21 AM

I am trying to implement shadow mapping, and my rendered depth buffer looks like this:
[attached image: the rendered shadow-map depth buffer]

Normally the depth values that haven't been written to should be white (1.0), and you can see that the ones around where actual depth writing has occurred have been set to 1. But it looks like the buffer started off without being cleared to 1. I am calling glClear(GL_DEPTH_BUFFER_BIT) to clear the depth buffer to 1.0 right before rendering. I have tried calling glClearDepth(0.5) to set the cleared value to 0.5 instead of 1, and that stops depth values greater than 0.5 from being rendered, as one would expect, but the entire background of the depth buffer is still black!
The "steps" around the edge of the rendered sphere seem to indicate that the fragment shader is writing values in blocks (as one would expect) and that this is somehow allowing the correct background depth value to show through.
I can't work out what problem would cause this result, so any help would be appreciated! My shadow-pass setup is roughly as below.
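This is what I believe the setup should look like (a sketch only; shadowFbo and shadowSize are placeholder names). Two things I gather can silently break the clear: clearing before the shadow FBO is actually bound, and having depth writes masked off, since glClear honours both glDepthMask and the scissor test:

glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
glViewport(0, 0, shadowSize, shadowSize);

glDisable(GL_SCISSOR_TEST);   // scissor also clips glClear
glDepthMask(GL_TRUE);         // depth writes must be enabled for the clear to work
glClearDepth(1.0);
glClear(GL_DEPTH_BUFFER_BIT);

glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS);
// ... render occluders into the depth attachment ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);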
