

Member Since 23 Mar 2010
Offline Last Active Jul 30 2015 07:46 PM

Topics I've Started

preprocessor defines not allowed when compiling a shader

08 June 2012 - 07:18 AM

While working on my graphics engine, I found that some of my shaders never change, and I want to keep them inside my compiled code so I have fewer files floating around. However, compiling a shader from a string that contains preprocessor macros causes an error to be thrown:

Error CompileShaderFromMemory, Shader@0x03D5F140(1,76): error X3000: syntax error: unexpected token '#'

An undetermined error occurred

Are any preprocessor defines not allowed when compiling a shader from memory? I did a search and found nothing, so I am curious.
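For reference, here is a stripped-down sketch (not my actual engine code) of how a shader can be embedded as a string. One thing worth checking while reducing the problem: if the adjacent string literals are concatenated without explicit \n separators, every #define ends up in the middle of a line, which is one plausible way to get an "unexpected token '#'" error from the HLSL compiler:

```cpp
#include <cassert>
#include <string>

// A shader embedded as adjacent string literals. Without the explicit "\n"
// at the end of each line, the literals concatenate into one long line and
// the '#' directives are no longer at the start of a line, which the HLSL
// preprocessor rejects.
static const char* g_shaderSrc =
    "#define TEXEL_SIZE 0.00390625\n"
    "float4 main(float2 uv : TEXCOORD0) : SV_Target\n"
    "{\n"
    "    return float4(uv * TEXEL_SIZE, 0.0f, 1.0f);\n"
    "}\n";

// Check that every preprocessor directive starts its own line.
bool DirectivesStartLines(const std::string& src)
{
    for (size_t i = 0; i < src.size(); ++i)
        if (src[i] == '#' && i != 0 && src[i - 1] != '\n')
            return false;
    return true;
}
```

A quick sanity check like DirectivesStartLines can catch the missing-newline case before the string ever reaches the shader compiler.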

Reliable UDP library

06 August 2011 - 10:35 PM

I wrote a network library based on the User Datagram Protocol (UDP). After using ENet (which is a great product), I found it lacked features I needed, such as encryption and threading support. Instead of modifying the ENet library, I found it easier to write my own. So, I wrote a C++ network library that supports reliable and unreliable transmission, supports encryption, and runs on multiple threads (if a server is created). Since it is written in C++, the code should be easy for most people to modify and understand.

The library is free to use for any purpose. It is available here: http://nolimitsdesigns.com/UDP_Engine/html/index.html
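To give an idea of what "reliable transmission over UDP" involves, here is a generic sketch of the core bookkeeping (this is not the actual API of the library above, just the common pattern): each reliable packet gets a sequence number and is tracked until the peer acknowledges it; anything still unacknowledged after a timeout is a candidate for retransmission.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <map>
#include <string>

// Generic ack-tracking layer for reliable delivery over UDP.
class ReliabilityLayer
{
public:
    // Assign the next sequence number and remember the payload for resend.
    std::uint32_t Send(std::string payload)
    {
        std::uint32_t seq = nextSeq_++;
        pending_[seq] = std::move(payload);
        return seq;
    }

    // The peer acknowledged `seq`; stop tracking that packet.
    void OnAck(std::uint32_t seq) { pending_.erase(seq); }

    // Packets that would be retransmitted after a timeout.
    std::size_t Unacked() const { return pending_.size(); }

private:
    std::uint32_t nextSeq_ = 0;
    std::map<std::uint32_t, std::string> pending_;
};
```

Unreliable sends simply skip this tracking entirely, which is why they are cheaper.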

render target views

09 February 2011 - 09:41 AM

I haven't been able to find the answer to this: does a render target view have to have the same format as the texture it is bound to, or can the formats differ as long as the data strides match?

For example, can I create a texture with an RGBA8 format, which is 32 bits per texel, and then create a render target view for it with an RG16 format, which is also 32 bits?
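To make the "strides match" part concrete, here is a small sketch (plain C++, no D3D) of what reinterpreting one 32-bit texel as RGBA8 versus RG16 looks like at the byte level. Whether the API actually permits the cast is a separate question; as far as I can tell, D3D10/11 only allows a view to reinterpret a resource within the same _TYPELESS format family, which would rule out RGBA8-to-RG16 even though the strides match.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// One 32-bit texel viewed two ways: as four 8-bit channels (RGBA8) or as
// two 16-bit channels (RG16). The byte count is identical, which is the
// "strides match" condition; only the interpretation of the bits changes.
struct Rgba8 { std::uint8_t  r, g, b, a; };
struct Rg16  { std::uint16_t r, g; };

Rg16 ReinterpretTexel(Rgba8 texel)
{
    static_assert(sizeof(Rgba8) == sizeof(Rg16), "strides must match");
    Rg16 out;
    std::memcpy(&out, &texel, sizeof(out)); // safe type pun, no new storage
    return out;
}
```

The reinterpretation allocates nothing; both views alias the same 32 bits, which is exactly what sharing one base texture between differently formatted views would do.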

I want to be able to share render targets between different parts of my program instead of having multiple render targets created.

It would be nice if I could swap around the same base texture with the only difference being different render target views being created for each purpose.

Also, does a stencil buffer have to be in the 24x8-bit format? Can I use an 8-bit-stride format instead of wasting all that extra buffer solely for the purpose of having a stencil buffer? I use a 32-bit depth buffer, which is why I am wondering.

I realize that I can "Just try it," but I am going to be away from my computer the whole day and I was hoping for an answer by the time I got back.

texture access speed

30 December 2010 - 05:18 PM

I have always wondered about this; hopefully someone knows the answer. Given the following:

I create an ARGB8 texture (called Tex1)

In the pixel shader, are the actual costs of the two texture fetches below the same?

float4 test = Tex1.Sample(Linearfilter, texcoord); // this samples all 4 components

float2 test = Tex1.Sample(Linearfilter, texcoord).xy; // this samples only two

Now, does the hardware always fetch an entire stride for each sample, that is, all four components? Or does the hardware fetch only exactly what I am specifying? This may seem like a strange question, but I know that video cards are optimized in strange ways, and I have always wondered about this.

Also, one more question: are the costs of sampling an ARGB32 texture the same as an ARGB8? I know that on CPUs there is no speed difference when working with 8-bit or 32-bit integers, so I wonder if the same applies to video cards as well.

Texture format Questions

18 November 2010 - 08:50 AM

I am trying to figure out an OpenGL example, and I am used to DirectX, so this is very confusing. There does not seem to be good documentation on OpenGL like there is for DirectX, so it's no fun!
Here are my questions:

float dataarray[64*64*4*2];// pretend I put some data in here (sized to cover both uploads below)

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, 64, 64, 0, GL_RGBA, GL_FLOAT, dataarray);

In the above code, does OpenGL convert the 32-bit float dataarray to a 16-bit float format, clamp it to [0, 1], and store that into the texture I am creating?
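As far as I can tell, the [0, 1] clamp applies to fixed-point internal formats like GL_RGBA8, not to float formats like GL_RGBA16F_ARB; the upload just narrows each 32-bit float to half precision. Here is a rough sketch of that narrowing (it truncates the mantissa instead of rounding and flushes tiny values to zero, so it is not bit-exact with what a driver does):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Narrow a 32-bit IEEE 754 float to 16-bit half precision by splitting the
// sign, rebiasing the exponent (127 -> 15), and truncating the mantissa
// from 23 bits to 10.
std::uint16_t FloatToHalf(float f)
{
    std::uint32_t bits;
    std::memcpy(&bits, &f, sizeof(bits));
    std::uint32_t sign = (bits >> 16) & 0x8000u;
    std::int32_t  exp  = static_cast<std::int32_t>((bits >> 23) & 0xFF) - 127 + 15;
    std::uint32_t mant = bits & 0x7FFFFFu;
    if (exp <= 0)  return static_cast<std::uint16_t>(sign);            // flush tiny values to zero
    if (exp >= 31) return static_cast<std::uint16_t>(sign | 0x7C00u);  // overflow to infinity
    return static_cast<std::uint16_t>(sign | (exp << 10) | (mant >> 13));
}

// Widen a half back to a 32-bit float (denormals treated as zero).
float HalfToFloat(std::uint16_t h)
{
    std::uint32_t sign = static_cast<std::uint32_t>(h & 0x8000u) << 16;
    std::int32_t  exp  = (h >> 10) & 0x1F;
    std::uint32_t mant = h & 0x3FFu;
    std::uint32_t bits = sign;
    if (exp != 0)
        bits |= (static_cast<std::uint32_t>(exp - 15 + 127) << 23) | (mant << 13);
    float r;
    std::memcpy(&r, &bits, sizeof(r));
    return r;
}
```

Note that values well outside [0, 1] survive the round trip, which is what makes 16F formats useful for HDR data in the first place.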

glTexImage3D(GL_TEXTURE_3D, 0, GL_LUMINANCE_ALPHA16F_ARB, 64, 64, 4, 0, GL_LUMINANCE_ALPHA, GL_FLOAT, dataarray);

What exactly does luminance do? Is it like alpha blending for the texture I am generating? In the example I am going over, the luminance texture is bound as a render target, and the output in the pixel shader is

vec2 outfrag;// doesn't matter what's inside
gl_FragColor = outfrag.xxxy;

Now, my question is: what is happening? What is being drawn to the render target? Is there alpha blending going on? Again, the documentation sucks.
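My current reading of that fragment, sketched with stand-in vector types (an assumption on my part, since I am still learning GL): the .xxxy swizzle is pure component selection on write, replicating x into the R, G and B slots and putting y into A, with no blending implied by the swizzle itself. For a LUMINANCE_ALPHA target, the red channel would then supply the luminance value and the fourth component the alpha.

```cpp
#include <cassert>

// Minimal stand-ins for GLSL's vec2/vec4, to illustrate what writing
// outfrag.xxxy to gl_FragColor produces: x replicated into the first
// three components, y in the fourth. No blending happens here; it is
// just component selection.
struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };

Vec4 SwizzleXXXY(Vec2 v)
{
    return Vec4{ v.x, v.x, v.x, v.y };
}
```

Whether the fixed-function pipeline then blends that output depends on the glEnable(GL_BLEND) state, not on the swizzle.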