vanattab

Member Since 18 Sep 2012
Last Active Jul 22 2013 12:25 PM

Posts I've Made

In Topic: What makes an RTS great?

27 December 2012 - 03:57 PM

I really, really like RTSes that have fewer, more powerful units. In the best RTS of all time, Company of Heroes, the player typically has only a large handful of units, 10-15 or so. Each infantry unit is really a squad of soldiers, but they act as one unit. I don't like games where you just build a massive army of 40 units, group them up, and click the center of the enemy's base. CoH battles are all about micro-management and environmental interaction. If you have not tried it, you can pick up all three games on Steam pretty cheap.

Different races are also a must IMO, but they make game balance hard and are a lot of extra work. Having a couple of types of resources is OK, but don't go overboard; keep the gameplay focused on interesting and fun battle mechanics, not tedious base management. It's not fun building a million harvesters and having to constantly keep an eye on them, but it is fun to fight for control of the ore. That is why CoH's territory-capture system is so good.


In Topic: Why is this 128Bit Color Format being converted to 32Bit (HLSL/SLIMDX-9)

16 October 2012 - 08:15 AM

[quote]
I think that he doesn't expect the values to be limited to 1/255 increments. They should have the same accuracy as a floating-point value.
That has been the problem since the beginning.

That's why I proposed writing a pixel shader to fill the buffer with a desired floating-point value, just to narrow down the problem.

Cheers!
[/quote]


I tried writing a simple texture shader to fill the texture as you suggested, but I got an odd error and I am not sure why.

This was my shader code:
[source lang="cpp"]sampler2D Tex0 : register(s0);float4 fill128( float2 coords : TEXCOORD0) : COLOR{ return float4(1, 0.612549, 0.612549, 0.612549);}[/source]

In my application code I then compile the shader as so:

[source lang="vb"]psByteCode = ShaderBytecode.CompileFromFile("filePath", "fill128", "ps_2_b", ShaderFlags.None)shader = New TextureShader(psByteCode.Data)............' '''''''' Then in my rendering code before I call device.BeginScene() I call TestTexture.Fill(shader)[/source]

But I get the following error on the TestTexture.Fill(shader) call, a SlimDX X9Exception with a message of:

Additional Information: E_FAIL: An undetermined error occurred.


In Topic: Why is this 128Bit Color Format being converted to 32Bit (HLSL/SLIMDX-9)

16 October 2012 - 07:32 AM

[quote]
Then what is not behaving as expected?
You have values that, when multiplied by 255.0f, result in whole numbers: 1.0f, 2.0f, 3.0f, etc.

There is no fraction, so you are obviously going to get 0.0f every time.
No mystery here. This is exactly what you should expect.
If it is not what you desire, you should change it to be whatever exactly it is that you desire.

L. Spiro
[/quote]



I believe you misunderstand what I am trying to do. I am working on a color vision test to screen for color blindness, and I would like to be able to display more colors than is possible on an 8-bit-per-channel display (i.e. 24/32-bit depth). Say, for example, I want to display 10 bits of color per channel (0-1023); this is equivalent to being able to display 0.25 increments on the 8-bit (0-255) scale.

For example, say I want to render a large letter "A" with a floating-point value of 0.503921...; on a scale of 0-255 this is 128.5. The way I want to mimic this on an 8-bit display is to render half the pixels as 128 and half the pixels as 129. When viewed at a reasonable distance, the eye averages over the pixels and interprets the color of the letter as 128.5. Please see the picture below as a reference for what I am talking about. When looking at the image, imagine that one of the shades of green is 128 and the other is 129, and that each box is only 1 pixel, not 16. Your eye will interpret this as 128.5. Note that the example below is perfectly uniform; in my basic random-chance model this is not necessarily true.

www.users.muohio.edu/vanattab/BasicDithering.png

In order to implement this with SlimDX and DirectX 9, what I tried was creating a 128-bit surface (I also tried 64-bit, in both floating-point and regular formats) so that I can store color at a higher depth. A 64-bit surface should be able to store 2^16 = 65536 possible values per channel (24/32-bit is 2^8 = 256). So when I render the "A" to a 64-bit surface, the color data should be stored in increments of 1/65535, not 1/255. In my pixel shader I then want to multiply that floating-point number by 255, which should give a number like 128.5 in the example above. When I then take frac() of that number, I should get 0.5, which corresponds to the % chance that I want to scale the color value up to 129. But that is not happening; even though I am drawing the text to a 64-bit format, the color seems to be quantized to the 24/32-bit format. Does that make sense now?
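
For reference, here is a minimal sketch of the kind of dithering pass I have in mind, not my actual shader: it assumes the high-precision render target is bound to s0 and a precomputed texture of uniform random values in [0,1) is bound to s1 (the sampler and function names are placeholders). Each channel is rounded up with probability equal to the fractional part of its 0-255 value:

[source lang="cpp"]
// Minimal sketch, not the actual shader: probabilistic dithering from a
// high-precision source down to 8 bits per channel. Assumes SrcTex holds
// the high-precision color and NoiseTex holds uniform random values in [0,1).
sampler2D SrcTex   : register(s0);
sampler2D NoiseTex : register(s1);

float4 DitherTo8Bit(float2 uv : TEXCOORD0) : COLOR
{
    float4 c      = tex2D(SrcTex, uv) * 255.0f; // e.g. 128.5 for the "A" example
    float4 low    = floor(c);                   // 128
    float4 chance = frac(c);                    // 0.5 = probability of rounding up
    float  r      = tex2D(NoiseTex, uv).r;      // per-pixel uniform random number
    // step(r, chance) is 1 where chance >= r, so each channel rounds up
    // with probability equal to its fractional part.
    return (low + step(r, chance)) / 255.0f;
}
[/source]

On average half the pixels then land on 128 and half on 129, which gives the 128.5 average described above.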

In Topic: Why is this 128Bit Color Format being converted to 32Bit (HLSL/SLIMDX-9)

15 October 2012 - 06:46 AM

The frac(oldColor * 255) call always returns 0. While the oldColor values are 0-1 floating-point numbers, they only ever change in increments of 1/255.
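
As a minimal illustration of the symptom (sampler and function names are placeholders), rendering frac(color * 255) directly comes out solid black whenever the source has been quantized to 8 bits per channel:

[source lang="cpp"]
// Sketch: if the sampled color was quantized to 8 bits per channel, every
// value is an exact multiple of 1/255, so scaling by 255 yields a whole
// number and frac() is always 0.
sampler2D Tex0 : register(s0);

float4 ShowQuantization(float2 uv : TEXCOORD0) : COLOR
{
    float4 c = tex2D(Tex0, uv); // e.g. exactly 128.0f/255.0f
    return frac(c * 255.0f);    // solid black: no fractional part survives
}
[/source]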

In Topic: Why is this 128Bit Color Format being converted to 32Bit (HLSL/SLIMDX-9)

12 October 2012 - 10:56 AM

Could you elaborate? I am using D3D9 via the SlimDX wrapper. Maybe SlimDX is taking a float color but converting it to 0-255 before passing it to DirectX?
