KonstantinGorskov

Member
  • Content Count

    5
  • Joined

  • Last visited

Community Reputation

100 Neutral

About KonstantinGorskov

  • Rank
    Newbie
  1. KonstantinGorskov

    Is the integer getting truncated?

    This is a code fragment from a GLSL fragment shader (Unity 3.3, GLSL 2.0 I guess). I'm passing 4 bytes encoded into RGBA and emulating bit shifting to get them back (32-bit depth; on the C# side it is an Int32 value):

    int depth2_i = ((color_full.x * 255) * 16777216) + ((color_full.y * 255) * 65536) + ((color_full.z * 255) * 256) + 0;
    float depth2 = depth2_i / 2147483647f;

    In C# code, depth2 returns 0.49... after the division, as it should. In the GLSL shader I get a black pixel (0). I assume my int depth2_i gets truncated; the same happens with a float depth2_i. Is that a GLSL limitation?
  2. KonstantinGorskov

    Help (understanding) converting small vector code to C#

    Thanks a lot! It all makes sense now.
  3. KonstantinGorskov

    Why are the image bits out of range?

    Oh... thank you! Pretty stupid of me to confuse that.
  4. With the FreeImage.NET library, I can read an EXR 32-bit-per-channel file and get a pointer to the image bits:

     IntPtr input = FreeImage.GetBits(dib);
     byte* ptr = (byte*)input.ToPointer();

     What I want to do next is reconstruct the first channel value and store it as a float:

     for (int i2 = 0; i2 < _width; i2++) {
         for (int k = 0; k < _height; k++) {
             byte[] nn = new byte[4];
             nn[0] = ptr[(pix * 32)];
             nn[1] = ptr[(pix * 32) + 1];
             nn[2] = ptr[(pix * 32) + 2];
             nn[3] = ptr[(pix * 32) + 3];
             float depth = BitConverter.ToInt32(nn, 0);

     But using this loop, I go out of range. Why? If each channel has 32 bits, can't I safely loop through all bits within the _width * _height * 32 range? The reported bits per pixel is 96. Where might I be wrong?
  5. I have found code which packs a float value into an RGBA texture, and I have found a way to extract the float value from my EXR depth texture on the C# side. Now I need to pack it into a PNG image. The original code is:

     const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
     const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
     vec4 comp = fract(depth * bitSh);
     comp -= comp.xxyz * bitMsk;

     I can easily emulate vec4 in C#. But what about the fract(depth * bitSh) part? What is actually going on in there? And what does comp.xxyz stand for here?
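On the truncation question in the first post, the shader's arithmetic can be sanity-checked on the CPU. A minimal Python sketch of the same sum (the function name is mine, not from the post) shows that the top channel alone can already exceed Int32.MaxValue, which is one plausible source of the overflow:

```python
# Reconstruct a 32-bit depth value from normalized RGBA channels,
# mirroring the shader arithmetic (r<<24, g<<16, b<<8 emulated
# with multiplies). Channel values are floats in [0, 1].
def decode_depth(r, g, b, a=0.0):
    i = round(r * 255) * 16777216 + round(g * 255) * 65536 + round(b * 255) * 256
    return i / 2147483647.0

# r = 0.5 alone contributes 128 * 16777216 = 2147483648, which is
# already larger than Int32.MaxValue (2147483647), so a signed
# 32-bit int (or a low-precision GLSL int) cannot hold it.
print(decode_depth(0.5, 0.0, 0.0))   # slightly above 1.0
print(decode_depth(0.25, 0.0, 0.0))  # about 0.5
```

Python integers do not overflow, so the sketch only demonstrates the magnitudes involved; in GLSL the available integer and float precision depends on the precision qualifiers the platform actually provides.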
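On the out-of-range question in post 4: the pointer returned by GetBits is indexed in bytes, not bits, so at 96 bits per pixel each pixel occupies 12 bytes and an index like pix * 32 runs past the buffer; the four channel bytes also encode an IEEE 754 float, not an Int32. A hedged Python sketch of the corrected offset math (helper names and the sample buffer are mine):

```python
import struct

# A 96-bpp image stores 12 bytes per pixel (3 float channels x 4 bytes).
# The bits pointer indexes *bytes*, so the first channel of pixel p
# starts at byte offset p * 12, not p * 32.
BYTES_PER_PIXEL = 96 // 8  # 12

def first_channel(raw, pixel_index):
    off = pixel_index * BYTES_PER_PIXEL
    # Reinterpret 4 bytes as a little-endian 32-bit float -- the
    # equivalent of BitConverter.ToSingle, not BitConverter.ToInt32.
    return struct.unpack_from("<f", raw, off)[0]

# Two sample pixels: first channel 0.5, then 1.25 (other channels zero).
raw = struct.pack("<3f", 0.5, 0.0, 0.0) + struct.pack("<3f", 1.25, 0.0, 0.0)
print(first_channel(raw, 0), first_channel(raw, 1))  # 0.5 1.25
```

Note also that real FreeImage bitmaps pad each scanline, so row offsets should come from the library's pitch value rather than width * 12; the sketch ignores padding for clarity.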
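For the last question: fract(x) in GLSL returns x - floor(x), and comp.xxyz is a swizzle that builds a new vec4 from components of comp, namely (comp.x, comp.x, comp.y, comp.z). The subtraction removes, from each component, the bits already captured by the next higher-precision component. A rough Python emulation of the quoted pack (the matching unpack is my addition, not in the quoted code):

```python
import math

def fract(x):
    # GLSL fract(): fractional part, x - floor(x)
    return x - math.floor(x)

def pack_float_rgba(depth):
    """Emulate the shader: spread depth in [0, 1) across 4 channels."""
    bit_sh = (256.0**3, 256.0**2, 256.0, 1.0)
    bit_msk = (0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0)
    comp = [fract(depth * s) for s in bit_sh]
    # comp.xxyz swizzle: (comp.x, comp.x, comp.y, comp.z).
    # Subtracting comp.xxyz * bitMsk strips the higher-order bits
    # that leaked into each lower-precision component.
    xxyz = (comp[0], comp[0], comp[1], comp[2])
    return [c - s * m for c, s, m in zip(comp, xxyz, bit_msk)]

def unpack_rgba(comp):
    # Inverse: dot(comp, vec4(1/256^3, 1/256^2, 1/256, 1))
    return sum(c / s for c, s in zip(comp, (256.0**3, 256.0**2, 256.0, 1.0)))

d = 0.3712
assert abs(unpack_rgba(pack_float_rgba(d)) - d) < 1e-6
```

In double precision the round trip is essentially exact; in an actual 8-bit-per-channel PNG each component would additionally be quantized to 256 levels, which bounds the recoverable precision.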
Important Information

By using GameDev.net, you agree to our community Guidelines, Terms of Use, and Privacy Policy.
