


Community Reputation

120 Neutral

About Michał Ossowski

  1. Hi, I have a simple shader:

         void vs()
         {
             asdouble(uint(1), uint(2));
         }

     I try to compile it using the following command:

         fxc /T vs_5_0 /E vs test.hlsl

     This results in a program crash (Microsoft DirectX blah blah blah has stopped working). I'm sure this feature once worked for me under Win7. I'm using Shader Compiler 9.29.952.3111 (D3DCompiler_43.dll). Under Win8 (D3DCompiler_46.dll) it does work. Do you have any experience using this function?
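For reference, asdouble reinterprets two 32-bit uints as the low and high halves of one 64-bit double. A minimal Python sketch of the same bit-level packing (this only illustrates the layout; it says nothing about the compiler crash itself):

```python
import struct

def asdouble(lowbits: int, highbits: int) -> float:
    """Reinterpret two 32-bit unsigned ints as one 64-bit double,
    with lowbits in the least significant half, mirroring the
    bit layout of the HLSL asdouble intrinsic."""
    return struct.unpack('<d', struct.pack('<II', lowbits, highbits))[0]

# The bit pattern of 1.0 as a double is 0x3FF0000000000000,
# so high = 0x3FF00000 and low = 0 yields exactly 1.0.
print(asdouble(0, 0x3FF00000))  # 1.0

# asdouble(1, 2) from the shader above packs to bits 0x0000000200000001,
# a tiny positive subnormal double.
print(asdouble(1, 2))
```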
  2. Hi, I want to run my application on a GeForce GTX 670. It's designed for D3D11.1 and works properly on the reference device, so I want to test it on hardware. I tried two drivers, 306.23 and 306.02 BETA, on Win8 RTM x86, and neither supports this feature level. According to Wikipedia and the release notes, this card model should support it. Do you have any information explaining this situation?
  3. Michał Ossowski

    f32tof16 confusion

     Thank you all. After some time I realized that I just didn't understand HLSL implicit conversions, and that was the general problem for me. ;) For example, if I want to get the "raw" float16 value, I must set the RT format to R16G16B16_FLOAT (the simplest way) or R32G32B32_UINT (in that case, the value is stored in the LSBs). And if I want to send the value via a semantic (to another shader), I just have to declare its type as uint. Then the f32tof16 intrinsic works as I want it to.
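The conversion itself can be checked outside HLSL. A small Python sketch, using the struct module's half-precision 'e' format, confirms that the float16 bit pattern of 1.5 is 0x3E00, the value observed in the render target in the posts below:

```python
import struct

def f32tof16_bits(value: float) -> int:
    """Return the IEEE 754 half-precision bit pattern of a float,
    analogous to HLSL's f32tof16 (result in the low 16 bits of a uint)."""
    return struct.unpack('<H', struct.pack('<e', value))[0]

print(hex(f32tof16_bits(1.5)))  # 0x3e00
print(hex(f32tof16_bits(1.0)))  # 0x3c00
```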
  4. Michał Ossowski

    f32tof16 confusion

     Hmm, I switched the render target format to R16G16B16_FLOAT and used the following shader:

         float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
         {
             return float4(f32tof16(1.5), 1.5, 1, 1);
         }

     And finally I looked at the output. The red channel was 0x73c0 (incorrect) and the blue channel was 0x3e00 (the correct representation of 1.5).
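The 0x73c0 in the red channel is consistent with an implicit conversion: f32tof16(1.5) yields the uint 0x3E00 (15872 decimal), the float4 constructor converts that uint to the float *value* 15872.0, and the float16 render target then stores that value's half encoding, which is 0x73C0. A Python check of this chain (an interpretation of the observed output, not taken from the original thread):

```python
import struct

def half_bits(value: float) -> int:
    """IEEE 754 half-precision bit pattern of a Python float."""
    return struct.unpack('<H', struct.pack('<e', value))[0]

f16_of_1_5 = half_bits(1.5)
print(hex(f16_of_1_5))      # 0x3e00 -> 15872 decimal

# Storing the integer 15872 as a *value* in a float16 channel
# encodes it as 0x73c0 -- the "incorrect" red channel above.
print(hex(half_bits(float(f16_of_1_5))))  # 0x73c0
```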
  5. Michał Ossowski

    f32tof16 confusion

     That works, thank you, but the documentation says that f32tof16 returns a uint, and I thought that the float16 bits would be stored in the lower part of the uint, similarly to f16tof32, which reads from those bits.
  6. Michał Ossowski

    f32tof16 confusion

     Hi, I'm trying to find out why this intrinsic always returns zero. Here's my code:

         float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
         {
             float o = 1.5f;
             uint res = f32tof16(o);
             float resf = asfloat(res);
             return float4(resf, 1, 1, 1);
         }

     I tried with different values. The result should be in the lower part of the returned value. The function is compiled with the ps_5_0 profile. The render target's format is R32G32B32_FLOAT. Device: nVidia Quadro 1000M, feature level 11.0.

     EDIT: checked with the command-line compiler fxc. The result is:

         //
         // Generated by Microsoft ® HLSL Shader Compiler 9.29.952.3111
         //
         //
         // fxc /T ps_5_0 /E ps test.hlsl /Od
         //
         //
         // Input signature:
         //
         // Name                 Index   Mask Register SysValue Format   Used
         // -------------------- ----- ------ -------- -------- ------ ------
         // SV_POSITION              0   xyzw        0      POS  float
         //
         // Output signature:
         //
         // Name                 Index   Mask Register SysValue Format   Used
         // -------------------- ----- ------ -------- -------- ------ ------
         // SV_TARGET                0   xyzw        0   TARGET  float   xyzw
         //
         ps_5_0
         dcl_globalFlags refactoringAllowed
         dcl_output o0.xyzw
         mov o0.xyzw, l(0,1.000000,1.000000,1.000000)
         ret
         // Approximately 2 instruction slots used

     I'm wondering why the compiler skips the code without any warning...
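One plausible explanation for the constant fold to 0: f32tof16(1.5) is the uint 0x3E00, and asfloat reinterprets those bits as a float32, giving the subnormal 0x00003E00 ≈ 2.2e-41; shader arithmetic typically flushes subnormals to zero, so the compiler can legitimately fold the whole expression to 0. A Python sketch of just the reinterpretation step (the flush-to-zero reasoning is an assumption, not stated in the thread):

```python
import struct

def asfloat(bits: int) -> float:
    """Reinterpret a 32-bit unsigned int's bits as a float32,
    analogous to HLSL's asfloat intrinsic."""
    return struct.unpack('<f', struct.pack('<I', bits))[0]

val = asfloat(0x3E00)   # bits 0x00003E00: exponent field is 0 -> subnormal
print(val)              # ~2.2e-41; a flush-to-zero pipeline outputs 0.0

# Sanity check: 0x3F800000 is the float32 bit pattern of 1.0.
print(asfloat(0x3F800000))  # 1.0
```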
  7. Hi, I have a problem with applying a mask to a texture. The steps are as follows:

         mask = new RenderTarget2D(GraphicsDevice,
             graphics.PreferredBackBufferWidth,
             graphics.PreferredBackBufferHeight,
             false, SurfaceFormat.Color, DepthFormat.None);
         ...
         GraphicsDevice.SetRenderTarget(mask);
         GraphicsDevice.Clear(Color.Transparent);

         spriteBatch.Begin();
         spriteBatch.Draw(terrain, Vector2.Zero, Color.White);
         spriteBatch.End();

         BlendState bs = new BlendState();
         bs.ColorSourceBlend = Blend.One;
         bs.ColorDestinationBlend = Blend.One;
         bs.ColorBlendFunction = BlendFunction.Add;
         bs.AlphaSourceBlend = Blend.One;
         bs.AlphaDestinationBlend = Blend.One;
         bs.AlphaBlendFunction = BlendFunction.Add;

         spriteBatch.Begin(SpriteSortMode.FrontToBack, bs);
         spriteBatch.Draw(tex2, new Vector2(100f, 250f), Color.White);
         spriteBatch.End();

         GraphicsDevice.SetRenderTarget(null);
         GraphicsDevice.Clear(Color.CornflowerBlue);

         spriteBatch.Begin();
         spriteBatch.Draw((Texture2D)rt, Vector2.Zero, Color.White);
         spriteBatch.End();

     The mask and terrain are PNG textures with an alpha channel. I just want to take the alpha value of each pixel from the mask and "put" it into the render target (strictly, multiply the alpha values of source and destination). The BlendState class has (almost) no documentation, but I don't want to use shaders. Do you happen to know how to configure this object?

     EDIT: Thank you for not responding. This motivated me to investigate the problem thoroughly. My configuration is:

         bs.ColorSourceBlend = Blend.Zero;
         bs.ColorDestinationBlend = Blend.SourceAlpha;
         bs.ColorBlendFunction = BlendFunction.Add;
         bs.AlphaSourceBlend = Blend.Zero;
         bs.AlphaDestinationBlend = Blend.SourceAlpha;
         bs.AlphaBlendFunction = BlendFunction.Add;

     Now it works perfectly.
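The fixed-function blend equation being configured here is result = src * SrcBlend + dest * DestBlend, evaluated per channel. With the final configuration (Zero / SourceAlpha / Add), the output reduces to dest * src.alpha, i.e. the destination is scaled by the mask's alpha. A small Python sketch of that arithmetic with plain numbers (not an XNA API; the sample pixel values are made up for illustration):

```python
def blend_add(src, dest, src_factor, dest_factor):
    """Per-channel result = src * src_factor + dest * dest_factor,
    i.e. the fixed-function blend with BlendFunction.Add."""
    return tuple(s * sf + d * df
                 for s, sf, d, df in zip(src, src_factor, dest, dest_factor))

# Mask texel with alpha 0.5 drawn over an opaque grey destination.
src  = (1.0, 1.0, 1.0, 0.5)   # source: white, half-transparent
dest = (0.5, 0.5, 0.5, 1.0)   # current render-target value

zero      = (0.0, 0.0, 0.0, 0.0)   # Blend.Zero
src_alpha = (src[3],) * 4          # Blend.SourceAlpha

# Zero/SourceAlpha/Add: output is dest scaled by the source alpha.
print(blend_add(src, dest, zero, src_alpha))  # (0.25, 0.25, 0.25, 0.5)
```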
  8. Well, glaux is a pretty ancient library, and so is glut (another common tool for OpenGL context management). I would recommend freeglut for your first applications using OpenGL. And you should know that OpenGL is only a standard, so there will be no SDK like the DirectX SDK. There is documentation of course, but for samples you should search third-party sites.