About matt77hias

  1. The first image is provided in the .zip (not one of my conversions). I also checked the second .zip, but that one contains bump maps. The original by Frank Meinl is missing these 13 additional normal maps/bump maps.
  2. @Vilem Otte I had something similar before with my old conversion code:

```python
import cv2
import numpy as np

def normalize(v):
    norm = np.linalg.norm(v)
    return v if norm == 0.0 else v / norm

def normalize_image(fname):
    img_in  = cv2.imread(fname)
    img_out = np.zeros(img_in.shape)
    img_in  = img_in.astype(float) / 255.0
    height, width, _ = img_in.shape
    for i in range(0, height):
        for j in range(0, width):
            img_out[i, j] = np.clip(normalize(img_in[i, j]) * 255.0, 0.0, 255.0)
    cv2.imwrite(fname, img_out)
```

     But then I realized that I wasn't considering negative coefficients: the tangent and bitangent coefficients range from -1 to 1 instead of 0 to 1. So xy should be treated as SNORM and z as UNORM (which isn't needed anymore after the normalization).
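To make the difference concrete, here is a small NumPy sketch (not from the original post; the texel value is made up) comparing a naive all-UNORM read of a flat normal-map texel against the SNORM-for-xy interpretation:

```python
import numpy as np

# a "flat" tangent-space normal as stored in an 8-bit map: R = G = 128, B = 255
texel = np.array([128.0, 128.0, 255.0]) / 255.0  # RGB order, in [0, 1]

# naive reading: all three channels as UNORM, then normalize
wrong = texel / np.linalg.norm(texel)            # tilted away from (0, 0, 1)

# correct reading: x, y as SNORM in [-1, 1], z as UNORM in [0, 1]
decoded = np.array([2.0 * texel[0] - 1.0,
                    2.0 * texel[1] - 1.0,
                    texel[2]])
right = decoded / np.linalg.norm(decoded)        # approximately (0, 0, 1)
```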
  3. An extension of the Sponza model adds 13 tangent-space normal maps (now every submodel has a normal map). These additional normal maps, however, seem far from normalized. Luckily, the z channel is provided as well, so I decided to normalize them myself (I first converted all .tga files to .png files):

```python
import cv2
import numpy as np

def normalize(v):
    norm = np.linalg.norm(v)
    return v if norm == 0.0 else v / norm

def normalize_image(fname):
    # [0, 255]^3; note: cv2.imread returns BGR, so channel 0 is z (blue)
    # and channels 1 and 2 are y (green) and x (red)
    img_in  = cv2.imread(fname)
    img_out = np.zeros(img_in.shape)
    # [0.0, 1.0]^3
    img_in = img_in.astype(np.float64) / 255.0
    # z in [0.0, 1.0] (UNORM), y and x in [-1.0, 1.0] (SNORM)
    img_in[:, :, 1:] = 2.0 * img_in[:, :, 1:] - 1.0
    height, width, _ = img_in.shape
    for i in range(0, height):
        for j in range(0, width):
            v = np.float64(normalize(img_in[i, j]))
            # back to [0.0, 1.0]^3
            v[1:] = 0.5 * v[1:] + 0.5
            # [0, 255]^3
            img_out[i, j] = v * 255.0
    cv2.imwrite(fname, img_out)
```

     This works fine for the original tangent-space normal maps, but not for the additional 13. Way too dark original: Not so very blue-ish normalized version: Any ideas? Another example:
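As an aside, the per-pixel loop can also be vectorized; a minimal NumPy sketch of my own (assuming cv2.imread's BGR channel order, i.e. z in channel 0 and y, x in channels 1 and 2), which operates on the array instead of the file:

```python
import numpy as np

def normalize_normal_map(img_bgr):
    """Renormalize an 8-bit tangent-space normal map (BGR channel order,
    as produced by cv2.imread): y, x (channels 1, 2) are SNORM, z
    (channel 0) is UNORM."""
    f = img_bgr.astype(np.float64) / 255.0
    # decode: y, x from [0, 1] to [-1, 1]; z stays in [0, 1]
    f[:, :, 1:] = 2.0 * f[:, :, 1:] - 1.0
    # normalize every pixel's vector at once; guard zero-length vectors
    norm = np.linalg.norm(f, axis=2, keepdims=True)
    f = f / np.where(norm == 0.0, 1.0, norm)
    # re-encode
    f[:, :, 1:] = 0.5 * f[:, :, 1:] + 0.5
    return np.clip(np.rint(f * 255.0), 0.0, 255.0).astype(np.uint8)
```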
  4. HLSL's snorm and unorm

    Didn't expect this. I thought it would be transparent, as is the case for SRVs/RTVs.
  5. I noticed that snorm and unorm are, apparently, HLSL keywords (after trying to use them as variable names). MSDN is very brief about these "type modifiers". So I wonder:
     • Does casting between snorm float, unorm float and float work as intended?
     • What happens when casting from a float that is out of range to a snorm or unorm?
     • Is it also possible to add these type modifiers to half and double?
     • Should one prefer these type modifiers over a manual conversion (is a "mad" instruction guaranteed)?
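For what it's worth, the D3D data-conversion rules (which I assume also govern these modifiers when values are stored) clamp out-of-range floats and flush NaN to zero. A small Python model of my own of that behavior, not the HLSL semantics verbatim:

```python
import math

def float_to_unorm(x, bits=8):
    """Model of the D3D float -> UNORM conversion: NaN flushes to 0,
    the value is clamped to [0, 1], then scaled to an n-bit integer."""
    if math.isnan(x):
        x = 0.0
    x = min(max(x, 0.0), 1.0)
    return int(x * (2 ** bits - 1) + 0.5)

def float_to_snorm(x, bits=8):
    """Model of the D3D float -> SNORM conversion: NaN flushes to 0,
    the value is clamped to [-1, 1], then scaled to a signed n-bit integer."""
    if math.isnan(x):
        x = 0.0
    x = min(max(x, -1.0), 1.0)
    return int(round(x * (2 ** (bits - 1) - 1)))
```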
  6. Normal encoding/decoding

    If HLSL uses the same convention, then that would be ideal:

```hlsl
float2 EncodeNormal_Spherical(float3 n) {                    // (0.0f, 0.0f, 1.0f)
    const float phi       = atan2(n.y, n.x);                 // 0.0f hopefully :D
    const float cos_theta = n.z;                             // 1.0f
    return SNORMtoUNORM(float2(phi * g_inv_pi, cos_theta));  // SNORMtoUNORM (0.0f, 1.0f)
}

float3 DecodeNormal_Spherical(float2 n_unorm) {
    const float2 n_snorm  = UNORMtoSNORM(n_unorm);           // (0.0f, 1.0f)
    const float phi       = n_snorm.x * g_pi;                // 0.0f
    float sin_phi;                                           // 0.0f
    float cos_phi;                                           // 1.0f
    sincos(phi, sin_phi, cos_phi);
    const float cos_theta = n_snorm.y;                       // 1.0f
    const float sin_theta = CosToSin(cos_theta);             // 0.0f
    return float3(cos_phi * sin_theta,                       // 0.0f
                  sin_phi * sin_theta,                       // 0.0f
                  cos_theta);                                // 1.0f
}
```
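As a sanity check, the same parameterization can be prototyped in NumPy (a sketch of my own mirroring the HLSL above; the function names are mine, not the original helpers):

```python
import numpy as np

def encode_normal_spherical(n):
    phi = np.arctan2(n[1], n[0])          # [-pi, pi)
    cos_theta = n[2]                      # [-1, 1]
    snorm = np.array([phi / np.pi, cos_theta])
    return 0.5 * snorm + 0.5              # SNORM -> UNORM

def decode_normal_spherical(e):
    snorm = 2.0 * e - 1.0                 # UNORM -> SNORM
    phi = snorm[0] * np.pi
    cos_theta = snorm[1]
    # theta lies in [0, pi], so sin(theta) >= 0
    sin_theta = np.sqrt(max(0.0, 1.0 - cos_theta * cos_theta))
    return np.array([np.cos(phi) * sin_theta,
                     np.sin(phi) * sin_theta,
                     cos_theta])
```

A unit normal should survive the encode/decode round trip up to floating-point error.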
  7. Normal encoding/decoding

    Sorry for the confusion, but I am talking about two different encodings in my questions: spherical and sphere-map (methods #3 and #4 :P).
  8. Some questions about (world-space) normal encoding/decoding for GBuffers (lighting and post-processing):
     • All spherical encoding samples I found so far use atan2 to compute phi in the [-pi, pi) range. How does this work for a normal of (0, 0, 1), since both the mathematical and the HLSL atan2 are undefined for (x = 0, y = 0)?
     • Can sphere-map encoding (and decoding) operate directly on world-space normals, or should I first convert to (from) view-space?
     • Does one in practice use and get away with a DXGI_FORMAT_R8G8_UNORM, or rather stick with a DXGI_FORMAT_R16G16_UNORM for accuracy (for both the spherical and sphere-map parameterizations)?
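Regarding the atan2 question: although HLSL documents atan2 as undefined at (0, 0), most CPU implementations (C99/IEEE, Python's math, NumPy) return 0 there, and for n = (0, 0, 1) the choice of phi does not matter anyway because sin(theta) = 0. A quick check:

```python
import math
import numpy as np

# atan2 at the origin: mathematically undefined, but common implementations
# (C99 / IEEE, Python's math, NumPy) return 0 rather than raising an error
a = math.atan2(0.0, 0.0)      # 0.0
b = np.arctan2(0.0, 0.0)      # 0.0

# for n = (0, 0, 1), sin(theta) = 0, so any phi decodes to the same normal
phi = 123.456                 # arbitrary garbage value for phi
sin_theta = 0.0
n = (math.cos(phi) * sin_theta, math.sin(phi) * sin_theta, 1.0)
```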
  9. HLSL structs

    Didn't know this syntactic sugar. Though, I am not really a fan, since registers will be assigned implicitly, which is what I want to avoid. At my highest level, I will (and need to) assign the registers myself, making use of a header file that is included in both C++ and HLSL.
  10. HLSL structs

    Yes, I was just referring to HLSL-function-to-HLSL-function data transfer (the inside world). Nice! (I was worried as well that I would somehow create a separate temporary for every global variable. But now I can and will go structs the whole way.)
  11. HLSL structs

    HLSL packs data so that it does not cross a 16-byte boundary. How does this translate to the pre-defined types such as Texture2D and SamplerState? What is their size by default? Since HLSL just inlines all code involved in a shader, I wonder whether it is good practice (with regard to performance) to pass structs with a bunch of related parameters around instead of a giant list of primitive types (for cases with and without extra padding)? (My goal is to partition all my code into two layers: one layer that uses user-bound data and one layer that does not depend on these variable globals.)
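The 16-byte rule itself is easy to model; a small Python sketch of my own that computes field offsets under the no-straddling constraint (ignoring the additional rules for arrays and matrices):

```python
def cbuffer_offsets(field_sizes):
    """Compute byte offsets for constant-buffer fields under the HLSL
    packing rule: a field may not straddle a 16-byte register boundary."""
    offsets = []
    offset = 0
    for size in field_sizes:
        # if the field would cross into the next 16-byte register and does
        # not already start on a register boundary, bump it to the next one
        if offset % 16 != 0 and offset // 16 != (offset + size - 1) // 16:
            offset = (offset // 16 + 1) * 16
        offsets.append(offset)
        offset += size
    return offsets
```

For example, a float3 followed by a float packs tightly into one register, whereas a float followed by a float4 leaves 12 bytes of padding.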
  12. I totally agree with this. If you want to develop a codebase (which version control software such as Git and repository hosting systems such as Github both support and facilitate), header files, template implementation files and implementation files should be put in the same directory for ease of use during development. If you want to distribute your API, you provide only an Include directory containing the header files and template implementation files, plus a binary .lib. And of course, you could add a script to your codebase to automate this stripping and building. Unfortunately, many C++ Github repositories aim at distribution and integration into external projects rather than at development of the project itself.
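Such a stripping script can be quite small; a minimal Python sketch (assuming a hypothetical layout with headers and sources mixed under one src/ tree, and .hpp/.tpp extensions):

```python
import shutil
from pathlib import Path

def export_include_dir(src_dir, include_dir, extensions=(".hpp", ".tpp")):
    """Copy header and template-implementation files from the development
    tree into a distribution Include directory, preserving subfolders."""
    src_dir, include_dir = Path(src_dir), Path(include_dir)
    for path in src_dir.rglob("*"):
        if path.suffix in extensions:
            dest = include_dir / path.relative_to(src_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
```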
  13. But what does he actually do with the AO: multiply it with the total indirect radiance? How do you choose the max distance in your AO computation (which you get for free)? (I'd rather use a constant than a tweakable parameter :P)
  14. More precisely, I was actually wondering what the difference is between the variants shown at http://simonstechblog.blogspot.be/2013/01/implementing-voxel-cone-tracing.html I have no idea why the author applies AO to the direct radiance. But apart from that, I do not see the difference. Apparently, an AO strength factor is enabled; whatever that means. Good point. Though, I did not consider this as AO, since it is just AO per cone (for small cones it is nearly a visibility term itself) instead of full AO over the cosine-weighted hemisphere.
  15. To slightly reduce the light leaking in Voxel Cone Traced Indirect Illumination, Voxel Cone Traced AO can apparently be added. You even get the AO for free while tracing the cones. But how does one combine it with the calculated indirect radiance? Is it just a multiplication? AO GI and Voxel Cone Tracing GI seem like two unrelated GI approximations. Furthermore, visibility is already taken into account while computing the indirect radiance, so how does an additional averaged visibility term fit in?
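For reference, one common heuristic (not necessarily what any particular implementation does) is to accumulate occlusion per cone alongside radiance and let the averaged AO attenuate only the indirect term; a minimal sketch with made-up weights:

```python
def accumulate_cones(cones):
    """Weighted average of per-cone (radiance, occlusion, weight) samples.
    The averaged (1 - occlusion) is the AO term that cone tracing yields
    "for free"."""
    total_radiance = total_occlusion = total_weight = 0.0
    for radiance, occlusion, weight in cones:
        total_radiance  += weight * radiance
        total_occlusion += weight * occlusion
        total_weight    += weight
    ao = 1.0 - total_occlusion / total_weight
    return total_radiance / total_weight, ao

def shade(direct, indirect, ao):
    # heuristic: AO attenuates only the indirect term, since the direct
    # term already has explicit (e.g. shadow-mapped) visibility
    return direct + ao * indirect
```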