Radikalizm

Member Since 05 May 2011
Offline Last Active Yesterday, 10:40 PM

#5289297 Question about Open World Survival Game Engines

Posted by Radikalizm on Yesterday, 01:32 PM

A relatively experienced engineer will easily be able to adapt to any tool, especially if it's a well documented tool like Unity, Unreal or CryEngine. The tool should be chosen for the benefit of the project first, previous experience with the tool comes second.

 

Focus on what you know and do best, leave the technical decisions to the people who have the technical background.




#5289272 Question about Open World Survival Game Engines

Posted by Radikalizm on Yesterday, 11:26 AM

You're going about this completely backwards. Normally you'd gather a team of engineers based on your pitch (and proper compensation, of course) and let them decide what technology to use for the project, since they'll be able to make a much more educated decision than you ever will.




#5275043 Diffuse IBL - Importance Sampling vs Spherical Harmonics

Posted by Radikalizm on 09 February 2016 - 02:03 PM

So am I correct in assuming that you're using the same approach as your specular, where you're sampling from a fairly high-resolution HDR cubemap for your diffuse results?

If so, don't bother, because as you've figured out for yourself, that will require a massive number of samples to get working nicely, since you're sampling the entire hemisphere rather than a focused area as you would with glossy specular materials.

 

What you could do here is filter and downsample a separate HDR cubemap offline, just like you would approach filtering your specular mip levels for high roughness values, but just for your diffuse term. You can use a similar importance sampling approach to do this. This gives you a fairly tiny texture which should give very acceptable results with only a minimal number of samples required. At that point you should consider just using an SH representation, though, as that will give pretty much the exact same results (you're dealing with very low frequency data anyway) with a lower memory footprint; but if you're not familiar with that just yet, you can experiment with just the separate diffuse texture.

 

Whether you use an SH approach or a texture-based one will not make a huge change in the end result as we're dealing with very low frequency data here anyway. Spherical harmonics are just another way to represent your data, they're not a different way of generating that data. You'll still need to sample your diffuse data somehow before you can store it.
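To make the sampling step concrete, here is a minimal sketch of cosine-weighted hemisphere sampling, which is the usual importance-sampling choice for a diffuse term. The `radiance` callback standing in for a cubemap lookup is hypothetical, and the helper names are mine:

```python
import math
import random

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def normalize(v):
    l = math.sqrt(sum(c * c for c in v))
    return tuple(c / l for c in v)

def sample_cosine_hemisphere(n):
    # Cosine-weighted direction around the unit normal n; pdf(w) = cos(theta) / pi.
    u1, u2 = random.random(), random.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    x, y, z = r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1)
    # Build an orthonormal basis around n and transform to world space.
    helper = (1.0, 0.0, 0.0) if abs(n[2]) > 0.9 else (0.0, 0.0, 1.0)
    t = normalize(cross(helper, n))
    b = cross(n, t)
    return tuple(t[i] * x + b[i] * y + n[i] * z for i in range(3))

def diffuse_irradiance(radiance, n, num_samples=64):
    # Monte Carlo estimate of (1/pi) * integral of L(w) * cos(theta) dw.
    # With a cosine-weighted pdf the cos/pi terms cancel, so the
    # estimator is simply the average of the sampled radiance.
    return sum(radiance(sample_cosine_hemisphere(n))
               for _ in range(num_samples)) / num_samples
```

Because the cosine weighting is folded into the pdf, even a small sample count converges quickly for low-frequency environments, which is why the offline-filtered diffuse cubemap can be tiny.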

 

Additionally, you won't need any type of LUT, as your diffuse term is usually solely dependent on your normal vector, unless you're using some expensive, fancy diffuse model which is view-dependent or takes surface roughness into account. Even then, this usually comes down to taking your Fresnel factor into account somewhere, which can be done after sampling your diffuse IBL.

 

(PS: Hi Kevin, long time no see!)




#5274496 BRDFs in shaders

Posted by Radikalizm on 05 February 2016 - 12:19 PM

 

All I care about is shader code, really..

I take that back, I was really mad.

I did, however, give up on the explanations from Real-Time Rendering. They are really hard for me to understand.

Also, it seems like the author was explaining PBR without saying it was PBR..?

 

I'm looking for better (easier) explanations about this topic, rather than the book yoshi_t mentioned. Any idea?

 

 

As a piece of advice, it might be better to work through the hard stuff if you want to understand BRDFs and PBR. You're going to hit a point where there's no way around the math and radiometry theory, and you're going to hit it fast.

 

You're entering a realm of graphics where there are no more training wheels; better to get used to that sooner rather than later.




#5274381 Footware at work

Posted by Radikalizm on 04 February 2016 - 08:57 PM

 

Yeah, just get some slippers/loafers/sandals -- something with a rubber sole though; I don't think I'd like using communal bathroom facilities with a foam or cloth foot-bottom, not to mention subsequently tracking it all over the office. Something comfy, easy on/off, and suitable for indoor/outdoor use is probably a good place to start.

 

 

 

Yeah got some of those slippers that look like big fluffy bunny rabbits.  If the rules change to stop fluffy slippers then you know somebody is out to get you.

 

 

It's unicorns for me. Taking off your shoes and walking around in socks seems very common at our office, but I guess it really does depend on the company.




#5272446 transition barrier strictness

Posted by Radikalizm on 24 January 2016 - 02:43 AM

You're right, the debug layer does not notify you of all transition issues yet, so the fact that it doesn't complain about things doesn't mean that your code is correct. Expect these issues to turn into errors at some point in time.

 

There's not a lot of hand-holding in D3D12; make sure to do your research and cover all cases.




#5271100 Developing an Ocean Shader for Unreal Engine 4

Posted by Radikalizm on 14 January 2016 - 12:52 PM

The surface in yours looks like a large ocean of jello. The primary reason for this is that you stretch and compress the whole mesh, even the finest details. From watching videos of ocean waves, the high-frequency noise and slow-moving froth stay in place and move much less than the low-frequency waves and crests. It's like you start with a large taut blanket, with high-frequency noise on top that moves very little relative to the lower-frequency waves. As you add lower frequencies of waves, the movements increase and they affect each other more. Low-frequency waves also crest: the lower the frequency (the larger the wave), the more foam. You lose a lot of detail without this.

 

Also, for more realistic rendering (which adds a lot to the overall effect), you might want to render your objects to a texture first and extract a depth map, then use that when rendering the water and take the difference between the depth values. Using something like Beer-Lambert for depth-based transparency would add a lot. (I'm sure there's a paper with a more accurate volumetric transparency model for water, though.)

 

Absolutely this. Your water surface looks like a thin piece of highly specular elastic fabric constantly being stretched and then relaxed. As Sirisian said this absolutely works for creating the illusion of low frequency movement, but it breaks down quite badly for your high frequency details. Introducing independent movement for your high frequency details should help quite a bit.
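The Beer-Lambert suggestion above is cheap to prototype. A minimal sketch, assuming a per-channel absorption coefficient (the example values in the test are made up; real water absorbs red fastest, which is what tints deep water blue-green):

```python
import math

def water_transmittance(surface_depth, floor_depth, absorption):
    # Beer-Lambert law: the fraction of light surviving a water column
    # of the given thickness, computed per colour channel.
    # thickness comes from the depth-buffer difference described above.
    thickness = max(floor_depth - surface_depth, 0.0)
    return tuple(math.exp(-a * thickness) for a in absorption)
```

You would multiply the refracted scene colour by this transmittance and fade toward a deep-water colour as it approaches zero.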




#5268237 [D3D12] CreateDevice fails

Posted by Radikalizm on 28 December 2015 - 04:58 PM

If Radikalizm is right about it finding the integrated instead of the discrete graphics, IIRC there is a control panel app that lets you force a switch between the two on a per-application basis.

 

Better yet, adjust your code so it properly looks for the most suitable adapter so your integrated card can never be picked to create the device on. No need to mess with control panel settings.




#5268231 [D3D12] CreateDevice fails

Posted by Radikalizm on 28 December 2015 - 03:51 PM

Since you're never actually passing in a device pointer, I assume this code purely tests whether an adapter would support D3D12? You're running a GT 650M, which tells me you're probably running this on a laptop. Is there any chance this code is finding your laptop's integrated card (if you have one, that is) first and is then failing because that card does not support D3D12?




#5267813 Screen-Space Reflection, enough or mix needed ?

Posted by Radikalizm on 24 December 2015 - 12:51 PM

Depends on the visual requirements of your game. Do you often have scenarios where you can't get any proper samples for your screen trace? If so, does it have a significant impact on your game's visuals?

 

If so, you could definitely look into a traditional cubemap based approach where you sample from your cubemap whenever you're not able to resolve a screen-space path. This is what a lot of modern AAA games do these days.

 

As for PBR, the technique you use to generate your reflections has nothing to do with whether your shading is physically based or not. How you resolve your sampled reflection data, however, does, and how you make it fit within the constraints of a physically based shading model is completely up to you.




#5264476 Is it a good idea to give each cmdlistallocator a different fence?

Posted by Radikalizm on 01 December 2015 - 03:30 PM

Thanks.

 

BTW, does anyone know if it is wise to use more than one command list with a command allocator? Or is it frowned upon?

 

Is there any scenario you're seeing which would actually require this? The only thing that would happen is that your command allocator would take up more and more space for each command list you use with it, and you'd want to keep this in mind for any following resets you'd do on this allocator. Remember that command allocators repurpose already allocated memory when they get reset, they do not free it until the allocators themselves get destroyed.

 

What you could do is record multiple smaller command lists on a command allocator which was previously used to record one large command list, so as to reuse memory optimally. This would complicate your allocator management quite a bit though, so you'd have to weigh the pros and cons to see whether it's worth it.




#5263765 difference between D3D11_USAGE_DEFAULT and D3D11_USAGE_DYNAMIC

Posted by Radikalizm on 27 November 2015 - 12:01 AM

UpdateSubresource is generally not a direct upload from system memory to video memory, but will probably do something similar to using an intermediate dynamic buffer to upload your data. Because of this, it is good practice to only use UpdateSubresource on resources that won't get updated frequently as the cost of getting data into video memory this way is fairly high.

 

The difference between DEFAULT and DYNAMIC is exactly as the MSDN documentation describes it. A dynamic resource can be directly written to from system memory using Map/Unmap; a default resource can't. Have a look at this page for an explanation of dynamic resources: https://msdn.microsoft.com/en-us/library/windows/desktop/dn508285(v=vs.85).aspx

 

Dynamic buffer resources can be used to build some very powerful and useful tools such as linear or ring allocators for frequently changing data using the D3D11_MAP_WRITE_NO_OVERWRITE flag.
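As a sketch of the linear/ring allocator idea mentioned above (the class and its wrapping policy are illustrative, not a real D3D11 wrapper; actual code must synchronize with the GPU before reusing memory):

```python
class RingAllocator:
    # Suballocates transient per-frame data from one large dynamic buffer.
    # Advancing the write head models NO_OVERWRITE maps: new data never
    # touches regions the GPU may still be reading. Wrapping to offset 0
    # stands in for a DISCARD map (or a fence wait) in real code.
    def __init__(self, size):
        self.size = size
        self.head = 0

    def allocate(self, size, alignment=256):
        assert size <= self.size
        # Round the write head up to the required alignment.
        offset = (self.head + alignment - 1) // alignment * alignment
        if offset + size > self.size:
            offset = 0  # wrap around; real code must not overwrite in-flight data
        self.head = offset + size
        return offset
```

Each draw call then sources its constants or vertices from the returned offset, and the whole buffer is recycled once the GPU has consumed the frame.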




#5262647 Lighting Shaders

Posted by Radikalizm on 18 November 2015 - 04:27 PM

This is an implementation of a really basic Phong (note: not Blinn-Phong!) lighting model.

 

These two lines are the important ones:

vec3 reflect_direction = reflect(-light_direction, norm);  
float spec = pow(max(dot(view_direction, reflect_direction), 0.0), 32);

This calculates the surface reflection vector and then the Phong specular term, f_spec = pow(max(dot(R, V), 0), phong_exponent).
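In plain Python, those two shader lines look like this (helper names are mine; `reflect` follows the GLSL definition, with the incident vector pointing toward the surface):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def reflect(i, n):
    # GLSL-style reflect: i - 2 * dot(n, i) * n, n assumed unit length.
    d = dot(n, i)
    return tuple(ic - 2.0 * d * nc for ic, nc in zip(i, n))

def phong_specular(light_dir, view_dir, normal, exponent=32):
    # light_dir and view_dir point away from the surface, as in the shader,
    # hence the negation before reflecting.
    r = reflect(tuple(-c for c in light_dir), normal)
    return max(dot(view_dir, r), 0.0) ** exponent
```

When the view direction lines up exactly with the mirror reflection of the light the term is 1, and it falls off sharply as they diverge, with the exponent controlling the highlight tightness.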

 

Those vertex shader lines calculate a view-space position and a view-space normal vector. If you're confused about the inverse transposed view matrix, there are plenty of explanations of the math behind it and why it's required available on Google :)




#5257731 How to create shader that fades based on two different connecting surface angles

Posted by Radikalizm on 17 October 2015 - 05:44 PM

A potentially cheaper solution would be to bake your AO into your vertex colors if you're dealing with cubes anyway. SSAO sounds like overkill for this situation.




#5254689 What is the best indirect lighting technique for a game?

Posted by Radikalizm on 29 September 2015 - 05:27 PM

Sadly enough, a "best" technique that scales from low-end to high-end hardware, handles huge scenes with both static and dynamic geometry, runs in a reasonable amount of time within a reasonable amount of memory (and is also easy to implement) does not exist.

 

Dynamic indirect lighting is still a ridiculously difficult problem to solve accurately in a rasterizer for real-time applications.

 

If you're working with scenes with static lighting, for example (and really, this is just an example, not the best or most flexible solution), you can look at using light probes, an offline process in which you capture diffuse and specular lighting. Diffuse lighting can be stored using spherical harmonics, while specular lighting information can be captured in a cubemap which you then resolve for your specular BRDF. This gives you a fairly cheap solution for indirect lighting coming off of your environment, but dynamic objects will not influence your bounce lighting. For that you could use a local solution like reflective shadow maps.
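As an illustration of the diffuse side of that, here is a sketch of evaluating a 9-coefficient SH irradiance representation for a given normal, using the constants from Ramamoorthi and Hanrahan's irradiance environment map work (one scalar band shown; a renderer would store one set of coefficients per colour channel, and the projection itself happens offline at probe bake time):

```python
# Polynomial constants from Ramamoorthi & Hanrahan's irradiance
# environment maps; C4 is sqrt(pi)/2.
C1, C2, C3, C4, C5 = 0.429043, 0.511664, 0.743125, 0.886227, 0.247708

def sh_irradiance(L, n):
    # L = (L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22), the nine
    # projected SH coefficients; n is the unit surface normal.
    x, y, z = n
    L00, L1m1, L10, L11, L2m2, L2m1, L20, L21, L22 = L
    return (C1 * L22 * (x * x - y * y) + C3 * L20 * z * z + C4 * L00
            - C5 * L20
            + 2.0 * C1 * (L2m2 * x * y + L21 * x * z + L2m1 * y * z)
            + 2.0 * C2 * (L11 * x + L1m1 * y + L10 * z))
```

At runtime this is just a handful of multiply-adds per pixel, which is why SH probes are so much cheaper than sampling even a small diffuse cubemap.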

 

If you do want lighting to be dynamic you'll have to do some very careful research on your exact requirements and the techniques available out there. That, or you could do your very own cutting edge research and solve this problem for all of us once and for all. We would be very grateful.





