Ideal rendering engine?

5 comments, last by RPTD 5 years, 6 months ago

I'm looking to create a small game engine, though my main focus is the renderer.

I'm trying to decide which of these techniques I like better: Deferred Texturing or Volume Tiled Forward Shading ( https://github.com/jpvanoosten/VolumeTiledForwardShading ). Which would you choose, if not something else?

Here are my current goals:

  • I want to keep middleware to a minimum
  • I want to use either D3D12 or Vulkan. However, I understand D3D best, so that's where I'm currently leaning.
  • I want to design for today's high-end GPUs and not worry too much about compatibility, as I'm assuming this is going to take a long time anyway
  • I'm only interested in real-time ray-tracing if/when it can be done without an RTX-enabled card
  • PBR pipeline that DOES NOT INCLUDE METALNESS. I feel there are better ways of doing this (hint: I like cavity maps)
  • I want dynamic resolution scaling. I know it's essentially a form of super-sampling, but I haven't found a source that explains super-sampling in a way I understand.
  • I don't want to use any static lighting. I have good reasons which I'd be happy to explain.

So I guess what I'm asking you fine people is: if time and money were not a concern, what type of renderer would you write, and more importantly, why?

Thank you for your time.


I just realized I typed all that without explaining what type of game it would be.

It would be a first-person shooter/survival game set in a somewhat open-world environment. Single player only.

D3D v Vulkan -- for D3D v GL, I'd go with D3D without hesitation, but D12/VK are pretty much the same as each other. D12 is a bit easier IMHO. 

6 hours ago, Swartz27 said:

I'm only interested in real-time ray-tracing if/when it can be done without an RTX-enabled card

RTX is just NVidia's marketing buzzword for "supports RTRT APIs" :D

6 hours ago, Swartz27 said:

PBR pipeline that DOES NOT INCLUDE METALNESS. I feel there are better ways of doing this 

Metalness + color vs. diffuse color + specular color, and roughness vs. glossiness: it's much of a muchness. They're two ways of encoding the exact same information. You can easily support both (which could be useful if you're sourcing artwork from different places). Cavity maps are an additional feature that works with both encodings in the same way.
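To show they really are the same information, here's the standard conversion from metalness-workflow parameters to diffuse/specular-workflow parameters (this is the same mapping glTF's specular-glossiness extension uses). A minimal C++ sketch, assuming the usual 0.04 dielectric F0; the struct and function names are made up for the example:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float r, g, b; };

static Vec3 lerp(Vec3 a, Vec3 b, float t) {
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

struct DiffSpec { Vec3 diffuse; Vec3 specular; };

// Convert metalness-workflow parameters to the equivalent
// diffuse/specular-workflow parameters. 0.04 is the usual
// F0 reflectance assumed for dielectrics.
DiffSpec MetalnessToSpecular(Vec3 baseColor, float metalness) {
    const Vec3 dielectricF0 = { 0.04f, 0.04f, 0.04f };
    DiffSpec out;
    // Metals have no diffuse response; dielectrics keep the base color.
    out.diffuse  = lerp(baseColor, { 0.0f, 0.0f, 0.0f }, metalness);
    // Dielectric specular is ~0.04; metal specular is the base color.
    out.specular = lerp(dielectricF0, baseColor, metalness);
    return out;
}
```

Running it the other way is lossier (arbitrary diffuse + specular pairs don't always map back to a single metalness value), which is one argument for authoring in whichever workflow your artists prefer and converting on import.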

6 hours ago, Swartz27 said:

I want dynamic resolution scaling. I know it's essentially a form of super-sampling, but I haven't found a source that explains super-sampling in a way I understand.

Super-sampling is just rendering at a higher resolution than the screen, e.g. draw to a 4k texture and then resize it to 1080p for display on a 1080p screen. Dynamic resolution is the same thing, but you pick a different intermediate/working resolution each frame based on your framerate. Often it's used to under-sample (render at a lower resolution than the screen).
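As an illustration, the controller behind dynamic resolution can be as simple as nudging a per-axis render scale toward whatever keeps GPU frame time on budget. This is a hypothetical sketch, not any engine's actual API; the names and the clamp range are assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Illustrative dynamic-resolution controller: adjust the render scale
// so GPU frame time converges on the target.
struct DynamicResolution {
    float scale    = 1.0f;  // fraction of output resolution per axis
    float minScale = 0.5f;  // under-sampling floor
    float maxScale = 2.0f;  // > 1.0 means super-sampling

    void Update(float gpuMs, float targetMs) {
        // Ratio > 1 means we have headroom; < 1 means over budget.
        float ratio = targetMs / gpuMs;
        // Pixel cost is roughly proportional to scale^2, so adjust the
        // per-axis scale by the square root of the time ratio.
        scale = std::clamp(scale * std::sqrt(ratio), minScale, maxScale);
    }

    int RenderWidth(int outputWidth) const {
        return static_cast<int>(outputWidth * scale);
    }
};
```

A real controller would smooth the measured GPU time over several frames and quantize the resolution (e.g. to multiples of 8) so render-target reallocation doesn't thrash, but the core idea is just this feedback loop.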

6 hours ago, Swartz27 said:

I'm trying to decide which of these techniques I like better: Deferred Texturing or Volume Tiled Forward Shading

If money and time aren't an issue, implement them both and see which one performs better on your specific game scenes. 

6 hours ago, Swartz27 said:

So I guess what I'm asking you fine people is: if time and money were not a concern, what type of renderer would you write, and more importantly, why?

I've been working on an indie game in a custom engine mostly-full-time for years, so I'm kind of doing this. Before that, I was working professionally as a graphics programmer on a game engine team, so I knew what I wanted -- first and foremost, my new renderer had to be easy for a graphics programmer to work with: easy to experiment with new features, easy to change. No two games that I've worked on have ever used the same renderer, so I knew that being able to swap out algorithms easily had to be a core property of my ideal renderer.

Our game started off as traditional deferred + traditional forward (able to switch between them at runtime), then tiled deferred + tiled forward (able to switch between them at runtime), then clustered forward (only).

Other features like shadows (many techniques), SSAO, reflection probes, SSR, motion blur, planar mirrors/portals, etc. occasionally need to be added or experimented with, so there needs to be enough flexibility to slot these (or techniques that haven't yet been invented) into the pipeline.

One of my inspirations for this was Horde3D's data driven rendering pipelines, where you told the engine how to render a scene with an XML file! I managed to convert Horde3D from traditional deferred to Inferred Rendering in a weekend by only writing a little bit of XML and GLSL. That impressed me a lot as a graphics programmer (it was so much nicer than the 'professional' engine I was using at work at the time...) 

This concept has largely caught on and is now commonly referred to as a "frame graph". Each step of an algorithm/technique is represented as a single input->process->output node, and then a data/configuration/script file uses those nodes to build a graph of instructions describing how the frame will be drawn. This makes it very easy to modify the frame rendering algorithms over time and experiment with new features, but it also allows the engine to perform lots of optimisation when it comes to D3D12 resource transition barriers / VK render passes, render target memory allocation and aliasing, and async compute as well!
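To make the idea concrete, here's a toy frame-graph skeleton in C++. It only runs passes in registration order; a real implementation would topologically sort the graph from the declared reads/writes, cull passes whose outputs nothing consumes, and derive barriers and render-target aliasing from the same declarations. All names here are invented for the example:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

// Each pass declares which logical resources it reads and writes, plus
// an execute callback. The declarations are what let a real frame graph
// schedule, cull, and place barriers automatically.
struct Pass {
    std::string name;
    std::vector<std::string> reads;
    std::vector<std::string> writes;
    std::function<void()> execute;
};

class FrameGraph {
public:
    void AddPass(Pass pass) { passes_.push_back(std::move(pass)); }

    // Toy scheduler: just run passes in the order they were added.
    void Execute() {
        for (auto& p : passes_) p.execute();
    }

private:
    std::vector<Pass> passes_;
};
```

Usage would look like registering a "GBuffer" pass that writes "gbuffer", then a "Lighting" pass that reads "gbuffer" and writes "hdr"; switching from deferred to forward then becomes a matter of registering a different set of passes rather than rewriting the renderer.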

Thank you so much @Hodgman I truly do appreciate it. I'm currently trying to play catch-up with your post ;)

[quote]RTX is just NVidia's marketing buzzword for "supports RTRT APIs" :D[/quote]

Yeah, I knew this was likely the case. I want to strangle the people at Nvidia (I ask very simple questions once in a while, but since I'm an indie dev I guess I don't matter to them).

I'll do both and then map out the results for others in a blog post.

I want to say for the record that my understanding of C++ is terrible: HLSL makes a lot more sense to me.

Have you taken a look at Unity's upcoming "Scriptable Renderer Pipeline"? I have to admit that it seems impressive (giving the graphics programmer a lot more control, even over the rendering order).

Opposite opinion, sort of, from Hodgman. Vulkan is more compatible (Windows 7!), has some more useful features, and is even ever so slightly faster than DX12.

Also, Deferred Texturing is nice if you want to skip metalness. You can do whatever you want with materials then, just as in a forward pipeline, along with other neat features, i.e. no sitting there writing half of the deferred features into your forward renderer anyway just because you want SSAO, etc.

Ideally? Ideally I'd write a raytrace-only engine with non-polygonal modelling only (voxels? subdivision surfaces? a combination?) and GPGPU-only physics (it works for Claybook... SDFs for the win!), among other things. I'd ditch normal UV mapping for volume UV mapping only, and maybe figure out how to do volume-only particles. Particles today are so horrid: even when running a ton of raytracing on, presumably, a $1k 2080 Ti, with fancy lighting and reflections, they just look like terrible, two-dimensional, unlit junk. Just compare that to Tintin (2011), which primarily used raster (old RenderMan) and had secondary single-bounce GI occlusion at such a low resolution you could see aliasing artifacts, yet in the plane-crash scene the sand is so much better than anything in games today.

On 10/16/2018 at 7:21 PM, Swartz27 said:

PBR pipeline that DOES NOT INCLUDE METALNESS. I feel there are better ways of doing this (hint: I like cavity maps)

Thumbs up for somebody not wanting the broken metalness concept (which, to me, goes against the original idea of PBR).

Life's like a Hydra... cut off one problem just to have two more popping out.
Leader and Coder: Project Epsylon | Drag[en]gine Game Engine

This topic is closed to new replies.
