
Game Engine or custom game engine?

Recommended Posts

Just now, Vilem Otte said:

Yes, it should be possible through this:

Nah... it doesn't work that way. The problem is that the advantage of the fixed-function hardware vanishes with a custom intersection shader:

You have no control over which rays are processed together on a shared CU. You do most of the work on regular shader cores. You might need to implement your own mini BVH (which is total nonsense at this point, because the rays have no shared memory).

Conclusion: you're better off doing it in compute.

With compute you can batch all rays that may intersect a patch (or a number of patches), then you could tessellate these patches to LDS and brute-force all rays of the batch over it.

But this is maybe how RTX works under the hood anyway, so what you end up doing is re-implementing fixed-function hardware, just because the existing one is too restricted / black-boxed.

In the end you'll just pre-tessellate your patches to triangles, and the potential advantage is lost.
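To make the batching idea above concrete, here is a minimal CPU-side sketch (not actual RTX or driver code): gather the rays that may hit a patch, tessellate that patch into a small local buffer (standing in for LDS / groupshared memory in a compute shader, where each ray would get its own thread), and brute-force the whole batch over it. All names, the tessellation density, and the patch evaluator are hypothetical.

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Ray { Vec3 origin, dir; float tMax; };
struct Tri { Vec3 a, b, c; };

// Moller-Trumbore ray/triangle test; returns the hit distance in t.
static bool rayTri(const Ray& r, const Tri& tri, float& t) {
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(r.dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return false;
    float inv = 1.0f / det;
    Vec3 s = sub(r.origin, tri.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(r.dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * inv;
    return t > 0.0f && t < r.tMax;
}

// Hypothetical patch evaluator: position on the patch at parameters (u, v).
using PatchEval = Vec3 (*)(float u, float v);

// One "workgroup": tessellate the patch once into a local buffer, then test
// every ray of the batch against every micro-triangle.
static void tracePatchBatch(PatchEval evalPatch, std::vector<Ray>& rayBatch) {
    constexpr int N = 8;                          // tessellation density per side
    Tri lds[2 * N * N];                           // stands in for LDS / groupshared
    int triCount = 0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            float u0 = float(i) / N, u1 = float(i + 1) / N;
            float v0 = float(j) / N, v1 = float(j + 1) / N;
            Vec3 p00 = evalPatch(u0, v0), p10 = evalPatch(u1, v0);
            Vec3 p01 = evalPatch(u0, v1), p11 = evalPatch(u1, v1);
            lds[triCount++] = {p00, p10, p11};
            lds[triCount++] = {p00, p11, p01};
        }
    for (Ray& r : rayBatch)                       // on a GPU: one ray per thread
        for (int k = 0; k < triCount; ++k) {
            float t;
            if (rayTri(r, lds[k], t)) r.tMax = t; // keep the nearest hit
        }
}
```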

 

I have exactly this problem with tracing my GI sample hierarchy, which combines geometry, hierarchy and LOD in one data structure. My compute tracing here matches RTX performance - as far as I can compare algorithms with differing time complexity.

I want to use RTX for specular reflections, which is fine. But using my sample hierarchy should work too... strange situation. I'll wait and see what RT support on the upcoming consoles looks like. In the long run I want to use my GI stuff as a fallback for second bounces with a path tracer - only then will I make peace with RTX :)

 

... it would be very nice if you made a blog post about your RTX experience when you get to it.

18 minutes ago, JoeJ said:

then you could tessellate these patches to LDS and brute-force all rays of the batch over it.

To find ray intersections you don't need to tessellate at all. You need to solve the intersection equation, which is cheaper than testing rays against all the tessellated triangles. For surfaces whose order does not exceed 2 (spheres, cylinders, cones, etc.) analytical methods work better. For more complex surfaces Newton's method is very fast if we use the intersection with the control mesh as the starting approximation, and much faster still if we can use the intersection point of another nearby ray as the starting approximation. Using splines, it is in most cases possible to test a whole mesh for intersection by solving only an equation that usually does not exceed 6th order, and get exact results and normals, instead of solving a huge number of 1st-order equations for every triangle and getting only approximate results.
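For illustration only (not Fulcrum.013's code), a minimal C++ sketch of the Newton approach described above: iterate on a bicubic Bezier patch, with (u, v, t) seeded from a hit on the control mesh or a nearby ray. The patch layout, tolerances, and iteration count are assumptions.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }

// Cubic Bernstein basis functions and their derivatives.
static void bernstein3(double t, double b[4], double db[4]) {
    double s = 1.0 - t;
    b[0]  = s * s * s;   b[1]  = 3 * s * s * t;          b[2]  = 3 * s * t * t;          b[3]  = t * t * t;
    db[0] = -3 * s * s;  db[1] = 3 * s * s - 6 * s * t;  db[2] = 6 * s * t - 3 * t * t;  db[3] = 3 * t * t;
}

// Evaluate patch position S and partial derivatives Su, Sv at (u, v).
static void evalPatch(const Vec3 P[4][4], double u, double v, Vec3& S, Vec3& Su, Vec3& Sv) {
    double bu[4], dbu[4], bv[4], dbv[4];
    bernstein3(u, bu, dbu);
    bernstein3(v, bv, dbv);
    S = {0, 0, 0}; Su = {0, 0, 0}; Sv = {0, 0, 0};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            S  = S  + (bu[i]  * bv[j])  * P[i][j];
            Su = Su + (dbu[i] * bv[j])  * P[i][j];
            Sv = Sv + (bu[i]  * dbv[j]) * P[i][j];
        }
}

// Solve the 3x3 system with columns c0, c1, c2 and right-hand side r (Cramer's rule).
static bool solve3x3(Vec3 c0, Vec3 c1, Vec3 c2, Vec3 r, double x[3]) {
    auto det = [](Vec3 a, Vec3 b, Vec3 c) {
        return a.x * (b.y * c.z - b.z * c.y)
             - b.x * (a.y * c.z - a.z * c.y)
             + c.x * (a.y * b.z - a.z * b.y);
    };
    double d = det(c0, c1, c2);
    if (std::fabs(d) < 1e-12) return false;        // near-singular Jacobian: give up
    x[0] = det(r, c1, c2) / d;
    x[1] = det(c0, r, c2) / d;
    x[2] = det(c0, c1, r) / d;
    return true;
}

// Newton iteration on F(u, v, t) = S(u, v) - (O + t * D) = 0.
// Seed (u, v, t) from the intersection with the control mesh (or a nearby ray's hit).
static bool intersectPatch(const Vec3 P[4][4], Vec3 O, Vec3 D, double& u, double& v, double& t) {
    for (int iter = 0; iter < 16; ++iter) {
        Vec3 S, Su, Sv;
        evalPatch(P, u, v, S, Su, Sv);
        Vec3 F = S - (O + t * D);
        if (F.x * F.x + F.y * F.y + F.z * F.z < 1e-16)
            return u >= 0 && u <= 1 && v >= 0 && v <= 1 && t > 0;
        // Jacobian columns: dF/du = Su, dF/dv = Sv, dF/dt = -D. Solve J * step = -F.
        double step[3];
        if (!solve3x3(Su, Sv, -1.0 * D, -1.0 * F, step)) return false;
        u += step[0]; v += step[1]; t += step[2];  // a production version would damp/clamp the steps
    }
    return false;                                  // did not converge
}
```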

Edited by Fulcrum.013

4 minutes ago, Fulcrum.013 said:

You need to solve the intersection equation, which is cheaper than testing rays

Oh, sure! I did not think about that. Lucky guy! ;)

7 hours ago, Fulcrum.013 said:

Nobody has promised that it is simpler to implement than triangles, especially with the current 2-stage tessellation architecture (with the initial de Casteljau algorithm, the number of subdivisions is determined at the vertex-position computation stage). And splines and other curved surfaces are also much better and more efficient for ray tracing (which is really the next generation of game realism) than triangles. All CADs have ray-traced renderers for photorealistic image generation.

It's not a question of simpler or more complex. I get the impression you feel that game graphics programmers don't know what they're doing or haven't heard about splines in the forty-five years since Catmull-Rom or the nearly sixty years since NURBS. Our job is to provide the maximum amount of visual quality on the hardware that consumers actually have in hand at the current time. Splines and patches do not accomplish that goal. You keep bringing up CAD as if it's somehow relevant, but their goals and priorities are very different from ours.

You're describing a bunch of things which are mathematically sound on paper, but simply do not reflect the reality of achieving visual quality on a consumer GPU. And to the extent we can push the GPU designers for more powerful tools, geometric handling just isn't that interesting or important anymore. Our triangle budgets are high enough nowadays that there are far more pressing problems than continuous level of detail or analytical ray intersection tests or analytical solutions to normals and tangents.

18 minutes ago, Promit said:

You're describing a bunch of things which are mathematically sound on paper, but simply do not reflect the reality of achieving visual quality on a consumer GPU. 

This is where real-world experience tempers developers. You try those fancy things on your dev box, get them to work, and think, "why doesn't everyone do this?" Then you pass it on to someone else to test and you learn, often slowly or against your own acceptance of reality, why people avoid those fancy things. The bugs and cross-compatibility issues are nightmares once you venture out. At first you think, "I'll just work through this." You might get it working on five different machines, and you don't really think about all the work it took to get there when you think about releasing your game. Then 100 people try your game, most of them error out... and you have to decide which battles you want to fight: do you want to make this fancy new thing a thing, or do you want to complete projects that you can release to the general public? Maybe you can do both, but it's not very likely.

I get it; I was that idealistic youngster wondering why no one uses all the fancy new things. I still wish things worked a little differently, but I'm no longer willing to fight those battles; I'm now only interested in making games. I'm very stubborn, so it took me a while to learn that I have to let some things go. If people want to fight that battle, I don't mind, but they should not assume that the reason people aren't using the fancy thing is that they're stupid or unaware of it. If something makes game dev easier and makes games look better and run faster, then developers will adopt it, as long as they don't have to fight and drag the entire industry along to get it done.


I would also add to what Promit said that the disinterest in exact analytical solutions to graphics also extends to gameplay/physics code. Networked games in particular often have lots of "tricks" to make physics with 100+ms of latency believable. Yes, it's bad if your ragdolls vibrate when they're at rest, but nothing needs to be exactly accurate. It just needs to be believable in the context of the game. As I've said in the past, the vast majority of video games are not simulations. They are magic shows. We only need to trick the player into thinking that they are playing a simulation of an alternate reality. Players do not generally care if our simulation is accurate. They do not care whether we use splines or polygons or deferred rendering or forward rendering or PhysX or Havok or Bullet.

Players care if our game is fun, if the controls are responsive, and if the graphics are beautiful.

Furthermore, most games are not solely concerned with rendering inanimate objects like tanks that an engineer can model with a bit of time. How well do splines and patches play with skeletal animation? Facial animation? And can they do that with an acceptable loss of performance on current (or last) generation GPUs? I don't see anyone using engineering CAD tools to model faces, either.

Edited by Oberon_Command

5 minutes ago, Oberon_Command said:

Furthermore, most games are not solely concerned with rendering inanimate objects like tanks that an engineer can model with a bit of time

For instance, CAD models (assemblies) are also much better for animation, especially procedural animation, including AI-controlled animation, because they are based on kinematic rules instead of keyframes (while still allowing keyframes to be captured offline), and they also have all the physical properties of the objects calculated from their geometry, which is very important for the physics simulation that drives most non-AI animation in modern games.

10 minutes ago, Oberon_Command said:

How well do splines and patches play with skeletal animation?

Of course, much, much better than polygons. Wherever you want to ask "can a spline perform the same as triangles", just remember that a triangle is a 1st-order spline, so it has the minimum flexibility a spline can have.

16 minutes ago, Oberon_Command said:

I don't see anyone using engineering CAD tools to model faces, either.

Obviously industrial CADs are not intended for organics. But NURBS are just as good for that as for any other geometry, and they have a huge advantage over polygons: spline models can be parametrized. CADs also offer more powerful tools that are good for many (of course not all) cases, such as 3D scanning of real people's faces to NURBS.

26 minutes ago, Oberon_Command said:

We only need to trick the player into thinking that they are playing a simulation of an alternate reality.

Yes, of course it is an approximate show, but the more accurately an engine can simulate reality, the easier it is to create a magic show just by changing some factors to alter that reality. For example, set an alternate direction for the gravity vector in some volume to simulate a gravity anomaly, or set the bounce factor of a ball above 1 so it accelerates when it hits a wall, and so on, because an engine that works close to reality gives you the ability to modify the laws of that reality, instead of making a separate solution for each object in it.
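As a toy illustration of that point (a hypothetical mini-integrator, not code from any engine discussed in this thread), a per-volume gravity override and a restitution factor above 1 might look like this:

```cpp
#include <vector>

struct Vec3 { float x, y, z; };
struct Body { Vec3 pos, vel; };
struct GravityVolume { Vec3 min, max, gravity; };   // overrides gravity inside this box

static bool inside(const GravityVolume& v, Vec3 p) {
    return p.x >= v.min.x && p.x <= v.max.x &&
           p.y >= v.min.y && p.y <= v.max.y &&
           p.z >= v.min.z && p.z <= v.max.z;
}

static void step(std::vector<Body>& bodies,
                 const std::vector<GravityVolume>& volumes,
                 float dt, float bounceFactor /* > 1 accelerates on impact */) {
    const Vec3 defaultG = {0.0f, -9.81f, 0.0f};
    for (Body& b : bodies) {
        Vec3 g = defaultG;
        for (const GravityVolume& v : volumes)
            if (inside(v, b.pos)) g = v.gravity;    // "gravity anomaly"
        b.vel = {b.vel.x + g.x * dt, b.vel.y + g.y * dt, b.vel.z + g.z * dt};
        b.pos = {b.pos.x + b.vel.x * dt, b.pos.y + b.vel.y * dt, b.pos.z + b.vel.z * dt};
        if (b.pos.y < 0.0f) {                       // ground plane at y = 0
            b.pos.y = 0.0f;
            b.vel.y = -b.vel.y * bounceFactor;      // > 1: the ball speeds up on every bounce
        }
    }
}
```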

52 minutes ago, Oberon_Command said:

Yes, it's bad if your ragdolls vibrate when they're at rest, but nothing needs to be exactly accurate.

It's me who brought this up. I was not clear enough.

The problem is not that they jitter when they are dead; the jitter is only an indication of the limitations that prevent living ragdolls.

I've tried the engines you have mentioned. None of them could keep a motorized ragdoll upright, balanced, or even walking. If you want to do this, you have to extend them with your own torque solver, or use a physics engine with better accuracy.

I did the latter, and it was possible to make the ragdoll walk. I don't have the time to continue on this, but I saw it would be possible to have simulated characters in games instead of just animation. Yes, people don't care because they've never seen this yet, only in research videos which run at thousands of simulation steps per second. I did it at 120 Hz, and the cost is not much more than for a regular dead ragdoll.
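For reference, a rough sketch of the kind of joint "motor" such a torque solver might use: a PD controller that pulls each ragdoll joint toward a target orientation and damps its angular velocity, clamped to an actuator limit. This is only an illustration of the idea; the names, gains, and quaternion helpers are hypothetical and not the solver described here.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
struct Quat { float w, x, y, z; };

static Quat conjugate(Quat q) { return {q.w, -q.x, -q.y, -q.z}; }
static Quat mul(Quat a, Quat b) {
    return { a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
             a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
             a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
             a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w };
}

// Rotation error from current to target as an axis-angle vector.
static Vec3 rotationError(Quat current, Quat target) {
    Quat d = mul(target, conjugate(current));
    if (d.w < 0.0f) { d.w = -d.w; d.x = -d.x; d.y = -d.y; d.z = -d.z; }  // shortest arc
    float s = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
    if (s < 1e-6f) return {0, 0, 0};
    float angle = 2.0f * std::atan2(s, d.w);
    return { d.x / s * angle, d.y / s * angle, d.z / s * angle };
}

// PD torque: pull toward the target pose, damp the relative angular velocity.
static Vec3 jointTorque(Quat current, Quat target, Vec3 angVel,
                        float kp, float kd, float maxTorque) {
    Vec3 e = rotationError(current, target);
    Vec3 tq = { kp * e.x - kd * angVel.x,
                kp * e.y - kd * angVel.y,
                kp * e.z - kd * angVel.z };
    float len = std::sqrt(tq.x*tq.x + tq.y*tq.y + tq.z*tq.z);
    if (len > maxTorque && len > 0.0f) {            // respect actuator limits
        float s = maxTorque / len;
        tq = { tq.x * s, tq.y * s, tq.z * s };
    }
    return tq;                                      // apply to the joint's child body every simulation step
}
```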

I cannot prove to you that this tech is ready for games, but I'm personally sure it's just lacking software; the hardware has been ready for a decade. This means much more agile characters in games, because they do not rely on static animation data for every movement they might need to make, and you know what it costs to generate that data.

So if I'm right, we will see better games with lower costs, and then people WILL see the difference between this and the previous state of the art. And they will care and buy awesome next-gen stuff as usual.

AI-driven character animation has the same goals. You are probably more open to that because it comes from more sources than just a single unknown guy like me, but it will happen, and both approaches can be combined.

Do you agree this would be nice to have? Or am I running against a wall because you are happy with the state of the art as it is and doubt further progress is necessary? (In the latter case I promise I'll give up :) )

In the former case: there are many other limitations in physics that prevent us from doing stuff that would be fun (in a serious way, not just like the goats). All of this comes from the lack of accuracy and robustness shared by all the major physics engines you have mentioned, which leaves most game devs simply unaware of the limits.

10 minutes ago, JoeJ said:

I've tried the engines you have mentioned. None of them could keep a motorized ragdoll upright, balanced, or even walking. If you want to do this, you have to extend them with your own torque solver, or use a physics engine with better accuracy.

In my experience, animators usually want a lot of control over the animations of living bodies and are not typically happy about yielding control to the physics engine. When they use mo-cap, they want the animations to look like the mo-cap. ;)

You can extend this somewhat to rendering, too. If the engine produces results that are physically accurate in terms of lighting, but not what the art director wants, then it's the engine that's going to change. Artist/designer/animator vision takes priority. "But this is what it really looks like" is not an argument that usually holds a lot of water.

10 minutes ago, JoeJ said:

This means much more agile characters in games, because they do not rely on static animation data for every movement they might need to make, and you know what it costs to generate that data.

Sounds great for indie devs without the budget for motion capture or lots of animators. Reminds me of this talk:

 

Edited by Oberon_Command

