
Curved surfaces in commercial engines


So while discussing this topic with a friend, we came across some disagreements. How long have commercial game engines supported curved surfaces? Which well-known engines (past and present) support them, and which don't? Thanks. [Edited by - HalcyonX on January 20, 2006 9:50:56 PM]

It was a 'big deal' of sorts when Q3 came out with curved surfaces, and that was about 6 years ago (I think there were one or two other engines that implemented the functionality at about the same time). Even then, though, the technology wasn't particularly mysterious; the method used (biquadratic Bezier patches) is very straightforward.

Other games have used bicubic patches (e.g. for terrain). There are also NURBS and subdivision surfaces, but I don't know how commonly those are used in practice, nor what engines today are doing in general. I would assume though that most current engines at least offer support for Bezier patches.
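To make the "very straightforward" claim concrete, here is a minimal sketch (not id's actual code; the names and layout are my own) of evaluating a biquadratic Bezier patch from its 3x3 grid of control points:

```c
typedef struct { float x, y, z; } Vec3;

/* Quadratic Bernstein basis: B0 = (1-t)^2, B1 = 2t(1-t), B2 = t^2. */
void bernstein2(float t, float b[3]) {
    float u = 1.0f - t;
    b[0] = u * u;
    b[1] = 2.0f * u * t;
    b[2] = t * t;
}

/* Evaluate a biquadratic Bezier patch, defined by a 3x3 grid of control
   points, at parameters (u, v) in [0,1] x [0,1]. */
Vec3 eval_patch(const Vec3 cp[3][3], float u, float v) {
    float bu[3], bv[3];
    Vec3 p = { 0.0f, 0.0f, 0.0f };
    bernstein2(u, bu);
    bernstein2(v, bv);
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            float w = bu[i] * bv[j];
            p.x += w * cp[i][j].x;
            p.y += w * cp[i][j].y;
            p.z += w * cp[i][j].z;
        }
    return p;
}
```

Tessellating a patch is then just sampling eval_patch over a regular (u, v) grid and stitching the samples into triangles.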

As the above poster said, it was a big deal in Q3, because at that point triangles were still precious resources. You skimped wherever you could.

Nowadays, though, the number of triangles is not a big deal. It's more a matter of how many passes you can render, and keeping everything grouped in big batches. As such, most modern curved surfaces in games (Doom 3, for example) are pre-calculated and stored as static geometry. Dynamic curves still have their place, but they've become largely impractical and unnecessary in games.

Quote:
Original post by Toji
As the above poster said, it was a big deal in Q3, because at that point triangles were still precious resources. You skimped wherever you could.

Nowadays, though, the number of triangles is not a big deal. It's more a matter of how many passes you can render, and keeping everything grouped in big batches. As such, most modern curved surfaces in games (Doom 3, for example) are pre-calculated and stored as static geometry. Dynamic curves still have their place, but they've become largely impractical and unnecessary in games.


The way the Q3 engine was designed didn't allow a shape to have shared normals and smoothed lighting. If you made a rounded wall or a cylinder out of rectangles, you would always see the edges. The curved surfaces of Q3 provided a way around that.
If you design your engine differently, or if you use meshes with shared normals, you can make perfectly smooth-looking structures, and the Q3 approach is no longer necessary.
Cards like the ATI Radeon 8500 added hardware support for splines and surfaces, but I don't know if games ever used those, or whether they are still supported on modern hardware.

btw afaik in Quake 3 the curved surfaces were converted to actual triangles before the level started and treated as normal meshes, i.e. there was no LOD going on.

To me the issue comes down to when curved surfaces become a hardware-accelerated primitive; otherwise tris will always be faster. Basically, when you send the curve to the GPU (which, btw, is far less data to send than full vertex and index data), the GPU would have to tessellate the surface into tris. It's my understanding that a ton of optimizations are made at the driver level under the assumption that the vert buffer is a fixed size. With that assumption, the driver can manipulate the vert buffer and arrange it into a native format or whatever. With modern HW, we would basically get a situation where we have to write the new verts INTO a previously submitted (and optimized) vertex buffer (which previously contained only the control points of the curve). This completely blows out the vertex cache coherency on the card, and we lose a ton of optimizations. I'm not a HW guy, but this is my understanding of why we don't have accelerated curves, and why p-curves went Betamax on us and died on the early GeForce cards.

Yooooo...I have crazy visions of HW-accelerated sub-D surfaces....someday!!!! :)

Quote:
Original post by zedzeek
btw afaik in Quake 3 the curved surfaces were converted to actual triangles before the level started and treated as normal meshes, i.e. there was no LOD going on.


Not true! That was actually one of the big pluses of the system: curves only used a lot of triangles when viewed up close. Back then, dynamically using fewer triangles when possible was faster than a static array with more geometry. This is simply no longer the case. In any case, the curves were indeed dynamic UNLESS you had the curve detail cranked all the way up.

Quote:

Not true! That was actually one of the big pluses of the system: curves only used a lot of triangles when viewed up close. Back then, dynamically using fewer triangles when possible was faster than a static array with more geometry. This is simply no longer the case. In any case, the curves were indeed dynamic UNLESS you had the curve detail cranked all the way up.


I don't believe you're right here. I just loaded up Q3, enabled cheat mode, and used the console command "r_showtris 1" to render triangles in wireframe. The tessellation of Bezier patches does not change based on viewing distance. You can, however, set a global tessellation level using the "r_subdivisions" command.

Quote:
Original post by DonnieDarko
I don't believe you're right here. I just loaded up Q3, enabled cheat mode, and used the console command "r_showtris 1" to render triangles in wireframe. The tessellation of Bezier patches does not change based on viewing distance.
Actually it does, but you may have to have things set a certain way. Interestingly, though, the LOD algorithm does not take field of view into account, so if you zoom in on a curved patch it will retain the LOD corresponding to the viewing distance (and therefore not look very smooth).
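A distance-only LOD pick like the one described might look like the following. The thresholds and levels here are invented purely for illustration; Quake 3's real heuristic also weighs patch curvature and the r_subdivisions setting:

```c
/* Map camera-to-patch distance to a subdivision count per patch edge.
   Because only distance is consulted - not field of view - zooming in
   with a scope leaves the patch at its distance-based (coarse) LOD,
   which is exactly the artifact described above. */
int patch_lod(float distance) {
    if (distance < 64.0f)   return 16;  /* up close: finest mesh */
    if (distance < 256.0f)  return 8;
    if (distance < 1024.0f) return 4;
    return 2;                           /* far away: a few quads */
}
```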

Quote:
Original post by jyk
Actually it does, but you may have to have things set a certain way. Interestingly, though, the LOD algorithm does not take field of view into account, so if you zoom in on a curved patch it will retain the LOD corresponding to the viewing distance (and therefore not look very smooth).


I just rechecked and you are absolutely correct. I had r_subdivisions set too high, and hence it was hardly possible to see any difference in the dynamic subdivision levels. [embarrass]

Quote:
Original post by Toji
Quote:
Original post by zedzeek
btw afaik in Quake 3 the curved surfaces were converted to actual triangles before the level started and treated as normal meshes, i.e. there was no LOD going on.


Not true! That was actually one of the big pluses of the system: curves only used a lot of triangles when viewed up close. Back then, dynamically using fewer triangles when possible was faster than a static array with more geometry. This is simply no longer the case. In any case, the curves were indeed dynamic UNLESS you had the curve detail cranked all the way up.

Sorry, my mistake (it's been a few years since I've played it).

Quote:
Original post by daktaris
To me the issue comes down to when curved surfaces become a hardware-accelerated primitive; otherwise tris will always be faster. Basically, when you send the curve to the GPU (which, btw, is far less data to send than full vertex and index data), the GPU would have to tessellate the surface into tris. It's my understanding that a ton of optimizations are made at the driver level under the assumption that the vert buffer is a fixed size. With that assumption, the driver can manipulate the vert buffer and arrange it into a native format or whatever. With modern HW, we would basically get a situation where we have to write the new verts INTO a previously submitted (and optimized) vertex buffer (which previously contained only the control points of the curve). This completely blows out the vertex cache coherency on the card, and we lose a ton of optimizations. I'm not a HW guy, but this is my understanding of why we don't have accelerated curves, and why p-curves went Betamax on us and died on the early GeForce cards.

Yooooo...I have crazy visions of HW-accelerated sub-D surfaces....someday!!!! :)


Well, ideally the last thing you want is HW parametric surfaces. The problem is that each change of LOD is where most of the computation goes (you need to calculate a whole host of recursive Cox-de Boor stuff). If the LOD change is periodic (i.e. every 10 frames or so), then the surface data can be cached as a sum of a set of simple multiplications. This sum is very efficient to perform (depending on how you've cached the data), and it can generate near-optimal triangle strips.

Any HW implementation will not be able to cache this data in the same way and will therefore run too slowly (take gluNurbs: each frame it converts a NURBS surface into a set of Bezier patches, then tessellates each Bezier patch). So the basic reason we don't have GPU-accelerated patches is simply that they will always be slower than a CPU-based implementation (which relies heavily on caching).
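The caching idea can be sketched like this: compute the basis weights once per LOD change, and per-frame evaluation collapses to the "sum of simple multiplications" mentioned above. This is a deliberately minimal sketch (one quadratic Bezier curve rather than a full NURBS surface; all names are mine):

```c
#define SAMPLES 9  /* points per curve at the current LOD */

static float weights[SAMPLES][3];  /* cached quadratic Bernstein weights */

/* Recompute only when the LOD changes (e.g. every 10 frames or so);
   this is where the basis-function work lives. */
void cache_weights(void) {
    for (int s = 0; s < SAMPLES; ++s) {
        float t = (float)s / (SAMPLES - 1), u = 1.0f - t;
        weights[s][0] = u * u;
        weights[s][1] = 2.0f * u * t;
        weights[s][2] = t * t;
    }
}

/* Per-frame evaluation: just a weighted sum of the control points. */
float eval_cached(const float cp[3], int s) {
    return weights[s][0]*cp[0] + weights[s][1]*cp[1] + weights[s][2]*cp[2];
}
```

The same split applies per coordinate and per parameter direction for a surface; the cached table grows, but the per-frame cost stays a flat multiply-add loop.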

However, an even bigger reason that NURBS aren't used much is that they are a pain for modellers and texture artists to work with - so they simply prefer to use polys or subdivs.

Wow, I didn't expect to get so many replies! Thanks guys.

From reading this, though, it seems like my understanding of curved surfaces is a bit off. So just to clarify: the only real advantage they give over pre-calculated triangles is that they allow for variable LOD? And with today's gfx cards, it is faster to draw all the triangles to the screen than to perform the tessellation calculations, which is why they are no longer used much?

So the original reason for my post is that my friend and I are developing a 3D game. I did the implementation of the graphics module, and he is writing a model loader, with which we plan to load in models from Doom 3, HL2, etc. Currently, my graphics module doesn't have any support for curved surfaces, and he is insisting that I add it. What's your take on this? Is it worth the effort? Thanks.

The problem is that if your graphics code doesn't support curved surfaces directly, but your artists still want to be able to set up parametric surfaces, then you need to insert a process into your content build pipeline that will convert those parametric surfaces into polygons (tessellate them). If you don't support it in-engine, and you don't have a tool that will take the model data and tessellate curved surfaces, then you've got no way to turn parametric surfaces into renderables.

Yeah, if I do add support for it, I'll probably modify my graphics code to support it directly.

Quote:
If you don't support it in-engine, and you don't have a tool that will take the model data and tessellate curved surfaces, then you've got no way to turn parametric surfaces into renderables.


Turns out that the exporter API for 3dsmax makes it very easy to get the triangle representation of any object that you want to export -- after all, that's what Max does to be able to render the object to screen.

There is no curved surface support in hardware. You might be able to write something like that for the DX10 generation of hardware using the geometry processor, but it's unlikely to perform as well as just plain exported triangle meshes. If you need to animate the surface parameters at runtime, consider regular mesh vertex blending instead. I don't think there's any actual win in supporting curved surfaces at this time, and it's a lot of (unnecessary) work.
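On the "regular mesh vertex blending" suggestion: the runtime cost is just a linear interpolation between precomputed poses. A hypothetical sketch (the function and its layout are my own, not from any engine):

```c
/* Blend two precomputed vertex arrays component-wise:
   out = (1 - t) * a + t * b, for t in [0, 1].
   Tessellate the surface at its extreme poses offline, then animate by
   blending, instead of re-evaluating the parametric surface per frame. */
void blend_verts(const float *a, const float *b, float *out, int n, float t) {
    for (int i = 0; i < n; ++i)
        out[i] = (1.0f - t) * a[i] + t * b[i];
}
```

The arrays here are flat xyz streams (n = 3 * vertex count), which keeps the loop trivially vectorizable.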

Quote:
Original post by daktaris
To me the issue comes down to when curved surfaces become a hardware accelerated primitive. Otherwise tri's will always be faster.

...

Yooooo...I have crazy visions of HW-accelerated sub-D surfaces....someday!!!! :)


Wait a few years, until DirectX 10 comes out and cards support it. It will have functionality for "geometry shaders", which you can use to procedurally generate primitives that are sent back to the start of the pipeline :)
At least when I read about that, I immediately thought of HW-accelerated tessellation.

ch.
