
DX11 NURBS vs subdivision surfaces

Recommended Posts

rouncED    103
To all the modellers out there, what would you pick out of bezier patches and subdivision surfaces for modelling?

Because I'm just about to write a DX11 modeller, and I can't decide. Bezier patches are more natively supported in D3D11; you need a few tricks to implement subdivision surfaces, but they're also possible.
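For context, this is roughly what I mean by "natively supported": the DX11 tessellation stages can take 16-point control patches directly, and the bicubic Bezier gets evaluated in the domain shader. Just a sketch of the idea; the struct and function names are my own placeholders, the hull shader that passes the control points through isn't shown, and the world/view/projection transform is omitted:

// Sketch: evaluating a bicubic Bezier patch in a DX11 domain shader.
// Assumes a pass-through hull shader (not shown) that outputs 16 control
// points per patch plus the SV_TessFactor / SV_InsideTessFactor values.

struct ControlPoint
{
    float3 pos : POSITION;
};

struct PatchConstants
{
    float edges[4]  : SV_TessFactor;
    float inside[2] : SV_InsideTessFactor;
};

// Cubic Bernstein basis functions for parameter t.
float4 BernsteinBasis(float t)
{
    float s = 1.0f - t;
    return float4(s * s * s,
                  3.0f * t * s * s,
                  3.0f * t * t * s,
                  t * t * t);
}

[domain("quad")]
float4 BezierDS(PatchConstants pc,
                float2 uv : SV_DomainLocation,
                const OutputPatch<ControlPoint, 16> cp) : SV_POSITION
{
    float4 bu = BernsteinBasis(uv.x);
    float4 bv = BernsteinBasis(uv.y);

    // S(u,v) = sum_i sum_j Bj(u) * Bi(v) * P[i][j] over the 4x4 control net.
    float3 p = float3(0.0f, 0.0f, 0.0f);
    [unroll]
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            p += bu[j] * bv[i] * cp[i * 4 + j].pos;

    // World/view/projection transform omitted; clip-space setup is up to you.
    return float4(p, 1.0f);
}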

So which way should I go, and why?

ArKano22    650
I vote for subdivision surfaces. I use face extrusion intensively while modelling, so I suppose subdivision fits my style.

It's just a matter of taste, but I think most people would prefer subdivision to patches.

rouncED    103
I see what you mean, because maybe the topology is more free-form when you're modelling with subdivision surfaces.

I remember a time when subdivision surfaces were considered inferior to patch modelling, like modellers who patch-modelled were in a higher echelon than lowly poly modellers... I don't really see why myself now. Maybe it's bullshit.

Some people think you get more "control" with patches...

kabab    168
SDS are far easier to use, particularly for things like characters. That said, you cannot match the quality of Bezier patches for hard-surface modelling such as cars etc.

The biggest problem you will face with Bezier patches, though, is that the good modelling tools are very expensive (tens of thousands a seat)...

RobTheBloke    2553
Quote:
Original post by kabab
(tens of thousands a seat)...


Bit of an exaggeration there...

Quote:
To all the modellers out there, what would you pick out of bezier patches and subdivision surfaces for modelling?


Bezier patches are great for programmers; however, I've yet to meet a modeller who thinks they are good for modelling.

SubDivs are much more intuitive for modellers, but can suffer performance problems for dynamic meshes (eg skinned meshes). If you can fit the subdivision into a geometry shader, they might not be too bad.

PN-Triangles may also be worth considering...

kabab    168
Quote:
Original post by RobTheBloke
Quote:
Original post by kabab
(tens of thousands a seat)...


Bit of an exaggeration there...

There are only two programs I know of that let you model true Bezier patches: AliasStudio and ICEM Surf, both very expensive!

Quote:
Original post by RobTheBloke
Bezier patches are great for programmers; however, I've yet to meet a modeller who thinks they are good for modelling.

SubDivs are much more intuitive for modellers, but can suffer performance problems for dynamic meshes (eg skinned meshes). If you can fit the subdivision into a geometry shader, they might not be too bad.

PN-Triangles may also be worth considering...
Every single car on the road you see today is modelled using Bezier patches; there is no other way to model surfaces of that accuracy...

rouncED    103
Quote:
Original post by RobTheBloke
SubDivs are much more intuitive for modellers, but can suffer performance problems for dynamic meshes (eg skinned meshes). If you can fit the subdivision into a geometry shader, they might not be too bad.


Yeah, totally true isn't it, that if you animated Bezier patches without the subdivision at all it would go faster; fewer math functions = faster code.

As I'm studying this area closer, I'm realizing I probably need to understand both to really do curved-surface modelling properly, since real-time subdivision is more like half-crossed with NURBS modelling anyway.

I dunno, Bezier patches seem a more "pure" method to put together than subdivision, but almost everyone here has told me "no, better stick with subdivision, it's easier to model" or something.

So, just talking to myself: I think I'll do a little Bezier patch work alone first, see what I come up with, then most probably I'll stick with subdivs for the real thing, unless, what the hell, I just include both? :) I'm confused...

RobTheBloke    2553
Quote:
Original post by kabab
Quote:
Original post by RobTheBloke
Quote:
Original post by kabab
(tens of thousands a seat)...


Bit of an exaggeration there...

There are only two programs I know of that let you model true Bezier patches: AliasStudio and ICEM Surf, both very expensive!


Maya. Max. Xsi. Motionbuilder. Blender. That's a price range of between £4000 and £0.

Quote:
Quote:
Original post by RobTheBloke
Bezier patches are great for programmers; however, I've yet to meet a modeller who thinks they are good for modelling.

SubDivs are much more intuitive for modellers, but can suffer performance problems for dynamic meshes (eg skinned meshes). If you can fit the subdivision into a geometry shader, they might not be too bad.

PN-Triangles may also be worth considering...
Every single car on the road you see today is modelled using Bezier patches; there is no other way to model surfaces of that accuracy...


ANY implicit surface provides sufficient mathematical accuracy. For example, CSG is still one of the most common methods used in the design of car components. As for Beziers, yes they offer accuracy, but they have other significant drawbacks (like the lack of automatic continuity across patch boundaries, the difficulty in constructing periodic surfaces, etc.). NURBS (which can fully represent a Bezier curve, and trivially maintain continuity if needed) are the car designers' choice these days (and have been for decades). Apart from a few vintage 1960s-to-1970s cars, I'd be willing to bet that no car on the road today has been modelled with Bezier patches...
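To make the "NURBS can fully represent a Bezier" point concrete (my notation, standard textbook definitions): a cubic Bezier segment is just a NURBS curve with unit weights and a clamped knot vector that has no interior knots,

\[
  U = \{0,0,0,0,\;1,1,1,1\}, \qquad
  C(u) = \frac{\sum_{i=0}^{3} N_{i,3}(u)\, w_i\, P_i}{\sum_{i=0}^{3} N_{i,3}(u)\, w_i}, \qquad w_i = 1,
\]

and with that knot vector the B-spline basis functions N_{i,3}(u) reduce to the Bernstein polynomials B_i(u) = \binom{3}{i} u^i (1-u)^{3-i}. A NURBS curve of degree p with simple interior knots is C^{p-1} across its spans by construction, whereas adjacent Bezier patches only meet with whatever continuity you enforce by hand on their control points.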

Quote:
Yeah, totally true isn't it, that if you animated Bezier patches without the subdivision at all it would go faster; fewer math functions = faster code.

Err, no. Subdivision surfaces != Bezier subdivision. With Bezier/NURBS subdivision you can perform extremely aggressive caching to achieve some pretty staggering performance optimisations when you need to re-compute the surface. You should be able to achieve fairly high tessellation levels for dynamic surfaces with relative ease.
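Rough idea of why that caching works (my sketch, assuming the tessellation pattern itself stays fixed): every evaluated point on a bicubic patch is

\[
  S(u,v) = \sum_{i=0}^{3} \sum_{j=0}^{3} B_i(u)\, B_j(v)\, P_{ij},
  \qquad w_{ij}(u,v) = B_i(u)\, B_j(v),
\]

and the weights w_{ij} depend only on (u,v), not on the control points. Precompute the 16 weights once per output vertex, and re-evaluating an animated patch is just a fixed 16-term weighted sum of the current control point positions.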

With subdivs, those sorts of simple optimisations don't really exist. This means you typically have to do significantly more work to update a dynamic surface (even if the subdivision algorithm is on paper less complex).

The big problem with NURBS/Beziers though is simply trying to model anything in them. I'm not sure if you've played around with any knot insertion tools, but they tend to make modelling a real PITA. Modifying the contents of a knot vector does not make for the most intuitive modelling tool. The similarity to poly modelling is what makes subdivs the preferred technique for the vast majority of modellers...

Quote:
As I'm studying this area closer, I'm realizing I probably need to understand both to really do curved-surface modelling properly, since real-time subdivision is more like half-crossed with NURBS modelling anyway.


The only similarity is that they generate some extra points at render time. In all other regards they are very different indeed....

kabab    168
Quote:
Original post by RobTheBloke
Maya. Max. Xsi. Motionbuilder. Blender. That's a price range of between £4000 and £0.

I guess we have a difference in terminology. In modelling circles, when people say Bezier patch surfacing they are implying something that can create single-span surfaces... Creating multi-span surfaces gets referred to as NURBS modelling...

For example, all your typical DCC apps fall into the NURBS surfacing category...

As for Bezier patches, only two packages really qualify: AliasStudio and ICEM Surf, both of which are really expensive...
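To spell out what I mean by single-span versus multi-span (my notation): in one parameter direction a single-span bicubic surface uses a clamped knot vector with no interior knots, so the whole patch is one polynomial piece, while a multi-span surface has interior knots and is piecewise polynomial across them, e.g.

\[
  U_{\text{single}} = \{0,0,0,0,\;1,1,1,1\}, \qquad
  U_{\text{multi}} = \{0,0,0,0,\;1,2,\;3,3,3,3\}.
\]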

Quote:
Original post by RobTheBloke
ANY implicit surface provides sufficient mathematical accuracy. For example, CSG is still one of the most common methods used in the design of car components. As for Beziers, yes they offer accuracy, but they have other significant drawbacks (like the lack of automatic continuity across patch boundaries, the difficulty in constructing periodic surfaces, etc.). NURBS (which can fully represent a Bezier curve, and trivially maintain continuity if needed) are the car designers' choice these days (and have been for decades). Apart from a few vintage 1960s-to-1970s cars, I'd be willing to bet that no car on the road today has been modelled with Bezier patches...

In a past life I used to work in the automotive industry doing "A-class" surfacing... Multi-span NURBS are not acceptable in automotive body styling because the spans cause imperfections in the surfaces... Final production surfaces are always Bezier patches with G2 continuity...


