deltaKshatriya

Geometry vs Texturing in Game Art


Hey all,

So although I've posted some stuff here for critique, by and large I'm not really a CG artist who does work for games. To be fair, even CG art is primarily a side hobby for me. But anyhow, back on topic. One thing I've noticed is that for general CG art, we generally prefer to use as much geometry as possible when building a scene, since we don't care much about render time. For example, bricks will sometimes be modeled in to take full advantage of the lighting calculations and whatnot. Obviously that just isn't true in game dev, and I do have some experience there. What I've noticed, however, is that the art in many games (Dark Souls, etc.) really does look like it's using a ton of geometry, especially in architecture. I can tell where textures are used, but I'm curious: what's the balance between geometry and textures? When is a texture preferred to modeled geometry?

I thought this would make for an interesting discussion.

1 minute ago, deltaKshatriya said:

For example, bricks will sometimes be modeled in to take full advantage of the lighting calculations and whatnot. Obviously that just isn't true in game dev, and I do have some experience there. What I've noticed, however, is that the art in many games (Dark Souls, etc.) really does look like it's using a ton of geometry, especially in architecture. I can tell where textures are used, but I'm curious: what's the balance between geometry and textures? When is a texture preferred to modeled geometry?

Often you do model everything in excessive detail first, down to the cracks in the bricks. Then, second, you make a lower-detail mesh with the same general shape (a flat wall that covers all the bricks). Third, you "bake" all the geometric detail from mesh #1 into normal/depth/AO textures for use on mesh #2.

If done right, mesh #2 now has all the same details as mesh #1, but hardly any polygons. You can only really tell the difference when looking at it extremely closely, or when looking at the silhouette.
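To make that bake step concrete, here's a minimal sketch of what a baker does under the hood: for each texel of the low-poly surface, cast a ray toward the high-poly mesh and record the normal it hits. Everything here is a simplification (a flat low-poly wall in the XY plane, object-space geometric normals, brute-force ray tests); real bakers such as xNormal or Substance handle arbitrary low-poly cages, tangent space, smoothed normals, and ray-distance limits.

```cpp
// A minimal object-space normal-map bake (hypothetical, brute force).
// Assumes the low-poly mesh is a flat wall whose UVs map linearly onto
// [0,1]^2 in the XY plane, with the high-poly detail in front of it.
#include <cmath>
#include <cstdint>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

struct Triangle { Vec3 a, b, c; };

// Moller-Trumbore ray/triangle intersection; returns hit distance or -1.
static float intersect(Vec3 orig, Vec3 dir, const Triangle& tri) {
    Vec3 e1 = sub(tri.b, tri.a), e2 = sub(tri.c, tri.a);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < 1e-8f) return -1.0f;
    float inv = 1.0f / det;
    Vec3 s = sub(orig, tri.a);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return -1.0f;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return -1.0f;
    float t = dot(e2, q) * inv;
    return t > 0.0f ? t : -1.0f;
}

// For every texel, shoot a ray from behind the wall along +Z (the low-poly
// normal), find the nearest high-poly triangle, and store its geometric
// normal remapped from [-1,1] to [0,255]. Misses keep the flat normal.
std::vector<uint8_t> bakeNormals(const std::vector<Triangle>& hiPoly, int size) {
    std::vector<uint8_t> rgb(size * size * 3);
    for (int y = 0; y < size; ++y)
        for (int x = 0; x < size; ++x) {
            Vec3 orig = {(x + 0.5f) / size, (y + 0.5f) / size, -1.0f};
            Vec3 dir  = {0.0f, 0.0f, 1.0f};
            float best = 1e30f;
            Vec3 n = {0.0f, 0.0f, 1.0f};
            for (const Triangle& tri : hiPoly) {
                float t = intersect(orig, dir, tri);
                if (t >= 0.0f && t < best) {
                    best = t;
                    n = normalize(cross(sub(tri.b, tri.a), sub(tri.c, tri.a)));
                }
            }
            uint8_t* px = &rgb[(y * size + x) * 3];
            px[0] = uint8_t((n.x * 0.5f + 0.5f) * 255.0f);
            px[1] = uint8_t((n.y * 0.5f + 0.5f) * 255.0f);
            px[2] = uint8_t((n.z * 0.5f + 0.5f) * 255.0f);
        }
    return rgb;
}
```

Feed it a high-poly brick facade chopped into triangles and you get a normal map for the flat wall; mesh #2 then samples that texture in its pixel shader instead of carrying the geometry.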

As for when to use each... it can be easy to get carried away with using low-poly meshes and large textures, but textures take up a LOT of memory. A compressed 1024x1024 normal map is about 1.3MB -- in that same amount of memory you could store over 40K vertices! So it can actually be better to use extra polygons and lower-resolution textures where possible.
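If you want to sanity-check those numbers, here's the back-of-envelope in code. It assumes BC5 compression (1 byte per pixel, i.e. 16 bytes per 4x4 block) and a 32-byte vertex (position + normal + UV); both are typical layouts, not universal ones.

```cpp
// Back-of-envelope memory comparison: compressed normal map vs. vertices.
#include <cstdio>

int main() {
    const double bc5BytesPerPixel = 16.0 / (4 * 4);           // 1 byte/px
    const double baseBytes = 1024.0 * 1024.0 * bc5BytesPerPixel;
    const double withMips  = baseBytes * 4.0 / 3.0;           // mip chain adds ~33%
    const int vertexBytes  = 12 + 12 + 8;                     // position + normal + UV
    std::printf("1024x1024 normal map: %.2f MB\n", withMips / (1024 * 1024));
    std::printf("same memory holds %.0f vertices\n", withMips / vertexBytes);
    return 0;   // prints ~1.33 MB and ~43690 vertices
}
```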

The main thing that you never want to do is end up with so many triangles that they start to become smaller than one pixel. GPUs like to run pixel shaders on a block of 2x2 pixels at a time, so if a triangle only covers 1 pixel, the GPU will likely run the PS on 4 pixels, throw away three results, and keep one! This means that meshes with an extremely high polygon count can actually increase pixel shading costs by 300% :o Moreover, the fixed-function rasterizer probably works on 4x4 or 8x8 blocks of pixels, and only works at full efficiency if your triangles are bigger than that...
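Here's a toy model of that quad effect, just to make the 300% figure concrete. The quad-count estimate is a deliberate simplification (real coverage depends on triangle shape and position), so treat it as an illustration rather than a profiler:

```cpp
// Estimate wasted pixel-shader work from 2x2 quad shading.
#include <cstdio>

// Overhead as a percentage: quads always shade 4 pixels, whether or not
// the triangle actually covers them.
double shadingOverhead(double pixelsCovered, double quadsTouched) {
    double invocations = quadsTouched * 4.0;
    return (invocations - pixelsCovered) / pixelsCovered * 100.0;
}

int main() {
    // A 1-pixel triangle touches 1 quad: 4 invocations for 1 useful result.
    std::printf("1 px triangle:   %.0f%% overhead\n", shadingOverhead(1.0, 1.0));
    // A 100-pixel triangle might touch ~36 quads (interior plus edge spill).
    std::printf("100 px triangle: %.0f%% overhead\n", shadingOverhead(100.0, 36.0));
    return 0;   // prints 300% and 44%
}
```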

So -- use as many triangles as necessary, but then employ LOD (switching out models for lower-detail versions in the distance) to make sure that your on-screen triangles never get too small.
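The LOD switch itself can be as simple as this sketch. The distance thresholds and mesh handles here are made up; engines more often derive the switch point from projected screen-space size, since that's what actually keeps triangles above a few pixels:

```cpp
// Pick a mesh LOD from camera distance (illustrative thresholds only).
#include <vector>

struct LodLevel {
    float maxDistance;   // use this level while the camera is closer than this
    int   meshId;        // hypothetical handle to the mesh for this level
};

int selectLod(const std::vector<LodLevel>& lods, float distanceToCamera) {
    for (const LodLevel& level : lods)
        if (distanceToCamera < level.maxDistance)
            return level.meshId;
    return lods.back().meshId;   // beyond the last threshold: coarsest mesh
}

// Usage: {{10.0f, 0}, {50.0f, 1}, {200.0f, 2}} picks mesh 0 up close,
// mesh 1 at mid range, and mesh 2 beyond that.
```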


Maybe as an artist you should not distinguish between geometry and texture too much at all. There may be a preprocessing toolchain that takes your art and converts it into efficient game assets. It may resample textures, merge materials, decide which details remain geometry and which become normal maps, and generate LODs. It may even remesh your geometry and turn details into displacement maps, geometry images, voxels, or whatever turns out to be most efficient.

So, thinking ahead, it's probably most important that you always create art at very high detail so it still looks as intended even after everything gets resampled and downscaled. E.g. a tool may rotate UVs so that straight lines of texels become jaggy, or it may remove long, thin features of geometry, etc. Being aware of such issues may become important, while caring about a low poly count may become obsolete for the artist.



I'm not a game artist by any means; I was just genuinely curious about the differences. It's interesting that both textures and geometry have limitations for game art. I knew about LOD tricks, of course, since I've done some hobbyist game dev myself.

I'd imagine that things like light fog, distance fog, camera angle, etc. can do a lot to change the visuals, potentially even hide some things? 

27 minutes ago, deltaKshatriya said:

I'd imagine that things like light fog, distance fog, camera angle, etc. can do a lot to change the visuals, potentially even hide some things? 

To me the perfect reference is looking at SD or highly compressed video. It has little detail and it's blurry, but it looks 1000 times better than any game graphics. It proves we can remove a lot of things. ('We' means programmers: programmers need to solve this; artists should not need to care. Right now they spend way too much time on high-to-low-poly conversion, creating UVs, etc. That's why I believe in automated tools.)


This has been the problem with 3D graphics from the start: how many polygons can we use?

The answer is: no one knows.

There was a time when alpha maps were used to make geometry look more complex; that was when computers had strict polygon limits. These days computers can render millions of polygons, but alpha sorting is relatively much more expensive than it used to be, so doing the same thing will get people calling you a fool.

So how do you know just how much geometry you can use?

Here is the rule of thumb: 64,000 vertices per draw call (roughly the limit of a 16-bit index buffer) for optimized engines like Unreal and Unity. The thing is, if you don't use the full 64,000 vertices in a draw call, it's still going to cost about the same.

Except how many vertices each mesh really has is hard to know, because smoothing groups and UV seams add to the count. A simple object like a brick wall has an almost flat UV map and no need for smoothing groups, so it can carry a lot more geometry. A motorcycle needs lots of UV islands for its complex shape and lots of smoothing groups for its different materials, so it ends up with fewer vertices to spend.
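A concrete illustration of why the count is hard to read off the model: the GPU needs one vertex per unique combination of position, normal, and UV, so hard edges and UV seams split positions that look shared in the modeling package. The classic example is a cube:

```cpp
// Why smoothing groups inflate the GPU-side vertex count: a cube example.
#include <cstdio>

int main() {
    const int positions      = 8;   // corner positions of a cube
    const int facesPerCorner = 3;   // each corner touches 3 flat faces
    // Smooth shading: one shared normal per corner, so 8 vertices.
    const int smoothVertices = positions;
    // Hard edges: each corner needs 3 normals, one per face, so 24 vertices.
    const int hardVertices   = positions * facesPerCorner;
    std::printf("smooth-shaded cube: %d vertices\n", smoothVertices);
    std::printf("hard-edged cube:    %d vertices\n", hardVertices);
    return 0;
}
```

UV seams do the same thing: every position on a seam is stored once per UV island that touches it.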

Then there is the fact that there is no rule that one object should only have one draw call. Main characters will often already have more than one material. In Dragon Age 2 the characters had many materials and therefore huge polygon counts (> 182,000 even on low), while the environment shared one material, so objects in it like chairs were only in the hundreds (200-600 polygons).

It depends on the computer, the artist, the game type and the model. So no one really knows.

 

On 9/27/2017 at 7:38 PM, JoeJ said:

'We' means programmers: programmers need to solve this; artists should not need to care. Right now they spend way too much time on high-to-low-poly conversion, creating UVs, etc. That's why I believe in automated tools.

It's important for the artist to care; not caring would put a lot of extra pressure on the programmers and would let the artist shift blame. Teamwork is always important.

Automatic tools can't do complex things yet, and by the time they can they will be as smart as humans, so an artistic hand guiding them is important. Then there are the times when the auto tool makes such a mess that you are better off without it.

On 9/27/2017 at 7:38 PM, JoeJ said:

To me the perfect reference is looking at SD or highly compressed video. It has little detail and it's blurry, but it looks 1000 times better than any game graphics. It proves we can remove a lot of things.

Removing things is fine for video, where everything is batched and nothing is going to change suddenly.

So for a movie you can spend a few minutes compressing the data, because you will only do it once. Game textures have to stay in a block-compressed format the GPU can decompress on the fly as it samples them, so a heavy compression scheme like the ones used in video would reduce performance, not gain it.

6 hours ago, Scouting Ninja said:



It's important for the artist to care; not caring would put a lot of extra pressure on the programmers and would let the artist shift blame. Teamwork is always important.

Automatic tools can't do complex things yet, and by the time they can they will be as smart as humans, so an artistic hand guiding them is important. Then there are the times when the auto tool makes such a mess that you are better off without it.


Removing things is fine for video, where everything is batched and nothing is going to change suddenly.

So for a movie you can spend a few minutes compressing the data, because you will only do it once. Game textures have to stay in a block-compressed format the GPU can decompress on the fly as it samples them, so a heavy compression scheme like the ones used in video would reduce performance, not gain it.

Agreed for the moment, but I assume things will (and have to) change a lot to improve the performance of both artists and hardware.

A good analogy is the use of scanned real-world data: this data requires automatic preprocessing and detail reduction anyway, which already performs forms of compression similar to video. It works, it creates geometry and texture data totally differently than a human would, and still the result is usually more realistic than human-made art. (Yes, I know there is still human editing involved in any use of real-world data.)

I know the opposite is true as well, and processing human-made art is much more difficult and will introduce some unwanted effects, but we'll deal with that. We already do when baking hi-res models down to game assets, and no one complains. But if we automate this entire process (which is very hard to achieve), there are lots of advantages for the artist:

You don't care about generating low-poly geometry / LODs.

You don't care about creating UVs (you only use them to paint your hi-res model, so layout and seams don't matter).

You never have to care about any kind of seam again.

Could I, under the assumption that minor details get lost but overall graphics quality actually improves, convince you as an artist with these promises? Serious question: unfortunately I have to create such a tool, so I'd really like to hear more thoughts and doubts to prepare for resistance :)

Advantages for the hardware: far fewer triangles, fewer materials, less overall fragmentation. Better utilization of memory (e.g. texture space). It also opens up the possibility of using other data structures to represent geometry (in my case, a tree of samples covering any surface, for realtime GI; basically I only need a seamless lightmap texture atlas for this, but I'm thinking about adding more features to the tool).


Warning: the spoilers are boring; they are just there for people who want to know more about my opinion.

On 9/29/2017 at 7:50 AM, JoeJ said:

But if we automate this entire process (which is very hard to achieve), there are lots of advantages for the artist:

It's true that in the future artists will just capture what they need to produce their art, and some kind of mind scanner will produce whatever they can imagine. I believe with no doubt that this will happen; it's just not what I was arguing against.

The two points I want to make are these:

Removing details from a mesh is not a very efficient way of optimizing models in the long term. The reason is simple: it's not cost-effective.

Spoiler

Think of it this way: you have a piece of paper that only fits four digits, and you want to store a phone number on it. Would it be more efficient to learn the math needed for compression, or to buy a bigger piece of paper?

This is what is happening with geometry: the hardware and software needed to render more is much cheaper than the software needed to remove data without loss. Then there is the fact that better ways of handling 3D models already exist, like subD; decimation tools are used for LODs, and the amount they reduce is already efficient, so improving their ability to reduce polycount further is not worth the cost.

The reason it's not worth the cost is that the players who need overall polycounts to drop are the ones with older computers; they often don't have much money to spend on games anyway, and are therefore a lesser target than players who buy expensive computers to render more polygons. So by the time software exists that reduces polygons at a much more effective rate, there will be no need to reduce them.

The artist should care about how the tools that make the art work.

Spoiler

The mistake you're making here is thinking of the artist as the consumer. If the art were a motor vehicle, the artist would be the manufacturer, not the driver.

Let's say I have to produce a 3D model and my deadline is in three days; something goes wrong and parts of my normal map come out inverted. Who do I blame? Our team's programmers, because maybe they didn't load the normal map correctly? The programmers behind the development software, because it doesn't work perfectly at the press of a button? Maybe the API programmers, because it's a problem with the shader?

Or just maybe the person to blame for the inverted normals is the artist, because he either set the bake ray distance too long, so it hit the back of the opposite side of the mesh, or a normal on that part of the mesh is flipped, causing the rays to bake inverted.

The simple fact is that the artist is the person best equipped to deal with problems related to the art. You never have time to contact a software developer when something goes wrong, and you can't wait for a patch. Also, knowing how your tools work lets you use them better.

If you want to use a hammer, you should know how it works before you hurt yourself; the same is true of all tools.


On 9/29/2017 at 7:50 AM, JoeJ said:

this entire process (which is very hard to achieve), there are lots of advantages for the artist:

You don't care about generating low-poly geometry / LODs.

There are two LODs an artist makes by hand: LOD1 and the last LOD. No matter how aggressive the LOD tool is, this is how it's done. The reason is simple: it gives control over the generated LODs, because you set the max and the min this way.

Having tools that are better at reducing polycount won't change this; the artist needs a way to estimate the end results, since they have to produce textures and shaders/materials for the LODs long before the LODs actually exist.
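For illustration, here's one way a tool could respect those hand-made endpoints: the artist supplies LOD1 (the max budget) and the last LOD (the min), and the in-between triangle budgets are interpolated geometrically. This schedule is a sketch of the idea, not how any particular engine's LOD generator works:

```cpp
// Interpolate LOD triangle budgets between artist-set endpoints.
#include <cmath>
#include <cstdio>
#include <vector>

std::vector<int> lodBudgets(int lod1Tris, int lastLodTris, int levels) {
    std::vector<int> budgets(levels);
    // Constant reduction ratio so each level drops by the same factor.
    double ratio = std::pow(double(lastLodTris) / lod1Tris, 1.0 / (levels - 1));
    for (int i = 0; i < levels; ++i)
        budgets[i] = int(std::lround(lod1Tris * std::pow(ratio, i)));
    return budgets;
}

int main() {
    for (int tris : lodBudgets(60000, 500, 5))
        std::printf("%d ", tris);   // ~60000 18128 5477 1655 500
    std::printf("\n");
    return 0;
}
```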

On 9/29/2017 at 7:50 AM, JoeJ said:

You don't care about creating UVs (you only use them to paint your hi-res model, so layout and seams don't matter).

You never have to care about any kind of seam again.

A texture's resolution isn't relative to the geometry it covers, and even if you made it so, you would run into rendering's other limits too fast.

Spoiler

Consider a texture for a rock versus one for a building, with a texture limit of 4096x4096. If you give the building its own max-size texture and the rock its own max-size texture, you get more detail on your rock than on your building. This produces models that don't match.

OK, so what if your texture space were relative to the geometry? Most 3D modeling tools can do this; you can just download Blender and use its auto-unwrap tool. The problem is that now your building has a 4K texture while your rock ends up with a 16x16-pixel texture, which will be as blurry as a foggy window.

So you fix your density problem with more textures, giving the rock a nice 4K texture and the building 256 separate 4K textures, because surely that won't overload your players' hard drives.
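The underlying problem is texel density: for surfaces to look consistent, each should receive roughly the same number of texels per meter, which directly dictates texture resolution. A quick sketch of that relationship (the density target is an arbitrary example):

```cpp
// Texture resolution implied by a uniform texel-density target.
#include <cmath>
#include <cstdio>

// Texture edge size, rounded up to a power of two, for a roughly square
// surface with the given world-space edge length.
int textureSizeFor(float edgeMeters, float texelsPerMeter) {
    float needed = edgeMeters * texelsPerMeter;
    return int(std::pow(2.0f, std::ceil(std::log2(needed))));
}

int main() {
    const float density = 256.0f;   // texels per meter (arbitrary target)
    std::printf("1 m rock:      %d\n", textureSizeFor(1.0f, density));    // 256
    std::printf("16 m building: %d\n", textureSizeFor(16.0f, density));   // 4096
    return 0;
}
```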

So to fix any problems you could have with unwrapping, you give the software the ability to actually recognize objects, so it can see the windows, unwrap only one, and reuse that part for all the windows. Then, to deal with window depth and make the model more than a block, you give it the ability to read depth from the light captured in images. Except a static light is a bad source, so you give it the ability to compare two images. Except now you have the problem most tracking software has, that it can't keep track of data behind objects, so you give it the ability to imagine the missing data.

Before you know it, it's starting the robot revolution and overthrowing mankind, because it now has the abilities of a human and the cold, calculating mind of a computer. :o


So, to prevent the total destruction of mankind, you should find a way to abandon the concept of textures, or of storing data at all, because that is what textures are, and calculate everything in real time by building a computer as powerful as the world; except now there is no need to reduce the geometry, because it's all particles anyway.

All jokes aside, there is a reason this hasn't been done before: no one can think of a way to do it.

On 9/29/2017 at 7:50 AM, JoeJ said:

Could I, under the assumption that minor details get lost but overall graphics quality actually improves, convince you as an artist with these promises?

There is a way to do this; it's why the toon style was invented. Animators needed to reduce detail while keeping things as interesting as before, so they simply removed the unimportant details and exaggerated the important ones.

If you really do plan to make software that reduces polycount, then start here and learn 3D modeling. If you have questions about 3D modeling, you can ask in the art forum; most 3D modelers are happy to bore other people with what they know.

