Unity: Help with UV mapping a procedural game


Recommended Posts

Lately I've been reworking the mesh algorithms for my game. The world is built from cubes, and I only render visible faces to save performance. I also re-use vertices across multiple faces to stay under Unity's vertex limit. Here's a picture to illustrate:

cuberendering.png

In the example, the blue face is the visible one. From my 8 possible vertices, triangles are drawn across (1, 4, 5) and (1, 0, 4). As far as I know, this works correctly. But now I need to do the same with UVs. I currently use a texture atlas, and I can't quite see how to map textures onto the vertices, because each vertex can be part of multiple faces. Does anyone know a solution to this? I'll provide my code if needed.

Edited by SpikeViper


Although it's highly dependent on art style, I've used an object-space triplanar mapping technique to avoid UV mapping on my procedural terrain and CSG buildings. Each "material texture" is actually three textures, one each for the XZ, XY, and YZ planes. In the pixel shader, I use the object-space position and a (uniform) scaling parameter to sample all three textures, then use the object-space surface normal to blend between the three samples.

 

Con: no texture alignment; it's only suitable for tiled or detail textures on static objects.

Con: relatively expensive when applied to mostly plane-aligned geometry.

Pro: works on any geometry; with good texture selection, it looks like natural, carved, or poured material.

Pro: no mapping or stretching artefacts.
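To make the blending step concrete, here is a minimal sketch of the weight computation that such a shader performs, written as plain C# for readability. This is my own illustration of the general triplanar idea, not the poster's actual shader; a linear blend is shown, and the normalisation scheme is an assumption.

```csharp
using System;
using System.Numerics;

// Blend weights for the three plane-aligned samples, derived from the
// object-space surface normal: |n.x| weights the YZ-plane texture,
// |n.y| the XZ-plane texture, and |n.z| the XY-plane texture.
static Vector3 BlendWeights(Vector3 n)
{
    var w = new Vector3(MathF.Abs(n.X), MathF.Abs(n.Y), MathF.Abs(n.Z));
    return w / (w.X + w.Y + w.Z); // normalise so the weights sum to 1
}

// A face pointing straight up gets all of its weight from the XZ plane.
Console.WriteLine(BlendWeights(new Vector3(0f, 1f, 0f)));
```

Note that for an axis-aligned cube face the weight collapses entirely onto one plane, which is why the later replies call full triplanar blending overkill for this case.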


Is that a little overkill for simple blocks, or no? Before, I built each face separately and wrapped a texture onto it with UVs, along with a specular and an emission map. But that went a bit over Unity's vertex limit, so I decided to cut down and re-use vertices.


For exclusively plane-aligned faces, yeah, it's overkill. I had to cope with domes and arbitrary 2D CSG polygons extruded upward into buildings.


So, should I be making each face with different vertices, or is what I'm trying to do possible?


If you need different texture coordinates for different faces of your cube, then the faces can't share vertices. Simple as that.

 

Assuming your cubes are axis aligned, you could use triplanar texturing, and only draw certain sides of the cubes in one draw call (so your cubes are only cubes conceptually - they wouldn't be organized like that in your index buffers). That way you only need one texture sample, instead of lerping between three in the pixel shader based on orientation (the thing Wyrframe is doing). And if you can determine the texture coordinates by x/y/z position, then you potentially wouldn't even need texture coordinates in your cube's vertices, which means you could re-use them between faces.
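The first point above, giving each face its own four vertices so each face can carry its own atlas coordinates, can be sketched as follows. The `tileIndex` and `tilesPerRow` parameters are illustrative assumptions (a square atlas laid out in a grid), not anything specified in the thread.

```csharp
using System;

// Sketch: the UV rectangle of one tile in a square texture atlas.
// 'tileIndex' counts tiles left-to-right, row by row; 'tilesPerRow'
// is the grid width. Because each face owns its four vertices, each
// face can carry a different tile's UVs.
static (float u0, float v0, float u1, float v1) AtlasRect(int tileIndex, int tilesPerRow)
{
    float size = 1f / tilesPerRow;
    float u = (tileIndex % tilesPerRow) * size;
    float v = (tileIndex / tilesPerRow) * size;
    return (u, v, u + size, v + size); // min corner, max corner
}

// Tile 5 in a 4x4 atlas sits at column 1, row 1: (0.25, 0.25) to (0.5, 0.5).
Console.WriteLine(AtlasRect(5, 4));
```

Each face then gets its two triangles (0, 1, 2) and (0, 2, 3) over its own four vertices; a fully exposed cube costs 24 vertices instead of 8, which is the usual trade for per-face UVs.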


Just use the normal in the pixel shader. If the normal is float3(0, 0, 1), use the world x and y coordinates as the texture coordinates. The point of triplanar mapping is to blend plane-aligned samples based on which plane you are closest to; in your case each face is aligned to exactly one plane.
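That selection logic amounts to a small pure function; here it is sketched in plain C# (in practice it would live in the pixel shader). The `scale` tiling factor is an illustrative assumption, not something from the thread.

```csharp
using System;
using System.Numerics;

// Pick the two world coordinates that lie in the face's plane, keyed
// off an axis-aligned normal, and use them directly as UVs.
static Vector2 PlanarUV(Vector3 pos, Vector3 normal, float scale)
{
    if (MathF.Abs(normal.Y) > 0.5f) return new Vector2(pos.X, pos.Z) * scale; // top/bottom faces
    if (MathF.Abs(normal.X) > 0.5f) return new Vector2(pos.Z, pos.Y) * scale; // left/right faces
    return new Vector2(pos.X, pos.Y) * scale;                                 // front/back faces
}

// A front-facing point at world (3, 7, 2) maps to UV (3, 7) at scale 1.
Console.WriteLine(PlanarUV(new Vector3(3f, 7f, 2f), new Vector3(0f, 0f, 1f), 1f));
```

Because the UVs come from world position alone, the vertices need no stored texture coordinates at all, which is what lets them be shared between faces.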
