
JTippetts

Member Since 04 Jul 2003

#5224168 Normal Map Generator (from diffuse maps)

Posted by JTippetts on 18 April 2015 - 08:48 AM

Note that the technique of using a normal map derived directly from the diffuse has, in fact, been used in commercial games. See Path of Exile. In that description, the author discusses painting highlights over areas of the texture to help CrazyBump differentiate the high areas a little better. Like all artistic endeavors, it depends a great deal on the person doing the creation.


#5221847 Skeletal animation in Assimp

Posted by JTippetts on 07 April 2015 - 08:35 AM

If you're rendering a model without worrying about the bone transformations, then you're not rendering an animated model. You're rendering a static model. That barbarian guy up there with his arms out in a T shape? Yeah, he's in his rest pose. That is how he was likely modeled from the start. If you want to render him as he picks up his axe and lops off a head, you're going to have to start worrying about those bone transformations because that's how it's done.
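
To make that concrete, those bone transformations boil down to per-vertex skinning: each vertex is blended by the matrices of the bones that influence it. Below is a minimal CPU-side sketch, assuming GLM and a four-bones-per-vertex layout; the struct and function names are illustrative, not Assimp's API (in Assimp terms, each entry of boneMatrices would be a bone's animated global transform multiplied by its mOffsetMatrix).

#include <vector>
#include <glm/glm.hpp>

// One skinned vertex: up to four influencing bones with blend weights.
struct SkinnedVertex {
    glm::vec3 position;
    int       boneIds[4];
    float     weights[4];   // expected to sum to 1
};

// Linear blend skinning: v' = sum_i( w_i * B_i * v ).
glm::vec3 skinVertex(const SkinnedVertex& v,
                     const std::vector<glm::mat4>& boneMatrices)
{
    glm::vec4 p(v.position, 1.0f);
    glm::vec4 result(0.0f);
    for (int i = 0; i < 4; ++i)
        result += v.weights[i] * (boneMatrices[v.boneIds[i]] * p);
    return glm::vec3(result);
}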




#5220739 Mad question- how to reconstrught height map from normal map

Posted by JTippetts on 01 April 2015 - 11:11 AM

You might check out this document by Dave Eberly. Basically, you do need to know a few things about the original height map in order to deconstruct it correctly. A normal map essentially encodes the derivative (or slope) of a surface at a discrete point. The two common methods of calculating this slope from the source height map are to calculate it from (x+1)-(x), (y+1)-(y) or from ((x+1)-(x-1)), ((y+1)-(y-1)). The linked document calls these One-Sided Difference Approximations and Centered Difference Approximations. The one-sided method calculates the slope along each axis by subtracting the height at the current point from the height at its immediate neighbor, while the centered method calculates the slope at a point from the two points bracketing it along each axis.

Each method can result in a slightly different normal. For example, consider a point in the heightmap that is a high "spike" bracketed by lower points. The one-sided method takes the height of the spike into account, resulting in a shallower normal, while the centered method ignores the spike, resulting in a steeper normal for what it perceives to be a flatter surface. So if you reconstruct a heightmap from a one-sided-difference normal map using the centered technique, the reconstructed heightmap won't be quite the same as the original. Still, your stated criterion is that the reconstructed heightmap should produce a normal map matching the one it was reconstructed from, and that should be the case.
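
To illustrate the two schemes, here is a small sketch of computing a normal from a heightmap with each method, for interior points only. The heightmap layout, helper names, and the simple (-dx, -dy, 1) normal construction are assumptions for illustration, not taken from the linked document.

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// One-sided (forward) differences: slope from h(x+1) - h(x) and h(y+1) - h(y).
Vec3 normalOneSided(const std::vector<float>& h, int w, int x, int y) {
    float dx = h[y * w + (x + 1)] - h[y * w + x];
    float dy = h[(y + 1) * w + x] - h[y * w + x];
    return normalize({ -dx, -dy, 1.0f });
}

// Centered differences: slope from (h(x+1) - h(x-1)) / 2 and (h(y+1) - h(y-1)) / 2.
Vec3 normalCentered(const std::vector<float>& h, int w, int x, int y) {
    float dx = (h[y * w + (x + 1)] - h[y * w + (x - 1)]) * 0.5f;
    float dy = (h[(y + 1) * w + x] - h[(y - 1) * w + x]) * 0.5f;
    return normalize({ -dx, -dy, 1.0f });
}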

Note that, as the article indicates, there are an infinite number of potential source heightmaps that can result in a given normal map, since the normal represents only the slope of the surface and says nothing about the surface's height above 0. But this really shouldn't be an issue for you, as it's easy enough to just reconstruct the heightmap assuming a constant offset of 0.

Additionally, you do need to know if the normal map was generated using wrapping at the map edges, or if it was generated by clamping the edges. Seamlessly tiling normal maps, for example, will calculate the boundary normals by wrapping around to the opposite edge of the source map.
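
The difference between the two edge conventions is just how neighbor indices are resolved when they fall off the map; a couple of illustrative helpers (the names are mine):

// Wrapping (tileable) maps index around to the opposite edge.
int wrapIndex(int i, int n)  { return ((i % n) + n) % n; }

// Clamped maps repeat the border sample instead.
int clampIndex(int i, int n) { return i < 0 ? 0 : (i >= n ? n - 1 : i); }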


#5220732 Modern cartoon shading that's not cel shading

Posted by JTippetts on 01 April 2015 - 10:34 AM

I'd say that it's mostly "standard" pipelines, with the appearance dictated largely by modeling and material choice. Simple textures, mostly, with all of the character modeling using round, organic shapes. The style somewhat reminds me of Blendman's style (link goes to a Google image search) in its use of organic character shapes and bright textures.

 

As for the effects, I suspect the fireball is just a 3D point light bouncing around in a box-like 3D scene. You probably could do it using a point light in screen space, but depending on your framework the former might be simpler.

 

The white swoosh could be a post-processing effect, or it could be a custom shader that fades the rendered material to white based on a combination of time (t) and the location of a given fragment on the horizontal axis.
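
As a rough sketch of that second idea (written as plain C++ for illustration; in practice it would live in a fragment shader, and every name and constant here is an assumption):

#include <algorithm>

struct Color { float r, g, b; };

// Fade the rendered color toward white as a front sweeps left to right.
// 'uvX' is the fragment's horizontal position in [0, 1], 't' is the effect time in [0, 1].
Color swooshFade(Color base, float uvX, float t) {
    float edge = t * 1.2f - 0.1f;                                  // current front position
    float k = 1.0f - std::clamp((uvX - edge) * 10.0f, 0.0f, 1.0f); // 1 behind the front, 0 ahead of it
    return { base.r + (1.0f - base.r) * k,
             base.g + (1.0f - base.g) * k,
             base.b + (1.0f - base.b) * k };
}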

 

With most simple effects like this, there are usually multiple ways of accomplishing them which depend quite a bit on the actual engine you use.




#5220309 POVray

Posted by JTippetts on 30 March 2015 - 04:14 PM

I have used POVRay before. It is nice for quick, one-off renders of things that I generate procedurally. The bulk of my creation, though, is done in Blender. There is a lot of overlap between the two. If generating scenes through code is your thing, Blender does have a Python interface. And you can create scenes in Blender and render them using a POVRay-based renderer (available through addons).

 

Once you move beyond procedural things, creating anything of more than basic complexity quickly becomes quite a chore if you are hand-authoring your POVRay scenes. At that point, you're much better off using a visual editor such as Blender to compose them and exporting if you still prefer the POVRay renderer.

 

Also, Blender uses ray tracing for both the Blender Internal and Cycles renderers, and it has numerous tools for creating Bezier- and NURBS-based shapes that are tessellated rather than stored as discrete meshes, so those really aren't advantages that POVRay has over Blender.




#5220244 Best way to render multiple instances with unique license plate

Posted by JTippetts on 30 March 2015 - 11:57 AM

Pass a unique instance ID for every instance, and use that ID to generate the UV coordinates for the digit quads. For example, you could use the instance ID as a random number seed, and from that seed generate a sequence of 4 (or however many digits a plate has) numbers in the range of 0..9, then calculate the UV offsets from the digits.
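
A sketch of that idea, assuming a digit atlas laid out as a single row of ten glyphs (0-9), each 0.1 wide in U; the hash constants and names are illustrative, not from any particular engine:

#include <array>
#include <cstdint>

struct DigitUV { float u0, v0, u1, v1; };

// Derive four plate digits (and their atlas UV rectangles) from an instance ID.
std::array<DigitUV, 4> plateDigitsFromInstanceId(uint32_t instanceId) {
    uint32_t state = instanceId * 2654435761u + 1u;  // cheap hash of the ID as a seed
    std::array<DigitUV, 4> uvs{};
    for (int i = 0; i < 4; ++i) {
        state = state * 1664525u + 1013904223u;      // LCG step
        uint32_t digit = (state >> 16) % 10u;        // next digit in 0..9
        float u0 = digit * 0.1f;                     // offset into the digit atlas
        uvs[i] = { u0, 0.0f, u0 + 0.1f, 1.0f };
    }
    return uvs;
}

Because the same ID always produces the same sequence, each instance's plate stays stable without storing anything beyond the ID itself.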




#5216829 Normal Map Generator (from diffuse maps)

Posted by JTippetts on 16 March 2015 - 06:07 AM


@dpadam: not sure what you mean. So as input I have one diffuse map. What do you mean by "use different images"?
 
What he means is that a diffuse map has no depth information, so when you generate a normal map from it like this, you aren't generating a normal map that actually conforms to the shape of the surface, but rather one that conforms to the shape of a surface whose depth is described by the areas of light and shadow within the diffuse map. So it's not quite right. If you use the same texture as the diffuse that you also use to generate the normal map, those "not quite right" normals correspond directly with the areas of light and shadow in the diffuse. With some textures that can accentuate the "not quite right"-ness, because it becomes quite obvious that the normal map doesn't accurately represent the shape of the surface. So it can be helpful to generate the normal map from one photograph or diffuse texture, but to use it with another whose areas of light and dark do not correspond with the areas used to generate the normal map.
 
Programs like CrazyBump have options for shape recognition, which attempt to make a guess at the actual shape of the surface given the lightness/darkness of the diffuse. It's often not a very good guess, but given the right diffuse maps it can work okay.
 
I have seen descriptions of lighting rigs that operate by taking multiple photos of a given surface lit from different directions, and using the different lighting angles to recover an approximation of the actual normal of the surface being photographed. It requires some setup and work, but the result is quite a bit more realistic than naively using a single lit diffuse map.
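
That multi-photo approach is essentially photometric stereo. A minimal per-pixel sketch under a Lambertian assumption, with three known light directions; all names here are illustrative and nothing below comes from a specific rig or tool:

#include <cmath>

struct Vec3 { float x, y, z; };

// Determinant of the 3x3 matrix with rows a, b, c.
static float det3(const Vec3& a, const Vec3& b, const Vec3& c) {
    return a.x * (b.y * c.z - b.z * c.y)
         - a.y * (b.x * c.z - b.z * c.x)
         + a.z * (b.x * c.y - b.y * c.x);
}

// L holds the three light directions (one per photo), I the observed
// intensities at one pixel. Solves L * (albedo * n) = I and returns n.
Vec3 normalFromThreeLights(const Vec3 L[3], const float I[3]) {
    Vec3 rhs{ I[0], I[1], I[2] };
    Vec3 colX{ L[0].x, L[1].x, L[2].x };
    Vec3 colY{ L[0].y, L[1].y, L[2].y };
    Vec3 colZ{ L[0].z, L[1].z, L[2].z };
    float d = det3(colX, colY, colZ);                          // Cramer's rule
    Vec3 g{ det3(rhs, colY, colZ) / d,
            det3(colX, rhs, colZ) / d,
            det3(colX, colY, rhs) / d };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);  // |g| is the albedo
    return { g.x / len, g.y / len, g.z / len };
}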



#5216500 How to Acess a singleton from everywhere?

Posted by JTippetts on 14 March 2015 - 03:43 PM

5) Should I choose another pattern rather than building a sticky mess of singleton spaghetti?

Yes. The answer is yes.
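
For what it's worth, the usual alternative is to pass dependencies explicitly instead of reaching for a global. A minimal sketch; the Renderer, AudioSystem, and Game names are made up for illustration:

struct Renderer    { /* rendering state */ };
struct AudioSystem { /* audio state */ };

class Game {
public:
    // Everything Game needs is handed to it; nothing reaches for a global.
    Game(Renderer& renderer, AudioSystem& audio)
        : renderer_(renderer), audio_(audio) {}
    void update() { /* uses renderer_ and audio_ directly */ }
private:
    Renderer&    renderer_;
    AudioSystem& audio_;
};

// Ownership lives in one place (e.g. main), which constructs the systems
// and wires them into whatever needs them.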


#5215599 why some turn based games are so popular?

Posted by JTippetts on 10 March 2015 - 01:39 AM

It is true that the FF games don't demonstrate particularly well the depth and complexity that turn-based combat allows. I recommend you pick up something like Divinity: Original Sin. The interactions between elements, environmental conditions, etc. can get quite deep. Break a barrel of oil to create an oil slick so your enemies slip and stumble. Light it on fire with a fireball spell to incinerate the confused foes. Extinguish the flames with a conjured rain storm and create a bank of fog to hide within. Cast lightning at the water they are now standing in to electrocute the whole group.

Once you've played a "good" turn based game, it's hard to go back to the basic FF-type.


#5215209 Destructible terrain

Posted by JTippetts on 07 March 2015 - 05:36 PM

For a hobbyist project, you could take a look at PolyVox. The creator of PolyVox has done some terrain destruction stuff. It's not large scale, and to my understanding it doesn't include any functionality for network streaming, but it might at least give you some ideas.


#5213735 Stat-stick Syndrome : How to avoid ?

Posted by JTippetts on 01 March 2015 - 03:52 PM

You know, this isn't the first time somebody got worked in LoL and made an account here to complain about it, then vanished. Riot must be doing something right, to piss off so many people yet still maintain such a strong user base.




#5212930 3d noise turbulence functions for terrain generation

Posted by JTippetts on 25 February 2015 - 10:19 AM

I've played with noise a lot in the past, and the simple fact is that there are limitations to what these simple, elegant little functions can do. You can implement module chaining, a la the libnoise Select and Blend functions, to vary the terrain types from among the common functions. You can implement F2-F1 cellular noise to approximate "chunky" mountains. You can apply domain perturbations to alter the character of the basic functions. But ultimately, without analogues to physical processes such as hydraulic and thermal erosion, uplift, folding, tectonics, etc., you're going to be unable to achieve some of the more interesting results on display in the real world. There is only so much an f(x,y,z) can really do.
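
As an aside, the domain perturbation mentioned above is simple to sketch: offset the noise function's input coordinates by the noise function's own output. noise3 stands in for whatever smooth 3D noise you already have; the offsets and names are arbitrary:

#include <functional>

using NoiseFn = std::function<float(float, float, float)>;  // any smooth 3D noise

// Warp the input domain with the noise itself, then sample at the warped point.
float warpedNoise(const NoiseFn& noise3,
                  float x, float y, float z, float strength)
{
    // Sample at shifted positions so the three offsets decorrelate.
    float wx = x + strength * noise3(x + 31.4f, y, z);
    float wy = y + strength * noise3(x, y + 47.2f, z);
    float wz = z + strength * noise3(x, y, z + 12.9f);
    return noise3(wx, wy, wz);
}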




#5212929 Stat-stick Syndrome : How to avoid ?

Posted by JTippetts on 25 February 2015 - 10:08 AM

Mastering a complex enough stat system can be a skill of its own. In my opinion, going physical-skill-based (i.e., aiming) isn't really the answer, because I can't aim. I can't play today's shooters, a consequence of a number of injuries that have affected both hands, coupled with a natural general suckitude. So the quickest way to turn me off is to take away my stats and make me have to aim at something to hit it. Similarly with dodging. If my ability to win a game comes down to relying on my screwed-up hands to pull off a dexterous act of dodging, then I'm hosed.

 

Positional requirements and scouting, however, I am fine with. But I guess I'm probably not your target audience. And that, I think, is why so many games go with stats over physical skill: an attempt to not limit their audience by excluding us klutzes.

 

I'm not sure about your arbitrary "binary" classification. How is increasing an enemy's dps a "binary" action?




#5212675 Sculpting vs. Modeling

Posted by JTippetts on 24 February 2015 - 05:53 AM

If you lack the skills to do the traditional "concept art -> base mesh -> high poly sculpt" workflow, you can skip right to the sculpt. Personally, I can't draw my way out of a wet paper bag (to crudely hack the phrase), so I just jump right to sculpting. Using the right tools, sculpting can be a very free-flowing exercise, much like traditional sculpting in clay. The character in this screenshot, like all of my characters, was created as a sculpt with no traditional 2D concept sketching beforehand, for example. For someone just starting out, or on a limited budget with a limited traditional art skillset, it can be a workable path.

 

If you don't want to pay high dollar for ZBrush, Mudbox or 3DCoat, you can pick up the nicely intuitive Sculptris for free from Pixologic (the makers of ZBrush). Additionally, Blender now offers an adaptive subdivision scheme during sculpting, similar to that offered by Sculptris, if not quite so UI-friendly as Sculptris. The adaptive subdivision makes it easy to start from a simple primitive such as a sphere and grab/pull/glob your base shape then iterate, adding detail as you go.

 

Of course, as previous posters have mentioned, if you want to actually use the sculpted character in something, you're probably going to have to retopo your sculpt to derive a lower-resolution version, complete with normal maps and/or displacement maps. High-res sculpts can easily weigh in at millions of vertices, making them unsuitable for animation without expensive high-end cluster hardware.




#5211229 Hiding savedata to prevent save backup

Posted by JTippetts on 17 February 2015 - 11:54 AM

What if I change hard drives and can't find your save files to properly copy them over? Do I file your game in the "broken" bin, and tell all my friends to steer clear of it?





