RnaodmBiT

Member Since 27 Nov 2010
Offline Last Active Yesterday, 10:55 PM

#5169680 [SOLVED] Where to get a Font Sheet for Direct3D 11 use?

Posted by RnaodmBiT on 28 July 2014 - 01:31 AM

Yes, the details of what you get from the .fnt file are described in the link I posted above.




#5169652 [SOLVED] Where to get a Font Sheet for Direct3D 11 use?

Posted by RnaodmBiT on 27 July 2014 - 08:56 PM

When you ask for a font sheet, you need to be specific. There are several character encodings in common use, the most common being ASCII, UTF-8 (which encompasses ASCII), and UTF-16. Unicode defines 1,112,064 possible code points (think about all the different languages: English, Arabic, Chinese, Japanese, etc.), so imagine trying to fit all of that onto a single sheet! BMFont lets you select only the characters you will actually need in your game, which keeps the textures that store the required glyphs to a reasonable size.

 

When using BMFont, you select which characters you want to export. Sometimes you only want a certain set, e.g. ASCII, and sometimes you want an extended set if you are using a wide-character (e.g. UTF-16) encoding. So you select the characters you want, probably just the basic Latin characters, and they will be highlighted. Then you set the exporter options for which descriptor file format you want (Text, XML, or Binary) and the character descriptions will be exported. One or more additional files will be created as well; these are the glyph pages and will be in whatever texture format you selected (DDS, PNG, or TGA).

To use these files, you open and parse the .fnt font descriptor, which gives you basic info about the font settings (e.g. line height, texture settings, etc.). Then you load the info about each individual glyph: which character value it corresponds to, which texture page it is located on, its texture coordinates and size, and how much to advance the drawing position when moving to the next character. The .fnt descriptor might also include kerning data, which adjusts the spacing between specific character pairs so the font renders with nicer-looking spacing.
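
For reference, here is a rough sketch (C++, with field names I made up, not BMFont's own; the exact field list is in the file format documentation) of the kind of per-character record you end up holding after parsing the descriptor:

#include <map>
#include <utility>

struct Glyph
{
    unsigned int id;        // character code this glyph represents
    float u, v;             // top-left texture coordinate on the glyph page
    float width, height;    // glyph size in pixels
    float xOffset, yOffset; // offset from the cursor position when drawing
    float xAdvance;         // how far to move the cursor for the next character
    int page;               // which texture page the glyph lives on
};

struct Font
{
    float lineHeight;                      // vertical distance between lines of text
    std::map<unsigned int, Glyph> glyphs;  // glyph data keyed by character code
    std::map<std::pair<unsigned int, unsigned int>, float> kerning; // extra spacing for specific pairs
};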

 

Everything about the program and the file formats can be found in the documentation, including examples of how to render some text: http://www.angelcode.com/products/bmfont/documentation.html

 

As a proof of concept, I have actually just completed a font renderer using the binary file format generated by BMFont.

 

Note: apologies if I've made a mistake about the wide-character encodings; I mostly just stick to the ASCII set.




#5159741 Minecraft Terrain Generator

Posted by RnaodmBiT on 11 June 2014 - 05:56 AM

I would also like to mention sampling a 3D noise function with fBm to generate a 'density' function (even just combining several 2D samples can look good). If you then subtract a threshold value from the function, you can say that anywhere the density is less than 0 you have open space, and anywhere it is greater than 0 you have solid ground. This can be used to generate voxel fields or as input to the marching cubes algorithm to produce some nice terrain, with the advantage that you can specify open spaces 'underground', i.e. caves.
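
As a rough sketch of that idea in C++ (fbm3 is a placeholder for whatever 3D fBm/noise sampler you use, and the threshold/scale values are just arbitrary tuning numbers):

// density < 0 -> open space (air, caves), density > 0 -> solid ground
float density(float x, float y, float z, float threshold)
{
    // fbm3 is assumed to return a value roughly in [-1, 1]
    return fbm3(x * 0.01f, y * 0.01f, z * 0.01f) - threshold;
}

bool isSolid(float x, float y, float z)
{
    return density(x, y, z, 0.2f) > 0.0f;   // feed this into your voxel field or marching cubes
}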

 

-BiT




#5159678 WASD or Mouse movement - Top Down Stealth

Posted by RnaodmBiT on 11 June 2014 - 12:41 AM

Why not have both and just offer the option to change control mappings?




#5159677 Minecraft Terrain Generator

Posted by RnaodmBiT on 11 June 2014 - 12:38 AM

The idea of using noise to generate game terrain has been around for a while, and making it look decent is one of the most challenging and, in my mind, most rewarding tasks I have encountered in game programming.

 

http://freespace.virgin.net/hugo.elias/models/m_perlin.htm has a very good introduction to what fractal Brownian motion is (the page labels it incorrectly) and how to implement it, with examples in simple pseudo-code. Take note of the idea of using several 'octaves' of the same noise signal and summing them with different amplitudes. Higher-frequency noise (lots of change for a small change in sample position) gives rough/spiky ground; lower-frequency noise (much less change for a large change in sample position) gives very smooth terrain on the same scale. If you take the low-frequency noise and multiply it by a large number, you get mountains and valleys; then, by adding a small amount of higher-frequency noise, you get smaller details in the mountains like small bumps. Do this with enough octaves, with appropriate amplitude values, and you can generate interesting-looking terrain.
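
A minimal sketch of that octave summing in C++ (noise2 is a placeholder for your basic smooth noise function, assumed to return roughly -1..1):

float fbm(float x, float y, int octaves, float persistence)
{
    float total = 0.0f;
    float frequency = 1.0f;
    float amplitude = 1.0f;
    float maxValue = 0.0f;   // used to normalise the result back to roughly -1..1

    for (int i = 0; i < octaves; ++i)
    {
        total += noise2(x * frequency, y * frequency) * amplitude;
        maxValue += amplitude;
        amplitude *= persistence;   // each octave contributes less...
        frequency *= 2.0f;          // ...but varies faster
    }

    return total / maxValue;
}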

 

A similar technique called the diamond-square algorithm can also be used to generate interesting terrain (I prefer it for the close-in detail). An example can be found at http://www.gameprogrammer.com/fractal.html. An interesting thing to note about this one is that if you set the values of the first level or two yourself, you can force the algorithm to fill in the blanks of a terrain whose rough shape you have already defined. That is, if you lowered some of the points, a valley/river/depression would form there and the close-in detail would be randomly generated for you. Using this idea, you could use the fBm noise above to set heights every kilometre, for example, then use diamond-square to fill in the higher-resolution details down to a 1 m scale.
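
For what it's worth, here is a rough C++ sketch of the diamond-square passes on a (2^n + 1) square grid; frand(r) is a placeholder returning a random value in [-r, r], and the corner points (plus any coarse points you want to force) are assumed to be filled in before the call:

#include <vector>

void diamondSquare(std::vector<float>& h, int size, float roughness)
{
    // h is a size x size height grid with size = 2^n + 1.
    for (int step = size - 1; step > 1; step /= 2)
    {
        int half = step / 2;

        // Diamond step: the centre of each square becomes the average of its four corners plus noise.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y * size + x] = (h[(y - half) * size + (x - half)] +
                                   h[(y - half) * size + (x + half)] +
                                   h[(y + half) * size + (x - half)] +
                                   h[(y + half) * size + (x + half)]) * 0.25f + frand(roughness);

        // Square step: the centre of each diamond becomes the average of its (up to four) neighbours plus noise.
        for (int y = 0; y < size; y += half)
            for (int x = (y + half) % step; x < size; x += step)
            {
                float sum = 0.0f;
                int count = 0;
                if (x - half >= 0)   { sum += h[y * size + (x - half)]; ++count; }
                if (x + half < size) { sum += h[y * size + (x + half)]; ++count; }
                if (y - half >= 0)   { sum += h[(y - half) * size + x]; ++count; }
                if (y + half < size) { sum += h[(y + half) * size + x]; ++count; }
                h[y * size + x] = sum / count + frand(roughness);
            }

        roughness *= 0.5f;   // reduce the random displacement as the features get smaller
    }
}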

 

In the end, generating whatever kind of terrain you want comes down to combining different types of noise and how you choose to use the values they generate. Making it look good is a matter of fine-tuning the amplitudes, roughness factors, scales, etc. until you get something that looks appropriate and can be used in your environment.

 

Edit'd to stop perpetuating the fractal Brownian motion/perlin noise mistake. Thanks Bacterius.

 

-BiT




#5156153 Variance Shadow Mapping Shadow Brightness Issue

Posted by RnaodmBiT on 26 May 2014 - 06:04 PM

Okay, for the getting lighter problem, your issue is these lines:

float momentdistance = coord.z - moments.x;
float p_max = variance / (variance + momentdistance*momentdistance);

You can see that as momentdistance approaches 0, p_max approaches 1, giving you fully lit pixels. One simple mitigation would be to scale momentdistance up by some set value, but to put it simply, there will ALWAYS be some of this 'getting lighter' behaviour with variance shadow mapping; it's part of how the technique determines shadowing in the first place. In most cases it won't be an issue, because the object being shadowed will be thick enough that you won't notice the light section, since it will be hidden by the object itself, e.g. a crate.
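
If you do want to reduce how visible that lightening (and the related light bleeding) is, one common tweak from the VSM follow-up material is to clamp and rescale p_max so that small values count as fully shadowed. A rough C++-style sketch, where the 0.2 cutoff is just an arbitrary tuning value:

float linstep(float low, float high, float v)
{
    float t = (v - low) / (high - low);
    return t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
}

// Everything below 'cutoff' is treated as fully shadowed; above it, p_max is rescaled to 0..1.
float reduceLightBleeding(float p_max, float cutoff)
{
    return linstep(cutoff, 1.0f, p_max);
}

// usage: float shadow = reduceLightBleeding(p_max, 0.2f);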




#5155778 Variance Shadow Mapping Shadow Brightness Issue

Posted by RnaodmBiT on 24 May 2014 - 09:36 PM

The problem is the way variance shadow mapping determines shadow/light per pixel. It works from the difference between the pixel's depth and the mean occluder depth stored in the shadow map: the greater that difference, the stronger the shadowing term, but when the two get close together (which is exactly what happens as the shadow approaches the occluder) the shadowing term fades out.

 

I also noticed you are reading the shadow map depth data incorrectly. You need to store both the linear depth and the squared depth in the shadow map, in separate channels, and then sample both of them rather than reconstructing 'vec2(depth, depth*depth)' from a single sampled depth. That way the depth and depth2 values are each filtered independently, so depth * depth != depth2 in general. This is a key point for variance shadow mapping, because it is what allows you to blur the shadow map and get nice smooth shadows as a result, instead of the square edges you have shown.

 

As it stands in your code, if we simplify it you get 

float variance = moments.y - (moments.x * moments.x);

which is the same as

float variance = moments.y - moments.y; // ( = 0) this is why you sample both moments.x and moments.y from a depthmap
variance = max(variance, 0.0005); // ( = 0.0005)

and finally

float p_max = variance / (variance + momentdistance*momentdistance);
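
Written out as plain float math (a sketch, not your actual shader), the version where both moments come from the blurred shadow map looks something like this; moment1 and moment2 are the two channels sampled from the map, and minVariance is a small tuning constant:

// moment1 = filtered average of depth, moment2 = filtered average of depth * depth.
// Returns p_max, the Chebyshev upper bound on how lit this pixel can be.
float shadowContribution(float moment1, float moment2, float receiverDepth, float minVariance)
{
    if (receiverDepth <= moment1)
        return 1.0f;                          // in front of the average occluder: fully lit

    float variance = moment2 - moment1 * moment1;
    variance = variance > minVariance ? variance : minVariance;

    float d = receiverDepth - moment1;
    return variance / (variance + d * d);     // this is the p_max term from your code
}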



#5155036 Is it Possible to have a function operate on 2 different input structure

Posted by RnaodmBiT on 21 May 2014 - 05:49 AM

The simpler approach, instead of using templates, is to just define the function twice with two different parameter types. The compiler then picks the overload whose parameter type matches the argument you pass.

 

E.g.

 

void dostuff(PNT& var)
{
    var.pos = vector3(0, 0, 0);   // this overload is chosen when you pass a PNT
}

void dostuff(PNTWB& var)
{
    var.pos = vector3(1, 2, 3);   // this overload is chosen when you pass a PNTWB
}




#5139339 Needing some quick tips on how to organize my code

Posted by RnaodmBiT on 15 March 2014 - 05:55 PM

Quite simply, the best way (both short and long term) will be to learn the basics of classes and how to use them. They are not very complicated; you can probably learn the basics in a day or two. The key thing about classes is that generally you write them so that each one is responsible for a single task, e.g. a file parser or a graphics controller. You mentioned that you were having trouble keeping track of which variables were being used where, and this is another thing classes help with: nothing else can touch a class's member variables unless you let it, which means only that instance of the class can read/write them, saving a lot of confusion later on.
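
A tiny sketch of what that looks like (the names are made up): the class owns its own state, and the rest of the program can only go through the functions you choose to expose.

#include <fstream>
#include <string>

class FileParser
{
public:
    bool open(const std::string& filename);   // the only operations the rest of the code can use
    std::string nextToken();

private:
    std::ifstream file;    // internal state: nothing outside FileParser can read or modify these
    int currentLine;
};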

 

-BiT




#5135952 Help with lighting options

Posted by RnaodmBiT on 02 March 2014 - 07:13 PM

You are correct about the second point. The computational complexity would be #Lights x #Objects, but with a simpler shader this wouldn't be too much of a problem. As it is at the moment, the complexity of your shader is proportional to #Lights (because of the for loop) and you execute it once per object, so you are already doing that much work anyway; but you would remove branching in the shader by not requiring the for loop, and you could use a different pixel shader for each type of light, removing the need for another if statement. Furthermore, you can check whether the object to be rendered is within range of the light and simply exclude it from being drawn at all.
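
In pseudo-C++, that 'one simple pass per light, per object' structure with the range check looks roughly like this (Light, Object, distance and drawLit are placeholders for whatever your engine provides):

// Additive blending is assumed, so each light's contribution accumulates in the framebuffer.
for (const Light& light : lights)
{
    for (const Object& object : objects)
    {
        // Skip objects the light can't meaningfully reach.
        if (distance(light.position, object.position) > light.range + object.boundingRadius)
            continue;

        drawLit(object, light);   // simple single-light shader, picked per light type
    }
}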

 

You say the problem only occurs when you are close to the object. This is most likely because more pixels fill the screen, so the pixel shader executes more times, so more time is spent on computation. I would suggest you disable vsync (if you haven't already); then you will probably notice the performance differences and how they change.

 

-BiT




#5129464 Help with hlsl

Posted by RnaodmBiT on 06 February 2014 - 07:44 PM

As for the side topic here: when using a 4x4 matrix, always use a 4-component vector. Even if the compiler/language allows mixing 3- and 4-component sizes, sticking to 4 will help avoid confusion. You might want to do some reading on the construction of world matrices (scaling, rotation, and translation of the mesh) so that you understand why we use 1.0/0.0 in the w component. This free course might be quite beneficial for you: https://www.udacity.com/course/cs291

 

Good Luck




#5129412 Help with hlsl

Posted by RnaodmBiT on 06 February 2014 - 03:21 PM

I could be wrong, and it may be exactly what you intend, but it looks like your normal calculation is incorrect. You might want to change it to:

float4(vsIn.normalL, 0.0f)

note: the 1.0f changed to a 0.0f

 

This stops the matrix multiplication from translating the vsIn.normalL value as it would a position, and only rotates it, as is appropriate for a direction.
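
A quick way to see why: with a plain translation matrix, the translation column is multiplied by w, so it only affects vectors with w = 1. A minimal C++ sketch (row-major, hand-rolled types, nothing from your code):

#include <cstdio>

// Multiply a 4x4 row-major matrix by a column vector (x, y, z, w).
void transform(const float m[16], const float v[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
        out[row] = m[row * 4 + 0] * v[0] + m[row * 4 + 1] * v[1] +
                   m[row * 4 + 2] * v[2] + m[row * 4 + 3] * v[3];
}

int main()
{
    // Translation by (5, 0, 0), no rotation or scale.
    float translate[16] = { 1, 0, 0, 5,
                            0, 1, 0, 0,
                            0, 0, 1, 0,
                            0, 0, 0, 1 };

    float position[4]  = { 1, 2, 3, 1 };   // w = 1: a point, so the translation applies
    float direction[4] = { 0, 0, 1, 0 };   // w = 0: a direction (e.g. a normal), translation is ignored

    float p[4], d[4];
    transform(translate, position, p);     // -> (6, 2, 3, 1)
    transform(translate, direction, d);    // -> (0, 0, 1, 0)
    printf("point: %g %g %g | direction: %g %g %g\n", p[0], p[1], p[2], d[0], d[1], d[2]);
    return 0;
}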




#5128966 Space RTS design quandary!

Posted by RnaodmBiT on 05 February 2014 - 04:41 AM

Hi,

 

It might help if we knew what challenges the player has to test themselves against.

 

My first thought is that building ships takes resources... metals, plastics, fuel, electronics, etc. So you could deploy robotic mining facilities that send you these resources at some rate. This would mean that in order to build better ships you either have to capture more systems or wait longer.

 

If we strip away the thematic elements it becomes a simpler problem: you are essentially looking for a reason for the player to care about their territory on the map. The simple answer (as stated above) is to tie the resources used for progression to the amount of territory they hold. If they want to progress faster, they must take more territory. Something along these lines might suit your game.

 

Good luck




#5117965 Deferred shading and point light range issue

Posted by RnaodmBiT on 18 December 2013 - 04:48 PM

vec4 CalcPointLight(vec3 worldPos, vec3 normal)
{
    vec3 positionDiff = (UnifShadingPass.mLightPosOrDir.xyz - worldPos);

    float dist = length(positionDiff);

    float attenuation = clamp(1.0 - (dist / UnifShadingPass.mMaxDistance), 0.0, 1.0);
    vec3 lightDir = normalize(positionDiff);
    float angleNormal = clamp(dot(normalize(normal), lightDir), 0, 1);

    return angleNormal * attenuation * UnifShadingPass.mLightColor;
}

That *should* do it. Note how the formula is a little different; the clamp also tidies things up a bit. As for your falloff, you will need to come up with some kind of function that gives you the curve you want.




#5117946 Deferred shading and point light range issue

Posted by RnaodmBiT on 18 December 2013 - 03:17 PM

It is because you are calculating your light strength at each pixel as 1/r^2, which is physically correct, but combined with the point light volumes you are using it produces the issue shown. There are two options:

1) Change the light strength to 1 - dist / maxDist, which gives a linear falloff out to the edge of the light's 'radius' but isn't physically correct.

2) Change the light volume from a sphere at the light's position to a fullscreen quad. This calculates the light strength for ALL pixels, which is physically accurate since light strength follows an inverse-square relationship (1/r^2). Because 1/r^2 never actually reaches 0, the point light will still have a tiny effect on pixels extremely far away and a large effect on very close ones.
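
Side by side, the two falloff options look like this in plain C++ float math (maxDist is the radius of the light volume; note how the inverse-square version never quite reaches zero, which is exactly why it clashes with a finite sphere volume):

// Option 1: linear falloff, reaches exactly 0 at maxDist, so it matches a sphere light volume.
float attenuationLinear(float dist, float maxDist)
{
    float a = 1.0f - dist / maxDist;
    return a > 0.0f ? a : 0.0f;
}

// Option 2: physically based inverse-square falloff; only consistent if every pixel is shaded (fullscreen quad).
float attenuationInverseSquare(float dist)
{
    return 1.0f / (0.0001f + dist * dist);   // small constant avoids a divide by zero at the light's centre
}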

 

I hope that explains your issue.





