RnaodmBiT

Members
  • Content count

    53
Community Reputation

1096 Excellent

About RnaodmBiT

  • Rank
    Member
  1. Fleet Composition (a little experiment!)

    Not strictly answering your question here, but a while ago I found this and thought it might be relevant for you all.
  2. To mirror or not to mirror?

    Although I'm not much of a modeller myself, it occurs to me that if you want to keep the mirror modifier while still having asymmetrical features, you could use something like the Boolean modifier to 'subtract' the ejection port out of the slide.
  3. Well, if you want to keep the information at that level of detail, you need to keep the same texture resolution; that's literally what 'resolution' means: how well you can resolve the detail. If you provide more information about the problem you are trying to solve, maybe we could suggest some alternatives?
  4. Thanks unbird, that's exactly what I was looking for.   For anyone else who is interested, I found that D3D11_SHADER_DESC::Version = 65600 for a vertex shader and 64 for a pixel shader; note that I'm using vs_4_0 and ps_4_0.   If anyone can find a more comprehensive answer than that, it'd be more than welcome.
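    Those Version values aren't arbitrary: the upper 16 bits of D3D11_SHADER_DESC::Version encode the program type (this is what the D3D11_SHVER_GET_TYPE macro from d3d11shader.h extracts), and the lower bits encode the shader model. A minimal sketch of the decoding, checked against the two values reported above:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // The upper 16 bits of D3D11_SHADER_DESC::Version hold the program type;
    // the lower bits hold the shader model (major/minor). Equivalent to the
    // D3D11_SHVER_GET_TYPE macro in d3d11shader.h.
    enum ShaderType { PixelShader = 0, VertexShader = 1, GeometryShader = 2 };

    constexpr uint32_t shader_type(uint32_t version) {
        return (version >> 16) & 0xFFFF;
    }

    int main() {
        assert(shader_type(65600) == VertexShader); // 0x10040: vs_4_0, as reported above
        assert(shader_type(64)    == PixelShader);  // 0x00040: ps_4_0, as reported above
        return 0;
    }
    ```

    So reflecting the blob and testing the type field avoids the try-CreateVertexShader-and-catch approach entirely.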
  5. Hey guys,

    I am currently trying to automatically load a directory full of compiled shader objects (*.cso) built at compile time by MVS2013. The problem I have is that I seem to be unable to determine whether a CSO is a vertex shader or a pixel shader by any means other than calling CreateVertexShader and seeing if it fails. While this works, it is certainly not a desirable solution, and it clogs up the output with debug messages and first-chance exceptions.

    I've been searching around and found out about shader reflection, but the documentation on that is far from complete. Does anyone have a nice solution to this? I'd like to avoid particular file naming conventions or directory structures if possible. This seems like a trivial problem and I've probably just overlooked something really simple.

    Thanks
  6. [SOLVED] Where to get a Font Sheet for Direct3D 11 use?

    Yes, the details on what you get from the .fnt file are described in the link I posted above.
  7. [SOLVED] Where to get a Font Sheet for Direct3D 11 use?

    When you ask for a font sheet, you need to be specific. There are several formats for identifying characters used in computers, but the most common are ASCII, UTF-8 (which encompasses ASCII), and UTF-16. UTF-16 can encode all 1,112,064 possible Unicode characters (think about all the different languages like English, Arabic, Chinese, Japanese, etc.), so imagine trying to fit all that into a single sheet! BMFont allows you to select only the characters that you will need in your game, thus reducing the size of the textures needed to store the required glyphs.

    When using BMFont, you select which characters you want to export; sometimes you might only want some from a certain set, i.e. ASCII, or sometimes you want an extended set if using wide-character (i.e. UTF-16) encoding. So you select which characters you want, probably just the basic Latin characters, and they will be highlighted. Then you set the exporter options as to which descriptor file format you want (Text, XML, or Binary), and the character descriptions will be exported. Another file (possibly more than one) will be created as well; these are the glyph pages and will be in whatever texture format you selected (DDS, PNG, or TGA). When using these files, you open and parse the .fnt font descriptor file, which tells you basic info about the font settings (i.e. line height, texture settings, etc.). Then you load info about each individual glyph: which character value it corresponds to, which texture page it is located on, its texture coordinates and size, and how much to advance the drawing position when moving to the next character. These .fnt descriptors might also include kerning data, which adjusts the spacing between specific character pairs so that the font renders with nicer-looking spacing.

    Everything about the program and the file formats can be found in the documentation, including examples on how to render some text, here: http://www.angelcode.com/products/bmfont/documentation.html

    As a proof of concept, I have actually just completed a font renderer using the binary file format generated by BMFont.

    Note: Apologies if I have a mistake about wide-character encoding, I mostly just stick to the ASCII set.
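    For what it's worth, a glyph entry from BMFont's text-format descriptor can be parsed with little more than sscanf. This is a minimal sketch covering only the "char" lines (no kerning pairs, no "info"/"common" header fields); the field names follow the BMFont file-format documentation linked above:

    ```cpp
    #include <cassert>
    #include <cstdio>

    // One glyph entry from a BMFont text-format .fnt descriptor.
    // Field names follow the "char" tag in the BMFont documentation.
    struct Glyph {
        int id;                // character code this glyph represents
        int x, y, w, h;        // position and size within the texture page
        int xoffset, yoffset;  // offset to apply when drawing the glyph
        int xadvance;          // pen advance to the next character
        int page;              // which texture page the glyph lives on
    };

    // Parse a single "char ..." line; returns true on success.
    // (A literal space in the format string matches any run of whitespace,
    // so BMFont's column-aligned padding is handled automatically.)
    bool parse_char_line(const char* line, Glyph& g) {
        return std::sscanf(line,
            "char id=%d x=%d y=%d width=%d height=%d "
            "xoffset=%d yoffset=%d xadvance=%d page=%d",
            &g.id, &g.x, &g.y, &g.w, &g.h,
            &g.xoffset, &g.yoffset, &g.xadvance, &g.page) == 9;
    }

    int main() {
        Glyph g{};
        const char* line = "char id=65 x=2 y=2 width=20 height=22 "
                           "xoffset=-1 yoffset=3 xadvance=19 page=0 chnl=15";
        assert(parse_char_line(line, g));
        assert(g.id == 65 && g.xadvance == 19); // 'A'
        return 0;
    }
    ```

    The binary format carries the same fields, just packed into fixed-size records, so the same Glyph struct works for both.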
  8. Minecraft Terrain Generator

    I would also like to mention the use of sampling a 3D noise function with fBm to generate a 'density' function (even just combining several 2D samples can look good). If you then subtract a threshold value from the function, you can say that anywhere the density is less than 0 you have open space, while greater than 0 indicates solid ground. This can be used to generate voxel fields or as input for the marching cubes algorithm to generate some nice terrain, with the advantage of being able to specify open spaces 'underground', i.e. caves.   -BiT
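    To make the density idea concrete, here is a minimal sketch. The hash-based noise below is just a cheap stand-in for a real 3D Perlin/simplex function, and the constants (ground height, noise amplitude) are made up for illustration; only the thresholding logic is the point:

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Cheap hash-based value noise in [-1, 1]; a stand-in for real 3D
    // Perlin/simplex noise. Any noise source works with the same scheme.
    float hash_noise(int x, int y, int z) {
        uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u
                   + uint32_t(z) * 2147483647u;
        h = (h ^ (h >> 13)) * 1274126177u;
        return (h & 0xFFFFFF) / float(0xFFFFFF) * 2.0f - 1.0f;
    }

    // Density field: positive below the nominal ground plane, negative
    // above it, perturbed by noise so caves can open up underground.
    float density(int x, int y, int z) {
        const float ground = 8.0f; // illustrative base terrain height
        return (ground - y) + 4.0f * hash_noise(x, y, z);
    }

    // A cell is solid ground when density > 0, open space otherwise.
    bool is_solid(int x, int y, int z) {
        return density(x, y, z) > 0.0f;
    }

    int main() {
        assert(is_solid(0, 0, 0));    // far below ground: solid regardless of noise
        assert(!is_solid(0, 20, 0));  // far above ground: always open air
        return 0;
    }
    ```

    Sampling is_solid over a grid gives a voxel field directly; feeding the raw density values to marching cubes instead gives a smooth isosurface.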
  9. Elements of Minecraft

    I find it quite enjoyable that I can hop on with a few friends when we feel the urge to go on an adventure, build a fortress, and explore faraway lands. Unfortunately I don't last that long when it's down to just building things, but that's just my take on it.
  10. WASD or Mouse movement - Top Down Stealth

    Why not have both and just have the option to change control mappings?
  11. Minecraft Terrain Generator

    The idea of using noise to generate game terrain has been around for a while, and making it look decent is one of the most challenging and, in my mind, most rewarding tasks I have encountered in game programming.

    http://freespace.virgin.net/hugo.elias/models/m_perlin.htm has a very good introduction to what fractal Brownian motion is (the page mislabels it as Perlin noise) and how you can make it, with examples in simple pseudo-code. Take note of the idea of using several 'octaves' of the same noise signal and summing them with different amplitudes. Higher-frequency noise (lots of change for a small change in sample position) makes rough/spiky ground, while lower-frequency noise (much less change for a large change in sample position) gives very smooth terrain on the same scale. If you take the low-frequency noise and multiply it by a large number, you get mountains and valleys; then by adding a small amount of higher-frequency noise you get smaller details in the mountains, like small bumps. If you do this with enough octaves and appropriate amplitude values, you can generate interesting-looking terrain.

    A similar technique called the diamond-square algorithm can also be used to generate interesting terrain (I prefer this for the close-in detail). An example can be found at http://www.gameprogrammer.com/fractal.html. An interesting thing to note about this one is that if you set the first level or two's values yourself, you can force the algorithm to fill in the blanks of a terrain whose vague shape you have already set. That is, if you lowered some of the points, a valley/river/depression would form there, and the close-in detail would be randomly generated for you. Using this idea, you could actually use the fBm noise above to set heights every kilometre, for example, then use diamond-square to fill in the higher-resolution details down to a 1 m scale.

    In the end, generating whatever kind of terrain you want is all about combining different types of noise and how you choose to use the values they generate. Making it look good is the result of fine-tuning the amplitudes, roughness factors, scales, etc. until you get something that looks appropriate and can be used in your environment.

    Edited to stop perpetuating the fractal Brownian motion/Perlin noise mistake. Thanks Bacterius.

    -BiT
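    The octave-summing idea can be sketched in a few lines. The 1D value noise below is a hashed stand-in for proper Perlin noise; the fbm function is the part described above (doubling the frequency and scaling the amplitude by a 'persistence' factor each octave):

    ```cpp
    #include <cassert>
    #include <cstdint>
    #include <cmath>

    // Hash-based 1D lattice noise in [-1, 1]; a stand-in for Perlin noise.
    float lattice(int x) {
        uint32_t h = uint32_t(x) * 2654435761u;
        h = (h ^ (h >> 15)) * 2246822519u;
        return (h & 0xFFFF) / 65535.0f * 2.0f - 1.0f;
    }

    // Smoothly interpolated value noise between lattice points.
    float value_noise(float x) {
        int i = int(std::floor(x));
        float t = x - i;
        t = t * t * (3.0f - 2.0f * t); // smoothstep fade
        return lattice(i) + t * (lattice(i + 1) - lattice(i));
    }

    // Fractal Brownian motion: sum several octaves of the same noise, each
    // at double the frequency and a fraction (persistence) of the amplitude.
    float fbm(float x, int octaves, float persistence) {
        float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            sum += amplitude * value_noise(x * frequency);
            amplitude *= persistence; // low octaves dominate the large shapes
            frequency *= 2.0f;        // high octaves add the small bumps
        }
        return sum;
    }

    int main() {
        // With persistence 0.5 the amplitudes sum to 1+0.5+0.25+... < 2,
        // so every height stays inside (-2, 2).
        for (int i = 0; i < 100; ++i) {
            float h = fbm(i * 0.137f, 5, 0.5f);
            assert(h > -2.0f && h < 2.0f);
        }
        return 0;
    }
    ```

    Raising the persistence makes the terrain rougher (high octaves contribute more); lowering it makes it smoother, which is exactly the amplitude tuning described above.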
  12. Increasing the blur might help you, but really, it's down to you; the old advice 'if it looks right, it is right' certainly applies here.   It might be that in the end you decide variance shadow mapping is not the best option if you have lots of thin geometry, just as phil_t did. It's really down to you; I've just tried to explain how the algorithm works and why you are seeing what you are, so that you can figure out how you want to go about dealing with it.   Good luck
  13. Okay, for the getting-lighter problem, your issue is these lines:

    float momentdistance = coord.z - moments.x;
    float p_max = variance / (variance + momentdistance*momentdistance);

    You can see that as momentdistance approaches 0, p_max will approach 1 / 1, giving you fully lit pixels. One simple solution would be to scale momentdistance up by some set value, but, to put it simply, there will ALWAYS be this 'getting lighter' problem when using variance shadow mapping; that's part of how it determines shadowing in the first place. In most cases this won't be an issue, as the object being shadowed will be thick enough that you won't notice the light section, since it will be hidden by the object itself, i.e. a crate.
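    To see the behaviour numerically, here is that p_max term evaluated at two moment distances, using the 0.0005 variance floor from the clamp in the code above:

    ```cpp
    #include <cassert>

    // The VSM upper bound from the lines above:
    // p_max = variance / (variance + d^2), where d = coord.z - moments.x.
    float p_max(float variance, float moment_distance) {
        return variance / (variance + moment_distance * moment_distance);
    }

    int main() {
        const float variance = 0.0005f; // the clamped minimum from the post

        // Far behind the occluder: strongly shadowed.
        assert(p_max(variance, 0.5f) < 0.01f);

        // Right next to the occluder, d -> 0, so the term tends to 1
        // (fully lit): the 'getting lighter' artifact described above.
        assert(p_max(variance, 0.001f) > 0.3f);
        return 0;
    }
    ```

    Scaling momentdistance up before squaring just pushes the lit band closer to the occluder; it never removes it, which is why the artifact is inherent to the method.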
  14. The problem is the way variance shadow mapping determines shadow/light per pixel. It takes the difference between the (linear depth)^2 and the depth^2, and the greater the distance, the greater the shadowing term; but when they get close together (which is what happens when the shadow approaches the occluder), the shadowing term gets lower.

    I also noticed you are getting the shadow map depth data incorrectly. You need to store both the linear and the squared depth in the shadow map, in separate channels. Then you can sample them both instead of computing 'vec2(depth, depth*depth)'. This means that the depth and depth2 values will be linearly sampled and that depth * depth != depth2 in most cases. This is a key point for variance shadow mapping, because it allows you to apply a blur to the shadow map and get nice smooth shadows as a result, instead of the square edges you have shown.

    As it stands in your code, if we simplify it, you get:

    float variance = moments.y - (moments.x * moments.x);

    which is the same as

    float variance = moments.y - moments.y; // ( = 0) this is why you sample both moments.x and moments.y from a depth map
    variance = max(variance, 0.0005); // ( = 0.0005)

    and finally

    float p_max = variance / (variance + momentdistance*momentdistance);
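    A tiny numeric sketch of why the two moments must be stored (and blurred) separately: averaging depth and depth^2 as independent channels across a shadow edge yields a nonzero variance, while recomputing depth*depth from the already-blurred depth always gives zero. The two texel depths here are made-up example values:

    ```cpp
    #include <cassert>
    #include <cmath>

    int main() {
        // Two neighbouring shadow-map texels at different depths,
        // as you would find across a shadow edge.
        float d0 = 0.3f, d1 = 0.7f;

        // A blur averages each stored channel independently.
        float mean_d  = 0.5f * (d0 + d1);            // moments.x after the blur
        float mean_d2 = 0.5f * (d0 * d0 + d1 * d1);  // moments.y after the blur

        // Nonzero variance across the edge -> a soft penumbra.
        float variance = mean_d2 - mean_d * mean_d;
        assert(std::fabs(variance - 0.04f) < 1e-5f);

        // Computing depth*depth from the blurred depth instead makes
        // moments.y == moments.x * moments.x, so the variance is always
        // zero: the bug described above, and the cause of hard edges.
        float naive = (mean_d * mean_d) - (mean_d * mean_d);
        assert(naive == 0.0f);
        return 0;
    }
    ```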
  15. The simpler version, instead of using templates, is to just define the function twice with two different parameter types. Then, when the compiler links the program, it picks the overload whose parameter type matches. I.e.:

    void dostuff(PNT var) {
        var.pos = vector3(0, 0, 0);
    }

    void dostuff(PNTWB var) {
        var.pos = vector3(1, 2, 3);
    }