

Member Since 15 Aug 2005

#5285063 Create holes in plane dynamically

Posted by xycsoscyx on 04 April 2016 - 12:54 PM

Try searching for information about Constructive Solid Geometry (CSG).  That lets you do boolean operations on shapes, so you could set up a ground mesh and then subtract the spheres from it.  You could also look into voxels: set up a big array of voxels for the terrain, then just start removing voxels as the user digs.  With voxels you'd still need some way of calculating a mesh from the voxel data (marching cubes is the classic approach), but there should be a lot of information about that online as well.
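For the voxel route, a minimal sketch of the "dig" step might look like this (all names hypothetical; a dense grid of flags stands in for the terrain, and meshing is left out):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A minimal, hypothetical dense voxel grid: every cell starts solid,
// and "digging" a sphere just clears the cells inside its radius.
struct VoxelGrid {
    int w, h, d;
    std::vector<char> solid;                       // 1 = solid, 0 = empty
    VoxelGrid(int w_, int h_, int d_)
        : w(w_), h(h_), d(d_), solid(std::size_t(w_) * h_ * d_, 1) {}

    char& at(int x, int y, int z) {
        return solid[(std::size_t(z) * h + y) * w + x];
    }

    // Carve a sphere of the given radius centered at (cx, cy, cz).
    void digSphere(int cx, int cy, int cz, int r) {
        for (int z = cz - r; z <= cz + r; ++z)
            for (int y = cy - r; y <= cy + r; ++y)
                for (int x = cx - r; x <= cx + r; ++x) {
                    if (x < 0 || y < 0 || z < 0 || x >= w || y >= h || z >= d)
                        continue;
                    int dx = x - cx, dy = y - cy, dz = z - cz;
                    if (dx * dx + dy * dy + dz * dz <= r * r)
                        at(x, y, z) = 0;           // remove the voxel
                }
    }
};
```

Each dig then marks the affected region dirty so the mesher only has to rebuild that chunk.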

#5284663 SpriteFont render Umlaute

Posted by xycsoscyx on 01 April 2016 - 03:51 PM

It looks like it's auto-converting to UTF-8 but doing straight character comparisons.  There's a lot going on under the hood when working with the STL, regex, etc., including dealing with locales.  Here's a good example of setting the locale to support UTF-8:  http://stackoverflow.com/questions/11254232/do-c11-regular-expressions-work-with-utf-8-strings


char strings are typically simple byte strings, while UTF-8 is an encoding.  You can store a UTF-8 string in a std::string, but at that point it's not really human-readable anymore.  A lot of editors know the string is UTF-8 and auto-convert when displaying the value, but if you look at the raw memory you'll see that it's actually a buffer of UTF-8 bytes under the hood.  That's what's happening with äüöß becoming Ã¤Ã¼Ã¶ÃŸ: Ã¤Ã¼Ã¶ÃŸ doesn't make any sense when read directly, but the bytes in it are the UTF-8 encoding of äüöß.


EDIT: This is why your solution works: you're extending the regex to check the encoded UTF-8 byte values.  You won't need to do that if you set the locale as in the SO link, or if you switch to the wchar_t types (wregex/wstring).
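The wchar_t route can be sketched like this (a minimal example; the \u escapes are just ä, ü, ö, ß written portably):

```cpp
#include <cassert>
#include <regex>
#include <string>

// In a wstring each umlaut is a single wide character, so a plain
// character class matches it directly, without extending the regex
// to cover the UTF-8 byte sequences.
bool containsUmlaut(const std::wstring& text) {
    static const std::wregex umlauts(L"[\u00E4\u00FC\u00F6\u00DF]"); // ä ü ö ß
    return std::regex_search(text, umlauts);
}
```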

#5284621 SpriteFont render Umlaute

Posted by xycsoscyx on 01 April 2016 - 10:07 AM

Why are you trying to render umlauts to begin with?  Are you trying to support other languages in general, or just other European languages?  If you're just rendering ASCII, then you might just consider supporting English.  This means you don't need any extended characters and can use char and std::string normally.  If you're trying to support other languages in general, you should consider switching to Unicode instead; then you can use std::wstring and not have to worry about char/byte/etc. issues.  With your current implementation this becomes a bit tricky, as you'll probably need to start paging the textures, since a single texture wouldn't store all the glyphs that you need.


As for the issue you're seeing, it still looks like you're not even initializing the full character set.  You're initializing codes 32 to 154, which means you never create the glyphs for the umlauts.  In your draw function, since ü is code 252, it lies outside your range and you skip it.  You need to initialize a larger character set to properly support the extended characters (either initialize more characters, or map character codes to glyphs so you can support a specific set without needing junk glyphs in between the valid ones).
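The mapping approach could be sketched like this (all names hypothetical; the point is that a lookup table decouples character codes from glyph slots, so 252 doesn't force you to allocate glyphs 155-251):

```cpp
#include <cassert>
#include <unordered_map>

// Map from character code to glyph index. A lookup miss falls back to a
// placeholder glyph instead of the character being silently skipped.
struct GlyphTable {
    std::unordered_map<unsigned, int> indexOf;   // char code -> glyph index
    int next = 0;

    void add(unsigned code) { indexOf.emplace(code, next++); }

    int lookup(unsigned code) const {
        auto it = indexOf.find(code);
        return it != indexOf.end() ? it->second : 0; // 0 = placeholder glyph
    }
};

GlyphTable makeTable() {
    GlyphTable t;
    for (unsigned c = 32; c < 127; ++c) t.add(c);  // printable ASCII
    for (unsigned c : {228u, 246u, 252u, 223u})    // ä ö ü ß (Latin-1 codes)
        t.add(c);
    return t;
}
```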

#5284319 Performing Pan Tilt and Zoom using Orthographic camera

Posted by xycsoscyx on 30 March 2016 - 01:19 PM

For pan/tilt, the easiest thing to do is set up your view matrix as a 2D camera.  You can use a fixed look axis (typically the Z axis), since you're always looking towards your scene, and still move the translation/up/right vectors of the matrix.  Note that a view matrix is (essentially) an inverted world matrix.  Matrices move points from one space to another: the object matrix moves a point from object space to world space, and the view matrix moves it from world space to view space.  World space has some arbitrary center at (0,0,0), while view space is centered on the viewer itself.  So the view matrix ultimately subtracts the viewer's world space position to re-center everything around the viewer.


For panning, you just move the camera's position left/right/up/down.  This gives you a more intuitive way of panning, since it really is just moving the camera around the scene.


The same holds true for tilt: if you rotate the camera, the view matrix rotation will be the inverse of that, and will rotate things into view space.  You can use Matrix.RotationZ to create a matrix that rotates only around the Z axis, so the camera still looks towards your scene (the Z axis doesn't change) while the up/right axes rotate around it.  This gives you a simple 2D rotation which will tilt your scene.


To sum up: move/rotate your camera around your scene, build a final world space matrix from the camera's translation/rotation, then invert that to get the view matrix.
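For the 2D case described above, the inverse has a closed form that can be sketched directly (a minimal example; for a rigid transform with rotation R and translation t, the inverse is R transposed and -R transposed times t):

```cpp
#include <cassert>
#include <cmath>

// "View matrix = inverse of the camera's world matrix" for a 2D camera
// that only translates and rotates around Z: subtract the viewer's
// position, then rotate by the negated camera angle.
struct Vec2 { float x, y; };

Vec2 worldToView(Vec2 p, Vec2 camPos, float camAngle) {
    float dx = p.x - camPos.x;          // subtract the viewer's position...
    float dy = p.y - camPos.y;
    float c = std::cos(-camAngle);      // ...then apply the inverse rotation
    float s = std::sin(-camAngle);
    return { c * dx - s * dy, s * dx + c * dy };
}
```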

#5284110 fading effect

Posted by xycsoscyx on 29 March 2016 - 02:59 PM

If you're using a pixel shader then you don't need to do any blending either: just take the final color of the pixel and blend it (or lerp, or get creative) with your fog color by the alpha value that you get from the sample code that albinopapa supplied.  If you want to get fancier than that, look into volumetric effects or atmospheric scattering as well; basic linear fog is really just the first step in shrouding your objects in mystery.
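The blend step itself is just a lerp; a minimal sketch in plain C++ (the shader version is the same arithmetic, and all names here are illustrative):

```cpp
#include <cassert>

// Given the shaded surface color and a fog amount in [0, 1],
// the fogged color is a per-channel linear interpolation.
struct Color { float r, g, b; };

Color applyFog(Color surface, Color fog, float fogAmount) {
    auto lerp = [](float a, float b, float t) { return a + (b - a) * t; };
    return { lerp(surface.r, fog.r, fogAmount),
             lerp(surface.g, fog.g, fogAmount),
             lerp(surface.b, fog.b, fogAmount) };
}
```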

#5283398 Instantiate class in external data without recompile all program (user C++ cl...

Posted by xycsoscyx on 25 March 2016 - 10:01 AM

There are a lot of ways of doing this, some simple, some complicated.  A simple way would be to define an interface for your objects, then have the derived classes compiled into DLLs with some kind of accessor.  Your main application only cares about the interfaces and operates solely on those.  Each DLL has a way to create an instance of its derived classes and return the base interface.  The main application loads the DLL, calls the creation function, and gets a new instance of the class as the base interface.  This abstracts the application from ever needing to know any of the implementations, and lets you add more implementations later.  With that, you could then specify the DLL/creation function in your XML, or even use a kind of smart scan in the application: pre-scan the external directory for DLLs, find all DLLs exporting your unique creation function that accepts a class name, then when you need a class, iterate over the creation functions with the requested name and see if any of them return a new instance.  After that there are a lot of ways to optimize it (not having to iterate over all creation functions, not keeping the external DLLs loaded during runtime, etc.), but that's up to how you want to architect the system.
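A minimal, portable sketch of that interface/factory pattern (all names illustrative; in the real system CreateObject would be exported from each DLL and resolved with LoadLibrary/GetProcAddress, here it's a plain function so the sketch stays self-contained):

```cpp
#include <cassert>
#include <memory>
#include <string>

// The interface the application compiles against. It never sees
// the concrete classes, only this.
struct IObject {
    virtual ~IObject() = default;
    virtual std::string name() const = 0;
};

// A concrete class that would live in a plugin DLL, not the app.
struct Enemy : IObject {
    std::string name() const override { return "Enemy"; }
};

// The one entry point every plugin exports: given a class name, return
// a new instance as the base interface, or nullptr if the name is unknown.
extern "C" IObject* CreateObject(const char* className) {
    if (std::string(className) == "Enemy") return new Enemy;
    return nullptr;
}
```

The application then iterates over every loaded plugin's CreateObject with the requested class name until one of them returns non-null.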

#5282665 Rendering multiple textures and draw text in DirectX11

Posted by xycsoscyx on 22 March 2016 - 10:27 AM

First, make sure you understand the different terms.  The viewport is a rectangle, and your entire projected scene will be scaled to fit inside your viewport.




From end to end (some steps are typically condensed with premultiplied matrices): your object space vertex times the object matrix gives your world space vertex, times the view matrix gives your view space vertex, times the projection matrix gives your clip space vertex, times the viewport transform (D3D effectively builds a matrix out of the viewport data) gives you your screen space vertex.
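That chain, and the premultiplication shortcut, can be sketched with plain row-vector matrix math (a minimal example using translation matrices; all names are illustrative):

```cpp
#include <array>
#include <cassert>

// Transforming a vertex through object, view, and projection matrices one
// at a time gives the same result as premultiplying them into one matrix.
using Mat4 = std::array<std::array<float, 4>, 4>;
using Vec4 = std::array<float, 4>;

Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Vec4 mul(const Vec4& v, const Mat4& m) {   // row vector * matrix (D3D style)
    Vec4 r{};
    for (int j = 0; j < 4; ++j)
        for (int k = 0; k < 4; ++k)
            r[j] += v[k] * m[k][j];
    return r;
}

Mat4 translation(float x, float y, float z) {
    Mat4 m{};
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0f;
    m[3][0] = x; m[3][1] = y; m[3][2] = z;
    return m;
}
```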


Your viewport is just where on your render target you want to draw the projected data.  For beginners (and probably most people, unless you're doing a lot of split screen rendering), you're probably just setting the viewport to your entire render target size.  This means your entire scene will be drawn onto your entire render target (the screen, if you're rendering to it directly).  For scrolling you won't be doing anything with the viewport; it just specifies what part of the render target you want to render onto.


To scroll, you want to modify the view matrix, because the view matrix is the camera.  You don't scroll by moving the world around the camera; you move the camera through the world.


As for the vertex positions, yes.  If you're using an orthographic projection, then the coordinates you specify when you create it are the extents of the view frustum (well, box, for orthographic).  The orthographic projection scales from view space (still in world units), with values in (left,top)-(right,bottom), to clip space, with values in (-1,1).  Since you were creating a 2x2 orthographic projection matrix before, your view coordinates were (0,0)-(2,2), so a point at (1,1) would be in the center.  If you're using an 800x600 sized orthographic projection matrix now, then a point at (1,1) will be at the top left corner.


Basically, if you're using an orthographic projection of screenWidth x screenHeight, then you would specify the vertex positions as the actual pixel coordinates you want to render at.  If you want a 50x50 box in the top left corner, it'd be (0,0)-(50,50).  If you want a 10x10 box in the center of the screen, it'd be (395,295)-(405,305).  If you want a texture with a 1:1 texel:pixel ratio in the top left corner, it'd be (0,0)-(texWidth,texHeight).
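The mapping a screenWidth x screenHeight orthographic projection performs can be sketched directly (a minimal example; names are illustrative):

```cpp
#include <cassert>

// Map pixel-style coordinates (0,0)-(w,h), with (0,0) at the top-left,
// to clip space (-1,1), with y flipped - which is exactly what a
// w x h orthographic projection does.
struct Ndc { float x, y; };

Ndc pixelToNdc(float px, float py, float w, float h) {
    return { px / w * 2.0f - 1.0f,          // 0..w  ->  -1..1
             1.0f - py / h * 2.0f };        // 0..h  ->   1..-1 (y down)
}
```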

#5280716 Rendering multiple textures and draw text in DirectX11

Posted by xycsoscyx on 11 March 2016 - 10:23 AM

Ah, so you just mean drawing two textures onto the screen?  You can just draw quads directly with each texture instead of setting the viewport per call:


1. set the viewport (probably the whole screen)

2. set the input layout and vertex

3. use an orthographic projection matrix for your transform

4. for each texture

   a. set the texture

   b. draw a quad where you want it on the screen


The quad that you draw will be placed wherever you want on the screen (easy to do with an orthographic projection, since you can use screen coordinates directly if you want), and the texture coordinates will just run 0-1 across and down the quad.  This will draw the entire texture on the quad, placing the quad wherever (and at whatever size) you want.
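Step 4b above can be sketched as a small quad builder (a minimal example; the vertex layout and names are illustrative):

```cpp
#include <array>
#include <cassert>

// One quad per texture, at a pixel rectangle, with texture coordinates
// running 0-1 across and down it. Vertices are in triangle-strip order.
struct Vertex { float x, y, u, v; };

std::array<Vertex, 4> makeQuad(float left, float top,
                               float right, float bottom) {
    return {{ { left,  top,    0.0f, 0.0f },
              { right, top,    1.0f, 0.0f },
              { left,  bottom, 0.0f, 1.0f },
              { right, bottom, 1.0f, 1.0f } }};
}
```

With the screenWidth x screenHeight orthographic projection from step 3, makeQuad(0, 0, 50, 50) lands exactly on the top-left 50x50 pixels.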

#5275438 Note to self

Posted by xycsoscyx on 12 February 2016 - 12:48 PM

Good thing QA "noticed" it


Well they noticed that uninstall was taking a lot longer than usual, but they kept just reverting to snapshots so it took a little while to actually realize what was happening.  XD


What that really means is that I obviously didn't test that change out myself or my dev box would have been bricked.  Note to self, don't test stuff or you may end up bricking your dev box!

#5275056 Note to self

Posted by xycsoscyx on 09 February 2016 - 04:29 PM

Fortunately we had good QA at a previous job, but once I was using SHDeleteFile and apparently instead of deleting a specific directory, I was deleting EVERY directory!




QA noticed it during install once Windows started throwing up errors.  Apparently not all critical files are locked at all times, but when it needs them and they aren't there then it's not a good thing.  XD

#5274477 real time grass

Posted by xycsoscyx on 05 February 2016 - 10:56 AM

Have you searched here yet?  


http://www.gamedev.net/topic/645080-good-way-to-manage-grass-rendering/, which leads to a good article specifically on grass rendering here: http://www.kevinboulanger.net/grass.html (which a few other posts actually link to as well).


GPU Gems 1 apparently has a good section on grass rendering too.

#5271358 When you realize how dumb a bug is...

Posted by xycsoscyx on 15 January 2016 - 05:21 PM

I was recently looking into spherical harmonics for lighting, so I read over a lot of source material and worked on a basic implementation.  I had something up and running that would generate the coefficients, then use those coefficients to generate the surface lighting.  For some reason the result kept being off; it was nothing like the source environment maps (the colors were drastically wrong).  Even more than that, no matter what I changed (or what inputs I used), it always came out the same!  It took me a couple of days of struggling to finally realize that despite implementing it in my lighting code, I never actually switched to the new code path.  It was still using the simple hemispherical lighting that I first implemented, no wonder it never changed!

#5262318 D3D12 - Mapping and coping all constant buffers in a sigle operation

Posted by xycsoscyx on 16 November 2015 - 03:46 PM

Here's a good article that covers what you're dealing with:




The basic idea is to use the discard flag with a smaller buffer (whatever maximum size you want).  This effectively lets you use a single buffer, remapping it between draw calls.  You fill the buffer with your first batch, then draw it.  Then you map it again with the discard flag, copy in the next set of instances, and draw that.  Rinse and repeat until you have a nice sheen.
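The shape of that loop can be sketched on the CPU side (a hedged analog, not real D3D code: the real version maps the buffer with D3D11_MAP_WRITE_DISCARD each round, and drawBatch stands in for the actual draw call; all names are illustrative):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A fixed-capacity "constant buffer" is filled, submitted, then reused
// for the next batch - the analog of map-with-discard between draws.
template <typename Instance, typename DrawFn>
std::size_t drawInBatches(const std::vector<Instance>& instances,
                          std::size_t bufferCapacity, DrawFn drawBatch) {
    std::vector<Instance> buffer;                  // the mapped region
    buffer.reserve(bufferCapacity);
    std::size_t drawCalls = 0;
    for (std::size_t i = 0; i < instances.size(); ++i) {
        buffer.push_back(instances[i]);            // copy in an instance
        if (buffer.size() == bufferCapacity || i + 1 == instances.size()) {
            drawBatch(buffer);                     // draw this batch
            buffer.clear();                        // "discard" and remap
            ++drawCalls;
        }
    }
    return drawCalls;
}
```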

#5145135 Let's say 3 2D Textures and a TextureCube walk into a bar?

Posted by xycsoscyx on 07 April 2014 - 12:44 PM

Since you're not setting textures, but instead shader resource views, it would be fine to add it as part of your array.  I assume you already create a shader resource view for all textures (2D and cube)?  If you did want to set it separately, your second call isn't correct though.  The first parameter is the starting slot, the second is the number of resources, and the third is the array of resources.  In your second call you are setting 0 resources starting at slot 0.  It should really be more like this:




This will set 1 resource in the next slot after the array.  Again though, as long as you create resource views for all the textures, you can set them all in one array and call PSSetShaderResources only once.
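The slot arithmetic can be illustrated with a mock that has the same shape as ID3D11DeviceContext::PSSetShaderResources(StartSlot, NumViews, ppViews); this is a stand-in for checking the parameter meaning, not real D3D code, and every name is illustrative:

```cpp
#include <array>
#include <cassert>

// Records which slots get bound, mimicking the StartSlot/NumViews
// semantics of PSSetShaderResources. Views are opaque stand-in pointers.
struct MockContext {
    std::array<const void*, 8> slots{};
    void PSSetShaderResources(unsigned startSlot, unsigned numViews,
                              const void* const* views) {
        for (unsigned i = 0; i < numViews; ++i)
            slots[startSlot + i] = views[i];
    }
};
```

With three 2D SRVs in an array, binding them at slot 0 and the cube at slot 3 (the next slot after the array) fills slots 0-3 with no gaps.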

#5036866 Why do it the easy way?

Posted by xycsoscyx on 26 February 2013 - 04:09 PM

How about something akin to what I find in some of my company's legacy code:



ASSERT(isActive); // We should never be inactive so lets just assert in debug, it'll be fine in release

return TRUE;