

Member Since 15 Aug 2005

#5300385 When you realize how dumb a bug is...

Posted by xycsoscyx on 12 July 2016 - 08:54 AM

So I was switching one of the libraries I use from a prebuilt DLL to a static library (pulling the source and building/linking against it directly).  Everything compiled and linked successfully, but whenever I ran it I would get crashes deep inside the standard runtime libraries!  I spent a couple of days building, rebuilding, and changing settings, frustratingly to no avail.  Finally (I should have done this sooner), I started following the include paths in the IDE and noticed it was still picking up the old prebuilt package's headers.  The functions all existed, so the build produced no errors, but a lot of the signatures and parameters had changed, so nothing was being called correctly!


Grr, note to self, when changing library versions, just rename the old directory (or delete it entirely) to make sure it's not getting included by accident.  XD

#5297695 Retrieving World Position in Deferred Rendering

Posted by xycsoscyx on 23 June 2016 - 08:16 AM

Are you sure you need the world position?  If you store linear depth, that is the view space depth (the distance from the camera along the view axis).  When rendering your lights, if you render a quad (or an extra large triangle) and use the corners as view vectors, you can multiply the interpolated view vector per pixel by the depth to get the view space position.  With this you can do all your lighting in view space: just transform your lights from world space to view space, and the lighting equations stay the same.


Here's the code to calculate the linear depth in the pixel shader; this is the first pass, which stores all the info into my g-buffers:

outputPixel.depthBuffer = (inputPixel.viewPosition.z / Camera::maximumDistance);

This just takes the z component (the depth) of the view space position and converts it to the [0,1] range.


Here's code to use that depth to get the view space position:

float3 getViewPosition(float2 texCoord, float depth)
{
    float2 adjustedCoord = texCoord;
    adjustedCoord.y = (1.0 - adjustedCoord.y);
    adjustedCoord.xy = (adjustedCoord.xy * 2.0 - 1.0);
    return (float3((adjustedCoord * Camera::fieldOfView), 1.0) * depth * Camera::maximumDistance);
}

float surfaceDepth = Resources::depthBuffer.Sample(Global::pointSampler, inputPixel.texCoord);
float3 surfacePosition = getViewPosition(inputPixel.texCoord, surfaceDepth);

This uses the texture coordinate, remaps it from [0,1] to [-1,1], then treats that as a vector away from the camera.


Just to finish it off, here's the code that renders a large triangle over the screen.  This is from Bill Bilodeau's vertex shader tricks presentation:


Pixel mainVertexProgram(in uint vertexID : SV_VertexID)
{
    Pixel pixel;
    pixel.texCoord = float2((vertexID << 1) & 2, vertexID & 2);
    pixel.position = float4(pixel.texCoord * float2(2.0f, -2.0f)
                                           + float2(-1.0f, 1.0f), 0.0f, 1.0f);
    return pixel;
}

You can just draw a simple three-vertex primitive with this shader; no vertex or index buffers are required, since it works entirely from the vertex ID.

#5293264 About resource files

Posted by xycsoscyx on 24 May 2016 - 03:48 PM

A type of ico in the RC file, and "ico" in the FindResource call, results in a custom resource type.  "ico" is not a predefined resource type (https://msdn.microsoft.com/en-us/library/ms648009(v=vs.85).aspx), which means the file is imported and compiled in as a plain binary blob in the resources.  That's why FindResource succeeds: there's no special handling going on, and you're matching the RC type exactly.


ICON and RT_ICON get their own special handling, and I've found that icons can be pretty tricky when used as resources.  Have you made sure that the icon is valid and supported (no odd sizes/layers/color depths/etc.)?  Open the icon in Visual Studio and make sure it loads and looks correct, and try actually saving it there as well.  Make sure you save it explicitly (Save As and overwrite, or save to a new name); otherwise it might not think it needs to save.

#5293116 About resource files

Posted by xycsoscyx on 23 May 2016 - 03:49 PM

Look up the Image class on MSDN; you'll see a list of functions you can use on it.  In particular, you should be able to use GetRawFormat to get the GUID of the image format that was loaded.


GetRawFormat: https://msdn.microsoft.com/en-us/library/windows/desktop/ms535393(v=vs.85).aspx

Image formats: https://msdn.microsoft.com/en-us/library/windows/desktop/ms534410(v=vs.85).aspx

#5285063 Create holes in plane dynamically

Posted by xycsoscyx on 04 April 2016 - 12:54 PM

Try searching for information about Constructive Solid Geometry (CSG).  That lets you do boolean operations on shapes, so you could set up a ground mesh and then subtract the spheres from it.  You could also look into voxels: set up a big array of voxels for the terrain, then just start removing voxels as the user digs.  With voxels you'd still need some way of calculating a mesh from the voxel data, but there should be a lot of information about that online as well.

#5284663 SpriteFont render Umlaute

Posted by xycsoscyx on 01 April 2016 - 03:51 PM

It looks like it's auto-converting to UTF-8 but doing straight character comparisons.  There's a lot going on under the hood when working with the STL, regex, etc., including dealing with locale.  Here's a good example of setting the locale to support UTF-8:  http://stackoverflow.com/questions/11254232/do-c11-regular-expressions-work-with-utf-8-strings


char strings are typically simple byte strings, while UTF-8 is an encoding.  You can store a UTF-8 string in a std::string, but at that point it's really not human readable anymore.  A lot of editors know the string is UTF-8 and auto-convert when displaying the value, but if you look at the raw memory you'll see it's actually a buffer of UTF-8 data under the hood.  That's what's happening with äüöß becoming Ã¤Ã¼Ã¶ÃŸ: Ã¤Ã¼Ã¶ÃŸ doesn't make any sense when read directly, but its bytes are the UTF-8 encoding of äüöß.


EDIT: This is why your solution works: you're extending the regex to check the encoded UTF-8 byte values.  You won't need to do that if you set the locale as in the SO link, or if you switch to wide character types (wregex/wstring).

#5284621 SpriteFont render Umlaute

Posted by xycsoscyx on 01 April 2016 - 10:07 AM

Why are you trying to render umlauts to begin with?  Are you trying to support other languages in general, or just other European languages?  If you're just rendering ASCII, then you might consider supporting only English; this means you don't need any extended characters and can use char and std::string normally.  If you're trying to support other languages in general, you should consider switching to Unicode instead; then you can use std::wstring and not have to worry about char/byte/etc. issues.  With your current implementation this becomes a bit tricky, as you'll probably need to start paging the textures, since a single texture won't store all the glyphs you need.


As for the issue you're seeing, it still looks like you're not initializing the full set of characters you need.  You're initializing from 32 to 154, which means you're not initializing the glyphs for the umlauts.  In your draw function, since ü is 252, it lies outside your range and you skip it.  You need to initialize a larger character set to properly support the extended characters (either initialize more characters, or map the characters so you can have a specific set without needing junk glyphs between valid ones).

#5284319 Performing Pan Tilt and Zoom using Orthographic camera

Posted by xycsoscyx on 30 March 2016 - 01:19 PM

For pan/tilt, the easiest thing to do is set up your view matrix as a 2D camera: you can use a fixed look axis (typically the Z axis), since you're always looking towards your scene, and still move the translation/up/right vectors of the matrix.  Note that a view matrix is (essentially) an inverted world matrix.  Matrices move points from one space to another: the object matrix moves points from object space to world space, and the view matrix moves them from world space to view space.  World space has some arbitrary center at (0,0,0), while view space is centered on the viewer itself.  So the view matrix ultimately subtracts the viewer's world space position to re-center everything around the viewer.


For panning, you just move the camera's position left/right/up/down/etc.  This gives you a more intuitive way of panning, since it's really just moving the camera around the scene.


The same holds true for tilt: if you rotate the camera, the view matrix rotation will be the inverse of that rotation, and will rotate things into view space.  You can use Matrix.RotationZ to create a matrix that rotates only around the Z axis, so the camera still looks towards your scene (the Z axis doesn't change) while the up/right axes rotate around it.  This gives you a simple 2D rotation that tilts your scene.


To sum up: move/rotate your camera around your scene, build a world space matrix from the camera's translation/rotation, then invert it to get the view matrix.

#5284110 fading effect

Posted by xycsoscyx on 29 March 2016 - 02:59 PM

If you're using a pixel shader then you don't need to do any blending either: just take the final color of the pixel and blend (or lerp, or get creative with) it against your fog color using the alpha value from the sample code albinopapa supplied.  If you want to get fancier than that, look into volumetric effects or atmospheric scattering as well; basic linear fog is really just the first step in shrouding your objects in mystery.

#5283398 Instantiate class in external data without recompile all program (user C++ cl...

Posted by xycsoscyx on 25 March 2016 - 10:01 AM

There are a lot of ways of doing this, some simple, some complicated.  A simple way would be to set up an interface for your objects, then have derived classes compiled into DLLs with some type of accessor.  Your main application only cares about the interfaces and operates solely on those.  Each DLL has a way to create an instance of its derived classes and return the base interface.  The main application loads the DLL, calls the creation function, and gets a new instance of the class as the base interface.  This abstracts the application from ever needing to know any of the implementations, and lets you add more implementations later.

With that, you could then specify the DLL/creation function in your XML, or even use a type of smart scan in the application: pre-scan the external directory for DLLs, find all DLLs exporting your unique creation function that accepts a class name, then, when you need a class, iterate over the creation functions with the requested class name and see if any of them return a new instance.  After that there are a lot of ways to optimize it (not having to iterate over all creation functions, not needing to keep the external DLLs loaded during runtime, etc.), but that's up to how you want to architect the system.

#5282665 Rendering multiple textures and draw text in DirectX11

Posted by xycsoscyx on 22 March 2016 - 10:27 AM

First, make sure you understand the different terms.  The viewport is a rectangle, and your entire projected scene will be scaled inside your viewport.




From end to end (some steps are typically condensed with premultiplied matrices): your object space vertex times the object matrix gives your world space vertex; times the view matrix gives your view space vertex; times the projection matrix gives your clip space vertex; times the viewport transform (D3D effectively builds a matrix out of the viewport data) gives your screen space vertex.


Your viewport is just where on your render target you want to draw the projected data.  For beginners (and probably for most people, unless you're doing a lot of split screen rendering), you're probably just setting the viewport to your entire render target size.  This means your entire scene will be drawn onto the entire render target (the screen, if you're rendering to it directly).  For scrolling you won't touch the viewport at all; the viewport just specifies what part of the render target to render onto.


To scroll, you want to modify the view matrix, because the view matrix is the camera.  You don't scroll by moving the world around the camera; you move the camera through the world.


As for the vertex positions, yes.  If you're using an orthographic projection, the coordinates you specify when you create it are the extents of the view frustum (well, box, for orthographic).  The orthographic projection scales from view space (still in world units), with values from (left,top) to (right,bottom), to clip space, with values from (-1,-1) to (1,1).  Since you were creating a 2x2 orthographic projection matrix before, your view coordinates were (0,0)-(2,2), so a point at (1,1) would be in the center.  If you're using an 800x600 orthographic projection matrix now, a point at (1,1) will be at the top left corner.


Basically, if you're using an orthographic projection of screenWidth x screenHeight, then you would specify the vertex positions as the actual pixel coordinates you want to render at.  If you want a 50x50 box in the top left corner, it'd be (0,0)-(50,50).  If you want a 10x10 box in the center of the screen, it'd be (395,295)-(405,305).  If you want a texture with a 1:1 texel:pixel ratio in the top left corner, it'd be (0,0)-(texWidth,texHeight).

#5280716 Rendering multiple textures and draw text in DirectX11

Posted by xycsoscyx on 11 March 2016 - 10:23 AM

Ah, so you just mean drawing two textures onto the screen?  You can just draw quads directly with each texture instead of setting the viewport per call:


1. set the viewport (probably the whole screen)

2. set the input layout and vertex

3. use an orthographic projection matrix for your transform

4. for each texture

   a. set the texture

   b. draw a quad where you want it on the screen


The quad that you draw will be placed wherever you want on the screen (easy to do with an orthographic projection, since you can use screen coordinates directly if you want), and the texture coordinates will just run 0-1 across and down the quad.  This draws the entire texture onto the quad, placing the quad wherever (and at whatever size) you want.

#5275438 Note to self

Posted by xycsoscyx on 12 February 2016 - 12:48 PM

Good thing QA "noticed" it


Well they noticed that uninstall was taking a lot longer than usual, but they kept just reverting to snapshots so it took a little while to actually realize what was happening.  XD


What that really means is that I obviously didn't test that change out myself or my dev box would have been bricked.  Note to self, don't test stuff or you may end up bricking your dev box!

#5275056 Note to self

Posted by xycsoscyx on 09 February 2016 - 04:29 PM

Fortunately we had good QA at a previous job, but once I was using SHDeleteFile and apparently instead of deleting a specific directory, I was deleting EVERY directory!




QA noticed it during install once Windows started throwing up errors.  Apparently not all critical files are locked at all times, but when it needs them and they aren't there then it's not a good thing.  XD

#5274477 real time grass

Posted by xycsoscyx on 05 February 2016 - 10:56 AM

Have you searched here yet?  


http://www.gamedev.net/topic/645080-good-way-to-manage-grass-rendering/, which leads to a good article specifically on grass rendering here: http://www.kevinboulanger.net/grass.html (which a few other posts actually link to as well).


GPU Gems 1 apparently has a good section on grass rendering too.