

Burnt_Fyr

Member Since 25 Aug 2009
Offline Last Active Yesterday, 03:40 PM

#5153075 How do I add libraries in Visual Studio 10?

Posted by Burnt_Fyr on 12 May 2014 - 09:15 AM

You will need to set up the library directory (Project -> Properties -> Configuration Properties -> VC++ Directories) if glaux is not in a directory already accessible by the project, as well as add the library as an additional dependency (Project -> Properties -> Configuration Properties -> Linker -> Input). If you are compiling glaux yourself, you can add that project to the existing solution and use Project -> Properties -> Common Properties -> Add New Reference instead.

 

If you are including files such as headers, you may need to set those up as well, either as an include path under VC++ Directories or through C/C++ -> General -> Additional Include Directories.

 

The above applies to VS2010, but should work in VS2013 unless everything has been changed again.
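
As a side note, the library can also be pulled in from source with MSVC's pragma instead of the Linker -> Input setting (assuming the file really is named glaux.lib on your system):

    // MSVC-specific: asks the linker to link glaux.lib without editing the
    // project settings; the library directory still has to be resolvable.
    #pragma comment(lib, "glaux.lib")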




#5152549 Two vertexShader cannot function simultaneously

Posted by Burnt_Fyr on 09 May 2014 - 09:58 AM

When you say the pixel shader has no effect, do you mean that nothing is rendered? Have you tried PIX to see what the vertices look like after the vertex shader has run?

 

EDIT: IIRC SetFVF will not work with the shader pipeline, only the FFP. For shaders, vertices need to be bound with VertexDeclarations, which WILL work with the FFP.
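
For reference, a minimal sketch of binding through a vertex declaration (assuming a position + diffuse-colour vertex layout, and that device is your IDirect3DDevice9*):

    // Layout: float3 position at offset 0, D3DCOLOR diffuse at offset 12.
    D3DVERTEXELEMENT9 elements[] = {
        { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        { 0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
        D3DDECL_END()
    };

    IDirect3DVertexDeclaration9* decl = NULL;
    device->CreateVertexDeclaration(elements, &decl);
    device->SetVertexDeclaration(decl); // in place of device->SetFVF(...)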

 

http://msdn.microsoft.com/en-us/library/windows/desktop/bb206335%28v=vs.85%29.aspx

 

EDIT2: I was wrong, FVF vertex buffers can be used with vertex shaders.




#5152353 Direct3D 11 Creating GDI compatible texture for backbuffer

Posted by Burnt_Fyr on 08 May 2014 - 12:08 PM

I suspect that the creation of the surface will fail with that flag active, though. Try it by all means, but last I recall, resources meant for shader binding don't play nice with GDI+.

I would assume that DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE actually forces the backbuffer (and the other buffers, if not just double buffering) to be created with the D3D10_RESOURCE_MISC_GDI_COMPATIBLE flag in its buffer description, internally.
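
If it helps, a rough sketch of how I'd expect that to be requested and then used for GDI drawing (assuming a BGRA backbuffer format, which GDI interop requires as far as I know, and that swapChain is your IDXGISwapChain*):

    DXGI_SWAP_CHAIN_DESC scd = {0};
    // ... fill in the usual buffer count / size / output window fields ...
    scd.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    scd.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    scd.Flags             = DXGI_SWAP_CHAIN_FLAG_GDI_COMPATIBLE;

    // Later, to draw GDI text onto the backbuffer before presenting:
    IDXGISurface1* surface = NULL;
    swapChain->GetBuffer(0, __uuidof(IDXGISurface1), (void**)&surface);
    HDC hdc = NULL;
    surface->GetDC(FALSE, &hdc);
    // ... TextOut(hdc, ...) and other GDI calls here ...
    surface->ReleaseDC(NULL);
    surface->Release();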

 

@BuckEye: glad you found the bug; how did you come by it? That section wasn't part of the original code you posted. Did the GDI text work before resizing the window/buffer, or was this something that was called before you had actually presented the first frame?




#5152045 Does swapchain->Present always stretch to the target?

Posted by Burnt_Fyr on 07 May 2014 - 08:43 AM

I think Jason's solution should work, i.e. present your back buffer and draw GDI on top afterwards, although to be honest I feel like an ant among giants in this thread. Have you tried rendering to your MSAA-able surface, and then blitting (if possible) or using it as a texture to draw a full-screen quad onto a GDI-compatible surface?

 

I tend to do what others have suggested and just use separate HWNDs and swap chains for child areas of the main window's client area.
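
For the MSAA route above, the resolve step in D3D11 would look roughly like this (a sketch; msaaTex and resolvedTex are assumed to be same-sized, same-format textures, and context is the immediate context):

    // Collapse the multisampled render target into a single-sample texture,
    // which can then be drawn as a full-screen quad into the GDI-compatible
    // (non-MSAA) backbuffer before Present().
    context->ResolveSubresource(resolvedTex, 0, msaaTex, 0,
                                DXGI_FORMAT_B8G8R8A8_UNORM);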




#5151256 Should I use NeHe tutorials?

Posted by Burnt_Fyr on 03 May 2014 - 03:24 PM

Just to give a different spin on it: yes, NeHe's tutorials are outdated, but if you are new to 3D rendering in general, you will still benefit from understanding how 3D objects are rendered, how the hardware works, etc. You will need to relearn modern APIs all the time, but the underlying theory will stick with you. If you are unfamiliar with Windows programming I suggest theForger's Win32 tutorials, and if you are unfamiliar with C++ in general, well, the answers above provide enough links already.




#5150805 DX9 with MRT, access violation

Posted by Burnt_Fyr on 01 May 2014 - 02:37 PM

A simple missed break in a switch statement, which allowed my vertex shader to be bound to both the vertex and pixel stages, was the issue. That cleared up the assert, and I'll be well on my way to multiple render targets shortly, with any luck.
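
For anyone curious, the bug looked roughly like this (all names made up for illustration):

    switch (shader->GetType())
    {
    case ShaderType::Vertex:
        device->SetVertexShader(shader->AsVertexShader());
        // break;   <- the missing break: execution fell through and the
        //             same shader object was also handed to the pixel stage
    case ShaderType::Pixel:
        device->SetPixelShader(shader->AsPixelShader());
        break;
    }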

 

EDIT: the other issue was trying to render a cube with 8 verts and 36 indices. Neither of these would have been a problem with my last stable version, as it would have identified the draw-primitive error, and my shaders were not based on an interface. In order to get a cleaner interface that will be mostly compatible with Direct3D 11, I'm rewriting much of how my code handles DX9 objects, and there are still many bugs that need to be patched. Regardless, I have the MRTs rendering, so now it's on to the second pass.




#5150176 algorithm for calculating angle between two points

Posted by Burnt_Fyr on 28 April 2014 - 03:04 PM

 


Correction: you would need to add pi, not 2pi, to transform an interval ranging from -pi..pi to the range 0..2pi. In general, to convert from a signed -n..n range to an unsigned 0..2n range, add half the range: 1n in this case, or 1pi in the situation above.

The transform I mentioned is not linear: with atan2 giving you angles in the range [+pi,-pi), where [+pi,0] is as desired but (0,-pi) should be (2pi,pi) instead, you have to add 2pi to all resulting angles less than 0. As pseudo code:

    result := angle < 0 ? 2pi+angle : angle   w/   angle := atan2(-x, z)
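
In C++, that mapping might look like this (a sketch, assuming +z is forward and the angle is measured counter-clockwise, hence the -x):

    #include <cmath>

    // Maps a negative atan2 result into [pi, 2pi) by adding a full turn,
    // giving a final angle in [0, 2pi).
    float YawFromLook(float x, float z)
    {
        const float twoPi = 6.2831853f;
        float angle = std::atan2(-x, z);
        if (angle < 0.0f)
            angle += twoPi;
        return angle; // radians; multiply by 180/pi if degrees are needed
    }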

 

Some day I'll learn not to spout my mouth off before coffee time :) Thanks for the correction on my correction.




#5150149 algorithm for calculating angle between two points

Posted by Burnt_Fyr on 28 April 2014 - 12:22 PM

First some clarifications: you seem to want to compute the angle from the principal z direction to the look direction vector, in the x-z plane, measured counter-clockwise. Notice that this is not the same as computing the angle between the difference vector from the origin to the camera position and the difference vector from the origin to the look-at position (as you do in the OP). Saying "the angle between" is not direction dependent, i.e. it gives the same result whether the look vector is, say, 45° to the left or 45° to the right of the axis. In such a case the dot product can be used, as phil_t has mentioned. However, in the OP you make a distinction between (+1,0,0) resulting in 90° and (-1,0,0) resulting in 270°, so the following method may be better suited...

 

The look vector is a direction vector. You can extract it from the camera matrix (notice that this is the inverse of the view matrix), or you can compute it as the difference from the camera position to the look-at position.
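
As a sketch with D3DX types (the variable names are illustrative):

    // From positions: direction from the camera to the point it looks at.
    D3DXVECTOR3 look = lookAtPos - camPos;
    D3DXVec3Normalize(&look, &look);

    // Or read it from the camera's world matrix (the inverse of the view
    // matrix): with D3D's row-vector convention the third row is the forward axis.
    D3DXVECTOR3 lookFromMatrix(camWorld._31, camWorld._32, camWorld._33);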

 

The atan2 function can be used to compute the angle (in radians) of the said look vector relative to the principal z axis, from its x and z components. The question is whether to use atan2(x, z) or atan2(z, x), and whether one of the components needs to be negated. Without having proven it, I assume atan2(-x, z) would do the job (please check twice).

 

The result of atan2 is in [+pi,-pi), so you need to add 2pi whenever the result is negative if you actually need [0,2pi) for some reason. Since you want the angle in degrees, you further have to convert it accordingly, of course.

Correction: you would need to add pi, not 2pi, to transform an interval ranging from -pi..pi to the range 0..2pi. In general, to convert from a signed -n..n range to an unsigned 0..2n range, add half the range: 1n in this case, or 1pi in the situation above.




#5149512 About Shadow Shadow Volume

Posted by Burnt_Fyr on 25 April 2014 - 05:24 PM

Does this happen only when the camera is inside the shadow? This happened in my game, although I didn't mind it much, but I think there's a way to fix it. But yeah, we need more information than that.

IIRC, this is only an issue with depth-pass shadow volumes; depth-fail (aka Carmack's reverse) should not suffer from this.
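
For reference, a sketch of the depth-fail stencil states in D3D9, using two-sided stencil so the volume renders in one pass (assuming colour and depth writes are disabled while drawing the volume, and that front faces are clockwise, D3D's default):

    device->SetRenderState(D3DRS_STENCILENABLE, TRUE);
    device->SetRenderState(D3DRS_TWOSIDEDSTENCILMODE, TRUE);

    // Front (CW) faces: decrement stencil where the depth test fails.
    device->SetRenderState(D3DRS_STENCILFUNC,  D3DCMP_ALWAYS);
    device->SetRenderState(D3DRS_STENCILZFAIL, D3DSTENCILOP_DECR);
    device->SetRenderState(D3DRS_STENCILPASS,  D3DSTENCILOP_KEEP);

    // Back (CCW) faces: increment stencil where the depth test fails.
    device->SetRenderState(D3DRS_CCW_STENCILFUNC,  D3DCMP_ALWAYS);
    device->SetRenderState(D3DRS_CCW_STENCILZFAIL, D3DSTENCILOP_INCR);
    device->SetRenderState(D3DRS_CCW_STENCILPASS,  D3DSTENCILOP_KEEP);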




#5149401 About Shadow Shadow Volume

Posted by Burnt_Fyr on 25 April 2014 - 09:31 AM

You will need to give us more than just a picture, most likely, if you want a solution.

 

I would start by debugging the vertices that make up your shadow volume, and find out what is going on with them in the shader when the camera moves. It looks to me like some of the shadow volume is being clipped against the near plane, but I'm just guessing here.




#5148990 Better way to make entities and map communicate in a Hierarchy Component System

Posted by Burnt_Fyr on 23 April 2014 - 10:51 AM

 

You could pass the Map object by reference:

 player->update(theMap);
 (for each Enemy) enemy->update(theMap);

There's really nothing wrong with passing objects through function calls to the objects that depend on them.

 

I thought about doing something like that, but I don't know, it just doesn't feel right to be passing everything through parameters. Perhaps I'm just too paranoid. I also had some problems with circular dependencies when I first tried doing that, but this is probably just some mess I made with the code.

 

 

I think the entities having a reference to the map, and vice versa, is the way to go. When the entity needs a path, it can query the map, which can use A* or other pathfinding to return the path to the entity. If your entity needs to query for the nearest enemy or whatnot, it can query the map, which already knows all entities. In an ECS, I would have a pathfinding system to handle the logic, which would have a reference to the map and use the entity's position component to get the starting information for the graph search.

 

I was thinking of something like that, like the entities storing a pointer to the current map and vice versa, though I feel like I'm going to have some headaches implementing that at first. I'll give it some more thought and research before deciding what I'm really going to do.

 

Thank you guys!

 

You may find this a good read




#5148936 worldspace or not, matrix & extents

Posted by Burnt_Fyr on 23 April 2014 - 06:35 AM

Bingo!




#5148858 Understanding a (d3dx) model matrix's content

Posted by Burnt_Fyr on 22 April 2014 - 06:19 PM

The first row is the object’s X vector, which points to the right of the object.
The second row is the object’s up vector.  In a primitive FPS game this will always be [0,1,0] for a player as the player is always standing upright.
The third row is the object’s Z vector, which points forward in front of the object.  It is the direction the object is facing.
The fourth row is the object’s position.
 
The first row is also scaled by the object’s X scale, and the second and third are scaled by the object’s Y and Z scales respectively.
 
Code to create a matrix from scratch (the easiest way to understand each component) is as follows:

    /** Creates a matrix from our data. */
    void COrientation::CreateMatrix( CMatrix4x4 &_mRet ) const {
        _mRet._11 = m_vRight.x * m_vScale.x;
        _mRet._12 = m_vRight.y * m_vScale.x;
        _mRet._13 = m_vRight.z * m_vScale.x;
        _mRet._14 = 0.0f;

        _mRet._21 = m_vUp.x * m_vScale.y;
        _mRet._22 = m_vUp.y * m_vScale.y;
        _mRet._23 = m_vUp.z * m_vScale.y;
        _mRet._24 = 0.0f;

        _mRet._31 = m_vForward.x * m_vScale.z;
        _mRet._32 = m_vForward.y * m_vScale.z;
        _mRet._33 = m_vForward.z * m_vScale.z;
        _mRet._34 = 0.0f;

        _mRet._41 = m_vPos.x;
        _mRet._42 = m_vPos.y;
        _mRet._43 = m_vPos.z;
        _mRet._44 = 1.0f;
    }
These are Direct3D matrices.


L. Spiro

 

In a left-handed system.




#5148723 Better way to make entities and map communicate in a Hierarchy Component System

Posted by Burnt_Fyr on 22 April 2014 - 07:39 AM

I think the entities having a reference to the map, and vice versa, is the way to go. When the entity needs a path, it can query the map, which can use A* or other pathfinding to return the path to the entity. If your entity needs to query for the nearest enemy or whatnot, it can query the map, which already knows all entities. In an ECS, I would have a pathfinding system to handle the logic, which would have a reference to the map and use the entity's position component to get the starting information for the graph search.
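
A rough sketch of that layout (all names here are illustrative, not from any particular engine):

    #include <vector>

    struct PositionComponent { float x, y; };
    struct PathComponent     { std::vector<PositionComponent> waypoints; };

    class Map {
    public:
        // The map owns the graph data, so the search (A* or otherwise) lives here.
        std::vector<PositionComponent> FindPath(const PositionComponent& from,
                                                const PositionComponent& to) const
        {
            return { from, to }; // placeholder; a real map would run A* here
        }
    };

    // The system, not each entity, holds the reference to the map and seeds
    // the search from the entity's position component.
    class PathfindingSystem {
    public:
        explicit PathfindingSystem(const Map& map) : m_map(map) {}

        void Update(const PositionComponent& pos, const PositionComponent& goal,
                    PathComponent& outPath) const
        {
            outPath.waypoints = m_map.FindPath(pos, goal);
        }

    private:
        const Map& m_map;
    };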




#5148537 Plane inside view frustum

Posted by Burnt_Fyr on 21 April 2014 - 08:35 AM

Knowing the view-space Z that you want your screen quad to sit at, we can use a bit of trig to calculate the corners. First get the half-height and half-width of the frustum cross-section at that distance: FOV/2 gives you the angle between the view direction and the top (or bottom) plane, so tan(fov/2) gives you the slope of that plane with respect to the horizontal. Multiply that by your Z distance to get the rise, which is the half-height of the quad, and multiply the half-height by the aspect ratio to get the half-width.

 

Each corner will be some combination of the half-height, the half-width, and the center of your plane, which can be found by multiplying your forward view-space vector by your distance.

 

All of these points will be in view space, so if you are drawing something on the screen at this point, keep that in mind. You may want to transform them into world space via the inverse view transform, or maybe set your view and model transforms to identity before rendering the quad.
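
A small sketch of the math above (fovY is the vertical field of view in radians, aspect is width/height, and dist is the view-space z you want the quad at):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    void QuadCornersViewSpace(float fovY, float aspect, float dist, Vec3 out[4])
    {
        const float halfH = std::tan(fovY * 0.5f) * dist; // rise of the top plane at z = dist
        const float halfW = halfH * aspect;

        out[0] = { -halfW,  halfH, dist }; // top-left
        out[1] = {  halfW,  halfH, dist }; // top-right
        out[2] = { -halfW, -halfH, dist }; // bottom-left
        out[3] = {  halfW, -halfH, dist }; // bottom-right, all in view space
    }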





