
Burnt_Fyr

Member Since 25 Aug 2009
Offline Last Active Jul 15 2016 01:17 PM

#5297586 Retrieving World Position in Deferred Rendering

Posted by Burnt_Fyr on 22 June 2016 - 09:44 AM

I'm amazed no one has replied to this yet, so I'll take a stab. Can you debug the shader? If so, try pushing the world-space position from the vertex shader to the pixel shader in the VS output, and then compare the results of your depth reconstruction against the world-space position you pushed through. It should at least give you an idea of what is going wrong.

 

I've not worked with linear depth, so I can't comment on #1, but #2 should be using the inverse view-projection matrix. I can't see the shader, but I don't think that's what your variable describes, so it could be that you are getting a position in view space. Maybe post the full vertex and pixel shaders so we can see the whole routine?
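
If it helps, here's a minimal CPU-side sketch of that reconstruction (DirectXMath, assuming a non-linear hardware depth value and D3D-style texture coordinates; all names are placeholders). Feed it one pixel's UV and sampled depth, and compare the result against the world-space position you pushed through from the vertex shader:

#include <DirectXMath.h>
using namespace DirectX;

// Reconstruct a world-space position from screen UV + sampled depth on the CPU,
// for comparison against the value the vertex shader wrote out.
XMVECTOR ReconstructWorldPos(float u, float v, float depth,
                             const XMMATRIX& view, const XMMATRIX& proj)
{
    // NDC: x,y in [-1,1] (y flipped for D3D texture coords), z = hardware depth.
    XMVECTOR ndc = XMVectorSet(u * 2.0f - 1.0f, 1.0f - v * 2.0f, depth, 1.0f);

    // Invert the combined view-projection matrix, not the view matrix alone.
    XMMATRIX invViewProj = XMMatrixInverse(nullptr, view * proj);

    // Undo the projection and divide by w.
    XMVECTOR world = XMVector4Transform(ndc, invViewProj);
    return XMVectorScale(world, 1.0f / XMVectorGetW(world));
}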




#5260789 Water rendering

Posted by Burnt_Fyr on 06 November 2015 - 10:44 AM

Damn. That's almost an article-worthy "water rendering 101" right there.

I think the same, and actually just followed this thread so I can digest Promit's excellent reply when I have more time.




#5260787 Inversion of control across libraries

Posted by Burnt_Fyr on 06 November 2015 - 10:34 AM

Why can EngineLib not be dependent? Would you ever use the engine without graphics and physics? If so, perhaps the stuff that is common to both graphics and physics can be broken out into its own lib?

 

I'm not an expert by any means, but I have dealt with an issue similar to what I think you are describing. I bundled up anything math-related into its own library. The graphics library includes the headers it needs from it, and the physics library behaves the same way. The engine includes the graphics, physics, and math headers, and the application being worked on includes the engine headers. Since the actual translation units exist only once in their respective libs, I have no issues linking each library into the application.

 

I've even kept the graphics library as API-agnostic as possible, and created one library that implements a DX9 version (which is what all of my existing code was based on), another for DX11 (which I have slowly been implementing), and yet another for GL4 (which I've barely done anything with). The engine only knows about the base graphics library, while the app has the choice of including/linking the implementation it wants.

#include "GraphicsDX9.h"
App game(new GraphicsEngineDX9( ... ), new PhysicsEngine(...), new InputEngine(...));

GraphicsEngineBase * renderer = game.GetGraphics();

// or

#include "GraphicsGL4.h"
App game(new GraphicsEngineGL4( ... ), new PhysicsEngine(...), new InputEngine(...));

GraphicsEngineBase * renderer = game.GetGraphics();

// etc

Yes, it's a bit of a PITA, but it works well enough for what I need.
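
For a clearer picture, here's a stripped-down sketch of how the base and implementation classes relate in that setup (the method names are made up for this example; the real interface is much larger):

// GraphicsBase.h - lives in the API-agnostic graphics library.
class GraphicsEngineBase
{
public:
    virtual ~GraphicsEngineBase() {}
    virtual bool Init(void* windowHandle, int width, int height) = 0;
    virtual void BeginFrame() = 0;
    virtual void EndFrame() = 0;
};

// GraphicsDX9.h - lives in the DX9 implementation library; the DX11 and GL4 libs look the same.
class GraphicsEngineDX9 : public GraphicsEngineBase
{
public:
    bool Init(void* windowHandle, int width, int height) override;
    void BeginFrame() override;
    void EndFrame() override;
    // DX9-specific members stay hidden from the engine.
};

The engine only ever holds a GraphicsEngineBase*, so nothing above the implementation libs needs to know which API is in use.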

 




#5259124 Excuse me < How do I create Models/Objects for my game?

Posted by Burnt_Fyr on 26 October 2015 - 08:14 AM

Blender and GIMP have done everything I've ever needed, and there is a world of tutorials for both on Google.




#5258326 OOP inheritance structure

Posted by Burnt_Fyr on 21 October 2015 - 08:55 AM

My first thought is: why does NPC Entity have to be its own class? Is it any different from any other entity?

 

A better idea would be to have a controller class, which is a member of the entity. By extending the controller into a player controller and an AI controller, we can swap out control of that entity simply by switching its controller. The player controller reads input and applies it to the entity. An AI controller, which could be a state machine or any other AI technique you want to use (behavior tree, etc.), would evaluate the state of the game and decide what the entity should do.

 

By creating different player controllers, you can have different characters respond differently to player input, and the same goes for the AI controllers. This cleans up the entity so that it is agnostic about who controls it or how, and keeps AI and player code separate as well.
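
Something along these lines, as a rough sketch (class and method names are only for illustration):

class Entity;

class Controller
{
public:
    virtual ~Controller() {}
    virtual void Update(Entity& entity, float dt) = 0;
};

class PlayerController : public Controller
{
public:
    void Update(Entity& entity, float dt) override
    {
        // Read input and translate it into actions on the entity.
    }
};

class AIController : public Controller
{
public:
    void Update(Entity& entity, float dt) override
    {
        // Evaluate game state (state machine, behaviour tree, ...) and decide
        // what the entity should do this frame.
    }
};

class Entity
{
public:
    void SetController(Controller* c) { controller = c; }
    void Update(float dt) { if (controller) controller->Update(*this, dt); }

private:
    Controller* controller = nullptr; // swap this to hand control to the player or the AI
};

Swapping between player and AI control at runtime is then just a SetController call.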




#5257919 Help with 3D camera please

Posted by Burnt_Fyr on 19 October 2015 - 11:05 AM

Turns out the other half of the problem was in the shader.

I have changed the shader to this and the whole scene seems to be working correctly now. :)
 

VS_Output VShader(VS_Input IN)
{
	VS_Output OUT = IN;

	OUT.pos = mul(IN.pos, worldMatrix); // transform from model to world space
	OUT.pos = mul(OUT.pos, viewMatrix); // transform from world to camera space
	OUT.pos = mul(OUT.pos, projectionMatrix); // transform from camera to projection space

	return OUT;
}

Sorry I kind of AFK'd on the convo here, but I see you solved it. Working your critical thinking skills will pay back 10x the effort you put into them. I find myself answering many more questions than I have to ask now. When I run into an issue, I have the experience, as well as the critical thinking, to pinpoint the problem in a fraction of the time I would have spent even a year ago, so good on you for solving your own problem.

 

Looking back at the original shader

 matrix mvp = mul(projectionMatrix, mul(viewMatrix, worldMatrix));

should probably have been

 matrix mvp = mul(mul(worldMatrix, viewMatrix), projectionMatrix);

so that the matrices are concatenated in the right order. An even better idea would be to concatenate those transforms per object on the CPU, and then upload the fully formed model->projection transform (via UpdateSubresource) to the shader as a single matrix, so you are not performing all of those muls in the first place.
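
Roughly like this on the CPU side (a D3D11 + DirectXMath sketch; the constant-buffer struct and variable names are assumptions for the example, not code from this thread):

#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

struct PerObjectCB
{
    XMFLOAT4X4 modelViewProj; // the single combined matrix the shader uses
};

void UploadMVP(ID3D11DeviceContext* context, ID3D11Buffer* constantBuffer,
               const XMMATRIX& world, const XMMATRIX& view, const XMMATRIX& proj)
{
    PerObjectCB cb;
    // world * view * proj matches mul(pos, mvp) with row vectors in the shader;
    // transpose because HLSL constant buffers default to column-major storage.
    XMStoreFloat4x4(&cb.modelViewProj, XMMatrixTranspose(world * view * proj));

    context->UpdateSubresource(constantBuffer, 0, nullptr, &cb, 0, 0);
}

The vertex shader then only needs a single mul(IN.pos, modelViewProj) per vertex.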




#5249074 Large scale terrains (again)

Posted by Burnt_Fyr on 26 August 2015 - 04:26 PM

http://vterrain.org/LOD/Implementations/ is a great site for terrain rendering stuff. Also, since much of the work is academic, you can find good links on CiteSeer.

 

 

Having just watched "The Code" on Netflix, I can offer what was done by Boeing in their sims (in the '80s, mind you). They used random heights, and what appeared to be midpoint displacement or another fractal function to create the fine detail. The run-time looked like ROAM; he gave some detail as to the vertex count, but I can't recall it off the top of my head. You might try this to fake increased heightmap resolution when creating your patches, or get a better data set. I'm currently learning how to stream patches myself, using the SRTM 1 arc minute data.
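
If you want to experiment with that, a tiny 1D midpoint-displacement sketch looks something like the following (parameters are arbitrary); the 2D equivalent (diamond-square) applies the same idea on a grid:

#include <cstdlib>
#include <vector>

// Recursively insert jittered midpoints between samples to fake extra detail.
std::vector<float> MidpointDisplace(std::vector<float> heights,
                                    int iterations, float roughness)
{
    float amplitude = 1.0f; // starting displacement range, tune to taste
    for (int i = 0; i < iterations; ++i)
    {
        std::vector<float> refined;
        refined.reserve(heights.size() * 2);
        for (size_t j = 0; j + 1 < heights.size(); ++j)
        {
            float jitter = amplitude * (rand() / (float)RAND_MAX - 0.5f);
            refined.push_back(heights[j]);
            refined.push_back(0.5f * (heights[j] + heights[j + 1]) + jitter);
        }
        refined.push_back(heights.back());
        heights = refined;
        amplitude *= roughness; // shrink the displacement each pass
    }
    return heights;
}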

 

What do you expect the visible distance to be? Knowing this would let you figure out the maximum area that would need to be in memory at once. From there you could start to figure out where the trade-off lies between an increased number of patches (with fewer LODs) and an increased patch size (with more index buffers), but my instincts tell me that 10 miles x 10 miles is large for a single patch.




#5237485 Basic Terrain editor help

Posted by Burnt_Fyr on 29 June 2015 - 08:10 AM

I handle this all through shaders, but it was a bit of work to get set up. I'm not sure what performance implications this may have (testing and experience would dictate), but when you lock a buffer you are able to move bytes into and out of it. I would try keeping a local copy of the vertex data, and lock/fill the VB each time you adjust the mesh.

 

As for the adjusting, you have the triangle index, and with it you can find the exact vertices that the triangle is made up of. Simply change the height component of those three vertices, lock and fill, and render as normal.
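
Something like this, assuming a D3D9-style dynamic vertex buffer and a made-up vertex struct with the height in y (all names here are just for the example):

#include <windows.h>
#include <d3d9.h>
#include <cstring>
#include <vector>

struct TerrainVertex { float x, y, z; float u, v; };

// Raise the three vertices of the picked triangle in the local copy,
// then lock the dynamic VB and refill it from that copy.
void RaiseTriangle(std::vector<TerrainVertex>& localVerts,
                   const std::vector<unsigned short>& indices,
                   IDirect3DVertexBuffer9* vb,
                   unsigned triangleIndex, float amount)
{
    for (int i = 0; i < 3; ++i)
        localVerts[indices[triangleIndex * 3 + i]].y += amount;

    void* data = nullptr;
    if (SUCCEEDED(vb->Lock(0, 0, &data, D3DLOCK_DISCARD)))
    {
        memcpy(data, localVerts.data(), localVerts.size() * sizeof(TerrainVertex));
        vb->Unlock();
    }
}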

 

In my terrain editor in progress, http://www.gamedev.net/blog/900/entry-2259765-slow-days/, I use a height map for elevation data. When using the raise/lower tool, if the mouse is down while updating, I pick the terrain to get an x,z pair at the intersection, use those to get UV coordinates, and use the UV coordinates to render a small brush to the height map, which raises or lowers the heights in the area the brush intersects.




#5237482 Understanding D3D11 basics

Posted by Burnt_Fyr on 29 June 2015 - 07:17 AM

What you pass through has to match what the shader expects to receive.

If the shader expects 2x Vector4 values per vertex (1 for position and 1 for color), then you need to supply that.

 

Different shaders can expect different things, for example a single position vector, with the color hard-coded in the shader.

struct VS_IN
{
	float4 pos : POSITION;
	float4 col : COLOR;
};

Your vertex shader is expecting 8 floats (2x float4) per vertex, as mentioned above, and this is where that information was dictated. If you do not follow the signature of the shader, bad juju will follow. In DX9 this means garbage in, garbage out. In DX10 and later, they got smart and force validation against the shader's input signature when you create the input layout. This is your layout parameter above as well. The shader signature, input layout, and vertex data must match if the shader has any hope of doing something with that data other than mangling it.
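
For reference, the C++-side vertex and input layout that would match that VS_IN might look like this (the struct name and the device/shader-blob variables are assumptions for the example):

#include <d3d11.h>

// 8 floats (2x float4) per vertex, matching the shader's VS_IN.
struct Vertex
{
    float pos[4]; // POSITION
    float col[4]; // COLOR
};

// Creates an input layout matching VS_IN; the semantics, formats and offsets
// here must line up with the shader's input signature or validation will fail.
ID3D11InputLayout* CreateVSInLayout(ID3D11Device* device, ID3DBlob* vsBytecode)
{
    D3D11_INPUT_ELEMENT_DESC desc[] =
    {
        { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 16, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    };

    ID3D11InputLayout* layout = nullptr;
    device->CreateInputLayout(desc, (UINT)(sizeof(desc) / sizeof(desc[0])),
                              vsBytecode->GetBufferPointer(),
                              vsBytecode->GetBufferSize(),
                              &layout);
    return layout;
}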




#5237478 Lineup of Books For Beginner

Posted by Burnt_Fyr on 29 June 2015 - 07:05 AM

I don't think I've had the pleasure of reading Real-Time Rendering, but I am a big fan of Frank Luna's Introduction to DirectX... (whatever is the latest); they seem to get better with each iteration, as one would hope.

 

In my opinion, Christer's book, RTCD, should be read after Game Coding Complete. I have the 2nd edition of Mike's book, and it seemed to hit a lot of things at an introductory level. RTCD, on the other hand, is more technical I would say, and it's a can of worms that you may not want to eat just yet.

 

Jason's book is probably one of the best bang-for-your-buck books with a price tag in that range, but it works best if you've been exposed to the concepts ahead of time, perhaps in an earlier project from GCC.




#5233582 Engine design, global interfaces

Posted by Burnt_Fyr on 08 June 2015 - 12:34 PM

This all falls in line with my existing outlook, but I'm a little surprised to hear the static class method derided as "Another pointless abstraction."  I've read L. Spiro's blog post about her engine design, and that's the method she seems to have gone with.  Her reasoning seems sound, although a lot of it could be emulated by just using global functions in a namespace.

Perhaps L. Spiro should put an "about me" on HIS webpage, as this is not the first time I've seen this mistake.

 

http://l-spiro.deviantart.com/art/Japanese-Model-WIP-1-82010882




#5212807 How do I know if I'm an intermediateprogramming level?

Posted by Burnt_Fyr on 24 February 2015 - 06:02 PM

I'm not sure there is a line in the sand anywhere, but IMHO if you are asking, then you are likely not. I do not see any mention of templates or the standard library, which are well within the range of what I would consider "intermediate" programming in C++. I've been working with C++ for a long time (longer than I care to admit sometimes), and Pascal and BASIC before that, and I still consider myself quite a "beginner" with regard to programming as a whole. My advice: don't give it a second thought, go out and write more code.




#5205575 c++ Heap corruption w/ std::vector

Posted by Burnt_Fyr on 20 January 2015 - 11:31 AM

BitMaster hit the nail on the head. A typo from my fat fingers:

pUVs[16] = Vector2(  0,  0 ); // Top Quad
pUVs[17] = Vector2(  0,  1 );
pUVs[18] = Vector2(  1,  0 );
pUVs[29] = Vector2(  1,  1 ); // whoops - should have been pUVs[19]; writing past the end of the array is what corrupted the heap

Thanks all for the suggestions. Bregma, your pointing out that the existing code was fine led me back to the previous two replies. This was in a debug build, so I did have access to debugging features, but as this was a new situation for me I was not sure what to look for.

 

 

HEAP: Free Heap block f6d48 modified at f6d58 after it was freed

The error Visual Studio 2010 gave me pointed to the location in the heap, but different runs always pointed to different locations. Is there a way I could have found out what line of code allocated that chunk of memory after the fact? Or would I have had to watch each allocation in turn and then use the address given to figure out the offending code? I'm still quite unfamiliar with a lot of the debugging functionality in Visual Studio.
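
For anyone who hits the same thing: the MSVC CRT debug heap has hooks that look relevant here. A sketch of the idea (the flag and function names are real CRT facilities, but treat the recipe itself as an untested suggestion):

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // Check the whole heap on every allocation/free so corruption is reported
    // much closer to the write that caused it (slow, debug builds only).
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG)
                   | _CRTDBG_ALLOC_MEM_DF
                   | _CRTDBG_CHECK_ALWAYS_DF
                   | _CRTDBG_LEAK_CHECK_DF);

    // If a report prints an allocation number such as {153}, breaking on that
    // allocation in the next run shows the call stack that made it:
    // _CrtSetBreakAlloc(153);

    // ... rest of the program ...
    return 0;
}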




#5205563 c++ Heap corruption w/ std::vector

Posted by Burnt_Fyr on 20 January 2015 - 10:07 AM

Thanks for taking the time to read this. I'm using a mesh object that contains std::vectors for vertex and index data.

#include <vector>

class Mesh2
{
    // snipped for brevity
    std::vector<unsigned short> indices;

public:
    void AddFace(unsigned short _i1, unsigned short _i2, unsigned short _i3);
};

void Mesh2::AddFace(unsigned short _i1, unsigned short _i2, unsigned short _i3)
{
    indices.push_back(_i1);
    indices.push_back(_i2);
    indices.push_back(_i3);
}

The above code resides in a static lib that contains all rendering code, which is linked to the main executable. Below are the functions in the .exe.

// In main()

Mesh2 cube;
GenerateCube(&cube,true,true);

and the function GenerateCube

void GenerateCube(Mesh2* mesh, bool bGenNorms, bool bGenTangents)
{
    // at some point
    mesh->AddFace(0, 1, 2);

    // .. and so on
}

The issue is that as soon as mesh.indices has to resize past 10 (its default size), I get a heap corruption. I'm not sure what must be done to rectify this. If anyone has a good link or can spare 5 minutes for a thorough explanation, it would be much appreciated. Everything I've pulled up on Google so far has to do with crossing DLL boundaries, which I'm not doing, but it might be happening behind the scenes in the std::vector. I'm heading back to Google for now.




#5204761 Vertex Declaration failure

Posted by Burnt_Fyr on 16 January 2015 - 01:22 PM

I'm by no means confident in my answer, but am going to hazard a guess: since the usage of PositionT disables vertex processing, it may not be designed to work with multiple streams. Have you verified that multiple streams are available? Are the usages you requested available on your card? Have you been able to use multiple streams with untransformed positions successfully?





