

Member Since 26 Mar 2013
Offline Last Active Yesterday, 06:08 PM

Topics I've Started

How Can I Make This Scene Look Better

25 October 2014 - 04:38 PM

Hello all,


I have been working on a hallway scene for a game whose premise is yet to be decided.


To start this off, I'm no artist. I am, however, relatively versed in 3D modeling at this point, and in particular have become quite familiar with Blender.


While I won't say the scene looks bad, I will say it looks VERY cookie-cutter.

And I am more or less wondering what the 3D gurus around here would do to add more of a fidelity flair to their scenes.


-Bloom? Is that applicable to indoor scenes?

-Some kind of film-grain post-processing? (the kind of effect that seems to be used throughout all of Alien: Isolation)

-Shadows, of course (after F.E.A.R. 1 I am a HUGE advocate of hard shadows :) )
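On the film-grain point, one common approach is to add a small hash-based noise value per pixel in the post-process pass. A minimal sketch of that idea (the function names `grainNoise` and `applyGrain` are illustrative, not from any particular engine; the same integer arithmetic ports to HLSL `uint`):

```cpp
#include <cstdint>

// Wang-style integer hash -> pseudo-random float in [0, 1).
// Seed with pixel coordinates plus a frame counter so the grain animates.
float grainNoise(uint32_t x, uint32_t y, uint32_t frame) {
    uint32_t seed = x * 1973u + y * 9277u + frame * 26699u;
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed *= 9u;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2du;
    seed = seed ^ (seed >> 15);
    return (seed & 0xFFFFFFu) / 16777216.0f;  // top 24 bits -> [0, 1)
}

// Blend grain into a color channel, centered so it doesn't brighten the image.
float applyGrain(float color, float noise, float strength) {
    return color + (noise - 0.5f) * strength;
}
```

A `strength` around 0.02–0.05 is usually enough to read as grain without looking like static.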


But maybe I'm missing something. Let me know what you think of this scene and what techniques you would utilize to improve/stylize it.


I am using D3D11/C++




Attached File  awd.png   676.83KB   0 downloads

Attached File  asd.png   756.65KB   0 downloads

Attached File  derpin.png   746.34KB   0 downloads

Attached File  sdfsdfsdf.png   799.23KB   0 downloads

OmniDirectional Shadow Mapping

13 September 2014 - 04:11 PM

Hello all,


I've done simple directional light shadow mapping, but now I'd like to try my hand at point light shadow mapping.


I've decided to shoot for cubic shadow mapping, so I've already set up a cube render target view/depth buffer/shader resource view and rendered my scene from the viewpoint of the light in the +x, -x, +y, -y, +z, -z directions.


Now I guess I've come to a bit of a standstill on how best to approach this from a shader perspective.

My initial thought was to send six light view matrices and check against each of them in my pixel shader for the shadow map TextureCube.


But that's a lot of overhead! And a lot of data to send down the bus per frame.

Is there a better way to approach this? Especially since I'm using a point light and textureCube construct in my shader.
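One common alternative, for what it's worth: store the linear light-to-fragment distance when rendering the cube faces, then in the pixel shader sample the TextureCube directly with the fragment-minus-light vector. The direction itself selects the cube face, so no light view matrices are needed in the lookup at all. A minimal CPU-side sketch of that comparison (the `Float3`/`inShadow` names and the linear-distance convention are assumptions for illustration):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

// The vector used to sample the shadow TextureCube: no view matrix needed,
// the raw fragment-to-light offset picks the cube face by its major axis.
Float3 cubeSampleDir(Float3 fragPos, Float3 lightPos) {
    return { fragPos.x - lightPos.x, fragPos.y - lightPos.y, fragPos.z - lightPos.z };
}

// Shadow test: compare this fragment's distance from the light against the
// (linear) distance stored in the cube map, with a small bias to avoid acne.
bool inShadow(Float3 fragPos, Float3 lightPos, float storedDistance, float bias = 0.05f) {
    Float3 d = cubeSampleDir(fragPos, lightPos);
    float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return dist - bias > storedDistance;
}
```

In HLSL this is just `Sample` on the TextureCube with the unnormalized direction, then the same distance compare, so the per-frame data is only the light position.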


Any response is appreciated!


Marcus Hansen

Pipeline Configuration for Post-Processing Shaders

23 July 2014 - 07:00 PM

Hello all,


This is a new concept for me, so I apologize if I come across as short-sighted or noobish.

I am writing a post-process shader that will take a scene that has been rendered to a texture and blur it.

Since this blur shader only operates on the texture, there is no need for a Vertex Shader.


So, how do I configure the pipeline?


I know with a typical effect, you usually have two shaders, a Vertex Shader and a Pixel Shader.

So I would prime the API like so,


//Binding of worldViewProj Matrices and assets to their respective Constant Buffers



And then eventually,


d3dContext->Draw(...) or d3dContext->DrawIndexed(...);


But with only a pixel shader, I would only need to do this, maybe?


But what would I be drawing? If I don't have a vertex shader, I don't need to submit a vertex or index buffer, nor any of the corresponding matrices.


If I don't have a VertexBuffer submitted to D3D, I wouldn't expect the draw call to do anything.


But if I don't make this call: d3dContext->Draw();

How is this shader to be executed?


I guess what I'm asking is how to execute a post-process shader; or, if possible, point me in the right direction for learning how to utilize post-process shaders.
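The usual answer, for what it's worth: a vertex shader is still required, but it can generate a full-screen triangle from `SV_VertexID` alone, so nothing is bound to the input assembler and the draw is just `context->Draw(3, 0)`. A CPU-side mirror of that vertex math, to show what the three vertices come out as (the `Vert`/`fullscreenTriVertex` names are illustrative):

```cpp
#include <cstdint>

struct Vert { float x, y, u, v; };  // clip-space xy, texture uv

// CPU mirror of the common SV_VertexID full-screen-triangle trick:
//   float2 uv  = float2((id << 1) & 2, id & 2);
//   float4 pos = float4(uv.x * 2 - 1, 1 - uv.y * 2, 0, 1);
// With no vertex buffer bound, Draw(3, 0) runs the VS three times and the
// resulting oversized triangle covers the whole screen after clipping.
Vert fullscreenTriVertex(uint32_t id) {
    float u = float((id << 1) & 2);  // 0, 2, 0
    float v = float(id & 2);         // 0, 0, 2
    return { u * 2.0f - 1.0f, 1.0f - v * 2.0f, u, v };
}
```

So the pixel shader (your blur) runs once per screen pixel, sampling the scene texture via the interpolated uv; no matrices or buffers travel down the bus at all.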


Thanks for any replies!





Bad Performance On Intel HD

07 July 2014 - 04:45 PM

Hello all,

I am coding a game in C/C++ using the D3D11 API. Everything is going along with minor issues here and there, but one big issue. I have a laptop that I decided to dub as a dev system, since its GPU is an Intel HD 2000 solution: if I get my game to perform well on that, then it will run well on anything.

Performance is TERRIBLE!

If I'm lucky, 2-3 fps! I'm only sending 4000-6000 triangles in a single update. And the shader I'm using handles 5 point lights with one directional light. The shader handles normal/shadow mapping as well (shadow mapping only for the directional light), and about 80% of my geometry is sent down that pipeline.

I have some ideas on where my performance is going down the tube. I have already ordered my data so that like geometry (with identical buffers, textures, etc.) is fed sequentially to D3D so as to minimize context switches on the GPU. But here it goes!

1. I do have a lot of draw calls, maybe instancing could help me (but my fear is that the Intel HD "says" it supports D3D11 and all of its features,
but only supports features like instancing and tessellation at a minimum level).

2. I should probably look into vertex buffer batching, since I do create a lot of separate buffers for separate objects and resubmit them to the pipeline per update.

3. Maybe the shader I am using or the geometry I'm sending is too much. (Though even when I substituted shaders that did only basic texture mapping, I still had a problem with speed.)

If I missed something, let me know; or if one of the above-mentioned items (or maybe all of them) is the optimization technique I should look into, let me know as well.

HP laptop specs:

Core i3 mobile, 2.3 GHz (2nd gen)
8 GB of RAM
Intel HD 2000
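On point 2, a usual first step before full batching is to sort draw submissions by render state, so identical shader/texture/buffer combinations run back to back and the driver does far fewer rebinds. A minimal sketch with illustrative names (`DrawItem`, the packed sort key, etc. are assumptions, not any particular engine's API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Illustrative draw record: the key packs shader, texture, and buffer ids
// so sorting groups identical state together.
struct DrawItem {
    uint16_t shaderId, textureId, bufferId;
    uint64_t key() const {
        return (uint64_t(shaderId) << 32) | (uint64_t(textureId) << 16) | bufferId;
    }
};

// How many state changes a given submission order would cost.
int countStateChanges(const std::vector<DrawItem>& items) {
    int changes = 0;
    for (size_t i = 1; i < items.size(); ++i)
        if (items[i].key() != items[i - 1].key()) ++changes;
    return changes;
}

void sortByState(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) { return a.key() < b.key(); });
}
```

On hardware like the HD 2000, cutting redundant binds and draw calls this way often matters more than shader micro-optimization.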

Assimp and Flipping Normals

25 June 2014 - 10:11 PM

Hello all,


I am using Assimp for the models in my game. I am loading OBJ models exported from Blender into my classes through a model loader that utilizes Assimp.


I've run into an issue. Now, right off the bat, I know my shaders are correct, and the transformation of lights is as well (in my vertex shader), since the only meshes that have the soon-to-be-mentioned issue are models exported and imported through Assimp.


It seems as if Assimp is flipping my model's normal vectors! Check the two images below; the normals on the back side of my model are pointing inward!




vsOut.pos = mul(vertex.pos, worldMat);
vsOut.pos = mul(vsOut.pos, viewMat);
vsOut.pos = mul(vsOut.pos, projMat);

vsOut.color = vertex.color;
vsOut.wPos  = mul(vertex.pos, worldMat);

// camPos: camera world-space position, supplied via a constant buffer
vsOut.viewDir = normalize(camPos.xyz - vsOut.wPos.xyz);

// note: under non-uniform scaling this should use the inverse-transpose of worldMat
vsOut.norm = normalize(mul(vertex.norm, worldMat));




These are my post-process flags for Assimp, and this is how I load the normals in code:




unsigned int processFlags =
    aiProcess_CalcTangentSpace         |
    aiProcess_JoinIdenticalVertices    |
    aiProcess_ConvertToLeftHanded      | // convert everything to D3D left-handed space (default is right-handed, for OpenGL)
    aiProcess_SortByPType              |
    aiProcess_ImproveCacheLocality     |
    aiProcess_RemoveRedundantMaterials |
    aiProcess_FindDegenerates          |
    aiProcess_FindInvalidData          |
    aiProcess_TransformUVCoords        |
    aiProcess_FindInstances            |
    aiProcess_LimitBoneWeights         |
    aiProcess_SplitByBoneCount         |
    aiProcess_FixInfacingNormals;







vertexVectorL.at(i).norm               = XMFLOAT3(mesh->mNormals[i].x, mesh->mNormals[i].y, mesh->mNormals[i].z);




Has anyone else heard about Assimp doing this? It's been throwing me for a loop for a while now.

If something looks off in my code give me a hint or point me in the right direction!
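One way to check whether the import (rather than the shader) is the problem: recompute each face normal from the triangle's winding order and compare it against the stored vertex normals; a negative dot product means the stored normal is flipped. A minimal sketch, not tied to Assimp's API (the `V3`/`normalIsFlipped` names are illustrative):

```cpp
struct V3 { float x, y, z; };

static V3 sub(V3 a, V3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b)  { return { a.y * b.z - a.z * b.y,
                                        a.z * b.x - a.x * b.z,
                                        a.x * b.y - a.y * b.x }; }
static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// True when the stored normal points against the face normal implied by the
// triangle's winding order (p0, p1, p2), i.e. the normal is flipped.
bool normalIsFlipped(V3 p0, V3 p1, V3 p2, V3 storedNormal) {
    V3 faceNormal = cross(sub(p1, p0), sub(p2, p0));
    return dot(faceNormal, storedNormal) < 0.0f;
}
```

Running this over a few imported meshes would tell you whether the data coming out of Assimp is actually inverted, or whether the issue is downstream in the shader.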



P.S. I've included screenshots of the issue.
      Thanks for any reply in advance!




Attached File  asd.png   536.68KB   2 downloads

Attached File  sd.png   535.44KB   4 downloads