IkarusDowned

Members
  • Content count: 35
  • Joined
  • Last visited
Community Reputation

291 Neutral

About IkarusDowned

  • Rank
    Member
  1. Thanks for the post! I'll look into it
  2. Hi All,

     I'm looking to use an existing tool that does node-based shader / material creation (much like Unity and Unreal Engine), whose output I can tie into a game engine...specifically, I need to be able to input a texture, run a shader on it, and output a texture.

     Our current restriction is that I can't use Unity / Unreal Engine (duh), but I would like to give artists power similar to those tools, as opposed to having programmers write every shader by hand.

     Any suggestions would be welcome!
  3. Sampling Texture2D vs Texture1D?

    Once again, chatting with you all helped me solve the issue. A couple of things were needed:

    1) I changed the filter type to MIN_MAG_MIP_POINT. The previous filter linearly interpolated across the colors (which...is what it does...).
    2) For my usage, I needed to switch the addressing mode to CLAMP.

    I don't know if this mattered, but I also:

    3) Moved the three Texture1D declarations in the shader to an array, and made the DSSetShaderResources call with an array...this probably didn't change much, but I /think/ it's a better approach (see the sketch below).

    Also, I double-checked all the channel values and hand-calculated a bunch to make sure that what I was seeing in the SaveDDSToFile call was in fact what I was inputting -- as it turns out, the DDS viewer I am using is a bit weird and unreliable...best to just do the math by hand.

    Now the calculations all seem correct, and I'm getting some interesting stuff done with the 1D textures. Thanks again (and double thanks to MJP, who helped me out twice in two weeks!)
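    For reference, a minimal sketch of the fixed setup on the C++ side -- a reconstruction under the assumptions above, with names like lightMaps and m_domainSampler being illustrative rather than from the original code:

        D3D11_SAMPLER_DESC samplerDesc;
        ZeroMemory(&samplerDesc, sizeof(samplerDesc));
        samplerDesc.Filter = D3D11_FILTER_MIN_MAG_MIP_POINT;  // point sampling: no blending between texels
        samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_CLAMP;   // keep lookups pinned inside [0, 1]
        samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_CLAMP;
        samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_CLAMP;
        samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
        samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
        device->CreateSamplerState(&samplerDesc, &m_domainSampler);

        // bind all three 1D light textures with a single call, as an array
        ID3D11ShaderResourceView* lightMaps[3] = { pointLightDiffuseMap, pointLightPositionMap, pointLightRadiusMap };
        deviceContext->DSSetShaderResources(1, 3, lightMaps);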
  4. Stretched pixels in windowed mode

    You should also keep in mind that the ratio of back buffer size to window size is important. If your back buffer is, say, 640 x 480 but your window's client rectangle is 600 x 480, the result will look "squashed," as you put it. If the client rectangle were, say, 700 x 480, it would look "stretched."

    As a rule, I like to not optimize unless I must, and to keep things simple until they need to change...thus I would suggest keeping your back buffer and window rect width and height in a constant ratio to each other. I like to make the window a multiple of the back buffer size. Again, this isn't "optimal" so much as I just follow the rule: "get it to look right before getting it to be fast."
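    As a concrete illustration -- a sketch only, assuming a DXGI swap chain and that any views referencing the back buffer have been released first -- resizing the back buffer to match the client rectangle keeps that ratio at exactly 1:1:

        RECT rc;
        GetClientRect(hwnd, &rc);              // the drawable client area, not the outer window rect
        UINT width  = rc.right - rc.left;
        UINT height = rc.bottom - rc.top;
        // 0 and DXGI_FORMAT_UNKNOWN preserve the existing buffer count and format
        swapChain->ResizeBuffers(0, width, height, DXGI_FORMAT_UNKNOWN, 0);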
  5. Sampling Texture2D vs Texture1D?

    @MJP It's a tiny 1D texture, so I don't do mipmap sampling. Yes, I'm doing it in the domain shader because of something I'm trying out which needs access to the vertex location properties.

    @n3xus: As MJP points out, the sampling is happening in the domain shader, so I can't use the default Sample function.

    To clarify, I don't want a single component of the RGBA texture value at location 0, I want the ENTIRE value. The 4-component RGBA value is (1,0,0,1), so if I do SampleLevel(DomainSampler, 0, 0) and send that value straight to the pixel shader, it should produce red. I'll double-check the texture, but the output I get when I write it out to a file seems right (see the readback sketch below).

    The input to the texture is the following: (1,0,0,1) and (0,0,1,1). So I have two pixels, where the first is the RGBA for red and the second for blue, and I want the color value as a whole (not as components, i.e. NOT just the x or y value). I don't understand how, but is it possible that SampleLevel(DomainSampler, 0, 0) is somehow sampling with the wrong channel bit size? I've set the shader resource view to use the same channel bit information.
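    One way to double-check the texture without trusting a DDS viewer is to copy it to a staging resource and read the texels back on the CPU. A minimal sketch, assuming the m_pTexture member from the texture class further down this page (the staging variable is just for illustration):

        D3D11_TEXTURE1D_DESC desc;
        m_pTexture->GetDesc(&desc);
        desc.Usage = D3D11_USAGE_STAGING;      // CPU-readable copy
        desc.BindFlags = 0;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

        ID3D11Texture1D* staging = nullptr;
        device->CreateTexture1D(&desc, nullptr, &staging);
        deviceContext->CopyResource(staging, m_pTexture);

        D3D11_MAPPED_SUBRESOURCE mapped;
        if (SUCCEEDED(deviceContext->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
        {
            const UINT8* texels = static_cast<const UINT8*>(mapped.pData);
            // for R8G8B8A8_UNORM, texels[0..3] should read (255, 0, 0, 255) for the red texel
            deviceContext->Unmap(staging, 0);
        }
        staging->Release();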
  6. Sampling Texture2D vs Texture1D?

    I should note I am using DX11 exclusively :)
  7. Hello guys, I've run into another kind of interesting bug which I'm having trouble figuring out. I'm trying to do some (albeit strange) lighting calculations where the point-light diffuse, radius, and position are baked (in the case of radius and position, encoded) into 1D textures. I've created the 1D textures like so:

        bool CreateTexture1DClass::Initialize(ID3D11Device *device, std::vector<UINT8> &perChannelData, int width)
        {
            m_width = width;

            D3D11_TEXTURE1D_DESC desc;
            ZeroMemory(&desc, sizeof(desc));
            desc.Width = m_width;
            desc.MipLevels = desc.ArraySize = 1;
            desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
            desc.MiscFlags = 0;
            desc.Usage = D3D11_USAGE_DYNAMIC;
            desc.BindFlags = D3D11_BIND_SHADER_RESOURCE;
            desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

            HRESULT result;
            D3D11_SUBRESOURCE_DATA subresource;
            subresource.SysMemSlicePitch = 0;
            subresource.SysMemPitch = perChannelData.size() * sizeof(UINT8);
            subresource.pSysMem = &perChannelData[0];

            result = device->CreateTexture1D(&desc, &subresource, &m_pTexture);
            if (FAILED(result))
            {
                return false;
            }

            D3D11_SHADER_RESOURCE_VIEW_DESC shaderResourceViewDesc;
            shaderResourceViewDesc.Format = desc.Format;
            shaderResourceViewDesc.Texture1D.MostDetailedMip = 0;
            shaderResourceViewDesc.Texture1D.MipLevels = 1;
            shaderResourceViewDesc.ViewDimension = D3D11_SRV_DIMENSION_TEXTURE1D;

            result = device->CreateShaderResourceView(m_pTexture, &shaderResourceViewDesc, &m_shaderResourceView);
            if (FAILED(result))
            {
                return false;
            }
            return true;
        }

     For testing purposes I pre-loaded the color and radius into textures, then output them as 1D DDS files to see if my logic was right. The good news is, it worked!

     Now, on to using them in my shaders. As I said, I'm trying to do something (possibly) weird, which is to render the point-light diffuse values to a texture as part of the deferred step, but sampling the values in the domain shader (as I have new vertices being made and such). Here's how I've declared my textures and samplers:

        Texture2D tex;
        Texture2D normalMap;
        Texture2D specularMap;
        Texture2D displacementMap;
        Texture1D pointLightDiffuseMap;
        Texture1D pointLightPositionMap;
        Texture1D pointLightRadiusMap;

        SamplerState PixelSampler;
        SamplerState DomainSampler;

     In my domain shader, for now, I'm just doing the following:

        output.pointDiffuseAdd = pointLightDiffuseMap.SampleLevel(DomainSampler, 0, 0);

     This gets piped straight through to the PixelOutput, so essentially I get a deferred render target filled with a flat color. Now, the problem is this: the pointLightDiffuseMap at the moment is just a 1D texture which has two values: red, blue. If my understanding is correct, with SampleLevel's second parameter (the texture coordinate) being 0, I should get back a full-red image. However, what I get back is a deep shade of purple! (It /almost/ looks like it's blending the red and blue.) Any ideas as to what could cause this? Is my sampler state set up wrong?

     Here's the sampler state setup:

        D3D11_SAMPLER_DESC samplerDesc;
        samplerDesc.Filter = D3D11_FILTER_MIN_POINT_MAG_MIP_LINEAR;
        samplerDesc.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
        samplerDesc.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
        samplerDesc.MipLODBias = 0.0f;
        samplerDesc.MaxAnisotropy = 1;
        samplerDesc.ComparisonFunc = D3D11_COMPARISON_ALWAYS;
        samplerDesc.BorderColor[0] = 0;
        samplerDesc.BorderColor[1] = 0;
        samplerDesc.BorderColor[2] = 0;
        samplerDesc.BorderColor[3] = 0;
        samplerDesc.MinLOD = 0;
        samplerDesc.MaxLOD = D3D11_FLOAT32_MAX;
        device->CreateSamplerState(&samplerDesc, &m_displacementSampler);

     P.S. I do check the result to make sure I don't error; I just don't include that here to reduce code confusion.

     Also, here's how I'm setting my samplers / textures into their respective locations:

        deviceContext->PSSetShaderResources(0, 1, &diffuseTexture);
        deviceContext->PSSetShaderResources(1, 1, &normalMap);
        deviceContext->PSSetShaderResources(2, 1, &specularMap);
        deviceContext->DSSetShaderResources(0, 1, &displacementMap);
        deviceContext->DSSetShaderResources(1, 1, &pointLightDiffuseMap);
        deviceContext->DSSetShaderResources(2, 1, &pointLightPositionMap);
        deviceContext->DSSetShaderResources(3, 1, &pointLightRadiusMap);

     Oddly enough, none of the other code seems affected by the weirdness happening with the 1D texture sampling. Any ideas as to what I could be doing wrong?
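     A note for completeness: the snippet above only shows the texture bindings; the sampler states are bound into their own slots with analogous calls. A sketch only, since the original post omitted them (the variable names are placeholders, and which register each HLSL sampler lands in depends on the shader):

        deviceContext->PSSetSamplers(0, 1, &m_pixelSampler);          // s0 as seen by the pixel shader
        deviceContext->DSSetSamplers(0, 1, &m_displacementSampler);   // s0 as seen by the domain shader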
  8. Wow...finally solved the issue. I'm such a fool....here's what went wrong, and it's pretty freaking obvious... When switching from the deferred shader to the regular texture shader that outputs to the "screen," I call the PS and VS set-shader functions...but I wasn't clearing out the HS and DS shaders...in essence, the hull and domain shaders were still in place while only the PS and VS shaders had changed....wow....

     Thanks as always, guys, for your help. Just chalk it up as one of the many learning experiences out there....
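     For anyone who hits the same thing, a minimal sketch of the fix (the shader variable names are placeholders): pass NULL to the set-shader calls to unbind stages the new pipeline doesn't use:

        // switching from the tessellated deferred pass to the plain texture pass
        deviceContext->VSSetShader(textureVertexShader, NULL, 0);
        deviceContext->PSSetShader(texturePixelShader, NULL, 0);
        deviceContext->HSSetShader(NULL, NULL, 0);   // unbind the stale hull shader
        deviceContext->DSSetShader(NULL, NULL, 0);   // unbind the stale domain shader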
  9. @Burnt_Fyr Sadly, I have to use VS2010, but with the latest "Windows 8 DirectX SDK"...the one that has DirectXMath and forces you to use DirectXTex and such. I don't know of any convenient graphics debuggers for my situation, but if you have a suggestion I'm happy to try it. As for an unlit object blocking the camera, here's what I've tried: I called the function that renders the really simple scene (2 objects) WITHOUT setting OMSetRenderTargets to the deferred buffers. So I basically rendered straight to the back buffer as if it were not deferred (by setting OMSetRenderTargets to the pre-generated back-buffer target view). The back-buffer set is wrapped in a StartScene() call, and the swap chain's Present call at the end is wrapped in EndScene().

     So, if the function rendering the models to the buffer and calling the "deferred shader" is DrawScene(), then the actual ordering for straight-to-back-buffer rendering is:

        StartScene();
        DrawScene();
        EndScene();

     This works just fine without me having to edit my shader at all. SV_Target0 just gets set to the back buffer, and for debug purposes I've commented out the other render targets from the pixel shader output.

     However, following the SAME principle for setting my deferred targets causes NOTHING to show up...not even the fill color comes through. In this case, my StartScene() is replaced by a SetRenderTargets()-type function which sets the texture buffers using OMSetRenderTargets. At the end, we switch back to the back buffer with a SetBackBufferRenderTarget() call. Thus:

        SetRenderTargets();
        DrawScene();
        SetBackBufferRenderTarget();

     Somewhere later in the render loop, I do the StartScene() call, but this time I am just drawing the first render target as a texture onto an orthographically projected quad that takes up the screen:

        StartScene();
        DrawFirstTexture();
        EndScene();

     Again, this DrawFirstTexture() is just for debugging...I want to make sure data is being output to the textures.

     When I do THIS, nothing gets rendered to the texture. However, if I use the same method but comment out the model-rendering lines, at the very least the fill color (set to white) gets filled into the first texture and shows up on screen.

     @MJP I double-checked that, and even went as far as changing all non-pixel-shader input semantics with POSITION in them to a custom one, like LOCATION. This still didn't work.
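     To make the flow concrete, here's a minimal sketch of what the SetRenderTargets() / SetBackBufferRenderTarget() pair boils down to (the array and view names are illustrative, not the actual code):

        // SetRenderTargets(): point the output merger at the deferred textures and clear them
        deviceContext->OMSetRenderTargets(BUFFER_COUNT, deferredRenderTargetViews, depthStencilView);
        float white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
        for (UINT i = 0; i < BUFFER_COUNT; ++i)
            deviceContext->ClearRenderTargetView(deferredRenderTargetViews[i], white);

        DrawScene();   // VS -> HS -> DS -> PS, writing to SV_Target0..N

        // SetBackBufferRenderTarget(): restore the swap-chain target
        deviceContext->OMSetRenderTargets(1, &backBufferRenderTargetView, depthStencilView);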
     Here's my shader code with the larger function bodies removed:

        //deferred.hlsl
        //renders to multiple targets
        cbuffer MatrixBuffer
        {
            matrix worldMatrix;
            matrix viewMatrix;
            matrix projectionMatrix;
        };

        cbuffer TessellationBuffer
        {
            float tessellationAmount;
            float3 padding;
        };

        //typedefs
        struct VertexInputType
        {
            float3 position : LOCATION;
            float4 color : COLOR;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct HullInputType
        {
            float3 position : LOCATION;
            float4 color : COLOR;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct ConstantOutputType
        {
            float edges[3] : SV_TessFactor;
            float inside : SV_InsideTessFactor;
        };

        struct HullOutputType
        {
            float3 position : LOCATION;
            float4 color : COLOR;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelInputType
        {
            float4 position : SV_POSITION;
            float4 color : COLOR;
            float2 tex : TEXCOORD0;
            float3 normal : NORMAL;
        };

        struct PixelOutputType
        {
            float4 color : SV_Target0;
        };

        Texture2D tex : register(t0);
        Texture2D displacementMap : register(t1);
        SamplerState SamplerType : register(s0);
        SamplerState DisplacementSampler : register(s1);

        //Vertex Shader
        HullInputType DeferredVertexShader(VertexInputType input)
        {
            HullInputType output;
            output.position = input.position;
            output.color = input.color;
            output.tex = input.tex;
            output.normal = input.normal;
            return output;
        }

        ConstantOutputType DeferredPatchConstantFunction(InputPatch<HullInputType, 3> inputPatch, uint patchId : SV_PrimitiveID)
        {
            ConstantOutputType output;
            //...does some constant hull (patch) shader stuff
            return output;
        }

        //Hull Shader
        [domain("tri")]
        [partitioning("integer")]
        [outputtopology("triangle_cw")]
        [outputcontrolpoints(3)]
        [patchconstantfunc("DeferredPatchConstantFunction")]
        HullOutputType DeferredHullShader(InputPatch<HullInputType, 3> patch, uint pointId : SV_OutputControlPointID, uint patchId : SV_PrimitiveID)
        {
            HullOutputType output;
            //...more hull (point) shader stuff
            return output;
        }

        //Domain Shader
        [domain("tri")]
        PixelInputType DeferredDomainShader(ConstantOutputType input, float3 uvwCoord : SV_DomainLocation, const OutputPatch<HullOutputType, 3> patch)
        {
            float3 vertexPosition;
            PixelInputType output;
            //does the new vertex position calculation, as well as multiplying against the view / proj / world matrices and setting it on the output struct
            return output;
        }

        //Pixel Shader
        PixelOutputType DeferredPixelShader(PixelInputType input)
        {
            PixelOutputType output;
            float4 color;
            color = input.color;
            color = tex.Sample(SamplerType, input.tex);
            color.w = 1.0f; //this is just to force the alpha value, for testing
            output.color = color;
            return output;
        }

     Thanks for your help so far, guys. I'm really stumped....
  10. More to the point: what relationship does the hull / domain shader have with the pixel shader? Why would setting a render target from the pixel shader suddenly cause the data not to be output? Not even the clear color??
  11. Slight update: if I enable the shaders and then push NO data through, I do get the buffers filled with at least the clear fill color. So, the MOMENT I try to render data, the entire buffer seems to get "wiped out..."
  12. Hey all, I've been studying up on DX11 and various graphical techniques using the Rastertek tutorials: http://www.rastertek.com/tutdx11.html They've been great in helping me understand the various parts of DX11, and I wanted to try to combine the tessellation tutorial with the deferred rendering tutorial by having the pixel shader output to SV_Target0, SV_Target1, etc.

      However, whenever I modify the code to output to the render targets, I get absolutely NOTHING being sent to the target...not even the clear-buffer fill color. However, if I change the pixel shader output from the custom

         PixelOutputType { float4 color : SV_Target0; }

      to just a float4, and render the scene straight to the default buffer (the screen), everything shows up just fine.

      Also, if I just comment out the HSSetShader() and DSSetShader() calls, I at the VERY LEAST get the fill color. Is there something I'm missing that tells the hull and domain shaders to output to the pixel shader?
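      For context, here's roughly how the deferred tutorial's render targets get created so the pixel shader's SV_Target0 / SV_Target1 outputs have somewhere to land -- a sketch from memory, not the tutorial's exact code:

         D3D11_TEXTURE2D_DESC textureDesc;
         ZeroMemory(&textureDesc, sizeof(textureDesc));
         textureDesc.Width = screenWidth;
         textureDesc.Height = screenHeight;
         textureDesc.MipLevels = 1;
         textureDesc.ArraySize = 1;
         textureDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
         textureDesc.SampleDesc.Count = 1;
         textureDesc.Usage = D3D11_USAGE_DEFAULT;
         // render target for the deferred pass, shader resource for the later lighting pass
         textureDesc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

         for (int i = 0; i < BUFFER_COUNT; ++i)
         {
             device->CreateTexture2D(&textureDesc, NULL, &renderTargetTextures[i]);
             device->CreateRenderTargetView(renderTargetTextures[i], NULL, &renderTargetViews[i]);
         }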
  13. Rotating camera around an object

        It's not, actually. I linked an article for you immediately after my first post. Also, this is not "theory"; what I gave you was a practical technique. Theory would have been trying to explain how you can do N-dimensional rotations and translations using matrix math... As has been noted, you are asking just for code, but even if we give it to you, how will you know what to change to make it work for you when you haven't fully understood the concepts?

        Thanks to BCullis and slicer4ever for their responses as well.
  14. Want to learn programming...again

    As someone who is an avid C / C++ fan but works a LOT in Python, Java, and other languages, I'd say that a solid understanding of C (and of constructors and destructors in C++) will take you a LONG way. Understanding how memory works and what pointers are (and what references AREN'T) can help you pick up other languages REALLY quickly. Having trained people in programming and software development, I find that the people who do well no matter WHAT gets thrown at them have the concepts of scope, memory management, and construction / destruction firmly in their heads; they get more work done faster and understand things better.

    That being said, it takes both effort and time to get comfortable with some of the craziness of pointers and all that. What is your main purpose in learning programming? The OP doesn't state "game programming" explicitly. If you are interested in just learning some basic programming, the vast majority of scripting languages are just fine -- Perl, Python, Ruby; any of those give you the ability to rapidly produce results with minimal fuss. The core concepts of "programming" are in all languages.

    Too often, people get into the "language wars." Languages are like the paints and paintbrushes of an artist, or the tools in a craftsman's toolbox. Each language has its strengths and weaknesses -- pick the right tool for the job, but most important is to know WHICH tool is right for which job.
  15. Rotating camera around an object

    Also, here's a nice note on the subject: http://stackoverflow.com/questions/786293/opengl-rotate-around-a-spot

    Maybe that is easier for you to understand? (A small sketch of the idea follows.)
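    The gist of the linked answer, as a small DirectXMath sketch (my own illustration, not code from that thread): move the pivot point to the origin, rotate, and move back:

        #include <DirectXMath.h>
        using namespace DirectX;

        // builds a transform that orbits a position around 'target' by 'angle' radians about the Y axis
        XMMATRIX OrbitAround(FXMVECTOR target, float angle)
        {
            XMMATRIX toOrigin = XMMatrixTranslationFromVector(XMVectorNegate(target));
            XMMATRIX spin     = XMMatrixRotationY(angle);
            XMMATRIX back     = XMMatrixTranslationFromVector(target);
            return toOrigin * spin * back;   // row-vector convention: applied left to right
        }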