Daniel E

Member
  • Content Count: 93
  • Joined
  • Last visited

Community Reputation: 224 Neutral

About Daniel E

  • Rank: Member
  1. Ok, thanks for the clarification. I rewrote the affected code.
  2. I've just converted my project to Visual Studio 2012 and ran into a "list iterators not compatible" error. I could reduce the problematic code to the following:

        std::list<int> test;                        // element type doesn't matter here
        std::list<int>::iterator end = test.end();
        test.clear();
        assert(test.end() == end);                  // fails

     This fails in a freshly created project. On http://www.cplusplus.com/reference/list/list/clear/ it says that "All iterators, references and pointers related to this container are invalidated, except the end iterators." Does anyone know if something has changed about that? This worked fine in VS2010...
  3. Daniel E

    c++ and GetOpenFileName

    I'm not really sure what the problem is, but you could take a look at my implementation for reference (or just use it). It handles both open and save dialogs, and you can use it like this:

        CeFileDlg dlg;
        dlg.addFilter("XML-Document", "xml");

        std::string fname;
        dlg.showOpen(fname);

    .h

        #pragma once

        #include <windows.h>   // OPENFILENAMEA, GetOpenFileNameA/GetSaveFileNameA
        #include <string>

        #define MAXFILENAMESIZE 1024

        class CeFileDlg
        {
        public:
            CeFileDlg(void);
            ~CeFileDlg(void);

            bool showOpen(std::string& fname);
            bool showSave(std::string& fname);

            void addFilter(std::string title, std::string extension);
            void clearFilter();

        private:
            bool show(std::string& fname, bool open);

            OPENFILENAMEA ofn;   // ANSI struct to match the *A API calls below
            std::string filter;
            std::string defExt;
        };

    .cpp

        #include "PCH.h"
        #include "CeFileDlg.h"
        #include "Ce.h"

        CeFileDlg::CeFileDlg(void)
        {
            ZeroMemory(&ofn, sizeof(ofn));
            ofn.lStructSize = sizeof(ofn);
            ofn.hwndOwner = NULL;
            ofn.nMaxFile = MAXFILENAMESIZE;
            ofn.lpstrFileTitle = NULL;
            ofn.nFilterIndex = 1;
            ofn.lpstrInitialDir = NULL;
            ofn.nMaxFileTitle = 0;
            ofn.lpstrFilter = "\0";
        }

        CeFileDlg::~CeFileDlg(void)
        {
        }

        void CeFileDlg::addFilter(std::string title, std::string extension)
        {
            // filter format expected by the API: "title\0*.ext\0 ... \0\0"
            filter.append(title);
            filter.push_back('\0');
            filter.append("*.");
            filter.append(extension);
            filter.push_back('\0');

            defExt.clear();
            defExt.append(extension);

            // point at the member strings, not at the local parameters,
            // so the pointers stay valid after this call returns
            ofn.lpstrDefExt = defExt.c_str();
            ofn.lpstrFilter = filter.c_str();
        }

        void CeFileDlg::clearFilter()
        {
            filter = "";
            ofn.lpstrFilter = "\0";
        }

        bool CeFileDlg::showOpen(std::string& fname)
        {
            return show(fname, true);
        }

        bool CeFileDlg::showSave(std::string& fname)
        {
            return show(fname, false);
        }

        bool CeFileDlg::show(std::string& fname, bool open)
        {
            char szFile[MAXFILENAMESIZE];
            ofn.lpstrFile = szFile;
            ofn.lpstrFile[0] = '\0';

            if(open)
            {
                ofn.Flags = OFN_PATHMUSTEXIST | OFN_FILEMUSTEXIST | OFN_NOCHANGEDIR | OFN_NONETWORKBUTTON;
                GetOpenFileNameA(&ofn);
            }
            else
            {
                ofn.Flags = OFN_PATHMUSTEXIST | OFN_NOCHANGEDIR | OFN_NONETWORKBUTTON;
                GetSaveFileNameA(&ofn);
            }

            fname = ofn.lpstrFile;
            return !fname.empty();
        }
  4. There was a paper on this that could be interesting, "Mega Meshes - Modelling, rendering and lighting a world made of 100 billion polygons": http://miciwan.com/GDC2011/GDC2011_Mega_Meshes.pdf
  5. Glad you got it working, but why do you transform the position to world space in the pixel shader? You could transform your light positions to view space instead, in the vertex shader or on the CPU (rough sketch below).
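     A minimal sketch of what I mean, with placeholder names (MatView, lightPosWs, etc. are not from your code): transform the light position into view space once per vertex (or once per light on the CPU) and light against the view-space position you already reconstruct, so the pixel shader never needs a world-space transform.

        float4x4 MatWVP;     // world * view * projection for the light volume
        float4x4 MatView;    // world -> view
        float4   lightPosWs; // xyz = light position in world space, w = radius

        struct VS_IN  { float4 Pos : POSITION; };
        struct VS_OUT
        {
            float4 Pos        : POSITION;
            float3 lightPosVS : TEXCOORD0; // light position in view space
        };

        VS_OUT vs_pointLightVS(VS_IN In)
        {
            VS_OUT Out;
            Out.Pos = mul(float4(In.Pos.xyz, 1.0f), MatWVP);
            // move the light into view space here instead of moving every
            // shaded pixel into world space in the pixel shader
            Out.lightPosVS = mul(float4(lightPosWs.xyz, 1.0f), MatView).xyz;
            return Out;
        }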
  6. Hey hAk,

        float2 uvFromVPOS(float2 vPos)
        {
            return vPos * InvScreenDim.xy + 0.5f * InvScreenDim.xy;
        }

        PS_output ps_pointLight(in VS_OUTPUT input, in float2 vPos : VPOS)
        {
            PS_output Out;

            float2 texCoord = uvFromVPOS(vPos);

            float3 viewRay = float3(input.PositionVS.xy * (FarClipDistance / input.PositionVS.z), FarClipDistance);
            float normalizedDepth = DepthTexture.Sample(PointSampler, texCoord).x;
            float3 positionVS = viewRay * normalizedDepth;
            ...
        }
  7. Daniel E

    Megatexture Demo

    Unfortunately I still haven't taken care of that bug. It's most likely a small bug in the multithreading logic. It works most of the time for me if I wait for all pages in the current view to be loaded and only then start painting. Painting across multiple pages also triggers the bug often. I'll update this thread once I've uploaded the current version, which is more optimized and supports cool new stuff like texture wrapping.
  8. Daniel E

    Megatexture Demo

    Yeah, the link was pointing to an IOTD thread, which I guess is a feature that's currently not available. There is an executable in the source package in the files section of the SourceForge site. Just download the source and the test data and extract the test data into the source code folder, overwriting the existing folders. Unfortunately it's not the most up-to-date version, but I plan to compile a new release version soon (the old test data isn't compatible anymore and takes very long to upload).
  9. Alright, so here is a point light example for RenderMonkey: http://www.mediafire.com/file/iy62h4jahm5pgcd/pointlight.zip Hope it helps.
     Edit: aligned texels to pixels, now it's even looking correct (a quick sketch of what that offset looks like is below).
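     For anyone wondering what "aligned texels to pixels" means: on DX9 a fullscreen pass needs a half-texel offset so that pixel centers sample texel centers. A minimal sketch, assuming screenSize is a shader constant holding the render target resolution (the names here are placeholders, not taken from the RenderMonkey sample):

        float2 screenSize; // assumed constant: render target width/height in pixels

        struct VS_QuadOut
        {
            float4 pos : POSITION;
            float2 uv  : TEXCOORD0;
        };

        VS_QuadOut vs_fullscreenQuad(float4 pos : POSITION, float2 uv : TEXCOORD0)
        {
            VS_QuadOut Out;
            Out.pos = float4(pos.xy, 0.0f, 1.0f);
            // shift the UVs by half a texel so each pixel samples the matching
            // texel center (only needed on DX9)
            Out.uv = uv + 0.5f / screenSize;
            return Out;
        }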
  10. It isn't really different; I'm just rendering the point lights with hardware instancing and scaling and offsetting the vertices with the instance data (In.insPos). I'm successfully using his method for point lights and directional lights. Unfortunately I have no new suggestions, but I could fire up RenderMonkey real quick and try to make a minimal example.
  11. Hey, I'm using one of MJP's methods to reconstruct the position. I'd better post the entire shader, because I modified the SSAO by José María Méndez.

     G-Buffer Pass

        VS_OutGBuffer vs_GBuffer(VS_input input)
        {
            VS_OutGBuffer output;
            /* snip */
            output.Position = mul(float4(input.pos.xyz, 1.0f), MatWVP);
            output.normal = normalize(mul(input.normal, (float3x3)MatWorld));
            output.depth = mul(float4(input.pos.xyz, 1.0f), MatWorldView).xyz;
            return output;
        }

        PS_OutGBuffer ps_GBuffer(in VS_OutGBuffer input)
        {
            PS_OutGBuffer output = (PS_OutGBuffer)0;
            /* snip */
            output.Depth = length(input.depth);
            output.Normal = float4(normalize(normal), 1);
            return output;
        }

     View Ray Pass

     Here I render a view ray to a small buffer (64 x 48 pixels) so I can read it in the SSAO shader using bilinear filtering.

        VS_OutViewRay vs_viewRay(in VS_INPUT In)
        {
            VS_OutViewRay Out;
            Out.Pos = float4(In.Pos.xy, 0.0f, 1.0f);
            Out.ray = mul(Out.Pos, MatViewProjInv).xyz - camPosWs;
            return Out;
        }

        PS_output ps_viewRay(in VS_OutViewRay In)
        {
            PS_output Out;
            Out.Color = float4(In.ray, 1);
            return Out;
        }

     SSAO Pass

        VS_OutSSAO vs_SSAO(in VS_INPUT In)
        {
            VS_OutSSAO Out;
            Out.UV = In.UV + 0.5f / g_screen_size; // align texels to pixels (dx9)
            Out.Pos = float4(In.Pos.xy, 0.0f, 1.0f);
            return Out;
        }

        float3 getNormal(in float2 uv)
        {
            return tex2D(texNormals, uv).xyz;
        }

        float2 getRandom(in float2 uv)
        {
            return normalize(tex2D(texRand, g_screen_size * uv / random_size).xy * 2.0f - 1.0f);
        }

        float3 getPosition(in float2 uv)
        {
            const float depth = tex2D(texDepth, uv).x;

            // align texels to pixels, very crucial here (dx9 only(?))
            // rayBufContraction contains:
            //   rayBufContraction.xy = (rayBufferSize - 1) / rayBufferSize;
            //   rayBufContraction.zw = 0.5f / rayBufferSize;
            uv *= rayBufContraction.xy;
            uv += rayBufContraction.zw;

            const float3 eyeToPixel = normalize(tex2D(texViewRay, uv).xyz);
            return eyeToPixel * depth;
        }

        float doAmbientOcclusion(in float2 tcoord, in float2 uv, in float3 p, in float3 cnorm)
        {
            const float3 diff = getPosition(tcoord + uv) - p;
            const float3 v = normalize(diff);
            const float d = length(diff) * g_scale;
            return max(0.0, dot(cnorm, v) - g_bias) * (1.0f / (1.0f + d));
        }

        PS_output ps_SSAO(in VS_OutSSAO In)
        {
            PS_output Out;
            Out.Color = 1.0f;

            const float2 vec[4] = { float2(1, 0), float2(-1, 0), float2(0, 1), float2(0, -1) };

            const float3 p = getPosition(In.UV);
            const float3 n = getNormal(In.UV);
            const float2 rand = getRandom(In.UV);

            const float invDepth = 1.0f - tex2D(texDepth, In.UV).w / FARCLIPDIST;
            const float rad = g_sample_rad * (invDepth * invDepth);

            float ao = 0.0f;
            const int iterations = 4;
            for(int j = 0; j < iterations; ++j)
            {
                float2 coord1 = reflect(vec[j], rand) * rad;
                float2 coord2 = float2(coord1.x * 0.707f - coord1.y * 0.707f,
                                       coord1.x * 0.707f + coord1.y * 0.707f);

                ao += doAmbientOcclusion(In.UV, coord1 * 0.25f, p, n);
                ao += doAmbientOcclusion(In.UV, coord2 * 0.5f, p, n);
                ao += doAmbientOcclusion(In.UV, coord1 * 0.75f, p, n);
                ao += doAmbientOcclusion(In.UV, coord2, p, n);
            }
            ao /= iterations * 4.0f;

            Out.Color = saturate(ao * g_intensity);
            return Out;
        }

     My parameters:

        g_sample_rad = 0.03f
        g_intensity = 1.0f
        g_scale = 0.5f
        g_bias = 0.2f

     I'm quite happy with the result (no blurring). If you have any questions, let me know.

     edit: MatViewProj => MatViewProjInv
     edit edit: to answer your actual question, the method I use is described here
  12. You said something about "the view ray should be multiplied by the world matrix", so I thought that would be the most appropriate answer. I didn't really know what you were getting at, to be honest. That's just what I had implemented at the time; I mentioned it so you wouldn't mistake it for a fullscreen shader. It wasn't meant to be a "cubes vs. spheres" statement. So, have you found your error?
  13. In.insPos.w is the radius of the point light. My point light positions are in world space. I'm using cubes as light geometry, btw.
  14. Here's my point light shader using MJP's method for reconstruction:

        VS_OUTPUT_LIGHTPASS_INSTANCE vs_lightPass(VS_INPUT_INSTANCE In)
        {
            VS_OUTPUT_LIGHTPASS_INSTANCE Out;
            Out.lightCol = In.color.xyz;

            In.Pos.xyz *= In.insPos.w;
            In.Pos.xyz += In.insPos.xyz;

            Out.lightPos.xyz = In.insPos.xyz;
            Out.lightPos.w = In.insPos.w * In.insPos.w;

            Out.Pos = mul(float4(In.Pos.xyz, 1.0f), MatWVP);
            Out.vPos = ConvertToVPos(Out.Pos);
            Out.vEyeRay = In.Pos.xyz - camPos.xyz;
            return Out;
        }

        PS_output ps_pointLight(in VS_OUTPUT_LIGHTPASS_INSTANCE In)
        {
            PS_output Out;

            const float depth = tex2Dproj(texDepth, In.vPos).x;
            const float3 normal = tex2Dproj(texNormals, In.vPos).xyz;

            const float3 eyeToPixel = normalize(In.vEyeRay.xyz);
            const float3 posInWorld = camPos.xyz + eyeToPixel * depth;

            const float3 lightToPos = In.lightPos.xyz - posInWorld;
            const float distance = length(lightToPos);
            const float3 lightDir = lightToPos / distance;

            const float atten = 1.0f - ((distance * distance) / In.lightPos.w);
            const float diffuse = saturate(dot(lightDir, normal));
            const float specular = getSpecular(lightDir, eyeToPixel, normal, 80.0f);

            Out.Color.xyz = saturate(In.lightCol * diffuse);
            Out.Color.w = specular;
            Out.Color *= atten;
            return Out;
        }
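     ConvertToVPos isn't shown above. As a rough sketch of what it does (my own reconstruction, not copied from the project): it converts the clip-space position to homogeneous texture coordinates for tex2Dproj, including the DX9 half-pixel offset, assuming InvScreenDim holds 1/width and 1/height of the render target:

        float2 InvScreenDim; // assumed constant: (1 / rtWidth, 1 / rtHeight)

        float4 ConvertToVPos(float4 p)
        {
            // clip space -> [0,1] texture space, kept homogeneous so that
            // tex2Dproj can do the divide by w; the p.w * InvScreenDim term
            // becomes the DX9 half-pixel offset after that divide
            return float4(0.5f * (float2(p.x + p.w, p.w - p.y) + p.w * InvScreenDim.xy), p.zw);
        }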
  15. Hmm, I just tried the MJP method. Works nicely :-) Can't help though. The only thing that looks suspect to me is your output declaration, but I'm not familiar with DX10 semantics.

        struct VertexShaderOutput
        {
            float4 PositionCS : SV_POSITION;
            float4 PositionVS : Position;   // ??
            float2 TexCoord   : TEXCOORD0;
            float3 Normal     : TEXCOORD1;
        };

     Maybe try something like:

        struct VertexShaderOutput
        {
            float4 PositionCS : POSITION0;
            float4 PositionVS : TEXCOORD0;
            float2 TexCoord   : TEXCOORD1;
            float3 Normal     : TEXCOORD2;
        };