Showing results for tags '3D' in content posted in Graphics and GPU Programming.



Found 131 results

  1. Hi, I'm on Rastertek series 42, soft shadows, which uses a blur shader and runs extremely slow. http://www.rastertek.com/dx11tut42.html He obnoxiously states that there are many ways to optimize his blur shader, but gives you no idea how to do it. The way he does it is: 1. Project the objects in the scene to a render target using the depth shader. 2. Draw black and white shadows on another render target using those depth textures. 3. Blur the black/white shadow texture produced in step 2 by a) rendering it to a smaller texture, b) blurring that texture vertically and horizontally, c) rendering it back to a bigger texture again. 4. Send the blurred shadow texture into the final shader, which samples its black/white values to determine light intensity. So this uses a ton of render textures, and I just added more than one light, which multiplies the render textures required. Is there any easy way I can optimize the super expensive blur shader that wouldn't require a whole new complicated system? Like combining any of these render textures into one, for example? If you know of any easy way not requiring too many changes, please let me know, as I already had a really hard time understanding the way this works, so a super complicated change would be beyond my capacity. Thanks. *For reference, here is my repo, in which I have simplified his tutorial and added an additional light. https://github.com/mister51213/DX11Port_SoftShadows/tree/MultiShadows
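One cheap optimization that doesn't restructure the pipeline: since a 2D Gaussian blur is separable, the horizontal and vertical passes can share one small set of precomputed, normalized weights, evaluated once on the CPU instead of per-pixel in the shader. A minimal sketch of that precomputation (the function name and kernel size here are my own, not from the tutorial):

```cpp
#include <cmath>
#include <vector>

// Build a normalized 1D Gaussian kernel of the given radius.
// Because the weights sum to 1, the blur never brightens or darkens the
// shadow texture, and the same kernel serves both the horizontal and the
// vertical pass of the separable blur.
std::vector<float> buildGaussianKernel(int radius, float sigma)
{
    std::vector<float> weights(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i)
    {
        float w = std::exp(-(i * i) / (2.0f * sigma * sigma));
        weights[i + radius] = w;
        sum += w;
    }
    for (float& w : weights)   // normalize so the weights sum to 1
        w /= sum;
    return weights;
}
```

The weights would be uploaded to a constant buffer once rather than recomputed; doing the blur at the reduced resolution (as the tutorial already does) and keeping the kernel small is usually where the savings are.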
  2. I have never quite been a master of the d3d9 blend modes.. I know the basic stuff, but have been trying for a while to get a multiply/add blending mode... the best I can figure out is mult2x by setting: SetRenderState(D3DRS_DESTBLEND, D3DBLEND_SRCCOLOR); SetRenderState(D3DRS_SRCBLEND, D3DBLEND_DESTCOLOR); //this isn't quite what I want.. basically I wonder if there is a way to like multiply by any color darker than 0.5 and add by any color lighter than that..? I don't know, maybe this system is too limited...
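For reference, the blend state set up above can be checked on the CPU: with SRCBLEND = DESTCOLOR and DESTBLEND = SRCCOLOR the hardware computes src*dst + dst*src = 2*src*dst, the classic "mult2x". A source value of 0.5 leaves the destination unchanged, darker values multiply it down, lighter values brighten it, which is at least in the spirit of the multiply-below / add-above behavior being asked for, even if it isn't exactly that. A tiny sketch (function name is mine):

```cpp
#include <algorithm>

// CPU model of the fixed-function blend
//   SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_DESTCOLOR);
//   SetRenderState(D3DRS_DESTBLEND, D3DBLEND_SRCCOLOR);
// i.e. out = src*dst + dst*src = 2*src*dst, clamped to [0,1]
// the way the blend unit clamps.
float mult2x(float src, float dst)
{
    return std::min(1.0f, 2.0f * src * dst);
}
```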
  3. Hi! I've been trying to implement a simple virtual globe rendering system using the book "3D Engine Design for Virtual Globes" as a reference. What I do is use 6 planes to form a cube, send it to the GPU, and use the vertex shader to form a sphere and add random noise to simulate the surface of the planet. The problem is how to do CPU work on the vertex data from this point on - how do I get the world-space coordinates of a terrain patch to perform LOD techniques, how do I do camera-terrain collision detection, etc.?
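Since the sphere is produced procedurally in the vertex shader, the usual answer is to run the same deformation on the CPU for whatever patch corners or centers are needed: take the cube-face position, normalize it onto the unit sphere, and scale by the planet radius (and, for exact collision, add the same noise term with an identical CPU-side noise function). A sketch of the deterministic part, with my own names and the noise omitted:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Mirror of the vertex-shader mapping: a point on the cube is pushed onto a
// sphere of the given radius by normalizing it. Running this on the CPU for
// patch corners/centers yields world-space positions usable for LOD
// selection and coarse camera-terrain collision tests.
Vec3 cubeToSphere(Vec3 p, float radius)
{
    float len = std::sqrt(p.x * p.x + p.y * p.y + p.z * p.z);
    return { p.x / len * radius, p.y / len * radius, p.z / len * radius };
}
```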
  4. Hey guys, Are lightmaps still the best way to handle static diffuse irradiance, or is SH used for both diffuse and specular irradiance now? Also, do any modern games store direct light in lightmaps, or is all direct lighting handled by shadow maps now? Finally, how is SH usually baked? Thanks!
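On the "how is SH usually baked" part: the standard recipe is Monte Carlo projection - sample many directions (from a cubemap render or ray casts at the probe position), evaluate the SH basis in each direction, and average radiance times basis times 4π. A sketch of the band-0/band-1 (4-coefficient) version with the usual hardcoded basis constants; real bakers typically go to order 2 (9 coefficients), and the names here are my own:

```cpp
#include <array>
#include <cmath>
#include <vector>

struct Dir { float x, y, z; };   // assumed unit length

// First four real spherical harmonic basis functions (bands 0 and 1).
std::array<float, 4> shBasis(Dir d)
{
    return { 0.282095f,             // Y(0, 0)
             0.488603f * d.y,       // Y(1,-1)
             0.488603f * d.z,       // Y(1, 0)
             0.488603f * d.x };     // Y(1, 1)
}

// Monte Carlo projection: average radiance(d) * basis(d) over uniformly
// distributed sample directions, scaled by the sphere's solid angle 4*pi.
template <typename RadianceFn>
std::array<float, 4> projectSH(const std::vector<Dir>& samples, RadianceFn radiance)
{
    std::array<float, 4> coeffs{};
    for (const Dir& d : samples)
    {
        std::array<float, 4> y = shBasis(d);
        float L = radiance(d);
        for (int i = 0; i < 4; ++i)
            coeffs[i] += L * y[i];
    }
    const float fourPi = 4.0f * 3.14159265f;
    for (float& c : coeffs)
        c *= fourPi / static_cast<float>(samples.size());
    return coeffs;
}
```

Projecting a constant environment leaves only the DC coefficient, which is a handy sanity check for a baker.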
  5. Hey guys, So I was wondering how modern terrain and water geometry works, both with and without tessellation. Essentially: 1) Is geoclipmapping still the best CPU tessellation technique? 2) Is geoclipmapping still used with hardware tessellation? 3) Is non-tessellated water just flat? Are there any other (reasonable) ways to simulate it? Do people use geoclipmapping for that too? Thanks!
  6. Hi, until now I've used the typical vertex shader approach for skinning, with a constant buffer containing the transform matrices for the bones and a vertex buffer containing bone indices and bone weights. Now I have implemented realtime environment probe cubemapping, so I have to render my scene from many points of view, and skinning takes too long because it is recalculated for every side of the cubemap. For info, I am working on Win7 and therefore use Shader Model 5.0, not the 5.x versions that have more options - or is there a way to use 5.x on Win7? My graphics card is a DirectX 12 compatible NVidia GTX 960. The member turanszkij has posted a compute shader that is understandable to me (for info: in his engine he uses an optimized version of it): https://turanszkij.wordpress.com/2017/09/09/skinning-in-compute-shader/ Now my questions: is it possible to feed the compute shader with my original vertex buffer, or do I have to copy it into several ByteAddressBuffers as implemented in the following code? The same question goes for the constant buffer of matrices. My more urgent question is how to feed my normal pipeline with the result of the compute shader, which is two RWByteAddressBuffers containing position and normal. For example, I could use two vertex buffer bindings: 1. containing only the UV coordinates, 2. containing position and normal. How do I copy from the RWByteAddressBuffers to the vertex buffer?
(Code from turanszkij.) Here is my shader implementation for skinning a mesh in a compute shader:

struct Bone
{
    float4x4 pose;
};
StructuredBuffer<Bone> boneBuffer;

ByteAddressBuffer vertexBuffer_POS; // T-Pose pos
ByteAddressBuffer vertexBuffer_NOR; // T-Pose normal
ByteAddressBuffer vertexBuffer_WEI; // bone weights
ByteAddressBuffer vertexBuffer_BON; // bone indices

RWByteAddressBuffer streamoutBuffer_POS; // skinned pos
RWByteAddressBuffer streamoutBuffer_NOR; // skinned normal
RWByteAddressBuffer streamoutBuffer_PRE; // previous frame skinned pos

inline void Skinning(inout float4 pos, inout float4 nor, in float4 inBon, in float4 inWei)
{
    float4 p = 0, pp = 0;
    float3 n = 0;
    float4x4 m;
    float3x3 m3;
    float weisum = 0;

    // force loop to reduce register pressure
    // though this way we can not interleave TEX - ALU operations
    [loop]
    for (uint i = 0; ((i < 4) && (weisum < 1.0f)); ++i)
    {
        m = boneBuffer[(uint)inBon[i]].pose;
        m3 = (float3x3)m;

        p += mul(float4(pos.xyz, 1), m) * inWei[i];
        n += mul(nor.xyz, m3) * inWei[i];

        weisum += inWei[i];
    }

    bool w = any(inWei);
    pos.xyz = w ? p.xyz : pos.xyz;
    nor.xyz = w ? n : nor.xyz;
}

[numthreads(1024, 1, 1)]
void main(uint3 DTid : SV_DispatchThreadID)
{
    const uint fetchAddress = DTid.x * 16; // stride is 16 bytes for each vertex buffer now...

    uint4 pos_u = vertexBuffer_POS.Load4(fetchAddress);
    uint4 nor_u = vertexBuffer_NOR.Load4(fetchAddress);
    uint4 wei_u = vertexBuffer_WEI.Load4(fetchAddress);
    uint4 bon_u = vertexBuffer_BON.Load4(fetchAddress);

    float4 pos = asfloat(pos_u);
    float4 nor = asfloat(nor_u);
    float4 wei = asfloat(wei_u);
    float4 bon = asfloat(bon_u);

    Skinning(pos, nor, bon, wei);

    pos_u = asuint(pos);
    nor_u = asuint(nor);

    // copy prev frame current pos to current frame prev pos
    streamoutBuffer_PRE.Store4(fetchAddress, streamoutBuffer_POS.Load4(fetchAddress));

    // write out skinned props:
    streamoutBuffer_POS.Store4(fetchAddress, pos_u);
    streamoutBuffer_NOR.Store4(fetchAddress, nor_u);
}
  7. Hi, can someone please explain why this is giving an assertion EyePosition!=0 exception? _lightBufferVS->viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&_lightBufferVS->position), XMLoadFloat3(&_lookAt), XMLoadFloat3(&up)); It looks like DirectX doesn't want the 2nd parameter to be a zero vector in the assertion, but I passed in a zero vector with this exact same code in another program and it ran just fine. (Here is the version of the code that worked - note that the XMLoadFloat3(&m_lookAt) parameter value is (0,0,0) at runtime - I debugged it - but it throws no exceptions.) m_viewMatrix = DirectX::XMMatrixLookAtLH(XMLoadFloat3(&m_position), XMLoadFloat3(&m_lookAt), XMLoadFloat3(&up)); Here is the repo for the broken code (see LightClass): https://github.com/mister51213/DirectX11Engine/blob/master/DirectX11Engine/LightClass.cpp and here is the repo with the alternative version of the code that works with a value of (0,0,0) for the second parameter: https://github.com/mister51213/DX11Port_SoftShadows/blob/master/Engine/lightclass.cpp
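For what it's worth, the assert inside XMMatrixLookAtLH is not about the focus point being zero in absolute terms: it fires when the eye-to-focus direction (FocusPosition - EyePosition) has zero length, because no view basis can be built from a degenerate forward vector. That would explain why a (0,0,0) look-at target is fine in one program (the eye is elsewhere) and asserts in another (the eye coincides with the target). A plain-math illustration of the condition, with my own names:

```cpp
struct Float3 { float x, y, z; };

// Mirrors the precondition XMMatrixLookAtLH asserts on: the forward vector
// focus - eye must not be the zero vector. The focus itself may freely be
// the origin as long as the eye is somewhere else.
bool lookAtIsValid(Float3 eye, Float3 focus)
{
    float dx = focus.x - eye.x;
    float dy = focus.y - eye.y;
    float dz = focus.z - eye.z;
    return (dx * dx + dy * dy + dz * dz) > 0.0f;
}
```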
  8. Hello! I wrote a simple bones system that renders a 3D model with bones using software vertex processing. The model is loaded perfectly, but I can't see any colors on it. For illustration, you can see the 3D lines list; the bones (32 bones) are in the correct position (bind pose). Now, here's the problem. When I try to render the mesh with transformations applied, I see this: As you can see the 3D lines are disappearing; I'm guessing the model is rendered, but the colors are not visible for whatever reason. I tried moving my camera around the line list, but all I can see is some lines disappearing due to the black color of the vertices? I'm not loading any textures - am I supposed to load them? However, if I render the vertices without applying ANY bone transformations, then I can see it, but it's a mess, obviously. If you're wondering why it's red, I have set the color of half of these vertices to red and the other half to white. First of all, my apologies for the messy code, but here it is: I'm not sure if vertices are supposed to have weights in them for software vertex processing. I'm storing them in a container, so you don't see them here. #define CUSTOMFVF ( D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE ) struct CUSTOMVERTEX { D3DXVECTOR3 Position; D3DXVECTOR3 Normal; DWORD Color; }; This is how I store the vertices in the container and give them red and white colors: This is how I create the device: For every frame: This is the UpdateSkinnedMesh method: I have debugged bone weights and bone indices. They are okay. Bone weights add up to 1.0f, so I'm really wondering why I can't see the model with colors on it?
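A useful sanity check when colors or geometry vanish only once transforms are applied: software skinning is just a weighted blend of bone-transformed positions, and with a single identity bone at weight 1 the output must equal the input exactly. A minimal CPU sketch of that blend with my own types (the real code would use D3DXMATRIX and D3DXVec3TransformCoord):

```cpp
#include <array>

struct V3 { float x, y, z; };

// 4x4 row-major matrix times a position (w = 1),
// in the style of D3DXVec3TransformCoord.
V3 transform(const std::array<float, 16>& m, V3 p)
{
    return { p.x * m[0] + p.y * m[4] + p.z * m[8]  + m[12],
             p.x * m[1] + p.y * m[5] + p.z * m[9]  + m[13],
             p.x * m[2] + p.y * m[6] + p.z * m[10] + m[14] };
}

// Weighted blend of bone-transformed positions; the weights are expected
// to sum to 1 (as the post already verified with the debugger).
V3 skin(V3 p, const std::array<float, 16>* bones,
        const int* idx, const float* w, int count)
{
    V3 out{0.0f, 0.0f, 0.0f};
    for (int i = 0; i < count; ++i)
    {
        V3 t = transform(bones[idx[i]], p);
        out.x += t.x * w[i];
        out.y += t.y * w[i];
        out.z += t.z * w[i];
    }
    return out;
}
```

If the identity-bone case already distorts the mesh, the bug is in the matrix layout or indexing rather than in the weights.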
  9. Hi, We sell a presentation app which uses Direct3D 9. It runs slowly on laptops with dual video cards (Intel + NVIDIA or AMD), and we have to ask users to manually choose the NVIDIA video card for our app. Is there any API to automatically select the dedicated NVIDIA/AMD video card in our app on Windows 10? Thanks,
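There is a de-facto mechanism for exactly this: both vendors' drivers inspect the executable's export table for specially named global symbols and, if they are present and nonzero, route the process to the dedicated GPU without any user action. To be clear, this is a driver convention rather than a formal Windows API, and the exports must live in the .exe itself, not in a DLL (the #ifdef below only keeps the snippet compilable on other platforms):

```cpp
// NVIDIA Optimus and AMD switchable-graphics drivers look for these
// exported globals in the exe; a nonzero value requests the dedicated GPU.
#ifdef _WIN32
#define GPU_EXPORT extern "C" __declspec(dllexport)
#else
#define GPU_EXPORT extern "C"
#endif

GPU_EXPORT unsigned long NvOptimusEnablement = 0x00000001;
GPU_EXPORT int AmdPowerXpressRequestHighPerformance = 1;
```

For D3D9 this is the practical route; explicit adapter selection via enumeration is a bigger change and is mainly a DXGI-era (D3D10+) facility.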
  10. Hi, can somebody please tell me in clear, simple steps how to debug and step through an HLSL shader file? I already did Debug > Start Graphics Debugging, captured some frames from Visual Studio, and double-clicked on a frame to open it, but I have no idea where to go from there. I've been searching for hours and there's no information on this, not even on the Microsoft website! They say "open the Graphics Pixel History window", but there is no such window! Then they say, in the "Pipeline Stages", choose "Start Debugging", but the Start Debugging option is nowhere to be found in the whole interface. Also, how do I even open the HLSL file that I want to set a breakpoint in from inside the Graphics Debugger? All I want to do is set a breakpoint in a specific HLSL file, step through it, and see the data, but this is so unbelievably complicated and Microsoft's instructions are horrible! Somebody please, please help.
  11. Is there a Direct3D 11 API function like glMemoryBarrier in OpenGL? For example: bind a texture to a compute shader, have the compute shader write some values to the texture, then Dispatch, and after that read the texture content back on the CPU side. I know that in OpenGL we can call glMemoryBarrier before reading to ensure that all the texture content has been updated by the compute shader. How do I handle incoherent memory access in Direct3D 11? Thank you.
  12. The technique for generating infinite ground is simple: I calculate the view distance from the camera to whatever location it is over, then scale the ground geometry up or down according to that. It works quite well. But I wonder: if I had calculated the navigation data beforehand, would it be affected in any way? Currently I assume there is no problem, because once the navigation data is calculated it is decoupled from the geometry. But what happens when I put a new "tile" on an area I'm looking towards where there initially was no navigation data, or when I raise the camera up? Because the ground is "scaled" every frame, the navigation-mesh sub-system is probably grabbing the geometry data while you turn the camera again - quite dangerous! Secondly, the textures are all screwed up/stretched. Those are the only problems... thanks Jack
  13. Hello to all. I have a DirectX 8 game I am fixing; all works well except one DirectShow function that streams a video into a DirectDraw surface - I get an Access Violation error in ->CreateSurface. Can someone help me? Thanks. Here is the function. create_stream(const char *file_path) { IAMMultiMediaStream *local_stream_ptr; IAMMultiMediaStream *global_stream_ptr; IMediaStream *primary_video_stream_ptr; IDirectDrawMediaStream *ddraw_stream_ptr; IDirectDrawStreamSample *video_sample_ptr; LPDIRECTDRAWSURFACE video_surface_ptr; DDPIXELFORMAT ddraw_video_pixel_format; WCHAR wPath[MAX_PATH]; DDSURFACEDESC ddraw_surface_desc; RECT rect; int video_width, video_height; // Initialise the COM library. CoInitialize(NULL); // Initialise the global variables. global_stream_ptr = NULL; primary_video_stream_ptr = NULL; ddraw_stream_ptr = NULL; video_sample_ptr = NULL; video_surface_ptr = NULL; // Create the local multi-media stream object. if (CoCreateInstance(CLSID_AMMultiMediaStream, NULL, CLSCTX_INPROC_SERVER, IID_IAMMultiMediaStream, (void **)&local_stream_ptr) != S_OK) return(PLAYER_UNAVAILABLE); // Initialise the local stream object. if (local_stream_ptr->Initialize(STREAMTYPE_READ, AMMSF_NOGRAPHTHREAD,NULL) != S_OK) { local_stream_ptr->Release(); return(PLAYER_UNAVAILABLE); } // Add a primary video stream to the local stream object. if (local_stream_ptr->AddMediaStream(ddraw_object_ptr, &MSPID_PrimaryVideo, 0, NULL) != S_OK) { local_stream_ptr->Release(); return(PLAYER_UNAVAILABLE); } // Add a primary audio stream to the local stream object, using the // default audio renderer for playback. if (local_stream_ptr->AddMediaStream(NULL, &MSPID_PrimaryAudio, AMMSF_ADDDEFAULTRENDERER, NULL) != S_OK) { local_stream_ptr->Release(); return(PLAYER_UNAVAILABLE); } // Open the streaming media file. 
MultiByteToWideChar(CP_ACP, 0, file_path, -1, wPath, MAX_PATH); if (local_stream_ptr->OpenFile(wPath, 0) != S_OK) { local_stream_ptr->Release(); diagnose("Windows Media Player was unable to open stream URL %s", file_path); return(STREAM_UNAVAILABLE); } // Convert the local stream object into a global stream object. local_stream_ptr->AddRef(); global_stream_ptr = local_stream_ptr; // Initialise the primary video stream, if it exists. if (global_stream_ptr->GetMediaStream(MSPID_PrimaryVideo, &primary_video_stream_ptr) != S_OK) { warning("Could not get the primary video stream"); return(STREAM_UNAVAILABLE); } else { warning("Get the primary video stream"); } if (primary_video_stream_ptr->QueryInterface(IID_IDirectDrawMediaStream,(void **)&ddraw_stream_ptr) != S_OK) { warning("Could not obtain the DirectDraw stream object"); } else { warning("Obtain the DirectDraw stream object"); } // Determine the unscaled size of the video frame. if (ddraw_stream_ptr->GetFormat(&ddraw_surface_desc, NULL, NULL, NULL) != S_OK) { warning("Could not determine the unscaled size of the video frame"); } else { warning("Determine the unscaled size of the video frame"); } video_width = ddraw_surface_desc.dwWidth; video_height = ddraw_surface_desc.dwHeight; // Create a DirectDraw video surface using the texture pixel format, but // without an alpha channel (otherwise CreateSample will spit the dummy). 
memset(&ddraw_surface_desc, 0, sizeof(DDSURFACEDESC)); ddraw_surface_desc.dwSize = sizeof(DDSURFACEDESC); ddraw_surface_desc.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT | DDSD_PIXELFORMAT; ddraw_surface_desc.ddsCaps.dwCaps = DDSCAPS_TEXTURE | DDSCAPS_SYSTEMMEMORY; ddraw_surface_desc.dwWidth = video_width; ddraw_surface_desc.dwHeight = video_height; ddraw_surface_desc.ddpfPixelFormat = ddraw_video_pixel_format; // Here i got acccess violation if (ddraw_object_ptr->CreateSurface(&ddraw_surface_desc, &video_surface_ptr, NULL) != DD_OK) { warning("Could not create a DirectDraw video surface"); } else { warning("Create a DirectDraw video surface"); } // Set the rectangle that is to be rendered to on the video surface. rect.left = 0; rect.right = video_width; rect.top = 0; rect.bottom = video_height; // Create the video sample for the video surface. if (ddraw_stream_ptr->CreateSample(video_surface_ptr, &rect, 0, &video_sample_ptr) != S_OK) { warning("Could not create the video sample for the video surface"); } else { warning("Created the video sample for the video surface"); } // Create the event that will be used to signal that a video frame is // available. video_frame_available.create_event(); // Initialise the video textures now, since we already know the // dimensions of the video frame. init_video_textures(video_width, video_height, RGB16); warning("Surface started"); streaming_video_available = true; // Get the end of stream event handle. global_stream_ptr->GetEndOfStreamEventHandle(&end_of_stream_handle); // Return a success status. warning("Stream started"); return(STREAM_STARTED); }
  14. TL;DR Mesh slicing... Anyone try to venture that way? So I'm aware of this thread: But since it's from 2004, I wanted to ask: are there any games since (or ever) that actually have real dynamic mesh slicing? The only one I know of that kind of does this is Metal Gear Rising: Revengeance. Anyone know, or can guess, how they did it? Is the mesh slicing done in real time? It doesn't seem precomputed, at least. I tried my hand at programming a mesh slicing algorithm the other day, but my routine gets too slow for real time at about 200 vertices. Using this paper here: http://www.dainf.ct.utfpr.edu.br/~murilo/public/CAD-slicing.pdf (Obviously some bugs - face orientation messes up - and I have to say it was completely on the CPU, and I didn't spend much effort optimizing anything.)
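For the slicing routine itself, the per-triangle work is usually just a signed-distance test of each vertex against the cutting plane, followed by one of three cases: all on one side (copy through) or straddling (split into three new triangles plus cap edges). That classification is so cheap that if 200 vertices is already too slow, the cost is almost certainly elsewhere (cap re-triangulation, per-frame allocations). A sketch of the classification step only, with my own names:

```cpp
#include <cmath>

struct P3 { float x, y, z; };

// Signed distance from point p to the plane with unit normal n and offset d
// (plane equation: dot(n, x) = d). Positive means "in front of" the plane.
float signedDistance(P3 p, P3 n, float d)
{
    return p.x * n.x + p.y * n.y + p.z * n.z - d;
}

// Classify a triangle against the plane: +1 fully in front, -1 fully
// behind, 0 straddling (the only case the slicer actually has to split).
int classifyTriangle(const P3 tri[3], P3 n, float d, float eps = 1e-6f)
{
    int pos = 0, neg = 0;
    for (int i = 0; i < 3; ++i)
    {
        float s = signedDistance(tri[i], n, d);
        if (s > eps)       ++pos;
        else if (s < -eps) ++neg;
    }
    if (pos > 0 && neg == 0) return  1;
    if (neg > 0 && pos == 0) return -1;
    if (pos == 0 && neg == 0) return 1;  // coplanar: treat as "front" here
    return 0;
}
```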
  15. Is a sync needed to read texture content back after accessing the texture image in a compute shader? My simple code is as below: glUseProgram(program.get()); glBindImageTexture(0, texture[0], 0, GL_FALSE, 3, GL_READ_ONLY, GL_R32UI); glBindImageTexture(1, texture[1], 0, GL_FALSE, 4, GL_WRITE_ONLY, GL_R32UI); glDispatchCompute(1, 1, 1); // Is a sync needed here? glUseProgram(0); glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer); glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture[1], 0); glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues); The compute shader is very simple: it imageLoads content from texture[0] and imageStores content to texture[1]. Is a sync needed after glDispatchCompute?
  16. Blast from the past: While checking my different renderers I've come upon a problem in a Direct3D 8 renderer, which I'm pretty sure worked fine a while ago. Basically I'm drawing a 2D quad with pretransformed vertices, but the first triangle's texture coordinates seem off. I'm offsetting the coordinates by -0.5,-0.5 as per spec and there's not much more to it. The same part with D3D11 using shaders works fine (well, duh). Anyhow, I'm still curious, as the DX8 renderer is still my final fallback and it currently is supported quite well. Anybody see any glaring mistake? The code in question is below; the texture is a 16x16-pixel cutout from a 256x256 texture, and these values are used. FWIW I'm running on Windows 10 with an AMD Radeon 5450. iX = 22 iY = 222 Texture rect from the full texture area is (144,0) with 16x16 size The box is drawn scaled up to a size of 40x40 pixels m_DirectTexelMapping.offset is a 2D vector with values -0.5, -0.5 I've set the min-, mag- and mip-mapping filters to nearest. 
struct CUSTOMVERTEX { D3DXVECTOR3 position; // The position float fRHW; D3DCOLOR color; // The color float fTU, fTV; }; CUSTOMVERTEX vertData[4]; float fRHW = 1.0f; GR::tVector ptPos( (float)iX, (float)iY, fZ ); GR::tVector ptSize( (float)iWidth, (float)iHeight, 0.0f ); m_pd3dDevice->SetVertexShader( D3DFVF_XYZRHW | D3DFVF_DIFFUSE | D3DFVF_TEX1 ); vertData[0].position.x = ptPos.x + m_DirectTexelMappingOffset.x; vertData[0].position.y = ptPos.y + m_DirectTexelMappingOffset.y; vertData[0].position.z = (float)ptPos.z; vertData[0].fRHW = fRHW; vertData[0].color = dwColor1; vertData[0].fTU = fTU1; vertData[0].fTV = fTV1; vertData[1].position.x = ptPos.x + ptSize.x + m_DirectTexelMappingOffset.x; vertData[1].position.y = ptPos.y + m_DirectTexelMappingOffset.y; vertData[1].position.z = (float)ptPos.z; vertData[1].fRHW = fRHW; vertData[1].color = dwColor2; vertData[1].fTU = fTU2; vertData[1].fTV = fTV2; vertData[2].position.x = ptPos.x + m_DirectTexelMappingOffset.x; vertData[2].position.y = ptPos.y + ptSize.y + m_DirectTexelMappingOffset.y; vertData[2].position.z = (float)ptPos.z; vertData[2].fRHW = fRHW; vertData[2].color = dwColor3; vertData[2].fTU = fTU3; vertData[2].fTV = fTV3; vertData[3].position.x = ptPos.x + ptSize.x + m_DirectTexelMappingOffset.x; vertData[3].position.y = ptPos.y + ptSize.y + m_DirectTexelMappingOffset.y; vertData[3].position.z = (float)ptPos.z; vertData[3].fRHW = fRHW; vertData[3].color = dwColor4; vertData[3].fTU = fTU4; vertData[3].fTV = fTV4; m_pd3dDevice->DrawPrimitiveUP( D3DPT_TRIANGLESTRIP, 2, &vertData, sizeof( vertData[0] ) ); God, I hate this borked message editor, it's so not user friendly. This POS message editor really loves to f*ck up code formatting.
  17. I made a spotlight that 1. Projects 3d models onto a render target from each light POV to simulate shadows 2. Cuts a circle out of the square of light that has been projected onto the render target as a result of the light frustum, then only lights up the pixels inside that circle (except the shadowed parts of course), so you dont see the square edges of the projected frustum. After doing an if check to see if the dot product of light direction and light to vertex vector is greater than .95 to get my initial cutoff, I then multiply the light intensity value inside the resulting circle by the same dot product value, which should range between .95 and 1.0. This should give the light inside that circle a falloff from 100% lit to 0% lit toward the edge of the circle. However, there is no falloff. It's just all equally lit inside the circle. Why on earth, I have no idea. If someone could take a gander and let me know, please help, thank you so much. float CalculateSpotLightIntensity( float3 LightPos_VertexSpace, float3 LightDirection_WS, float3 SurfaceNormal_WS) { //float3 lightToVertex = normalize(SurfacePosition - LightPos_VertexSpace); float3 lightToVertex_WS = -LightPos_VertexSpace; float dotProduct = saturate(dot(normalize(lightToVertex_WS), normalize(LightDirection_WS))); // METALLIC EFFECT (deactivate for now) float metalEffect = saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))); if(dotProduct > .95 /*&& metalEffect > .55*/) { return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))); //return saturate(dot(SurfaceNormal_WS, normalize(LightPos_VertexSpace))) * dotProduct; //return dotProduct; } else { return 0; } } float4 LightPixelShader(PixelInputType input) : SV_TARGET { float2 projectTexCoord; float depthValue; float lightDepthValue; float4 textureColor; // Set the bias value for fixing the floating point precision issues. float bias = 0.001f; // Set the default output color to the ambient light value for all pixels. 
float4 lightColor = cb_ambientColor; /////////////////// NORMAL MAPPING ////////////////// float4 bumpMap = shaderTextures[4].Sample(SampleType, input.tex); // Expand the range of the normal value from (0, +1) to (-1, +1). bumpMap = (bumpMap * 2.0f) - 1.0f; // Change the COORDINATE BASIS of the normal into the space represented by basis vectors tangent, binormal, and normal! float3 bumpNormal = normalize((bumpMap.x * input.tangent) + (bumpMap.y * input.binormal) + (bumpMap.z * input.normal)); //////////////// LIGHT LOOP //////////////// for(int i = 0; i < NUM_LIGHTS; ++i) { // Calculate the projected texture coordinates. projectTexCoord.x = input.vertex_ProjLightSpace[i].x / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f; projectTexCoord.y = -input.vertex_ProjLightSpace[i].y / input.vertex_ProjLightSpace[i].w / 2.0f + 0.5f; if((saturate(projectTexCoord.x) == projectTexCoord.x) && (saturate(projectTexCoord.y) == projectTexCoord.y)) { // Sample the shadow map depth value from the depth texture using the sampler at the projected texture coordinate location. depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r; // Calculate the depth of the light. lightDepthValue = input.vertex_ProjLightSpace[i].z / input.vertex_ProjLightSpace[i].w; // Subtract the bias from the lightDepthValue. lightDepthValue = lightDepthValue - bias; float lightVisibility = shaderTextures[6 + i].SampleCmp(SampleTypeComp, projectTexCoord, lightDepthValue ); // Compare the depth of the shadow map value and the depth of the light to determine whether to shadow or to light this pixel. // If the light is in front of the object then light the pixel, if not then shadow this pixel since an object (occluder) is casting a shadow on it. if(lightDepthValue < depthValue) { // Calculate the amount of light on this pixel. 
float lightIntensity = saturate(dot(bumpNormal, normalize(input.lightPos_LS[i]))); if(lightIntensity > 0.0f) { // Determine the final diffuse color based on the diffuse color and the amount of light intensity. float spotLightIntensity = CalculateSpotLightIntensity( input.lightPos_LS[i], // NOTE - this is NOT NORMALIZED!!! cb_lights[i].lightDirection, bumpNormal/*input.normal*/); lightColor += cb_lights[i].diffuseColor*spotLightIntensity* .18f; // spotlight //lightColor += cb_lights[i].diffuseColor*lightIntensity* .2f; // square light } } } } // Saturate the final light color. lightColor = saturate(lightColor); // lightColor = saturate( CalculateNormalMapIntensity(input, lightColor, cb_lights[0].lightDirection)); // TEXTURE ANIMATION - Sample pixel color from texture at this texture coordinate location. input.tex.x += textureTranslation; // BLENDING float4 color1 = shaderTextures[0].Sample(SampleTypeWrap, input.tex); float4 color2 = shaderTextures[1].Sample(SampleTypeWrap, input.tex); float4 alphaValue = shaderTextures[3].Sample(SampleTypeWrap, input.tex); textureColor = saturate((alphaValue * color1) + ((1.0f - alphaValue) * color2)); // Combine the light and texture color. float4 finalColor = lightColor * textureColor; /////// TRANSPARENCY ///////// //finalColor.a = 0.2f; return finalColor; } Light_vs.hlsl Light_ps.hlsl
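A likely reason the falloff in the post above is invisible: inside the cone the dot product only spans [0.95, 1.0], so multiplying the intensity by it changes the result by at most 5%, which the eye can't see. The usual fix is to remap that narrow band onto the full [0, 1] range before using it as a factor. A CPU sketch of the remap (names are mine; in the HLSL above it would replace the bare `dotProduct` factor):

```cpp
#include <algorithm>

// Remap a spotlight dot product from [cutoff, 1] to [0, 1] so the falloff
// spans the whole visible range instead of a 5% sliver. Squaring or pow()ing
// the result afterwards sharpens the falloff curve if desired.
float spotFalloff(float dotProduct, float cutoff)
{
    float t = (dotProduct - cutoff) / (1.0f - cutoff);
    return std::min(1.0f, std::max(0.0f, t));   // saturate
}
```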
  18. Hey guys, I basically learned graphics programming on here, so I figured I should come back to relearn. I was a console graphics programmer for a bit, but for the past three years I've been mostly doing performance work and finding ways to bastardize typical techniques to make them run on a Mali 400 Samsung S3 in India. Because of this (and the overtime), I haven't spent as much time on pretty graphics in a while - where should I start? I'm currently updating my personal engine for OGL 4 with an eye toward Vulkan once I'm stable again, and my first goal will be getting a rough approximation of a PBR pipeline together, but after that what should I focus on to get back up to speed? I know it's an open topic and I'm mostly just looking for conversation about the "State of Graphics Programming in 2018"; any feedback will be appreciated.
  19. Just realized maybe this doesn't fit in this sub-forum, but oh well. I'm making an exporter plugin for Maya and I want to export a non-triangulated mesh while still outputting triangle data, not quad/n-gon data. Using MItMeshPolygon, I am doing the following: for (; !polyIter.isDone(); polyIter.next()) { //Get points and normals from current polygon MPointArray vts; polyIter.getPoints(vts); MVectorArray nmls; polyIter.getNormals(nmls); //Get number of triangles in current polygon int numberOfTriangles; polyIter.numTriangles(numberOfTriangles); //Loop through all triangles for (int i = 0; i < numberOfTriangles; i++) { //Get points and vertexList for this triangle. //vertexList is used to index into the polygon verts and normals. MPointArray points = {}; MIntArray vertexList = {}; polyIter.getTriangle(i, points, vertexList, MSpace::kObject); //For each vertex in this triangle for (int v = 0; v < 3; v++) { //Get point and normal UINT vi = polyIter.vertexIndex(vertexList[v]); UINT ni = polyIter.normalIndex(vertexList[v]); MPoint _v = vts[vi]; MFloatVector _n = nmls[ni]; //Create vertex Vertex_pos3nor3uv2 vert = {}; vert.posX = _v.x; vert.posY = _v.y; vert.posZ = _v.z * -1.0; vert.norX = _n.x; vert.norY = _n.y; vert.norZ = _n.z * -1.0; vert.u = 0.0; vert.v = 0.0; verts.push_back(vert); } } } Doing this only gives me half the triangles I'm supposed to get, and the result is very distorted. The link above is a picture of a cube exported this way. Edit: I've also tried indexing into the entire mesh vertex array like this: MPointArray vts; meshFn.getPoints(vts); MFloatVectorArray nmls; meshFn.getNormals(nmls); //.... UINT vi = polyIter.vertexIndex(vertexList[v]); UINT ni = polyIter.normalIndex(vertexList[v]); MPoint _v = vts[vi]; MFloatVector _n = nmls[vi]; I can't figure out what's wrong with my code. Any ideas?
  20. Please look at my new post in this thread where I supply new information! I'm trying to implement SSAO in my 'engine' (based on this article) but I'm getting odd results. I know I'm doing something wrong but I can't figure out what's causing the particular issue I'm having at the moment. Here's a video of what it looks like. The rendered output is the SSAO map. As you can see, the result is heavily altered depending on the camera (although it seems to be unaffected by camera translation). The fact that the occlusion itself isn't correct isn't much of a problem at this stage, since I've hardcoded a lot of stuff that shouldn't be. E.g. I don't have a random-vector texture; all I do is use one of the sample vectors in order to construct the TBN matrix. One issue at a time... My shaders are as follows: //SSAO VS struct VS_IN { float3 pos : POSITION; float3 ray : VIEWRAY; }; struct VS_OUT { float4 pos : SV_POSITION; float4 ray : VIEWRAY; }; VS_OUT VS_main( VS_IN input ) { VS_OUT output; output.pos = float4(input.pos, 1.0f); //already in NDC space, pass through output.ray = float4(input.ray, 0.0f); //interpolate view ray return output; } Texture2D depthTexture : register(t0); Texture2D normalTexture : register(t1); struct VS_OUT { float4 pos : SV_POSITION; float4 ray : VIEWRAY; }; cbuffer cbViewProj : register(b0) { float4x4 view; float4x4 projection; } float4 PS_main(VS_OUT input) : SV_TARGET { //Generate samples float3 kernel[8]; kernel[0] = float3(1.0f, 1.0f, 1.0f); kernel[1] = float3(-1.0f, -1.0f, 0.0f); kernel[2] = float3(-1.0f, 1.0f, 1.0f); kernel[3] = float3(1.0f, -1.0f, 0.0f); kernel[4] = float3(1.0f, 1.0f, 0.0f); kernel[5] = float3(-1.0f, -1.0f, 1.0f); kernel[6] = float3(-1.0f, 1.0f, .0f); kernel[7] = float3(1.0f, -1.0f, 1.0f); //Get texcoord using SV_POSITION int3 texCoord = int3(input.pos.xy, 0); //Fragment viewspace position (non-linear depth) float3 origin = input.ray.xyz * (depthTexture.Load(texCoord).r); //world space normal transformed to view space 
and normalized float3 normal = normalize(mul(view, float4(normalTexture.Load(texCoord).xyz, 0.0f))); //Grab arbitrary vector for construction of TBN matrix float3 rvec = kernel[3]; float3 tangent = normalize(rvec - normal * dot(rvec, normal)); float3 bitangent = cross(normal, tangent); float3x3 tbn = float3x3(tangent, bitangent, normal); float occlusion = 0.0; for (int i = 0; i < 8; ++i) { // get sample position: float3 samp = mul(tbn, kernel[i]); samp = samp * 1.0f + origin; // project sample position: float4 offset = float4(samp, 1.0); offset = mul(projection, offset); offset.xy /= offset.w; offset.xy = offset.xy * 0.5 + 0.5; // get sample depth. (again, non-linear depth) float sampleDepth = depthTexture.Load(int3(offset.xy, 0)).r; // range check & accumulate: occlusion += (sampleDepth <= samp.z ? 1.0 : 0.0); } //Average occlusion occlusion /= 8.0; return min(occlusion, 1.0f); } I'm fairly sure my matrices are correct (view and projection) and that the input rays are correct. I don't think the non-linear depth is the problem here either, but what do I know I haven't fixed the linear depth mostly because I don't really understand how it's done... Any ideas are very appreciated!
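Since the post mentions not knowing how to linearize the depth: for a standard D3D-style perspective projection, the non-linear depth-buffer value can be converted back to linear view-space depth with the closed form below. This is a minimal C++ sketch under that assumption (the near/far values in the comment are hypothetical); the same one-liner ports directly to HLSL, where the view ray would then be scaled by the linearized depth instead of the raw buffer value.

```cpp
#include <cassert>
#include <cmath>

// Convert a non-linear depth-buffer value d in [0, 1] back to linear
// view-space depth, assuming a standard D3D perspective projection
// built from the given near and far plane distances.
float LinearizeDepth(float d, float nearZ, float farZ)
{
    return (nearZ * farZ) / (farZ - d * (farZ - nearZ));
}
```

With hypothetical planes nearZ = 0.1 and farZ = 100, d = 0 maps back to 0.1 and d = 1 maps back to 100, which is a quick way to sanity-check the projection constants.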
  21. I am currently trying to implement shadow mapping in my project. Although I can render my depth map to the screen and it looks okay, when I sample it with shadow coords there is no shadow. Here is my light-space matrix calculation:

```c
mat4x4 lightViewMatrix;
vec3 sun_pos = {SUN_OFFSET * the_sun->direction[0],
                SUN_OFFSET * the_sun->direction[1],
                SUN_OFFSET * the_sun->direction[2]};
mat4x4_look_at(lightViewMatrix, sun_pos, player->pos, up);
mat4x4_mul(lightSpaceMatrix, lightProjMatrix, lightViewMatrix);
```

I will tweak the values for the size and frustum of the shadow map, but for now I just want to draw shadows around the player position. `the_sun->direction` is a normalized vector, so I multiply it by a constant to get the position. `player->pos` is the camera position in world space. The light projection matrix is calculated like this:

```c
mat4x4_ortho(lightProjMatrix, -SHADOW_FAR, SHADOW_FAR, -SHADOW_FAR, SHADOW_FAR, NEAR, SHADOW_FAR);
```

Shadow vertex shader:

```glsl
uniform mat4 light_space_matrix;

void main()
{
    gl_Position = light_space_matrix * transfMatrix * vec4(position, 1.0f);
}
```

Shadow fragment shader:

```glsl
out float fragDepth;

void main()
{
    fragDepth = gl_FragCoord.z;
}
```

I am using deferred rendering, so I have all my world positions in the g_positions buffer. My shadow calculation in the deferred fragment shader:

```glsl
float get_shadow_fac(vec4 light_space_pos)
{
    vec3 shadow_coords = light_space_pos.xyz / light_space_pos.w;
    shadow_coords = shadow_coords * 0.5 + 0.5;
    float closest_depth = texture(shadow_map, shadow_coords.xy).r;
    float current_depth = shadow_coords.z;
    float shadow_fac = 1.0;
    if (closest_depth < current_depth)
        shadow_fac = 0.5;
    return shadow_fac;
}
```

I call the function like this:

```glsl
get_shadow_fac(light_space_matrix * vec4(position, 1.0));
```

where `position` is the value I got from sampling the g_position buffer. Here is my depth texture (I know it will produce low-quality shadows, but I just want to get it working for now). Sorry about the compression; the black smudges are trees: https://i.stack.imgur.com/T43aK.jpg

EDIT: Depth texture attachment:

```c
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fbo->width, fbo->height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo->depthTexture, 0);
```
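The depth comparison in `get_shadow_fac` can be checked away from the GPU by reproducing the same math in plain C++. This is a minimal sketch with a hypothetical `Vec4` type, not the poster's actual pipeline; for an orthographic light projection w is 1, so the divide is a no-op but is kept for generality.

```cpp
#include <cassert>

// Stand-in for a clip-space position (hypothetical type).
struct Vec4 { float x, y, z, w; };

// Mirrors the GLSL shadow test: perspective divide, remap depth from
// [-1, 1] to [0, 1], then darken if the shadow map recorded something
// nearer to the light than this fragment.
float ShadowFactor(Vec4 lightSpacePos, float closestDepth)
{
    float z = lightSpacePos.z / lightSpacePos.w;
    float currentDepth = z * 0.5f + 0.5f;
    return (closestDepth < currentDepth) ? 0.5f : 1.0f;
}
```

Feeding in a fragment depth of 0.5 (NDC) against a stored map depth of 0.6 yields the shadowed factor 0.5, which is a quick way to confirm the remap direction before suspecting the texture setup.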
  22. Hello, in their paper "Fast, Minimum Storage Ray/Triangle Intersection" (Möller & Trumbore), the algorithm is presented with two slight variations: one that can be used to cull back-facing triangles, postponing the division operation, and another that cannot cull and works for both kinds of triangles. In my ray tracer I have to use the one that postpones the division (intended for front-facing triangles) to calculate the intersection position, and the one that does not postpone it (intended for two-sided triangles) to calculate the shadows. In that case I get a perfect image, as can be seen at this link: https://postimg.org/image/d8rrpu0fv/ Now if I invert the order (non-postponed division to determine triangle positions, postponed for shadows), I get no shadows and this light-blue strip region: https://postimg.org/image/6ibagdq4r/ If I use the postponed, front-face version in both cases, I get a lack of shadows: https://postimg.org/image/wqmf5rpnv/ And if I use the non-postponed, back-face version on both occasions, I get the light-blue strip: https://postimg.org/image/r2g4euy63/ Here is the code for the one that postpones the division operation, followed by the one that doesn't.

```cpp
Vector edge1 = s.vert2 - s.vert1;
Vector edge2 = s.vert3 - s.vert1;

Vector s1;
s1.x = ray.dir.y * edge2.z - ray.dir.z * edge2.y;
s1.y = ray.dir.z * edge2.x - ray.dir.x * edge2.z;
s1.z = ray.dir.x * edge2.y - ray.dir.y * edge2.x;

float det = edge1 * s1;
if (det < 0.000001)
    return false;

Vector distance = ray.origin - s.vert1;
float barycCoord_1 = distance * s1;
if (barycCoord_1 < 0.0 || barycCoord_1 > det)
    return false;

Vector s2;
s2.x = distance.y * edge1.z - distance.z * edge1.y;
s2.y = distance.z * edge1.x - distance.x * edge1.z;
s2.z = distance.x * edge1.y - distance.y * ed1.x;

float barycCoord_2 = ray.dir * s2;
if (barycCoord_2 < 0.0 || (barycCoord_1 + barycCoord_2) > det)
    return false;

float intersection = edge2 * s2;
float invDet = 1 / det;
intersection *= invDet;
barycCoord_1 *= invDet;
barycCoord_2 *= invDet;

if (0.1f < intersection && intersection < t)
{
    t = intersection;
    return true;
}
return false;
```

```cpp
Vector edge1 = s.vert2 - s.vert1;
Vector edge2 = s.vert3 - s.vert1;

Vector s1;
s1.x = ray.dir.y * edge2.z - ray.dir.z * edge2.y;
s1.y = ray.dir.z * edge2.x - ray.dir.x * edge2.z;
s1.z = ray.dir.x * edge2.y - ray.dir.y * edge2.x;

float divisor = s1 * edge1;
if (divisor == 0.0)
    return false;
float invDivisor = 1 / divisor;

Vector distance = ray.origin - s.vert1;
float barycCoord_1 = distance * s1 * invDivisor;
if (barycCoord_1 < 0.0 || barycCoord_1 > 1.0)
    return false;

Vector s2;
s2.x = distance.y * edge1.z - distance.z * edge1.y;
s2.y = distance.z * edge1.x - distance.x * edge1.z;
s2.z = distance.x * edge1.y - distance.y * edge1.x;

float barycCoord_2 = ray.dir * s2 * invDivisor;
if (barycCoord_2 < 0.0 || (barycCoord_1 + barycCoord_2) > 1.0)
    return false;

float intersection = edge2 * s2 * invDivisor;
if (0.1f < intersection && intersection < t)
{
    t = intersection;
    return true;
}
return false;
```

I have no idea what is going wrong and would appreciate suggestions about the cause, or, failing that, suggestions on how to figure out what is happening. Thanks in advance.
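For comparison, here is a self-contained C++ sketch of the two-sided (up-front division) variant described above. `Vec3` and the argument layout are hypothetical stand-ins for the poster's `Vector`/`ray` types; the back-face-culling variant would instead reject `det` below a small epsilon before ever dividing.

```cpp
#include <cassert>
#include <cmath>

// Minimal vector helpers (hypothetical stand-in types).
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Two-sided Möller–Trumbore: returns true and writes t on a hit with
// tMin < t < tMax. Near-zero determinants (ray parallel to the
// triangle's plane) are rejected regardless of sign.
bool IntersectTriangle(Vec3 orig, Vec3 dir,
                       Vec3 v0, Vec3 v1, Vec3 v2,
                       float tMin, float tMax, float& t)
{
    Vec3 edge1 = sub(v1, v0);
    Vec3 edge2 = sub(v2, v0);
    Vec3 pvec  = cross(dir, edge2);
    float det  = dot(edge1, pvec);
    if (std::fabs(det) < 1e-8f) return false;
    float invDet = 1.0f / det;

    Vec3 tvec = sub(orig, v0);
    float u = dot(tvec, pvec) * invDet;          // first barycentric coord
    if (u < 0.0f || u > 1.0f) return false;

    Vec3 qvec = cross(tvec, edge1);
    float v = dot(dir, qvec) * invDet;           // second barycentric coord
    if (v < 0.0f || u + v > 1.0f) return false;

    float hit = dot(edge2, qvec) * invDet;       // ray parameter
    if (hit <= tMin || hit >= tMax) return false;
    t = hit;
    return true;
}
```

A ray from (0, 0, -1) along +z through the triangle {(-1, -1, 0), (1, -1, 0), (0, 1, 0)} hits at t = 1, which makes a handy unit test when swapping variants.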
  23.
```hlsl
        // Sample the shadow map depth value from the depth texture using the
        // sampler at the projected texture coordinate location.
        depthValue = shaderTextures[6 + i].Sample(SampleTypeClamp, projectTexCoord).r;

        // Calculate the depth of the light.
        lightDepthValue = input.lightViewPositions[i].z / input.lightViewPositions[i].w;

        // Subtract the bias from the lightDepthValue.
        lightDepthValue = lightDepthValue - bias;

        // Compare the depth of the shadow map value and the depth of the light
        // to determine whether to shadow or to light this pixel. If the light is
        // in front of the object then light the pixel; if not, shadow this pixel
        // since an object (occluder) is casting a shadow on it.
        if (lightDepthValue < depthValue)
        {
            // Calculate the amount of light on this pixel.
            //lightIntensity = saturate(dot(input.normal, input.lightPositions));
            lightIntensity = saturate(dot(input.normal, normalize(input.lightPositions[i])));

            if (lightIntensity > 0.0f)
            {
                // Determine the final diffuse color based on the diffuse color
                // and the amount of light intensity.
                color += (diffuseCols[i] * lightIntensity * 0.25f);
            }
        }
        else // shadow falloff here
        {
            float4 shadowcol = (1, 1, 1, 1);
            float shadowintensity = saturate(length(input.lightpositions[i]) * 0.038);
            color += shadowcol * shadowintensity * shadowintensity * shadowintensity;
        }
    }
}

// Saturate the final light color.
color = saturate(color);
```

Hi, I want to add a falloff to the shadows in this pixel shader. This should be really straightforward: just get the distance between the light position and the vertex position, and multiply it by the light intensity at the pixel being shadowed, so the light intensity increases and the shadow fades away towards the edges. As you can see, I get the light position from the input (it comes from the vertex shader, and was calculated as worldLightPosition - worldVertexPosition inside the vertex shader, so taking its length gives the distance between the light and the pixel). I multiplied it by 0.038, an arbitrary number, to scale it down, because it needs to be between 0 and 1 before multiplying it by the shadow color (1, 1, 1, 1) to give a gradient. However, this does absolutely nothing, and I can't tell where it's failing. Please look at the attached files to see the full code of the vertex and pixel shaders. Any advice would be very welcome, thanks! Light_ps.hlsl Light_vs.hlsl
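The falloff curve the post intends can be checked in isolation. Below is a minimal C++ sketch of that curve (the 0.038 scale is the post's own arbitrary constant, and the function name is illustrative); plotting or asserting a few values confirms whether the math, as opposed to the shader plumbing, behaves as expected.

```cpp
#include <cassert>
#include <cmath>

// Distance-based shadow falloff as described in the post: scale the
// light-to-pixel distance into [0, 1], clamp (saturate), and cube it so
// shadowed pixels brighten toward the shadow's edge.
float ShadowFalloff(float lightToPixelDistance)
{
    float s = lightToPixelDistance * 0.038f;
    if (s > 1.0f) s = 1.0f;   // saturate
    return s * s * s;         // cubic ramp toward full brightness
}
```

At distance 0 the contribution is 0 (fully shadowed), and by distance ~26 it saturates to 1, so if the shader shows no gradient at all, the problem is likely in the inputs reaching this expression rather than the curve itself.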
  24. Hello, I'd like your help understanding the difference between the technical artist and graphics programmer roles. I'm very interested in art, maths, and programming, which is why I started studying computer graphics (before I knew the technical artist role existed), but now I'm confused about the key similarities and differences between the two. Can the positions overlap, perhaps working on the same set of tasks/problems? What are the responsibilities of each? What skill set should one have to work in either of them? Thanks for your time. Regards.
  25. In computer graphics there are two important spatial data structures, the BSP tree and the octree. I want to use them for fast collision detection. I mainly use them to get the visible objects in a limited space from the whole scene, to simulate physical motion, or to implement a ray-traced renderer. Some people have told me BSP is faster, and others said the octree is faster. Who should I believe? Thanks
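As a point of reference for how an octree partitions space: each node's eight children split its cube at the centre, and a point maps to exactly one child by comparing coordinates. This is a minimal illustrative sketch (names are not from any particular engine), not a full octree implementation.

```cpp
#include <cassert>

// Hypothetical point type for the sketch.
struct Point { float x, y, z; };

// Returns the child octant index in [0, 7]:
// bit 0 set = +x half, bit 1 set = +y half, bit 2 set = +z half.
int OctantIndex(Point p, Point centre)
{
    int idx = 0;
    if (p.x >= centre.x) idx |= 1;
    if (p.y >= centre.y) idx |= 2;
    if (p.z >= centre.z) idx |= 4;
    return idx;
}
```

Because the split planes are axis-aligned and fixed, descending an octree costs three comparisons per level, whereas a BSP tree's arbitrary split planes cost a plane-distance evaluation per level but can adapt to the geometry; which is faster depends on the scene and the query.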