Search the Community

Showing results for tags '3D' in content posted in Graphics and GPU Programming.

Found 221 results

  1. Hello. State-based render architectures have well-known problems, such as state leakage and naively re-setting states on every draw call, and many sources recommend a stateless rendering architecture instead, which makes sense for DX12 since it binds a single PSO object. See, for example: Designing a Modern GPU Interface; stateless-layered-multi-threaded-rendering-part-2-stateless-api-design; and the Firaxis Lore System: CIV V talk. Doesn't this cause the same problem, though? You pass all the state settings inside a DrawCommand object so they can be set during the draw call; yes, you hide the state machine by not exposing the functions directly, but you are just deferring the state changes to the command queue. You can sort by a key using this method: Real Time Collision: Draw Call Key. But that means each DrawCommand carries and stores the entire PSO description (i.e. the states you want), just so you can sort by key and let the first object in a group bind its PSO for the rest of the group to use. That seems like a lot of wasted memory, passing the whole PSO in just to use one, although it does avoid the cost of swapping PSOs for every single object. How are you handling state changes? Am I missing some critical piece of information about stateless rendering? (Note: I am aiming for the stateless approach for DX12, I just want some opinions on it.) Thanks.
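     A minimal sketch of the approach usually suggested for this, assuming DX12 and hypothetical names (DrawCommand, psoCache): commands carry only a small handle to a PSO that lives once in a cache, and the sort key embeds that handle, so state is set only when the handle changes across the sorted list rather than being stored per command.

        #include <algorithm>
        #include <cstdint>
        #include <vector>
        #include <d3d12.h>

        // Hypothetical compact draw packet: the full PSO description lives once in a cache,
        // each command only stores a small handle to it plus a sort key.
        struct DrawCommand {
            uint64_t sortKey;               // e.g. layer | psoId | depth packed into 64 bits
            uint32_t psoId;                 // index into the PSO cache
            uint32_t vbId, ibId;            // resource handles, same idea
            uint32_t indexCount, startIndex;
        };

        void Submit(ID3D12GraphicsCommandList* cl,
                    std::vector<DrawCommand>& commands,
                    const std::vector<ID3D12PipelineState*>& psoCache)
        {
            std::sort(commands.begin(), commands.end(),
                      [](const DrawCommand& a, const DrawCommand& b) { return a.sortKey < b.sortKey; });

            uint32_t lastPso = UINT32_MAX;
            for (const DrawCommand& cmd : commands) {
                if (cmd.psoId != lastPso) {                       // redundant-state filtering happens here,
                    cl->SetPipelineState(psoCache[cmd.psoId]);    // not per draw
                    lastPso = cmd.psoId;
                }
                // ... set vertex/index buffers from vbId/ibId, then:
                cl->DrawIndexedInstanced(cmd.indexCount, 1, cmd.startIndex, 0, 0);
            }
        }

     The memory cost per command is then a few dozen bytes regardless of how large the PSO is; the state machine is still there, but deduplicated at submit time instead of being carried around in every packet.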
  2. I made this post on Reddit. I need ideas and information on how to create the ground mesh for my specifications.
  3. Hi. I've been working on a game engine for years and I've recently come back to it after a couple of years' break. Because my engine uses DirectX 9.0c, I thought it might be a good idea to upgrade it to DX11. I then installed Windows 10 and started tinkering with the engine, trying to refamiliarise myself with all the code. It all seems to work fine on the new OS, but I've noticed something that causes a massive slowdown in frame rate. My engine has a relatively sophisticated terrain system which includes the ability to paint roads onto it, à la CryEngine. The roads are spline curves, built up from polygons matching the terrain surface. It used to work perfectly, but now, when I'm dynamically adding roads, which involves moving the spline curve control points around the surface of the terrain, the frame rate grinds to a halt. There's some relatively complex processing going on each time the mouse moves: the road on either side of the control point(s) being moved is reconstructed in real time so you can position and bend the road precisely. On my previous OS (Win2k Pro) this worked really smoothly, and in release mode there was barely any drop in frame rate, but now it's unusable. As part of the road reconstruction I lock the vertex and index buffers and refill them with the new values, so my question is: on Windows 10 using DX9, is anyone aware of any locking issues? I'm aware that there can be contention when locking buffers dynamically, but I'm locking with D3DLOCK_DISCARD and this has never been an issue before. Any help would be greatly appreciated.
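     For reference, a sketch of the lock pattern that should stay fast on a dynamic D3D9 buffer, assuming the buffer was created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT (RefillRoadVB is a made-up name):

        #include <d3d9.h>
        #include <cstring>

        // Full refill: D3DLOCK_DISCARD hands back a fresh region so the GPU never stalls
        // waiting for the copy it is still reading. Use D3DLOCK_NOOVERWRITE when only appending.
        HRESULT RefillRoadVB(IDirect3DVertexBuffer9* vb, const void* verts, UINT bytes)
        {
            void* dst = nullptr;
            HRESULT hr = vb->Lock(0, bytes, &dst, D3DLOCK_DISCARD);
            if (FAILED(hr))
                return hr;
            std::memcpy(dst, verts, bytes);
            return vb->Unlock();
        }

     It may also be worth confirming with a profiler whether the time is actually going into Lock at all, or into the per-mouse-move road rebuild itself; limiting the rebuild to once per frame (coalescing all mouse moves since the last frame) is a cheap experiment.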
  4. Hi guys, I'm trying to learn this stuff but I'm running into some problems 😕 I've compiled my .hlsl into a header file which contains a global variable with the precompiled shader data:

        //...
        // Approximately 83 instruction slots used
        #endif

        const BYTE g_vs[] =
        {
             68,  88,  66,  67, 143,  82,  13, 236, 152, 133, 219, 113, 173, 135,  18,  87,
            122, 208, 124,  76,   1,   0,   0,   0,  16,  76,   0,   0,   6,   0,
            //....

     Now, following the "Compiling at build time to header files" example at this MSDN link, I've included the header file in my main.cpp and I'm trying to create the vertex shader like this:

        hr = g_d3dDevice->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &g_d3dVertexShader);
        if (FAILED(hr))
        {
            return -1;
        }

     This is failing, entering the if and returning -1. Can someone point out what I'm doing wrong? 😕
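     A sketch of the usual way to find out why a D3D11 call like this fails: create the device with the debug layer in debug builds and read the message it prints (variable names below are assumptions). Common causes for CreateVertexShader rejecting a build-time-compiled blob are a target profile (e.g. vs_5_0) higher than the device's feature level, or passing a blob compiled for a different shader stage.

        #include <d3d11.h>

        UINT flags = 0;
        #if defined(_DEBUG)
        flags |= D3D11_CREATE_DEVICE_DEBUG;   // failures are then explained in the debug output window
        #endif

        ID3D11Device*        device  = nullptr;
        ID3D11DeviceContext* context = nullptr;
        D3D_FEATURE_LEVEL    fl;
        HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, flags,
                                       nullptr, 0, D3D11_SDK_VERSION, &device, &fl, &context);

        ID3D11VertexShader* vs = nullptr;
        hr = device->CreateVertexShader(g_vs, sizeof(g_vs), nullptr, &vs);
        if (FAILED(hr))
        {
            // With the debug layer enabled, the exact reason (wrong shader stage, shader model
            // not supported by this feature level, corrupted blob, ...) is logged here.
        }

     Printing the HRESULT itself (E_INVALIDARG vs. a DXGI error) also narrows things down quickly.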
  5. I have a problem with SSAO: there is a black area on the left-hand side of the screen. Shader code:

        Texture2D<uint>   texGBufferNormal : register(t0);
        Texture2D<float>  texGBufferDepth  : register(t1);
        Texture2D<float4> texSSAONoise     : register(t2);

        float3 GetUV(float3 position)
        {
            float4 vp = mul(float4(position, 1.0), ViewProject);
            vp.xy = float2(0.5, 0.5) + float2(0.5, -0.5) * vp.xy / vp.w;
            return float3(vp.xy, vp.z / vp.w);
        }

        float3 GetNormal(in Texture2D<uint> texNormal, in int3 coord)
        {
            return normalize(2.0 * UnpackNormalSphermap(texNormal.Load(coord)) - 1.0);
        }

        float3 GetPosition(in Texture2D<float> texDepth, in int3 coord)
        {
            float4 position = 1.0;
            float2 size;
            texDepth.GetDimensions(size.x, size.y);
            position.x = 2.0 * (coord.x / size.x) - 1.0;
            position.y = -(2.0 * (coord.y / size.y) - 1.0);
            position.z = texDepth.Load(coord);
            position = mul(position, ViewProjectInverse);
            position /= position.w;
            return position.xyz;
        }

        float3 GetPosition(in float2 coord, float depth)
        {
            float4 position = 1.0;
            position.x = 2.0 * coord.x - 1.0;
            position.y = -(2.0 * coord.y - 1.0);
            position.z = depth;
            position = mul(position, ViewProjectInverse);
            position /= position.w;
            return position.xyz;
        }

        float DepthInvSqrt(float nonLinearDepth)
        {
            return 1 / sqrt(1.0 - nonLinearDepth);
        }

        float GetDepth(in Texture2D<float> texDepth, float2 uv)
        {
            return texGBufferDepth.Sample(samplerPoint, uv);
        }

        float GetDepth(in Texture2D<float> texDepth, int3 screenPos)
        {
            return texGBufferDepth.Load(screenPos);
        }

        float CalculateOcclusion(in float3 position, in float3 direction, in float radius, in float pixelDepth)
        {
            float3 uv = GetUV(position + radius * direction);
            float d1 = DepthInvSqrt(GetDepth(texGBufferDepth, uv.xy));
            float d2 = DepthInvSqrt(uv.z);
            return step(d1 - d2, 0) * min(1.0, radius / abs(d2 - pixelDepth));
        }

        float GetRNDTexFactor(float2 texSize)
        {
            float width;
            float height;
            texGBufferDepth.GetDimensions(width, height);
            return float2(width, height) / texSize;
        }

        float main(FullScreenPSIn input) : SV_TARGET0
        {
            int3 screenPos = int3(input.Position.xy, 0);

            float depth = DepthInvSqrt(GetDepth(texGBufferDepth, screenPos));
            float3 normal = GetNormal(texGBufferNormal, screenPos);
            float3 position = GetPosition(texGBufferDepth, screenPos) + normal * SSAO_NORMAL_BIAS;
            float3 random = normalize(2.0 * texSSAONoise.Sample(samplerNoise, input.Texcoord * GetRNDTexFactor(SSAO_RND_TEX_SIZE)).rgb - 1.0);

            float SSAO = 0;
            [unroll]
            for (int index = 0; index < SSAO_KERNEL_SIZE; index++)
            {
                float3 dir = reflect(SamplesKernel[index].xyz, random);
                SSAO += CalculateOcclusion(position, dir * sign(dot(dir, normal)), SSAO_RADIUS, depth);
            }

            return 1.0 - SSAO / SSAO_KERNEL_SIZE;
        }
  6. Hi all, now that I've managed to get my normal mapping working, I'm at the point of making a "clean" implementation. Do you know if there's a general consensus on the following two things: 1. Normal mapping in world space vs. tangent space (multiply by T, B and N in the PS versus converting the light direction to tangent space in the VS and passing it to the PS). 2. Providing tangents and bitangents to the GPU in the vertex buffer versus calculating them in the shader. What I've learned so far is that doing it in tangent space is preferred (convert the light direction in the VS and pass that to the PS), combined with calculating the bitangent (and perhaps also the tangent) per vertex in the VS. I know the answer could be "profile it", but that's not something I'm doing at this stage; I just want to decide on my first approach. Any input is appreciated.
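     If you do end up storing tangents in the vertex buffer, here is a minimal CPU-side sketch of the standard per-triangle accumulation (glm types; the Vertex layout and member names are assumptions):

        #include <glm/glm.hpp>
        #include <vector>

        struct Vertex { glm::vec3 pos, normal, tangent; glm::vec2 uv; float handedness; };

        void ComputeTangents(std::vector<Vertex>& verts, const std::vector<unsigned>& indices)
        {
            std::vector<glm::vec3> tanAcc(verts.size(), glm::vec3(0)), bitanAcc(verts.size(), glm::vec3(0));
            for (size_t i = 0; i < indices.size(); i += 3) {
                const Vertex& v0 = verts[indices[i]];
                const Vertex& v1 = verts[indices[i + 1]];
                const Vertex& v2 = verts[indices[i + 2]];
                glm::vec3 e1 = v1.pos - v0.pos, e2 = v2.pos - v0.pos;
                glm::vec2 d1 = v1.uv  - v0.uv,  d2 = v2.uv  - v0.uv;
                float r = 1.0f / (d1.x * d2.y - d2.x * d1.y);        // guard against 0 in real code
                glm::vec3 t = (e1 * d2.y - e2 * d1.y) * r;
                glm::vec3 b = (e2 * d1.x - e1 * d2.x) * r;
                for (int k = 0; k < 3; ++k) { tanAcc[indices[i + k]] += t; bitanAcc[indices[i + k]] += b; }
            }
            for (size_t i = 0; i < verts.size(); ++i) {
                const glm::vec3& n = verts[i].normal;
                const glm::vec3& t = tanAcc[i];
                verts[i].tangent    = glm::normalize(t - n * glm::dot(n, t));  // Gram-Schmidt against N
                verts[i].handedness = (glm::dot(glm::cross(n, t), bitanAcc[i]) < 0.0f) ? -1.0f : 1.0f;
            }
        }

     The bitangent can then be rebuilt in the vertex shader as cross(N, T) * handedness, which keeps the vertex small while avoiding per-pixel derivative reconstruction.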
  7. Hello everyone, after a few years' break from coding and my planet-rendering game, I'm giving it a go again from a different angle. What I'm struggling with now: I have created a frustum that works fine for now, at least; it does what it's supposed to do, although not perfectly. But with the frustum came very low FPS, because what I'm doing right now, just to see whether the frustum works, is recreate the vertex buffer every frame in which the camera detects movement. This is of course very costly and not the way to do it. That's why I'm now trying to learn how to create a dynamic vertex buffer instead, and to map and unmap the vertices; in the end my goal is to update only the part of the vertex buffer that is needed, but one step at a time ^^ Below is the code I use to create the dynamic buffer. The issue is that I want the vertex buffer to be big enough to handle buffers larger than just mPlanetMesh.vertices.size(), because more vertices will be added later when I start doing LOD and such; the first render isn't the biggest one I will need.

        vertexBufferDesc.Usage = D3D11_USAGE_DYNAMIC;
        vertexBufferDesc.ByteWidth = mPlanetMesh.vertices.size();
        vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexBufferDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        vertexBufferDesc.MiscFlags = 0;
        vertexBufferDesc.StructureByteStride = 0;

        vertexData.pSysMem = &mPlanetMesh.vertices[0];
        vertexData.SysMemPitch = 0;
        vertexData.SysMemSlicePitch = 0;

        result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer);
        if (FAILED(result))
        {
            return false;
        }

     What happens is that the device->CreateBuffer(&vertexBufferDesc, &vertexData, &mVertexBuffer) call crashes with an access violation. When I use vertices.size() it works without issues, but when I try to set it to something like vertices.size() * 2 it crashes. I googled my eyes dry tonight but can't seem to find people with the same kind of issue; I've read that the vertex buffer can be bigger if needed. What am I doing wrong here? Best regards and thanks in advance, Toastmastern
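     Two things stand out in that snippet: ByteWidth is a size in bytes, not a vertex count, and when initial data is supplied, CreateBuffer reads exactly ByteWidth bytes from pSysMem, so any ByteWidth larger than the actual array (in bytes) reads past the end of the vector, which is a plausible source of the access violation. A sketch of the oversized-buffer route, creating it empty and filling it with Map/Unmap (VertexType and maxVertexCount are assumptions):

        D3D11_BUFFER_DESC desc = {};
        desc.Usage          = D3D11_USAGE_DYNAMIC;
        desc.ByteWidth      = UINT(maxVertexCount * sizeof(VertexType));  // bytes, with headroom for LOD
        desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
        desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;

        // No initial data: nothing is read from system memory at creation time.
        HRESULT hr = device->CreateBuffer(&desc, nullptr, &mVertexBuffer);
        if (FAILED(hr)) return false;

        // Per update: discard and rewrite however many vertices currently exist.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        hr = context->Map(mVertexBuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        if (FAILED(hr)) return false;
        memcpy(mapped.pData, mPlanetMesh.vertices.data(),
               mPlanetMesh.vertices.size() * sizeof(VertexType));
        context->Unmap(mVertexBuffer, 0);

     The draw call then only uses the vertex count actually written, so the unused tail of the buffer is harmless.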
  8. I am implementing baking in our engine, and I've hit a problem with how to assign each object its UVs in the lightmap atlas. I am using UVAtlas to generate the lightmap UVs; most unwrapped meshes end up with a UV range of [0, 1), no matter how big they are. I want them to have the same UV (texel) density so that they pack well in the lightmap atlas. I have tried thekla_atlas to do the same thing, but it seems that it cannot unwrap UVs according to mesh size either. As far as I can see, unwrapping UV coordinates from world space would solve this, since all meshes would share the same scale, but I'd rather not spend a lot of time writing and debugging that code. I am wondering whether there are methods I don't know of that can scale each lightmap UV chart to the same density. Thanks in advance. : )
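     A sketch of the usual post-process when the unwrapper insists on normalizing every chart to [0, 1): rescale each mesh's lightmap UVs by the ratio of its world-space area to its UV-space area, times a global texels-per-world-unit constant, and only then hand the charts to the packer (the Mesh/Triangle types and texelsPerUnit are assumptions):

        #include <glm/glm.hpp>
        #include <cmath>
        #include <vector>

        struct Triangle { glm::vec3 p0, p1, p2; glm::vec2 uv0, uv1, uv2; };
        struct Mesh     { std::vector<Triangle> triangles; };

        // Scale factor that gives every mesh the same lightmap texel density.
        float LightmapUVScale(const Mesh& mesh, float texelsPerUnit)
        {
            double worldArea = 0.0, uvArea = 0.0;
            for (const Triangle& t : mesh.triangles) {
                worldArea += 0.5 * glm::length(glm::cross(t.p1 - t.p0, t.p2 - t.p0));
                glm::vec2 e1 = t.uv1 - t.uv0, e2 = t.uv2 - t.uv0;
                uvArea    += 0.5 * std::abs(e1.x * e2.y - e1.y * e2.x);
            }
            if (uvArea <= 0.0) return 1.0f;
            // sqrt because area scales with the square of a linear UV scale.
            return float(std::sqrt(worldArea / uvArea)) * texelsPerUnit;
        }

     Multiplying every lightmap UV of the mesh by this factor takes the chart out of [0, 1), which is fine: the packer normalizes everything to the atlas afterwards, and all charts now share one density.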
  9. I have been working on cleaning up my old linear algebra code after getting some actual programming experience (in audio...), and I have a question about best practices: if a matrix is not invertible, what should the function Mat3 inverse( Mat3 const& a ) do? I know that MOST matrices are invertible, but I don't want things to explode on the off chance things go sideways. I see five possibilities (I'm using the adjugate matrix divided by the determinant to form the inverse):

     1. Rely on the calling context to check whether the matrix is invertible. (Yeah, right.)
     2. Don't check the determinant and let the floating point exception happen. (Eww.)
     3. Assert that the determinant isn't zero (though I haven't written assertions yet).
     4. Throw an exception.
     5. Return some matrix which isn't the inverse, such as the matrix 'a' itself or a null matrix of all zeroes, which can be checked; in the case of returning 'a', you could check it against itself for equality. (Bugs?)

     I'm leaning towards an assertion for development, possibly backed by throwing an exception in 'production' code (who am I kidding, none of this will ever go live :P) so that it is recoverable, or at least loggable.
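     For what it's worth, a sixth option that keeps the decision at the call site without exceptions or sentinel matrices is returning std::optional (a sketch; determinant() and adjugate() are assumed helpers on your Mat3):

        #include <cmath>
        #include <optional>

        std::optional<Mat3> inverse(Mat3 const& a)
        {
            float det = a.determinant();
            if (std::fabs(det) < 1e-8f)      // tolerance is a judgment call; scale it to your data
                return std::nullopt;
            return a.adjugate() * (1.0f / det);
        }

        // Caller:
        // if (auto inv = inverse(m)) use(*inv); else handleSingular();

     An assert can still live inside the function for development builds; the optional just makes the failure impossible to ignore silently in release.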
  10. The traditional way needs 6 passes for a point light, and many passes for cascaded shadow mapping, to generate the shadow maps. Recently I learnt a method that uses a geometry shader to generate all the shadow maps in one pass: I bind a render target and a depth-stencil buffer that are both Texture2DArrays in DirectX 11. It looks much nicer than the traditional way, I think, but after I implemented it I found that cascaded shadow mapping runs much slower than the traditional way; the FPS drops from 60 to 35 and I don't know why. I guess maybe I should do some culling, or maybe the geometry shader is just not efficient. I want to understand why reducing the draw calls from 8 to 1 makes it run slower. Should I abandon this method, or is there any way to optimize it so that it runs faster than multi-pass rendering? Here is the GS code:

        [maxvertexcount(24)]
        void main(triangle DepthGsIn input[3] : SV_POSITION, inout TriangleStream<DepthPsIn> output)
        {
            for (uint k = 0; k < 8; ++k)
            {
                DepthPsIn element;
                element.RTIndex = k;
                for (uint i = 0; i < 3; ++i)
                {
                    float2 shadowSlopeBias = calculateShadowSlopeBias(input[i].normal, -g_cameras[k].world[1]);
                    float shadowBias = shadowSlopeBias.y * g_cameras[k].shadowMapParameters.x + g_cameras[k].shadowMapParameters.y;
                    element.position = input[i].position + shadowBias * g_cameras[k].world[1];
                    element.position = mul(element.position, g_cameras[k].viewProjection);
                    element.depth = element.position.z / element.position.w;
                    output.Append(element);
                }
                output.RestartStrip();
            }
        }
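     One likely reason for the slowdown is that the single-pass GS amplifies every triangle 8 times even for casters that only touch one cascade or face, while the multi-pass version at least benefits from per-pass frustum culling. A CPU-side sketch of restoring that culling (assumed types and inward-pointing planes):

        #include <glm/glm.hpp>
        #include <cstdint>

        struct Frustum { glm::vec4 planes[6]; };          // xyz = inward normal, w = offset

        // Conservative AABB vs. frustum test (center/extents form).
        bool Intersects(const Frustum& f, const glm::vec3& center, const glm::vec3& extents)
        {
            for (const glm::vec4& p : f.planes) {
                float d = glm::dot(glm::vec3(p), center) + p.w;
                float r = glm::dot(glm::abs(glm::vec3(p)), extents);
                if (d + r < 0.0f) return false;           // completely behind one plane
            }
            return true;
        }

        // Build an 8-bit mask of which cascades/faces a caster touches; skip the draw entirely
        // when the mask is zero, or pass it to the shader so the GS only emits where it matters.
        uint8_t CascadeMask(const Frustum frusta[8], const glm::vec3& c, const glm::vec3& e)
        {
            uint8_t mask = 0;
            for (int k = 0; k < 8; ++k)
                if (Intersects(frusta[k], c, e)) mask |= uint8_t(1u << k);
            return mask;
        }

     GS amplification is also simply slow on some GPUs; a common alternative worth measuring is one instanced draw per cascade (instance index selects the cascade matrix) rather than a GS loop, which keeps the draw-call count low without the amplification cost.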
  11. Hi, I am teaching myself graphics and OO programming and came upon this: my Window class creates an input handler instance, the GLFW user pointer is redirected to that object, and methods there do the input handling for keyboard and mouse. That works. Now, as part of the input handling, I have an orbiting camera that is controlled by mouse movement. GLFW_CURSOR_DISABLED is set as proposed in the GLFW manual. The manual says that in this case the cursor is automagically reset to the window's center, but if I don't reset it manually with glfwSetCursorPos( center ), the mouse values seem to add up until the scene locks up. Here are some code snippets, mostly standard from tutorials:

        // EventHandler
        m_eventHandler = new EventHandler( this, glm::vec3( 0.0f, 5.0f, 0.0f ), glm::vec3( 0.0f, 1.0f, 0.0f ) );
        glfwSetWindowUserPointer( m_window, m_eventHandler );
        m_eventHandler->setCallbacks();

     Creation of the input handler during window creation. For now, the camera is part of the input handler, hence the two vectors (position, up-vector); in the future I'll move that functionality into its own class that inherits from the event handler.

        void EventHandler::setCallbacks()
        {
            glfwSetCursorPosCallback( m_window->getWindow(), cursorPosCallback );
            glfwSetKeyCallback( m_window->getWindow(), keyCallback );
            glfwSetScrollCallback( m_window->getWindow(), scrollCallback );
            glfwSetMouseButtonCallback( m_window->getWindow(), mouseButtonCallback );
        }

     Setting the callbacks in the input handler.

        // static
        void EventHandler::cursorPosCallback( GLFWwindow *w, double x, double y )
        {
            EventHandler *c = reinterpret_cast<EventHandler *>( glfwGetWindowUserPointer( w ) );
            c->onMouseMove( (float)x, (float)y );
        }

     Example of the cursor-pos callback redirection to a class method.

        // virtual
        void EventHandler::onMouseMove( float x, float y )
        {
            if( x != 0 || y != 0 )
            {
                // @todo cursor should be set automatically, according to doc
                if( m_window->isCursorDisabled() )
                    glfwSetCursorPos( m_window->getWindow(), m_center.x, m_center.y );
                // switch up/down because its more intuitive
                m_yaw += m_mouseSensitivity * ( m_center.x - x );
                m_pitch += m_mouseSensitivity * ( m_center.y - y );
                // to avoid locking
                if( m_pitch > 89.0f ) m_pitch = 89.0f;
                if( m_pitch < -89.0f ) m_pitch = -89.0f;
                // Update Front, Right and Up Vectors
                updateCameraVectors();
            }
        } // onMouseMove()

     Mouse movement processor method. The interesting part is the manual reset of the mouse position that made the thing work...

        // straight line distance between the camera and look at point, here (0,0,0)
        float distance = glm::length( m_target - m_position );
        // Calculate the camera position using the distance and angles
        float camX = distance * -std::sin( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );
        float camY = distance * -std::sin( glm::radians( m_pitch ) );
        float camZ = -distance * std::cos( glm::radians( m_yaw ) ) * std::cos( glm::radians( m_pitch ) );
        // Set the camera position and perspective vectors
        m_position = glm::vec3( camX, camY, camZ );
        m_front = glm::vec3( 0.0, 0.0, 0.0 ) - m_position;
        m_up = m_worldUp;
        m_right = glm::normalize( glm::cross( m_front, m_worldUp ) );
        glm::lookAt( m_position, m_front, m_up );

     Orbiting camera vector calculation in updateCameraVectors(). Now, to my understanding, the GLFW manual explicitly states that if the cursor is disabled it is reset to the center, but my code only works if it is reset manually, so I fear I am doing something wrong. It is not world-moving (only if there is a world to render :-)), but I am curious what I am missing. I am not a professional programmer, just a hobbyist, so it may well be that I have got something fundamentally wrong :-) Thanks for any hints.
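     With GLFW_CURSOR_DISABLED, GLFW reports a virtual, unbounded cursor position; the value it hands to the callback keeps growing rather than snapping back to the window center, so code that measures offsets from a fixed m_center accumulates, which is exactly the "adds up until it locks" symptom. The common pattern is to work with deltas against the last reported position instead of an absolute center; a sketch mirroring the members above (m_lastX, m_lastY, m_firstMouse are assumed new members):

        void EventHandler::onMouseMove( float x, float y )
        {
            if( m_firstMouse ) {                  // avoid a huge jump on the first event
                m_lastX = x; m_lastY = y;
                m_firstMouse = false;
            }
            float dx = x - m_lastX;
            float dy = y - m_lastY;
            m_lastX = x; m_lastY = y;

            m_yaw   -= m_mouseSensitivity * dx;   // same sign convention as the original code
            m_pitch -= m_mouseSensitivity * dy;
            if( m_pitch >  89.0f ) m_pitch =  89.0f;
            if( m_pitch < -89.0f ) m_pitch = -89.0f;
            updateCameraVectors();
        }

     No glfwSetCursorPos call is needed in this mode; glfwSetInputMode( window, GLFW_CURSOR, GLFW_CURSOR_DISABLED ) plus delta tracking is the usual mouse-look setup.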
  12. As the title says: what's the go-to method for reducing the number of pixels processed for every light? Tiles don't seem to work well with shadow-casting lights, so I would still need to render those individually. Basically I have a choice between rendering a screen-space quad or a 3D volume. The quad has to be calculated "manually", which is simple enough for point lights (though you still have to be careful if the light is behind the camera), but for spot lights the calculations are much more complex and result in a lot of "wasted" pixels. Another approach would be to render, say, a box around a point light. But which faces should be rendered? If I only render front faces, it will not work when the box intersects the near plane (i.e. the light is behind the camera). If back faces are rendered, it will fail at the far plane in the same manner. If both are rendered, then "normal" lights will affect every pixel twice. Is there a good solution for this, aside from using stencils?
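     The common stencil-free answer is: render only the back faces of the light volume, with an inverted depth test and no depth writes. A back face passes wherever scene geometry is at or in front of the volume's far side, which conservatively covers every pixel the light can touch, and it keeps working when the camera is inside the volume or the volume crosses the near plane (there are no front faces to get clipped away). A D3D11-style sketch of the states involved (names are assumptions):

        // Rasterizer: draw the inside of the volume.
        D3D11_RASTERIZER_DESC rs = {};
        rs.FillMode = D3D11_FILL_SOLID;
        rs.CullMode = D3D11_CULL_FRONT;               // keep back faces only

        // Depth: test against the scene depth, inverted, never write.
        D3D11_DEPTH_STENCIL_DESC ds = {};
        ds.DepthEnable    = TRUE;
        ds.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;
        ds.DepthFunc      = D3D11_COMPARISON_GREATER_EQUAL;

        // Blend: additive accumulation into the lighting buffer.
        D3D11_BLEND_DESC bs = {};
        bs.RenderTarget[0].BlendEnable           = TRUE;
        bs.RenderTarget[0].SrcBlend              = D3D11_BLEND_ONE;
        bs.RenderTarget[0].DestBlend             = D3D11_BLEND_ONE;
        bs.RenderTarget[0].BlendOp               = D3D11_BLEND_OP_ADD;
        bs.RenderTarget[0].SrcBlendAlpha         = D3D11_BLEND_ONE;
        bs.RenderTarget[0].DestBlendAlpha        = D3D11_BLEND_ONE;
        bs.RenderTarget[0].BlendOpAlpha          = D3D11_BLEND_OP_ADD;
        bs.RenderTarget[0].RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;

     For spot lights a cone (or a low-poly pyramid) bounds far fewer pixels than a screen-space quad. The remaining false positives, pixels whose geometry sits entirely in front of the volume, are exactly what the stencil two-pass removes if it later proves worth the extra pass.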
  13. Hi, currently I'm helping my friend with his master thesis creating 3d airplane simulation, which he will use as presentation {just part of the whole}. I've got Boening 737-800 model, which is read from obj file format and directly rendered using my simulator program. The model has many of moving parts, they move independently basing on defined balls. Here's left wing from the top to depict one element and its move logic: Each moving object {here it is aleiron_left} has two balls: _left and _right which define the horizontal line to rotate around. I created movement and rotate logic by just using translatef and rotatef, each object is moved independently within its own push and pop matrixes. Once the project runs I'm able to move the aleiron based on parameter I store in some variable, and it moves alright: First one is before move, and the second one after move. I added balls for direct reference with 3ds max model. The balls are also moved using translatef and rotatef using the same move logic, so they are "glued" to their parent part. Problem is, I need to calculate their world coordinates after move, so in the next step out of the render method I need to obtain direct x,y,z coordinates without again using translatef and rotatef. I tried to multiply some matrixes and so on, but without result and currently I've got no idea how to do that.. Fragment of code looks like this: void DrawModel(Model.IModel model) { GL.glLoadIdentity(); GL.glPushMatrix(); this.SetSceneByCamera(); foreach (Model.IModelPart part in model.ModelParts) this.DrawPart(part); GL.glPopMatrix(); } void DrawPart(Model.IModelPart part) { bool draw = true; GL.glPushMatrix(); #region get part children Model.IModelPart child_top = part.GetChild("_top"); Model.IModelPart child_left = part.GetChild("_left"); Model.IModelPart child_right = part.GetChild("_right"); Model.IModelPart child_bottom = part.GetChild("_bottom"); #endregion get part children #region by part switch (part.PartTBase) { case Model.ModelPart.PartTypeBase.ALEIRON: { #region aleiron float moveBy = 0.0f; bool selected = false; switch (part.PartT) { case Model.ModelPart.PartType.ALEIRON_L: { selected = true; moveBy = this.GetFromValDic(part); moveBy *= -1; } break; } if (selected && child_left != null && child_right != null) { GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z); GL.glRotatef(moveBy, child_left.GetCentralPoint.X - child_right.GetCentralPoint.X, child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y, child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z); GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z); } #endregion aleiron } break; } #endregion by part #region draw part if (!part.Visible) draw = false; if (draw) { GL.glBegin(GL.GL_TRIANGLES); for (int i = 0; i < part.ModelPoints.Count; i += 3) { if (part.ModelPoints[i].GetTexCoord != null) GL.glTexCoord3d((double)part.ModelPoints[i].GetTexCoord.X, (double)part.ModelPoints[i].GetTexCoord.Y, (double)part.ModelPoints[i].GetTexCoord.Z); GL.glVertex3f(part.ModelPoints[i].GetPoint.X, part.ModelPoints[i].GetPoint.Y, part.ModelPoints[i].GetPoint.Z); if (part.ModelPoints[i + 1].GetTexCoord != null) GL.glTexCoord3d((double)part.ModelPoints[i + 1].GetTexCoord.X, (double)part.ModelPoints[i + 1].GetTexCoord.Y, (double)part.ModelPoints[i + 1].GetTexCoord.Z); GL.glVertex3f(part.ModelPoints[i + 1].GetPoint.X, part.ModelPoints[i + 1].GetPoint.Y, part.ModelPoints[i + 
1].GetPoint.Z); if (part.ModelPoints[i + 2].GetTexCoord != null) GL.glTexCoord3d((double)part.ModelPoints[i + 2].GetTexCoord.X, (double)part.ModelPoints[i + 2].GetTexCoord.Y, (double)part.ModelPoints[i + 2].GetTexCoord.Z); GL.glVertex3f(part.ModelPoints[i + 2].GetPoint.X, part.ModelPoints[i + 2].GetPoint.Y, part.ModelPoints[i + 2].GetPoint.Z); } GL.glEnd(); } #endregion draw part GL.glPopMatrix(); } What happens step by step: Prepare scene, set the model camera by current variables {user can move camera using keyboard and mouse in all directions}. Take model which contains all modelParts within and draw them one by one. Each part has its own type and subtype set by part name. Draw current part independently, get its child_left and child_right -> those are the balls by name. Get moveBy value which is angle. Every part has its own variable which is modified using keyboard like "press a, increase variable x for 5.0f", which in turn will rotate aleiron up by 5.0f. Move and rotate object by its children Draw part triangles {the part has its own list of triangles that are read from obj file} When I compute the blue ball that is glued to its parent, I do exactly the same so it moves with the same manner: case Model.ModelPart.PartTypeBase.FLOW: { float moveBy = 0.0f; Model.IModelPart parent = part.GetParent(); if (parent != null) { moveBy = this.GetFromValDic(parent); if (parent.Reverse) moveBy *= -1.0f; child_left = parent.GetChild("_left"); child_right = parent.GetChild("_right"); } if (child_left != null && child_right != null) { GL.glGetFloatv(GL.GL_MODELVIEW_MATRIX, part.BeforeMove); GL.glTranslatef(child_right.GetCentralPoint.X, child_right.GetCentralPoint.Y, child_right.GetCentralPoint.Z); GL.glRotatef(moveBy, child_left.GetCentralPoint.X - child_right.GetCentralPoint.X, child_left.GetCentralPoint.Y - child_right.GetCentralPoint.Y, child_left.GetCentralPoint.Z - child_right.GetCentralPoint.Z); GL.glTranslatef(-child_right.GetCentralPoint.X, -child_right.GetCentralPoint.Y, -child_right.GetCentralPoint.Z); } } break; Before move I read the model matrix using GL.glGetFloatv(GL.GL_MODEL_MATRIX, matrix), which gives me those values {red contains current model camera}: The blue ball has its own initial central point computed, and currently it is (x, y, z): 339.6048, 15.811758, -166.209473. As far as I'm concerned, the point world coordinates never change, only its matrix is moved and rotated around some point. So the final question is, how to calculate the same point world coordinates after the object is moved and rotated? PS: to visualize the problem, I've created new small box, which I placed on the center of the blue ball after it is moved upwards. The position of the box is written by eye - I just placed it couple of times and after x try, I managed to place it correctly using similar world coordinates: First one is from the left of the box, and the second one is from behind. The box world coordinates are (x, y, z): 340.745117f, 30.0f, -157.6322f, so according to original central point of blue ball it is a bit higher and closer to the center of the wing. Simply put, I need to: take original central point of blue ball: 339.6048, 15.811758, -166.209473 after the movement and rotation of the blue ball is finished, apply some algorithm {like take something from modelview_matrix, multiply by something other} finally, after the algorithm is complete, in result I get 340.745117f, 30.0f, -157.6322f point {but now computed}, which is the central point in world matrix after movement. 
PS2: Sorry for long post, I tried to exactly explain what I'm dealing with. Thank you in advance for your help.
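     Since the ball's world position is just its original central point run through the same transform fed to glTranslatef/glRotatef, that matrix can be rebuilt on the CPU and the point multiplied directly, with no need to read anything back from OpenGL. A C++/glm sketch of the idea, mirroring the translate-rotate-translate order used in the drawing code above (variable names are assumptions):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // pivot    = central point of the "_right" ball (the rotation origin)
        // axis     = "_left" central point - "_right" central point (the hinge axis)
        // angleDeg = the same moveBy angle passed to glRotatef
        // localPos = the blue ball's original central point (339.6048, 15.811758, -166.209473)
        glm::vec3 WorldAfterHinge(const glm::vec3& pivot, const glm::vec3& axis,
                                  float angleDeg, const glm::vec3& localPos)
        {
            glm::mat4 M(1.0f);
            M = glm::translate(M, pivot);                                     // same order as the GL calls:
            M = glm::rotate(M, glm::radians(angleDeg), glm::normalize(axis)); // T(pivot) * R * T(-pivot)
            M = glm::translate(M, -pivot);
            return glm::vec3(M * glm::vec4(localPos, 1.0f));
        }

     The fixed-function alternative is to read GL_MODELVIEW_MATRIX right after issuing the part's translate/rotate calls and transform the point by it, but then the camera transform has to be stripped back out; building the hinge matrix yourself, as above, avoids that entirely.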
  14. Hi everybody, I have implemented a marching tetrahedra algorithm for the isosurface extraction. For the normals, I was only able to compute them per triangle as per vertes I need information about all neighbouring triangle. However, in neighboring cells I have no information about which triangles are adjacent to the cell so I cannot compute the normals per vertex. How is this normally done? Ma marching tetrahedra code is as follows : void marching_tetras::MarchingCubesSplitter(Vec3f verts[8], float interpol[8],int parity, bool wireframe, bool raytracer,vector<triangle> &triangleList) { //! Split each cube into 5 tetrahedra //! need to alternate orientation of central tetrahedron to ensure the diagonals line up if (parity) { MarchingTetras(verts[0],verts[5],verts[3],verts[6],interpol[0],interpol[5],interpol[3],interpol[6],wireframe,raytracer,triangleList); MarchingTetras(verts[0],verts[1],verts[3],verts[5],interpol[0],interpol[1],interpol[3],interpol[5],wireframe,raytracer,triangleList); MarchingTetras(verts[0],verts[4],verts[5],verts[6],interpol[0],interpol[4],interpol[5],interpol[6],wireframe,raytracer,triangleList); MarchingTetras(verts[0],verts[2],verts[6],verts[3],interpol[0],interpol[2],interpol[6],interpol[3],wireframe,raytracer,triangleList); MarchingTetras(verts[3],verts[7],verts[6],verts[5],interpol[3],interpol[7],interpol[6],interpol[5],wireframe,raytracer,triangleList); } else { MarchingTetras(verts[2],verts[4],verts[1],verts[7],interpol[2],interpol[4],interpol[1],interpol[7],wireframe,raytracer,triangleList); MarchingTetras(verts[5],verts[4],verts[7],verts[1],interpol[5],interpol[4],interpol[7],interpol[1],wireframe,raytracer,triangleList); MarchingTetras(verts[2],verts[3],verts[7],verts[1],interpol[2],interpol[3],interpol[7],interpol[1],wireframe,raytracer,triangleList); MarchingTetras(verts[4],verts[2],verts[1],verts[0],interpol[4],interpol[2],interpol[1],interpol[0],wireframe,raytracer,triangleList); MarchingTetras(verts[2],verts[7],verts[6],verts[4],interpol[2],interpol[7],interpol[6],interpol[4],wireframe,raytracer,triangleList); } } void marching_tetras::MarchingTetras(Vec3f a, Vec3f b, Vec3f c, Vec3f d,float val_a, float val_b, float val_c, float val_d, bool wireframe, bool raytracer,vector<triangle> &triangleList) { //! figure out how masimulationGrid->getYSize() of the vertices are on each side of the isosurface //! If value is smaller than 0 -> not part of fluid. Count is the number of vertices inside the volume int count = (val_a > m_isosurface) + (val_b > m_isosurface) + (val_c > m_isosurface) + (val_d > m_isosurface); //! handle each case //! All vertices are inside the volume or outside the volume (This time of the tetrahedra, not the cube!) if(count==0 || count==4) return; //! Three vertices are inside the volume else if (count == 3) { //! Make sure that fourth value lies outside if (val_d <= m_isosurface) MarchingTetrasHelper3(a,b,c,d,val_a,val_b,val_c,val_d,wireframe,raytracer,triangleList); else if (val_c <= m_isosurface) MarchingTetrasHelper3(a,d,b,c,val_a,val_d,val_b,val_c,wireframe,raytracer,triangleList); else if (val_b <= m_isosurface) MarchingTetrasHelper3(a,c,d,b,val_a,val_c,val_d,val_b,wireframe,raytracer,triangleList); else MarchingTetrasHelper3(b,d,c,a,val_b,val_d,val_c,val_a,wireframe,raytracer,triangleList); } //! Two vertices are inside the volume else if (count == 2) { //! 
Make sure that the last two points lie outside if (val_a > m_isosurface && val_b > m_isosurface) MarchingTetrasHelper2(a,b,c,d,val_a,val_b,val_c,val_d,wireframe,raytracer,triangleList); else if (val_a > m_isosurface && val_c > m_isosurface) MarchingTetrasHelper2(a,c,d,b,val_a,val_c,val_d,val_b,wireframe,raytracer,triangleList); else if (val_a > m_isosurface && val_d > m_isosurface) MarchingTetrasHelper2(a,d,b,c,val_a,val_d,val_b,val_c,wireframe,raytracer,triangleList); else if (val_b > m_isosurface && val_c > m_isosurface) MarchingTetrasHelper2(b,c,a,d,val_b,val_c,val_a,val_d,wireframe,raytracer,triangleList); else if (val_b > m_isosurface && val_d > m_isosurface) MarchingTetrasHelper2(b,d,c,a,val_b,val_d,val_c,val_a,wireframe,raytracer,triangleList); else MarchingTetrasHelper2(c,d,a,b,val_c,val_d,val_a,val_b,wireframe,raytracer,triangleList); } //! One vertex is inside the volume else if (count == 1) { //! Make sure that the last three points lie outside if (val_a > m_isosurface) MarchingTetrasHelper1(a,b,c,d,val_a,val_b,val_c,val_d,wireframe,raytracer,triangleList); else if (val_b > m_isosurface) MarchingTetrasHelper1(b,c,a,d,val_b,val_c,val_a,val_d,wireframe,raytracer,triangleList); else if (val_c > m_isosurface) MarchingTetrasHelper1(c,a,b,d,val_c,val_a,val_b,val_d,wireframe,raytracer,triangleList); else MarchingTetrasHelper1(d,c,b,a,val_d,val_c,val_b,val_a,wireframe,raytracer,triangleList); } } //! when 3 vertices are within the volume void marching_tetras::MarchingTetrasHelper3(Vec3f a, Vec3f b, Vec3f c, Vec3f d, float val_a, float val_b, float val_c, float val_d,bool wireframe, bool raytracer,vector<triangle> &triangleList) { //! Compute intersection points of isosurface with tetrahedron float ad_frac = (m_isosurface - val_d) / (val_a - val_d); float bd_frac = (m_isosurface - val_d) / (val_b - val_d); float cd_frac = (m_isosurface - val_d) / (val_c - val_d); //! Compute the vertices of the intersections of the isosurface with the tetrahedron Vec3f ad = ad_frac*a + (1-ad_frac)*d; Vec3f bd = bd_frac*b + (1-bd_frac)*d; Vec3f cd = cd_frac*c + (1-cd_frac)*d; //! Render the triangle storeTriangle(ad,cd,bd); } //! when 2 vertices are within the volume void marching_tetras::MarchingTetrasHelper2(Vec3f a, Vec3f b, Vec3f c, Vec3f d,float val_a,float val_b,float val_c,float val_d,bool wireframe,bool raytracer,vector<triangle> &triangleList) { //! Compute the intersection points of isosurface with tetrahedron float ac_frac = (m_isosurface - val_c) / (val_a - val_c); float ad_frac = (m_isosurface - val_d) / (val_a - val_d); float bc_frac = (m_isosurface - val_c) / (val_b - val_c); float bd_frac = (m_isosurface - val_d) / (val_b - val_d); //! Compute the vertices of the TWO triangles which mark the intersection of the isosurface with the tetrahedron Vec3f ac = ac_frac*a + (1-ac_frac)*c; Vec3f ad = ad_frac*a + (1-ad_frac)*d; Vec3f bc = bc_frac*b + (1-bc_frac)*c; Vec3f bd = bd_frac*b + (1-bd_frac)*d; //! Render the two triangles storeTriangle(ac,bc,ad); storeTriangle(bc,bd,ad); } //! when 1 vertex is within the volume void marching_tetras::MarchingTetrasHelper1(Vec3f a, Vec3f b, Vec3f c, Vec3f d,float val_a,float val_b,float val_c,float val_d, bool wireframe, bool raytracer,vector<triangle> &triangleList) { //! Compute the intersection points of isosurface with tetrahedron float ab_frac = (m_isosurface - val_b) / (val_a - val_b); float ac_frac = (m_isosurface - val_c) / (val_a - val_c); float ad_frac = (m_isosurface - val_d) / (val_a - val_d); //! 
Compute the vertices of the intersections of the isosurface with the tetrahedron Vec3f ab = ab_frac*a + (1-ab_frac)*b; Vec3f ac = ac_frac*a + (1-ac_frac)*c; Vec3f ad = ad_frac*a + (1-ad_frac)*d; //! Render the triangle storeTriangle(ab,ad,ac); }
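     A common fix is to not look for triangle adjacency across cells at all: dump every generated triangle into one list (as storeTriangle already does), then make a second pass that accumulates area-weighted face normals per unique vertex position and normalizes at the end. Welding by quantized position absorbs the tiny floating point differences between the same edge intersection computed from two neighbouring cells. A sketch with glm types standing in for Vec3f (the quantization step is an assumption):

        #include <glm/glm.hpp>
        #include <cmath>
        #include <map>
        #include <tuple>
        #include <vector>

        struct Tri { glm::vec3 a, b, c; };

        static std::tuple<long, long, long> Key(const glm::vec3& p, float cell = 1e-4f)
        {
            return { std::lround(p.x / cell), std::lround(p.y / cell), std::lround(p.z / cell) };
        }

        void ComputeVertexNormals(const std::vector<Tri>& tris,
                                  std::vector<glm::vec3>& outNormals)   // one normal per triangle corner
        {
            std::map<std::tuple<long, long, long>, glm::vec3> accum;
            auto add = [&](const glm::vec3& p, const glm::vec3& n) {
                accum.emplace(Key(p), glm::vec3(0.0f)).first->second += n;
            };
            for (const Tri& t : tris) {
                glm::vec3 n = glm::cross(t.b - t.a, t.c - t.a);  // length = 2*area, so this area-weights for free
                add(t.a, n); add(t.b, n); add(t.c, n);
            }
            outNormals.clear();
            for (const Tri& t : tris) {
                outNormals.push_back(glm::normalize(accum[Key(t.a)]));
                outNormals.push_back(glm::normalize(accum[Key(t.b)]));
                outNormals.push_back(glm::normalize(accum[Key(t.c)]));
            }
        }

     An alternative that avoids welding entirely is to evaluate the scalar field's gradient at each generated vertex (central differences on the grid) and use the normalized gradient as the normal; for isosurfaces this usually looks smoother than averaged face normals.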
  15. Hello fellow programmers, for a couple of days now I've been building my own planet renderer, just to see how floating point precision issues can be tackled. As you can probably imagine, I quickly ran into precision issues when trying to render absurdly large planets. I have used the classical quadtree LOD approach. I've generated my grids with 33 vertices (x: -1 to 1, y: -1 to 1, z = 0). Each grid is managed by a TerrainNode class that, depending on the side it represents (top, bottom, left, right, front, back), creates a special rotation-translation matrix that moves and rotates the grid away from the origin, so that when I finally normalize all the vertices in my vertex shader I get a perfect sphere.

        T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, 1.0));
        R = glm::rotate(glm::dmat4(1.0), glm::radians(180.0), glm::dvec3(1.0, 0.0, 0.0));
        sides[0] = new TerrainNode(1.0, radius, T * R, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_FRONT));

        T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -1.0));
        R = glm::rotate(glm::dmat4(1.0), glm::radians(0.0), glm::dvec3(1.0, 0.0, 0.0));
        sides[1] = new TerrainNode(1.0, radius, R * T, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_BACK));

        // And so on for the rest of the sides.

     As you can see, for the front-side grid I rotate it 180 degrees to make it face the camera and push it towards the eye; the back side is handled almost the same way, only I don't need to rotate it, just push it away from the eye. The same technique is applied to the rest of the faces (obviously with the proper rotations/translations). The matrix that results from multiplying R and T (in that particular order) is sent to my vertex shader as `r_Grid`.

        // spherify
        vec3 V = normalize((r_Grid * vec4(r_Vertex, 1.0)).xyz);
        gl_Position = r_ModelViewProjection * vec4(V, 1.0);

     The `r_ModelViewProjection` matrix is generated on the CPU in this manner:

        // Not the most efficient way, but it works.
        glm::dmat4 Camera::getMatrix()
        {
            // Create the view matrix. Roll, Yaw and Pitch are all quaternions.
            glm::dmat4 View = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);
            // The model matrix is generated by translating in the opposite direction of the camera.
            glm::dmat4 Model = glm::translate(glm::dmat4(1.0), -Position);
            // Projection = glm::perspective(fovY, aspect, zNear, zFar);
            // zNear = 0.1, zFar = 1.0995116e12
            return Projection * View * Model;
        }

     I managed to get rid of z-fighting by using a technique called a logarithmic depth buffer, described in this article; it works amazingly well, no z-fighting at all, at least none visible. Each frame I render each node by sending the generated matrices this way:

        // Set the r_ModelViewProjection uniform.
        // Sneak in mRadiusMatrix, which is a matrix that contains the radius of my planet.
        Shader::setUniform(0, Camera::getInstance()->getMatrix() * mRadiusMatrix);
        // Set the r_Grid matrix uniform I created earlier.
        Shader::setUniform(1, r_Grid);
        grid->render();

     My planet's radius is around 6400000.0 units, absurdly large, but that's what I really want to achieve. Everything works well, the nodes split and merge as you'd expect, but whenever I get close to the surface of the planet the rounding errors start to kick in, giving me that lovely staircase effect. I've read that if I could render each grid relative to the camera I could get better precision on the surface, effectively getting rid of those rounding errors. My question is: how can I achieve this relative-to-camera rendering in my scenario? I know that I have to do most of the work on the CPU in double precision, and that's exactly what I'm doing; I only use double on the CPU side, where I also do most of the matrix multiplications. As you can see from my vertex shader, I only do the usual r_ModelViewProjection * (some vertex coords). Thank you for your suggestions!
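     A sketch of the usual camera-relative (rendering-relative-to-eye) change, under the assumption that node origins are available as glm::dvec3: keep every absolute position in double on the CPU, subtract the camera position before anything is converted to float, and build the view matrix with no translation at all, so the numbers that reach the GPU are small near the viewer (the accessors below are assumptions):

        // CPU side, all double precision.
        glm::dmat4 Camera::getRotationOnly() const
        {
            return glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);   // no translation term
        }

        void renderNode(const TerrainNode& node, const Camera& cam)
        {
            // Move the node so the camera sits at the origin *before* dropping to float.
            glm::dvec3 relative = node.worldOrigin() - cam.position();
            glm::dmat4 model    = glm::translate(glm::dmat4(1.0), relative) * node.localMatrix();

            glm::mat4 mvp = glm::mat4(cam.projection() * cam.getRotationOnly() * model);
            Shader::setUniform(0, mvp);        // same uniforms as before, just built differently
            Shader::setUniform(1, node.gridMatrix());
            node.render();
        }

     The subtraction has to stay in double; once `relative` is small (you are near the surface), the cast to float loses almost nothing, which is what removes the staircase artefacts near the ground while leaving distant nodes, where precision matters less, unchanged.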
  16. I know that pre-rendered games use images of 3D scenes rendered on other hardware, and these images are then used as backgrounds. I was wondering whether the power of the console is irrelevant to displaying these images. What I mean exactly is: if the console only displays these images, and so doesn't have to compute them as it does with games rendered in real time, is it possible that a pre-rendered game, created with the most sophisticated hardware and techniques of today, could be displayed correctly on a console like the PlayStation 1 without problems (resolution and some other factors aside)? For example, could pre-rendered images like those of Resident Evil 0 (Nintendo GameCube) be used on the PlayStation 1? Or does the console have its own limits on which pre-rendered images it is able to display? Another question, which connects to the previous one: what is the difference between the pre-rendered images used in games and the realistic renderings used by architects and the like? Could those realistic renderings be used as pre-rendered backgrounds? My reasoning is this: if with my dated 2006 computer (with its NVIDIA GeForce Go 7300 TurboCache) I can view renderings created with the most advanced hardware of today, the same should hold for pre-rendered backgrounds created today with the most advanced hardware and put on old consoles like the PlayStation 1. Is this reasoning correct?
  17. Hello, I'm wondering if anyone has examples of how they handle material and shader systems, particularly the association of uniform/constant buffers with the correct slots. Currently I define a material file in XML (it could be JSON or anything else, it's just XML for now):

        <MaterialFile>
          <Material name="BasicPBS" type="0">
            <!-- States -->
            <samplers>
              <sampler_0>PointWrap</sampler_0>
            </samplers>
            <rasterblock>CullCounterClockwise</rasterblock>
            <blendblock>Opaque</blendblock>
            <!-- Properties -->
            <Properties>
              <color r="1" g="1" b="1" a="1"></color> <!-- tint color -->
              <diffuse>
                <texture>SnookerBall.png</texture>
                <sampler>Sampler_0</sampler>
              </diffuse>
              <specular>
                <value r="1" g="1" b="1" a="1"></value>
              </specular>
              <metalness>
                <value>0</value>
              </metalness>
              <roughness>
                <value>0.07</value>
              </roughness>
            </Properties>
          </Material>
        </MaterialFile>

     The sampler, raster and blend blocks are all common reusable blocks that a material cpp can point to based on the names in the XML, similar to XNA and DirectXTK: https://github.com/Microsoft/DirectXTK/wiki/CommonStates Common data such as the camera view, projection and viewProjection matrices can be stored in a single PerRender constant buffer, and the mesh's data (world, worldProjection etc.) can be stored in a PerObject constant buffer that is set by a world/render manager each time it draws an object. The material will also obviously store a reference to the shader that has been loaded by the shader manager, reused if it is already loaded, and so on. That leaves the material properties: how can they be handled in a dynamic way? Or do you have to explicitly create individual material classes that use Material as a base and override using virtuals (yuck) to set the individual property types? For example, you can have materials that have a texture for diffuse but just values for roughness and metallic, or you can have one with all maps, which means there have to be permutations. How can that be handled in a clean way? Thanks
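     One dynamic option that avoids per-material subclasses is to treat properties as a name-to-variant map parsed straight from the XML, and let the shader's reflection data decide where each one lands in the material constant buffer. A sketch (the types and the reflection-derived layout are assumptions, not a specific library's API):

        #include <cstdint>
        #include <cstring>
        #include <string>
        #include <unordered_map>
        #include <variant>
        #include <vector>

        struct Float4 { float x, y, z, w; };
        using MaterialParam = std::variant<float, Float4, std::string /* texture name */>;

        struct Material {
            std::unordered_map<std::string, MaterialParam> params;   // "roughness" -> 0.07f, ...
        };

        // Offsets gathered at load time from shader reflection (e.g. ID3D11ShaderReflection).
        struct CBufferLayout {
            std::unordered_map<std::string, uint32_t> offsets;       // property name -> byte offset
            uint32_t sizeInBytes = 0;
        };

        void WriteMaterialCB(const Material& m, const CBufferLayout& layout, std::vector<uint8_t>& cpuCopy)
        {
            cpuCopy.resize(layout.sizeInBytes);
            for (const auto& [name, value] : m.params) {
                auto it = layout.offsets.find(name);
                if (it == layout.offsets.end()) continue;            // this shader variant doesn't use it
                if (const float* f = std::get_if<float>(&value))
                    std::memcpy(cpuCopy.data() + it->second, f, sizeof(float));
                else if (const Float4* v = std::get_if<Float4>(&value))
                    std::memcpy(cpuCopy.data() + it->second, v, sizeof(Float4));
                // texture names are resolved to views and bound to slots the same way
            }
        }

     Whether a given slot gets a map or a constant then becomes a shader permutation question (a define per map) rather than a C++ class-hierarchy question.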
  18. Hello, how can I get the center of a scene, node or model? What is the best way to do this? Scene structure:

        Scene
         |
         o - Node[s]
              |
              o - Model[s]        // Mesh
                   |
                   o - Primitive[s]   // Sub-Mesh
                        |
                        o - local AABB and world AABB

     I'm using the AABB's center as the center of a primitive, and I'm combining all AABBs to build an AABB for the scene. When I visualized the boxes they seemed to work as expected. But I need the center of the scene, node or model in order to apply a rotation around the center, because I'm using a trackball to rotate the attached node or model. Currently I'm using the scene AABB's center as the rotation point (pivot), and for a single object it works. After a rotation is completed, the center of a primitive remains the same, which is what I'd expect. But if I load a scene which contains multiple models or primitives, the center of the scene's AABB (which I'm using as the center of the scene) moves after a rotation is completed, because every time a rotation finishes, a new AABB is calculated for the scene by combining all sub-AABBs. I think this may be normal, because there is no balance between the AABBs while rotating; for instance, if I use two identical cubes without rotations, the center of the new scene AABB stays the same. My solution (it seems to work for now): I added a new center member (vec3) to the scene struct:

        scene->center = vec3(0);
        scene->primCount = 0;
        for prim in primitivesInFrustum
            scene->center += prim->aabb->center;
            scene->primCount++;
        scene->center = scene->center / scene->primCount

     Now I'm using this center as the center of the scene instead of scene->aabb->center, and it seems to work. My question is: what is the best way to get the center of a scene, node or model? How do you get the center for rotation? Any suggestions? Am I doing this right? Thanks
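     A small change that usually stops the pivot drifting: pick the pivot once (when the scene is loaded, or when the drag starts) and keep it fixed for the whole interaction, instead of recomputing it from the re-fitted world AABB after every rotation; the recomputation is what moves it, because an axis-aligned box around rotated content changes size. A sketch with glm (names are assumptions):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>
        #include <glm/gtc/quaternion.hpp>

        // At load time (or on mouse-down), in world space:
        scene->pivot = scene->worldAABB.center();    // or the average of primitive centers, as above

        // While the trackball drag is active, rotate about that cached pivot:
        glm::mat4 R = glm::mat4_cast(trackballDelta);                  // incremental rotation (quat)
        glm::mat4 M = glm::translate(glm::mat4(1.0f),  scene->pivot)
                    * R
                    * glm::translate(glm::mat4(1.0f), -scene->pivot);
        scene->transform = M * scene->transform;

        // Recompute the world AABB afterwards for culling if needed, but don't feed it back
        // into the pivot until the set of objects actually changes.

     Averaging primitive centers (your current fix) works too, but it weights small and large objects equally; the cached-pivot approach keeps whichever definition of "center" you prefer stable during the rotation.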
  19. Hi. I wanted to experiment with D3D12 development and decided to run some tutorials: Microsoft DirectX-Graphics-Samples, Braynzar Soft, 3dgep... Whatever sample I run, I get the same crash. The whole initialization process goes well, no errors, return codes OK, but as soon as the Present method is invoked on the swap chain, I hit a crash with the following call stack: https://drive.google.com/open?id=10pdbqYEeRTZA5E6Jm7U5Dobpn-KE9uOg The crash is an access violation on a null pointer (with an offset of 0x80). I'm working on a notebook, a Toshiba Qosmio X870 with two GPUs: an integrated Intel HD 4000 and a dedicated NVIDIA GTX 670M (Fermi based). The HD 4000 is DX11 only, and as far as I understand the GTX 670M is DX12 with feature level 11_0. I checked that the right adapter is chosen by the sample, and when the D3D12 device is requested with feature level 11_0 it is created without problems; same for all the required interfaces (swap chain, command queue...). I have tried a lot of things to solve the problem or get more information: forcing the notebook to always use the NVIDIA GPU, disabling the debug layer, asking for a different feature level (by the way, 11_0 is the only one that lets me create the device; any other feature level fails at device creation)... I have the latest NVIDIA drivers (391.35), the latest Windows 10 SDK (10.0.17134.0), and I'm working in Visual Studio 2017 Community. Thanks to anybody who can help me find the problem.
  20. Hi everyone, I could use some assistance from anyone who has similar experience or a good idea! I have created a skybox (as a cube) and now I need to add a floor/ground. The skybox is created from a cubemap, and initially it was infinite; now it is finite with a specific size. The floor is a quad in the middle of the skybox, like a horizon. I have two problems: 1. When moving the skybox upwards or downwards, I need to sample from points even above the horizon while sampling from the bottom at the same time. 2. I am trying to create a seamless blend of the texture at the horizon, where the quad meets the skybox; however, I get skew artifacts. Has anybody done something similar? Is there any good practice? Thanks everyone!
  21. Hey guys, I've been writing a software occlusion culling algorithm (mostly for fun and learning), and I came across an issue. I store my geometry in an octree that I traverse front to back, and I check each node for coverage to see whether it and its children are occluded. When checking a node for occlusion, the node itself is a cube, but I approximate it as the minimum on-screen bounding box that covers the cube. My issue is that, when checking this on-screen bounding box for occlusion, I don't know exactly when a pixel should be considered touched by the geometry. Currently I generate min/max points for every node on screen, round them to the nearest integer, and loop as shown in the code below:

        f32* DepthRow = RenderState->DepthMap + MinY*ScreenX + MinX;
        for (i32 Y = MinY; Y < MaxY; ++Y)
        {
            f32* Depth = DepthRow;
            for (i32 X = MinX; X < MaxX; ++X)
            {
                if (*Depth > NodeZ)
                {
                    // NOTE: We aren't occluded
                }
                ++Depth;
            }
            DepthRow += ScreenX;
        }

     This method is much the same as what software rasterizers do, in that a pixel is only considered part of the box if the center of the pixel is inside the box. Sometimes, though, I get nodes that are 2x1 pixels in size, and when I subdivide them, none of the children are rendered because of where they happen to fall relative to the pixel centers. This is illustrated in the two images attached: one node happens to overlap 2 pixels, but once it's subdivided, some of its children are empty and the ones which aren't overlap 0 pixels because the min and max values are equal, so nothing gets rendered. So my issue is how to better decide when a node should overlap a pixel, to avoid such cases. I thought I could make my loop inclusive by checking Y <= MaxY and X <= MaxX, but then I have to subdivide nodes until MinX == MaxX and MinY == MaxY, which makes my program render 10 times more nodes than it usually does. My other option is to check whether the pre-rounded min/max values are within a distance of 1 of each other, and render that regardless of whether the block takes up more than one pixel because of its orientation. The only issue I have with that is whether I can still call my renders pixel accurate: in the worst case a node can have a diameter of 1 on screen but cover 4 pixels, so I can get blockier results than I should. Is there a good fix for this issue? There aren't many resources on software rasterization, especially with very small geometry, so I don't know how explored this issue really is.
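     A common way out is to make the occlusion test conservative instead of center-sampled: floor the minimum, ceil the maximum, and force at least one pixel in each direction, so a node always touches at least as many pixels as any of its children. That errs on the side of calling things visible, which costs a little culling efficiency but never drops geometry. Sketch on top of the loop above (MinXf/MaxXf etc. stand for the pre-rounded float bounds, which is an assumption about your data):

        // Conservative screen bounds: any pixel the box touches at all gets tested.
        i32 X0 = (i32)floorf(MinXf);
        i32 Y0 = (i32)floorf(MinYf);
        i32 X1 = (i32)ceilf (MaxXf);
        i32 Y1 = (i32)ceilf (MaxYf);
        if (X1 <= X0) X1 = X0 + 1;   // degenerate (sub-pixel) boxes still touch one pixel
        if (Y1 <= Y0) Y1 = Y0 + 1;
        // Clamp X0,Y0,X1,Y1 to the depth buffer, then run the same loop with these bounds
        // (exclusive upper bound, as before).

     The general rule for occlusion culling is: rasterize occluders so their coverage is never overestimated, and test occludees so their coverage is never underestimated; the floor/ceil bounds above are the occludee half of that pairing, and they are what keeps the test safe even for sub-pixel nodes.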
  22. Dear folks, how do I calculate the axis of rotation between two vectors, where one is the source direction vector and the other is the destination direction vector? Thanks a lot, Jack
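     The axis is the normalized cross product of the two directions, and the matching angle falls out of atan2 of the cross product's length and the dot product, which is better conditioned than acos of the dot. A sketch with glm (the vectors don't need to be pre-normalized for this to work):

        #include <glm/glm.hpp>
        #include <cmath>

        // Returns the rotation axis and writes the angle (radians) that rotates `from` onto `to`.
        glm::vec3 RotationAxis(const glm::vec3& from, const glm::vec3& to, float& angleOut)
        {
            glm::vec3 c = glm::cross(from, to);
            float s = glm::length(c);
            float d = glm::dot(from, to);
            angleOut = std::atan2(s, d);                  // robust even for nearly (anti)parallel inputs
            if (s > 1e-6f)
                return c / s;                             // normalized axis
            // Parallel or opposite vectors: any axis perpendicular to `from` works.
            glm::vec3 helper = (std::fabs(from.x) < 0.9f) ? glm::vec3(1, 0, 0) : glm::vec3(0, 1, 0);
            return glm::normalize(glm::cross(from, helper));
        }

     If what you actually need is the rotation itself, building a quaternion from this axis/angle keeps the bookkeeping simpler than a matrix.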
  23. I'm making a renderer just for fun (C++, OpenGL) and want to add decal support. Here is what I've found: a couple of slides from DOOM: http://advances.realtimerendering.com/s2016/Siggraph2016_idTech6.pdf ; decals, but deferred: http://martindevans.me/game-development/2015/02/27/Drawing-Stuff-… space-Decals/ ; and no implementation details here: https://turanszkij.wordpress.com/2017/10/12/forward-decal-rendering/ As I understand it, there should be a list of decals for each tile, the same as for light sources. But what do I do next? Let's assume that all decals are packed into a spritesheet, and a decal substitutes the diffuse and normal. - What data should be stored for each decal on the GPU? - The articles above describe decals as OBBs. Why an OBB if decals seem to be flat? - How do I actually render a decal during the object render pass (since it's forward)? Is it projected somehow? I don't completely understand this part. Are there any papers on this topic?
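     To the three concrete questions, a sketch of the per-decal data that tiled/clustered decal approaches typically upload, with the reasoning in the comments (the names and packing are assumptions, not what idTech 6 ships):

        #include <glm/glm.hpp>

        struct DecalGPU {
            glm::mat4 worldToDecal;   // inverse of the decal's OBB transform; maps a world-space
                                      // position into the decal's local unit box [-0.5, 0.5]^3
            glm::vec4 atlasScaleBias; // xy = UV scale, zw = UV offset of this decal in the spritesheet
            glm::vec4 params;         // e.g. normal-blend strength, fade, material flags
        };

     The OBB (rather than a flat quad) is what lets the decal project onto whatever geometry happens to lie inside that volume. In the forward pass, for each decal in the pixel's tile you transform the pixel's world position by worldToDecal, reject it if it falls outside the unit box, otherwise use two of the local coordinates (typically xy) as UVs, remap them with atlasScaleBias, and blend the sampled diffuse/normal over the surface's values before lighting; the third local coordinate can double as a fade factor so the decal dies off away from the surface it was placed on.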
  24. I'm looking at the following paper, which describes low-resolution particle rendering in Destiny: http://advances.realtimerendering.com/s2013/Tatarchuk-Destiny-SIGGRAPH2013.pdf The paper mentions rendering a separate buffer, besides particle color, that contains the start and end depths at which a pixel goes from transparent to fully opaque. Getting the start depth is just a matter of taking the resulting depth buffer, since I'm rendering particles from back to front. What I'm not sure about is how to get the depth value for full opacity. Is there a generally accepted way of rendering this value? I'm wondering whether I would need some kind of UAV operation when rendering the particles to get that final depth value.
  25. Hi, with the latest update of Windows 10 (April 2018) and laptops with dual graphics (NVIDIA + Intel), we have a critical problem: an FPU exception ("floating point division by zero") when we call IDirect3D9::CreateDevice(). We found a temporary solution: globally disable all FPU exceptions in our app by calling Set8087CW($133F) at startup. I'm sure that our DirectX code is correct and that there are no mistakes in the parameters I pass to CreateDevice(). This problem didn't happen with any previous version of Windows 10, or with Windows 8/7/Vista/XP. The problem also doesn't occur if we manually choose the Intel video card for our app, or run the app on an external monitor driven by the NVIDIA card; and there are no problems with AMD cards at all. Our app is written in Delphi (Pascal), and we have received many complaints from our customers. Should I report this problem to NVIDIA or to Microsoft?