Showing results for tags '3D' in content posted in Graphics and GPU Programming.
Found 186 results

  1. Hi guys, there are many ways to do light culling in tile-based shading. I've been playing with this idea for a while and just want to throw it out there. Because tile frustums are generally small compared to the light radius, I tried using a cone test to reduce the false positives introduced by the commonly used sphere-frustum test. On top of that, I use distance to the camera rather than depth for the near/far test (i.e. slicing by spheres). The method extends naturally to clustered light culling as well. The following image shows the general idea. Performance-wise I get around a 15% improvement over the sphere-frustum test. You can also see how a single light performs, from left to right: (1) standard rendering of a point light; then the tiles that pass (2) the sphere-frustum test, (3) the cone test, and (4) the spherical-sliced cone test. I put the details in my blog post (https://lxjk.github.io/2018/03/25/Improve-Tile-based-Light-Culling-with-Spherical-sliced-Cone.html), GLSL source code included! Eric
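    A minimal C++ sketch of the test described above (my own illustration, not the GLSL from the linked post; the cone parameterisation and the GLM dependency are assumptions):
[code]
// Sphere-vs-cone tile test with spherical near/far slices. The tile cone is assumed
// to have its apex at the camera origin in view space, a unit axis through the tile
// centre, and a half-angle that encloses the tile's corner rays.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

bool LightTouchesTile(const glm::vec3& lightPosVS, float lightRadius,
                      const glm::vec3& coneAxis,        // unit vector through the tile centre
                      float coneCos, float coneSin,     // cos/sin of the cone half-angle
                      float sliceNear, float sliceFar)  // spherical slice bounds (camera distance)
{
    const float distToCam = glm::length(lightPosVS);

    // Near/far rejection on distance to the camera instead of view-space depth,
    // i.e. the "sliced by spheres" part of the post.
    if (distToCam + lightRadius < sliceNear || distToCam - lightRadius > sliceFar)
        return false;

    // Standard sphere-vs-cone test: the signed distance from the sphere centre to the
    // cone surface must not exceed the light radius.
    const float alongAxis    = glm::dot(lightPosVS, coneAxis);
    const float distFromAxis = std::sqrt(std::max(distToCam * distToCam - alongAxis * alongAxis, 0.0f));
    const float distToCone   = coneCos * distFromAxis - coneSin * alongAxis;
    return distToCone <= lightRadius;
}
[/code]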
  2. Hello guys, please help me figure out why my Wavefront OBJ model does not show up. I have already checked and there are no errors reported.
[code]
using OpenTK;
using System.Collections.Generic;
using System.IO;
using System.Text;

namespace Tutorial_08.net.sourceskyboxer
{
    public class WaveFrontLoader
    {
        private static List<Vector3> inPositions;
        private static List<Vector2> inTexcoords;
        private static List<Vector3> inNormals;

        private static List<float> positions;
        private static List<float> texcoords;
        private static List<int> indices;

        public static RawModel LoadObjModel(string filename, Loader loader)
        {
            inPositions = new List<Vector3>();
            inTexcoords = new List<Vector2>();
            inNormals = new List<Vector3>();
            positions = new List<float>();
            texcoords = new List<float>();
            indices = new List<int>();
            int nextIdx = 0;

            using (var reader = new StreamReader(File.Open("Contents/" + filename + ".obj", FileMode.Open), Encoding.UTF8))
            {
                string line = reader.ReadLine();
                int i = reader.Read();
                while (true)
                {
                    string[] currentLine = line.Split();
                    if (currentLine[0] == "v")
                    {
                        Vector3 pos = new Vector3(float.Parse(currentLine[1]), float.Parse(currentLine[2]), float.Parse(currentLine[3]));
                        inPositions.Add(pos);

                        if (currentLine[1] == "t")
                        {
                            Vector2 tex = new Vector2(float.Parse(currentLine[1]), float.Parse(currentLine[2]));
                            inTexcoords.Add(tex);
                        }
                        if (currentLine[1] == "n")
                        {
                            Vector3 nom = new Vector3(float.Parse(currentLine[1]), float.Parse(currentLine[2]), float.Parse(currentLine[3]));
                            inNormals.Add(nom);
                        }
                    }

                    if (currentLine[0] == "f")
                    {
                        Vector3 pos = inPositions[0];
                        positions.Add(pos.X);
                        positions.Add(pos.Y);
                        positions.Add(pos.Z);

                        Vector2 tc = inTexcoords[0];
                        texcoords.Add(tc.X);
                        texcoords.Add(tc.Y);

                        indices.Add(nextIdx);
                        ++nextIdx;
                    }

                    reader.Close();
                    return loader.loadToVAO(positions.ToArray(), texcoords.ToArray(), indices.ToArray());
                }
            }
        }
    }
}
[/code]
    I have tried another method as well, but the model still doesn't show. I haven't been able to get help from any OpenTK developers so far. Please help me figure out how to fix this. My download (mega.nz) contains the original .blend source and the PNG file. Thanks!
  3. Could anyone please explain why the FOV and the monitor aspect ratio (when doing the actual import) would affect how the meshes themselves are imported, for example from 3ds Max? Thanks, Jack
  4. Hello, in my game engine I want to implement my own bone weight painting tool, i.e. a virtual brush painting tool for a mesh. I have already implemented my own "dual quaternion skinning" animation system with "morphs" (= blend shapes) and "bone driven" "corrective morphs" (= a morph that depends on a bending or twisting bone). But now I have no idea which is the best method to implement a brush painting system. Some proposals: a. I would build a kind of additional vertex structure that can help me find the surrounding (neighbouring) vertex indices for a given "central vertex" index. b. The structure should also give information about the distance from the neighbouring vertices to the given "central vertex". c. Calculate the strength of the colour added to the "central vertex" and the neighbouring vertices with a formula using linear or quadratic distance fall-off (see the sketch after this post). d. The central vertex would be detected as the vertex hit by an orthogonal projection from my cursor (= brush) in world space onto the mesh, but my problem is that several vertices could be hit simultaneously. E.g. if I want to paint the inward side of the left leg, the right leg will also be hit. I think this problem is quite typical and there are standard approaches that I don't know. Any help or tutorials are welcome. P.S. I am working with SharpDX, DirectX 11.
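    A small sketch of the distance fall-off from point (c) above, in plain C++ (my own illustration, not tied to SharpDX; all names are made up):
[code]
#include <algorithm>
#include <cmath>

struct Float3 { float x, y, z; };

static float Distance(const Float3& a, const Float3& b)
{
    const float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Returns how much of 'brushStrength' is applied to 'neighbour'. 'quadratic' selects
// between linear and quadratic fall-off; both reach zero at 'brushRadius'.
float BrushWeight(const Float3& centre, const Float3& neighbour,
                  float brushRadius, float brushStrength, bool quadratic)
{
    const float d = Distance(centre, neighbour);
    const float t = std::clamp(1.0f - d / brushRadius, 0.0f, 1.0f); // 1 at the centre, 0 at the radius
    return brushStrength * (quadratic ? t * t : t);
}
[/code]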
  5. So I have a probably common question, but somehow I got stuck and need a little help. I managed to glue together a skinned character animation player in C++/OpenGL (though that really does not matter). As input I use MD5 (3D models from Doom) and Spine2D (2D animations in JSON). Everything works like a charm, with full blend tree support and the skeleton deformed on the GPU from supplied positions and quaternions, just like MD5 taught us. The problem started when I tried to add proper animation of the root bone.
    To make blending between two animations a bit better, I had to remove the root bone rotation and position so that the bones "morph" more locally - otherwise, if I mix a '90 degree rotation' animation with 'standing still', the bones end up in bad positions, because positions are lerped and therefore move along a straight line. At the end of the animation I just add the root bone back to the whole skeleton and everything looks fine. This also served as a base for moving the object. As we all know, after a walk cycle such an animation will pop back to the start position. Unfortunately, to make the object move, I can't just add the animation delta at the end of the cycle, because if I change the animation weight, the end position gets weighted too. I checked the Doom 3 source code and it looks like they store the animation delta of the full step, but maybe that is for evaluating where the animation will end up (AI and such).
    For now I assumed that I only need the position. I store the root bone position before the frame time update as an additional old_root_bone joint. Then I pass this old_root_bone along with all the other bone positions (including the root bone itself), and when I'm ready to draw I can extract the per-step difference and add it to the character position. It works. A minor thing still to figure out is whether I should take the bind pose of the root bone into account and add it somewhere. The real problem is that I have no clue what to do with the rotation. I have noticed that some animations (I 'borrowed' animations from the Batman Arkham series, as those are well suited for full character animation) rotate the root bone as they please - for example, in the running animation the character leans forward - and if I rotate the model by the difference between animations, I end up with the character moving like a hamster in a wheel.
    So after this wall of text, my questions are: 1. Am I handling character movement from the root bone correctly? 2. Is it common to ignore the rotation and handle it manually in script? Eventually I could extract only a single axis of the animation and use that, but I have no idea whether that is a proper choice. 3. What are common practices for this kind of animation? I'm attaching a short movie of how things look. The 3 buttons below are: node stand/walk/run - anim weight; node stand/rotate90/rotate180 - anim weight; cross-fade between the above 2 nodes (actually it is the weight of the turning node, but it is the same thing in this case). BatWalk0001.mp4
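    A rough C++ sketch of the position-delta idea described above (my own illustration, not code from the engine in question; loop wrap-around handling is omitted):
[code]
#include <glm/glm.hpp>

struct CharacterState
{
    glm::vec3 worldPosition;   // where the character entity actually is
    glm::vec3 prevRootPos;     // root bone position sampled last frame ("old_root_bone")
};

// 'sampledRootPos' is the blended root bone position for the current frame, already
// stripped from the skeleton so the rest of the pose stays local.
void ApplyRootMotion(CharacterState& c, const glm::vec3& sampledRootPos)
{
    const glm::vec3 delta = sampledRootPos - c.prevRootPos; // per-frame step, so blend weights
    c.worldPosition += delta;                                // only affect this frame's delta
    c.prevRootPos = sampledRootPos;
}
[/code]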
  6. Hi, what would be a good book to use as a general reference on the architecture of GPU drivers? Examples of sections I would be looking for: theoretical descriptions of how drivers might or should work; an enumeration of aspects to consider when writing GPU drivers; what a higher-level graphics programmer should definitely know about the drivers they use; ...
  7. Hi, I'm implementing a simple 3D engine based on DirectX 11. I'm trying to render a skybox with a cubemap on it, and to do so I'm using the DDS Texture Loader from the DirectXTex library. I use texassemble to generate the cubemap (a texture array of 6 textures) into a DDS file that I load at runtime. I generated a cube "dome" and sample the texture using the position vector of the vertex as the sample coordinates (so far so good), but I always get the same face of the cubemap mapped onto the sky. As I look around I always see the same face (and it wobbles a bit if I move the camera). My code:
[code]
// Texture.cpp
Texture::Texture(const wchar_t *textureFilePath, const std::string &textureType) : mType(textureType)
{
    //CreateDDSTextureFromFile(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(), textureFilePath, &mResource, &mShaderResourceView);
    CreateDDSTextureFromFileEx(Game::GetInstance()->GetDevice(), Game::GetInstance()->GetDeviceContext(),
                               textureFilePath, 0, D3D11_USAGE_DEFAULT, D3D11_BIND_SHADER_RESOURCE, 0,
                               D3D11_RESOURCE_MISC_TEXTURECUBE, false, &mResource, &mShaderResourceView);
}

// SkyBox.cpp
void SkyBox::Draw()
{
    // set cube map
    ID3D11ShaderResourceView *resource = mTexture.GetResource();
    Game::GetInstance()->GetDeviceContext()->PSSetShaderResources(0, 1, &resource);

    // set primitive topology
    Game::GetInstance()->GetDeviceContext()->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    mMesh.Bind();
    mMesh.Draw();
}

// Vertex Shader
cbuffer Transform : register(b0)
{
    float4x4 viewProjectionMatrix;
};

float4 main(inout float3 pos : POSITION) : SV_POSITION
{
    return mul(float4(pos, 1.0f), viewProjectionMatrix);
}

// Pixel Shader
SamplerState cubeSampler;
TextureCube cubeMap;

float4 main(in float3 pos : POSITION) : SV_TARGET
{
    float4 color = cubeMap.Sample(cubeSampler, pos.xyz);
    return color;
}
[/code]
    I tried both functions from the DDS loader but I keep getting the same result. All the results I found on the web are about the old SDK toolkits, but I'm using the new DirectXTex lib.
  8. Hi GameDev team, I am trying to re-implement the fluid simulation from the following paper: http://movement.stanford.edu/courses/cs448-01-spring/papers/foster.pdf. However, I am stuck with the level set method, as it generates very strange results. The water should move much faster than the level set actually propagates, so I think there is a bug in my level set propagation. I have tried for several months now, but I just can't get it right. In the image below you can see the problem. The scene is a breaking dam. While the inserted particles move quite fast, my level set seems to stick to the wall and does not propagate correctly. As the particles are moving correctly, I assume the error lies in my level set propagation: the level set sticks to the wall for a very long time, which looks really strange and not correct. My level set propagation code is below. If anybody could give me a hint on what I am doing wrong, please let me know. If you need more information, just ask and I will reply as soon as possible. Thank you for your support!
[code]
void solverFedkiw::levelSetExtensionVelocity()
{
    int xSize = 1 + newgrid->getXSize(), ySize = 1 + newgrid->getYSize(), zSize = 1 + newgrid->getZSize();
    int i, j, k;
    enum { CLOSE, FAR, ACCEPTED };
    std::multimap<float, Index> distances; //! distances of close cells, sorted due to multimap

    //! Algorithm taken from "The fast construction of extension velocities in level set methods"
    //! by Adalsteinsson and Sethian
    for (int direction = 0; direction < 3; direction++)
    {
        int dx = 0, dy = 0, dz = 0;
        if (direction == 0) dx = 1;
        else if (direction == 1) dy = 1;
        else if (direction == 2) dz = 1;

        //! Initialize all values to FAR. These will become "all other points" -> page 8
        for (i = 0; i < xSize; i++)
            for (j = 0; j < ySize; j++)
                for (k = 0; k < zSize; k++)
                    sets[i][j][k] = FAR;

        //! Build narrow band
        for (i = 1; i < xSize - 1; i++)
        {
            for (j = 1; j < ySize - 1; j++)
            {
                for (k = 1; k < zSize - 1; k++)
                {
                    if ((newgrid->getCellType(i, j, k) == grid::WALL) || (newgrid->getCellType(i, j, k) == grid::HOLDER))
                        continue;

                    //! Test whether the left neighbour is different (i.e. change of cell type)
                    if (newgrid->getCellType(i, j, k) != newgrid->getCellType(i - dx, j - dy, k - dz))
                    {
                        sets[i][j][k] = ACCEPTED;
                        newgrid->setDistance(i, j, k, 0);

                        //! All neighbours of the on-surface cells are set to CLOSE
                        if (sets[i + 1][j][k] == FAR) sets[i + 1][j][k] = CLOSE;
                        if (sets[i - 1][j][k] == FAR) sets[i - 1][j][k] = CLOSE;
                        if (sets[i][j + 1][k] == FAR) sets[i][j + 1][k] = CLOSE;
                        if (sets[i][j - 1][k] == FAR) sets[i][j - 1][k] = CLOSE;
                        if (sets[i][j][k + 1] == FAR) sets[i][j][k + 1] = CLOSE;
                        if (sets[i][j][k - 1] == FAR) sets[i][j][k - 1] = CLOSE;
                    }
                }
            }
        }

        //! Put the CLOSE ones into the map
        for (i = 1; i < xSize - 1; ++i)
            for (j = 1; j < ySize - 1; ++j)
                for (k = 1; k < zSize - 1; ++k)
                    if (sets[i][j][k] == CLOSE)
                    {
                        distances.insert(std::pair<float, Index>(1, Index(i, j, k)));
                        newgrid->setDistance(i, j, k, 1);
                    }

        //! Perform the marching algorithm according to page 8
        while (!distances.empty())
        {
            //! begin() gives back an iterator. The second element is the index
            //! (the first one is a float containing the distance)
            Index idx = (distances.begin())->second;

            //! Move the point into ACCEPTED and remove it from distances
            sets[idx.i][idx.j][idx.k] = ACCEPTED;
            distances.erase(distances.begin());

            if (newgrid->getCellType(idx.i, idx.j, idx.k) != grid::AIR)
                continue;

            //! Recompute the velocities according to page 13
            float totalDistance = 0, weightedSpeed = 0, distance;

            //! Test whether each neighbour is inside the fluid
            if (sets[idx.i + 1][idx.j][idx.k] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i + 1, idx.j, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i + 1, idx.j, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i + 1, idx.j, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i + 1, idx.j, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i - 1][idx.j][idx.k] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i - 1, idx.j, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i - 1, idx.j, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i - 1, idx.j, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i - 1, idx.j, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j + 1][idx.k] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i, idx.j + 1, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i, idx.j + 1, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j + 1, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j + 1, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j - 1][idx.k] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i, idx.j - 1, idx.k) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i, idx.j - 1, idx.k) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j - 1, idx.k) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j - 1, idx.k) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j][idx.k + 1] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i, idx.j, idx.k + 1) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i, idx.j, idx.k + 1) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j, idx.k + 1) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j, idx.k + 1) * distance;
                totalDistance += distance;
            }
            if (sets[idx.i][idx.j][idx.k - 1] == ACCEPTED)
            {
                distance = fabs(newgrid->getDistance(idx.i, idx.j, idx.k - 1) - newgrid->getDistance(idx.i, idx.j, idx.k));
                if (direction == 0)      weightedSpeed += newgrid->getU(idx.i, idx.j, idx.k - 1) * distance;
                else if (direction == 1) weightedSpeed += newgrid->getV(idx.i, idx.j, idx.k - 1) * distance;
                else if (direction == 2) weightedSpeed += newgrid->getW(idx.i, idx.j, idx.k - 1) * distance;
                totalDistance += distance;
            }

            if (direction == 0)      newgrid->setU(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);
            else if (direction == 1) newgrid->setV(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);
            else if (direction == 2) newgrid->setW(idx.i, idx.j, idx.k, weightedSpeed / totalDistance);

            //! Move all neighbouring points of the trial point that are FAR into CLOSE
            float dist = newgrid->getDistance(idx.i, idx.j, idx.k);
            if (newgrid->getCellType(idx.i + 1, idx.j, idx.k) != grid::WALL && newgrid->getCellType(idx.i + 1, idx.j, idx.k) != grid::HOLDER && sets[idx.i + 1][idx.j][idx.k] == FAR)
            {
                newgrid->setDistance(idx.i + 1, idx.j, idx.k, dist + 1);
                sets[idx.i + 1][idx.j][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i + 1, idx.j, idx.k)));
            }
            if (newgrid->getCellType(idx.i - 1, idx.j, idx.k) != grid::WALL && newgrid->getCellType(idx.i - 1, idx.j, idx.k) != grid::HOLDER && sets[idx.i - 1][idx.j][idx.k] == FAR)
            {
                newgrid->setDistance(idx.i - 1, idx.j, idx.k, dist + 1);
                sets[idx.i - 1][idx.j][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i - 1, idx.j, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j + 1, idx.k) != grid::WALL && newgrid->getCellType(idx.i, idx.j + 1, idx.k) != grid::HOLDER && sets[idx.i][idx.j + 1][idx.k] == FAR)
            {
                newgrid->setDistance(idx.i, idx.j + 1, idx.k, dist + 1);
                sets[idx.i][idx.j + 1][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j + 1, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j - 1, idx.k) != grid::WALL && newgrid->getCellType(idx.i, idx.j - 1, idx.k) != grid::HOLDER && sets[idx.i][idx.j - 1][idx.k] == FAR)
            {
                newgrid->setDistance(idx.i, idx.j - 1, idx.k, dist + 1);
                sets[idx.i][idx.j - 1][idx.k] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j - 1, idx.k)));
            }
            if (newgrid->getCellType(idx.i, idx.j, idx.k + 1) != grid::WALL && newgrid->getCellType(idx.i, idx.j, idx.k + 1) != grid::HOLDER && sets[idx.i][idx.j][idx.k + 1] == FAR)
            {
                newgrid->setDistance(idx.i, idx.j, idx.k + 1, dist + 1);
                sets[idx.i][idx.j][idx.k + 1] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j, idx.k + 1)));
            }
            if (newgrid->getCellType(idx.i, idx.j, idx.k - 1) != grid::WALL && newgrid->getCellType(idx.i, idx.j, idx.k - 1) != grid::HOLDER && sets[idx.i][idx.j][idx.k - 1] == FAR)
            {
                newgrid->setDistance(idx.i, idx.j, idx.k - 1, dist + 1);
                sets[idx.i][idx.j][idx.k - 1] = CLOSE;
                distances.insert(std::pair<float, Index>(dist + 1, Index(idx.i, idx.j, idx.k - 1)));
            }
        }
    }
}
[/code]
  9. Hi all, how do I implement this type of effect, and what is this effect called? Is it considered volumetric lighting? What are the options for doing this? a. A billboard? But I want it to have a 3D effect so that when we rotate the camera we still get that 3D feel. b. A transparent 3D mesh that we can also animate? I need your expert advice. Additionally: 2. How do I implement things like a fireball projectile (shot from a monster) - a billboarded texture or a 3D mesh? Note: I'm using OpenGL ES 2.0 on mobile. Thanks!
  10. Hi everyone, I have a quadtree terrain system which uses camera distance to decide when a node should split into 4 children (or merge the children). It's basically 'split if the distance between the camera and the centre of the node is less than a constant * the node's edge length'. This has worked OK so far, but now I've changed my algorithm to have many more vertices per node. This means that I want to get a lot closer to a node before deciding to split - maybe something like 0.1 * edgeLength. I can no longer just check my distance from the centre, as that won't work when I approach the edges of the node. I thought I could check the distance from each vertex in each node and take the smallest, but that seems like overkill (I'd be looping over every vertex in my terrain). So my question is: what's a more elegant way to decide whether to split/merge my quadtree nodes? Thanks for the help!
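    One common alternative to the per-vertex minimum mentioned above is the distance from the camera to the node's bounding box, which is cheap and drops to zero when the camera is over or inside the node. A C++ sketch (my own illustration; names are made up):
[code]
#include <algorithm>
#include <cmath>

struct AABB { float minX, minY, minZ, maxX, maxY, maxZ; };

float DistanceToAABB(float px, float py, float pz, const AABB& b)
{
    // Clamp the point to the box, then measure the distance to the clamped point.
    const float cx = std::clamp(px, b.minX, b.maxX);
    const float cy = std::clamp(py, b.minY, b.maxY);
    const float cz = std::clamp(pz, b.minZ, b.maxZ);
    const float dx = px - cx, dy = py - cy, dz = pz - cz;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

bool ShouldSplit(float camX, float camY, float camZ, const AABB& nodeBounds, float edgeLength)
{
    return DistanceToAABB(camX, camY, camZ, nodeBounds) < 0.1f * edgeLength; // same constant as in the post
}
[/code]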
  11. The animation initially works for the first few steps, then the texture gradually gets distorted and sucked into the pinch of the mesh. The code:
[code]
float currU = 0.01f * localTime;
newhorizonTex.y = tex0.y + currU;
newhorizonTex.x = tex0.x - currU;
[/code]
    Thanks, Jack
  12. 3D HLSL Minimap!

    I am working on a shader-driven minimap for a game. I have almost completed this task, but I am stuck on one part: I need to make sure that when I track something on the minimap and it goes beyond the extent of my viewport (the perimeter of the minimap), it sticks to the edge of the viewport and does not continue past it. This is relatively straightforward for a circular minimap; however, mine is a rectangle. I am not sure how to work out the math on this one. Any ideas?
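    A sketch of the "stick to the rectangle edge" math in plain C++ (my own illustration of the idea, not HLSL from the shader in question): scale the offset from the minimap centre uniformly so it just touches the border while keeping its direction.
[code]
#include <algorithm>
#include <cmath>

struct Float2 { float x, y; };

Float2 ClampToRectEdge(Float2 offset, float halfWidth, float halfHeight)
{
    // Per-axis scale factors that bring the offset back onto the border along its own direction.
    const float sx = (std::fabs(offset.x) > halfWidth)  ? halfWidth  / std::fabs(offset.x) : 1.0f;
    const float sy = (std::fabs(offset.y) > halfHeight) ? halfHeight / std::fabs(offset.y) : 1.0f;
    const float s  = std::min(sx, sy);   // shrink just enough to satisfy both axes
    return { offset.x * s, offset.y * s };
}
[/code]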
  13. Hi, I'm trying to implement the radiosity normal mapping algorithm from the Valve paper. I've made a shader, baked some light maps, and got the effect to work more or less: it showed some surfaces both lightmapped and normal mapped, and it looked kind of nice. (The old demo is currently offline, but you could see the results there.) I wanted to revisit the tangent stuff that made the sphere render janky, when I noticed that this never quite worked as expected. It's noticeable when I use a flat (128, 128, 255) normal map and the lights seem to "diverge". I tried to visualize the math by mocking the light maps with 3 dot products between a directional light and the basis vectors. The N dot L on the yellow normal never seems to match the result of sampling and weighting from the basis directions. The first image, where the light is completely grazing the surface, illustrates well where it fails: the other two basis vectors (green and blue) get a clamped 0, but 1/3 of the contribution comes from the red direction, where there is some value. Baking a regular light map, this surface would be black. This cyan distribution plot looks just like the banding shape I get if I have a light close to a surface. A light cylinder intersecting a wall, for example, wouldn't shine uniformly in all directions, but along these three dominant ones. The results in the paper's screenshots look much better. My wild guesses: either the weighting math needs to be adjusted somehow (maybe by scaling the normal), or the lights need to have negative values, or something along those lines. Is there a way to render a regular light map and non-normal-mapped geometry the same way as radiosity normal mapping with a flat (128, 128, 255) normal map? ^ Actually, this plot is without the clamp on the basis/normal dot products.
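    For reference, the three tangent-space basis directions from the Valve paper and one common form of the lightmap reconstruction, sketched in C++ (my own illustration; variants exist that square the weights and/or skip the normalisation, and all names here are made up):
[code]
#include <glm/glm.hpp>
#include <cmath>

static const glm::vec3 kBasis[3] = {
    {  std::sqrt(2.0f / 3.0f),  0.0f,                   1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f),  1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
    { -1.0f / std::sqrt(6.0f), -1.0f / std::sqrt(2.0f), 1.0f / std::sqrt(3.0f) },
};

// n is the tangent-space normal from the normal map; lm[0..2] are the three baked
// lightmaps sampled at this texel.
glm::vec3 ReconstructDiffuse(const glm::vec3& n, const glm::vec3 lm[3])
{
    glm::vec3 result(0.0f);
    float weightSum = 0.0f;
    for (int i = 0; i < 3; ++i)
    {
        const float w = glm::max(glm::dot(n, kBasis[i]), 0.0f); // clamped projection onto basis i
        result += lm[i] * w;
        weightSum += w;
    }
    return (weightSum > 0.0f) ? result / weightSum : glm::vec3(0.0f);
}
[/code]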
  14. For my simple path tracer written in C I need a function which calculates a random diffuse reflection vector (for a diffuse / Lambert material). This is my function header: struct Ray diffuse(const struct Sphere s, const struct Point hitPoint), where s is the intersected sphere and hitPoint is the point where a ray intersected the sphere. Of course there are several projects with source code available on GitHub, but most of the time they do far more than I can understand at once. I don't want to just copy their code (I want to understand it myself), and I did not get any useful results from Google. I don't know where to start.
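    One standard way to do this (not the only one) is cosine-weighted hemisphere sampling around the surface normal at the hit point. A C-style sketch (compiles as C++; the Vec3 helpers are made up, adapt them to your own Point/Ray/Sphere structs):
[code]
#include <math.h>
#include <stdlib.h>

struct Vec3 { float x, y, z; };
static const float kPi = 3.14159265358979f;

static float frand(void) { return (float)rand() / (float)RAND_MAX; }

static Vec3 v_scale(Vec3 v, float s) { Vec3 r = { v.x * s, v.y * s, v.z * s }; return r; }
static Vec3 v_add3(Vec3 a, Vec3 b, Vec3 c) { Vec3 r = { a.x + b.x + c.x, a.y + b.y + c.y, a.z + b.z + c.z }; return r; }
static Vec3 v_cross(Vec3 a, Vec3 b) { Vec3 r = { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x }; return r; }
static Vec3 v_norm(Vec3 v) { float l = sqrtf(v.x * v.x + v.y * v.y + v.z * v.z); return v_scale(v, 1.0f / l); }

// n must be the unit surface normal at the hit point (for a sphere: (hitPoint - centre) / radius).
Vec3 sample_diffuse_direction(Vec3 n)
{
    /* 1. Sample a cosine-weighted direction in a local frame where the normal is +Z. */
    float u1 = frand(), u2 = frand();
    float r = sqrtf(u1);
    float phi = 2.0f * kPi * u2;
    float lx = r * cosf(phi), ly = r * sinf(phi), lz = sqrtf(1.0f - u1);

    /* 2. Build an orthonormal basis (t, b, n) around the normal. */
    Vec3 up = (fabsf(n.x) > 0.9f) ? Vec3{ 0.0f, 1.0f, 0.0f } : Vec3{ 1.0f, 0.0f, 0.0f };
    Vec3 t = v_norm(v_cross(up, n));
    Vec3 b = v_cross(n, t);

    /* 3. Transform the local direction into world space. */
    return v_add3(v_scale(t, lx), v_scale(b, ly), v_scale(n, lz));
}
[/code]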
  15. Something goes wrong when I load an FBX model using Assimp: some meshes end up in an unusual place. This is the result in my renderer (the face is detached), but in other programs (3ds Max, the Assimp viewer, etc.) the face is attached correctly (this is the Assimp viewer image). I followed the vertex reading while debugging, and the x-coordinate of every vertex in the face mesh had 200 added to it (this is the value returned in aiMesh.mVertices). My import settings are:
[code]
m_pScene = aiImportFile(fileName.c_str(),
    aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
    aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
    aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
    aiProcess_RemoveRedundantMaterials | // remove redundant materials
    aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
    aiProcess_TransformUVCoords        | // pre-process UV transformations (scaling, translation ...)
    //aiProcess_FindInstances          | // search for instanced meshes and remove them by references to one master
    aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
    aiProcess_OptimizeMeshes           | // join small meshes, if possible
    //aiProcess_PreTransformVertices   |
    aiProcess_GenSmoothNormals         | // generate smooth normal vectors if not existing
    aiProcess_SplitLargeMeshes         | // split large, unrenderable meshes into sub-meshes
    aiProcess_Triangulate              | // triangulate polygons with more than 3 edges
    aiProcess_ConvertToLeftHanded      | // convert everything to D3D left handed space
    aiProcess_SortByPType);              // make 'clean' meshes which consist of a single type of primitive
[/code]
    Note that if the aiProcess_PreTransformVertices flag is used, the model is rendered perfectly! But this model has animation, so I can't use that flag. One more thing worth mentioning: this model's face mesh is made up of quad polygons (not triangles). Could this be a problem? Please help me with this. If you want any other code, I'll gladly provide all of it. Thanks!
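    For context on why aiProcess_PreTransformVertices changes the result: that flag bakes each node's transform chain into the vertices. Without it, the per-node transforms have to be applied at load or draw time, roughly like the sketch below (my own illustration using the standard Assimp node API, not the poster's code):
[code]
#include <assimp/scene.h>

// Recursively visit the node hierarchy, accumulating each node's local transform.
// Each aiNode references meshes by index into scene->mMeshes; those meshes are meant
// to be placed using the accumulated 'global' matrix (or skinned via their bones).
void VisitNode(const aiScene* scene, const aiNode* node, const aiMatrix4x4& parentGlobal)
{
    const aiMatrix4x4 global = parentGlobal * node->mTransformation;

    for (unsigned int m = 0; m < node->mNumMeshes; ++m)
    {
        const aiMesh* mesh = scene->mMeshes[node->mMeshes[m]];
        // ... record 'global' as the placement matrix for 'mesh' ...
        (void)mesh;
    }
    for (unsigned int c = 0; c < node->mNumChildren; ++c)
        VisitNode(scene, node->mChildren[c], global);
}
[/code]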
  16. 3D Megascans

    Hello, how do Megascans assets work? Are they very good because of their texture size, their polygon count? I'm a bit lost here. Thanks for the help!
  17. Hello. I have already posted a thread about myself and what I am trying to accomplish here. TL;DR: I am a 12-year game industry artist moving into graphics programming, starting with HLSL shader development. I started with a basic Phong shader and received a ton of help on these forums. Now I continue my journey and am starting a new thread with general questions. I am using several books and online resources in my process, but sometimes the community is a great way to learn as well. Here we go! Question #1: I understand that there are global variables and local variables. If you want to use a declared global variable in a function (like a vertex shader), you need to pass it in; I usually create a struct that I want to use in the vertex shader for this reason. However, I have noticed that some variables (like worldViewProjection : WORLDVIEWPROJECTION) can be declared and used in a function without being passed into it. Why is that? What am I missing here?
  18. I want to calculate the position of the camera, but I always get a vector of zeros.
[code]
D3DXMATRIX viewMat;
pDev->GetTransform(D3DTS_VIEW, &viewMat);
D3DXMatrixInverse(&viewMat, NULL, &viewMat);
D3DXVECTOR3 camPos(viewMat._41, viewMat._42, viewMat._43);
log->Write(L"Camera Position: %f %f %f\n", camPos.x, camPos.y, camPos.z);
[/code]
    Could anyone please shed some light on this? Thanks, Jack
  19. I'm following Rastertek terrain tutorial 14 (http://rastertek.com/tertut14.html). The problem is that slope-based texturing doesn't work in my application. There are plenty of slopes in my terrain, but none of them get the slope colour.
[code]
float4 PSMAIN(DS_OUTPUT Input) : SV_Target
{
    float4 grassColor;
    float4 slopeColor;
    float4 rockColor;
    float slope;
    float blendAmount;
    float4 textureColor;

    grassColor = txTerGrassy.Sample(SSTerrain, Input.TextureCoords);
    slopeColor = txTerMossRocky.Sample(SSTerrain, Input.TextureCoords);
    rockColor = txTerRocky.Sample(SSTerrain, Input.TextureCoords);

    // Calculate the slope of this point.
    slope = (1.0f - Input.LSNormal.y);

    if (slope < 0.2)
    {
        blendAmount = slope / 0.2f;
        textureColor = lerp(grassColor, slopeColor, blendAmount);
    }
    if ((slope < 0.7) && (slope >= 0.2f))
    {
        blendAmount = (slope - 0.2f) * (1.0f / (0.7f - 0.2f));
        textureColor = lerp(slopeColor, rockColor, blendAmount);
    }
    if (slope >= 0.7)
    {
        textureColor = rockColor;
    }

    return float4(textureColor.rgb, 1);
}
[/code]
    Can anyone help me? Thanks.
  20. Hi, we are having problems with our current mirror reflection implementation. At the moment we do it very simply: for the i-th frame, we calculate the reflection vectors given the viewpoint and some predefined points on the mirror surface (position and normal). Then, using a least-squares algorithm, we find the point that has the minimum distance from all these reflection vectors; this becomes our virtual viewpoint (with the right orientation). After that, we render offscreen to a texture with the OpenGL camera placed at the virtual viewpoint, and finally we apply the rendered texture to the mirror surface. So far this has always been fine, but now we have much stronger accuracy constraints. What are our best options given that: - the scene is dynamic; the mirror and parts of the scene can change continuously from frame to frame - we have about 3k points (with normals) per mirror, calculated offline using a CAD program (such as Catia) - all the mirrors are always perfectly spherical (with different radii vertically and horizontally) and always convex - a scene can have up to 10 mirrors - it should be fast enough for VR (HTC Vive) on the fastest GPUs (desktop only). Looking around, some papers talk about deriving a caustic surface offline, but I don't know whether that suits my case. Another paper used acceleration structures to find the intersections between the reflection vectors and the scene and then adjust the corresponding texture coordinates; this looks the most accurate, but also very heavy computationally. Other than that, I couldn't find anything up to date or exhaustive. Can you help me? Thanks in advance.
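    A sketch of one of the reflection rays fed into the least-squares step described above (my own C++ illustration with GLM; names are made up): reflect the eye-to-sample direction about the sample's normal.
[code]
#include <glm/glm.hpp>

struct Ray { glm::vec3 origin; glm::vec3 dir; };

Ray ReflectionRay(const glm::vec3& viewPoint,
                  const glm::vec3& mirrorPoint,   // sample position on the mirror
                  const glm::vec3& mirrorNormal)  // unit normal at that sample
{
    const glm::vec3 incident = glm::normalize(mirrorPoint - viewPoint);
    // glm::reflect(i, n) = i - 2 * dot(n, i) * n
    return { mirrorPoint, glm::reflect(incident, mirrorNormal) };
}
[/code]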
  21. How do you handle situations where the physics engine produces a 129x129 height field, which results in 2M vertices every frame? The frame rate suffers a lot and drops to 2-3 fps. Is there a way to reduce the traffic or the burden on the graphics card? Do the techniques of LOD and progressive meshes still apply? Thanks, Jack
  22. Hey guys! Three questions about uniform buffers: 1) Is there a benefit to Vulkan's and DirectX's per-stage binding of constant/uniform buffers? In these APIs, and NOT in OpenGL, you must set which shader stage is going to use each buffer. Why is this? To allow more slots? 2) I'm building a wrapper over these graphics APIs and was wondering how to handle passing parameters. I currently use my own JSON format to describe material and shader formats, in which I can describe which shaders get which uniform buffers. I was thinking of supporting ShaderLab (Unity's shader format) instead, as this would let people jump over easily enough and ease the learning curve. But as far as I can tell, ShaderLab does not support multiple uniform buffers at all, let alone describe which parameters go where. So to fix this, I was just going to bind all uniform buffers to all shaders. Is that a big problem? 3) Do you have any references on how to organize material uniform buffers? I may be optimizing too early, but I've seen people say what a toll this can take.
  23. I've just started learning about tessellation from Frank Luna's DX11 book. I'm getting some very weird behavior when I try to render a tessellated quad patch if I also render a mesh in the same frame. The tessellated quad patch renders just fine if it's the only thing I'm rendering (pictured below). However, when I attempt to render the same tessellated quad patch along with the other entities in the scene (which are simple triangle lists), I get the following error. I have no idea why this is happening, and Google searches have given me no leads at all. I use the following code to render the tessellated quad patch:
[code]
ID3D11DeviceContext* dc = GetGFXDeviceContext();

dc->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
dc->IASetInputLayout(ShaderManager::GetInstance()->m_JQuadTess->m_InputLayout);

float blendFactors[] = { 0.0f, 0.0f, 0.0f, 0.0f }; // only used with D3D11_BLEND_BLEND_FACTOR
dc->RSSetState(m_rasterizerStates[RSWIREFRAME]);
dc->OMSetBlendState(m_blendStates[BSNOBLEND], blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(m_depthStencilStates[DSDEFAULT], 0);

ID3DX11EffectTechnique* activeTech = ShaderManager::GetInstance()->m_JQuadTess->Tech;

D3DX11_TECHNIQUE_DESC techDesc;
activeTech->GetDesc(&techDesc);
for (unsigned int p = 0; p < techDesc.Passes; p++)
{
    TerrainVisual* terrainVisual = (TerrainVisual*)entity->m_VisualComponent;

    UINT stride = sizeof(TerrainVertex);
    UINT offset = 0;
    GetGFXDeviceContext()->IASetVertexBuffers(0, 1, &terrainVisual->m_VB, &stride, &offset);

    Vector3 eyePos = Vector3(cam->m_position);
    Matrix rotation = Matrix::CreateFromYawPitchRoll(entity->m_rotationEuler.x, entity->m_rotationEuler.y, entity->m_rotationEuler.z);
    Matrix model = rotation * Matrix::CreateTranslation(entity->m_position);
    Matrix view = cam->GetLookAtMatrix();
    Matrix MVP = model * view * m_ProjectionMatrix;

    ShaderManager::GetInstance()->m_JQuadTess->SetEyePosW(eyePos);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorld(model);
    ShaderManager::GetInstance()->m_JQuadTess->SetWorldViewProj(MVP);

    activeTech->GetPassByIndex(p)->Apply(0, GetGFXDeviceContext());
    GetGFXDeviceContext()->Draw(4, 0);
}

dc->RSSetState(0);
dc->OMSetBlendState(0, blendFactors, 0xffffffff);
dc->OMSetDepthStencilState(0, 0);
[/code]
    I draw my scene by looping through the list of entities and calling the associated draw method depending on the entity's "visual type":
[code]
for (unsigned int i = 0; i < scene->GetEntityList()->size(); i++)
{
    Entity* entity = scene->GetEntityList()->at(i);

    if (entity->m_VisualComponent->m_visualType == VisualType::MESH)
        DrawMeshEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::BILLBOARD)
        DrawBillboardEntity(entity, cam, sun, point);
    else if (entity->m_VisualComponent->m_visualType == VisualType::TERRAIN)
        DrawTerrainEntity(entity, cam);
}

HR(m_swapChain->Present(0, 0));
[/code]
    Any help/advice would be much appreciated!
  24. I have just started writing shaders recently (a couple of weeks ago; background info in another post here). I decided to start with a basic Phong shader. I understand (I think) the implementation of the diffuse and specular reflections: diffuse reflection = (N·L); specular reflection = (R·V)^s, where s is the specular power; reflection vector R = 2·(N·L)·N − L. I have a shader put together for FX Composer 2.5 that is an attempt at the Phong lighting model (shader code linked). I have not added the texture sample to the pixel shader output yet, since I am trying to debug lighting model issues. The diffuse reflection works fine; however, the specular reflection is giving me some trouble. The problem I am experiencing is that the specular reflection (shown here) seems to affect the back of the sphere. When I saturate the specular reflection value I get this (if saturate clamps the input to min = 0 and max = 1, I do not know where these values on the back are coming from, since negative values should just get clamped to 0). If I remove the saturate from the specular reflection and add the diffuse reflection to the specular reflection, I get this (I will post an image of my diffuse reflection below to show how it looks on its own). Diffuse reflection by itself. What am I missing here? Also, I am completely open to feedback on my shader; I am looking to learn anything I can here. Thank you in advance for your time. phongUpload
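    The Phong terms from the post written out in C++ with the clamps that are usually applied (my own illustration, not the FX Composer shader in question); N, L, V are unit vectors for the surface normal, the direction to the light, and the direction to the viewer.
[code]
#include <glm/glm.hpp>
#include <cmath>

float PhongDiffuse(const glm::vec3& N, const glm::vec3& L)
{
    return glm::max(glm::dot(N, L), 0.0f);                      // diffuse = max(N.L, 0)
}

float PhongSpecular(const glm::vec3& N, const glm::vec3& L, const glm::vec3& V, float specPower)
{
    const glm::vec3 R = 2.0f * glm::dot(N, L) * N - L;           // reflection vector R = 2(N.L)N - L
    float spec = std::pow(glm::max(glm::dot(R, V), 0.0f), specPower); // specular = max(R.V, 0)^s
    // Many implementations additionally zero the specular term wherever N.L <= 0,
    // so highlights only appear on faces that actually receive light.
    return (glm::dot(N, L) > 0.0f) ? spec : 0.0f;
}
[/code]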
  25. Hi there, this is my first time posting, but I have been a long-time lurker in this community. I am currently developing a 3D game engine using a deferred renderer and OpenGL. I have successfully implemented recursive portals (and mirrors) in my engine, using the stencil buffer to mask out regions of the screen. This solution is very favorable, as I am able to have dozens of separate views drawn at once without needing multiple G-buffers for each individual (sub)view. I also benefit from being able to perform post-processing effects over all views, only needing to apply them over what is visible (one pass per section, with stencil masking so there is no risk of overdraw). Presently I am pondering ways of dealing with in-game camera displays (for an example, think of the monitors from Half-Life 2). In the past I've handled these by rendering from the camera's perspective onto a separate render target and then, in the final shading pass, applying it as a texture. However, I was greatly disappointed with the performance and the inability to combine it with post-processing effects (at least in the way I presently do with portals). Another concern is that I wish to have scenes containing several unique camera screens at once (such as a security CCTV room), without needing to worry about the VRAM usage of several G-buffers. I wanted to ask the more experienced members of this community whether it would be possible to handle these in a similar fashion to my portals, with the difference that the view is transformed so it takes on the appearance of a flat 2D surface. Would anybody with a more comprehensive understanding of matrix maths be able to tell me whether this idea is feasible, and if so, suggest a possible solution? I hope all this makes sense. Any insight would be greatly appreciated! Thanks!