
QQemka

Member
  • Content count: 70
  • Joined

  • Last visited

Community Reputation: 250 Neutral

About QQemka

  • Rank
    Member

Personal Information

  • Interests
    Programming
  1. Hello. I am coding a small thing in my spare time. All I want to achieve is to load a heightmap (as the lowest possible walking terrain), some static meshes (elements of the environment), and a dynamic character (meaning I can move, collide with the heightmap/static meshes, and hold a varying item in a hand). I have a bunch of questions, or rather problems I can't solve myself. Nearly all deal with graphics/GPU, not the coding part; my C++ is at a high enough level. Let's go:

     Heightmap - I obviously want it textured; the size is hardcoded to 256x256 squares. I can't stretch one huge texture over the entire terrain, because every pixel would be enormous. That's why I decided to use two dedicated textures: first a tileset of 16 square tiles (u/v ranging from 0 to 0.25 for the first tile, and so on), and second a 256x256 buffer holding a 0-15 value, the index of the tile from the tileset, for every heightmap square. The problem is: how do I blend the edges nicely and make some computationally cheap changes so it is not obvious there are only 16 tiles? Is it possible to generate such terrain with some existing program?

     Collisions - I want to use a bounding sphere and an AABB. But should I store them per model or per entity instance? Say I have 20 identical trees spawned from the same tree model, but every entity has its own transformation (position, scale, etc.). Storing a collision component per instance grants faster access and is precalculated and pre-transformed (it takes additional memory, but who cares?), so I should stick with that, right? What should I do if an object is dynamically rotated? The AABB is no longer aligned, and recalculating the per-vertex min/max every time the object rotates/scales is pretty expensive, right? (See the sketch after this post.)

     Drawing AABBs - a problem similar to the above (storing AABB data per instance or per model). This time, in my opinion, per model is enough, since an instance also does not own a vertex buffer but uses the shared one (so 20 trees share a reference to one tree model). So rendering an AABB is just taking the model's AABB, transforming it with the instance matrix, and voila. What about the AABB vertex buffer (this is more of a cosmetic question, just curious; I bumped into it while writing this)? Is it better to make it 8 points plus an index buffer (12 lines), or only 2 vertices with the min/max x/y/z and have the shaders dynamically generate the 6 other vertices and draw the box? Or should there just be ONE 1x1x1 cube template, moved/scaled per entity?

     What if one model has a diffuse texture and a normal map, and another has only a diffuse texture? Should I pass some bool flag to the shader with that info, or just assume that my game supports only diffuse maps, without the fancy stuff?

     There were several more questions, but I forgot/solved them while writing. Thanks in advance
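A minimal sketch (using DirectXMath; the names are mine) of the cheap alternative to the per-vertex min/max pass worried about above: transform the 8 corners of the model-space AABB by the instance's world matrix and re-fit an axis-aligned box around them. The result can be looser than a tight per-vertex box after rotation, but it costs only 8 vector transforms per update:

```cpp
#include <DirectXMath.h>
#include <cfloat>
using namespace DirectX;

struct AABB { XMFLOAT3 min, max; };

// Re-fit an axis-aligned box around the 8 transformed corners of a
// model-space AABB. Much cheaper than re-scanning every vertex when an
// instance rotates or scales, at the price of a slightly looser box.
AABB TransformAABB(const AABB& local, FXMMATRIX world)
{
    XMVECTOR mn = XMVectorReplicate(FLT_MAX);
    XMVECTOR mx = XMVectorReplicate(-FLT_MAX);
    for (int i = 0; i < 8; ++i)
    {
        XMVECTOR corner = XMVectorSet(
            (i & 1) ? local.max.x : local.min.x,
            (i & 2) ? local.max.y : local.min.y,
            (i & 4) ? local.max.z : local.min.z, 1.0f);
        corner = XMVector3Transform(corner, world); // rotation, scale, translation
        mn = XMVectorMin(mn, corner);
        mx = XMVectorMax(mx, corner);
    }
    AABB out;
    XMStoreFloat3(&out.min, mn);
    XMStoreFloat3(&out.max, mx);
    return out;
}
```

The same approach fits the per-instance question: keep one AABB on the shared model and one cached world-space AABB per instance, refreshed only when that instance's transform changes.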
  2. Looking for a coder to work with

    What kind of game are you trying to create? I'm a coder.
  3. Hello. I've got a problem with shadow mapping. For some reason, the buildings are not casting shadows properly. The red, blue, and green lines are the positive X, Y, and Z axes respectively. The white line is the direction of the directional light (its end is at XYZ 60,30,60, which is used to construct the view matrix).

     This is the effect I am getting: http://i.imgur.com/mDZqfCy.png
     This is where I think the shadows should also appear (red circles): http://i.imgur.com/7n7Hb8G.png
     This is my scene when I use the light's view/projection instead of the camera's: http://i.imgur.com/2eRSodj.png

     The shaders are from this tutorial:
     https://learnopengl.com/code_viewer.php?code=advanced-lighting/shadow_mapping&type=vertex
     https://learnopengl.com/code_viewer.php?code=advanced-lighting/shadow_mapping&type=fragment

     Why are the shadows incomplete near the house edges? On the other hand, when I use a perspective projection matrix instead of ortho for the light, I get no shadows at all (even though rendering the scene from the light's point of view gives me a nice view of the city).
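For reference, a minimal sketch of how the linked tutorial builds the light-space matrix, with the post's 60,30,60 endpoint plugged in. The focus point and ortho extents below are assumptions: they must enclose every shadow caster, or the geometry is clipped out of the depth map and casts no shadow, and a directional light needs this orthographic projection rather than a perspective one:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Light-space matrix for a directional light, as in the linked tutorial.
// The eye point is taken from the post; extents and focus are assumptions.
glm::mat4 lightView = glm::lookAt(
    glm::vec3(60.0f, 30.0f, 60.0f),  // point back along the light direction
    glm::vec3(0.0f),                 // assumed scene focus
    glm::vec3(0.0f, 1.0f, 0.0f));
glm::mat4 lightProjection = glm::ortho(
    -100.0f, 100.0f,                 // left/right (assumed)
    -100.0f, 100.0f,                 // bottom/top (assumed)
       1.0f, 200.0f);                // near/far   (assumed)
glm::mat4 lightSpaceMatrix = lightProjection * lightView;
```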
  4. Hello. I am struggling with moving to deferred shading with framebuffers. I have forward rendering working perfectly. I am following this tutorial: http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html My code does the same thing, but the textures (position, texcoord, normal, diffuse) are black, as if nothing gets rendered (though I know the draw calls are executed). The uniforms are correct, the textures too; I have extensive error checking in all modules.

     Pieces of code:
     framebuffer.hpp https://pastebin.com/292A92zt
     framebuffer.cpp https://pastebin.com/DdJC1xDF
     function responsible for rendering https://pastebin.com/t1p6Ft11
     deferred shader https://pastebin.com/LV8Fg50u
     forward shader https://pastebin.com/nAMvZpfi

     What may be wrong? Thanks in advance

     Edit: Being a master of copypasta, I had for some reason omitted the glBindFramebuffer(GL_FRAMEBUFFER, fbo_); call. Now everything works just fine... :D
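For anyone hitting the same black-G-buffer symptom, the geometry pass with the missing call restored looks roughly like this (fbo_ and the four attachments come from the post; the rest is a sketch, assuming a GLEW-style loader):

```cpp
#include <GL/glew.h> // or whichever GL loader the project already uses

// Geometry pass: draws land in the G-buffer textures only while the FBO is
// bound - the glBindFramebuffer call below is the one that was missing.
glBindFramebuffer(GL_FRAMEBUFFER, fbo_);
const GLenum attachments[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1,
                               GL_COLOR_ATTACHMENT2, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(4, attachments); // position, texcoord, normal, diffuse
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... issue the geometry-pass draw calls here ...
glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer
```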
  5. Mipmap management

    Configured correctly? All I do is load a DDS with pregenerated mipmaps, and the whole magic happens automatically in the background. And that's exactly what I'm worried about: I don't have manual control over it. Am I missing something?
  6. Hello. I generate a DDS texture with mipmaps (2^n x 2^n down to 1 x 1), compress it, and use it without problems. But I couldn't find information on how to actually use the higher/lower quality mipmaps. Do I need to somehow explicitly say which mip I want, or does the GPU somehow know which one is most suitable?
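The GPU picks the mip level automatically from the texture's screen-space footprint, and the sampler state is where that choice can be steered (in HLSL, SampleLevel additionally lets a shader force an explicit mip). A hedged D3D11 sketch; device is assumed to be a valid ID3D11Device*:

```cpp
#include <d3d11.h>

// Trilinear sampler: the GPU still selects mip levels automatically, but the
// fields below clamp or bias that selection.
D3D11_SAMPLER_DESC sd = {};
sd.Filter     = D3D11_FILTER_MIN_MAG_MIP_LINEAR; // blend the two nearest mips
sd.AddressU   = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV   = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW   = D3D11_TEXTURE_ADDRESS_WRAP;
sd.MipLODBias = 0.0f;              // > 0 biases toward smaller (blurrier) mips
sd.MinLOD     = 0.0f;              // lower clamp on the computed mip (0 = most detailed)
sd.MaxLOD     = D3D11_FLOAT32_MAX; // upper clamp (FLOAT32_MAX = allow all mips)

ID3D11SamplerState* sampler = nullptr;
device->CreateSamplerState(&sd, &sampler);
```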
  7. DX11 Create Dx Texture

    What is the problem exactly? What additional features does your texture have that prevent you from converting it to any other known format?
  8. And I checked on paper, with proper transposing, and it still turns out that the "good" order is wrong according to the math. It turns out that mul(a,b) computes b*a and not a*b...
  9. Well, I know a lot of linear algebra - matrices, transpositions, multiplication order, etc. I suspect HLSL does something implicit that I don't know about...
  10. I am currently refactoring the code and I bumped into the shader once again. Something is definitely wrong. I read the mul documentation and the row/column matrix/vector rules, and in my opinion the math in my shader is wrong, yet somehow it renders properly...

     Inside my shader file I have:

```hlsl
cbuffer cbPerFrameViewProjection : register(b1)
{
    column_major float4x4 PV;
};
cbuffer cbPerObjectWorldMatrix : register(b0)
{
    column_major float4x4 World;
};
```

     where PV is set once at the beginning of a frame, and World per object. The multiplication inside the vertex shader:

```hlsl
float4x4 wvp = mul(World, PV);
output.Pos = mul(wvp, inPos);
```

     I set the constant buffers like this - for the camera:

```cpp
&XMMatrixTranspose(cam.GetViewMatrix() * cam.GetProjectionMatrix())
```

     and for every object:

```cpp
&XMMatrixTranspose(world.GetTransform())
```

     Now the math: C++ stores matrices as row-major, HLSL as column-major. For the camera, ((view row * projection row) transposed) equals (((projection column * view column) transposed) transposed), which is (projection column * view column). The world row matrix transposed equals the world column matrix. So both constant buffers are fed column matrices. Then, inside the shader, to compute the transposed WVP (which in column order is PVW) I should do mul(PV, World) on the constant buffer data, NOT mul(World, PV) as I do now - and yet the second (in my opinion wrong) order is the one that works. Why is that?

     And a second problem. Why is this correct:

```hlsl
output.Pos = mul(inPos, wvp);
```

     and this wrong?

```hlsl
output.Pos = mul(wvp, inPos);
```

     HLSL uses column matrices, and as far as I know from theory, to transform a vector you multiply matrix * vector.

     Could someone explain what is going on here? What am I missing in this matrix mess? Thanks in advance
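As a sanity check on the derivation above, a small self-contained DirectXMath snippet confirming the identity it relies on, transpose(A*B) == transpose(B)*transpose(A):

```cpp
#include <DirectXMath.h>
#include <cstdio>
using namespace DirectX;

int main()
{
    // Two arbitrary transforms standing in for World and PV.
    XMMATRIX a = XMMatrixRotationY(0.5f);
    XMMATRIX b = XMMatrixTranslation(1.0f, 2.0f, 3.0f);

    // transpose(A * B) should equal transpose(B) * transpose(A).
    XMMATRIX lhs = XMMatrixTranspose(a * b);
    XMMATRIX rhs = XMMatrixTranspose(b) * XMMatrixTranspose(a);

    XMVECTOR eps = XMVectorReplicate(1e-5f);
    bool equal = true;
    for (int i = 0; i < 4; ++i)
        equal = equal && XMVector4NearEqual(lhs.r[i], rhs.r[i], eps);
    printf("transpose identity holds: %s\n", equal ? "yes" : "no");
    return 0;
}
```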
  11. Edit: I decided to post this in the general section.
  12. Object Referencing

    1. I am talking about good practices; you can do literally everything with C++. 2. Can you elaborate on the new/delete?
  13. Hello. As a total beginner in graphics programming who started playing around with the GPU 3 weeks ago (using D3D11), I have stumbled upon many different problems. I would like to list them all here and explain what my concerns are :)

     - Should I go for an LH or RH coordinate system? Most modeling programs use RH, yet the D3D tutorials use LH. The only difference is the Z axis direction (which has some further consequences for UVs, normals, etc.), so isn't it best to stick to RH?

     - How do I make light not go through objects? Is that all just about shadows?

     - What data should a mesh material have? Looking at different formats and the amount of material parameters (I'm using .obj right now), I'm getting really confused. And most important: should transparency be a property of one material or of the entire mesh?

     - Second, what should a mesh have? I've read about subsets, materials, subsets having multiple materials, and all that mess. Should my implementation (for rendering) split the mesh into subsets and deal with multiple materials, or should I make groups of vertices which use the same material, or should I separate transparent and opaque subsets? (Proper transparency rendering is something I totally can't wrap my head around.)

     - Transparency. Should it be a property of the whole mesh, of a single material, or of either, with the user deciding at runtime what to render as transparent and what as opaque? Is the assumption that every mesh will be opaque right?

     I am kinda stuck because of these problems, especially "what data should mesh and material have". I've changed the implementations several times and I can't find the proper representation.

     Thanks in advance
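One possible shape for the "what data should mesh and material have" question, purely as a sketch (all names hypothetical, and the design choices are one option among several): transparency lives on the material, and the mesh is split into submeshes so that each draw call uses exactly one material, with transparent submeshes sorted last:

```cpp
#include <cstdint>
#include <string>
#include <vector>

// One material per submesh; transparency is a material property.
struct Material
{
    std::string diffuseMap;     // always present in this design
    std::string normalMap;      // empty => no normal mapping for this material
    float       opacity = 1.0f; // < 1 marks the material as transparent
};

// A submesh is a range in the mesh's shared index buffer plus one material.
struct SubMesh
{
    uint32_t firstIndex  = 0;
    uint32_t indexCount  = 0;
    uint32_t materialIdx = 0;
};

struct Mesh
{
    std::vector<Material> materials;
    std::vector<SubMesh>  subMeshes; // kept sorted opaque-first, so transparent
                                     // geometry is drawn last
};
```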
  14. DX11 Cannot Create A Constant Buffer

    Yes, but that's only for GPU buffers. If you need your own structure for your own (non-GPU) processing, then obviously you do not have to do that... :)
  15. DX11 Cannot Create A Constant Buffer

    It's not a DirectX problem, it's C in general. Just read up on "structure alignment" (it's very easy) and padding.

    For example, a struct holding a 4x4 matrix and an xyz vector needs one additional padding float. The 4x4 float matrix takes 64 bytes and is placed at offset 0. Next, you pack the 3-float vector at offset 64, taking 12 bytes. The structure is then 76 bytes in size, so you add one float (or an int, or a 4-char array - anything 4 bytes in size) and magically the structure grows to 80 bytes :) It is 16-byte aligned, and the buffer works. (See the sketch below.)
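A minimal illustration of that example, with a static_assert guarding the 16-byte rule (struct and field names are hypothetical):

```cpp
#include <DirectXMath.h>

// The example above: a 4x4 matrix (64 bytes) followed by a 3-float vector
// (12 bytes) gives 76 bytes, so one explicit padding float rounds it to 80.
struct PerObjectCB
{
    DirectX::XMFLOAT4X4 world;    // offset 0,  64 bytes
    DirectX::XMFLOAT3   lightDir; // offset 64, 12 bytes
    float               pad;      // offset 76, 4 bytes of explicit padding
};

static_assert(sizeof(PerObjectCB) % 16 == 0,
              "constant buffer structs must be a multiple of 16 bytes");
```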