About TomKQT

  1. Well, if you have a special case where you really only need translations, then you can just add the translation vector to the position vector of your objects. A pure translation matrix contains a lot of zeros (and ones), so transforming by such a matrix leads to many unnecessary zero*something (and one*something) multiplications. If this is what you were asking. I can imagine a class responsible for managing some specific objects (which can only be translated) that doesn't deal with matrices at all, just vectors. Such a class would not have to handle layered transformations of different kinds, and layering translations is always just adding the vectors together.
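A minimal sketch of the idea above (plain C++; the names `translate` and `combine` are illustrative, not from any library): translation-only objects skip the 4x4 matrix entirely, and layering two translations is just adding the two offsets first.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Move a point by an offset - three additions, no matrix involved.
Vec3 translate(const Vec3& p, const Vec3& offset) {
    return { p.x + offset.x, p.y + offset.y, p.z + offset.z };
}

// Layering two translations: just add the offsets together.
Vec3 combine(const Vec3& a, const Vec3& b) {
    return { a.x + b.x, a.y + b.y, a.z + b.z };
}
```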
  2. Transformations in general cannot be done by a simple add or subtract. That may be true for translation, but for rotation you need some sines and cosines. It should also be mentioned (in addition to what others said before me) that although it may look like a waste of computing power to do 4x4 matrix - 4x1 vector multiplications, it's actually much better than transforming every vertex "manually" with sin/cos calculations. Don't forget that when you need to transform (rotate, translate, scale) a 3D object, you actually need to apply the same transformation to ALL of its vertices. So you create one transformation matrix (I'm talking only about the world matrix now) and then use it for every vertex. You do the expensive sin and cos calculations only a few times during matrix creation, and after that you're doing just thousands of vector-matrix multiplications, which are only multiplications and additions (much faster operations, and GPUs are optimized for exactly this).
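A sketch of that idea in plain C++ (no D3DX; `rotationY`, `transform` and the 3x3 matrix are illustrative): sin/cos are paid for once when the matrix is built, and the per-vertex cost is then only multiplies and adds.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };

// Rotation about the Y axis; sin/cos are computed exactly once.
Mat3 rotationY(float angle) {
    float s = std::sin(angle), c = std::cos(angle);
    return {{{ c, 0, s },
             { 0, 1, 0 },
             {-s, 0, c }}};
}

// Per-vertex work is now only multiplications and additions
// (row-vector convention: v' = v * M).
Vec3 transform(const Mat3& r, const Vec3& v) {
    return { v.x*r.m[0][0] + v.y*r.m[1][0] + v.z*r.m[2][0],
             v.x*r.m[0][1] + v.y*r.m[1][1] + v.z*r.m[2][1],
             v.x*r.m[0][2] + v.y*r.m[1][2] + v.z*r.m[2][2] };
}

// The same matrix is reused for every vertex of the object.
void transformAll(const Mat3& r, std::vector<Vec3>& verts) {
    for (Vec3& v : verts) v = transform(r, v);
}
```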
  3. And how does it look in the HoloLens glasses? From your question it isn't clear whether the problem appears only in Mixed Reality Capture (or whether you don't even have the device and are using only the simulation). MRC looks quite different from the real view, because it works very differently. The two biggest differences are that MRC doesn't suffer from the very narrow field of view, and that MRC uses subtractive rendering (like standard DirectX rendering on a monitor screen), while the physical device uses additive rendering (adding light to the real-world view, just like a projector). Anyway, I'm not even sure that what you want to achieve is possible with HoloLens. It will scan the lamp with its 3D scanning sensor and use this data to mask the hologram if it's behind the real object. But I'm not really sure it will be able to properly realise that the lamp is hollow and that the hologram should be inside, yet still visible from the bottom. The scanner also isn't really very accurate. Oh, and btw, if you only have the simulator and not the physical device, then this will not work at all. There is no way the simulator can know that there is a lamp in the scene which should mask (hide) the hologram.
  4. I'm a completely self-taught programmer. I started with programming on my Commodore C64 (from around the age of 10, I think), where I learnt the BASIC language and the most important programming principles like variables, branching, arrays, loops etc. And also binary numbers and binary operations (AND, OR etc.), because of how often you have to use those famous POKE and PEEK commands on the C64. My only sources of information were the C64 user manual (it was a completely different kind of manual than nowadays; it also contained a very good BASIC reference with sample code) and a book I got back then (no Internet, of course). Later, when I switched to a 286 and DOS, I stayed with BASIC (QBASIC). Then Visual Basic 6 and all those .NET versions. I started with C++ at about the same time as with DirectX 9, and it was quite late - when I was beginning my Ph.D. studies in Robotics (in 2004, at the age of 24). My sources were some books (Frank Luna) and the Internet, including the gamedev.net forums (as you can see, I registered around that time). I'm partly using programming and DirectX in my job, but I'm not strictly a programmer. It has been my hobby the whole time.
  5. TomKQT

    Problems displaying a mesh

    Or a plain black cube. I hope your background color isn't black ;) That's just a tip, to be sure (maybe you're already doing it, but we can never know) - during these initial attempts, never clear your backbuffer to white or black; use some neutral color, like a yellowish-gray etc. :)
  6. TomKQT

    shadow shader

    abdellah, you really should focus again on kaunas's post. What exactly are you trying to do - do you really have some special situation where this algorithm would work well for you? Generally, the idea is not good, because: You will be checking each vertex of all the shadow receivers (or do you really have just one?) against all vertices of all possible shadow casters (or do you really have just one?). That's not really efficient, and it's quite hard to achieve, because when you are drawing a certain vertex (you are in its vertex shader), you don't have information about the other vertices - unless you pass them to the shader somehow. Also a very important thing to keep in mind - a vertex will be in shadow only if there is some vertex of the shadow caster PRECISELY on the line between the said vertex and the point light. In fact you have just a bunch of point occluders with infinitely small size (ideal points), casting shadows along infinitely thin lines onto another set of infinitely small point receivers. What are the odds that you will ever get a single shadow?
  7. TomKQT

    Creating a Sphere from a Cube

    What exactly was your motivation for trying to create a sphere out of a cube (that's not a rhetorical question, I really wonder)? That doesn't make much sense to me, honestly. Why do you generate the vertices of a cube and then modify them (with huge problems) to form a sphere? Did you try to google how to directly generate the vertices of a sphere?
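For comparison, directly generating sphere vertices from spherical coordinates is only a few lines. This is a sketch (a UV sphere; `rings`/`segments` and the vertex layout are illustrative choices, not from the thread):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Walk the sphere in spherical coordinates: phi from pole to pole,
// theta around the equator. Every generated vertex lies exactly on
// the sphere, so no "inflate a cube" fixing is needed.
std::vector<Vec3> sphereVertices(float radius, int rings, int segments) {
    const float pi = 3.14159265358979f;
    std::vector<Vec3> verts;
    for (int i = 0; i <= rings; ++i) {
        float phi = pi * i / rings;              // 0 (north pole) .. pi (south pole)
        for (int j = 0; j < segments; ++j) {
            float theta = 2 * pi * j / segments; // full circle around Y
            verts.push_back({ radius * std::sin(phi) * std::cos(theta),
                              radius * std::cos(phi),
                              radius * std::sin(phi) * std::sin(theta) });
        }
    }
    return verts;
}
```

Index generation (triangles between neighbouring rings) follows the same grid pattern and is left out here for brevity.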
  8. Well, to be honest I'm not sure, I haven't used the Effect Framework for ages. I guess you would still need to assign the texture by:

         sampler textureSampler = sampler_state { Texture = <DiffuseTex0>; };

     I'm using the version with a fixed register number, which allows me to set the texture simply by SetTexture(register, texture).
  9. You don't need to use the structure, you can define a sampler just like this:

         sampler DiffuseTex0;

     And then use it:

         float4 textureColor = tex2D(DiffuseTex0, input.TexCoord);

     You can even say:

         sampler DiffuseTex0 : register(ps, s0);

     which will allow you to set the texture from C++ code by directly using the particular sampler register:

         device->SetTexture(0, d3dTexture);
  10. "Render a text" will most probably mean "render a vertex buffer (small quads, one for each letter) with a texture". The texture contains all the necessary characters, so you can try to look for that. Is the texture pre-made with all possible characters (can you find it somewhere in the game assets)? Is it made when the application starts, with the letters somehow drawn into it? Or is it even changed while the application runs, if the text needs new letters? (Btw, I think the last variant is what D3DXFont does.)
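The per-letter quads get their texture coordinates from the character's position in the atlas. A sketch of that lookup, assuming a simple fixed grid of glyphs starting at the space character (the grid layout and names here are illustrative, not how any particular game does it):

```cpp
#include <cassert>

struct UvRect { float u0, v0, u1, v1; };

// Map a character to its UV rectangle in a cols x rows glyph atlas.
// 'first' is the first character stored in the atlas (assumed ' ').
UvRect glyphUv(char c, int cols, int rows, char first = ' ') {
    int index = c - first;          // linear position of the glyph
    int col = index % cols;         // grid cell of this glyph
    int row = index / cols;
    float du = 1.0f / cols;         // size of one cell in UV space
    float dv = 1.0f / rows;
    return { col * du, row * dv, (col + 1) * du, (row + 1) * dv };
}
```

Each letter's quad then gets these four UVs, and the whole string is drawn in one draw call.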
  11. TomKQT

    Why is this happening?

    You think it's related to specular calculations and you're 99% sure it's not in the shaders? That means either you are using the FFP and saying that something is wrong in it, or I have no clue.
  12.   I did something very similar 5 years ago (picture) and all the magic with projection was done very simply by using D3DXMatrixPerspectiveOffCenterLH to create the projection matrix. Try to look at this D3DX function ;)
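A plain-C++ sketch of the matrix that D3DXMatrixPerspectiveOffCenterLH is documented to build (row-major, row-vector convention as in D3DX; `Mat4` and `perspectiveOffCenterLH` are illustrative names, not the D3DX API):

```cpp
#include <cassert>
#include <cmath>

struct Mat4 { float m[4][4]; };

// Off-center (asymmetric) left-handed perspective projection:
// l/r/b/t are the view-volume bounds at the near plane zn,
// zf is the far plane.
Mat4 perspectiveOffCenterLH(float l, float r, float b, float t,
                            float zn, float zf) {
    Mat4 M = {};                               // all zeros
    M.m[0][0] = 2 * zn / (r - l);
    M.m[1][1] = 2 * zn / (t - b);
    M.m[2][0] = (l + r) / (l - r);             // off-center shear terms
    M.m[2][1] = (t + b) / (b - t);
    M.m[2][2] = zf / (zf - zn);                // depth mapping zn..zf -> 0..1
    M.m[2][3] = 1.0f;                          // w' = z (perspective divide)
    M.m[3][2] = zn * zf / (zn - zf);
    return M;
}
```

For a symmetric frustum (l = -r, b = -t) the shear terms vanish and you get the ordinary centered perspective matrix; shifting l/r/b/t is what gives the "magic" off-center projection.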
  13. Maybe his algorithm works with the mesh rendered at zero depth (center of the mesh), so that some parts of it are in front of the near clipping plane? I have no idea, that's just a guess. In this case it would probably be best to modify it to work with any depth, to be able to use the 0-1 depth range (between near and far clipping planes).
  14. TomKQT

    Recalculating terrain normals

    Just a small note on this. Don't take AAA games in general as your holy grail as far as technology is concerned. Sure, there are some developers that push the boundaries further and produce extremely optimized and technically advanced games. But there are plenty of AAA games that are made in a very short time and with focus on other elements, not optimizations. For example, I think it was on this forum - somebody showed how incredibly high the number of draw calls was in an early version of Civilization V. They had a separate draw call for every single icon on the screen, including all those food/wealth/production icons on each tile (if you know the game). That probably won't be the best way, will it? And btw, I think this changed in later versions of the game.
  15. TomKQT

    Recalculating terrain normals

    But you would do it only once, on the CPU, wouldn't you? As opposed to every frame on the GPU. (If the terrain is not changing over time.) You need to send less data to the GPU, that's right. But that would probably require some profiling - whether it's better to send less data and calculate some of it on the GPU, or to send more data and save some GPU time.
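The one-time CPU recalculation for a regular heightmap terrain can be done with central differences. A sketch (the heightmap layout, edge clamping, and function names are illustrative assumptions, not the poster's code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// heights is a w*h grid of samples; cellSize is the spacing between them.
// Central differences give the slope in x and z, and (-dx, 1, -dz)
// normalized is the smooth per-vertex normal.
std::vector<Vec3> terrainNormals(const std::vector<float>& heights,
                                 int w, int h, float cellSize) {
    auto at = [&](int x, int y) {
        x = x < 0 ? 0 : (x >= w ? w - 1 : x);   // clamp at the borders
        y = y < 0 ? 0 : (y >= h ? h - 1 : y);
        return heights[y * w + x];
    };
    std::vector<Vec3> normals(w * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float dx = (at(x + 1, y) - at(x - 1, y)) / (2 * cellSize);
            float dz = (at(x, y + 1) - at(x, y - 1)) / (2 * cellSize);
            normals[y * w + x] = normalize({ -dx, 1.0f, -dz });
        }
    return normals;
}
```

Run once at load time (or after terrain edits), then upload the normals with the vertex data; that's the "more data, less GPU work" side of the trade-off above.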