perrs

Member · 51 posts · Community Reputation: 122 Neutral
  1. perrs

    Center the object

    "I have a problem with nr. 5. How do i know which Z to choose then unprojecting?" The vertex also has a Z value in screenspace (Actually, I meant projectionspace). Just keep this value as you are not interested in moving the object back or forth (correct?). " I can not use bounding sphere since the boundign shpere is of constant size. Because when i rotate the object i might be possible to zoom-in more since the rotation changed objects 2D screen shape (representation)." As I said earlier, this will place the object in the center of the screen. It will not necessary make the same amount of space on all sides on the screen. That's why I suggested the second approach.
  2. perrs

    Center the object

    One solution I know to be used is where you generate a bounding sphere for the object, then focus the camera on its center and move the camera back by the radius times some constant (depending on how big the object should be on the screen). This will place the center of the object in the center of the screen, like Erik Rufelt suggested. If, on the other hand, you don't want the object centered, but rather want to make sure that there is the same amount of space on corresponding sides, you need to work in screen space. Try the following (a rough sketch follows below):

    1. First, make a good guess using the method mentioned above.
    2. Project all vertices to screen space and isolate the topmost, bottommost, leftmost and rightmost vertex.

    To position the object correctly over the x-axis:

    3. From the leftmost and rightmost vertices, find out how far they must be moved to be placed correctly.
    4. Take the leftmost vertex (could also be the rightmost) and translate it (still in screen space) by the calculated distance.
    5. Now unproject it to find its world position.
    6. Compare this world position with the old one to find out how much the camera should be moved.

    Do steps 3-6 for the y-axis too. I haven't tried this, so I cannot promise it will work as expected, but I believe it will. Hope this helps, Per Rasmussen.
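    A rough sketch of steps 2 and 3, assuming a D3D9 setup; verticesW, numVertices and device are illustrative names, not from the post:

        D3DVIEWPORT9 vp;
        device->GetViewport( &vp );

        float minX = FLT_MAX, maxX = -FLT_MAX;
        float minY = FLT_MAX, maxY = -FLT_MAX;
        for( DWORD i = 0; i < numVertices; ++i )
        {
            // Step 2: project each world-space vertex to screen space.
            D3DXVECTOR3 screen;
            D3DXVec3Project( &screen, &verticesW[i], &vp, &matProj, &matView, &matWorld );
            minX = min( minX, screen.x );  maxX = max( maxX, screen.x );
            minY = min( minY, screen.y );  maxY = max( maxY, screen.y );
        }

        // Step 3: the screen-space offset that would center the bounds over the x-axis.
        float dx = ( (float)vp.Width - ( minX + maxX ) ) * 0.5f;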
  3. It sounds like you got the right idea. Quote: "I am also trying to do point lighting at this stage but the point lights seem to either never be occluded or are never drawn at all (depending on, at this point, fairly random changes to settings)." Make sure that you use the same depth buffer as you did when rendering the geometry. Also, make sure that depth buffer writing is disabled during rendering of the lights, and that depth testing is disabled during rendering of the directional light. Now, for rendering of the point lights:

         if (distance from camera to point light > point light radius + near clip plane)
             normal culling and enable depth testing;    // camera is outside the light volume
         else
             reverse culling and disable depth testing;  // camera is inside, or the near plane clips the volume

     Hope this helps!
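     As a concrete illustration, here is a hedged D3D9 sketch of that branch; device, lightPosW, cameraPosW, lightRadius and nearPlane are assumed names, and the light volume geometry is assumed to use D3D's default clockwise winding:

         // Never write depth while rendering the light volumes.
         device->SetRenderState( D3DRS_ZWRITEENABLE, FALSE );

         D3DXVECTOR3 toLight = lightPosW - cameraPosW;
         float d = D3DXVec3Length( &toLight );
         if( d > lightRadius + nearPlane )
         {
             // Camera safely outside the volume: render front faces with depth testing.
             device->SetRenderState( D3DRS_CULLMODE, D3DCULL_CCW );
             device->SetRenderState( D3DRS_ZENABLE, D3DZB_TRUE );
         }
         else
         {
             // Camera inside (or the near plane clips the volume): back faces, no depth test.
             device->SetRenderState( D3DRS_CULLMODE, D3DCULL_CW );
             device->SetRenderState( D3DRS_ZENABLE, D3DZB_FALSE );
         }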
  4. Interesting. But just to clarify: the epsilon you are talking about is used in all kinds of shadow mapping, not just PCF. Correct? Question: how did you arrive at the formula d = length(LightPosW - VertexPosW); epsilon = 1 / (d*d - 2*d)? On a side note, another way to avoid the problem altogether is to reverse your polygon culling (e.g. from counter-clockwise to clockwise) when rendering your shadow map, as sketched below. This makes sure that you only render the back side of objects, which will be shadowed anyway due to the normal (at least if you are using Phong or similar). This should yield proper results in most "normal" cases. It does, of course, require that your objects are closed, but that is a requirement for many other things anyway. :) Best regards, Per Rasmussen
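     A minimal sketch of that culling flip in D3D9 terms; device and RenderShadowCasters are assumed names:

         // Render only back faces into the shadow map: any acne then lands on faces
         // that point away from the light and are shadowed by their normal anyway.
         device->SetRenderState( D3DRS_CULLMODE, D3DCULL_CW );   // cull front (clockwise) faces
         RenderShadowCasters();                                  // depth-only pass into the shadow map
         device->SetRenderState( D3DRS_CULLMODE, D3DCULL_CCW );  // restore default culling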
  5. perrs

    Spajders - new game

    Yeah, it is pretty fun. Good gameplay idea and a nice implementation. One thing, though: when the "game over" text appears, the frame rate drops to around 1-2 fps. Just wanted to let you know.
  6. ignotion: I've been having the exact same thoughts as you recently. However, I'm not sure you need as many shaders as you may think. To take your example: if you have a shader that can handle 1 omni, 1 directional and 1 spot light, you don't need a shader that can handle just 1 omni; you can simply turn off the two other lights (see the sketch below). I realize that this might not be the best solution performance-wise, but as long as we are talking about simple combinations of primitive lights, I do not believe it is a problem. You can save a lot of shaders at very little cost, as these shaders will not be the bottlenecks anyway. You may still need quite a few shaders, but instead of 64 per special effect (4 omni * 4 directional * 4 spot = 64) you might do with four or five. However, if you plan to start using shadow mapping (or something similar), it is probably a good idea to do a render pass per light from the beginning. Perhaps a hybrid method could be a good idea: one render pass with all the "simple" lights, and after that one render pass per remaining light. Best regards, Per Rasmussen.
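     A hedged sketch of "turning off" the unused lights with a D3DX effect; the effect pointer, the omniColor handle array, MAX_OMNIS and numActiveOmnis are assumed names:

         // One "fat" shader handles up to MAX_OMNIS omni lights; unused slots are
         // disabled by giving them zero intensity, so a single shader covers every
         // combination below the maximum.
         D3DXVECTOR4 black( 0.0f, 0.0f, 0.0f, 0.0f );
         for( int i = numActiveOmnis; i < MAX_OMNIS; ++i )
             effect->SetVector( omniColor[i], &black );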
  7. I am considering whether to use XNA or MDX for my next project, but I am having a hard time deciding.

     Reasons to use XNA:
     -- XNA seems to have better integration with the .NET way of doing things; on several occasions it uses generics to ease certain tasks.
     -- XNA is newer and cleaner, and I suspect it is going to be used a lot in the future.
     -- XNA has Xbox 360 support (this isn't really that important to me, though).

     Reasons to use Managed DirectX:
     -- The XNA Framework seems to be missing a lot of the functionality of DirectX.
     -- XNA doesn't work in Visual Studio 2005, meaning for example that I can't use the built-in shader debugger Visual Studio offers.
     -- XNA still seems to be a bit immature (even for a Beta 2).

     Anyway, these are just some of my views. I would like to hear what you guys think.
  8. I am assuming that you are talking about moving a 3D model around in a 3D environment viewed from arbitrary angles. Actually, this is more of a 3D graphics question, but what the heck... To do this properly you first must calculate a ray in 3D space from your mouse coordinates. This can be a bit tricky, but you can find an example of it in the "Pick" sample of the MS DirectX SDK. It is in C++, though, but it is very easy to translate. The code for generating the ray is this:

         const D3DXMATRIX* pmatProj = g_Camera.GetProjMatrix();
         POINT ptCursor;
         GetCursorPos( &ptCursor );
         ScreenToClient( DXUTGetHWND(), &ptCursor );

         // Compute the vector of the pick ray in screen space
         D3DXVECTOR3 v;
         v.x =  ( ( ( 2.0f * ptCursor.x ) / pd3dsdBackBuffer->Width )  - 1 ) / pmatProj->_11;
         v.y = -( ( ( 2.0f * ptCursor.y ) / pd3dsdBackBuffer->Height ) - 1 ) / pmatProj->_22;
         v.z = 1.0f;

         // Get the inverse view matrix
         const D3DXMATRIX matView = *g_Camera.GetViewMatrix();
         const D3DXMATRIX matWorld = *g_Camera.GetWorldMatrix();
         D3DXMATRIX mWorldView = matWorld * matView;
         D3DXMATRIX m;
         D3DXMatrixInverse( &m, NULL, &mWorldView );

         // Transform the screen space pick ray into 3D space
         vPickRayDir.x = v.x*m._11 + v.y*m._21 + v.z*m._31;
         vPickRayDir.y = v.x*m._12 + v.y*m._22 + v.z*m._32;
         vPickRayDir.z = v.x*m._13 + v.y*m._23 + v.z*m._33;
         vPickRayOrig.x = m._41;
         vPickRayOrig.y = m._42;
         vPickRayOrig.z = m._43;

     Once you can calculate this ray, the rest is not too hard. Here is some pseudocode explaining the process:

         Vector3 mouseStartingPoint;
         Vector3 modelStartingPoint;
         Plane modelPlane;

         onMouseDown()
         {
             modelPlane = new Plane(myModel.position, camera.forward);
             Ray r = calculateMouseRay();
             modelStartingPoint = myModel.position;
             mouseStartingPoint = modelPlane.intersect(r);
         }

         onMouseMove()
         {
             if (mouseDown)
             {
                 Ray r = calculateMouseRay();
                 Vector3 mouseCurrentPoint = modelPlane.intersect(r);
                 myModel.position = modelStartingPoint + (mouseCurrentPoint - mouseStartingPoint);
             }
         }

     Here I make a plane that goes through the model and is parallel to the camera. I then find out where the mouse is pointing on this plane, and use the starting position and the current position to calculate where the model should be moved. Hope this makes any sense. If not, please say so and I will try to explain further. Best Regards, Per Rasmussen.
  9. perrs

    transparent

    Color keying works like this: you first choose a color that is your transparent color. This means that whenever the texture is rendered, it will be transparent in the areas that have this color. Note that it is only this EXACT color, meaning that if your transparent color is pure green (R=0 G=255 B=0), an almost pure green (e.g. R=1 G=255 B=0) will not be invisible but simply green. This is why Endurion warns against using JPEG, as it compresses the image in blocks and will typically "blur" the colors in a block a little. Take his advice and use PNG. If you have problems with that, you can simply use BMP (the native format of MS Paint), though those files grow quite large. So what you should do is load the texture using D3DXCreateTextureFromFileEx, as Endurion suggested, and pass the exact color you want transparent as the ColorKey argument. Note that you should set the alpha component of this color to ff (255). A sketch of the call follows below. In the ol' days there was a load-texture function that could set your color key to the color of the top-left pixel in the texture. Not the most robust way, but it made it a lot easier to quickly make something transparent. Did this answer your question?
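    A minimal sketch of that call, assuming a valid IDirect3DDevice9* named device and pure green as the transparent color; the file name is illustrative:

        LPDIRECT3DTEXTURE9 texture = NULL;
        HRESULT hr = D3DXCreateTextureFromFileEx(
            device, "sprite.png",
            D3DX_DEFAULT, D3DX_DEFAULT,       // width/height: taken from the file
            D3DX_DEFAULT,                     // full mipmap chain
            0,                                // usage
            D3DFMT_A8R8G8B8,                  // format with an alpha channel
            D3DPOOL_MANAGED,
            D3DX_DEFAULT, D3DX_DEFAULT,       // filter, mip filter
            D3DCOLOR_ARGB( 0xff, 0, 255, 0 ), // color key: pure green, alpha set to ff
            NULL, NULL, &texture );

    The keyed pixels end up with an alpha of zero, so remember to enable alpha testing or alpha blending when you draw the texture.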
  10. SiCrane and u235: I know the framework quite well. What I am looking for is something a bit more advanced and feature-rich; for example, something that can automatically "mirror" objects over a network. AP: Not a bad idea, but I don't believe remoting is suited for games.
  11. Can anyone point me to some network SDKs for .NET? What I am really looking for is some kind of library that can "mirror" (or "ghost") objects between server and client. Thanks in advance, Per Rasmussen.
  12. Adriano: I am afraid you have the terms confused a little. It is not CAllocateHierarchy that calls CreateMeshContainer and CreateFrame. These two functions are member functions that CAllocateHierarchy has to implement in order to inherit from ID3DXAllocateHierarchy, because they are pure virtual there. They are called from D3DXLoadMeshHierarchyFromX when you call that. My question was more about when, especially for CreateMeshContainer. newbie_programmer: I have read that article, and the questions I ask here are not answered in it. But thanks anyway. Could anybody who knows about this stuff please help me out?
  13. perrs

    What is the Proper Way

    Well, I don't know the answer, but I know that if it is wrong, it will not be the first time. Last time I checked (about a year ago) no errata had been published, but you could try to find out if there is one now. Best Regards, Per Rasmussen.
  14. Hi everybody. I am struggling to figure out how D3DX loads a skinned mesh. Below are my assumptions so far. Could somebody please read through them, correct any errors, and perhaps answer some of the questions I have? My assumptions so far (I have put in some numbers to make it easier to reference):

      1. An .x file can contain either a regular mesh or a hierarchy mesh. (How can you tell if it holds one or the other?)
      2. If it contains a regular mesh it can be loaded with D3DXLoadMeshFromX, and if it is a hierarchy mesh it can be loaded with D3DXLoadMeshHierarchyFromX.
      3. If you want to load a mesh with D3DXLoadMeshHierarchyFromX you have to implement subclasses of ID3DXAllocateHierarchy and D3DXFRAME. Actually, you do not HAVE to subclass the latter, but since it does not contain a combined-transform member, it can be a good thing. These two derived classes we will henceforth call AllocMeshHierarchy and FrameEx (see the sketch below).
      4. A hierarchy mesh will contain one or more frames set up in a hierarchy. E.g. Torso has the child LeftUpperArm, which has a sibling, RightUpperArm, and a child, LeftLowerArm. LeftLowerArm has no siblings but a child, LeftHand.
      5. Each frame can have a linked list of D3DXMESHCONTAINERs, which contain mesh data such as vertex data, bone weights, etc. (Can more than one of the frames contain a mesh container? And if so, how should that be interpreted, and does it even make any sense?)
      6. When you call the D3DXLoadMeshHierarchyFromX function it starts to parse the file.
      7. Each time it encounters a new frame it calls AllocMeshHierarchy::CreateFrame to have that allocate and return a new frame with the passed name. This frame can be an instance of FrameEx (our derived class).
      8. Each time a frame has a mesh container, AllocMeshHierarchy::CreateMeshContainer is called to have it allocate and return a new mesh container.
      9. Now I am a little confused again. Some examples I have seen call ConvertToIndexedBlendedMesh at the end of CreateMeshContainer to process the ID3DXSkinInfo, and others do not do this until later. Why? What is the difference?

      The rest I believe I got. Hope somebody can clarify these things for me. Thanks in advance, Per Rasmussen.
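      For reference, a minimal sketch of the two derived types from assumption 3, using the ID3DXAllocateHierarchy callback signatures from d3dx9anim.h; only the names FrameEx and AllocMeshHierarchy come from the post, and the combined-transform member is illustrative:

          // D3DXFRAME extended with the combined (world) transform mentioned in 3.
          struct FrameEx : public D3DXFRAME
          {
              D3DXMATRIX CombinedTransform;  // accumulated down the hierarchy each update
          };

          // The four pure virtuals that D3DXLoadMeshHierarchyFromX calls back into.
          class AllocMeshHierarchy : public ID3DXAllocateHierarchy
          {
          public:
              STDMETHOD(CreateFrame)(LPCSTR Name, LPD3DXFRAME* ppNewFrame);
              STDMETHOD(CreateMeshContainer)(LPCSTR Name,
                  CONST D3DXMESHDATA* pMeshData, CONST D3DXMATERIAL* pMaterials,
                  CONST D3DXEFFECTINSTANCE* pEffectInstances, DWORD NumMaterials,
                  CONST DWORD* pAdjacency, LPD3DXSKININFO pSkinInfo,
                  LPD3DXMESHCONTAINER* ppNewMeshContainer);
              STDMETHOD(DestroyFrame)(LPD3DXFRAME pFrameToFree);
              STDMETHOD(DestroyMeshContainer)(LPD3DXMESHCONTAINER pMeshContainerToFree);
          };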