
break_r

Member
  • Content Count

    88
  • Joined

  • Last visited

Community Reputation

122 Neutral

About break_r

  • Rank
    Member
  1. I build my DLLs (i.e. graphics, audio, input, etc.) using C++/CLI. You get the best of both worlds in a seamless manner. I then consume these .NET assemblies from C#. It feels almost like scripting. You must remember that in .NET, language is secondary - it's all about the framework.
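    As a rough sketch of the shape this takes (every name here is made up for illustration, compiled with /clr): a C++/CLI assembly just declares public ref classes that hold native objects behind the scenes, and C# sees them as ordinary .NET types.

    ------------------------------------------------------------
    // IllustrativeWrapper.h - built with /clr into a .NET assembly.
    // NativeRenderer stands in for whatever unmanaged code the DLL wraps.
    class NativeRenderer
    {
    public:
        void DrawFrame() { /* native D3D calls would live here */ }
    };

    public ref class Renderer   // visible from C# as a normal .NET class
    {
    public:
        Renderer() : m_pImpl( new NativeRenderer() ) { }
        ~Renderer()  { this->!Renderer(); }                    // becomes IDisposable::Dispose
        !Renderer()  { delete m_pImpl; m_pImpl = nullptr; }    // finalizer safety net
        void DrawFrame() { m_pImpl->DrawFrame(); }             // forwards to the unmanaged code
    private:
        NativeRenderer* m_pImpl;   // unmanaged object owned by the managed wrapper
    };
    ------------------------------------------------------------

    From C# you then just new up a Renderer, call DrawFrame(), and Dispose it (or wrap it in a using block) - no P/Invoke and no marshalling glue to write by hand.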
  2. break_r

    problems with picking method

    link to a ray picking demo in C++/CLI
  3. Thx, I'll have another look at it. I actually posted the wrong piece of code =/ And now I don't know where my original one went. I really need some sleep...
  4. Normally, D3D will ignore redundant state changes and other misuse of the API - a pure device is not so forgiving. A pure device eliminates state processing overhead, therefore you cannot use Get* methods.

    Thx for trying to hijack my thread, but now we need to get back to my problem. =P

    Yeah, I thought it was kind of odd to adjust for the mouse x,y when it was already in client coords. No, I'm not using a pure device, and GetViewport returns 640 by 480, which is the value I've hardwired everywhere. I don't resize the window, and 640 by 480 is set for both CreateWindow and the present params back buffer width/height passed into CreateDevice. [Edited by - break_r on May 12, 2006 4:26:30 AM]
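    For anyone reading along: the only thing that makes a device "pure" is the behavior flag passed at creation time. A minimal sketch of the difference - g_pD3D, hWnd and d3dpp are assumed to already exist:

    ------------------------------------------------------------
    IDirect3DDevice9* pDevice = NULL;

    // regular HAL device: the runtime shadows state, filters redundant
    // changes, and Get* calls such as GetViewport() are available
    g_pD3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING,
                          &d3dpp, &pDevice );

    // pure device: lower runtime overhead, but most Get* methods can
    // no longer be used (and it requires hardware vertex processing)
    g_pD3D->CreateDevice( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hWnd,
                          D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_PUREDEVICE,
                          &d3dpp, &pDevice );
    ------------------------------------------------------------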
  5. Hi, Thx for the reply and I did what u suggested with the following:

    ------------------------------------------------------------
    // during init...
    D3DXMatrixOrthoLH( &g_matProj, 4, 3, 1.0f, 100.0f );
    g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &g_matProj );

    // during WM_LBUTTONDOWN...
    // extract client mouse coords
    int x = LOWORD(lParam);
    int y = HIWORD(lParam);

    // grab the viewport
    D3DVIEWPORT9 viewport;
    g_pd3dDevice->GetViewport( &viewport );

    // using two vectors to create the line u mentioned above
    D3DXVECTOR3 vOut1;
    D3DXVECTOR3 vOut2;

    // first call has z set to zero
    D3DXVECTOR3 vSource( (float)x, (float)y, 0 );
    D3DXVec3Unproject( &vOut1, &vSource, &viewport, &g_matProj, &g_matView, &g_matWorld );

    // second call has z set to one
    vSource.z = 1.0f;
    D3DXVec3Unproject( &vOut2, &vSource, &viewport, &g_matProj, &g_matView, &g_matWorld );

    // then use both vectors for intersection test
    D3DXIntersect( g_pCubeMesh, &vOut1, &vOut2, &g_bHit, NULL, NULL, NULL, NULL, NULL, NULL );
    ------------------------------------------------------------

    And I manage to get a hit response. However, the hit is slightly offset from the actual object by just a few pixels along the x-axis (screen). So, I made this small adjustment to the x coord:

    ------------------------------------------------------------
    // offset x by size of window border
    D3DXVECTOR3 vSource( (float)x + GetSystemMetrics(SM_CXSIZEFRAME), (float)y, 0 );
    ------------------------------------------------------------

    And it works perfectly. But the y coord is still off. I figured I'd add the title bar size - GetSystemMetrics(SM_CYSIZE) - to y and I'd be set. Unfortunately no matter what I do, the y coord click is always off.
  6. Hi, Does anyone have a simple demo of ray picking inside an orthographic viewport?
  7. Hi, I did what u suggested using GetSystemMetrics() to offset the title bar height and the window border. Appears to work although it's still 1 pixel off *rolls eyes* I guess I can live with that =P Thanks a lot, I needed a fresh pair of eyes to see what was right in front of me!
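    A different way to make those offsets go away entirely (not what was done in the thread, just a sketch, assuming a plain WS_OVERLAPPEDWINDOW window and a 640x480 back buffer) is to size the window so the client area itself is exactly 640x480:

    ------------------------------------------------------------
    // ask Windows how big the whole window must be so the *client* area
    // ends up exactly 640x480; then the lParam mouse coords line up with
    // the back buffer without any border/title-bar corrections
    RECT rc = { 0, 0, 640, 480 };                         // desired client size
    AdjustWindowRect( &rc, WS_OVERLAPPEDWINDOW, FALSE );  // FALSE = no menu

    HWND hWnd = CreateWindow( "MyWndClass", "Picking Test", WS_OVERLAPPEDWINDOW,
                              CW_USEDEFAULT, CW_USEDEFAULT,
                              rc.right - rc.left, rc.bottom - rc.top,
                              NULL, NULL, hInstance, NULL );
    ------------------------------------------------------------

    ("MyWndClass" and hInstance are placeholders for whatever the app already registers.)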
  8. So I cut'n paste this portion from the "Pick" sample in the DirectX SDK:

    ------------------------------------------------------------
    void Pick()
    {
        D3DXVECTOR3 vPickRayDir;
        D3DXVECTOR3 vPickRayOrig;

        if( GetCapture() )
        {
            POINT ptCursor;
            GetCursorPos( &ptCursor );
            ScreenToClient( g_hWnd, &ptCursor );

            // Compute the vector of the Pick ray in screen space
            D3DXVECTOR3 v;
            v.x =  (( 2.0f * ptCursor.x ) / 640 - 1) / g_matProj._11;
            v.y = -(( 2.0f * ptCursor.y ) / 480 - 1) / g_matProj._22;
            v.z =  1.0f;

            D3DXMATRIX mWorldView = g_matWorld * g_matView;
            D3DXMATRIX mInv;
            D3DXMatrixInverse( &mInv, NULL, &mWorldView );

            // Transform the screen space Pick ray into 3D space
            vPickRayDir.x = v.x*mInv._11 + v.y*mInv._21 + v.z*mInv._31;
            vPickRayDir.y = v.x*mInv._12 + v.y*mInv._22 + v.z*mInv._32;
            vPickRayDir.z = v.x*mInv._13 + v.y*mInv._23 + v.z*mInv._33;
            vPickRayOrig.x = mInv._41;
            vPickRayOrig.y = mInv._42;
            vPickRayOrig.z = mInv._43;

            D3DXIntersect( g_pCubeMesh, &vPickRayOrig, &vPickRayDir, &g_bHit,
                           NULL, NULL, NULL, NULL, NULL, NULL );
        }
    }
    ------------------------------------------------------------

    I don't have a camera class, just the following setup:

    ------------------------------------------------------------
    D3DXMatrixIdentity( &g_matWorld );
    g_pd3dDevice->SetTransform( D3DTS_WORLD, &g_matWorld );

    D3DXMatrixPerspectiveFovLH( &g_matProj, D3DX_PI / 4, 640.0f/480.0f, 1.0f, 10000.0f );
    g_pd3dDevice->SetTransform( D3DTS_PROJECTION, &g_matProj );

    D3DXMatrixLookAtLH( &g_matView,
                        &D3DXVECTOR3( 0.0f, 0.0f, -4.0f ),   // eye
                        &D3DXVECTOR3( 0.0f, 0.0f,  0.0f ),   // look at
                        &D3DXVECTOR3( 0.0f, 1.0f,  0.0f ) ); // up
    g_pd3dDevice->SetTransform( D3DTS_VIEW, &g_matView );
    ------------------------------------------------------------

    Every frame I render g_pCubeMesh, and I don't move the world or view matrices - I just call Pick(). Now, I must be doing something wrong because it registers a hit 4 pixels off to the right and about 12 or so pixels down. It's like the cube is slightly offset from the one that is rendered. Thx, - 3am and blurry-eyed [Edited by - break_r on May 8, 2006 10:21:35 PM]
  9. break_r

    DirectInput and Dispose

    Ask yourself what the dispose pattern is used for (deterministic destruction and cleanup of unmanaged resources) and that'll answer your question. If you're not quite sure, I suggest reading "Programming .NET Components" by Juval Löwy - an excellent book that covers the dispose pattern and much more.
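    In C++/CLI terms the pattern is just the destructor/finalizer pair; a tiny sketch (not tied to DirectInput specifically - InputDevice is a made-up stand-in for any wrapper that holds an unmanaged resource):

    ------------------------------------------------------------
    // compile with /clr
    using namespace System;

    ref class InputDevice
    {
    public:
        InputDevice()  { Console::WriteLine( "acquire unmanaged device" ); }
        ~InputDevice() { Console::WriteLine( "Dispose: released right now" ); }      // IDisposable::Dispose
        !InputDevice() { Console::WriteLine( "finalizer: released whenever the GC runs" ); }
    };

    int main()
    {
        InputDevice dev;   // stack semantics: Dispose runs at the end of this scope,
                           // which is exactly the determinism the pattern exists to give you
        return 0;
    }
    ------------------------------------------------------------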
  10. Is it possible to mix Avalon with Managed DirectX? I haven't done a lot of Avalon, so I'm wondering if someone knows for sure. Regards.
  11. break_r

    Device Reset

    I've run into InvalidCallExceptions myself. Here's what I've encountered: 1) I reset the device. No exceptions are thrown and it appears to be successful. 2) I try to use the device pointer to do something shortly after. This throws an InvalidCallException. MSDN says this exception is thrown because of a bad param in my method, but that doesn't seem to be the case.

    The solution I've found is to let Windows do a little message pump processing. For example, resetting the device usually happens during a window state or form border style change. I'll catch the InvalidCallException, let Windows do some processing by calling Application.DoEvents() (I'm using .NET), which flushes the message queue, and then go back to the previous device code, which no longer throws an exception. Not sure what the Win32 equivalent to Application.DoEvents() is, but I hope that helps.

    I recommend implementing a tracing mechanism. You'll be shocked at what Windows does, and what you think it should have done. Regards.
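    On the "Win32 equivalent" question: the closest native counterpart I know of is simply draining the thread's message queue - a rough sketch:

    ------------------------------------------------------------
    #include <windows.h>

    // rough native counterpart to Application.DoEvents(): pump whatever
    // is currently sitting in this thread's message queue, then return
    void FlushMessageQueue()
    {
        MSG msg;
        while( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) )
        {
            TranslateMessage( &msg );
            DispatchMessage( &msg );
        }
    }
    ------------------------------------------------------------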
  12. Oh, there is no STL.NET assembly. You're just adding the header, which is a self-contained definition. Instantiate the template and go... NVM! =P
  13. In an STL.NET header like <cliext/vector>, they have vector defined as:

    ------------------------------------------------------------
    template<typename _Value_t>
    ref class vector
        : public impl::vector_base<_Value_t>,
          Generic::ICollection<_Value_t>
    {
        ...
    };
    ------------------------------------------------------------

    A template on a ref class that implements a couple of generic ref interfaces. And of course, after using namespace cliext, you can go:

    ------------------------------------------------------------
    vector<int>^ v = gcnew vector<int>;
    ------------------------------------------------------------

    So how do they pull off this magic? How do they export a template from an assembly? If I create a DLL with the following:

    ------------------------------------------------------------
    template<typename T>
    public ref class Foo { };
    ------------------------------------------------------------

    it's not visible to an application using that assembly as a reference. And why would it be? Templates are C++-specific and resolved at compile time. So how do I pull off something similar to vector? Regards.
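    For completeness, a minimal sketch of what the "there is no STL.NET assembly, you're just adding the header" answer above boils down to - Foo is the same placeholder name, not anything from an actual library:

    ------------------------------------------------------------
    // Foo.h - shipped with the library instead of relying on assembly metadata,
    // just like the <cliext/...> headers; consumers #include it and the template
    // is instantiated in *their* compilation unit, at compile time
    template<typename T>
    public ref class Foo
    {
    public:
        void Set( T value ) { m_value = value; }
        T    Get()          { return m_value; }
    private:
        T m_value;
    };

    // in the consuming C++/CLI project:
    //   #include "Foo.h"
    //   Foo<int>^ f = gcnew Foo<int>;
    //   f->Set( 42 );
    ------------------------------------------------------------

    Other .NET languages still can't see the template itself, which is presumably why the cliext containers also implement the generic interfaces (Generic::ICollection<_Value_t>, etc.) - that generic surface is what crosses the assembly boundary.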