psykr

Member
  • Content Count: 955
  • Joined
  • Last visited
  • Community Reputation: 295 Neutral

About psykr
  • Rank: Advanced Member
  1. I'm thinking about picking up an Emotiv EPOC headset to play with. I figure that even if it's not useful as an input device, I could get some interesting EEG-type data about myself.

     1) Has anyone here ordered one, or know anybody who's received one from the company? The site claims 10 weeks to ship, which is totally ludicrous and means I may not be interested anymore by the time I get one.
     2) Am I right in understanding that something like 4 inputs can be detected at once, out of a set of 13? That would mean I have to hang on to my keyboard...
     3) Some forum threads I've read imply that the fastest possible response time is 0.25 seconds; does anyone know about that? That would make it nearly impossible to use as a primary input device, since a 0.25 s lag on, say, mouse movement would make it very hard to connect cause and effect.
     4) Since the device measures biopotentials, muscle movements are also detected. If I train with the EPOC until I have a decent level of proficiency, will I start making faces every time I think "jump" or "pull"?

     I'm imagining that since we can get access to the raw data from the device, there's some possibility I could apply a machine learning algorithm and do more than the stock SDK allows. Or that I have amazingly clear brain pulses that will interact better with the device.
  2. If you do have naming warts, make sure they're consistent. One way to do this: non-class structs are typically plain old data, so once a struct gains overloaded operators, treat it like a class and give it the C- prefix.
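     A minimal sketch of that convention (the type names here are purely illustrative):

        // Plain old data: no prefix, it's just a bag of values.
        struct Color { float r, g, b, a; };

        // Once a struct grows overloaded operators it behaves like a class,
        // so it gets the same C- prefix the classes use.
        struct CVector
        {
            float x, y, z;

            CVector operator+( const CVector& rhs ) const
            {
                CVector v = { x + rhs.x, y + rhs.y, z + rhs.z };
                return v;
            }
        };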
  3. I'm looking to link non-traditional profiling data (memory bandwidth, GPU utilization) with more typical data, such as CPU utilization and recent function execution. For example, I have a way to read hardware counters that say "the GPU executed this many instructions", and I would like to loosely relate that to "this rendering function just spit out 300M polys, so they're probably what's being processed". I've looked at existing profiling tools like VTune and gprof, but there doesn't seem to be a way to extend them with this kind of data. I may just have to write my own library and pipe the output to some third-party visualization tool, but I would like to avoid that if possible. Does anyone know of any similar projects? Or even keywords for what this approach is called?
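     A rough sketch of the glue this would need, assuming a scoped sampler; ReadGpuInstructionCount() is a hypothetical stand-in for whatever hardware-counter read is available, and since the GPU runs asynchronously the attribution is necessarily loose:

        #include <chrono>
        #include <cstdint>
        #include <cstdio>

        // Hypothetical: replace with the real hardware-counter read.
        uint64_t ReadGpuInstructionCount() { return 0; }

        // Samples wall time and the GPU counter at scope entry/exit, so the
        // counter delta can be loosely attributed to the enclosing function.
        struct ScopedSample
        {
            const char* label;
            std::chrono::steady_clock::time_point t0;
            uint64_t gpu0;

            explicit ScopedSample( const char* l )
                : label( l ),
                  t0( std::chrono::steady_clock::now() ),
                  gpu0( ReadGpuInstructionCount() ) {}

            ~ScopedSample()
            {
                auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                    std::chrono::steady_clock::now() - t0 ).count();
                std::printf( "%s: %lld us, %llu GPU instructions\n", label,
                             (long long)us,
                             (unsigned long long)( ReadGpuInstructionCount() - gpu0 ) );
            }
        };

        void RenderTerrain()
        {
            ScopedSample s( "RenderTerrain" );
            // ... issue draw calls ...
        }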
  4. psykr

    Poetic Justice!

    Ok.
  5. psykr

    Wii 3D -- Must See!

    Actually, I think you could just buy the Wiimote ($39.99; I was looking at it before). One of the first things people did when the Wii came out was get it working on the PC, so drivers are probably available somewhere. Also, I believe the guy from the OP's post (who is actually allowed to release his research) releases most of his software for download. For anyone interested, here's what I found during a 5-minute search: Wiimote Hardware Details, from the WiiLi wiki. Kalman filtering [Gamasutra] with the Wiimote's accelerometer, though I have no idea how useful it actually is. I don't think the head tracking thing is that useful; the video shows him jumping around to actually get the camera to move, which demonstrates the pretty obvious limitation that you have to be facing the Wiimote. Maybe an array of Wiimotes in a circle around you... but you would still have to face the TV. Maybe an array of TVs in a circle around you...
  6. Just out of curiosity, what hardware/driver combination were you using?
  7. Can you tell what's wrong with your screenshot? For example, are the textures reversed, shifted, etc.? If you can't tell, maybe posting the grass texture will be helpful.
  8. All the data caching is done at once, and then the shader setup/draw calls are done sequentially, so I'm not too worried about race conditions. If it doesn't impact performance to have lots of vertex buffers locked, I'll try to keep multiple buffers locked for as long as I can during the caching step. (Aside from not being able to render, will it have any other impact on performance?) Rather than having the standard pointers to data for memcpy(), I have callbacks that fill a given memory region with vertex data in a specified format. I'm not sure about handling dynamic data, since the draw calls need to occur before the buffers can be locked with DISCARD (if I'm using the buffers multiple times per frame).
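     For reference, a minimal sketch of the usual D3D9 dynamic-buffer pattern the DISCARD remark refers to; vb, the sizes, and the fill callback are illustrative:

        void* p = 0;

        // First write of the frame: DISCARD orphans the old contents, so the
        // driver can hand back fresh memory while the GPU reads the old copy.
        if ( SUCCEEDED( vb->Lock( 0, 0, &p, D3DLOCK_DISCARD ) ) )
        {
            fillVertices( p, bufferBytes );   // hypothetical fill callback
            vb->Unlock();
        }

        // Subsequent appends in the same frame: NOOVERWRITE promises not to
        // touch data the GPU may still be reading, so no stall and no copy.
        if ( SUCCEEDED( vb->Lock( offsetBytes, appendBytes, &p, D3DLOCK_NOOVERWRITE ) ) )
        {
            fillVertices( p, appendBytes );
            vb->Unlock();
        }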
  9. psykr

    flexible camera fails HELP!

    It just does the last few steps for you, basically this part:

        // Build the view matrix:
        float x = -D3DXVec3Dot( &right, &pos );
        float y = -D3DXVec3Dot( &up,    &pos );
        float z = -D3DXVec3Dot( &look,  &pos );

        (*V)(0,0) = right.x;  (*V)(0,1) = up.x;  (*V)(0,2) = look.x;  (*V)(0,3) = 0.0f;
        (*V)(1,0) = right.y;  (*V)(1,1) = up.y;  (*V)(1,2) = look.y;  (*V)(1,3) = 0.0f;
        (*V)(2,0) = right.z;  (*V)(2,1) = up.z;  (*V)(2,2) = look.z;  (*V)(2,3) = 0.0f;
        (*V)(3,0) = x;        (*V)(3,1) = y;     (*V)(3,2) = z;       (*V)(3,3) = 1.0f;

    If you have a full camera class done already, then you do need more than D3DXMatrixLookAtLH can handle.
  10. psykr

    D3DDEVICE Vertex Type

    Oh, yeah. I'm leaving out all the supporting code, but this is the general flow:

    Render the first model:
      - set the vertex declaration
      - call DrawPrimitive

    Render the second model:
      - set another vertex declaration
      - call DrawPrimitive
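    In code form, a sketch of that flow (the declarations, buffers, strides, and counts are placeholders):

        // First model
        device->SetVertexDeclaration( declA );
        device->SetStreamSource( 0, vbA, 0, strideA );
        device->DrawPrimitive( D3DPT_TRIANGLELIST, 0, triCountA );

        // Second model, with its own declaration
        device->SetVertexDeclaration( declB );
        device->SetStreamSource( 0, vbB, 0, strideB );
        device->DrawPrimitive( D3DPT_TRIANGLELIST, 0, triCountB );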
  11. psykr

    flexible camera fails HELP!

    What do you mean that it's flexible? D3DXMatrixLookAtLH() should work just fine. Maybe you're passing the wrong vectors in? Basically, if your basis vectors (here the right, up, look vectors) don't have lengths of 1, some very odd things can start happening to your scene. Some of the math used in the graphics pipeline depends on them being normalized.
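     A sketch of re-normalizing the basis before building the matrix, following the same left-handed construction D3DXMatrixLookAtLH documents (right = up x look, up = look x right):

        D3DXVec3Normalize( &look, &look );
        D3DXVec3Cross( &right, &up, &look );    // rebuild right so the basis is orthogonal
        D3DXVec3Normalize( &right, &right );
        D3DXVec3Cross( &up, &look, &right );    // recompute up from the other two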
  12. My questions are targeted at Direct3D 9, although notes on any differences with D3D10/OpenGL VBOs are appreciated. I am trying to implement a vertex buffer memory allocation scheme based around Heap Layers, if that changes things somehow. Also, I haven't thought about instancing/index buffers, so insights on integrating those features would be helpful as well. I'm planning to make I/O asynchronous and multithreaded and all that, so this is how I think vertex buffers will be used:

      1) Lock a vertex buffer.
      2) Pass the mapped memory region to whoever needs it.
      3) After some amount of time, the memory is written to.
      4) The vertex buffer is unlocked (probably after the render thread dequeues the request).

      Now, the questions (correctness and performance info is appreciated):

      1) Can I lock multiple vertex buffers at once? (Both static and dynamic VBs.)
      2) It's possible to lock only part of a VB. Can I lock the same VB multiple times? What if I limit the locks to non-overlapping regions?
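      For question 2, the partial-lock mechanics look like this; vb and the writer callback are illustrative, and whether overlapping or nested locks are legal is exactly what needs verifying against the runtime:

        void* p = 0;
        // OffsetToLock and SizeToLock are in bytes; passing 0, 0 locks the whole buffer.
        if ( SUCCEEDED( vb->Lock( regionOffsetBytes, regionSizeBytes, &p, 0 ) ) )
        {
            writeVertexData( p, regionSizeBytes );   // hypothetical fill callback
            vb->Unlock();
        }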
  13. I'm having a little trouble getting started on a raytracer. It is supposed to match the following OpenGL pseudo-code:

      // variables used
      struct {
          float fov;
          float z_near, z_far;
          vec3f position, lookat, up;
      } camera;
      float wnd_width, wnd_height;

      // use this projection matrix
      gluPerspective( camera.fov, wnd_width / wnd_height, camera.z_near, camera.z_far );

      // use this view matrix
      gluLookAt( camera.position, camera.lookat, camera.up );

      Here is what I have set up so far:

      // based on gluLookAt
      // we are creating a new basis uvw for the camera
      Matrix LookAtLH( position, lookat, up )
      {
          vec3f w = lookat - position;
          vec3f v = up;
          vec3f u = cross( w, v );

          Matrix mat = identity();
          mat.col(0) = u;
          mat.col(1) = v;
          mat.col(2) = w;

          Matrix translate = identity();
          mat.col(3) = position;

          return translate * mat;
      }

      RayTrace::CalculatePixel( x, y )
      {
          // generate the view matrix from camera data
          Matrix view_matrix = LookAtLH( camera );

          // calculate the direction of the pixel in camera space:
          // use the FOV angle to get the ratio of screen height to distance from camera
          float z = (wnd_height / 2) / tan( (camera.fov / 2) in radians );

          // in view space, the camera is at the origin
          vec3f cam( 0, 0, 0 );
          // we just need a direction to look for pixel intersections
          vec3f pixel( x - wnd_width / 2, y - wnd_height / 2, z );

          // transform with the inverse of the view matrix into world space
          cam *= inverse( view_matrix );
          pixel *= inverse( view_matrix );

          // make a ray to check for collisions,
          // starting from cam and moving towards pixel
          ray r( cam, pixel - cam );
      }

      It works in one or two test scenes, but for most of them it misses the scene completely. Is there anything that I'm clearly doing wrong here?
  14. psykr

    Profiling Threads

    Be careful when saying that 100% CPU usage is acceptable. As a basic example, consider users on a notebook PC: if you use all the CPU, it will eat up the battery. And what happens when a background task starts up? Your game drops to 50%, and if it needs 100% to run acceptably, it will become choppy at seemingly random intervals. Sleep(0) is also not a good way to give up processing time; it only yields to threads of equal priority, which causes priority problems. IIRC there is a function to explicitly yield your timeslice on Win32, but I can't say for sure.
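    The Win32 call the post is most likely reaching for is SwitchToThread(); unlike Sleep(0), it yields the rest of the timeslice to any ready thread regardless of priority:

        #include <windows.h>

        // SwitchToThread() returns nonzero if another thread actually got the slice.
        if ( !SwitchToThread() )
        {
            Sleep( 1 );   // nothing else was ready; give up the timeslice and idle
        }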
  15. psykr

    D3D9 multihead

    Ugh, I did way too much research on multi-head devices. What I think is happening is this: on the first monitor, everything renders fine and dandy. On the second monitor, the window is driven by what is essentially a different device (as far as Direct3D is concerned), so the back buffer has to be copied over to that "second" head, which makes rendering much slower there. Multi-head scenarios under D3D9 are only supported under certain conditions; if your setup qualifies, that's your best bet, but the usage scenarios are very limited. Check the DXSDK article(s) on multi-head for details.
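    For reference, the supported path is creating one device across the adapter group; a sketch, with error handling omitted and window and present-parameter setup assumed:

        D3DCAPS9 caps;
        d3d9->GetDeviceCaps( masterAdapter, D3DDEVTYPE_HAL, &caps );
        // caps.NumberOfAdaptersInGroup = number of heads this device must drive.

        // One D3DPRESENT_PARAMETERS per head, master head first.
        D3DPRESENT_PARAMETERS pp[2] = { /* ... */ };

        IDirect3DDevice9* device = 0;
        d3d9->CreateDevice( masterAdapter, D3DDEVTYPE_HAL, hFocusWnd,
                            D3DCREATE_HARDWARE_VERTEXPROCESSING |
                            D3DCREATE_ADAPTERGROUP_DEVICE,
                            pp, &device );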