
Naruto-kun

Members
  • Content count: 155
  • Community Reputation: 442 Neutral

About Naruto-kun
  • Rank: Member
  1. Yes. I don't have the source code and am having to rely on an undocumented function that I located in one of the dll export lists. To give you an idea of what also occurs: if I have the camera pointing in the same direction as the aircraft nose and I pitch the nose up, the camera pitch/bank/heading values will be the same as the aircraft's. However, if I rotate the camera 90 deg horizontally to the side, the camera pitch/bank values will be swapped.
  2. I am aware of the dangers, but they do not apply to what I am trying to achieve. That is the idea, yes: I want to know where the camera is looking relative to the aircraft nose. The purpose ranges from a 3D sound engine to helmet-mounted sights. I can get the aircraft pitch/bank/heading and make a rotation matrix out of it (inverting the signs, of course, to un-rotate as you say), and this works for locating the head position relative to the aircraft. But for some reason it isn't working for the heading rotation relative to the aircraft. Yes, it is an SR-71.
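A note on the un-rotation step mentioned above: negating the individual Euler angles does not, in general, invert the combined rotation (the axis order would also have to be reversed), whereas inverting the whole matrix does. A minimal sketch of that calculation, assuming the legacy D3DX row-vector math implied elsewhere in these posts and assuming the sim's heading/pitch/bank map onto D3DX's yaw/pitch/roll order (neither is confirmed here); all names are placeholders:

#include <d3dx9math.h>   // assuming the legacy D3DX math library implied by the other posts

// Hypothetical helper: the angle names stand in for whatever the sim reports (radians).
D3DXMATRIX CameraRelativeToPlane(float camHeading,   float camPitch,   float camBank,
                                 float planeHeading, float planePitch, float planeBank)
{
    D3DXMATRIX mCam, mPlane, mPlaneInv;
    D3DXMatrixRotationYawPitchRoll(&mCam,   camHeading,   camPitch,   camBank);
    D3DXMatrixRotationYawPitchRoll(&mPlane, planeHeading, planePitch, planeBank);

    // Un-rotate by the aircraft: invert the whole matrix, not the individual angles.
    D3DXMatrixInverse(&mPlaneInv, NULL, &mPlane);

    // Row-vector convention: world = local * parent, so local = world * inverse(parent).
    return mCam * mPlaneInv;   // camera orientation expressed in the aircraft's frame
}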
  3. I also tried the solution from a much older topic of mine (I had set this problem aside for a while due to other duties at work), but it only left me confused. https://www.gamedev.net/topic/681548-converting-world-rotation-to-body-rotation/?view=findpost&p=5308675
  4. I see I am still not quite being understood. I tried Alundra's idea (multiplying the camera rotation matrix by the inverse aircraft rotation matrix and then decomposing the resulting matrix into its Euler angles, as per this site: http://nghiaho.com/?page_id=846 ). However, this is the result I get. As you can see in this picture, the camera pitch and bank relative to the aircraft are correct (0), as is the heading (-48.83). In the next picture, the camera angle relative to the aircraft is again correct in pitch (-38.78) and in bank and heading (0). The moment I combine a pitch and heading change, however, this is what I get: bank should still be 0, but as you can see it isn't. Any suggestions?
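One plausible source of the spurious bank: the decomposition on the linked page assumes a column-vector R = Rz·Ry·Rx convention, which does not match a D3DX-style row-vector yaw-pitch-roll matrix. If the relative matrix is built as in the earlier sketch, an extraction consistent with that convention would look roughly like this (a sketch, not a drop-in fix; the D3DXMATRIX _rc fields are the standard ones):

#include <d3dx9math.h>
#include <math.h>

// Decompose a D3DX-style yaw(heading)-pitch-roll rotation matrix back into its angles.
// Only valid if the matrix was composed in that same order; a convention mismatch
// shows up exactly as bank "leaking in" once pitch and heading are combined.
void ExtractYawPitchRoll(const D3DXMATRIX& m, float& heading, float& pitch, float& bank)
{
    pitch   = asinf(-m._32);            // rotation about X
    heading = atan2f(m._31, m._33);     // rotation about Y
    bank    = atan2f(m._12, m._22);     // rotation about Z
    // Near pitch = +-90 deg the other two angles become degenerate (gimbal lock);
    // a robust version special-cases |m._32| close to 1.
}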
  5. Yes, the camera follows the cone and can be offset from it. I'm not sure I follow exactly, though. The cone has its own local transformation which affects the world transformation, as well as the camera. Basically, if I roll the cone by -25 deg, set the pitch of the cone to 0, and then rotate the camera heading (yaw) by -90 deg, the camera transform will show a heading of -90 deg, a bank angle of 0, and a pitch angle of -25 deg. If I rotate the camera heading back to 0, pitch will be the same as the cone's (0) and roll will be the same as the cone's (-25). I need to calculate the local angles of the camera relative to the cone, so that if I rotate the camera heading by -90 deg I will get a heading of -90 deg, pitch 0, bank 0, no matter what the cone's pitch/roll values are.
  6. Hi guys. I have a bit of a challenge here; it is best illustrated in the screenshots below. In this scene, the camera is linked in hierarchy to the cone. The cone has 0 rotation on it. The camera is rotated to the left (yaw motion) by 15 degrees (see the Z value in the bottom right corner). I then rotate the cone upwards (pitch motion on the X axis) by 30 degrees using the World transform. But then I switch from the World transform to the Local one and rotate the cone around the Y axis (roll motion) by -25 deg (i.e. to the left). The camera is linked to the cone and follows it through these motions. You can see the rotation values in the bottom right corner. The above illustrates a problem I am trying to solve using reverse-engineered data from a flight simulator. I have the camera world rotation angles, and the aircraft (represented by the cone) world rotation angles in pitch and heading (yaw). My end goal is to calculate the camera rotation angles relative to the local aircraft rotation angles. But I am getting thrown for a loop by the switch to local in the roll axis. Anyone have any suggestions?
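For what it's worth, the cone example above can be sanity-checked as a round trip: compose the camera's local yaw with the cone's world roll to get the camera's world matrix, then multiply by the inverse of the cone's matrix to recover the local angles. A sketch under the same D3DX assumptions as before (the modelling package's axis labelling may differ from the sim's):

#include <d3dx9math.h>

void ConeCameraRoundTrip()
{
    const float DEG = 3.14159265f / 180.0f;

    D3DXMATRIX mCone, mCamLocal, mCamWorld, mConeInv, mRecovered;
    D3DXMatrixRotationYawPitchRoll(&mCone,      0.0f,        0.0f, -25.0f * DEG); // cone: roll -25
    D3DXMatrixRotationYawPitchRoll(&mCamLocal, -90.0f * DEG, 0.0f,  0.0f);        // camera: yaw -90 relative to the cone

    mCamWorld = mCamLocal * mCone;             // what the sim would report for the camera

    D3DXMatrixInverse(&mConeInv, NULL, &mCone);
    mRecovered = mCamWorld * mConeInv;         // equals mCamLocal again: decomposing it
                                               // gives heading -90, pitch 0, bank 0
}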
  7. Perfect. Thanks a bunch.
  8. Hi guys. I am working on a directional light system where the light is co-located with the camera position. As such, the lighting returns will depend on the direction from the camera position to the surface. I want to get the eye direction to each point so I can take its dot product with the surface normal and determine how much light is reflected back. Would I simply normalize the vertex position after multiplying it by the view matrix? Thanks, JB
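On the question above: in view space the camera sits at the origin, so the view-transformed vertex position is already the camera-to-vertex direction; normalizing it (and flipping it to point back at the eye) is enough for the N·L term when the light is co-located with the camera. A CPU-side sketch of that math using D3DX calls (the function name and parameters are illustrative; in a shader it would be the equivalent mul/normalize/dot):

#include <d3dx9math.h>

// pos/nrm are the vertex position and normal in world space, viewMat is the camera's view matrix.
float HeadlightDiffuse(const D3DXVECTOR3& pos, const D3DXVECTOR3& nrm, const D3DXMATRIX& viewMat)
{
    D3DXVECTOR3 viewPos, viewNrm, toEye;
    D3DXVec3TransformCoord(&viewPos, &pos, &viewMat);   // position into view space
    D3DXVec3TransformNormal(&viewNrm, &nrm, &viewMat);  // fine for a rigid view matrix (no scaling)

    D3DXVec3Normalize(&toEye, &viewPos);
    toEye = -toEye;                                     // from the surface back toward the camera/light

    float ndotl = D3DXVec3Dot(&viewNrm, &toEye);
    return ndotl > 0.0f ? ndotl : 0.0f;
}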
  9. Thanks a bunch. Apologies for the misunderstanding.
  10. I need the camera rotation for custom 3D sound engine purposes. Since I didn't create the camera (it is created internally by the simulator, and I have to use a bit of reverse engineering to get hold of it), I need to be able to transform it so that I know where it is looking relative to the nose of the aircraft.
  11. Looking over the output results, it appears that the x and y values of the vector, after being multiplied by the inverse projection matrix, are the angles to the pixel in radians (x converted to degrees is roughly -60, which fits my horizontal FOV of 120 deg, and y is roughly 3.75 deg, which fits my vertical FOV of 7.5 deg). I'm guessing the key is in the relationship the z and w values have with my far and near clip planes?
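On the z/w relationship being guessed at here: for a standard D3DXMatrixPerspectiveFovLH matrix, _33 = zf/(zf-zn), _43 = -zn*zf/(zf-zn) and _34 = 1, so clip-space w equals view-space Z, and a [0,1] depth value can be inverted analytically. A small sketch (zn/zf stand for the near/far plane values):

// Recover view-space Z from a [0,1] depth value written by a standard
// left-handed perspective projection; zn/zf are the near/far plane distances.
float LinearizeDepth(float d, float zn, float zf)
{
    return (zn * zf) / (zf - d * (zf - zn));
}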
  12. Here is the geometry shader I am working with. The basic idea is that the X axis of the texture (depth buffer) I am sampling from will correlate with the X axis of the final image, and the distance to the pixel calculated from the depth buffer will correspond to the Y axis of the final image (in other words, a modified B-scope radar display, where the horizontal axis corresponds to the azimuth of the return and the vertical axis corresponds to the slant range).

[maxvertexcount(80)]
void GS(point GS_INPUT sprite[1], inout TriangleStream<PS_INPUT> triStream)
{
    PS_INPUT v;

    // Quad index -> texel coordinates in the 256x75 depth texture.
    uint ux = sprite[0].id % 256;
    uint uy = sprite[0].id / 256;
    float x  = float(ux) / 256.0f;
    float y  = float(uy) / 75.0f;
    float x1 = x + (1.0f / 256.0f);
    float y1 = y + (1.0f / 75.0f);

    // Depth at the four corners of the texel quad.
    float4 fdepth = float4(depth.SampleLevel(SampleType, float2(x,  y1), 0),
                           depth.SampleLevel(SampleType, float2(x,  y),  0),
                           depth.SampleLevel(SampleType, float2(x1, y1), 0),
                           depth.SampleLevel(SampleType, float2(x1, y),  0));

    // Corner positions in NDC, with the sampled depth as z.
    float4 v1 = float4((2.0f * x)  - 1.0f, 1.0f - (2.0f * y1), fdepth.x, 1.0f);
    float4 v2 = float4((2.0f * x)  - 1.0f, 1.0f - (2.0f * y),  fdepth.y, 1.0f);
    float4 v3 = float4((2.0f * x1) - 1.0f, 1.0f - (2.0f * y1), fdepth.z, 1.0f);
    float4 v4 = float4((2.0f * x1) - 1.0f, 1.0f - (2.0f * y),  fdepth.w, 1.0f);

    // Unproject with the inverse projection matrix and take the view-space distance.
    float4 m1 = mul(v1, inprj);
    float4 m2 = mul(v2, inprj);
    float4 m3 = mul(v3, inprj);
    float4 m4 = mul(v4, inprj);
    float l1 = length(m1.xyz / m1.w);
    float l2 = length(m2.xyz / m2.w);
    float l3 = length(m3.xyz / m3.w);
    float l4 = length(m4.xyz / m4.w);

    // Emit the quad: x stays at the NDC azimuth, y is driven by the computed distance.
    v.m = 0;
    v.p.x = v1.x; v.p.y = l1 - 1.0f; v.p.zw = float2(0, 1.0f); v.t = float2(x,  y1); triStream.Append(v);
    v.p.x = v2.x; v.p.y = l2 - 1.0f; v.p.zw = float2(0, 1.0f); v.t = float2(x,  y);  triStream.Append(v);
    v.p.x = v3.x; v.p.y = l3 - 1.0f; v.p.zw = float2(0, 1.0f); v.t = float2(x1, y1); triStream.Append(v);
    v.p.x = v4.x; v.p.y = l4 - 1.0f; v.p.zw = float2(0, 1.0f); v.t = float2(x1, y);  triStream.Append(v);
    triStream.RestartStrip();
}

Here is the first sample, using the top left corner of the texture source image (dimensions are 256x75). The screenshot below gives you an idea of where the camera is looking. The actual texture source image would only contain a thin strip from the center of the screenshot.

ux        0                                                                      uint
uy        0                                                                      uint
x         0.000000000                                                            float
y         0.000000000                                                            float
x1        0.003900000                                                            float
y1        0.013300000                                                            float
fdepth    x = 0.968700000, y = 0.964700000, z = 0.968600000, w = 0.964700000     float4
m1        x = -1.048700000, y = 0.063800000, z = -0.099900000, w = 1.068700000   float4
m2        x = -1.048700000, y = 0.065500000, z = -0.099900000, w = 1.064700000   float4
m3        x = -1.040500000, y = 0.063800000, z = -0.099900000, w = 1.068600000   float4
m4        x = -1.040500000, y = 0.065500000, z = -0.099900000, w = 1.064700000   float4
l1        0.987600000                                                            float
l2        0.991300000                                                            float
l3        0.980000000                                                            float
l4        0.983700000                                                            float

// Inverse projection matrix
inprj[0]  x = 1.048700000, y = 0.000000000, z = 0.000000000, w = 0.000000000     float4
inprj[1]  x = 0.000000000, y = 0.065500000, z = 0.000000000, w = 0.000000000     float4
inprj[2]  x = 0.000000000, y = 0.000000000, z = 0.000000000, w = 1.000000000     float4
inprj[3]  x = 0.000000000, y = 0.000000000, z = -0.099900000, w = 0.100000000    float4

Here is what the final output looks like. A bit confusing, to say the least. Here I have the camera even further from the target.
ux        0                                                                      uint
uy        0                                                                      uint
x         0.000000000                                                            float
y         0.000000000                                                            float
x1        0.003900000                                                            float
y1        0.013300000                                                            float
fdepth    x = 0.999300000, y = 0.999200000, z = 0.999300000, w = 0.999200000     float4
m1        x = -1.048700000, y = 0.063800000, z = -0.099900000, w = 1.099300000   float4
m2        x = -1.048700000, y = 0.065500000, z = -0.099900000, w = 1.099200000   float4
m3        x = -1.040500000, y = 0.063800000, z = -0.099900000, w = 1.099300000   float4
m4        x = -1.040500000, y = 0.065500000, z = -0.099900000, w = 1.099200000   float4
l1        0.960000000                                                            float
l2        0.960300000                                                            float
l3        0.952600000                                                            float
l4        0.952900000                                                            float

// Inverse projection matrix
inprj[0]  x = 1.048700000, y = 0.000000000, z = 0.000000000, w = 0.000000000     float4
inprj[1]  x = 0.000000000, y = 0.065500000, z = 0.000000000, w = 0.000000000     float4
inprj[2]  x = 0.000000000, y = 0.000000000, z = 0.000000000, w = 1.000000000     float4
inprj[3]  x = 0.000000000, y = 0.000000000, z = -0.099900000, w = 0.100000000    float4

I hope this helps to give a better idea of what I'm trying to do.
  13. Unfortunately this one isn't quite possible, because the program I am working with has no way to provide me with the distance to a pixel, so I would be working blind (I am working through an API, not the program's source). I only have a rough guess to work with, which is anything but reliable. As for "make sure the coordinates are in NDC (-1 to 1 for x/y, and 0 to 1 for z in D3D, -1 to 1 for z in GL), then after the matrix multiply divide xyz by w to get a view-space position and calculate its length to get the distance from the camera": I did a shader debug run using VS 2013's shader debugging tools. The screen coordinates I calculate from the texture coordinates of the area of the screen I am sampling (I convert them to the proper +-1.0f ranges), and I can confirm the Z values are 0 to 1. After multiplying by the inverse projection matrix I divide the xyz by the w and put the result through the length function. The value still comes out quite low (around 1.2 to 1.8) and my final image looks a bit like a hyperbola. I will post some screenshots of the program shortly, which may help you visualise things a bit more effectively.
  14. Hi guys. I have the depth buffer for a scene, along with the far and near clip plane values (and the units of measurement), the vertical FOV, and the aspect ratio of the projection. The scene is using a standard D3DXMatrixPerspectiveFovLH projection matrix. The depth buffer values are normalised (+-1.0f). How would I work backwards to get the distance to each pixel in the scene? I tried making an inverse of the projection matrix and multiplying the screen and depth coordinates of the pixel by it, but I didn't get the expected results.
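A minimal sketch of the inverse-projection route being described, assuming D3DX, a [0,1] depth value, and NDC x/y in [-1,1]; the function name and parameters are placeholders for the values mentioned in the post:

#include <d3dx9math.h>

float DistanceToPixel(float ndcX, float ndcY, float d,      // NDC x/y in [-1,1], depth in [0,1]
                      float fovY, float aspect, float zn, float zf)
{
    D3DXMATRIX proj, invProj;
    D3DXMatrixPerspectiveFovLH(&proj, fovY, aspect, zn, zf);
    D3DXMatrixInverse(&invProj, NULL, &proj);

    D3DXVECTOR4 clip(ndcX, ndcY, d, 1.0f);
    D3DXVECTOR4 view;
    D3DXVec4Transform(&view, &clip, &invProj);              // back into view space (pre-divide)

    // Perspective divide; the length of the view-space position is the distance
    // from the camera to that pixel, in the projection's units.
    D3DXVECTOR3 viewPos(view.x / view.w, view.y / view.w, view.z / view.w);
    return D3DXVec3Length(&viewPos);
}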
  15. I do have the plane orientation data. I'm just wondering how I should build the rotation matrix from it (inverted, or something?) which I can then use to get the camera rotation values relative to the plane rather than the world.
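On the "inverse or something" question: a rotation matrix is orthonormal, so its inverse is simply its transpose, and either D3DXMatrixInverse or D3DXMatrixTranspose will do for the un-rotation step sketched earlier. A minimal sketch, again assuming D3DX-style yaw-pitch-roll angles (all names are placeholders):

#include <d3dx9math.h>

// planeHeading/planePitch/planeBank stand in for the sim's orientation values (radians).
D3DXMATRIX PlaneUnrotation(float planeHeading, float planePitch, float planeBank)
{
    D3DXMATRIX mPlane, mPlaneInv;
    D3DXMatrixRotationYawPitchRoll(&mPlane, planeHeading, planePitch, planeBank);
    D3DXMatrixTranspose(&mPlaneInv, &mPlane);   // == the inverse, for a rotation-only matrix
    return mPlaneInv;   // multiply the camera's world rotation by this, then decompose
}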