
16bit_port

Member
  • Content count: 262
  • Joined
  • Last visited

Community Reputation: 180 Neutral

About 16bit_port

  • Rank: Member
  1. Yeah.. I just found out about it.
  2. Are there any APIs that allow creating multiple windows / child windows? I know that SDL 1.3 supports multiple windows, but that is still under development / buggy... Also, I don't want to use Win32.
  3. 16bit_port

    feature levels

    When you guys say Direct3D 11 runtime, do you mean the d3d11 DLL?

    #3 With regard to the runtime-level emulation, is it only for that feature (multi-threading), or does it apply to other features not supported by the hardware? And what exactly do you mean by runtime emulation? Do you mean WARP?

    #4 So if I want to support different video cards with various DirectX versions, I should go this route (code against D3D11 and let feature levels do their thing, while making sure to handle features that might be absent on older hardware) rather than creating abstracted version-specific rendering engines.
  4. 16bit_port

    DX11 feature levels

    My understanding of feature levels is that they let you use the D3D11 API on older video cards that don't support D3D11, with two caveats: the client OS must be Vista or higher, and D3D11-exclusive features are not available on those non-D3D11 cards.

    Question 1: If the device is created with feature level 9.0 (the card only supports 9.0), what happens under the hood when a D3D11 function is called? For example, somewhere in my code I have a call to D3D11CreateDevice(), but since it's running on the 9.0 feature level, is the D3D9 equivalent of the function called instead (in this case IDirect3D9::CreateDevice)?

    Question 2: Same situation as the previous (using feature level 9.0), but this time a hull and a domain shader are created in the code. What happens? Does it crash at that point, or are those calls simply ignored?

    Question 3: I came across a year-old thread, and in it:

    N3Xus: I have another question: I have a DX10-compatible graphics card; if I use the feature level for DX10/SM4.0, does the new multithreading DX11 thing work?
    MJP: Yes you can

    Correct me if I'm wrong, but isn't the multi-threading exclusive to DX11 only? If what I said before is true (D3D11-exclusive features are not available to those non-D3D11 cards), how can the multi-threading work on a DX10 card? Is MJP implying that it will only work with a reference rasterizer?

    Question 4: In this MSDN article, it says "In prior versions of Direct3D, you could find out the version of Direct3D the video card implemented, and then program your application accordingly," and then it goes on to talk about feature levels. Maybe I'm reading too much into that sentence, but is it implying that with the introduction of feature levels, you no longer have to do something like this?

    if( D3D11 is supported in hardware )
        // render with D3D11 code
    else if( D3D10 is supported in hardware )
        // render with D3D10 code
    else if( D3D9 is supported in hardware )
        // render with D3D9 code

    And that with feature levels you just write one "version" of code (D3D11) and it implicitly handles all the different versions (D3D 9/10), so you don't have to do those nasty if-cases? If you can directly answer each of these questions and not jumble it into one generic answer, that would help me a lot. Thanks! =)
  5. "This ray tracer used hierarchical modelling to setup the scene."

    I'm assuming you mean setting up some sort of quadtree or k-d tree to split your scene into separate chunks?

    When I did my raytracer, I didn't use Direct3D or OpenGL for my rendering; I just used SetPixel from the Win32 API (yes, I know... SetPixel is slow, but that was one of the requirements of the project). I'll suggest what I would do by combining my two separate experiences with Direct3D and with implementing a ray tracer (if someone has already created a ray tracer with Direct3D/OpenGL, please feel free to correct me).

    You won't be utilizing the graphics/shader pipeline in Direct3D, since it doesn't make sense in a ray-tracing context. The only stage you'd probably use is the vertex shader, and only if you're animating the mesh. I'm assuming you're not, since you gave no indication that you are, so the only part of Direct3D you'd be messing around with is setting the pixel's final color in the backbuffer. Don't worry about sending your vertices, indices, textures/materials, or matrices to a vertex/index/constant buffer, since, as I said before, you won't be utilizing the shader pipeline.

    First, you'll create your backbuffer with Direct3D (it doesn't matter which version you use). This is where all of your final pixels will go. Next, you'll do all of the math-intensive parts of your ray tracer (object intersection, matrix transforms, and what-not) with DirectCompute, and after you determine the current pixel's final color, you go back to the backbuffer and modify the respective pixel. Repeat for all of your pixels, and when you're done tracing the scene, simply swap the backbuffer to the "front" (through Direct3D), and that should be it.

    Since you're fairly new to Direct3D, and I'm assuming to DirectCompute as well, I'd take baby steps. First, write a small Direct3D program that just creates a backbuffer, sets the entire thing to a certain color, and presents it to the screen. Then figure out how to grab that backbuffer and modify a pixel's color. Once you have that done, learn how to perform mathematical computations with DirectCompute.
  6. 16bit_port

    help with geometry. (intersection of 2 circles)

    Also, two more things, just out of curiosity: if I took the circle formulas

    (x - h[sub]1[/sub])[sup]2[/sup] + (y - c[sub]1[/sub])[sup]2[/sup] = r[sub]1[/sub][sup]2[/sup]
    (x - h[sub]2[/sub])[sup]2[/sup] + (y - c[sub]2[/sub])[sup]2[/sup] = r[sub]2[/sub][sup]2[/sup]

    and solved this system of equations by solving for x and then substituting it back into either of them and solving for y, would I get the same result? In other words, is that a valid way to find the intersection? And doing this in 3D, I don't understand how he got r^2 = R1^2 - (a^2+b^2+c^2)*t0^2 for the radius of the intersecting circle between two spheres. Thanks.
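    For reference, one way to see why the elimination approach works: subtracting one circle equation from the other cancels the squared terms and leaves a linear equation (the radical line), which can then be substituted back into either circle.

```latex
(x-h_1)^2 + (y-c_1)^2 = r_1^2 \\
(x-h_2)^2 + (y-c_2)^2 = r_2^2 \\
\Rightarrow\; 2x(h_2-h_1) + 2y(c_2-c_1) = (r_1^2 - r_2^2) + (h_2^2 - h_1^2) + (c_2^2 - c_1^2)
```

    Substituting this line back into either circle yields a quadratic whose real roots are the intersection points (two, one, or none).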
  7. 16bit_port

    help with geometry. (intersection of 2 circles)

    Actually, yes it does. I forgot that swapping a vector's coordinates and negating one of them creates another vector perpendicular to it.
  8. Circle-Circle intersection point article (it's VERY short) I understood everything up until the very end : x[sub]3[/sub] = x[sub]2[/sub] +- h ( y[sub]1[/sub] - y[sub]0[/sub] ) / d y[sub]3[/sub] = y[sub]2[/sub] -+ h ( x[sub]1[/sub] - x[sub]0[/sub] ) / d why the "y[sub]1[/sub] - y[sub]0[/sub]" and the negative h to get x[sub]3[/sub]?
  9. 16bit_port

    GUI in games

    I'm assuming most companies use the controls (buttons, scrollbars, etc.) found in the engine they're using, but in the case that the engine doesn't provide something like that, do they write their own (possibly giving the controls their own look), or do they use the ones found in some existing API (like the ones in Win32)? If you're the developer making the engine, do you do the former (make your own) or the latter (use existing)?
  10. 16bit_port

    Cg shading language

    But why compile as HLSL if you're already using Cg?
  11. I think it's great that you can use this with D3D or OpenGL but are there any cons to using this? The only one I can think of is that you can't use this on mobile devices (although I could be wrong). Do most of the game companies in the industry use this or do they stick to HLSL and/or GLSL? Any tutorials on using the latest version of Cg (3.0)? I have The Cg Tutorial : The Definitive Guide to Programmable Real-Time Graphics but that was published in 2003 so I doubt it is up-to-date.
  12. If I have something like "float2 tac : texcoord0", is it possible to get "tac"? I don't see anything in D3D11_SIGNATURE_PARAMETER_DESC that might contain that.
  13. hr = D3DX10CompileFromMemory( ConcatVsString, TotalLength, "ThisParamIsRequiredForSomeReason",
                                    NULL, NULL, "VS", "vs_4_0", ShaderFlags, 0, NULL,
                                    &pCode, &error, NULL );
    if( error )
    {
        MessageBoxA( 0, (LPCSTR)(error->GetBufferPointer()), 0, 0 );
        SAFE_RELEASE( error );
    }

    The message box displays: "C:\ ... \ThisParamIsRequiredForSomeReason(3,8): error X3000: syntax error : unexpected integer constant"

    What is that third parameter in D3DX10CompileFromMemory used for anyway? MSDN says "The name of the file that contains the shader code," but why does it even need this if I'm compiling the code from memory as a string? I'm confused as to what it's expecting and what I need to do to solve this. Thanks.
  14. 16bit_port

    Very quick question

    Not sure what you mean by that. But yeah, there's nothing wrong with calling it several times. What it'll do is just replace the vertex buffers previously bound to the device. The most recent vertex buffers set by IASetVertexBuffers will be used in the next draw call.
  15. Does not need to be fast. It's on the CPU.