About Promethium

  1. Getting to next level!

    Sounds like you are ready to let go of the book hand-holding and start practicing. Learning from a book or tutorial is great for learning [i]syntax[/i], but [i]programming[/i] is a skill that requires practice and experience, and you only get that by actually doing stuff on your own. So my suggestion would be to set yourself a goal: create a Tetris or Breakout clone, anything, just pick something you think could be fun to make and try to make it. You will probably fail, BUT you will learn something in the process. So start a new project, learn, fail again, and so on until you can get past the simple stuff and start worrying about the advanced stuff. And if you succeed with your first project, you weren't ambitious enough. Anyhoo, as for concrete technologies and libraries, just use Google. Search for "2D library tutorial" or whatever. At this point it doesn't matter much exactly what you use. Heck, I would even suggest sticking with just Win32 GDI for starters.
  2. Casual vs Dress interview

    Wear a nice version of what you would normally wear to work. Be careful not to overdress; it can give the impression that you don't know the industry you are trying to get into. In the games industry, and I guess in many "creative" industries, casual wear is the norm, so showing up in a suit and tie when your interviewers are wearing T-shirts and jeans will certainly make you stand out, but in a bad way, and can make the interviewer question whether you know anything about the position/workplace. On the other hand, don't underdress either: wear nice, clean clothes, maybe go out and buy some new ones. If I were being interviewed for a game development job I would wear jeans and a (dress) shirt, but absolutely not a tie. Be yourself and let your clothes reflect who you are. And if in doubt, do as Telastyn suggests and ask your contact person; it shows interest and keenness.
  3. d3d 11 Stretch DepthStencil

    Have you tried binding the depth-stencil texture as an ordinary render target? Try creating two render target views, one with type DXGI_FORMAT_R24_UNORM_X8_TYPELESS and the other with DXGI_FORMAT_X24_TYPELESS_G8_UINT and then render to them as ordinary render targets. I don't know if it will work, but I think it should. You will probably have to make a two-pass render as the driver will likely not allow you to bind both targets at the same time.
  4. Little question about draw order

    The answer is yes and no. The graphics card doesn't do any sorting, but the [i]depth buffer[/i] (AKA [i]Z-buffer[/i]) by default only lets the GPU draw pixels that are closer to the camera than what has already been drawn on the screen. So if you have three pixels at distances 10, 23, and 37 from the camera, only the pixel at distance 10 will be visible, no matter in which order you draw the pixels. So you don't have to sort your vertices yourself. However, in some cases you want to disable the depth buffer, for example when doing 2D rendering (a GUI overlay perhaps). In that case the GPU will always overwrite with the last triangle drawn, so you have to manage the draw order yourself. Alpha blending also requires an explicit rendering order, for the same reason. There are other reasons to manually manage the rendering order, but for now you can just relax and let the GPU sort it out for you.
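To make the depth-test behavior concrete, here is a minimal CPU-side sketch of what happens to a single pixel (the class and method names are made up for illustration; real GPUs do this in fixed-function hardware):

```csharp
using System;
using System.Collections.Generic;

public static class DepthTestDemo
{
    // Resolves the final color of one pixel after "drawing" the given
    // fragments in order. With the depth test on, only fragments nearer
    // than what is already in the depth buffer get written (the common
    // LESS comparison); with it off, the last fragment drawn wins.
    public static string Resolve(IEnumerable<(string Color, float Depth)> fragments, bool depthTest)
    {
        string color = "background";
        float depth = float.MaxValue; // depth buffer cleared to the far plane

        foreach (var f in fragments)
        {
            if (!depthTest || f.Depth < depth)
            {
                color = f.Color;
                depth = f.Depth;
            }
        }
        return color;
    }
}
```

For example, drawing fragments at depths 23, 10, and 37 in that order yields the depth-10 color when the depth test is enabled, and the depth-37 color (drawn last) when it is disabled.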
  5. While MJP's response is totally correct, just to complete the story: It is safe to partially update a dynamic buffer IF you can guarantee that you are not touching any data that is in use by the GPU. In pseudo-code this would typically be something like
[code]
int batch_size = ...
int capacity = ...
int offset = capacity;
while( true )
{
    if( offset + batch_size > capacity )
    {
        ptr = buffer->lock( 0, batch_size, DISCARD );
        offset = 0;
    }
    else
    {
        ptr = buffer->lock( offset, batch_size, NO_OVERWRITE );
    }
    copy_data( ptr, batch_size );
    buffer->unlock( ptr );
    render( buffer, offset, batch_size );
    offset += batch_size;
}
[/code]
This works because DISCARD returns a pointer to a new memory area, so you won't be touching old data. This is potentially faster than DISCARDing on each update since the GPU can continue reading from the buffer while you are updating. The capacity of the buffer must of course be large enough that you can have several batches in-flight at the same time.
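The offset bookkeeping from the pseudo-code above can be isolated and tested on its own. This is a sketch with the GPU lock/unlock calls stubbed out; the BatchAllocator name and Allocate method are made up for illustration:

```csharp
using System;

public class BatchAllocator
{
    private readonly int capacity;
    private int offset;

    public BatchAllocator(int capacity)
    {
        this.capacity = capacity;
        this.offset = capacity; // force a DISCARD on the very first batch
    }

    // Returns the offset to write the next batch at, and whether the buffer
    // must be locked with DISCARD (true) or NO_OVERWRITE (false).
    public (int Offset, bool Discard) Allocate(int batchSize)
    {
        bool discard = offset + batchSize > capacity;
        if (discard)
            offset = 0; // DISCARD hands us a fresh buffer, so start over

        int result = offset;
        offset += batchSize;
        return (result, discard);
    }
}
```

With a capacity of 300 and a batch size of 100, this yields offset 0 with DISCARD, then offsets 100 and 200 with NO_OVERWRITE, then wraps back to 0 with DISCARD again — exactly the cycle the pseudo-code runs through.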
  6. Rendering solid and wire at once

    Sure, you need to calculate the distance from the edge of the triangle to the current fragment in the fragment shader and then change the color of the fragment based on that distance. This page and paper describe a method that gives good results: [url=""]Wireframe Drawing[/url].
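The core of the technique is just "distance from the fragment to the nearest triangle edge, then fade the wire color over a few pixels". Here is a CPU-side sketch of that math in 2D screen space, assuming made-up helper names; in a real shader you would compute the edge distances per-fragment (often via values set up in a geometry shader) and blend with smoothstep:

```csharp
using System;

public static class Wireframe
{
    // Distance from point (px, py) to the line segment (ax, ay)-(bx, by).
    static float DistanceToSegment(float px, float py, float ax, float ay, float bx, float by)
    {
        float abx = bx - ax, aby = by - ay;
        float apx = px - ax, apy = py - ay;
        float t = (apx * abx + apy * aby) / (abx * abx + aby * aby);
        t = Math.Clamp(t, 0f, 1f); // clamp to the segment
        float cx = ax + t * abx, cy = ay + t * aby;
        return MathF.Sqrt((px - cx) * (px - cx) + (py - cy) * (py - cy));
    }

    // Distance from a fragment to the nearest of the triangle's three edges.
    public static float DistanceToNearestEdge(float px, float py,
        (float X, float Y) a, (float X, float Y) b, (float X, float Y) c)
    {
        return MathF.Min(
            DistanceToSegment(px, py, a.X, a.Y, b.X, b.Y),
            MathF.Min(
                DistanceToSegment(px, py, b.X, b.Y, c.X, c.Y),
                DistanceToSegment(px, py, c.X, c.Y, a.X, a.Y)));
    }

    // Wire intensity: 1 on the edge, fading linearly to 0 over 'width' pixels.
    public static float WireIntensity(float distance, float width)
    {
        return Math.Clamp(1f - distance / width, 0f, 1f);
    }
}
```

In the shader you then lerp between the fill color and the wire color using that intensity, which gives smooth, antialiased lines on top of the solid fill in a single pass.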
  7. You can set up a render target to render to a texture off-screen, then use that texture to render your preview. Look up IDirect3DDevice9::CreateRenderTarget and IDirect3DDevice9::SetRenderTarget (sorry, can't remember the SlimDX equivalents). It's even easier in DX11: just create a texture with the D3D11_BIND_RENDER_TARGET and D3D11_BIND_SHADER_RESOURCE bind flags and set it as your render target. No need to capture (i.e. copy) anything. Make sure that, when you use a render target, you also set the viewport correctly.
  8. I always add braces. I have yet to hear even one compelling argument against adding braces. The "wasted whitespace" is a red herring IMO. Writing code is about reading code, and overly dense code is harder to read, so it's easier to miss some crucial detail (such as missing braces around the if statement you are modifying.) But even then:
[code]
if( something )
    x();
else
    y();
[/code]
vs
[code]
if( something ) {
    x();
} else {
    y();
}
[/code]
ONE extra line! I will sacrifice that one extra line any day if it stops me or someone else from introducing a stupid bug later.
  9. DX11 Depth buffer ! fail to save it !

    Probably because the depth buffer isn't in a format that can be saved as a BMP file. Save it as a DDS instead, and use the texture tool in the DX SDK to inspect it.
  10. FPS-RPG hybrids and enemy health indicators

    Numerical health indicators[*] are a substitute for an immersive/in-world representation of "health" that goes back to the earliest text-only games, where it was really the only option. Because of limited resources on early systems, early graphical games kept the numerical health indicator: instead of having multiple sprites showing enemies becoming more and more wounded, you could reuse the same sprite and just update a number. These days the main reason to keep the numerical indicators is tradition, I think; people playing RPGs have a very ingrained opinion on how such games should look and play. They are mostly very conservative and will balk at any violent break from tradition (or at least not buy the game, which hurts profits.)

FPSes, on the other hand, have never had a pressing need for health indicators because the enemies are often very short-lived. When an enemy dies from a single headshot with a shotgun I don't really care if he had 50 or 100 "health". The amount of time from you seeing a live enemy till he is dead is counted in seconds, so health indicators would just be needless information overload. Even when I need more than one shot, the game conventions tell me that it is okay to keep shooting; eventually he will go down. One of the worst sins you can commit when making an FPS is making enemies immune to your normal attacks without clearly and unambiguously communicating it to the player.

However, special enemies, typically bosses, take longer to kill, even in FPSes. Here I think many FPSes are doing things RPGs could learn from: instead of a health bar, show the enemy in different states of breakdown. The tank becomes dented and sheds armor plating, the T-Rex starts bleeding and walks erratically, the super soldier bleeds and becomes bent over with pain (or simply turns glowing soft spots off, but that is really just another kind of health bar). Just be sure to always have visible progress in the fight and you don't really need the explicit numerical health indicators. Of course, this increases the amount of art assets, especially in an RPG where there are many different enemies. But it would definitely be cool if we could lose some of the numbers from RPGs.

[*] Here I also consider a health bar a numerical health indicator. The bar is just a graphical visualization of the underlying numbers.
  11. Representing a quaternion in C#?

    A quaternion is a 4D vector, so the easiest and most flexible way is to just store it as 4 floats
[code]
struct Quaternion
{
    public float X;
    public float Y;
    public float Z;
    public float W;

    public static float Dot( Quaternion q, Quaternion r )
    {
        return q.X * r.X + q.Y * r.Y + q.Z * r.Z + q.W * r.W;
    }
}
[/code]
However, as some quaternion operations are better represented as 3D vector operations, it also makes sense to store it as a 3D vector and a scalar. This also neatly matches the interpretation of a quaternion as a 3D complex number.
[code]
struct Quaternion
{
    Vector3 xyz;
    float w;

    public float X { get { return xyz.X; } set { xyz.X = value; } }
    public float Y { get { return xyz.Y; } set { xyz.Y = value; } }
    public float Z { get { return xyz.Z; } set { xyz.Z = value; } }
}
[/code]
Which representation to pick depends on your usage and whether you already have a 3D vector class (which you probably have if you are using quaternions.) Note that I'm using structs. Be sure you understand the difference between classes and structs in C#.
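To show where the vector-plus-scalar representation pays off: quaternion multiplication collapses to a short formula in terms of dot and cross products. A sketch, with a minimal made-up Vector3 included so it stands alone:

```csharp
using System;

public struct Vector3
{
    public float X, Y, Z;
    public Vector3(float x, float y, float z) { X = x; Y = y; Z = z; }

    public static Vector3 operator +(Vector3 a, Vector3 b) =>
        new Vector3(a.X + b.X, a.Y + b.Y, a.Z + b.Z);
    public static Vector3 operator *(float s, Vector3 v) =>
        new Vector3(s * v.X, s * v.Y, s * v.Z);
    public static float Dot(Vector3 a, Vector3 b) =>
        a.X * b.X + a.Y * b.Y + a.Z * b.Z;
    public static Vector3 Cross(Vector3 a, Vector3 b) =>
        new Vector3(a.Y * b.Z - a.Z * b.Y,
                    a.Z * b.X - a.X * b.Z,
                    a.X * b.Y - a.Y * b.X);
}

public struct Quaternion
{
    public Vector3 Xyz;
    public float W;
    public Quaternion(Vector3 xyz, float w) { Xyz = xyz; W = w; }

    // Hamilton product in vector form:
    // (v1, w1)(v2, w2) = (w1*v2 + w2*v1 + v1 x v2, w1*w2 - v1.v2)
    public static Quaternion Multiply(Quaternion q, Quaternion r) =>
        new Quaternion(
            q.W * r.Xyz + r.W * q.Xyz + Vector3.Cross(q.Xyz, r.Xyz),
            q.W * r.W - Vector3.Dot(q.Xyz, r.Xyz));
}
```

Compare this to writing out the sixteen products of the component-wise form by hand; the basis identities like i*j = k fall out of the cross product for free.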
  12. HLSL Vertex Shader Help

    It's been a while since I've done D3D9 effects, but I think that your transformation is the wrong way around. Try changing your vertex shader from
[code]
void myvs( float3 vPos : POSITION, out VS_OUTPUT OUT )
{
    OUT.vPos = mul( gWorldViewProj, float4(vPos,1.0f) );
}
[/code]
to
[code]
void myvs( float3 vPos : POSITION, out VS_OUTPUT OUT )
{
    OUT.vPos = mul( float4(vPos,1.0f), gWorldViewProj );
}
[/code]
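To see numerically why the argument order of mul() matters: with the row-vector convention (which D3DX matrices assume), a translation only works when the vector is on the left. A small sketch with made-up helper names:

```csharp
using System;

public static class MulOrder
{
    // Row vector times 4x4 matrix: result[j] = sum_i v[i] * m[i, j].
    // This is what mul(vector, matrix) computes in HLSL.
    public static float[] RowVecTimesMatrix(float[] v, float[,] m)
    {
        var r = new float[4];
        for (int j = 0; j < 4; j++)
            for (int i = 0; i < 4; i++)
                r[j] += v[i] * m[i, j];
        return r;
    }

    // 4x4 matrix times column vector: result[i] = sum_j m[i, j] * v[j].
    // This is what mul(matrix, vector) computes in HLSL.
    public static float[] MatrixTimesColVec(float[,] m, float[] v)
    {
        var r = new float[4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                r[i] += m[i, j] * v[j];
        return r;
    }

    // Translation by (tx, ty, tz), laid out for row vectors
    // (translation in the bottom row, as D3DX builds it).
    public static float[,] Translation(float tx, float ty, float tz) => new float[4, 4]
    {
        { 1,  0,  0,  0 },
        { 0,  1,  0,  0 },
        { 0,  0,  1,  0 },
        { tx, ty, tz, 1 },
    };
}
```

With v = (1, 2, 3, 1) and a translation by (10, 0, 0), the row-vector order gives (11, 2, 3, 1) as expected, while the swapped order leaves the position untouched and corrupts w instead — which is exactly the kind of "everything ends up in the wrong place" result the wrong mul() order produces on screen.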
  13. When I need intersection tests I always start by looking in Real-Time Rendering by Thomas Akenine-Möller and Eric Haines. Their homepage also has a useful grid of intersection tests: [url=""][/url]
  14. [quote name='Amr0' timestamp='1318417214' post='4871795'] What about 3rd party libraries? I don't see much talk about them. [/quote] There is [url=""]Umbra[/url] which has been used in a large number of well-known games.
  15. I think I would approach this more as a physics problem than a rendering one. Treat the tube guy as a cloth, for example a spring system. Then fix the bottom ring of vertices and flip gravity (to make it rise). To make it move, add some small sideways forces or model turbulence. I recall that the game "Gunstringer" has a "sky guy" boss. Maybe you can look at that and get some ideas. Edit: Found a [url=""]video[/url] of the wavy tube man in Gunstringer. Skip to 6:15.