Tasche

Member
  • Content count

    65
Community Reputation

222 Neutral

About Tasche

  • Rank
    Member
  1. Existing 3D Game Engine for Gameplay Programming

    Just on the subject of Triple-A studios asking for C++: I'm not 100% sure about this, but it does not necessarily mean you will be coding in C++. The point is that if you can handle C++ really well, you will adapt to other languages rather easily, because most modern languages abstract away the nitty-gritty that C++ exposes (at the cost of performance). Also, you will almost certainly NOT start your career at those big companies; they ask for five years of work experience even for their junior staff. And you can be sure that in those five years you need to churn out kick-ass stuff, because they really get to choose, they have a reputation to uphold (and the budget to back up being picky). What I'm saying is: along the way you will most certainly have to work in several different languages, build a big portfolio, and be able to work with whatever you are given and get the maximum out of it. So what MrDaaark said in his last sentence is a good working/learning paradigm in my opinion.
  2. finished breakout game

    Running the risk of downloading a virus, I wanted to be a nice guy and test your game. However, you compiled it against the .NET debug DLLs, and the standard .NET runtime doesn't include those. Recompile with release settings and try again.
  3. Good thinking. Could you show how you got there? Which equations did you use?

    The same ones you did, just with different constants (acceleration a = g, distance s = d traveled in time t):

    s(t) = v*t + (g*t^2)/2, with s(t0) = d at some arbitrary time t0 and initial velocity v = 0
    => d = (g*t0^2)/2 => t0 = sqrt(2d/g)   (I)

    For any time t, v(t) = g*t   (II) holds, obtained by taking the derivative of s: ds/dt = v (the change of distance s per time t is the velocity v, by definition). This is especially true at time t0. Substituting (I) into (II): v0 = v(t0) = g*sqrt(2d/g) = sqrt(2dg). q.e.d.

    So what's actually calculated here is the speed the object has after falling d units (starting at s = 0, going to s = d, with initial velocity 0, looking for the end velocity v0). Since ideal Newtonian physics is symmetric in time, it's the same the other way around.
  4. linking a pos/neg value, just as pointer?

    A bool would work, but that's an int in C++ anyway... You may also want to consider using bounding spheres instead of boxes, if your application allows that (the point-plane distance test is easy and quick, no fiddling with signs :) ).
  5. DirectX 11 Rigging in Shader

    I'm not sure this will help you, but I think this may be what you are looking for: http://msdn.microsoft.com/en-us/library/windows/desktop/ff476523%28v=vs.85%29.aspx

    I personally have a fixed maximum number of bones and use a constant buffer, sending the updated matrices by mapping that buffer. The calculation of those matrices from quaternions is done on the CPU; only the skinning itself (the vertex transformations) is done on the GPU, and the result is stored in a D3D buffer using stream output.
  6. If the angles are really, really small, you might get away with it (actually I'm pretty sure you will). nlerp is already approximating, so you want to minimize the error. If the angles of your source rotations are almost the same, you also won't notice any additional error; if they are too different, the blend will differ significantly from the expected animation. From personal experience I would say that once the renormalization constants differ by around 2-3% it already gets noticeable (like a slight awkward twist in a joint). Just try it out; if your keyframes are tight enough, it should work out. However, I would suggest you normalize in between, since the nlerp approximation only changes speeds, not angles or axes, unlike interpolating unnormalized quaternions, and saving a few square roots is not worth getting crappy animations. By the way, the problem multiplies the more animations you blend. Also, as a general rule of programming (imho): get stuff working perfectly and as expected first, then optimize and judge the quality/speed tradeoff, especially for something so easily implemented.
  7. Yes... there's a reason these titles consume huge budgets.
  8. Short answer: no. Post back here if you need the long answer.
  9. Read the posts on this Stack Overflow page for a discussion: http://stackoverflow.com/questions/1228161/why-use-prefixes-on-member-variables-in-c-classes I personally go for Jason Williams' approach, and have yet to see the bad side of prefixing.
  10. Ah, sorry to dig out this old one, but just for completeness: I tried the alpha = 0 version, and just setting the first pass target to null is marginally quicker. So if anyone ever wondered, go for a null target =)
  11. How do you do bloom with only one render target? E.g., do you only use one for your blur pass? Nope, of course I've got a bunch of smaller targets... but they are, well, smaller^^. Didn't think quarter size was worth mentioning.
  12. DX11 Learning Curve from DX9 to DX11

    I also switched from DX9 to DX11 about a year ago, and it seemed annoying and not straightforward at first, but actually it is straightforward. You will also love the new DXGI system: no more stupid resets when going fullscreen or doing other things that involve the display; it's all handled natively by the API. That's all I can vaguely remember, and it's something positive =). So switching will be rather easy and rewarding for you, I'm sure.
  13. Very interesting thread...

    gbuffer: 1 FP32 RGBA (will probably be dropped once I reconstruct position from depth), 2 FP8 RGBA
    bloom: 1 half-size FP8 RGB
    distortion: 1 FP8 RG
    shadow map: 1 2048x2048 FP8 R
    light accumulation: 1 FP8 RGB

    I guess I'm being very wasteful with memory, but optimizing should be the last step of the process, right? =)
  14. That's the way I do it. The animation data is already loaded as quaternions, then gets interpolated that way, and only before the final composition into the bone matrix do I convert. (Also cuts load time a wee tiny bit, I guess.)
  15. That's some pretty sound advice dude, ta muchly. However, you lost me a bit with "first, directly output all your inputs as color (in this case, esp. the normals) and check if they have sane values (represented as colors in the framebuffer). you might want to pack them before outputting them (something like (normal+1)/*0,5), so you get the full range."

    So you mean use my normals as colours? frag_colour = normal.xyz?

    Also, what's this doing: ( x) / * 0 ?

    Thanks again dude

    Yep, exactly. Something along the lines of:

    cbuffer
    {
        float4 something;
    };

    struct vsIn
    {
        float4 position : POSITION;
        float4 normal   : NORMAL;
        float2 tex      : TEXCOORD0;
    };

    struct psIn
    {
        float4 position : SV_POSITION;
        float4 normal   : NORMAL;
        float2 tex      : TEXCOORD0;
    };

    psIn vsmain(vsIn input)
    {
        psIn output;
        // ...some vertex manipulations...
        output.position = finalPos;
        output.normal = rotatedNormal;
        output.tex = input.tex;
        return output;
    }

    float4 psmain(psIn input) : SV_TARGET
    {
        float4 ret;
        // ...some cool lighting done here...
        return ret;
    }

    Now, just before the 'return ret;', you insert for example:

    ret = input.normal;

    If that is already messed up, pass the normal from the vertex shader through untouched (you may have to add an extra variable using a texcoord semantic in psIn) and output that first:

    psIn vsmain(vsIn input)
    {
        psIn output;
        // ...some vertex manipulations...
        output.position = finalPos;
        output.normal = input.normal; // <---- change to this
        output.tex = input.tex;
        return output;
    }

    Use this for quick and dirty 'judged-by-eye' tests.

    As for (normal+1)/*0,5: that was just another way to show you how typos work; it was meant to be (normal+1)*0.5. This moves the normal component values from the [-1,1] range to [0,1], which can be viewed as RGB.

    Good luck finding your errors!