About barsiwek

  1. Hi everyone, XNAMath has a lot of useful structures for loading data into mapped constant buffers. Casting the appropriate piece of memory to XMFLOAT4A* and then assigning to it does the job. What I cannot find are data structures that let me do the same for double4, double4x4 and similar entries in constant buffers. What is the best way to do that?
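As far as I can tell XNAMath simply has no double-precision counterparts to XMFLOAT4A, but a plain aligned struct does the same job when filling a mapped buffer. A minimal sketch, under the assumption that HLSL lays out a double4 as 32 contiguous bytes; the struct and function names here are invented for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Hypothetical double-precision analogues of XMFLOAT4A / XMFLOAT4X4A.
// alignas(16) mirrors the 16-byte alignment the A-suffixed XNAMath types use.
struct alignas(16) Double4
{
    double x, y, z, w;
};

struct alignas(16) Double4x4
{
    Double4 rows[4];
};

// Copy a Double4 into mapped constant-buffer memory at a given byte offset,
// the same way one would cast-and-assign through an XMFLOAT4A pointer.
void WriteDouble4(void* mappedData, size_t byteOffset, const Double4& value)
{
    std::memcpy(static_cast<char*>(mappedData) + byteOffset, &value, sizeof(value));
}
```

The same memcpy approach extends to Double4x4 (four Double4 rows, 128 bytes); only the offsets in the buffer change.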
  2. Thanks for the answers, so I guess [url=]this[/url] blog post is totally wrong? EDIT: Just remembered this was about constant buffers. Carry on.
  3. [quote name='Hodgman' timestamp='1353812417' post='5003867'] Which versions of Direct3D and HLSL are you using? For D3D11, the formats that can be used for vertex data are listed [url=""]here[/url]. For D3D9, they're listed [url=""]here[/url], but you also have to check the device caps at runtime to make sure. Also, the vertex-declaration types and the HLSL types don't have to match; they'll be cast. [/quote] Hm... I had something else in mind: what HLSL types are allowed in the shader code. So, can I have a 'float4' in the input struct? Yes, as far as I know. Can I have a 'sampler'? Probably not. A specification that can answer these questions is what I'm looking for. [quote name='Hodgman' timestamp='1353812417' post='5003867'] I wasn't aware of any alignment requirements, but the fact that D3D11's element-offset variable is called "[font=courier new,courier,monospace]AlignedByteOffset[/font]" implies there are. Also, it allows you to use D3D11_APPEND_ALIGNED_ELEMENT to specify that you want D3D to figure out the correct offset including padding, but there seems to be no way to query the automatically configured value of "[font=courier new,courier,monospace]AlignedByteOffset[/font]" after creating your input layout, which means you wouldn't know how to lay out your vertex buffer!? That [i]is[/i] interesting... As a guess, I would assume the alignment requirement might be the per-component size of the element, e.g. for [font=courier new,courier,monospace]DXGI_FORMAT_R32G32_FLOAT[/font] the alignment would be 4 bytes. [/quote] Hmm... This helps a bit. I know that if the VS input is, for example: [code] struct VSInput { float a : A; float b : B; float4 pos : POSITION; }; [/code] then I need 8 bytes of padding after the 'B' data in the vertex buffer, which means the position must be 16-byte aligned, I guess?
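To make the AlignedByteOffset discussion concrete, here is a sketch of a D3D11 input layout matching that VSInput struct. This is illustrative, not authoritative: as far as I know AlignedByteOffset only has to be a multiple of 4 for these formats, so tight packing with no padding before POSITION should be legal; alternatively every offset can be D3D11_APPEND_ALIGNED_ELEMENT to let the runtime append elements for you.

```cpp
// Sketch of a D3D11 input layout for:
//   struct VSInput { float a : A; float b : B; float4 pos : POSITION; };
// Offsets assume tight packing (a at 0, b at 4, pos at 8).
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    // SemanticName, SemanticIndex, Format, InputSlot, AlignedByteOffset,
    // InputSlotClass, InstanceDataStepRate
    { "A",        0, DXGI_FORMAT_R32_FLOAT,          0, 0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "B",        0, DXGI_FORMAT_R32_FLOAT,          0, 4, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "POSITION", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 8, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};
```

If CreateInputLayout accepts this layout, the vertex buffer can then be laid out with a 24-byte stride and no padding.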
  4. Hi everyone, I'm having no luck locating the following information on MSDN (or anywhere, for that matter) and I was hoping that someone could help me. First of all: is there a specification somewhere of what HLSL types are allowed as inputs to a vertex shader (a.k.a. vertex attributes)? Secondly: is there a specification of how those types are supposed to be aligned in memory on the application side, so that everything works out?
  5. Wow! Thank you Erik for the code and MJP for the reference. Microsoft should work a bit on their documentation - I did a bunch of searching on MSDN and did not come across this article at all :/
  6. [quote name='Erik Rufelt' timestamp='1350220791' post='4990028'] What exactly do you want to do? Get your normal with as high precision as possible to your shader using only 32 bits total for the vector? [/quote] I want to try out different formats for storing normals at vertices and compare the quality to pick the one that suits me best. Most formats are either simple to pack or have functions that do it for you (for example, D3DX_FLOAT4_to_R10G10B10A2_UNORM). With this one I do not know how to do it correctly. [quote name='kauna' timestamp='1350230832' post='4990065'] store in the shader: stored_normal = normal * 0.5f + 0.5f; restore in the shader: restored_normal = stored_normal * 2.0f - 1.0f; Nothing more complicated than that. However, I'm not sure if DXGI_FORMAT_R11G11B10_FLOAT has good precision for normals. Cheers! [/quote] I was hoping for a description of how to do it on the application side (in C++, for example), so I can store this in a binary file for fast loading later.
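For the record, here is a minimal CPU-side packing sketch. It makes several simplifying assumptions: the input has already been remapped from [-1,1] to [0,1], mantissa bits are truncated rather than rounded to nearest, values below 2^-14 (including denormals) flush to zero, and the helper names are invented.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Convert a non-negative 32-bit float to an unsigned small float with a
// 5-bit exponent (bias 15, like half precision) and manBits mantissa bits.
// Truncates the mantissa instead of rounding to nearest; negatives and
// values below 2^-14 flush to zero.
uint32_t FloatToUnsignedFloat(float value, uint32_t manBits)
{
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof(bits));
    if (bits & 0x80000000u)
        return 0;                                       // negative -> 0
    int32_t  exp = int32_t((bits >> 23) & 0xFFu) - 127; // unbiased exponent
    uint32_t man = bits & 0x7FFFFFu;
    if (exp < -14)
        return 0;                                       // too small -> 0
    if (exp > 15)
    {
        exp = 15;                                       // clamp to max finite
        man = 0x7FFFFFu;
    }
    return (uint32_t(exp + 15) << manBits) | (man >> (23u - manBits));
}

// DXGI_FORMAT_R11G11B10_FLOAT: R and G get 6 mantissa bits, B gets 5,
// all three share the 5-bit exponent layout above.
uint32_t PackR11G11B10F(float r, float g, float b)
{
    return  FloatToUnsignedFloat(r, 6)
         | (FloatToUnsignedFloat(g, 6) << 11)
         | (FloatToUnsignedFloat(b, 5) << 22);
}
```

For example, packing the scaled normal (1, 1, 1) yields 1.0 in each channel: exponent field 15, mantissa 0.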
  7. Hi everyone, I would like to ask how to properly pack the values of a vertex normal using the DXGI_FORMAT_R11G11B10_FLOAT format. My input, per vertex, is a 3-dimensional vector with components in the range [-1,1] stored in 32-bit floats. I do know that I have to scale that into the [0,1] range since there is no sign bit in the R11G11B10 format, but what am I supposed to do after that? I can extract the exponent and mantissa, but is truncating them bitwise the right way to go?
  8. This reminds me of a blog post that I read a while ago about simulating closures in HLSL. As far as I remember it works on SM 3.0 and up. [url][/url]
  9. Ok so... You can have two situations: 1) you have the points in space (say 3D) with their parameter values preassigned, or 2) you have just the points in space. In the first case the situation is straightforward - you just proceed as the article says, that is, for each parameter interval you normalize it to (0,1) and then use the formula you posted. In the second case you need to parametrize the data, and there is no "one way" of doing it, as different methods give different results. Try to google for "uniform parametrisation", "chord length parametrisation" and "centripetal parametrisation".
  10. Any PhD's in the house?

    [quote name='jjd' timestamp='1323362946' post='4891833'] how do you prove that you can do the job? [/quote] Well, my idea would be to make a blog/library/portfolio of code samples that implement various techniques/algorithms etc. Maybe a small game. I know it is definitely too much work for one person's spare time to make a complete graphics engine from scratch. Do you think that will help?
  11. Any PhD's in the house?

    Now I need to ask a question, because the OP's post is almost exactly my case and the answers got me worried a bit. I'm halfway through a PhD program right now and I would really like to do graphics programming after I'm done. My research field is not 100% relevant (computational mathematics), but I do have 1.5 years of experience as a "standard" software engineer and two master's degrees (Math and CS). So my question is: am I destined to be stuck in academia? I still do coding and plan on building up a samples library in the form of a blog (something like what _Humus_ has done) - will that help? Maybe I should try for an internship at some point?
  12. Iteration hell

    How about a Structure-of-Arrays approach instead of Array-of-Structures? Split the data into two vectors (preallocated, fixed-size arrays would be better): [code]class Object { D3DXVECTOR3 v_position; }; std::vector<Object*> objects; std::vector<uint32> visibility;[/code] Now each bit in the visibility vector determines the visibility of one object; you can clear the whole thing with memset, and it's better for the cache as long as visibility is used more often than the rest of the data.
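To make the bit bookkeeping for that visibility vector explicit, here is a hypothetical set of helpers (the names are made up); each uint32 word stores the visibility of 32 objects:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Set or clear the visibility bit for the object at 'index'.
void SetVisible(std::vector<uint32_t>& bits, size_t index, bool visible)
{
    const uint32_t mask = 1u << (index % 32);
    if (visible) bits[index / 32] |=  mask;
    else         bits[index / 32] &= ~mask;
}

// Test the visibility bit for the object at 'index'.
bool IsVisible(const std::vector<uint32_t>& bits, size_t index)
{
    return (bits[index / 32] & (1u << (index % 32))) != 0;
}

// Clear all visibility bits at once with a single memset,
// as the post suggests.
void ClearVisibility(std::vector<uint32_t>& bits)
{
    std::memset(bits.data(), 0, bits.size() * sizeof(uint32_t));
}
```

The vector needs ceil(objectCount / 32) words; the culling pass then touches only this tightly packed array instead of the full object structs.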
  13. Expecting calculation to be 0

    I'll throw in my two cents with these two tutorials (I always felt they weren't popular enough): [url][/url] [url][/url]
  14. SAT bouncing resolution

    Hi again, I'm not sure of the exact source of your problems, but the normals from your debug output look a bit strange. For example, both vertical sides of the large rectangle at the bottom of the screen give a (1, 0) normal. One of them (the left one, I think) should have a (-1, 0) normal.
  15. Fourier Transform questions

    [quote name='staticVoid2' timestamp='1320597004' post='4881088'] I've been implementing a (Discrete) Fourier transform (and its inverse) in C++ and was wondering about two things in particular, 1. How does e^-i (negative) differ from e^i when representing this using Euler's formula? I know that anything to the power of a negative value is its reciprocal (e.g. 2^-1 = 1/2) but I'm a bit confused whether this changes the Euler's formula (cos(w) + i*sin(w)) or not.[/quote] e^-i = e^((-1)i) = cos(-1) + i*sin(-1) = cos(1) - i*sin(1), while e^i = e^((1)i) = cos(1) + i*sin(1). [quote name='staticVoid2' timestamp='1320597004' post='4881088'] and 2. Why does the inverse Fourier transform get normalized at the end (i.e. sum/numsamples) whereas the normal Fourier transform does not? [/quote] Which one gets normalized is a convention; you could normalize the forward transform instead of the inverse one. The reason you have to do it somewhere is (vaguely) that when going from the time domain to the frequency domain your "unit of measure" changes from s to rad/s.
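To illustrate the convention point, here is a naive O(N^2) DFT/inverse-DFT pair in C++. It is a sketch, with the 1/N factor placed on the inverse as in the question; it could equally go on the forward transform, or be split as 1/sqrt(N) on each side, as long as a round trip multiplies by 1/N exactly once.

```cpp
#include <cassert>
#include <cmath>
#include <complex>
#include <vector>

using cplx = std::complex<double>;

// Forward DFT: X[k] = sum_n x[n] * e^(-2*pi*i*k*n/N). Note the minus sign
// in the exponent; by Euler's formula that means cos(theta) - i*sin(theta).
std::vector<cplx> Dft(const std::vector<cplx>& x)
{
    const size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<cplx> out(n);
    for (size_t k = 0; k < n; ++k)
        for (size_t j = 0; j < n; ++j)
            out[k] += x[j] * std::polar(1.0, -2.0 * pi * double(k * j) / double(n));
    return out;
}

// Inverse DFT: same sum with e^(+...), then divided by N.
std::vector<cplx> InverseDft(const std::vector<cplx>& x)
{
    const size_t n = x.size();
    const double pi = std::acos(-1.0);
    std::vector<cplx> out(n);
    for (size_t k = 0; k < n; ++k)
    {
        for (size_t j = 0; j < n; ++j)
            out[k] += x[j] * std::polar(1.0, 2.0 * pi * double(k * j) / double(n));
        out[k] /= double(n);   // the normalization under discussion
    }
    return out;
}
```

Running InverseDft(Dft(x)) recovers x up to floating-point error, which is an easy way to check that the 1/N factor is applied exactly once.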