satanir

Members

  • Content count: 265
  • Joined
  • Last visited

Community Reputation

1452 Excellent

About satanir

  • Rank
    Member

Personal Information

  • Interests
    |programmer|
  1. Trouble passing multiple lights

      That's correct. I don't remember the details of the old (and long-deprecated) D3D11 effect system; perhaps it has a call to bind the buffer to both shader stages at once. [EDIT]: Apparently there is :)   But since you are not using the effect system to bind the buffers and are calling the context yourself, you should bind the buffer to both stages.
  2. Trouble passing multiple lights

      DaSutt is right. You don't bind the CB to the PS. 
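      A minimal sketch of what binding to both stages could look like when driving the context directly (pContext and pLightCB are hypothetical names for your immediate context and the lights constant buffer; slot 0 must match the register declared in the shader, e.g. cbuffer Lights : register(b0)):
        // Bind the same constant buffer to both shader stages.
        // pContext / pLightCB are made-up names for this sketch.
        pContext->VSSetConstantBuffers(0, 1, &pLightCB); // visible to the vertex shader
        pContext->PSSetConstantBuffers(0, 1, &pLightCB); // and to the pixel shader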
  3. 'dot' is a dot product. '*' is a component-wise multiplication.
       float4 a, b;
       float4 c = a * b; // c = (a.x*b.x, a.y*b.y, a.z*b.z, a.w*b.w)
     When you multiply a float by a float4, the result is a float4 - the float is broadcast to all four components.
       float a;
       float4 b;
       float4 c = a * b; // c = (a*b.x, a*b.y, a*b.z, a*b.w)
  4. HLSL switch attributes

    FXC will generate:
      switch v1.x
      case l(5)
        call label0
        break
      default
        call label1
        break
      endswitch
      mov o0.xw, r0.xxxy
      mov o0.yz, l(0,1.000000,1.000000,0)
      ret
      label label0
      mov r0.xy, l(1.000000,1.000000,0,0)
      ret
      label label1
      mov r0.xy, l(0,0,0,0)
      ret
    So there's still a 'switch' in the assembly. It's just that the statement following each 'case' is a subroutine call instead of the actual statement.
  5. The HLSL compiler uses the same mechanism for the GS as for the other stages - it outputs to virtual registers. The hardware driver will then optimize that based on the underlying architecture.
  6. TL;DR - the HLSL compiler will pad it all by itself.  [EDIT] There is no (spoon) buffer.
     The common problem with CB layout is updating it from C++ code. That's where you need to be careful, since the alignment and packing rules are different for C++ and HLSL structs.
     It doesn't really matter for inter-stage data. The layout is not exposed to the user - conceptually, it's not even a buffer. The HLSL compiler will decompose your struct and assign input and output registers to each struct member. There are further driver-specific optimizations that affect how communication between shaders happens. Don't worry about it - the compiler will make an optimized decision for you.
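     To illustrate the C++-side pitfall mentioned above: HLSL packs cbuffer members into 16-byte registers, and a vector never straddles a register boundary, so the mirror struct on the C++ side usually needs explicit padding. A sketch with made-up names:
       // Hypothetical HLSL cbuffer being mirrored:
       //   cbuffer PerLight : register(b0)
       //   {
       //       float3 lightDir;   // first 12 bytes of a 16-byte register
       //       float  intensity;  // packs into the remaining 4 bytes
       //       float3 lightColor; // starts a new register; 4 bytes of implicit padding follow
       //   };
       struct PerLight
       {
           float lightDir[3];   // float3
           float intensity;     // shares a register with lightDir
           float lightColor[3]; // float3
           float pad0;          // explicit padding so the sizes match
       };
       static_assert(sizeof(PerLight) % 16 == 0, "CB size must be a multiple of 16 bytes");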
  7. JSON is one way to go. In fact, it's exactly what we are using at work for our research framework. We chose JSON because it is human-readable. We don't store actual models there - just model filenames with the relevant data (position/rotation/scaling). We also store light-source information, camera definitions and some other parameters. The main drawback is that parsing text files is slow. If you plan to store actual model data there (i.e. vertices), I would advise you to come up with a binary format. It will make a huge difference in loading time when loading a scene with a large number of vertices.
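     As a sketch of how simple such a binary format can be (all names here are made up), a fixed header followed by the raw vertex blob is often enough - loading becomes a couple of fread() calls instead of text parsing:
       #include <cstdint>
       #include <cstdio>

       // Hypothetical minimal binary mesh format: header + raw vertex array.
       struct MeshHeader
       {
           uint32_t magic;        // sanity-check tag, e.g. "MSH0"
           uint32_t vertexCount;  // number of vertices in the blob
           uint32_t vertexStride; // size of one vertex in bytes
       };

       bool WriteMesh(const char* path, const void* vertices, uint32_t count, uint32_t stride)
       {
           FILE* f = fopen(path, "wb");
           if (!f) return false;
           MeshHeader header = { 0x3048534D /* "MSH0" */, count, stride };
           fwrite(&header, sizeof(header), 1, f);
           fwrite(vertices, stride, count, f);
           fclose(f);
           return true;
       }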
  8. Think of the operations you perform in terms of commands. You react to the UI by creating such commands, then dispatch them to a processor class which reacts to them. You keep a stack of all the commands you processed. Each command should encapsulate enough data to allow you to reverse it (for example, a translation should store the origin, an add-object command should store a reference to the added object, etc.). Once you have that architecture in place, implementing undo and redo is straightforward.
     Define map? That really depends on your engine and the features you support. I started to write an answer, but figured it's a topic of its own, so be more specific about what you want to export.
     Hmmm.... Any particular reason you want to move faces around? That's what we have Maya for. It's not difficult: implement picking to detect which face/vertex was chosen, then update the corresponding vertices' locations in the vertex buffer. Take a look here - http://ogldev.atspace.co.uk/www/tutorial29/tutorial29.html.
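     A minimal sketch of that command architecture (class and member names are made up):
       #include <memory>
       #include <vector>

       // Each command knows how to apply itself and how to reverse itself.
       class ICommand
       {
       public:
           virtual ~ICommand() = default;
           virtual void Execute() = 0;
           virtual void Undo() = 0;
       };

       class CommandProcessor
       {
       public:
           void Dispatch(std::unique_ptr<ICommand> cmd)
           {
               cmd->Execute();
               mUndoStack.push_back(std::move(cmd));
               mRedoStack.clear(); // a new edit invalidates the redo history
           }
           void Undo()
           {
               if (mUndoStack.empty()) return;
               mUndoStack.back()->Undo();
               mRedoStack.push_back(std::move(mUndoStack.back()));
               mUndoStack.pop_back();
           }
           void Redo()
           {
               if (mRedoStack.empty()) return;
               mRedoStack.back()->Execute();
               mUndoStack.push_back(std::move(mRedoStack.back()));
               mRedoStack.pop_back();
           }
       private:
           std::vector<std::unique_ptr<ICommand>> mUndoStack;
           std::vector<std::unique_ptr<ICommand>> mRedoStack;
       };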
  9. Well, you should benchmark it. But in terms of bandwidth, writing 4 16-bit values is double the bandwidth of writing 2 16-bit values. The same goes for fetching the data - you'll read two fewer values (though the compiler might realize that you only use the red and alpha channels and optimize the fetch). Another option to reduce the write bandwidth is to use the blend-state write-mask and mask out the green and blue channels.
     That's only helpful if you are bandwidth-limited. If you are compute-limited, then it's probably not worth the trouble.
     If you are referring to how to configure the pipeline - when you create the blend state you can set different blend operators for each render-target. MSDN has more info.
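     A sketch of the write-mask idea (pDevice is a hypothetical ID3D11Device*); only the red and alpha channels get written:
       D3D11_BLEND_DESC desc = {};
       desc.RenderTarget[0].BlendEnable = FALSE;
       // The fields below are ignored while blending is disabled, but should hold valid values.
       desc.RenderTarget[0].SrcBlend = D3D11_BLEND_ONE;
       desc.RenderTarget[0].DestBlend = D3D11_BLEND_ZERO;
       desc.RenderTarget[0].BlendOp = D3D11_BLEND_OP_ADD;
       desc.RenderTarget[0].SrcBlendAlpha = D3D11_BLEND_ONE;
       desc.RenderTarget[0].DestBlendAlpha = D3D11_BLEND_ZERO;
       desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_ADD;
       desc.RenderTarget[0].RenderTargetWriteMask =
           D3D11_COLOR_WRITE_ENABLE_RED | D3D11_COLOR_WRITE_ENABLE_ALPHA; // mask out G and B
       ID3D11BlendState* pWriteMaskState = nullptr;
       pDevice->CreateBlendState(&desc, &pWriteMaskState);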
  10. DX11 Frame Time Swap Throttled

    Is VSYNC on in your app? My guess would be that this is the time DXGI waits until the next VSYNC. I can't think of another reason why D3D would throttle the swap operation.
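      One quick way to test that theory (pSwapChain is your IDXGISwapChain*): present with SyncInterval 0 and see whether the throttled time disappears:
        pSwapChain->Present(0, 0); // no sync - present immediately
        // vs.
        pSwapChain->Present(1, 0); // wait for the next vertical blank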
  11. You can use multiple render-targets:
      - Bind 2 DXGI_FORMAT_R16_FLOAT render-targets.
      - Bind a blend desc with (IndependentBlendEnable == true), where for one RT you use the MAX blend operator and for the other RT you use MIN.
      - In your shader, output the same value to both render-targets.
      Populating the min-max buffers this way is probably more efficient, since it consumes less bandwidth. It has the drawback that when you read from the buffers you need two sample operations, but that will still consume less bandwidth than an RGBA texture.
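      A sketch of that blend setup (pDevice is a hypothetical ID3D11Device*) - RT0 accumulates the per-pixel maximum, RT1 the minimum. With D3D11_BLEND_OP_MIN/MAX the blend factors are ignored, but they still need valid enum values:
        D3D11_BLEND_DESC desc = {};
        desc.IndependentBlendEnable = TRUE;
        for (int i = 0; i < 8; i++)
        {
            D3D11_RENDER_TARGET_BLEND_DESC& rt = desc.RenderTarget[i];
            rt.BlendEnable = (i < 2); // only the two min-max targets blend
            rt.SrcBlend = rt.SrcBlendAlpha = D3D11_BLEND_ONE;
            rt.DestBlend = rt.DestBlendAlpha = D3D11_BLEND_ONE;
            rt.BlendOp = rt.BlendOpAlpha = D3D11_BLEND_OP_ADD;
            rt.RenderTargetWriteMask = D3D11_COLOR_WRITE_ENABLE_ALL;
        }
        desc.RenderTarget[0].BlendOp = desc.RenderTarget[0].BlendOpAlpha = D3D11_BLEND_OP_MAX;
        desc.RenderTarget[1].BlendOp = desc.RenderTarget[1].BlendOpAlpha = D3D11_BLEND_OP_MIN;
        ID3D11BlendState* pMinMaxBlend = nullptr;
        pDevice->CreateBlendState(&desc, &pMinMaxBlend);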
  12. Is there a way to specify an explicit uniform buffer location in GLSL? I am aware of the "layout (binding = 0)" specifier, but that's not what I'm looking for. What I want is something like:
        // GLSL code
        layout (location = 4) uniform SomeBuffer
        {
            .....
        };
        // C code
        assert(4 == glGetUniformBlockIndex(ProgramID, "SomeBuffer"));
      But layout(location) doesn't seem to work for uniform buffers. Is there a way to do it?
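      For context (a sketch, not an answer to the location question): the block index itself is assigned by the GLSL linker, but the binding point a block uses can be set explicitly from C with glUniformBlockBinding (SomeBufferID is a made-up UBO name):
        GLuint blockIndex = glGetUniformBlockIndex(ProgramID, "SomeBuffer");
        glUniformBlockBinding(ProgramID, blockIndex, 4);      // route the block to binding point 4
        glBindBufferBase(GL_UNIFORM_BUFFER, 4, SomeBufferID); // attach the UBO there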
  13. Sorry, my original post was unclear - I was talking about sampler objects, not sampler parameters. Sampler objects are bound to a texture unit.
  14. I'm porting a DX rendering framework to OpenGL. The DX framework follows the DX API - there is no relation between the samplers and the textures, and binding them to shaders is done separately. Pseudo-code for the DX framework looks like:
        pProgram->SetSampler(sampler_shader_var_name, pSomeSampler);
        pProgram->SetTexture(texture_shader_var_name, pSomeTexture);
      So from the user's perspective, both are just another type of shader variable.
      I'm trying to achieve the same simplicity using OpenGL, but the way GL works complicates things. The problem is that samplers are not bound to shader variables, but to a texture unit. Once the sampler is bound to the texture unit, it affects all textures bound to the same texture unit. The simplest solution is to let the user manage texture units on their own, but that means losing the abstraction. Another solution is to create a pseudo-sampler state (not a GL object), let the user bind it to a texture, and change the texture's sampler parameters. This is not good since it means I can only use one sampler at a time with each texture.
      I have a bunch of other solutions, but nothing as clean as the DX code.
      Well, I'm stuck. Spent the last 4 hours thinking, coding, deleting and vice-versa. Any advice on a clean way to do it?
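      One possible shape for keeping the DX-style interface (a sketch under the assumption that each sampler variable can be paired with a texture unit; GetOrAssignTextureUnit and the member names are made up):
        void Program::SetTexture(const char* varName, GLuint texture)
        {
            GLint unit = GetOrAssignTextureUnit(varName);                 // hypothetical helper
            glUseProgram(mProgramID);
            glUniform1i(glGetUniformLocation(mProgramID, varName), unit); // point the shader variable at the unit
            glActiveTexture(GL_TEXTURE0 + unit);
            glBindTexture(GL_TEXTURE_2D, texture);                        // assumes a 2D texture
        }

        void Program::SetSampler(const char* varName, GLuint sampler)
        {
            GLint unit = GetOrAssignTextureUnit(varName); // same unit as the paired texture
            glBindSampler(unit, sampler);                 // the sampler object overrides the texture's own parameters
        }
      This keeps the user-facing calls identical to the DX path, at the cost of a reflection step that fixes a texture unit per shader variable.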
  15. Writing an OpenGL backend for a framework we use at work. I have an FBO class there which I also want to use for the default FBO, and I need the default FBO's description (formats, size, sample count). It's mainly for completeness.
      Guess I'll have to go the long way to get this info.
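      The long way, roughly (a sketch: bind the default framebuffer and query each attachment):
        GLint redBits = 0, depthBits = 0, samples = 0, viewport[4] = {};
        glBindFramebuffer(GL_FRAMEBUFFER, 0); // the default framebuffer
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_BACK_LEFT,
            GL_FRAMEBUFFER_ATTACHMENT_RED_SIZE, &redBits);     // repeat for the other channels
        glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_DEPTH,
            GL_FRAMEBUFFER_ATTACHMENT_DEPTH_SIZE, &depthBits);
        glGetIntegerv(GL_SAMPLES, &samples);
        glGetIntegerv(GL_VIEWPORT, viewport); // drawable size, if the viewport was never changed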