
Community Reputation

646 Good

About krippy2k8

  1. Problem with orthographic projection

    Grrr. I had already tried transposing the matrix and it didn't work. Somehow the Z of 0 didn't even occur to me. Transposing and using a Z of 1 works.   Thanks.
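For anyone who finds this thread later, a short note on why the transpose is needed: XMMATRIX stores matrices row-major, but HLSL packs constant-buffer matrices column-major by default, so the shader reads the uploaded bytes as the transposed matrix; uploading XMMatrixTranspose(m) (or declaring the matrices row_major in the cbuffer) compensates. The Z of 0 fails separately because the near plane is at 0.1, so z = 0 is clipped. A minimal plain-C++ sketch of the underlying identity (helper names are mine, not DirectXMath):

```cpp
#include <array>
#include <cstddef>

using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>; // row-major storage, like XMMATRIX

// Row-vector product v * M: what mul(vertex.pos, worldMatrix) means
// when the matrix bytes are interpreted row-major.
Vec4 mulRowVector(const Vec4& v, const Mat4& m) {
    Vec4 r{};
    for (std::size_t c = 0; c < 4; ++c)
        for (std::size_t k = 0; k < 4; ++k)
            r[c] += v[k] * m[k][c];
    return r;
}

// Column-vector product M * v: what the shader effectively computes
// when the same bytes are read column-major.
Vec4 mulColVector(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (std::size_t row = 0; row < 4; ++row)
        for (std::size_t k = 0; k < 4; ++k)
            r[row] += m[row][k] * v[k];
    return r;
}

Mat4 transpose(const Mat4& m) {
    Mat4 t{};
    for (std::size_t r = 0; r < 4; ++r)
        for (std::size_t c = 0; c < 4; ++c)
            t[c][r] = m[r][c];
    return t;
}
```

Since transpose(M) * v equals v * M component by component, uploading the transpose makes the shader's column-major read recover the product the CPU side intended.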
  2. Problem with orthographic projection

    Okay, so XMMatrixOrthographicOffCenterLH seems to work in a way that I don't quite understand. If I use XMMatrixOrthographicLH it works fine, or if I set the values of XMMatrixOrthographicOffCenterLH  to -300, 300, -200, 200, then it works exactly the same as XMMatrixOrthographicLH. If I set the values to 0, 600, 0, 400, I get some really funky results. I understand why the first one works the way it does, but it doesn't seem like the second one should give me the results that I am getting.
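A note for later readers on what the off-center variant is doing: each axis of the left/right/bottom/top box maps linearly onto NDC [-1, 1], per the formula documented for D3DXMatrixOrthoOffCenterLH / XMMatrixOrthographicOffCenterLH. So (-300, 300, -200, 200) keeps the origin at the center (which is why it matches XMMatrixOrthographicLH), while (0, 600, 0, 400) puts the origin at the bottom-left corner and expects vertices in pixel units; unit-sized clip-style vertices then collapse into a sliver near (-1, -1). A one-axis sketch in plain C++ (function name is mine):

```cpp
// NDC coordinate produced along one axis of an off-center orthographic
// projection: the range lo..hi maps linearly onto -1..1,
// i.e. x' = 2x/(hi - lo) + (lo + hi)/(lo - hi).
float toNdc(float v, float lo, float hi) {
    return v * 2.0f / (hi - lo) + (lo + hi) / (lo - hi);
}
```

With (0, 600): toNdc(0) is -1 (origin on the left edge) and toNdc(1) is about -0.997, so the whole +/-1 triangle lands within a pixel of the corner; with (-300, 300): toNdc(0) is 0 and pixel 300 would be needed to reach the right edge.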
  3. Problem with orthographic projection

    Well I found that if I use D3DXMATRIX and D3DXMatrixOrthoLH instead, it works. I also have to change the vertices to use pixel values, which is what I expect, but which didn't work the other way either.   Perhaps some kind of alignment issue?
  4. So I'm looking into doing some 2D graphics with Direct3D 11. I am having a problem getting a simple orthographic projection to work right, and I am hoping that somebody can tell me what I may be doing wrong, or ideas of how to figure it out. My vertex layout is a simple position/color layout, and the vertices are these: [CODE]
{ 0.0f,  1.0f, 0.0f,   1.0f, 0.0f, 0.0f, 1.0f},
{ 1.0f, -1.0f, 0.0f,   0.0f, 1.0f, 0.0f, 1.0f},
{-1.0f, -1.0f, 0.0f,   0.0f, 0.0f, 1.0f, 1.0f}
[/CODE] This is my vertex shader: [CODE]
cbuffer cbChangesPerFrame : register( b0 )
{
    matrix worldMatrix;
    matrix viewMatrix;
    matrix projectionMatrix;
};

struct VS_Input
{
    float4 pos   : POSITION;
    float4 color : COLOR;
};

struct VS_Output
{
    float4 pos   : SV_POSITION;
    float4 color : COLOR;
};

VS_Output main( VS_Input vertex )
{
    VS_Output vsOut = (VS_Output)0;
    vsOut.pos = mul(vertex.pos, worldMatrix);
    vsOut.pos = mul(vsOut.pos, viewMatrix);
    vsOut.pos = mul(vsOut.pos, projectionMatrix);
    vsOut.color = vertex.color;
    return vsOut;
}
[/CODE] And this is my code to set up the constant buffer: [CODE]
XMMATRIX all[3];
all[0] = XMMatrixIdentity();
all[1] = XMMatrixIdentity();
all[2] = XMMatrixOrthographicOffCenterLH( 0.0f, 600.0f, 0.0f, 400.0f, 0.1f, 100.0f );
ctx->UpdateSubresource( _constants, 0, 0, all, sizeof(XMMATRIX) * 3, 0 );
ctx->VSSetConstantBuffers( 0, 1, &_constants );
[/CODE] This results in a single-pixel red line being drawn from the top center of the screen to about the center of the screen. If I change my matrices to the following: [CODE]
XMMATRIX all[3];
all[0] = XMMatrixIdentity();
all[1] = XMMatrixIdentity();
all[2] = XMMatrixIdentity();
[/CODE] then the triangle renders as I expect: one point at the top center and the other 2 points at the bottom corners, with a red-green-blue gradient. So it would appear that the vertices are being correctly pushed to the shaders.
I have tried explicitly setting the .w position value to 1 in the vertex shader; I have tried using 4-element position values in the code; I have tried using a single matrix in the shader and multiplying the matrices in code; and I have tried using a single matrix in the shader and just setting it to the projection matrix. All of these should be equivalent, I figure, since I'm using identity for the other 2 matrices anyway, and of course the result is the same in all cases. What could I be missing here? Thanks!
  5. Windows Desktop Manager

    Basically, no, you can't do what you want. Though I admit I cannot say that I am 100% certain of that (though I AM sure that you can't do all of it), I am pretty sure. Some food for thought, some of which will answer some of your questions, and some of which may be useful in your quest if you continue to pursue it:
1) It's DWM (Desktop Window Manager), not WDM. Not being pedantic; I just thought it might help your searches if you use the right terminology ;)
2) When you set up a swap chain in Direct3D windowed mode, DirectX coordinates with DWM to provide the surface that you are rendering to, which is the DWM surface. This is a video memory surface that is owned by DWM.exe but which is shared across process boundaries by virtue of the WDDM (Windows Display Driver Model). When you Present, the DWM is notified that the surface is "dirty" so it can be recomposited on the primary surface.
3) Preserving the alpha values would not help you do what you want anyway, because you don't have control of the shaders that are used when compositing your window on the primary surface.
  6. Personally I would argue that both approaches are bad practice for an entity system. An entity in a pure entity system should neither "be" nor "contain" a collection of components; it is merely an identifier that is used to associate components with each other. All of your components (which should just be data) should be stored in a database that is keyed by the entity identifier, and then your various systems query the database for the required components. Using an entity class that contains the components makes it difficult to obtain the performance advantages of a pure entity system. That being said, given only your 2 choices I would definitely go with composition. If anything, just inherit from NetworkObject. It seems conceivable to me that NetworkObjects can also be property containers and also be serializable. If they are not already, that is a big potential design complication that could raise its head in the future. An entity system by its very nature is supposed to be all about composition and avoiding the rigidity of OOP, so making the system itself so heavily dependent on inheritance seems pretty backwards to me.
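The "database keyed by the entity identifier" idea described above can be sketched like this; names and layout are illustrative only, not from any particular library:

```cpp
#include <cstdint>
#include <unordered_map>

using Entity = std::uint32_t; // an entity is just an identifier

// Components are plain data.
struct Position { float x, y; };
struct Velocity { float dx, dy; };

// The "database": one table per component type, keyed by entity id.
struct World {
    std::unordered_map<Entity, Position> positions;
    std::unordered_map<Entity, Velocity> velocities;
    Entity nextId = 0;
    Entity create() { return nextId++; }
};

// A system queries the tables for the components it needs; entities never
// "contain" anything themselves.
void movementSystem(World& w, float dt) {
    for (auto& [id, vel] : w.velocities) {
        auto it = w.positions.find(id);
        if (it != w.positions.end()) {
            it->second.x += vel.dx * dt;
            it->second.y += vel.dy * dt;
        }
    }
}
```

A performance-focused implementation would use contiguous arrays rather than hash maps; the point here is only the association-by-id structure.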
  7. Communicating with Programmers

    Generally speaking, I think the bigger concern is the other way around. As a programmer who has spent many a day dealing with artists, the biggest barriers to communication have arisen when I didn't understand the content creation process well enough and was unable to effectively communicate the necessary constraints or requirements, or to fully understand the implications of such from their perspective. So I got myself a subscription to Digital Tutors and spent a couple of months learning what I could about developing game assets with Max, Maya, ZBrush and MotionBuilder, and typically now I spend at least a couple of days a month doing the same, and the communication has become much, much easier. This is probably not something that every programmer will be able to or want to do, but as somebody else mentioned, there should be one person on the programming team who can act as liaison between the technical and artistic sides, and it would be a good idea for that person to have a reasonable understanding of the content creation process.
  8. [quote name='SiCrane' timestamp='1349639685' post='4987759'] Am I the only one seeing the C# tag on this thread? (This is not a rhetorical question, I'm severely confused by the number of C++ specific responses here.) [/quote] There is no C# "prefix" so I guess most people - like me - don't really look at the tags and just assume that every question is C++ unless the OP mentions another language in their question, especially when the topic is something that is essentially the same in most languages anyway.
  9. SFINAE equivalent in C#?

    [quote name='Slavik81' timestamp='1349110169' post='4985793'] That would be a reasonable solution if it did not have such a dramatic impact on compilation time, and if it did not require major workarounds to apply to virtual functions. There are definite advantages to the restrictions C# imposes on its generics. [/quote] That should have little impact on compile times. Certainly not "dramatic." And yes, there are definite advantages to the restrictions C# imposes on its generics. There are also definite disadvantages. A major design focus behind C# is in fact compile times and convenience, whereas C++ focuses more on flexibility and run-time performance. Template specialization is a big part of that; it's certainly not a design flaw. And the workarounds you're talking about with regard to virtual functions are typically only necessary if you use questionable design choices in your own code, from the perspective of C++: modern C++ absolutely favors generics and generic algorithms over inheritance and virtual functions, especially when it comes to operating on template types.
  10. SFINAE equivalent in C#?

    [quote name='Slavik81' timestamp='1348884726' post='4984931'] I could expect it, given that the two objects should have both identical code and data, and one has an interface that is a subset of the other. The fact that it's so difficult to use one in place of the other is just a limitation imposed by poor design choices in the C++ template system. [/quote] Actually it is the result of a powerful C++ design choice: template specializations. [quote name='Slavik81' timestamp='1348884726' post='4984931'] If C++ disallowed atrocities like this, the types would be compatible: [CODE]
struct MyObject{};

template<typename T>
struct MyTemplateClass
{
    T x;
};

template<>
struct MyTemplateClass<const MyObject*>{};

int main()
{
    MyObject obj;
    MyTemplateClass<MyObject*> i1;
    i1.x = &obj;
    MyTemplateClass<const MyObject*> i2;
    i2.x = &obj; // compile error
}
[/CODE] [/quote] It seems it would be a pretty silly tradeoff to disallow template specializations in order to allow implicit template type conversions, particularly when the solution to your original problem is so simple: [CODE]
template<typename ContainerT>
void function(const ContainerT&){}
[/CODE]
  11. Given your current constraints, what I would do is make each control responsible for adding itself, i.e.: [CODE]
class Control
{
public:
    virtual void addToWindow( KBMWindow& window )
    {
        throw std::logic_error("This control type not implemented for KBM.");
    }
    virtual void addToWindow( TouchWindow& window )
    {
        throw std::logic_error("This control type not implemented for Touch.");
    }
};

class Button: public Control
{
public:
    virtual void addToWindow( KBMWindow& window )
    {
        window.uiWindow().addControl( getKBMButton() );
    }
    virtual void addToWindow( TouchWindow& window )
    {
        window.uiWindow().addControl( getTouchButton() );
    }
};

class KBMWindow: public Window
{
    KBM::Window m_window;
public:
    void addControl(Control *ctrl) { ctrl->addToWindow(*this); }
    KBM::Window& uiWindow(){ return m_window; }
};

class TouchWindow: public Window
{
    Touch::Window m_window;
public:
    void addControl(Control *ctrl) { ctrl->addToWindow(*this); }
    Touch::Window& uiWindow(){ return m_window; }
};
[/CODE]
  12. [quote name='ATC' timestamp='1347169738' post='4978188'] Anyone know where I can find a full list of valid SlimDX/D3D10 InputLayout strings (e.g., "POSITION", "COLOR", "NORMAL", etc)? [/quote] [url=""]On MSDN[/url]
  13. [quote name='fanaticlatic' timestamp='1347135440' post='4978090'] On a separate point I would ideally like to drop the WinMain call into the Core library rather than have platform specific stuff in the Application project. From what I have read this is possible by defining the /Entry in the VS project settings although I am not sure what this should be set as, for the Core static lib or the Application or both. [/quote] Personally I think that's a bad idea. What you gain from hiding the entry points in your library is marginal at best, and disruptive at worst. You'll be taking control away from your users. What if they don't want to instantiate your application framework at the beginning of WinMain? Then they have to modify your library. It's also going to force them to set strange linker settings on every platform in order to properly link the engine if they decide to create new projects for it manually. Ideally your WinMain would just be something like this: [CODE]
int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow)
{
    Application app;
    app.init( hInstance, lpCmdLine );
    return 0;
}
[/CODE] With perhaps platform-specific versions of Application::init. What would you actually gain from hiding this from your users?
  14. Simple & Robust OOP vs Performance

    [quote name='L. Spiro' timestamp='1347075675' post='4977887'] For iOS, for example, profiling revealed 2 functions to be high on the time-consuming list (Xcode comes with “Instruments” that show you how long every function takes) and I was able to reduce the time spent inside those functions by half by reversing the order of the loop to count down to 0. [/quote] Yeah, I would pay to see that happen. If you actually had that result, then either there was something in your loop that was early-outing sooner by counting down, or you were doing something in your comparison that resulted in a calculation on each loop iteration (i.e. i < something.size()). Either that or you were using some kind of interpreted or intermediate language that did strange things. Otherwise it is physically impossible to get that result with native machine code. I can't speak for Java, but I would really have to see it to believe that it would have anything more than an extremely negligible effect. The difference between iterating up and comparing against a memory address, and iterating down and comparing to zero, is exactly 1 test instruction on all ARM and x86 platforms with any reasonable compiler, plus an extremely small memory latency issue which is definitely going to be a cache hit on every single iteration except possibly the first, unless your loop is doing so much work that the iteration method couldn't even factor into the equation. I just did a quick benchmark on 4 different systems with different CPUs (Windows and OS X, AMD and Intel), counting up and counting down through 2 billion iterations, and ran the test 10,000 times; the worst-case difference was 0.3%. This is with the internals of the loop doing nothing more than a simple addition accumulation and rollover. It is true that it is usually going to be slightly faster to iterate down to 0, but the difference is only going to matter in the most extreme cases.
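The two loop shapes being compared above can be sketched like this (plain C++, function names are mine). With any reasonable optimizer they differ by at most one compare per iteration; the trap that actually produces large differences is a non-trivial bound such as something.size() re-evaluated on every iteration when the compiler cannot hoist it:

```cpp
#include <cstddef>
#include <vector>

// Counting up, comparing i against a bound cached once before the loop.
long long sumUp(const std::vector<int>& v) {
    long long s = 0;
    for (std::size_t i = 0, n = v.size(); i < n; ++i)
        s += v[i];
    return s;
}

// Counting down to zero; the loop condition becomes a compare against an
// immediate zero, often folded into the decrement's flags.
long long sumDown(const std::vector<int>& v) {
    long long s = 0;
    for (std::size_t i = v.size(); i-- > 0; )
        s += v[i];
    return s;
}
```

Both visit every element exactly once and produce identical results; only the direction of iteration (and thus the compare) differs.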
  15. Simple & Robust OOP vs Performance

    [quote name='rockseller' timestamp='1346972383' post='4977405'] Unless I'm wrong, having 1 class with 100 methods, over 1 class, and 10 subclasses, each one having 10 methods, will result in 1 object vs 11 objects What do you mean with sensible design? [/quote] 1) Unless you're spawning many monsters every second, the overhead of spawning 11 objects per monster instead of just 1 is negligible and typically not worth compromising your design for. This is in fact a perfect example of why you shouldn't engage in these types of "optimizations" without analyzing your bottlenecks first. If it has no impact on the overall performance of your application, it is a useless optimization.
2) When you have something like MonsterAttackModule, if it is primarily just a collection of methods and doesn't manage state, you can usually get away with having only a single MonsterAttackModule instance per monster type, thus incurring no additional allocation overhead per monster. Or it could perhaps be just a collection of static methods, thus incurring no allocation overhead at all.
3) Paying more attention to your higher-level design can produce performance benefits many orders of magnitude greater than micro-optimizations. Compromising such a design in any way in favor of micro-optimizations without profiling will almost always hurt you in the long run. This doesn't mean that you should never think about low-level performance issues as you write your code; you should, but only where you know it will make a [b]significant and noticeable[/b] difference, or where it won't compromise your design in any meaningful way. I.e. block-copying memory instead of copying memory a byte at a time is always a good idea, because it has a significant performance impact and doesn't usually result in any significant design compromises. Doing things like string1.append(string2) instead of string1 = string1 + string2 is also usually a good idea, as it eliminates the creation of temporary objects and has no impact on your design. Creating a monolithic class with 100 methods because you think it's faster to allocate is definitely not a good choice for a premature optimization.
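On the string point: in C++ specifically, string1 += string2 already calls append and creates no temporary; the temporary shows up with string1 = string1 + string2 (and in languages with immutable strings, where a builder type is the analogous fix). A small sketch of the two forms (helper names are mine):

```cpp
#include <string>

// Appends in place; no temporary string is constructed.
void appendInPlace(std::string& dst, const std::string& src) {
    dst.append(src);
}

// Builds a temporary holding the concatenation, then assigns it back;
// this is the form that pays for an extra allocation and copy.
void appendViaTemporary(std::string& dst, const std::string& src) {
    dst = dst + src;
}
```

Both produce the same final string; only the intermediate allocation differs.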