Erius

  1. Object Management?

    I find http://www.helixsoft.nl/articles/circle/sincos.htm explains nicely what cos and sin can do for motion. Hopefully it's not a secret duplicate of any of the answers.
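The core of that article fits in a couple of lines. A minimal sketch (the function and type names here are my own, not from the article) of placing a point on a circle with cos/sin:

```cpp
#include <cassert>
#include <cmath>

struct Point { float x, y; };

// Position on a circle of 'radius' around (cx, cy) at 'angle' (radians):
// cos gives the x offset, sin the y offset. Advancing the angle each
// frame moves the point along the circle.
Point onCircle(float cx, float cy, float radius, float angle)
{
    return { cx + radius * std::cos(angle), cy + radius * std::sin(angle) };
}
```

At angle 0 the point sits at (cx + radius, cy); whether increasing the angle moves it clockwise or counter-clockwise depends on whether your y axis points up or down.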
  2. @Hodgman: Thank you for your reply; I might adapt my system to it, since it already has some vague similarities. My very small and humble system also uses commands, but it's the meshes that carry them, and the meshes are filled by the two simple components so far: window rendering and static mesh rendering.

It goes like this: my render device can only do one thing when it comes to rendering: clear, accept a list of meshes, flip. Each mesh has vertex and index data, and a material. A material has resources (or handles to them; the actual objects live in a library that gets filled at load time) like vertex and pixel shaders, a vertex declaration, a constant table linked to variable emitters, info about which data streams of the mesh are in use, and so on. To actually produce data, a system like the window renderer takes a mesh, fills it with mesh data according to the states of the windows using it, sets commands like "Clear the Z buffer" or "Change view to orthogonal" for the material script, and sends the mesh on its merry way. It's a bit awkward to configure, but it works.

I'm not sure if this 'passthrough' way of doing it is a good idea. In essence, that window rendering system is just a mutator; it doesn't have much state of its own... does that fall under that gypsy wagon/poltergeist thing? But again, thanks for the input, that looks a lot more structured than mine...

Edit: Hm, so no global state like that? Interesting. Makes sense though: smaller Lego blocks that can form many things, including a Duplo block like the OrthoMatrix thing, are better than using Duplos only.
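The "commands on the mesh" idea above could be sketched roughly like this; everything here (Mesh, Device, the string commands, the log) is invented for illustration, not taken from any real API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// A mesh carries state commands alongside its data; the device just
// walks the list and applies each command before drawing the mesh.
struct Mesh {
    std::vector<float> vertices;
    std::vector<std::string> commands; // e.g. "clear_zbuffer", "ortho_view"
};

struct Device {
    std::vector<std::string> log; // records what the device executed

    void render(const std::vector<Mesh>& meshes) {
        log.push_back("clear");
        for (const Mesh& m : meshes) {
            for (const std::string& c : m.commands)
                log.push_back(c);  // apply the mesh's state commands
            log.push_back("draw"); // then draw the mesh itself
        }
        log.push_back("flip");
    }
};
```

The device stays dumb (clear, draw, flip); whoever fills the mesh decides what state it needs, which is the passthrough shape described above.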
  3. Sorry for the vague title, my whole question wouldn't fit. It is: what's the point of having a single render device, when so many things need different handling?

I'm trying to implement my own rendering system, really basic, Dx9 and all that, and from overhauling some of my old code I've found that the more I keep the responsibilities of objects/functions separate, the easier I can maintain and extend things, which is pretty much what people recommend anyway. That means: avoid monolithic do-it-all god classes, right? Yet isn't a render device pretty much monolithic, if one wants it to be actually substantial and not just some sort of delegate that shoves data between specialized components? Or is that the point of one, after all?

So, isn't it better to have things like "GuiWindowRenderer", "FontRenderer", and "3dRenderer" that could act as plugins, or components, for a big class? Or would it be better to completely split those things and use them on their own, when one wants to render a GuiWindow instead of an actual in-game object? The only need for a RenderDevice I can see that way would be a common framework-like thing that accesses backbuffers and flips them, but in that case the render device is not the main hero, it's the shadow sidekick.

So, what are the "rendering system" blocks one can see in engine design diagrams, really? I'm aware that that question is kind of a bad one, since a monolithic, rigid, spaghetti-like monster class could do it all and still pass as a rendering system, but let's say we don't want it that way... Instead, would you make the components the stars, and have the device be some sort of low-level widget? Or have the components act more like scripts, or plugins, that anything which wants to be rendered can use? With 2D GUI rendering, for example, if it were a script, it could set the device to an orthogonal view and then feed it a stream of vertices and material data. Then, when it's time to render a scene object, a different view would be used, and so on.

In short, I'm having trouble designing a layout plan for responsibilities: what should depend on what, etc. Any resources that focus on that particular topic? (Again, I apologize for the vagueness; please act as if this topic never happened if it's too vague for you. I don't want people to feel they put effort into a reply in vain.)
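One way the "device as shadow sidekick" option might look, sketched with invented names: the device only collects submissions and flips, while each component renderer sets up the view it needs:

```cpp
#include <cassert>
#include <string>
#include <vector>

// The device owns only buffer management; it has no idea what a window
// or a scene is. (The string command buffer is a stand-in for real GPU work.)
struct RenderDevice {
    std::vector<std::string> frame;
    void submit(const std::string& cmd) { frame.push_back(cmd); }
    void flip()                         { frame.clear(); } // present + reset
};

// Specialized renderers are small components that use the device.
struct IRenderer {
    virtual ~IRenderer() = default;
    virtual void render(RenderDevice& dev) = 0;
};

struct GuiWindowRenderer : IRenderer {
    void render(RenderDevice& dev) override {
        dev.submit("set_ortho_view"); // 2D GUI wants an orthogonal view
        dev.submit("draw_gui_quads");
    }
};

struct SceneRenderer : IRenderer {
    void render(RenderDevice& dev) override {
        dev.submit("set_perspective_view");
        dev.submit("draw_scene_meshes");
    }
};
```

Here the components are the stars: adding a FontRenderer is just another IRenderer, and nothing in RenderDevice changes.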
  4. I've been using Qi for some time now, and I'm pretty good at writing parsers for clearly laid out structures, but for part of my current project I decided to cobble together a quick (originally planned as quick; I've been sitting on it for half an hour now) grammar that takes a text file and stuffs strings into a vector<string>. Example:

[code]
class foo : public qi::grammar<char*, std::vector<std::string>()>
{
public:
    foo();
    ~foo() {}
    qi::rule<char*, std::vector<std::string>()> start;
    qi::rule<char*, std::string()> itemA;
    qi::rule<char*> uselessStuffBetweenAandB;
    qi::rule<char*, std::string()> itemB;
};
[/code]

start is defined like this:

[code]
start %= +(itemA >> uselessStuffBetweenAandB >> itemB);
[/code]

It parses the input file correctly, but instead of pushing each item into the vector separately, it concatenates itemA and itemB. I just want to stuff it all into that vector and do my own sorting/operations on it after it has been filled. What am I doing wrong?

[code]
start %= +(itemA() >> uselessStuffBetweenAandB >> itemB());
[/code]

doesn't compile, and

[code]
start %= +(itemA[_val] >> uselessStuffBetweenAandB >> itemB);
[/code]

does nothing (it always matches the first item only before skipping to the next set). Sorry for such a simple question, but I was unable to make Google give me something concrete...

Edit: I'll just go for a specialized struct for the time being; wasting so much time on this is silly, but I'd still be thankful for an answer.
    Ah, thanks for the confirmation about atlasing and so on, for sprites, etc. In a GUI application, though, the scenario I depicted would lead to breaking up the batches to render each layer correctly, right?

Also, can anyone recommend some books that specifically deal with techniques like that? Something like a hybrid of GPU Gems and the ShaderX books, maybe? I remember reading about techniques that help reduce draw calls and all that, but I read them a long time ago... and they were all scattered, so a compilation would be really, really awesome. I don't want prefab code, per se, but more or less specialized general tips (even if that's some sort of oxymoron). For example, I think I recall reading that one could use all 8 texture 'slots' at once to save some switches, but I can't remember if the approach used 8 coordinates per vertex, or had some sort of lookup and only used one set of texture coordinates plus an index saying which texture to use. A general approach, but still specialized enough to prefer it over multiple draw calls.

Edit: It would be awesome if it also contained things like: "It's less CPU-bound to fill a VB with vertices in a specific order in one call, than to use 10 draw calls to render those 10k tris without any sorting", for example. (I pulled these figures out of my behind, but I read something like that before; again, somewhere I can't remember.) Something like a tricks-of-the-trade book for backwards people like me: slightly outdated (since I don't think the big AAA guys give out their secrets), but still better than web-tutorial/autodidact-level performance. Any recommendations?
    I'm having trouble with it, or at least trouble being sure about what I know. Let's say we have a sprite system, or a GUI system, or something else that relies on objects being basically image layers. Now let's say we have a collection of objects, each of which uses one of two textures. In the usual scenario, one would render all objects of texture one, then all of texture two, using only two DIP calls. What if they are all semi-transparent, though, and alternate in their order? Like here: [attachment=1651:example.png] (meant to demonstrate order and type only, not transparency). Wouldn't I have to render each layer separately in order to blend things properly? Sounds like a huge problem to me, unless I atlas everything, or use multitexturing, or something else I'm not all that experienced with. So my question is: am I stuck with breaking batches if all I want to/can use at this point is very, very simple shaders which pretty much emulate fixed-function rendering?
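The trade-off in the question can be made concrete with a tiny counting sketch (the Sprite struct and batching rule here are invented for illustration): sort back-to-front for correct blending, then count one draw call per run of consecutive sprites sharing a texture:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Sprite { int texture; int layer; };

// Number of draw calls needed for correctly blended output: sprites are
// drawn back-to-front (sorted by layer), and every texture switch
// between consecutive sprites forces a new batch.
int countDrawCalls(std::vector<Sprite> sprites)
{
    std::sort(sprites.begin(), sprites.end(),
              [](const Sprite& a, const Sprite& b) { return a.layer < b.layer; });
    int calls = 0;
    for (std::size_t i = 0; i < sprites.size(); ++i)
        if (i == 0 || sprites[i].texture != sprites[i - 1].texture)
            ++calls; // texture switch -> new batch
    return calls;
}
```

Four sprites alternating between two textures cost four calls after depth sorting; grouped by texture they would cost two, which is exactly the batching conflict described above.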
    Hi. I know questions a la "show me a solution" are frowned upon, but I am at a complete loss with this one. I want to implement a custom build rule that uses fxc (the DirectX shader compiler) on .vsh and .psh files. I have tried... a lot in the past few hours, pretty much everything Google threw at me, including some threads on GDNet, and I was unable to find a newbie-level explanation of what the *bleep* the bare essentials for those darn things are. I looked at the files that came with VS2010, that masm and lc target/xml/props stuff, and was unable to deduce what I need and what is just sugar. I don't want any fancy options or command-line switches; I just want to compile files, all with the same settings, from inside the IDE. MSBuild is a horrible nightmare to me; I even downloaded a third-party tool for it, hoping it would make things clearer, and it did, but not to the degree that I can strip the masm and lc files (and some Qt rules I picked up from here) down to just the single thing I want. I can compile the shaders via cmd and everything, but I have not the slightest idea how to make a build customization rule file definition, or whatever it's called.

http://www.gamedev.net/community/forums/topic.asp?topic_id=404026&whichpage=1 was the first thing I tried, a simple custom build tool, but all I got was "cannot open file <path>\.vsh". (I changed everything to the correct path, and the error message pointed at the correct path, but it seemed that right-clicking the file and choosing Compile tried to open ".vsh", not "<filename>.vsh".)

Then there is this page: http://blogs.msdn.com/b/visualstudio/archive/2010/05/26/programmatically-adding-removing-querying-vc-build-customizations.aspx It looks better to me than http://blogs.msdn.com/b/vcblog/archive/2010/04/21/quick-help-on-vs2010-custom-build-rule.aspx since that one poses the same problem to me, namely "what is candy, what is essential?". But all that just for such a simple thing? Plus... C#?

I... don't know what to think. But at least those are about VS2010, and not "hurr, use rule files!" from 1634, which seems to be, for practical purposes, outdated. This page here is fun, too: http://blogs.msdn.com/b/visualstudio/archive/2010/04/26/custom-build-steps-tools-and-events.aspx Very nicely explained; my favorite part is where he goes:

Custom Build Tool is targeted at the exceptional source item case (you have just one source item that needs this kind of command executed on it) rather than a group of files. If you're trying to set up a command that runs on all .customextension files, you should consider using a Custom Build Rule (now called Build Customization) rather than a Custom Build Tool, as that is exactly what custom build rules are for.

...and then elegantly DOES NOT explain THAT one. Searching the blog for it yielded nothing either. And those are the better options I have found; the rest was C#-related, or MSBuild-creating-project-files stuff. But I do not want to make a project... I want to execute "hurr.exe" on "*.durr" files to create "*.derp" files. Just that; no flashy property pages where I can fine-tune the amount of derp that goes in the herp. Just: if vsh, make vso. Thank you for reading; pointers are very, very appreciated, and sorry for the lack of professionalism here. Rants suck, but this is one of those things I know I will understand if I only get some foothold. Cheers...
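For what it's worth, the bare essentials may be much smaller than the masm/lc files suggest. Below is an untested sketch (the item and target names are made up, and I'm assuming fxc's /T and /Fo switches) of a plain MSBuild target pasted straight into the .vcxproj, skipping the build-customization XML/props machinery entirely:

```xml
<!-- Inside the .vcxproj, after the Microsoft.Cpp.targets import -->
<ItemGroup>
  <VertexShader Include="*.vsh" />
</ItemGroup>
<Target Name="CompileVertexShaders"
        BeforeTargets="ClCompile"
        Inputs="@(VertexShader)"
        Outputs="@(VertexShader->'%(Filename).vso')">
  <!-- fxc: /T = target profile, /Fo = output file.
       %() metadata inside the task makes MSBuild run Exec once per file. -->
  <Exec Command="fxc /nologo /T vs_3_0 /Fo &quot;%(VertexShader.Filename).vso&quot; &quot;%(VertexShader.Identity)&quot;" />
</Target>
```

The Inputs/Outputs pair gives incremental builds (only changed .vsh files are recompiled); what this sketch does not give you is the right-click Compile menu entry, which is what the full build-customization files are for.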
    Heya... I'm a little confused by this flag, since pretty much every search for topics on it comes up with: "Use it if you want to append data while preserving previous vertices." But what should I do when I need a buffer to be dynamic because, in the worst case, all vertices have to be refilled, while the majority of the time nothing changes, or only a subset of the whole thing?

I always lock the entire thing, since I have read that locks (not sure if that only applies to static buffers, though) lock the resource completely anyway. So DISCARD would require everything to be refilled each time an element changes. Using no locking flag seems to preserve all vertices, and I can pick and choose which to write, but then D3D9 complains about not using DISCARD or NOOVERWRITE. To circumvent that I can use a static buffer, but what if the worst case occurs? (This might be premature optimization, but I'm still interested in the mechanics...) NOOVERWRITE seems to be tightly bound to appending, but I don't want to append; I want to, let's say, move one single vertex in the middle of them all. Is NOOVERWRITE suitable for that?

Edit: I just found a paper from ATI with an interesting bit:

Quote: For static buffer updates the situation is trickier. First of all, as the name implies, there should be no updates if the data is static. There are situations however, when data is "almost" static and changes very infrequently. Using dynamic buffers and streaming data every frame might not be very efficient. In that case try updating data as much in advance as possible to prevent stalls on the buffers currently used for rendering.

So, should I use a static buffer instead, then? It seems that NOOVERWRITE is a flag that tells the hardware that existing data will not be corrupted, but in my scenario, I am corrupting it. [Edited by - Erius on July 30, 2010 3:37:38 AM]
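As I understand the append case, the usual dynamic-buffer pattern is a ring: NOOVERWRITE while the write cursor still fits, DISCARD once it would wrap. A sketch with the actual D3D9 calls stubbed out (the enum below stands in for the real D3DLOCK_* flags), showing only the flag-selection logic:

```cpp
#include <cassert>

// Stand-ins for the real D3DLOCK_* flags; only the selection logic matters here.
enum LockFlag { LOCK_DISCARD, LOCK_NOOVERWRITE };

// Classic dynamic-vertex-buffer ring: append with NOOVERWRITE (a promise
// not to touch vertices the GPU may still be reading) and DISCARD to get
// a fresh buffer once the write cursor would run past the end.
struct DynamicBufferCursor {
    int capacity;   // buffer size in vertices
    int cursor = 0; // next free vertex

    // Decide the lock flag for a write of 'count' vertices and advance.
    LockFlag lockFor(int count) {
        if (cursor + count > capacity) {
            cursor = count;          // wrap: orphan the old buffer, start over
            return LOCK_DISCARD;
        }
        cursor += count;
        return LOCK_NOOVERWRITE;     // append without stalling the GPU
    }
};
```

Rewriting one vertex in the middle is exactly what NOOVERWRITE does not promise, so for the "almost static" case the quoted ATI advice (update well in advance, or discard and refill) looks like the honest option.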
  9. I see...thank you. I'm going to keep a little memo about where I am using it though, just in case, hehe. Cheers!
  10. Heya. I am writing a GUI module for my project, and I like the window procedure approach Windows uses, so I copied it :P. The thing is, I defined all arguments to be unsigned ints, because so far the vast, vast majority of operations work with those. Now, maybe it's bad design (this is my first time attempting such a thing), but for the paint message I need a float to hold the time. My first thought was multiplying the value by 1000, but that produced different results (very, very small differences, but I'm a bit paranoid when it comes to timing...). So I thought about reinterpret-casting it, but I don't like using reinterpret_cast. Memcpy is iffy as heck in this case, so I went for a union. It seems to work perfectly; the float values are being reproduced exactly as they came in. Is this a safe way of doing it? I am not that concerned with portability, but I like deterministic behavior <.<.
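For comparison, a hedged note: the union trick works on common compilers, but a memcpy between a float and a same-sized unsigned integer is the variant that is well-defined in standard C++, and optimizers compile it to a plain register move. A round-trip sketch:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Bit-exact round trip: float -> uint32_t -> float via memcpy.
// Copying 4 bytes between same-sized trivially copyable types is
// well-defined; no bits are altered on the way through.
uint32_t floatToBits(float f)
{
    static_assert(sizeof(float) == sizeof(uint32_t), "unexpected float size");
    uint32_t u;
    std::memcpy(&u, &f, sizeof u);
    return u;
}

float bitsToFloat(uint32_t u)
{
    float f;
    std::memcpy(&f, &u, sizeof f);
    return f;
}
```

This gives the same deterministic behavior as the union, without relying on compiler-specific tolerance for type punning through inactive union members.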
  11. I haven't done much in that way (actually searching for variable title bars), but off the top of my head, I'd do it via http://msdn.microsoft.com/en-us/library/ms633500%28v=VS.85%29.aspx since it should still work fine, with class name "ToolbarWindow32" and caption "Address: <address>", for example "Address: c:\windows\system32" (remember to use double backslashes in source code: "Address: c:\\windows\\system32"). Also, I find it a bit dubious... but I have done similar things, for games which keep a frame in windowed mode: with the windowed resolution set to the desktop resolution, a bit of the bottom was cut off because of the title bar, so I set up a listener which changed the window style to popup, etc.
  12. Reference

    Quote: Original post by iMalc: "An essential series of articles for the serious C++ programmer: Guru of the Week" This just drained almost two hours off me... very interesting, even if a lot of it is over my head at this point <.<
  13. std::list erase issue...

    No problem, I was a bit vague there, and in fact I wanted to apologize for that, but went for moving on since you didn't seem confused by it :P. It could confuse someone else reading the thread, though, so... my bad. Also, thanks for pointing that out. This is my only class so far that uses pointers this extensively (back in the day I didn't care much about style, mainly because I had no clue), and it's also in the rough-outline phase, but it's growing nicely and I have been thinking about something more manageable. I'll look into it then. Cheers! :)
  14. std::list erase issue...

    Yeah, that's right, but list::erase() returns an iterator pointing at the item that comes after the erased one, or, if the erased one was the last item, the list::end() iterator, which doesn't get invalidated by erasure, it seems.

[code]
std::list<foo_element*>::iterator cur, last;
cur  = element_index.begin();
last = element_index.end(); // was originally a typo here; the source code does not contain it
for (; cur != last; )
{
    if (*cur == 0)
        cur = element_index.erase(cur);
    else
        ++cur;
}
[/code]

This works fine.

Edit: also, I use a list that stores pointers, so even if I delete an object via one of those pointers, the pointers themselves are still there; they just point at now-unused memory. So it should not affect the list and its ability to remove items from itself. Unless I'm wrong, of course, hehe.
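Side note: since the loop is only filtering out null pointers, std::list can do the same thing in one call. A small sketch, with foo_element standing in for the real type:

```cpp
#include <cassert>
#include <list>

struct foo_element { int id; };

// Drop every null pointer from the index in one call. list::remove
// erases all elements equal to the value, with the same iterator-safe
// behavior as the manual erase(cur) loop.
void pruneNulls(std::list<foo_element*>& index)
{
    index.remove(nullptr);
}
```

Unlike erase on a vector, list::remove never shifts elements; it just unlinks the matching nodes.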
  15. std::list erase issue...

    @SiCrane: Ah yes, I thought so too, and it's what confused me at first glance. Since I hadn't yet noticed that PopEvent on dead memory, but got crashes 'right' when I called erase, it made me wonder if list did some behind-the-scenes work; but that looked impossible, since the objects the list pointed at remained unchanged. Also, the faulting memory access seemed to be right in the middle of the space an element occupies, which added to the confusion. But yeah, PopEvent on dead space did it, and it must only have been detected by a trap somewhere in the list deletion functions. So yeah, it made me doubt what I knew about it... and I was pretty sure about it too; same with vectors and so on: you need to call delete before popping stuff off.
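The rule being circled here (with containers of owning raw pointers: destroy the object first, then remove the now-dangling pointer, and never touch it in between) can be sketched as:

```cpp
#include <cassert>
#include <list>

// A container of owning raw pointers: the list only stores addresses,
// so erasing an element never destroys the pointee. Correct teardown is
// delete first, then erase; calling anything on the object after delete
// (like the PopEvent above) reads dead memory even though the pointer
// value itself survives.
struct Window { int events = 0; };

void destroyFirstWindow(std::list<Window*>& windows)
{
    if (windows.empty()) return;
    delete windows.front();         // destroy the object...
    windows.erase(windows.begin()); // ...then drop the dangling pointer
}
```

Erasing from the list never runs the pointee's destructor; only delete does, which is why forgetting one side leaks and forgetting the other dangles.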