About Sc4Freak

  1. Yes, that's correct, but I don't think that's really the issue here. The Gadget is just an example; its sole purpose is to manage a collection of Widgets and to allow read-only enumeration of those Widgets by clients. You can call it "WidgetCollection" instead of "Gadget" if you wish, but the question was more about how to implement it and how to make the choice between complexity vs. encapsulation.
  2. The editor ate all my angle brackets.
  3. So we all know that encapsulation is a good thing in OO design. Encapsulation increases maintainability and insulates clients from changes to implementation details. But at the same time, there ain't no such thing as a free lunch: there is an upfront cost (in terms of programmer time) to writing a class in an encapsulated manner. So the question of how much you should encapsulate is, like most engineering decisions, a tradeoff: how much effort do you want to spend now to ease future development and maintenance?

I want to give a concrete example here. Let's say you have a class Gadget whose purpose is to maintain, update, and mutate an internal collection of Widgets. The internal representation of this collection isn't part of Gadget's interface and is an implementation detail, so ideally you want to hide it from clients of the Gadget class. The Gadget class needs to allow its clients to somehow observe its collection of Widgets without exposing too much of its implementation details. And ideally, only forward and backward iteration over the Widgets should be exposed to clients: that way we can use vector, list, set, or any other container in the implementation without affecting client code.

The simplest solution would be to just return a reference to the internal collection:

[source]
class Gadget
{
    vector<Widget> widgets;
public:
    const vector<Widget>& GetWidgets() const { return widgets; }
};
[/source]

This hides very little: client code knows that you use a vector and must take a dependency on that. You also cannot change the implementation (eg. you can't change the vector to a list or whatever) without modifying your interface.
A second option would be to provide an index accessor like this:

[source]
class Gadget
{
    vector<Widget> widgets;
public:
    size_t GetWidgetCount() const;
    const Widget& GetWidget(size_t i) const;
};
[/source]

This allows you to change the type of [font=courier new,courier,monospace]widgets[/font] without affecting client code, but with a caveat: you've allowed random access to the underlying Widgets collection. In this case, you wouldn't be able to change [font=courier new,courier,monospace]widgets[/font] to a map or list or something else that doesn't support random access.

A similar, but different, option would be to expose iterators to the data:

[source]
class Gadget
{
    vector<Widget> widgets;
public:
    typedef vector<Widget>::const_iterator WidgetIterator;
    WidgetIterator WidgetBegin() const { return widgets.begin(); }
    WidgetIterator WidgetEnd() const { return widgets.end(); }
};
[/source]

But that's still not perfect: typedefs are not actually separate types, so nothing stops anyone from accidentally doing stuff like this:
[font=courier new,courier,monospace]vector<Widget>::const_iterator it = gadget.WidgetBegin(); // leaky encapsulation[/font]
[font=courier new,courier,monospace]size_t num = gadget.WidgetEnd() - gadget.WidgetBegin(); // only works for random access iterators[/font]

And finally, probably the most encapsulated solution would be to actually define your own iterators:

[source]
class Gadget
{
    vector<Widget> widgets;
public:
    struct WidgetIterator : iterator<bidirectional_iterator_tag, Widget>
    {
        // ...
    };
    WidgetIterator WidgetBegin() const { /* ... */ }
    WidgetIterator WidgetEnd() const { /* ... */ }
};
[/source]

This is the most encapsulated solution. It exposes exactly nothing more than it needs to, and allows complete freedom to change the internals of Gadget without affecting client code.
You can even make [font=courier new,courier,monospace]widgets[/font] an associative data structure like map (whose iterator iterates over key/value pairs), and write your custom iterator to discard the key and return only the value. But it's also the most difficult and costly option to write: custom iterators aren't entirely trivial, and the extra code to implement them is itself a maintenance burden.

So there are many ways to achieve the same thing, with varying degrees of complexity and encapsulation. My question is: in your projects, which option do you usually choose? If you were tasked with implementing the Gadget class, would you go with the simple and cheap option, the complex but safe option, or something entirely different?

edit: fix angle brackets
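For what it's worth, here is a minimal, compilable sketch of that last option (my own code, not from the thread; all names are illustrative): a hand-rolled bidirectional iterator that walks a map's values while hiding both the container type and its keys from clients.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <string>
#include <utility>

struct Widget { std::string name; };

class Gadget {
    std::map<int, Widget> widgets; // the key is an implementation detail
public:
    void Add(int key, Widget w) { widgets[key] = std::move(w); }

    // Custom iterator: exposes Widgets only; the map never leaks out.
    class WidgetIterator {
        std::map<int, Widget>::const_iterator it;
    public:
        using iterator_category = std::bidirectional_iterator_tag;
        using value_type = Widget;
        using difference_type = std::ptrdiff_t;
        using pointer = const Widget*;
        using reference = const Widget&;

        explicit WidgetIterator(std::map<int, Widget>::const_iterator i) : it(i) {}
        reference operator*() const { return it->second; }  // discard the key
        pointer operator->() const { return &it->second; }
        WidgetIterator& operator++() { ++it; return *this; }
        WidgetIterator& operator--() { --it; return *this; }
        bool operator==(const WidgetIterator& o) const { return it == o.it; }
        bool operator!=(const WidgetIterator& o) const { return it != o.it; }
    };

    WidgetIterator WidgetBegin() const { return WidgetIterator(widgets.begin()); }
    WidgetIterator WidgetEnd() const { return WidgetIterator(widgets.end()); }
};
```

Swapping the map for a vector or list later only requires changing the iterator's internals; client code compiles unchanged.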
  4. I'm getting back into C++ after quite a lengthy dry spell, and some of the language's idiosyncrasies are escaping me for the moment. If I inherit from multiple base classes and get the "dreaded diamond", is virtual inheritance still necessary if all the base classes are pure abstract base classes ("interfaces")? Because all the base classes contain only pure virtual function declarations (and no other members), would virtual inheritance essentially be a no-op in this situation?
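A minimal sketch of the situation (my own example): virtual inheritance is not a no-op here. Even when the common base is a pure interface with no data, non-virtual inheritance gives the most-derived class two copies of the base subobject, so an upcast becomes ambiguous; virtual inheritance shares a single base.

```cpp
#include <cassert>

struct IBase {
    virtual int Id() const = 0;
    virtual ~IBase() {}
};

// Virtual inheritance: B and C share a single IBase subobject.
struct B : virtual IBase {};
struct C : virtual IBase {};

struct D : B, C {
    int Id() const override { return 42; } // one override satisfies both paths
};

// Had B and C inherited IBase non-virtually, D would contain TWO IBase
// subobjects: 'IBase* p = &d;' would be an ambiguous conversion and
// fail to compile, even though IBase has no data members.
```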
  5. This is something I've been wondering about for a while, but haven't been able to come up with a really good solution for. Every game has certain dependencies between subsystems and entities: for example, if a camera is following the player, the camera has a dependency on the player's position. Now, the order in which you update things in your game can have a significant impact. If you update the player before the camera, the camera's position at the end of the tick will have correctly updated with the player's latest position. But if you update the player after the camera, the camera will perpetually be one frame behind (it'll always be looking at last tick's data).

In my current game, my update order is a bit ad-hoc. I don't have any defined way to deal with these sorts of dependencies: I just update things in any order I like and go back and fix things if something breaks. And I don't have any way to deal with a circular dependency. This has led to the unfortunate situation of objects being arbitrarily either 0 or 1 tick behind.

One easy solution I've heard of is to double-buffer game state. At tick t=1, BufferA contains the game state as it was at t=0. The game logic reads from BufferA and writes to BufferB. At the end of tick t=1, BufferB contains the game state at t=1. The two buffers are pointer-swapped, and the process repeats for t=2. This essentially forces everything to be one tick behind, but you don't have to worry about update order. Are there any better ways to handle this?
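The double-buffering scheme described above can be sketched like this (a minimal toy of my own; the GameState fields are made up for illustration):

```cpp
#include <cassert>
#include <utility>

// Hypothetical minimal game state: just a player and a camera position.
struct GameState {
    float playerX = 0.0f;
    float cameraX = 0.0f;
};

class Game {
    GameState buffers[2];
    GameState* prev = &buffers[0]; // state at tick t-1 (read-only this tick)
    GameState* next = &buffers[1]; // state being written for tick t
public:
    void Tick() {
        // Every system reads from *prev and writes to *next, so update
        // order no longer matters: everyone sees the same consistent
        // snapshot of last tick, and everyone is uniformly one tick behind.
        next->playerX = prev->playerX + 1.0f; // player moves right
        next->cameraX = prev->playerX;        // camera follows last tick's player
        std::swap(prev, next);                // pointer swap, no copying
    }
    const GameState& Current() const { return *prev; }
};
```

Note the tradeoff the post mentions: after each tick the camera trails the player by exactly one tick, but deterministically so, regardless of which system updates first.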
  6. Yeah, I don't want to use OBBs unless I have to. But AABBs and bounding spheres kind of have problems too: 1) If I use bounding spheres, how do I handle non-uniform scaling, and other "unusual" transforms like shearing? 2) If I use AABBs I guess I can apply the transform to each of the 8 corners (turning it into an OBB) and then get the AABB that encompasses that. But that generates a pretty loose bound (eg. when rotated by 45 degrees). Any better way to handle AABBs?
  7. Up to now, I've been happily using bounding spheres and AABBs for quick frustum culling. Culling is done in two passes: first with the bounding spheres to quickly remove large portions of the scene, then with the bounding boxes for more fine-grained culling. This works just fine if all I have are translations and uniform scaling. But how do I handle this in a more general manner, that'll work with all transforms? The way I'm currently doing it is to keep bounding spheres and AABBs in object space, and transform the frustum into object space (rather than the other way around). But calculating a brand new frustum for every object that needs to be rendered is turning out to be prohibitively expensive. How are bounding volumes usually handled if your object has a transformation?
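One common technique for the AABB half of this (Jim Arvo's method from Graphics Gems; the code below is my own sketch of it): keep the box in object space as a center plus half-extents, and transform it directly by taking absolute values of the 3x3 part of the matrix. It produces exactly the same (admittedly loose) bound as transforming all 8 corners, but far more cheaply, and it avoids rebuilding a frustum per object.

```cpp
#include <cassert>
#include <cmath>

struct AABB {
    float center[3];
    float extent[3]; // half-widths along each axis
};

// Transform an object-space AABB by a 3x4 affine matrix m (row-major,
// m[row][3] is the translation), producing the tightest world-space
// AABB that contains the transformed box. Equivalent to transforming
// all 8 corners and re-fitting, but O(9) multiplies instead of O(24).
AABB TransformAABB(const AABB& box, const float m[3][4]) {
    AABB out;
    for (int i = 0; i < 3; ++i) {
        out.center[i] = m[i][3];
        out.extent[i] = 0.0f;
        for (int j = 0; j < 3; ++j) {
            out.center[i] += m[i][j] * box.center[j];
            // abs() makes each world extent the worst case over the corners
            out.extent[i] += std::fabs(m[i][j]) * box.extent[j];
        }
    }
    return out;
}
```

This also handles non-uniform scale and shear for free, since those are just part of the 3x3 block.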
  8. Sc4Freak

    3D camera movement

    The camera's rotation is arbitrary but fixed. In other words, it's looking down at an angle and I just need a translation for the camera. I had it looking straight down at first, and it was easy to find the translation - find the difference between the start and end finger positions in world space, and subtract that from the camera position. But since the camera is looking down at an angle, that doesn't work.
  9. I'm writing a 3D game that'll use a touchscreen for input. The camera in the game is located above and looks down at the battlefield (which is merely a flat plane). The user should be able to drag his finger across the screen to scroll across the map with the same world-space position remaining underneath the finger. That is, the finger should behave like an "anchor" - as the finger moves around the screen, the camera needs to move such that same world position always appears underneath the finger. For example, if the user places his finger on a tree in the world and drags his finger around, the camera should move around such that the tree always remains underneath the user's finger. The concept is simple enough, but I'm having a little trouble with the maths. In any given frame there's a start and an end finger position in screen-space, which represents the movement of the user's finger across the screen during that frame. Unprojecting these two points gives the starting and final world-space positions of the finger. But how do I turn that into a corresponding translation of the camera?
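For the anchor-drag question, here is one derivation (my own sketch, assuming a flat ground plane at y = 0 and a camera that only translates): unproject both finger positions into rays using the current camera, intersect each ray with the ground plane, and translate the camera by the difference of the two hit points. Because the camera's rotation is fixed, moving the camera by (previous hit - current hit) shifts every ray rigidly, so the original world point lands back under the finger.

```cpp
#include <cassert>

struct Vec3 { float x, y, z; };

// Intersect a ray (origin + t*dir) with the ground plane y = 0.
// Assumes dir.y != 0, i.e. the camera is actually looking down at the plane.
Vec3 HitGround(Vec3 origin, Vec3 dir) {
    float t = -origin.y / dir.y;
    return { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
}

// Given the ground-plane hits of the finger's previous and current screen
// positions (both unprojected with the CURRENT camera), return the camera
// translation that keeps the original world point under the finger.
Vec3 CameraDelta(Vec3 hitPrev, Vec3 hitNow) {
    return { hitPrev.x - hitNow.x, 0.0f, hitPrev.z - hitNow.z };
}
```

Applying CameraDelta every frame to the camera position (leaving its rotation untouched) gives the "anchored" scrolling behaviour; the per-frame cost is two unprojections and one ray-plane intersection each.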
  10. Well, in the case of undefined behaviour the compiler is free to do what it wants, so that reasoning would apply to every other feature of C++ as well.
  11. I just had a "why didn't I think of that" moment. :P It works great, thanks!
  12. Z ordering in my game is represented as a tree: each node represents something that needs to be drawn. A node has any number of children, and a child's Z value is relative to its parent. A child's Z determines the order in which it'll be drawn relative to its parent. For example, if node X has children A, B and C with Z values of -1, -2 and 5 respectively, the drawing order should be B A X C.

I want to convert these relative Z values into "absolute" Z values, such that when a list of nodes is sorted by the absolute Z value, they will be in correct rendering order. Here is a naive recursive solution that doesn't work:

def FlattenZ(node)
    node.AbsoluteZ = node.Parent.AbsoluteZ + node.RelativeZ
    for each (childNode in node.Children)
        FlattenZ(childNode)

It won't work because it can cause Z values to "overlap" between nodes in separate subtrees. Consider the following tree of nodes labelled A-E, with each node's relative Z value in parentheses:

          A(3)
         /    \
      B(1)    C(-1)
     /    \
  D(-1)  E(-2)

The above algorithm will give the nodes E and C the same absolute Z value, which isn't correct. The correct ordering for this particular tree would be C, A, E, D, B. What's an elegant in-place algorithm to do this correctly? I'm developing for a mobile platform where resources are constrained, so I have limits on the number of allocations I can make. Bonus points for something that's linear time.
  13. Sc4Freak

    Alpha compositing

    Quote: Original post by Hodgman
    Quote: It seems as if the only way to do it properly is to draw to a rendertarget first, but that seems unnecessarily wasteful (and complicated, performance-wise).
    Using the blend mode above, you just need one RT for your entire GUI, which shouldn't be too complicated. On modern cards, performance isn't an issue for RTT, except for where you blend both RTs together (a small fraction of a ms). If you want to save RAM, you could make this RT as big as your biggest GUI, and render each GUI into it (and blit to the main screen) one at a time.

    Well, my target platform is Windows Phone 7, so resources are extremely constrained. Custom shaders are also unavailable (which limits my options somewhat). I was hoping initially for a solution which wouldn't involve render targets, but I guess there's not much choice.

    Quote: Original post by karwosts
    Can you do regular alpha blending, but render from front to back with Z testing? I.e. draw the red square first with blending, and then render the green square second. It will reject everything underneath the red square, while drawing the green square around the red. This means each element only gets blended once with the background.

    That'll work for solid elements (like the boxes in my example) but it won't work with anything that uses textures with an alpha channel. A classic example would be text, which is usually done with a character map on a texture drawn to the screen using quads. Relying on Z won't work there.
  14. Sc4Freak

    Alpha compositing

    Quote: Original post by Hodgman
    RTT isn't that crazy... :) If we assume a black background instead of a white background, it's possible. You could render your whole GUI onto this black background, and then composite the final result over your actual background. In the case of your squares, we could say they've got 100% alpha, which is then modified to be 25% alpha. The blend mode you'd want is: Source * ModAlpha + Dest * (1-OrigAlpha), where OrigAlpha = 100% and ModAlpha = 100% * 25%. However, that blend mode is too complicated to be configured through the built-in blend unit, so you'd have to implement some of it in a shader. You could use a shader to pre-multiply your alpha into the colour output, like: *** Source Snippet Removed *** And then use pre-multiplied-alpha blending: Source * One + Dest * (1-SrcAlpha)

    I was hoping for a slightly more general solution: one that would work regardless of what was in the background. It seems as if the only way to do it properly is to draw to a rendertarget first, but that seems unnecessarily wasteful (and complicated, performance-wise).

    Quote: Original post by Krohm
    Cannot you just disable Z test and draw them? If the GUI elements are 2D, it will work just fine.

    It won't work, because the blending won't be right.
  15. Is there any "smart" way to do alpha compositing that I don't know about? I'm writing a GUI for my game, and there are UI elements with other elements on top of them. For example, a button with some text on top of it. The button background and the button text are rendered separately (because the text might change). If I want to set the button's alpha, how would I go about ensuring that the button and the text on top of it blend properly? For example, say I had a red box sitting on top of a green box, on a white background (images omitted). At 25% opacity, the whole group should fade as one unit. Obviously, setting the alpha of the two boxes separately won't work: the lower box shows through the upper one. The only solution I can think of is to render the button and the text to an offscreen rendertarget (without alpha), then blend the rendertarget with the backbuffer. But that seems like a crazy solution to what should be a simple problem. Is there some other way to do this that I'm not aware of?
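To pin down why per-element alpha differs from the render-target approach, here is a tiny single-channel software sketch (my own; all values are made up for illustration) of both compositing orders:

```cpp
#include <cassert>
#include <cmath>

// Standard "over" blend for one colour channel:
// result = src * alpha + dst * (1 - alpha)
float Over(float src, float srcAlpha, float dst) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}

// A pixel where element B (e.g. text) covers element A (the button),
// over a background.
const float bg = 1.0f, a = 0.2f, b = 0.6f;

// Naive: fade A and B independently, so A bleeds through B.
float Naive(float groupAlpha) {
    return Over(b, groupAlpha, Over(a, groupAlpha, bg));
}

// Render-target approach: composite A and B at full opacity offscreen,
// then blend the finished result over the background exactly once.
float ViaRenderTarget(float groupAlpha) {
    float offscreen = Over(b, 1.0f, Over(a, 1.0f, 0.0f)); // B covers A here
    return Over(offscreen, groupAlpha, bg);
}
```

At 25% group opacity the naive path lets the button colour leak through the text (0.75 vs 0.9 in this toy pixel), which is exactly the artifact the offscreen composite avoids.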