sundersoft

Members
  • Content count: 17
  • Joined

  • Last visited

Community Reputation

216 Neutral

About sundersoft

  • Rank: Member
  1. I almost never use a debugger because:

     - I usually have a good idea of where the bug is; it's usually in the last code you wrote (assuming you're testing properly). The code's erroneous behavior is also usually enough to determine roughly where the bug is. Having to look at stack traces and whatnot takes a good amount of time, and it's usually not necessary to know exactly where the bug is.

     - My approach to fixing bugs is to look at the relevant code and try to improve it. I may find other bugs that I didn't know about, or I may find that the code or variable names are confusing. I may also decide that the method I'm using is naturally error prone (has many special cases, etc.) and that I should rewrite the code to be simpler. With a debugger, I normally find the bug, fix it, and leave, so the code quality is not improved much. I'd rather spend time looking at the code than using a debugger.

     When I do use a debugger, it's usually to generate a stack trace for an uncaught exception or segmentation fault. Well written C++ code shouldn't throw many exceptions or have many opportunities to segfault, so I very rarely need it. I normally do printf-style debugging (see the sketch below), since it's faster than using a debugger to extract the same information, and since I prefer reading log files over suspending the program every time I want data.

     I also hate Visual Studio because it gets in my way so much. You should be able to write text without having a box suddenly pop up on the screen, eat your keyboard input, and block out the rest of the code. Why can't the autocomplete be on a separate panel on the side of the screen instead of blocking the code? I use Dev-C++ (a beta version, with the most recent version of GCC) for all development because the IDE is simple and doesn't get in my way. Autocomplete also encourages lazy variable naming, because it reduces the penalty for not naming your variables properly, which makes badly-named variables harder to detect. Many of the other IDE features are stupidly implemented, and Visual Studio would waste more space (specifically vertical space) than Dev-C++ even if you disabled all of its features.

     That said, most people rely on debuggers and prefer complicated IDEs such as Visual Studio.
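     A minimal sketch of the printf-style logging I mean (the LOG macro and the log file name are just illustrative):

     [code]
     #include <cstdio>

     static std::FILE* logFile = nullptr;

     // Write a formatted message plus the call site's file and line to the log.
     #define LOG(...) do { \
             std::fprintf(logFile, __VA_ARGS__); \
             std::fprintf(logFile, " (%s:%d)\n", __FILE__, __LINE__); \
             std::fflush(logFile); \
         } while (0)

     int main() {
         logFile = std::fopen("debug.log", "w");
         int health = 97;
         LOG("player health = %d", health);  // inspect state without suspending the program
         std::fclose(logFile);
     }
     [/code]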
  2. [quote name='roadysix' timestamp='1346542823' post='4975590'] I soon realized this after posting that reply and decided to go with the function overload. Extending the standard namespace is something I try to avoid, but I am interested in what these "certain conditions" are, if you could explain further. [/quote]

     This is what the standard says about it (this is from a 2005 draft, but I doubt the C++11 standard changed it significantly):

     "It is undefined for a C++ program to add declarations or definitions to namespace std or namespaces within namespace std unless otherwise specified. A program may add template specializations for any standard library template to namespace std. Such a specialization (complete or partial) of a standard library template results in undefined behavior unless the declaration depends on a user-defined type of external linkage and unless the specialization meets the standard library requirements for the original template. 171) A program may explicitly instantiate any templates in the standard library only if the declaration depends on the name of a user-defined type of external linkage and the instantiation meets the standard library requirements for the original template."

     Footnote 171: "Any library code that instantiates other library templates must be prepared to work adequately with any user-supplied specialization that meets the minimum requirements of the Standard."

     You might want to wait for concepts to be standardized (or killed) before trying to add information about when a template would work to its interface. You're going to have to change your code if the standards committee actually adopts concepts and you want to use that feature (which would basically do what you're trying to do right now). http://en.wikipedia.org/wiki/Concepts_%28C%2B%2B%29 However, this probably isn't going to be standardized for another 5 years at least (if it ever becomes standard).
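     For example, this kind of specialization meets those conditions: it lives in namespace std, but it depends on a user-defined type and meets the requirements of the original template (UserId is an illustrative type):

     [code]
     #include <cstddef>
     #include <functional>
     #include <string>
     #include <unordered_set>

     struct UserId {
         std::string name;
         bool operator==(const UserId& o) const { return name == o.name; }
     };

     namespace std {
         // Legal: the specialization depends on the user-defined type UserId
         // and behaves like the original std::hash template.
         template <>
         struct hash<UserId> {
             size_t operator()(const UserId& u) const {
                 return hash<string>()(u.name);
             }
         };
     }

     int main() {
         std::unordered_set<UserId> ids;
         ids.insert(UserId{"alice"});
     }
     [/code]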
  3. [quote name='roadysix' timestamp='1346530306' post='4975525'] [quote] You could try changing the specialization to: [code] template <> typename std::enable_if<std::is_arithmetic<mytype>::value, mytype>::type func(const mytype& x) noexcept; [/code] [/quote] I could try this but it would involve specializing is_arithmetic for mytype which is something I don't really want to do. Even still it would only evaluate to true and produce the same problems. [/quote]

     I really doubt you're going to be able to specialize the function if your type doesn't have is_arithmetic<mytype>::value == true. You are allowed to specialize templates in the standard library (under certain conditions). The following code compiles even though the mytype specialization of func doesn't use is_arithmetic:

     [code]
     #include <iostream>
     #include <type_traits>

     // firstsomething.hpp
     template <typename T>
     typename std::enable_if<std::is_arithmetic<T>::value, T>::type
     func(const T& x) noexcept {
         return x;
     }

     // something.hpp
     class mytype {};

     namespace std {
         template <>
         class is_arithmetic<mytype> {
         public:
             static const bool value = 1;
         };
     }

     template <>
     mytype func(const mytype& x) noexcept {
         return mytype();
     }

     int main() {
         func(0.0);
         func(0);
     }
     [/code]

     If the is_arithmetic specialization was removed, the code would fail to compile on GCC 4.6. If your type is not an arithmetic type, then I think you're going to have to use function overloading.

     [quote]I think I was concerned that creating an overload would break func for integer types because mytype does not have an explicit constructor and it still takes a single argument integer value. However this is not the case apparently.[/quote]

     Yes; the compiler will prefer to instantiate a template over performing a cast and using a non-template (although you can force it to use the non-template version by taking a function pointer to it and calling that, or you can force it to use the template by specifying the template arguments explicitly).
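     To illustrate that last point (illustrative names):

     [code]
     #include <iostream>

     struct mytype2 {
         mytype2(int) {}  // implicit converting constructor
     };

     template <typename T>
     void g(const T&) { std::cout << "template\n"; }

     void g(const mytype2&) { std::cout << "non-template\n"; }

     int main() {
         g(0);                           // exact match via the template wins
         void (*p)(const mytype2&) = g;  // taking a pointer selects the non-template
         p(0);                           // converts 0 to mytype2: "non-template"
         g<mytype2>(0);                  // explicit arguments force the template
     }
     [/code]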
  4. This code compiles in GCC 4.6. I don't have 4.7 to test with.

     [code]
     #include <type_traits>

     // firstsomething.hpp
     template <typename T>
     typename std::enable_if<std::is_arithmetic<T>::value, T>::type
     func(const T& x) noexcept {
         return x;
     }

     // something.hpp
     typedef int mytype;

     template <>
     typename std::enable_if<true, mytype>::type
     func(const mytype& x) noexcept {
         return 0;
     }

     int main() {
         func(0.0);
         func(0);
     }
     [/code]

     You could try changing the specialization to:

     [code]
     template <>
     typename std::enable_if<std::is_arithmetic<mytype>::value, mytype>::type
     func(const mytype& x) noexcept;
     [/code]

     If that doesn't work, you might not need a function specialization in the first place (I'm not sure what exactly you're trying to do), so you ought to be able to use function overloading:

     [code]
     mytype func(const mytype& x) noexcept;
     [/code]

     [quote] I did add 'template <typename T>' to the specializations instead of 'template <>' but doesn't this then mean the functions are no longer specializations of the original? [/quote]

     Yes. You can't have a partial function specialization in C++, so all function specializations must start with "template<>" (as far as I know).
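     If you actually need behavior like a partial specialization for a function, one common workaround is to forward to a class template, which can be partially specialized. A sketch, with illustrative names:

     [code]
     #include <vector>

     template <typename T>
     struct FuncImpl {
         static T apply(const T& x) { return x; }  // primary template
     };

     template <typename T>
     struct FuncImpl<std::vector<T>> {             // partial specialization
         static std::vector<T> apply(const std::vector<T>&) { return {}; }
     };

     // The function itself stays a single template and just forwards.
     template <typename T>
     T func2(const T& x) { return FuncImpl<T>::apply(x); }

     int main() {
         func2(1);
         func2(std::vector<int>{1, 2, 3});
     }
     [/code]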
  5. I accidentally downvoted your post, sorry.
  6. IMO you should be using templates if you need to write code that works with any type. This allows you to use multiple types and change them later without affecting existing code. Besides float and double, it's also possible that you may use one of the integer types, complex, a quaternion type, a vector or matrix type, etc. For example, if you have a generic vector type then you can use vec<unsigned char, 4> or vec_4<unsigned char> to store image data and manipulate it in a convenient manner (although you may also have to implement casts).
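     For example, a minimal sketch of such a generic vector type (vec here is illustrative, not a standard type):

     [code]
     #include <array>
     #include <cstddef>

     template <typename T, std::size_t N>
     struct vec {
         std::array<T, N> data{};

         // Component-wise addition works the same for any element type.
         vec& operator+=(const vec& o) {
             for (std::size_t i = 0; i < N; ++i) data[i] += o.data[i];
             return *this;
         }
     };

     int main() {
         vec<float, 3> position{};        // geometry
         vec<unsigned char, 4> pixel{};   // RGBA image data
         position += position;
         pixel += pixel;                  // same code, different type
     }
     [/code]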
  7. Intel's drivers are famous for not supporting OpenGL properly. You could try using DirectX if you want to support Intel cards, which is supposed to work somewhat better.
  8. Most commercial games use a fixed time step, meaning that they only advance the physics engine by a fixed interval. If the game is running at a low frame rate, then the physics engine is advanced multiple times per frame instead of only once (see the sketch below). It is possible that the user's PC is not capable of running the physics in real time, in which case you have to use a larger time step, but this should never happen on PCs that surpass the minimum requirements you intend to support. I believe that most physics engines also run discrete collision detection multiple times per physics frame to get more accurate results and handle fast-moving objects better. Using a fixed time step and frequent discrete collision detection, along with careful design of the levels, should avoid the problems caused by discrete collision detection. The alternative is to make your physics engine do all of the collision detection continuously and to not use approximations that become worse with larger time steps. However, this is more difficult to implement, and commercial physics engines do not do it, so you are not likely to be able to implement it properly.
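     A minimal sketch of a fixed-time-step loop of the kind described above (stepPhysics and render are placeholder stand-ins):

     [code]
     #include <chrono>

     static void stepPhysics(double dt) { (void)dt; }  // placeholder
     static void render() {}                           // placeholder

     int main() {
         using clock = std::chrono::steady_clock;
         const double dt = 1.0 / 60.0;  // fixed step: 60 physics updates per second
         double accumulator = 0.0;
         auto previous = clock::now();

         for (int frame = 0; frame < 1000; ++frame) {  // a real game loops forever
             auto now = clock::now();
             accumulator += std::chrono::duration<double>(now - previous).count();
             previous = now;

             // At a low frame rate the accumulator holds more than one step's
             // worth of time, so the physics advances several times this frame.
             while (accumulator >= dt) {
                 stepPhysics(dt);
                 accumulator -= dt;
             }
             render();
         }
     }
     [/code]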
  9. If you have access to a modern compiler with unique_ptr, then you can basically replace each usage of auto_ptr with unique_ptr, make some simple alterations to the code based on the error messages, and it ought to work (is there a good reason why you are porting to VS 2005?). If you're changing auto_ptr to unique_ptr, then you have to add a call to std::move in some cases. For example, if a and b are auto_ptrs, then this code is valid: a = b. But the unique_ptr version must be written like this: a = move(b). You can just use the error messages to find all of the places where a call to move is needed.
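     A small before/after sketch of that change:

     [code]
     #include <memory>
     #include <utility>

     int main() {
         // auto_ptr (deprecated): ownership transferred silently on assignment.
         //   std::auto_ptr<int> a(new int(1)), b(new int(2));
         //   a = b;  // b quietly becomes null

         // unique_ptr: the transfer must be spelled out with std::move.
         std::unique_ptr<int> a(new int(1)), b(new int(2));
         a = std::move(b);  // explicit ownership transfer; b is now null
     }
     [/code]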
  10. My favorite minigame is Overworld Zero from System Shock 2. System Shock 2 is an FPS/RPG, so the player has an inventory, and one of the items you can pick up is the GamePig, which lets you play minigames while you're playing the main game (it's not paused or anything; you might get ambushed by enemies while you're playing the minigame). You also had to find game cartridges throughout the game after you found the GamePig.

      It actually served a purpose in the game, because there are many actions in System Shock 2 that take a lot of time to complete, such as research (the player has to research some items before using them, and can research parts of dead enemies to do more damage to them in the future) or waiting for the red alien goo to heal you. Instead of just waiting, you might want to read some of your unread audio logs (you can find audio logs around the ship which tell you what happened, and sometimes where items are) or play the minigames.

      All of the minigames were controlled only with the mouse. Overworld Zero was a turn-based RPG where the player had to kill enemies, find coins, pay for healing, transfer items from one house to another for coins, and go to a shrine to level up. You could only move in 8 directions and it was tile-based. I think there were also some items you could find that boosted your health. If you beat the game, I think you got nanites (money) for the main game, but only a small amount. There was only one map, but it was randomly generated, and the player had to kill some boss enemies after getting to a certain level to complete the game. Someone actually made a clone of it in Java: http://mac.softpedia.com/progDownload/OverWorld-Zero-Download-60710.html

      Basically, System Shock 2's minigames were effective because there were occasions where a player would need to kill time, the player wasn't given an incentive to play them (the GamePig actually cost an inventory slot), and the games themselves were fun.
  11. [quote name='Ed Welch' timestamp='1342648211' post='4960666'] Thanks for your answer, sundersoft. It's interesting that you mention harmonic mean, because if you look at the wikipedia definition it says the following: "In certain situations, especially many situations involving rates and ratios, the harmonic mean provides the truest average." FPS is most certainly a rate, so that would back up my argument (basically, getting the harmonic mean of the FPS is the same as getting the arithmetic mean of the render time).[/quote]

      This means that, if you ran one game for 100 frames, stopped it, ran the second one for 100 frames, stopped it, and so on, then your overall average frame rate would be given by the harmonic mean. This is not very useful.

      [quote name='Ed Welch' timestamp='1342648211' post='4960666'] Regarding your other point, you certainly can't assume that all games that get bad scores are because of "optimisation errors". Some GPUs perform badly with shader heavy games, for instance. But seeing as we don't have the source code, we really don't know. [/quote]

      That was an example to show that the data is plausible. If you agree that the data is plausible, then you need some way to measure each GPU's overall performance taking into account only the data. You can use either the harmonic or arithmetic mean for this purpose, or some other measure of center. In my example, a game that was not significantly affected by the choice of graphics card changed the measured performance increase of the second card from 161% (if the 3rd game is removed from the data) to 127% under the arithmetic mean, whereas under the harmonic mean it changed from 161% to 40%. Most people would consider the harmonic mean to perform worse in this case, but this is subjective. You also need to decide how likely it is to have a slow game that is not affected by the choice of graphics card, and whether the harmonic mean has any other differences from the arithmetic mean that would redeem it if you consider its behavior in this case bad. Both of these considerations are pretty subjective.
  12. The definition of the arithmetic mean is the same whether you're using FPS or seconds per frame; you would still just add all of the numbers. You're proposing using the harmonic mean to describe the center of FPS rates. First of all, neither one is mathematically "correct", and there are other measures of center such as the median and geometric mean. However, each measure of center behaves differently on values that deviate far from the rest.

      If you're trying to find the average performance of a graphics card, then each game should be given equal weight. Also, if the card has exceptionally low performance on one game, that sample should be given less weight, because this would indicate that the game has an optimization problem with that card or in general. To compare the harmonic and arithmetic means, assume that two different cards had these FPS rates for a sample of 3 games:

      Card 1: 60, 55, 30
      Card 2: 150, 150, 28

      The 3rd game can be assumed to be badly optimized, and the second card is obviously much faster than the first one.

      Harmonic means: 44.0 and 61.2 (40% more than the first card)
      Arithmetic means: 48.3 and 109.3 (127% more than the first card)

      If the harmonic mean were used to compare the two cards, then slow games would have more weight than fast ones. This isn't what we want, since slow games are generally poorly optimized. In this example (where the 3rd game was bottlenecked by something other than the graphics card), most people would consider 127% to be more representative of the 2nd card's speed relative to the 1st than 40%, so the arithmetic mean would be preferred over the harmonic one.
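      For what it's worth, a small program that reproduces those figures:

      [code]
      #include <cstdio>
      #include <vector>

      static double arithmeticMean(const std::vector<double>& v) {
          double sum = 0;
          for (double x : v) sum += x;
          return sum / v.size();
      }

      static double harmonicMean(const std::vector<double>& v) {
          double sum = 0;
          for (double x : v) sum += 1.0 / x;  // average of reciprocals, inverted
          return v.size() / sum;
      }

      int main() {
          std::vector<double> card1 = {60, 55, 30};
          std::vector<double> card2 = {150, 150, 28};
          std::printf("harmonic:   %.1f %.1f\n", harmonicMean(card1), harmonicMean(card2));
          std::printf("arithmetic: %.1f %.1f\n", arithmeticMean(card1), arithmeticMean(card2));
      }
      [/code]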
  13. [quote name='L. Spiro' timestamp='1341672034' post='4956646'] __ (2 underscores) is a prefix reserved for the system/compiler. If you want to make absolutely sure your macros will never conflict with anything, you could add some underscores in front, but make sure it is not just 2 underscores. At work we use 3. [/quote]

      Anything starting with two underscores, or with one underscore followed by a capital letter, is reserved for the compiler. So anything starting with three underscores is reserved (since it also starts with two underscores), and any all-caps macro that starts with an underscore is reserved. Also, an identifier can't contain a sequence of two underscores anywhere, even if it's not at the start. The compiler is not likely to define a macro that starts with three underscores, but it is still allowed to do so.
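      Concretely (the guard names are just examples):

      [code]
      // All of these identifiers are reserved for the implementation:
      //   __MY_GUARD_H     (starts with two underscores)
      //   ___MY_GUARD_H    (starts with three, which includes two)
      //   _MY_GUARD_H      (underscore followed by a capital letter)
      //   MY__GUARD_H      (double underscore in the middle)

      // A safe include guard avoids leading underscores and double underscores:
      #ifndef MYLIB_MY_GUARD_H
      #define MYLIB_MY_GUARD_H
      // ...
      #endif
      [/code]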
  14. [quote name='Nanook' timestamp='1340319280' post='4951530'] I have another issue with my resource manager that I want some comments on.. I have been using shared_ptr for this previously.. The options I have been thinking of are these: 1. Have a template<typename ResourceT> class ResourceManager; and have a function Hash register(const std::string& filename); on that class where I can register to a resource with the given filename. If the resource does not exist the register function will create the resource and store it on the stack in an unordered_map. I would also store a reference count and the user would need to call unregister when it's done with the resource and it would be deleted when ref count is 0. To get the resource I would call ResourceT& GetResource(Hash resourceHash); I could then store raw pointers to the objects as they cannot be deleted before I call the Unregister function. I like this approach as I try to follow the advice on keeping things on the stack mentioned in this thread.. 2. I could have the same ResourceManager class as above, but I could store weak_ptr's in the unordered_map and when I do register it would check if there's a weak_ptr that I could get a valid shared_ptr to the object or it can create a new shared_ptr with the object, store a weak_ptr and then return the shared_ptr.. with this I don't have to worry about calling unregister and I can run a cleanup of the expired weak_ptr's whenever I want to.. Why should I choose one of these over the other, or even something else? [/quote]

      If you're going to have register and unregister functions in one of your classes, you should consider providing a helper RAII class which registers its argument in its constructor and unregisters it in its destructor. If you're going to be using reference counting, you might as well just use shared_ptr, since the efficiency improvement from implementing it yourself is going to be negligible. Beyond that, the implementation you should use depends on your efficiency constraints and on what exactly you're trying to do. For example, if you're making a game and you want to cache textures and models loaded from disk, I'd recommend the following in your caching class (a sketch follows):

      - Have it return shared_ptrs on an object lookup. This means you don't have to worry about whether the cached objects are in use when you flush the cache. I would also recommend storing shared_ptrs in the cache because, if you use weak_ptrs, the object would be flushed from the cache as soon as the last shared_ptr reference to it was destroyed. This would be bad if part of your game involved loading a model (e.g. of a rocket), instancing it, and then destroying it later: every time the model was requested from the cache, it would have to be loaded from disk again.

      - Have a "flush" function which goes through the cache and removes any object that only has one shared_ptr referring to it (i.e. the one stored by the cache). This removes all currently unused objects from the cache. It is not desirable to remove every object from the cache, because this may cause two different instances of the same object to exist. When you're performing an operation where most of the cache would be redundant (e.g. loading a new level), you would flush the cache. For loading a level, you would first delete the existing one, load and initialize the new one, and then flush the cache. This order ensures that resources which exist in both levels do not need to be loaded from disk twice.
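      A minimal sketch of that caching scheme (Texture and loadTextureFromDisk are illustrative placeholders):

      [code]
      #include <memory>
      #include <string>
      #include <unordered_map>

      struct Texture { std::string name; };

      static std::shared_ptr<Texture> loadTextureFromDisk(const std::string& name) {
          return std::make_shared<Texture>(Texture{name});  // placeholder loader
      }

      class TextureCache {
      public:
          // Returns a shared_ptr so callers keep entries alive independently of
          // the cache; the cache also holds a shared_ptr, so unused entries
          // survive until the next flush.
          std::shared_ptr<Texture> get(const std::string& name) {
              auto it = cache.find(name);
              if (it != cache.end()) return it->second;
              auto tex = loadTextureFromDisk(name);
              cache.emplace(name, tex);
              return tex;
          }

          // Removes entries whose only remaining reference is the cache's own.
          void flush() {
              for (auto it = cache.begin(); it != cache.end();) {
                  if (it->second.use_count() == 1) it = cache.erase(it);
                  else ++it;
              }
          }

      private:
          std::unordered_map<std::string, std::shared_ptr<Texture>> cache;
      };
      [/code]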
  15. [quote name='Nanook' timestamp='1340111272' post='4950562'] I've started changing alot of my code now. But I have a case where I use a unique_ptr as its a factory function that creates the object.. I then move this pointer into a owning container.. but I also have to set a currently selected object in another class.. should I rather use a shared_ptr in this case and use a weak_ptr for where I set the currently selected object? Or should I use a raw pointer for the selected object and get the raw pointer out of the unique pointer like I do below? [/quote]

      Raw pointer. If the object is deleted and then accessed through the currently-selected pointer, that would usually indicate an error in your code, and both weak_ptr and raw pointers will cause a runtime error here (assuming you don't check for a null shared_ptr from the weak_ptr's lock function). It's possible that an access to deleted memory will not cause a runtime error, but this is rare and usually yields garbage data.

      Using shared_ptr generally implies that there is shared ownership of the pointed-to object, so having only one shared_ptr to an object means that there is exclusive ownership, which makes the code less clear in addition to having more overhead. It's also possible that a second shared_ptr could be created by mistake which points to the same object, which is generally not desired (removing the object from the container should delete it) and is not possible with the unique_ptr implementation. Using shared_ptr for both the currently selected pointer and the container of objects would cause the object to remain in memory when it is removed from the container while still being referenced by the currently selected pointer, but this condition is usually erroneous (the selected pointer should generally only point to objects in the container), so this would mask the error and make it hard to detect.

      The object may require shared ownership even without considering the existence of the currently selected pointer (for example, if the object is immutable, then you might want to copy a pointer to it instead of doing a full copy, in which case shared_ptr is appropriate). In that case, weak_ptr should be preferred, since the overhead of a weak_ptr over a raw pointer when the shared_ptr already exists is negligible and the weak_ptr is more likely to detect errors. However, checking whether the shared_ptr returned by weak_ptr's lock function is null should usually be omitted, since it is usually a non-recoverable error for the weak_ptr to return a null shared_ptr.
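      A minimal sketch of that ownership layout (names are illustrative):

      [code]
      #include <memory>
      #include <utility>
      #include <vector>

      struct Object { int id; };

      int main() {
          std::vector<std::unique_ptr<Object>> objects;    // owning container
          std::unique_ptr<Object> created(new Object{1});  // factory result
          Object* selected = created.get();                // non-owning view
          objects.push_back(std::move(created));           // transfer ownership

          selected->id = 2;  // valid as long as the container still owns the object
      }
      [/code]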