sundersoft


#4982543 Is using a debugger lazy?

Posted by sundersoft on 21 September 2012 - 07:19 PM

I almost never use a debugger because:
-I usually have a good idea of where the bug is; it's usually in the last code you wrote (assuming you're testing as you go). The code's erroneous behavior is also usually enough to determine roughly where the bug is. Looking at stack traces and whatnot takes a good amount of time, and it's usually not necessary to know exactly where the bug is.
-My approach to fixing bugs is to look at the relevant code and try to improve it. I may find other bugs that I didn't know about or I may find that the code or variable names are confusing. I may also decide that the method I'm using is naturally error prone (has many special cases, etc) and that I should rewrite the code to be simpler. With a debugger, I normally find the bug, fix it, and leave, so the code quality is not improved much. I'd rather spend time looking at the code than using a debugger.

When I do use a debugger, it's usually to generate a stack trace for an uncaught exception or segmentation fault. Well-written C++ code shouldn't throw many exceptions or have many opportunities to segfault, though, so I very rarely need it. I normally do printf-style debugging, since it's often faster than getting the same information out of a debugger, and I prefer reading log files to suspending the program every time I want data.
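For reference, by printf-style debugging I just mean something like the following minimal sketch (the DBG macro name is my own invention; it relies on C++11 variadic macros, which recent GCC supports):

#include <cstdio>

// A tiny debug-logging macro (the name DBG is just illustrative). It tags each
// message with the file and line so the output can be read back from a log file.
#define DBG(...) \
    (std::fprintf(stderr, "[%s:%d] ", __FILE__, __LINE__), \
     std::fprintf(stderr, __VA_ARGS__), \
     std::fputc('\n', stderr))

int divide(int a, int b) {
    DBG("divide(a=%d, b=%d)", a, b); // trace the inputs instead of setting a breakpoint
    return a / b;
}

int main() {
    return divide(6, 3) - 2;
}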

I also hate Visual Studio because it gets in my way so much. You should be able to just write text without having a box suddenly pop up on the screen, eat your keyboard input, and block out the rest of the code. Why can't the autocomplete stuff be on a separate panel at the side of the screen instead of covering the code? I use Dev-C++ (a beta version, with the most recent version of GCC) for all development because the IDE is simple and doesn't get in my way. Also, autocomplete encourages lazy variable naming, because the penalty for not naming your variables properly is reduced, which makes badly-named variables harder to notice. Many of Visual Studio's other features are stupidly implemented, and it would waste more space (specifically vertical space) than Dev-C++ even if you disabled all of them.

With that being said, most people are reliant on debuggers and prefer complicated IDEs such as Visual Studio.


#4975599 std::enable_if, template specialization

Posted by sundersoft on 01 September 2012 - 06:46 PM

I soon realized this after posting that reply and decided to go with the function overload. Extending the standard namespace is something I try to avoid, but I am interested in what these "certain conditions" are, if you could explain further.


This is what the standard says about it (this is from a 2005 draft but I doubt the C++11 standard changed this significantly):

"It is undefined for a C++ program to add declarations or definitions to namespace std or namespaces within names-
pace std unless otherwise specified. A program may add template specializations for any standard library template to
namespace std. Such a specialization (complete or partial) of a standard library template results in undefined behavior
unless the declaration depends on a user-defined type of external linkage and unless the specialization meets the standard
library requirements for the original template.
171)
A program may explicitly instantiate any templates in the standard
library only if the declaration depends on the name of a user-defined type of external linkage and the instantiation meets
the standard library requirements for the original template."

Footnote 171: "Any library code that instantiates other library templates must be prepared to work adequately with any user-supplied specialization that meets the minimum requirements of the Standard."
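As a concrete illustration of what that rule permits (my own example, not from the standard text): specializing a standard template such as std::hash for your own type is fine, because the specialization depends on a user-defined type and follows the primary template's requirements.

#include <cstddef>
#include <functional>
#include <string>

struct Account { std::string id; }; // user-defined type with external linkage

namespace std {
    // Allowed: a specialization of a standard library template that depends on a
    // user-defined type and meets the requirements of the original template.
    template <> struct hash<Account> {
        size_t operator()(const Account& a) const {
            return hash<string>()(a.id);
        }
    };
}

int main() {
    Account a{"user42"};
    std::size_t h = std::hash<Account>()(a);
    (void)h;
}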

You might want to wait for concepts to be standardized (or killed) before trying to add information about when a template will work to its interface. You're going to have to change your code if the standards committee actually adopts concepts and you want to use that feature (which would do basically what you're trying to do right now).
http://en.wikipedia.org/wiki/Concepts_%28C%2B%2B%29
However, this probably isn't going to be standardized for another 5 years at least (if it does become standard).


#4975513 std::enable_if, template specialization

Posted by sundersoft on 01 September 2012 - 01:07 PM

This code compiles in GCC 4.6. I don't have 4.7 to test with.
#include <type_traits>

// firstsomething.hpp
template <typename T>
typename std::enable_if<std::is_arithmetic<T>::value, T>::type func(const T& x) noexcept
{ return x; }

// something.hpp
typedef int mytype;

// Explicit (full) specialization of func for T = mytype.
template <> typename std::enable_if<true, mytype>::type func(const mytype& x) noexcept { return 0; }

int main() {
    func(0.0); // uses the primary template
    func(0);   // uses the specialization
}

You could try changing the specialization to:
template <> typename std::enable_if<std::is_arithmetic<mytype>::value, mytype>::type func(const mytype& x) noexcept;

If that doesn't work, you might not need a function specialization in the first place (I'm not sure what exactly you're trying to do), so you ought to be able to use function overloading:
mytype func(const mytype& x) noexcept;

I did add 'template <typename T>' to the specializations instead of 'template <>' but doesn't this then mean the functions are no longer specializations of the original?

Yes, with 'template <typename T>' they become separate overloaded function templates rather than specializations. You can't have a partial function specialization in C++, so all function specializations must start with "template<>" (as far as I know).
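To illustrate the difference with a minimal example of my own: 'template <typename T>' introduces a new overloaded function template, while 'template <>' explicitly specializes an existing one.

#include <iostream>

template <typename T> void f(T)  { std::cout << "primary template\n"; }

// A second primary template: this is an overload and participates in
// overload resolution as a separate function template.
template <typename T> void f(T*) { std::cout << "pointer overload\n"; }

// An explicit (full) specialization of the first primary template for T = int.
template <> void f<int>(int)     { std::cout << "int specialization\n"; }

int main() {
    f(1.5);            // primary template
    f(42);             // int specialization
    int x = 0; f(&x);  // pointer overload
}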


#4965951 Floating Point Constants

Posted by sundersoft on 03 August 2012 - 03:26 PM

IMO you should be using templates if you need to write code that works with any type. This lets you use multiple types and change them later without affecting existing code. Besides float and double, you may also end up using one of the integer types, complex, a quaternion type, a vector or matrix type, etc. For example, if you have a generic vector type, then you can use vec<unsigned char, 4> or vec_4<unsigned char> to store image data and manipulate it in a convenient manner (although you may also have to implement casts).
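For instance, here is a minimal sketch of the idea (lerp is just an illustrative name; any type with the needed operators works):

// Generic linear interpolation: compiles for float, double, integer types, or a
// user-defined vector/matrix type that provides +, - and scaling by S.
template <typename T, typename S>
T lerp(const T& a, const T& b, S t) {
    return a + (b - a) * t;
}

int main() {
    float  f = lerp(0.0f, 10.0f, 0.25f); // 2.5f
    double d = lerp(0.0,  10.0,  0.25);  // 2.5
    (void)f; (void)d;
}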


#4964638 Opengl having issues with Intel HD3000

Posted by sundersoft on 30 July 2012 - 04:50 PM

Intel's drivers are notorious for not supporting OpenGL properly. You could try using DirectX if you want to support Intel cards; it's supposed to work somewhat better.


#4964635 Use SAT to "unpenetrate" objects but limit direction of minimum trans...

Posted by sundersoft on 30 July 2012 - 04:44 PM

Most commercial games use a fixed time step, meaning they only ever advance the physics engine by a fixed interval. If the game is running at a low frame rate, the physics engine is advanced multiple times per frame instead of only once. It is possible that the user's PC is not capable of running the physics in real time, in which case you have to use a larger time step, but this should never happen on PCs that meet the minimum requirements you intend to support.
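A minimal sketch of such a loop (the accumulator pattern; step_physics, render and still_running are placeholder stubs, not any particular engine's API):

#include <chrono>

// Placeholder stubs so the sketch compiles; a real game supplies these.
void step_physics(double dt) { (void)dt; }
void render() {}
bool still_running() { return false; } // returns true while the game is running

int main() {
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 60.0;  // fixed physics step (60 Hz)
    double accumulator = 0.0;
    auto previous = clock::now();

    while (still_running()) {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        // At low frame rates this inner loop runs several times per rendered frame,
        // so the physics always advances in steps of exactly dt.
        while (accumulator >= dt) {
            step_physics(dt);
            accumulator -= dt;
        }
        render();
    }
}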

I believe that most physics engines run discrete collision detection multiple times per physics frame to get more accurate results and handle fast-moving objects better.

Using either of those along with careful design of the levels should avoid problems caused by the use of discrete collision detection.

The alternative is to make your physics engine do all of its collision detection continuously and avoid approximations that get worse with larger time steps. However, this is much more difficult to implement, and commercial physics engines do not do this, so you are unlikely to be able to implement it properly.


#4962297 AUTO_PTR issue.

Posted by sundersoft on 23 July 2012 - 10:53 AM

If you have access to a modern compiler with unique_ptr, then you can basically replace each use of auto_ptr with unique_ptr, make some simple alterations to the code based on the error messages, and it ought to work (is there a good reason why you're porting to VS 2005?).

If you're changing auto_ptr to unique_ptr then you have to add a call to std::move in some cases. For example, if a and b are auto_ptrs, then this code is valid:
a = b;

But, the unique_ptr version must be written like this:
a = std::move(b);

You can just use the error messages to find all of the places where you need a call to move.
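A small before/after sketch (Widget and the variable names are hypothetical):

#include <memory>
#include <utility>

struct Widget { int value; };

int main() {
    // Old code: std::auto_ptr<Widget> a(new Widget()), b(new Widget()); a = b;
    std::unique_ptr<Widget> a(new Widget()), b(new Widget());
    a = std::move(b); // explicit ownership transfer; b is empty afterwards
}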


#4956742 Global variables in comparison to #define

Posted by sundersoft on 07 July 2012 - 03:18 PM

__ (2 underscores) is a prefix reserved for the system/compiler. If you want to make absolutely sure your macros will never conflict with anything, you could add some underscores in front, but make sure it is not just 2 underscores. At work we use 3.


Anything starting with two underscores, or with one underscore followed by a capital letter, is reserved for the implementation. So anything starting with three underscores is reserved (since it also starts with two underscores), and any macro that starts with an underscore and a capital letter is reserved. Also, an identifier can't contain any sequence of two underscores, even if it's not at the start. The compiler is not likely to define a macro that starts with three underscores, but it is still allowed to do so.
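For example (the macro names are made up; the comments say which forms are reserved to the implementation):

// #define __MY_GUARD          // reserved: starts with two underscores
// #define ___MY_GUARD         // reserved: three underscores still begin with two
// #define _My_guard           // reserved: starts with an underscore and a capital letter
// #define MY__GUARD           // reserved: contains a double underscore
#define MY_PROJECT_GUARD_H     // fine: no leading underscore, no double underscore anywhere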


#4951896 Overusing smart pointers?

Posted by sundersoft on 22 June 2012 - 08:26 PM

I have another issue with my resource manager that I want some comments on..

I have been using shared_ptr for this previously..

The options I have been thinking of are these:

1. Have a template<typename ResourceT> class ResourceManager; and give that class a function Hash register(const std::string& filename); that registers a resource with the given filename. If the resource does not exist, the register function creates it and stores it on the stack in an unordered_map. I would also store a reference count, and the user would need to call unregister when it's done with the resource; the resource would be deleted when the ref count reaches 0. To get the resource I would call ResourceT& GetResource(Hash resourceHash); I could then store raw pointers to the objects, as they cannot be deleted before I call the Unregister function. I like this approach as I try to follow the advice about keeping things on the stack mentioned in this thread.

2. I could have the same ResourceManager class as above, but store weak_ptrs in the unordered_map. When I register, it would check whether there's a weak_ptr from which it can get a valid shared_ptr to the object; otherwise it creates a new shared_ptr, stores a weak_ptr, and returns the shared_ptr. With this I don't have to worry about calling unregister, and I can run a cleanup of the expired weak_ptrs whenever I want.

Why should I choose one of these over the other, or even something else?


If you're going to have register and unregister functions in one of your classes, you should consider providing a helper RAII class that registers its argument in its constructor and unregisters it in its destructor. And if you're going to be using reference counting anyway, you might as well just use shared_ptr, since the efficiency gain from implementing it yourself is going to be negligible.

The implementation you should use depends on what kind of efficiency constraints you have and what exactly you're trying to do. For example, if you're making a game and you want to cache textures and models loaded from disk, I'd recommend the following in your caching class:

-Have it return shared_ptrs on an object lookup. This means you don't have to worry about whether the cached objects are in use when you flush the cache. I would also recommend storing shared_ptrs in the cache because, if you used weak_ptrs, an object would be flushed from the cache as soon as the last external shared_ptr reference to it was destroyed. That would be bad if part of your game involved loading a model (e.g. a rocket), instancing it, and then destroying it later: every time the model was requested from the cache, it would have to be loaded from disk again.

-Have a "flush" function which goes through the cache and removes any object that only has one shared_ptr referring to it (i.e. the one stored by the cache). This removes all currently unused objects from the cache. It is not desireable to remove every object from the cache because this may cause two different instances of the same object to exist.

When you're performing an operation that makes most of the cache redundant (e.g. loading a new level), you would flush the cache. For loading a level, you would first delete the existing level, load and initialize the new one, and then flush the cache. This order means any resources that exist in both levels don't have to be loaded from disk twice.
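A rough sketch of that kind of cache (the class and member names are my own, and the disk-loading step is stubbed out with a default construction):

#include <memory>
#include <string>
#include <unordered_map>

template <typename Resource>
class ResourceCache {
    std::unordered_map<std::string, std::shared_ptr<Resource>> cache;
public:
    // Returns the cached object, creating it on the first request. The
    // make_shared call stands in for the real load-from-disk code.
    std::shared_ptr<Resource> get(const std::string& name) {
        auto it = cache.find(name);
        if (it != cache.end()) return it->second;
        auto res = std::make_shared<Resource>();
        cache[name] = res;
        return res;
    }

    // Removes entries whose only remaining reference is the cache's own shared_ptr,
    // i.e. everything that is currently unused.
    void flush() {
        for (auto it = cache.begin(); it != cache.end(); ) {
            if (it->second.use_count() == 1) it = cache.erase(it);
            else ++it;
        }
    }
};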

