
riku

Members

  • Content count: 94
  • Joined
  • Last visited

Community Reputation: 194 Neutral

About riku

  • Rank: Member
  1. Never tried this myself, but this one sure looks nice: SOCI. It's an abstraction layer over several SQL databases rather than a library for a single SQL implementation. Of course, you can also use SQLite, PostgreSQL, MySQL or other database systems through their native C or C++ libraries. You can also check out non-relational databases like MongoDB; they seem to be very popular these days. -Riku
  2. I use the pimpl idiom extensively when interfacing with C code. It has the advantage that you don't have to include the C headers in your C++ headers. For example, when writing a wrapper for some Win32 code, I'd otherwise have to put #include <windows.h> in my public headers, which brings in lots of namespace clutter and unwanted macros (like the min and max macros). The pimpl idiom also lets you write separate implementations for the same public interface, since you can modify the data members of the private impl without the client having to know (i.e. no change to the public headers). Opaque pointers are used a lot in C programming for the same purpose. The pimpl idiom slightly complicates the code, but it helps you make super clean C++ interfaces. My rule of thumb: pimpl whenever the implementation is dirty or requires messy C headers like <windows.h>. -Riku
  3. Could the removal of D3D support from Qt have something to do with the fact that Qt recently stopped using native windows for widgets? Instead it uses "foreign windows": a single native window where Qt draws the widgets itself. Not using native windows greatly improved the responsiveness of the UI, especially when resizing windows and widgets, and it is a fairly major architectural change. When you create a D3D device, you need a native Win32 window (an HWND window handle, that is) to render into. Since Qt no longer has one per widget, they may have dropped D3D support for the sake of simplicity. What puzzles me, though, is that an OpenGL rendering surface needs a native window just as D3D does. How do they do this in Qt? Perhaps when using GL, they put a native window on top of the Qt window and use it for the GL surface. You really shouldn't whine about open source projects dropping a feature you liked. They probably have a reason for it, and the developers are free to do what's in their interest. If you really want or need that feature back, you are free to grab the source code and hack it back in yourself. I'm sure you can find the old D3D widget source in their source repositories and track down the changes in other relevant parts; looking at the source for the OpenGL widget may prove helpful too. -Riku
  4. Some of the old keygens used small software synths to create their music. Software synths can be a lot smaller than sample-based tracker music, and they also have their own peculiar sound. 4 and 64 kilobyte intros also use soft synths because of the smaller size and better sound quality compared to tracker music in the same amount of data. Using tracker music modules is a lot simpler code-wise, though. -Riku
  5. Remember to normalize the orientation quaternion after the final addition. Special cases may work without normalization but in general your results will be messed up without it. -Riku
  6. I personally never store in version control any 3rd party libraries, in source or binary form, or in general anything that is either available elsewhere (like 3rd party libraries) or machine-generated. When I had my first "industry" job I was shocked to see that it was common practice to store everything under version control: 3rd party libraries, applications, and even the right version of the compiler needed to make the project compile. I didn't like the practice, but I did see good arguments for it; most of them, however, were more business-related than software-related. The commonly cited reasons for storing 3rd party libraries (in binary form) in version control are:
• No need to configure and install 3rd party dependencies. However, you get in trouble as soon as you have to support different operating systems, processor architectures or compiler versions. If you can stick to one combination, say win32-x86_64-msvc9, or you're using a language like Java, storing 3rd party libs may work; in any other case it is pretty damn constraining.
• You can define a workspace that contains everything needed to compile and run the program. Again, this works only as long as you stick to one OS, one CPU architecture, one compiler version, and so on. And how far are you willing to take it? At one big American company I worked for, the convention was to put even the correct Microsoft Visual C++ installers in the source control tree. How convenient is that?
• A new or different version of a dependency will not break your app. A new, non-backward-compatible version of a dependency may appear, and you will either have to stick to the old version, fix your software to require the new version, or add a workaround to support both; or you could store the 3rd party library in source control and do nothing. If a system in production breaks down because of a new dependency version, the client may not be willing to pay for the fix, so it looks cheaper to keep the "correct" version bundled with the sources so the breakage never happens. But if you don't have production systems to care about, your software simply goes out of date, and you are better off updating it for the new dependency versions.
• Your boss or your marketing department may think it's a practical solution to version mismatches. That doesn't mean it actually is; it just means your software turns obsolete quicker.
All the reasons in this list are bad reasons. No matter how convenient they seem at first, they mostly don't work out in practice; you are only adding constraints to your development cycle. Even at best, every "workspace in version control" setup I've experienced started with reading the README.txt and tweaking compiler paths, and it would usually have been easier to install the dependencies manually. If you're free from such marketing and business constraints, I recommend these practices:
• No 3rd party libraries in version control. Especially no binaries; they impose the most restrictions. Source code is much more versatile than binaries, but it can be a pain in the ass to make it compile, and it requires the dependencies of the dependency to be included too.
• Don't store what you can compute. No binaries in source control, and no other machine-generated data or code, like anything you used a script to generate or export (but do include the scripts themselves, if applicable). This also applies to intermediate build files, like Makefiles in a ./configure-based system or MSVC solution files in a CMake-based system.
• If you must include dependencies, use svn:externals or similar. Usually, however, you are better off installing dependencies manually (assuming correct versions of installers, distribution packages, etc. are easily available).
I kind of like the previous poster's idea of a separate workspace repository with the dependencies. It doesn't impose limitations or force anyone to actually use those libraries, but they are there and available if another user wants them, and your original source tree contains only your code, uncluttered by a ton of 3rd party libraries and script hacks to get them built. And remember: even Windows runs on multiple processor architectures these days, so a single binary distribution just won't cut it. Add different compilers, compiler versions and other operating systems to the equation, and the "solution" to the initial problem becomes worse than not solving it at all. rant of the day, -Riku
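For reference, the svn:externals route mentioned above looks roughly like this; the repository URL, revision number and directory names are placeholders, and the line format shown is the Subversion 1.5+ syntax:

```shell
# Pin a third-party library to a known revision as an external.
# svn:externals lines take the form: [-r REV] URL LOCAL-DIR
svn propset svn:externals '-r 1234 ^/vendor/somelib/trunk thirdparty/somelib' .

# A following update fetches the external into thirdparty/somelib.
svn update
```

Pinning a revision (or a tag URL) is what keeps the dependency version reproducible without copying its binaries into your own tree.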
  7. First, some advertising: http://libwm.googlecode.com - Libwm is a new library for windowing and rendering context management for OpenGL, with OpenGL 3 support and more. Now for the competitors: SDL, GLFW, GLUT, SFML, ClanLib. Google for the home pages. -Riku
  8. Despite what people say here, it's not okay for a game to run as fast as it can and hog 100% of the CPU. Ideally a game should run its logic at a fixed rate and redraw the screen at a limited rate, independent of the logic update rate; the logic can be updated slower or faster than the drawing. I've heard that Quake ran its physics at a fixed 15 updates per second (can't confirm this, though) while drawing at a different rate. Physics-heavy games like racing sims usually want to run physics faster than the screen redraw, perhaps at 120 updates/second. The screen redraw rate should not need to exceed 60 frames per second. If you run your game loop at maximum speed, all it does is waste CPU and GPU time, produce heat and make your fans run faster. You're supposed to spend as much time sleeping as possible to let the operating system hand out CPU time to other processes. There's nothing more annoying than a simple Tetris game hogging all the CPU and making the fans roar because it is effectively busy-waiting. So cap your graphics at 60 FPS, run your physics/game logic at a fixed rate independent of the graphics, and sleep as much as you can. There is no advantage whatsoever in running things any faster; the comments above about maximum rates and artificial limitations from the Counter-Strike crowd are mere superstition, so don't listen to them. -Riku
  9. A new version of my cross-platform windowing and OpenGL rendering context management library, Libwm 0.2.0, was released today. Several new features were added, most notably support for OpenGL 3.0 and multisampling. Opengl.org was early to report the release (GL3 support had just been added to the development source repos), but here is the actual release: http://libwm.googlecode.com/ Any feedback is welcome. Please test the library and consider using it in your current or future projects. -Riku
  10. Today I released Libwm 0.1.0, a new cross-platform C++ library for windowing and rendering context management for OpenGL. It supports basic windowing and event handling and internationalized text input; it works with Xlib/GLX or Win32/WGL, covering both GLX <= 1.2 and GLX >= 1.3, and uses WGL_ARB_pixel_format with a DescribePixelFormat fallback for backward compatibility. OpenGL 3.x support, multisampling, etc. are coming later. Links: Libwm web site, source code, API documentation, Mac OS X/Darwin binaries, Win32 binaries, building instructions. All feedback welcome.
  11. Quote:Original post by Megamorph Correct me if I'm wrong here, but since we're going for a flat rendering, those should be face normals - not vertex normals. In this decision I was going off some earlier gamedev.net post about some other normals issue and the suggestion therein, as well as tutorials, such as this one, which use old-style glBegin/glEnd blocks to render a cube with normals (scroll down). Although that does arouse suspicion... How would I then render a smooth-shaded mesh (how does OpenGL know whether to expect vertex normals or face normals?) Perhaps I'm wrong and all I need is to calculate 8 vertex normals instead. In this case, would a vertex normal then = (normals of all adjacent faces)/(number of all adjacent faces)? Yes, you are wrong. When using "face" normals, you really specify the same normal for all the vertices of the face. OpenGL immediate mode (glVertex/glNormal) just applies the last specified normal to every following vertex; OpenGL has no magic to recognize whether you are using face or vertex normals. GL_FLAT vs. GL_SMOOTH shading only affects how the computed lighting values are interpolated across the pixels of a face, and has nothing to do with face vs. vertex normals. For VBOs/vertex arrays you need one normal per vertex; if several vertices share a normal, just copy the same normal to each of them. The same goes for all the other vertex attributes (texture coordinates, etc). So for a cube you need 24 vertices, made by combining the 8 vertex positions with the 6 normal vectors (and maybe 4 texture coordinates). If you have the vertices with vertex indices (8 vertices + 36 indices) and the normals with normal indices (6 normals + 36 indices), you can whip up an algorithm that combines them into the 24 vertex+normal combinations and 36 indices. These can then optionally be triangle-stripped and vertex-cache-locality optimized. This data is then fed to the VBO + index buffer and rendered. If you're using C++, std::map (or std::set) can prove helpful when writing the algorithm for interleaving multiple "planes" of vertex data + indices into one big interleaved vertex data + indices batch that's ready for your GPU to chew on. -riku
  12. Quote:Original post by JSoftware Protip: GameDev.net This might be a reason not many people here like Linux As this is GameDev.net, I'm sure many people are here to improve the Linux gaming situation :) I think that Linux (and why not government-funded open source?) is the way to go for the public sector. I hate how a huge amount of taxpayer dollars/euros/$CURRENCY goes into Microsoft's pockets. France already switched to Linux-based OSes in some government offices several years ago, and now the law enforcement follows. -Riku
  13. I use inner classes regularly. They can be used to improve encapsulation. Their advantage over non-inner classes is that in many languages they can access the outer class' private members (no "friend" relation needed). You can also make the inner class private in the outer class, so it cannot be used outside of the outer class (and its friends). I also use forward-declared inner classes in C++, in particular with the pimpl idiom:

// Header file
class Pimpl
{
public:
    Pimpl();
    ~Pimpl();
private:
    struct impl_t;   // put all private members here
    impl_t *impl;    // and use them through this pointer
};

// Implementation file
#include <nastyplatformheader.h>  // we don't want this in the header file

struct Pimpl::impl_t
{
    NastyPlatformSpecificThing thing;
};

Pimpl::Pimpl() : impl(new impl_t)
{
    impl->thing = createThing();
}

Pimpl::~Pimpl()
{
    delete impl;
}

This has the obvious advantage of not having to include implementation-specific header files in our header. You can also have multiple implementations of the same routines behind the same header files. For example, you can have a single set of header files for a library that has implementation files for Windows and Linux and let your build system decide which to use. -Riku
  14. You might want to take a look at other version control systems, too. Instead of a centralized system like Subversion, a distributed system might suit you better. I have a personal preference for distributed version control systems like Mercurial, Bazaar, Darcs and Git. -riku
  15. Your approach seems okay, just make sure you get the synchronization correct. Your renderer thread could use a work queue, synchronized with a reader-writer or producer-consumer pattern. Quote:Original post by ville-v If you are not sure if resource is thread-safe, it is fairly simple to implement an interface for it. *** Source Snippet Removed *** This is how you should not do it. The locking does not work, because the test-and-set operation is not atomic (while(locked) { wait(); } locked = true;); it is trivial to find a case where more than one thread enters the critical section. Instead of this kind of hand-rolled locking mechanism, look at the synchronization primitives your operating system provides, such as mutexes and condition variables (implemented using interrupt disabling, CPU-level atomic operations and low-level primitives such as semaphores and spinlocks). They can be used to build robust synchronization mechanisms. You should also try to use (variations of) the textbook parallel-programming solutions, such as readers-writers, producer-consumer and monitors, whenever they apply. Parallel programming is a huge paradigm shift from single-threaded coding; you can't use your single-threaded tricks as you used to. -Riku