frob

Member Since 12 Mar 2005

#5168267 Converting STL heavy static library to shared

Posted by frob on 21 July 2014 - 07:27 PM

Reiterating, the problem is not about whether the library is static or dynamic or any such thing. The problem is ensuring safe conditions when crossing boundaries.

If you say "here is a pointer to a C-style string" or "here is a 32-bit value" there are no problems. Data is transferred and read, and as long as each side is responsible for its own internal work, everything works just fine.

When you start to pass around larger objects there is potential for problems. One side might have some build-specific data members (like debug variables) so that the two structures are not identical: one may have 16 data members while the other has 18. As long as the data members and the layout match, most of those concerns go away. If you end up modifying a shared data structure you need to rebuild everything on both sides to keep them in sync, but in the grand scheme of things this is not a problem.

The other big issue is when you pass around things that rely on other systems, like memory management. If one side is using a certain memory manager, the other side is using a different memory manager, and both sides attempt to manipulate the same memory, the two will behave in incompatible ways and really bad things are going to happen. The standard container classes are a great example of this. Since container functions tend to be inlined and containers can modify large blocks of memory, programmers must be careful that both sides are using compatible functions. One side allocates with a debug allocator that puts a nice little border around the block and tracks it; the other side resizes that memory with a non-debug allocator and releases the debug-tracked block with a non-debug function, and suddenly all kinds of nightmare scenarios can play out. Memory management is probably the most common case, but the same applies to any set of functions: if one side behaves in a way that is incompatible with the other side, errors will result.

As long as both sides (the main executable and the external libraries) are compiled with the same settings and the same combination of libraries everything is fine. The data structures will be the same size, with the same alignment, and they'll be using the same libraries for external functionality.
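For example, a boundary that stays safe tends to look something like this (the names here are made up for illustration):

    // Hypothetical plugin boundary; the function names are illustrative.
    #include <cstddef>

    // Risky across a DLL boundary: std::string's layout and allocator depend on
    // the compiler, the runtime, and the debug/release settings on each side.
    //   void plugin_set_name(std::string name);

    // Safer: pass plain data and let each side manage its own memory.
    extern "C" void plugin_set_name(const char* name, std::size_t length);

    // Safer output pattern: the caller owns the buffer, the callee only fills it
    // in and reports how many bytes it wrote (or would need).
    extern "C" std::size_t plugin_get_name(char* buffer, std::size_t buffer_size);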


#5167751 Patenting an Algorithm?

Posted by frob on 19 July 2014 - 02:11 AM

Copyright covers the expression of the idea. It covers the source code, the executable, and so on. If somebody copies and distributes the source code, or uses it in their own programs that work similarly and then distribute it, they are violating copyright.

Patent covers processes, machines, and devices. It can potentially cover the process of encoding, the mechanics behind it. If somebody came up with a similar process for encoding and decoding that is too similar to the patented process, they are violating a patent.


If someone used a 'clean room' implementation, that is they came up with a H264 implementation on their own based on descriptions given to them, they would not be violating copyright (because it is their own expression of the idea) but they would likely be violating the patent (because they used the process).


#5167693 why c++ is the most widely used language for professional game development?

Posted by frob on 18 July 2014 - 05:48 PM

Its insane compilation model leads to terrible build times, and completely breaks down in large codebases. A Windows build of Qt5 takes >2 hours on a quad-core system with an SSD. A comparable codebase in C# would take less than 10 minutes (my system manages ~100Kloc of C# per second, which would build the whole 10Mloc of Qt5 in 1.5 minutes.)
 
Its lack of garbage collection means that long-running applications will suffer from memory fragmentation issues unless special measures are taken.
 
The complexity of the specification means that no two compilers implement the language in a compatible manner. Portable code is littered with "#if _MSC_VER > 1700" and similar line noise.
 
C++ may have been an improvement over C when it was created (debatable), but right now the main reason for its popularity is inertia.

Those are, in many ways, the strengths of the language.

Build times are insanely long because of the compilation and linking model. Pull everything in. Inline everything you possibly can. Optimize and precompute everything possible, perform every optimization available, and restructure everything from the biggest algorithms to the smallest detail to be cache friendly, OOO-core friendly, branch-predictor friendly, lookahead table friendly, and more.

Lack of garbage collection means less memory used; it is well established in academia that GC-based systems generally require around 1.5x the memory to maintain similar performance. Yes, it takes more human effort to manage object lifetimes, but when you are on a console or mobile device with memory measured in megabytes, taking a one-third reduction in effective memory just to get automatic garbage collection is an unwise tradeoff.

The ability to have system-specific improvements means you can take advantage of features rather than relying on the most generic or completely portable features. If some hardware or compiler offers a feature you can take advantage of, then take advantage of it. You don't say things like "This hardware offers parallel processing, SIMD, lots of cores, and hardware acceleration for 3D graphics, but I'm going to stick with the basics. None of those fancy instructions or libraries, it is pure C++98, no threading, and all graphics will be done with direct hardware interaction." No, instead you look for features on the system and take advantage of them.
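For example, something along these lines, gating SIMD on what the compiler advertises (illustrative, and only a sketch):

    #include <cstddef>

    // Use SSE2 intrinsics where the compiler says they are available,
    // otherwise fall back to a portable loop.
    #if defined(__SSE2__) || defined(_M_X64) || (defined(_M_IX86_FP) && _M_IX86_FP >= 2)
    #include <emmintrin.h>
    void add_floats(float* dst, const float* a, const float* b, std::size_t n) {
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4)
            _mm_storeu_ps(dst + i, _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
        for (; i < n; ++i)
            dst[i] = a[i] + b[i];   // leftover elements
    }
    #else
    void add_floats(float* dst, const float* a, const float* b, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            dst[i] = a[i] + b[i];
    }
    #endif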


The language is less productive than many newer languages, but we don't use C++ for productivity reasons. We use it because the compilation model makes for incredible optimizations, because it allows programmers to control everything, and because, unlike other languages, you only pay for features when you use them (with only two exceptions, exception handling and RTTI, and those are frequently disabled). We use it because it is trivially compatible with everything else. We use it because there is an enormous library of functionality to rely on. We use it because it allows extensions that take advantage of hardware. And when it is not the right language, we build scripting systems or exposed interfaces, or otherwise leverage high-productivity languages where performance is not key.

For systems-level work, C#, Java, and other languages leave much to be desired. C++ is great for systems-level work. It is more productive than its predecessors, and flexible enough for all your low-level needs.


#5167572 What Linux Distribution is best to start making games?

Posted by frob on 18 July 2014 - 05:03 AM

The best distribution to start with is the one you happen to have installed already. Do your development there.

After that, you will need several just to verify that everything works. That likely means relying on pre-existing multi-system compiler farms, or downloading and testing on SUSE, Red Hat and Fedora, Debian and the *buntu family, Mint, and anything else you can get your hands on. Sites like DistroWatch.com provide an ever-changing list of the top major distributions; cover the best spread you can manage.


If you are asking which is the best one to install for your very first Linux distro, that is an enormous holy war that would get your topic moved over to the Lounge so reputations don't suffer too much. It is better asked at sites like the above-mentioned DistroWatch, where there are detailed comparisons of the differences and similarities between the major players.


#5167565 Classes and OOP

Posted by frob on 18 July 2014 - 04:47 AM

Further on the single responsibility principle...

A class should focus on a single responsibility. Every function of the class should support that single responsibility. Every data member of the class should be related to the responsibility.

There are several related principles, such as dependency inversion and the Liskov substitution principle, collectively known as SOLID.

Deciding exactly what responsibilities belong to a class can be a difficult decision. When you are too generic (such as "ResourceManager") you tend to create a "god class" that does too much or knows too much. When you are too specific you end up with classes that are incomplete and require tight coupling to be useful. Often there are natural clusters of functionality, such as an ImageCache, ImageProxy, CacheEventHandler, or whatever else you need. Or that may be overkill for your design.
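A rough sketch of that kind of split might look like this (the class names are made up):

    #include <memory>
    #include <string>
    #include <unordered_map>

    struct Image { int width = 0, height = 0; /* pixel data elided */ };

    // Responsibility: turn a path into an Image. Nothing else.
    class ImageLoader {
    public:
        std::shared_ptr<Image> load(const std::string& path) {
            (void)path;                        // real decoding elided
            return std::make_shared<Image>();  // placeholder so the sketch compiles
        }
    };

    // Responsibility: remember images that have already been loaded. Nothing else.
    class ImageCache {
    public:
        explicit ImageCache(ImageLoader& loader) : loader_(loader) {}

        std::shared_ptr<Image> get(const std::string& path) {
            auto it = cache_.find(path);
            if (it != cache_.end()) return it->second;
            auto image = loader_.load(path);
            cache_[path] = image;
            return image;
        }

    private:
        ImageLoader& loader_;
        std::unordered_map<std::string, std::shared_ptr<Image>> cache_;
    };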

When you discover mistakes, find a way to refactor or redesign the code if it helps. Sometimes classes need to be split. Sometimes they need to be merged. Sometimes dependencies need to be introduced; frequently dependencies need to be removed. Sometimes it is better to just live with the mistake.

Several classes naturally become god classes in games. Often there is a GameObject that contains a bunch of generic information that applies to absolutely every object. There may be a World class that serves not just as a container for objects in the world, but also as a central hub for everything remotely related to the simulator. Physics, Sound, and Animation turn into dumping grounds or namespaces with far too many free-floating functions. Keeping them limited to a single responsibility can be difficult, but it makes for a better implementation.

Enforcing a single responsibility per class takes experience to do well. Most of us gain that experience by watching others or by doing it badly and getting burned. As this is For Beginners, just do your best and focus on getting something that is functional. It doesn't need to be perfect; focus on the minimum that is good enough for your needs. In the process you will gain a lot of experience (by doing things badly). Don't worry too much about it if you can make it work. Live with the bad version for now and get the project finished any way you can. Finish off your tic-tac-toe game, or pong game, or falling-block game, and each time you will learn many things to do differently on your next, more ambitious project.


#5167429 Intialization withn definition?

Posted by frob on 17 July 2014 - 10:17 AM

Tile( TileType Type = TILE_EMPTY, unsigned int Color = 0xffffffff, unsigned int Flags = 0 ) :
... I fail to assimilate its semantics and usage.
Usage, you can use any of these:

Tile t;  // uses all three defaults ("Tile t();" would declare a function, not an object)
Tile t(TILE_WHATEVER);
Tile t(TILE_WHATEVER, 0xbaadf00d);
Tile t(TILE_WHATEVER, 0xbaadf00d, flags);

Default parameters occasionally trip programmers up on compiler rules, and they can hamper the long-term growth of released APIs. It may not be a problem in a pristine code base, but as the code grows, new functions are introduced, unexpected usage patterns arise, and all the little useful-but-ugly growths start to appear, default parameters invariably lead to overload collisions and headaches that could have been avoided.

In the long term it is better to explicitly provide a family of functions with all the desired signatures.

    Tile() ...
    Tile( TileType Type ) ...
    Tile( TileType Type, unsigned int Color ) ...
    Tile( TileType Type, unsigned int Color, unsigned int Flags ) ...
Providing all the signatures is much better for long term maintenance.
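With C++11 delegating constructors, that family can forward to one full constructor; roughly something like this (member names made up):

    enum TileType { TILE_EMPTY /*, other values elided */ };

    class Tile {
    public:
        Tile() : Tile(TILE_EMPTY) {}
        Tile(TileType type) : Tile(type, 0xffffffff) {}
        Tile(TileType type, unsigned int color) : Tile(type, color, 0) {}
        Tile(TileType type, unsigned int color, unsigned int flags)
            : type_(type), color_(color), flags_(flags) {}

    private:
        TileType type_;
        unsigned int color_;
        unsigned int flags_;
    };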


#5167245 Patenting an Algorithm?

Posted by frob on 16 July 2014 - 04:11 PM

Questions

1) So, if I were to patent an algorithm, do you think it would be unethical?
2) How much would an algorithm go for? I know it depends, but on what?
3) Do you think a machine that can use the algorithm to do something is more valuable than the algorithm itself?
4) If the source code of a game can be protected under copyright law, then doesn't it make sense that an algorithm can be protected under the same law (besides, what makes a game different from the next other than the way in which the game was coded, other than art assets)?
5a) Do you think that perhaps the value of a lot of games is, more so, the processes used to make the games?
5b) Perhaps some algorithm used to solve a game problem could be used in the medical field to save someone's life?

1. Ethics are hard. Is it ethical to kill an animal for food? Is it ethical to kill a plant for food? Is it ethical to limit someone's action at all?
2. The minimum filing fee in the US is $70. The process involved is extremely nuanced and usually requires at least one lawyer. The cost to have people do all the work, including the necessary prior art searches, proper citations, proper wording, and getting all the paperwork correct, is anywhere from $1,000 to $10,000 or more depending on complexity.
3. Patents are legal protection for processes. They do not have any intrinsic value. A large number of patents have no value at all, and are merely filed and forgotten. The value of a machine is very different from the value of a legal protection.
4. No. Copyright protects a tangible expression and the rights associated with distribution and reproduction. Patents protect processes and methods of performing a process. They are quite different mechanisms for different legal protections.
5a. Watch your language. The processes used to make games are different from the processes encoded inside games. The processes used to make games are things like scrum meetings and sprints and project management and team coordination. Those are not the same as the process for computing self-shadowed objects, for example.
5b. Possibly, but it is irrelevant. Maybe someone could adapt some AI techniques to apply to medicine. Lots of life-saving processes are patented, yet apart from the costs involved in licensing, fairly few people object to the practice. We don't like it when licensing is expensive, but we do understand and respect that the innovators should be compensated.


#5167235 cpu, gpu or maybe no constrained?

Posted by frob on 16 July 2014 - 03:47 PM

And... we're done here.


#5167073 In game periodic notice question

Posted by frob on 15 July 2014 - 09:00 PM

I've worked on games that relied on them. Ultimately we learned that players hate them. From telemetry we learned that most players don't use them for their intended purposes.

Of course, you might be making a game where players like to interrupt their gameplay by reading the rough equivalents of popup spam windows. Perhaps your game's demographics enjoy reading and re-reading the same text blurb thousands of times. In that case, all you need is a begin and end time for each notice, and the update loop can check them every frame without any performance problems. If the end time is reached, drop the notice; if the begin time is reached, display it if necessary. Iterating over a list of usually zero, sometimes one or two, items is not a performance concern.
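Roughly something like this (types and names made up for illustration):

    #include <cstddef>
    #include <string>
    #include <vector>

    // Hypothetical timed notice: visible while begin <= now < end.
    struct Notice {
        double begin = 0.0;   // seconds on the game clock
        double end = 0.0;
        std::string text;
    };

    // Called once per frame. The list is usually empty or holds one or two
    // entries, so a linear pass is not a performance concern.
    void update_notices(std::vector<Notice>& notices, double now,
                        void (*show)(const std::string&)) {
        for (std::size_t i = 0; i < notices.size(); ) {
            if (now >= notices[i].end) {
                notices.erase(notices.begin() + i);       // expired: drop it
            } else {
                if (now >= notices[i].begin) show(notices[i].text);
                ++i;
            }
        }
    }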


#5167055 extern definition?

Posted by frob on 15 July 2014 - 03:12 PM

Yes, one use is language linkage. Only two linkages are guaranteed to be supported; you can use extern "C" and extern "C++". The standard allows an implementation to provide others; for instance, the Windows API might have marked things as extern "PASCAL" for its old convention. When used for language linkage, extern can affect things like name mangling, calling convention, and anything else potentially needed for interoperability.

The third usage was introduced in C++11. Marking a template as extern prevents the compiler from implicitly instantiating it. Sometimes people forget that a template doesn't actually create code; it creates something akin to an adjustable cookie cutter, or a giant bag of buttons ready to sew on. Normally the compiler needs to spend effort generating the contents of a template. This is even more the case with type deduction, where the compiler evaluates the template, adjusts values or searches through that big bag for versions of the code that don't have substitution failures or that match a concrete type, then selects the best match requiring the fewest implicit conversions, and finally generates code for it. Marking a template instantiation as extern prevents the compiler from implicitly generating that code; the compiler assumes the functions for the type have already been generated somewhere else (such as another source file) and treats them like regular function declarations rather than template declarations. The end result is much faster compilation of template-heavy code, at the cost of reduced inlining and potentially slower execution.


So those are the three: the original usage that dates back to the C roots for external linkage, the great C/C++ schism usage for external language linkage, and the recently introduced use for externally instantiated templates.
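Roughly, the three look like this side by side (names made up):

    // Illustrative header showing the three uses of extern.

    // 1) External linkage: this variable is defined in some other translation unit.
    extern int g_frame_count;

    // 2) Language linkage: expose a function with C naming and calling conventions
    //    so it can be linked against from C or across a plain-C ABI.
    extern "C" void log_message(const char* text);

    // 3) Explicit instantiation declaration (C++11): tell the compiler not to
    //    instantiate Grid<float> here; exactly one source file provides it.
    template <typename T>
    class Grid {
    public:
        void resize(int w, int h) { width_ = w; height_ = h; }
    private:
        int width_ = 0, height_ = 0;
    };

    extern template class Grid<float>;
    // In exactly one .cpp file:  template class Grid<float>;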


#5166886 extern definition?

Posted by frob on 14 July 2014 - 07:36 PM

You have it almost right.

The extern keyword specifies that a name has external linkage; that is, that the name should be externally visible between modules. (It has two other uses, but those aren't what you asked about.) For a variable like the one above, marking it as extern also means it has static duration, but since your variable was global, it had that already.

In many ways, "extern" is the opposite of "static". Both mean that the object will be created before program execution, but "extern" means the name is exported by the compiler so that other modules can link with it, while "static" means the name is kept private so it doesn't collide with other similar names.

If you wrote:
extern int i;
Then the variable has external linkage but is not defined here. It serves as a forward declaration to the compiler. The compiler will leave it as a hole to be filled in by the linker. It is much like a function declaration in that respect: it is declared but not defined here.

When you wrote:
extern int i = 0;
Then the variable still has external linkage, and you have defined it in this module. This is similar to a function body: it is an actual definition, the actual thing, rather than just a name that exists somewhere for the linker to fill in later.
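A minimal two-file sketch of the difference (file and variable names made up):

    // globals.h -- declaration only; every file that includes this sees the name.
    extern int g_score;

    // globals.cpp -- exactly one definition for the whole program.
    #include "globals.h"
    int g_score = 0;

    // main.cpp -- uses the name; the linker connects it to globals.cpp's definition.
    #include "globals.h"
    int main() {
        g_score += 10;
        return g_score;
    }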


#5166874 Strange VS2012 Debug Problem

Posted by frob on 14 July 2014 - 05:54 PM

Those specific values are useful to memorize. For fun, convert your display to hex. You'll see 3435973836 turn to 0xcccccccc.

That is one of many bit patterns used by the debugging libraries to help you know what happened. That one specifically marks uninitialized stack values.
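For example, a quick way to do that conversion yourself:

    #include <cstdio>

    int main() {
        unsigned int mystery = 3435973836u;   // the decimal value the debugger showed
        std::printf("0x%08X\n", mystery);     // prints 0xCCCCCCCC, the uninitialized-stack fill
        return 0;
    }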

Some common bit patterns in Windows development:

0xCCCCCCCC Uninitialized locals (on the stack)
0xCDCDCDCD Uninitialized (allocated on the heap but not initialized by your program)
0xDDDDDDDD Freed memory (released by delete or free, but remains allocated to the program for debug tracking purposes)
0xBAADF00D Allocated by heap management functions rather than new or *alloc; uninitialized memory your program owns
0xDEADBEEF Deallocated heap memory your program no longer owns
0xABABABAB and 0xBDBDBDBD Guard memory located outside (before and after) an allocated memory block
0xFDFDFDFD No man's land (normally outside of a process, but also used for certain guard blocks)


So with your value, the debug library is telling you that you never initialized the value. In release mode the value will not be initialized to anything; it will just contain whatever value happened to be in that memory earlier.

If you thought you initialized it with something, you will need to track down inside that code to see why it is returning an uninitialized value instead of a valid (or NULL) pointer.


#5166829 How To Make Combat Formulas Work Better ?

Posted by frob on 14 July 2014 - 03:40 PM

Balancing the systems is difficult.

Most games work on probability curves, not just the high and low points. Spend some time with games that expose their mechanics, such as tabletop games like D&D. They tune their dice rolls to adjust the probability curves. 4d6 has a very different curve than 2d12 or 2d10+1d4.
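For example, you can compare those curves with a quick brute-force enumeration, something like this:

    #include <cstdio>
    #include <initializer_list>
    #include <map>

    // Count how many ways each total can be rolled for a given set of dice.
    std::map<int, long long> roll(std::initializer_list<int> dice) {
        std::map<int, long long> totals{{0, 1}};
        for (int sides : dice) {
            std::map<int, long long> next;
            for (const auto& [sum, ways] : totals)
                for (int face = 1; face <= sides; ++face)
                    next[sum + face] += ways;
            totals = next;
        }
        return totals;
    }

    int main() {
        // 4d6 and 2d12 both top out at 24, but the shapes are very different.
        for (const auto& [sum, ways] : roll({6, 6, 6, 6}))
            std::printf("4d6  total %2d: %4lld ways\n", sum, ways);
        for (const auto& [sum, ways] : roll({12, 12}))
            std::printf("2d12 total %2d: %4lld ways\n", sum, ways);
        return 0;
    }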

Trying to figure out a good system that increases the player's stats over time yet still maintains balance is a very hard job. Good game designers spend many years of their lives studying the subject, just as you spend your time studying programming. During game development, while you are spending thousands of hours writing code, they spend a similar amount of time tuning the design.

A solid, balanced, fun system isn't something they just make up on the spot. It usually requires many months of effort.

It is also a defining feature of most games. Exactly how the systems interact is game specific. You'll need to tune it so it visually and conceptually matches your world. Thinking in terms of dungeon crawlers, a simple short knife can be a poor weapon in the hands of a berserker, but a thief should be able to amass a huge kill streak with the same weapon.

My recommendation is a large collection of data tables that can be reloaded at runtime, and many hundreds of hours of tinkering.


#5166759 What is low-poly (nowdays)

Posted by frob on 14 July 2014 - 11:27 AM

It largely depends on your engine, your target machine, and the number of objects in the world.

If you have few objects you can have a higher number of polygons and denser textures. Lots of objects means fewer polygons and smaller textures. On the hardware side, if your game is mainstream and works on anything P4 or greater, or targets shader level 2 or greater, or otherwise allows for older machines, it is very different from a game that requires a 64-bit OS and 16GB of memory and a DX11 class graphics card.

On most projects the art budget was specified based on the size and importance of objects. A small object (a quarter meter or less) had a limit of around 150 triangles and a 1024×1024 texture. For important characters and large objects, 4000 polygons and larger textures were acceptable.


#5166665 Game doesn't crash if currently printing

Posted by frob on 13 July 2014 - 10:39 PM

Don't know what you mean by crash-hunting, I listed some possible scenarios where crashing may/may not occur. If I don't have debug info toggle-able, and everything is just already out there, then the game doesn't crash either. I see if game_debug is true, the game crashes AFTER change_level is done. However say everything is commented out inside game_debug, then there is no crash at all. So it has to do something with rendering the text on screen, but nothing printed is erased when the level changes. So it's very confusing.


It *might* have something to do with that. It might not. That is what debuggers were invented for.

I asked what you had done for crash hunting for the additional information linked to by ApochPiQ. For example, when the crash occurs in the debugger, where does the crash occur? Does it appear inside a function you know, inside a system function, or in some seemingly random memory location? What does your call stack look like? What was your previous instruction?

Crashes typically include some sort of message indicating the error, or provide a minidump or other useful data. If there is an error message, what exactly does it say? If there is a minidump, what happens when you load it up with your debug info? Those seemingly random numbers mean things, and the guide linked above can help you understand them. (Or we could retype the information hundreds of times, doing it yet again for your post. Please just go read the other links.)

Other than uncommenting some lines and moving a bit of code around to hide the bug, what have you actually done (you know, like a computer scientist rather than code monkey) to experiment on the issue, identify it, and correct it?



