

Zlodo

Member Since 05 Jan 2009
Offline Last Active May 04 2015 08:00 AM

#5221490 GCC -malign-double

Posted by Zlodo on 05 April 2015 - 11:40 AM

As Bacterius alluded to, this flag can turn out to be very viral.

 

You could say that it is a malign flag.
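To spell out why it's viral, here is a hypothetical 32-bit x86 illustration (the exact offsets depend on the target ABI): the flag changes how doubles are aligned inside structs, so every translation unit and library that shares a struct must be compiled with the same setting.

// shared.h -- included by two translation units
struct Sample
{
    char   tag;
    double value;  // 32-bit x86: offset 4 without -malign-double,
                   // offset 8 with it (doubles forced to 8-byte alignment)
};

// If a.cpp is compiled with -malign-double and b.cpp without, the two
// object files disagree on sizeof(Sample) and offsetof(Sample, value):
// passing a Sample* across that boundary silently corrupts 'value'.
// So the flag has to propagate to everything you link against.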




#5221167 preventing system crash or power outage from wiping savegame: best methods

Posted by Zlodo on 03 April 2015 - 12:36 PM

 

> A bit of explanation is probably called for.
>
> The data in question is the bitmap masks for unexplored sections of the local maps in an FPS/RPG. Right now, they are overwritten on save, with no backup. If the power goes out, they get wiped, and the entire 25-square-mile area of the game world becomes "unexplored" again.
>
> Power outages are a special concern as I'm off grid running off a generator-battery system, and the low-voltage alarm on the power inverter is not very loud, so unexpected power outages are a multiple-times-a-day occurrence here. I suspect laptop users might face similar issues, and since the game will run on laptops (I'm developing it on a laptop board in a mini case), I'd like to make it as robust as possible against power outages.
>
> And it's not like being on grid is much better, at least around here. The power goes out in EVERY storm (almost all power lines are above ground in woods in this area). It's quite common for me to be the only person in the neighborhood with power during bad weather. In fact, the power went out two days ago just because of the normal March winds.
>
> While the game has a built-in cheat to reveal all unexplored areas on the world and local maps, having to use it every few hours because the battery died makes it hard to reproduce the non-cheating player's experience in long-term play testing.

 

 

Why load and save the entire map monolithically? Divide it into fixed-size chunks and only overwrite those that have been touched since the last save (keep a boolean for each chunk that you set to true whenever you modify that chunk).
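A minimal sketch of that bookkeeping (chunk count and size are made up for illustration):

#include <array>
#include <bitset>
#include <cstdint>

constexpr int kChunks    = 64;    // map split into 64 fixed-size chunks
constexpr int kChunkSize = 1024;  // bytes of mask data per chunk

struct ExploredMask
{
    std::array<std::array<std::uint8_t, kChunkSize>, kChunks> chunks;
    std::bitset<kChunks> dirty;   // one "modified since last save" flag per chunk

    void reveal( int chunk, int byte, std::uint8_t bits )
    {
        chunks[chunk][byte] |= bits;
        dirty.set( chunk );       // remember that this chunk needs rewriting
    }

    template< typename WriteChunkFn >
    void save( WriteChunkFn writeChunk ) // only rewrites what changed
    {
        for ( int i = 0; i < kChunks; ++i )
            if ( dirty.test( i ) )
            {
                writeChunk( i, chunks[i].data(), kChunkSize );
                dirty.reset( i );
            }
    }
};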

 

You could even save them on the fly in a background thread, into a temporary file or directory, and rename them into the actual save file(s) when the player wants to save their progress. This way your "save game" function is almost instant (which is nice for the player), and when restarting the game after an unexpected power loss, you can offer to restore the player's "unsaved" progress from the temporary files.
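The rename step is what makes this safe. A sketch of the idea (POSIX semantics; on Windows you would reach for MoveFileEx/ReplaceFile instead, since std::rename there won't overwrite an existing file):

#include <cstddef>
#include <cstdio>
#include <string>

// Write to a temporary file first, then rename over the real save.
// rename() atomically replaces the target, so a power cut leaves either
// the complete old file or the complete new one, never a half-written mix.
bool atomicSave( const char* path, const void* data, std::size_t size )
{
    const std::string tmp = std::string( path ) + ".tmp";
    std::FILE* f = std::fopen( tmp.c_str(), "wb" );
    if ( !f )
        return false;
    bool ok = std::fwrite( data, 1, size, f ) == size;
    ok = ( std::fflush( f ) == 0 ) && ok;  // push stdio buffers to the OS
    // ideally also fsync(fileno(f)) here so the OS cache reaches the disk
    ok = ( std::fclose( f ) == 0 ) && ok;
    return ok && std::rename( tmp.c_str(), path ) == 0;
}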

 

Also, this is exactly the kind of "mission critical" stuff where I'd definitely trust SQLite over any homegrown solution, by the way.




#5218884 preventing system crash or power outage from wiping savegame: best methods

Posted by Zlodo on 24 March 2015 - 01:58 PM

 


> > You can have a look at how SQLite achieves atomic commits using a rollback or a write-ahead journal.
>
> That was about the only thing I found from googling this:
>
> http://en.wikipedia.org/wiki/Journaling_file_system
>
> It helped refresh my memory about how such things work. I was into systems programming before I got into game programming, but that was a LONG time ago, back in the DOS 3.0 and DOS 4.0 days.
>
> From the Wikipedia description, recovery seemed rather non-trivial, especially compared to round-robin saves or a new save each time.
>
> Going to SQLite to save what is essentially a header-less 264x264 monochrome bitmap is probably overkill. Also, these must page in real time, so performance is "mission critical code". In my mind, "mission critical code" and "SQLanything" don't belong in the same universe.

 

 

All "SQLanything" aren't created equal - I suggested using SQLite, not Oracle. It's server-less and writes everything into a single binary file. You could easily store your bitmaps in there as blobs and get the atomic updates, resilience to crashes etc for free. If you use it like just as a key/value store for binary blobs you don't even need it to parse any sql (not that it would matter because parsing a few sql statements when initializing your app wouldn't kill you anyway)

 

As an aside, SQLite and "mission critical" do belong in the same universe, unless you don't consider airliner flight software to be mission critical:

http://sqlite.org/famous.html




#5218622 preventing system crash or power outage from wiping savegame: best methods

Posted by Zlodo on 23 March 2015 - 03:56 PM

You can have a look at how SQLite achieves atomic commits using a rollback or a write-ahead journal:

http://sqlite.org/atomiccommit.html

 

Of course, as said above, it relies on the hard disk not lying about having flushed its caches to the physical media. But even in that case, it is still good protection against crashes.

 

You may also consider just using SQLite to store your game save and letting it deal with all of that.




#5218575 Programmatic Pixel Art

Posted by Zlodo on 23 March 2015 - 01:57 PM

Well, the thing is that tools like GIMP can directly load and save XPM images, so you can edit them like normal images and then just #include them in your code. You still have to decode them into an RGB format before you can use them anyway, so I'm not sure it's really very useful.

 

If you really want to embed images directly into your executable, you might be better off including PNG or JPG images in your code using something like bin2c, and using stb_image (a full-featured image loader that fits entirely in a header file) to decode them.
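For instance (the generated header name is hypothetical; stbi_load_from_memory is stb_image's real from-memory entry point):

#define STB_IMAGE_IMPLEMENTATION  // in exactly one .cpp
#include "stb_image.h"
#include "player_png.h"  // bin2c output: unsigned char player_png[]; unsigned player_png_len;

unsigned char* loadEmbeddedPlayer( int* w, int* h )
{
    int channels = 0;
    // decode the embedded PNG straight from memory, forcing RGBA output;
    // the caller frees the returned pixels with stbi_image_free()
    return stbi_load_from_memory( player_png, (int)player_png_len,
                                  w, h, &channels, 4 );
}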




#5218540 Programmatic Pixel Art

Posted by Zlodo on 23 March 2015 - 12:42 PM

Has anyone ever tried to do that, you ask? Oh yeah. I can guarantee you that in the medieval ages of computing, people routinely hardcoded graphics directly in their source code like that.

 

You may want to have a look at the XPM format, a text-based bitmap format that is also valid C code defining the palette and pixels as arrays. Some editors even recognize it and turn into bizarre text editor / image editor hybrids:

[Screenshot: GVim displaying xterm-linux.xpm, rendering the pixel data inline as an image]
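For reference, here is a minimal hand-written XPM (a made-up 4x4 checker). The same bytes are simultaneously a valid image file and a valid C array definition:

/* XPM */
static const char *checker_xpm[] = {
/* columns rows ncolors chars_per_pixel */
"4 4 2 1",
/* palette: one character per color */
". c #000000",
"# c #FFFFFF",
/* pixels, one character each */
"#..#",
".##.",
".##.",
"#..#"
};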




#5217043 is it possible to prevent a proggram(specialy a game) to get decompiled

Posted by Zlodo on 17 March 2015 - 04:37 AM

Doing things client side in an MMO is only "fundamentally wrong" if cheating is your only concern.

More realistically, it depends on many factors, including your business model. Having dumb servers and letting clients do most of the work can be quite justifiable for a non-subscription-based game, especially if the gameplay is computationally demanding, for instance because it involves a detailed physics model.

Making the game hard enough to hack can suffice. It's an engineering compromise like everything else.


#5216632 is it possible to prevent a proggram(specialy a game) to get decompiled

Posted by Zlodo on 15 March 2015 - 09:19 AM

A lot of people see this as an all-or-nothing issue, i.e. "it's always theoretically possible to defeat a client-side protection, so there's no use in doing it at all", but it really depends on the exposure of your game and what type of hackers you end up with.

 

On the game I work on, the only hackers we've had so far have limited abilities: they basically know how to poke half-blindly around memory with Cheat Engine, and that's it. So despite a lot of things being done client side and our game using peer-to-peer networking, we've had good success with relatively simple client-side protection schemes. Those could be defeated by good hackers, but the hackers who have been active in our game so far are mediocre.




#5215628 Questions about GPGPU

Posted by Zlodo on 10 March 2015 - 05:15 AM

In a similar vein to C++ AMP, there's SYCL, a Khronos standard to embed OpenCL compute code directly into C++ code.
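A minimal sketch of what that single-source style looks like (SYCL 1.2-era API; the kernel name and sizes are just illustrative):

#include <CL/sycl.hpp>
#include <vector>

int main()
{
    std::vector<float> data( 1024, 1.0f );
    {
        cl::sycl::queue q;  // default device selection
        cl::sycl::buffer<float, 1> buf( data.data(),
                                        cl::sycl::range<1>( data.size() ) );
        q.submit( [&]( cl::sycl::handler& cgh )
        {
            auto acc = buf.get_access<cl::sycl::access::mode::read_write>( cgh );
            // the kernel body is ordinary C++, compiled for the device
            cgh.parallel_for<class scale>( cl::sycl::range<1>( data.size() ),
                [=]( cl::sycl::id<1> i ) { acc[i] *= 2.0f; } );
        } );
    }  // buffer destruction waits for the kernel and copies results back
}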

It's also worth noting that OpenCL 2.1 adopted a subset of C++14 as its compute kernel language (basically all of C++ except what you'd expect not to be possible on a GPU).


#5212763 How to store data for reflection in preprocessor stage

Posted by Zlodo on 24 February 2015 - 01:37 PM

Olof Hedman has the right idea. C++ lets you approach reflection in a unique way: do (almost) everything at compile time.
 
I've experimented and played around a lot with that in a couple of pet projects, as well as in some production code at my previous company. The solution I ended up liking the most is to describe everything at compile time using template specialization, much like type traits. Judging from the various compile-time reflection proposals for the C++ standard that I've seen, other people have arrived at similar approaches, as they propose things that work in a broadly similar way.
 
I'm going to oversimplify this, but this approach is very extensible. I've used it to automatically generate language bindings completely at compilation time, to serialize graphs of objects, and a few other things.
 

Basically, I have a reflection::traits template that I specialize for every item I want my reflection system to know about. It is defined like so:

namespace reflection
{
    template< typename T > struct traits {};
}

I then have a specialization of it for each thing I want to describe, each containing static methods, typedefs, etc., depending on what kind of thing is being described.

 
For instance, if I have a class named Foo, I'll have a specialization of the reflection template that looks like this:
template<> struct reflection::traits< Foo >
{
    static const char* Name() { return "Foo"; }
};
 
At runtime, I can now get the name of class Foo by calling this: reflection::traits< Foo >::Name()
 
Of course, just getting the name isn't really useful. What I really want is to enumerate all the members. I do it using a compile-time visitor pattern. I guess it would be possible to use a more functional programming style, using tuples or something similar, but I find the visitor approach less daunting.
 
In my previous company we only used this to serialize things, so I was doing something like this to describe member variables:
 
template<> struct reflection::traits< Foo >
{
    static const char* Name() { return "Foo"; }

    // the visitor is told about each member variable in turn
    template< typename V > static void accept( V& visitor )
    {
        visitor.memberVar( "Blah", &Foo::m_Blah );
        visitor.memberVar( "Thing", &Foo::m_Thing );
    }
};

It was all done using a bunch of macros, so it looked something like this:

REFLECTION_START( Foo )
    CLASS_MEMBER_VAR( Blah )
    CLASS_MEMBER_VAR( Thing )
REFLECTION_END

The reflection::traits specialization had to be declared friend, so I had another macro for that; it is the only thing I need to insert into the definition of my classes. Other than that, this approach is non-intrusive: all the reflection machinery lives completely on the side.

 

It is possible to do much more complicated things, though: just create a dummy type for each member variable / property and create specific specializations of reflection::traits for those, where you can then put whatever you need, like pointers to setter/getter member functions.
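A hypothetical sketch of that idea, assuming Foo has GetBlah/SetBlah accessors:

// dummy tag type standing for the "Blah" property of Foo
struct Foo_Blah {};

template<> struct reflection::traits< Foo_Blah >
{
    using Class = Foo;
    using Type  = int;
    static const char* Name() { return "Blah"; }

    // whatever the consumers of the reflection data need -- here, accessors
    static constexpr Type (Class::*Getter)() const = &Foo::GetBlah;
    static constexpr void (Class::*Setter)( Type ) = &Foo::SetBlah;
};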

Likewise, macros are just one way to go about it. On my pet project I do more complicated things, so I have a generator that generates all those templates from descriptions written in a simple language (I just don't like to insert annotations in the C++ code itself; I think it's both more complicated and less neat).

 

Then I can, for instance, print the member variables of any class described using this system by doing something like this:
 
#include <iostream>

template< class C >
class PrintObjectVisitor
{
public:
    PrintObjectVisitor( const C& object, std::ostream& output ) :
        m_object( object ),
        m_output( output )
    {}

    template< typename T > void memberVar( const char* pName, T C::* mpVar )
    {
        m_output << "  " << pName << ": " << m_object.*mpVar << "\n";
    }

private:
    const C& m_object;
    std::ostream& m_output;
};


template< typename C >
void PrintObject( const C& obj )
{
    PrintObjectVisitor< C > v( obj, std::cout );
    reflection::traits< C >::accept( v );
}
The visitor can have methods besides "memberVar" to describe other aspects of the class, using template functions to pass along the required type (so that the visitor can then use reflection on that type in the same way, and so on). For instance, you could have a method that tells the visitor about the superclasses of the class; it can then recursively visit them to print their member variables too.
 
You can use this to attach reflection information and implement visitor patterns for things other than classes. For namespaces, I just create a dummy type in the namespace:

namespace Bar
{
    struct reflection_tag {};
}
Then specialize reflection::traits for "Bar::reflection_tag" to describe reflection information about that namespace, including a function that goes through all the classes and nested namespaces it contains, using a compile-time visitor like the one above.
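For instance (the classes Foo and Baz inside Bar are hypothetical, as is the visitClass method):

template<> struct reflection::traits< Bar::reflection_tag >
{
    static const char* Name() { return "Bar"; }

    // enumerate everything the namespace contains, at compile time
    template< typename V > static void accept( V& visitor )
    {
        visitor.template visitClass< Bar::Foo >();
        visitor.template visitClass< Bar::Baz >();
    }
};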
 
Likewise, I create dummy structs in the class reflection traits to identify methods, properties and so on, and then specialize the reflection::traits class for those to describe everything I need about them, depending on what I'm doing.
 
The nice thing is that for most things, you pay no runtime cost for all this. That PrintObject function above, for instance, compiles into completely straightforward code that just prints each variable of the object without performing any lookup through a container at runtime. Furthermore, you don't get a bunch of data you don't need compiled into your binaries. If you only need to serialize a bunch of data into a binary blob, you don't need the class and variable names, as long as you can identify the format version (I did that by using the same system to generate a CRC of all the class descriptions; it was enough for us, since we used this only for network communication, and it let us make sure that both the client and the server were able to serialize and deserialize the exact same types). By the way, with a modern C++ compiler, things like computing such CRCs could also be done completely at compile time using constexpr.
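As a sketch of that last point, here is a C++11 constexpr string hash (FNV-1a rather than a true CRC) that could serve as such a format tag:

#include <cstdint>

// recursion instead of a loop, to stay within C++11 constexpr rules
constexpr std::uint32_t fnv1a( const char* s, std::uint32_t h = 2166136261u )
{
    return *s == '\0'
        ? h
        : fnv1a( s + 1, ( h ^ static_cast<std::uint32_t>( *s ) ) * 16777619u );
}

// evaluated entirely at compile time; usable as a cheap format version
static_assert( fnv1a( "Foo" ) != fnv1a( "Bar" ), "distinct type tags" );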
 
Another plus of this method is that you don't need to add stuff to the classes themselves; it lives completely on the side. You can write macros to build all those template specializations; I did that at my previous company. However, in my personal projects I'm doing more sophisticated things with this approach than just serialization (like scripting language bindings), so I wrote a tool that generates them. I didn't want a Qt- or Unreal-like approach of inserting special annotations throughout my C++ classes; that's just a matter of taste, but I find it messy. Instead, I have interface descriptions living in their own files, written in a simple description language that resembles a very simplified subset of C++, where I describe namespaces, classes, methods and properties. A tool then spits out a bunch of header files containing all those reflection::traits specializations, and from that I can generate serialization, language bindings and such entirely through a bunch of templates.
 
It's also possible to use all this to build a more classical system that allows introspection at runtime, but I'd only use that for things like property editor UIs and the like.



#5176175 Game content source repository?

Posted by Zlodo on 26 August 2014 - 05:43 AM

We use Perforce here, on a large project with multiple teams around the world, to store both the code and the data.

Some things that make Perforce good for this are:

- the possibility to have different locking policies per file type (you want to allow multiple checkouts for source files, but exclusive checkout for binary data such as art assets)

- the possibility (again, per file type) to limit the number of revisions for which the actual data is kept. For instance, you can have it store only the last 5 revisions of PNG files and discard earlier ones. This is vital for very large projects that deal with a lot of data, to keep the size of the repository under control (both policies are configured via the typemap; see the sketch after this list)

- Perforce lets you set up proxy servers, and it works really well: the dozens or hundreds of people working at each studio talk only to their local proxy, which in turn incrementally synchronizes itself with the central repository. This way, the (large) data being committed elsewhere in the world is downloaded only once by your local proxy, and everyone then gets it on their PC through the LAN. Imagine if a team of 100 people had to download the same latest art assets directly over the internet...
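Both per-file-type policies above live in the typemap. A hypothetical excerpt (+l forces exclusive checkout; +S5 keeps only the five most recent revisions):

# p4 typemap (hypothetical excerpt)
TypeMap:
    text        //depot/code/....cpp
    text        //depot/code/....h
    binary+lS5  //depot/art/....png
    binary+lS5  //depot/art/....psd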

Despite all this, it is very responsive in practice: someone on the other side of the world pushes a commit through their local proxy and you see it almost immediately. Of course, when large operations are underway, such as branching or big commits, it tends to create some slowdowns, but nothing really crippling.


#5175441 So... C++14 is done :O

Posted by Zlodo on 22 August 2014 - 04:40 AM

> My feeling is that if the committee can't write clean, efficient, proper C++ code, how the hell are we supposed to? If the committee can't write a simple vector implementation, then they need to get back to the drawing board and figure out why they cannot, rather than just pass the buck to the vendors. I mean, the whole reason Boost in its current form exists (and I love Boost) is simply that it's the standard library that C++ needs.


Most of the people who work on compiler implementations and their corresponding standard library implementations are actually part of the standards committee, and they work on implementations of proposals while those are still in discussion, to help nail down design issues.

The point of standards is that multiple people can independently implement them; a reference implementation would defeat the purpose. If everyone used the same implementation, every quirk and defect of that implementation would de facto become part of the standard, as people would unwittingly end up writing code that relies on them.


#4997625 How were Commodore 64 games developed?

Posted by Zlodo on 05 November 2012 - 10:58 AM

> I reckon that in the first few years of the C64's life, games were written on the C64 itself, but in its later years they were written on a more powerful machine such as the Commodore Amiga. Just a guess though...

I don't know if it happened on machines such as the C64, but I do know that at some point Amiga games were developed using cross-assemblers and debuggers running on Intel PCs.

I know that Reflections did this at least on Shadow of the Beast II and later games (they used to have interesting blurbs in the documentation of their games describing the development process, and it was mentioned there, IIRC).
Factor 5 had even developed their own Intel PC-based toolset called "Pegasus" that they used for all their Amiga and console games (probably even for their Atari ports too). I read this in an interview somewhere.

Nowadays it doesn't make a lot of sense to use another PC to develop a PC game, but in those days it probably made a lot of sense for professional developers to turn to that kind of solution. Machines had small amounts of memory, which made it hard for a game and the development tools to coexist, and even though some OSes like the Amiga's had preemptive multitasking, they didn't provide any kind of memory protection or process isolation. Any unfortunate write through a bad pointer could bring the entire system down (or worse, result in filesystem or text editor buffer corruption, all kinds of fun things).

Also, since most games just clobbered the entire hardware, memory and interrupt handlers (because using the OS induced too much overhead, and the hardware was fixed anyway), it was likely much easier to use a remote debugger running on the Intel PC than to hack the game into coexisting peacefully with the OS during development.

As an example of the kind of thing that could happen in those days when developing directly on the actual target machine: the first game that Reflections developed on the Amiga was Ballistix. I can't remember how I managed to come across this in the first place, but actual portions of the game's assembler source code ended up lying around on some unused sectors of the floppy disk. Evidently that game wasn't yet developed using a separate PC...


#4995971 Is it possible to draw Bezier curves without using line segments?

Posted by Zlodo on 31 October 2012 - 04:33 PM

> Windows 7 performs font rasterization using the GPU directly. This does exactly what you want: drawing triangles and shading each pixel according to an implicit equation derived from a Bézier curve. No slow and messy subdividing and drawing lines, etc.
>
> You can find a detailed description of how they do it in "Rendering Vector Art on the GPU" by Charles Loop and Jim Blinn, which is free online: http://http.develope...gems3_ch25.html

That remains quite complex, though. For one thing, for cubic Béziers it's much, much easier to recursively split them into quadratics than to analyze them to find out whether they're looping or whatnot, as the paper's rasterization requires. Quadratics are much easier to work with, and approximating a cubic Bézier with one or more quadratics is rather easy and doesn't really produce any noticeable curvature error, even when zooming in (while keeping the advantage of pixel-perfect curve rendering at any zoom level).
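A sketch of one common way to do that split (using the (3(c1+c2) - p0 - p3)/4 control-point approximation; the helper names are made up):

#include <array>
#include <vector>

struct Vec2 { float x, y; };

static Vec2 mid( Vec2 a, Vec2 b )
{
    return { ( a.x + b.x ) * 0.5f, ( a.y + b.y ) * 0.5f };
}

// control point of the single quadratic closest to the cubic (p0, c1, c2, p3)
static Vec2 quadControl( Vec2 p0, Vec2 c1, Vec2 c2, Vec2 p3 )
{
    return { ( 3 * ( c1.x + c2.x ) - p0.x - p3.x ) * 0.25f,
             ( 3 * ( c1.y + c2.y ) - p0.y - p3.y ) * 0.25f };
}

// emit quadratics (start, control, end) approximating the cubic within tol
void cubicToQuads( Vec2 p0, Vec2 c1, Vec2 c2, Vec2 p3, float tol,
                   std::vector< std::array<Vec2, 3> >& out, int depth = 0 )
{
    // error estimate: distance between the two one-sided control points
    const float dx = 1.5f * ( c1.x - c2.x ) - 0.5f * ( p0.x - p3.x );
    const float dy = 1.5f * ( c1.y - c2.y ) - 0.5f * ( p0.y - p3.y );
    if ( depth >= 8 || dx * dx + dy * dy <= tol * tol )
    {
        out.push_back( { p0, quadControl( p0, c1, c2, p3 ), p3 } );
        return;
    }
    // de Casteljau split at t = 0.5, then recurse on both halves
    const Vec2 m01 = mid( p0, c1 ), m12 = mid( c1, c2 ), m23 = mid( c2, p3 );
    const Vec2 a = mid( m01, m12 ), b = mid( m12, m23 ), m = mid( a, b );
    cubicToQuads( p0, m01, a, m, tol, out, depth + 1 );
    cubicToQuads( m, b, m23, p3, tol, out, depth + 1 );
}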
Another thing is that you need to find overlapping "quadratic triangles" and subdivide one of them. And of course you need to perform tessellation of the whole thing too, but that's not really specific to using the GPU to rasterize curves.

A much easier approach to implement is to use the stencil buffer and skip tessellation altogether. This also avoids the need to subdivide the "quadratic curve triangles". The downside is that each shape you render pretty much needs exclusive access to the stencil buffer, followed by rendering a bounding rectangle to fill the actual inside of the shape using the stencil, so you pretty much need two render calls for each independent shape you want to render (and if you do stroking, that's a second shape).
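Roughly, in classic OpenGL terms (the two draw helpers are hypothetical):

#include <GL/gl.h>

void drawShapeTriangleFan();  // hypothetical: fan over the outline + curve triangles
void drawBoundingQuad();      // hypothetical: rectangle covering the shape

void fillShapeViaStencil()
{
    // pass 1: accumulate winding parity in the stencil buffer, color writes off
    glEnable( GL_STENCIL_TEST );
    glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE );
    glStencilFunc( GL_ALWAYS, 0, 1 );
    glStencilOp( GL_KEEP, GL_KEEP, GL_INVERT );  // flip parity per covered pixel
    drawShapeTriangleFan();

    // pass 2: fill the bounding rectangle wherever the parity is odd
    glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
    glStencilFunc( GL_EQUAL, 1, 1 );
    glStencilOp( GL_KEEP, GL_KEEP, GL_ZERO );  // also clears stencil for the next shape
    drawBoundingQuad();
    glDisable( GL_STENCIL_TEST );
}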

And then there's this approach, although I suspect it's quite heavy on fill rate (plus things like gradients seem like they would be rather complex to do):
http://ivanleben.blo...r-graphics.html

Otherwise, there is an algorithm similar to Bresenham's but for Bézier curves, which I guess is more what the OP was looking for:
http://smartech.gate...art1?sequence=1


#4945248 Why is C++ the industry standard?

Posted by Zlodo on 01 June 2012 - 02:37 AM

> C++'s meta-language just sucks; I disagree that it improves the programmer's productivity at all.

C++ template programming has a somewhat steep learning curve, but once you become productive with it, it is very good. It lets you quickly build powerful abstractions that have no runtime cost compared to a more classical (and more time-consuming) implementation. So yes, it does improve programmer productivity significantly.
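A trivial illustration of the zero-cost point (hypothetical, but representative):

#include <cstddef>

template< typename T, std::size_t N >
struct Vec
{
    T v[N];
};

template< typename T, std::size_t N >
T dot( const Vec<T, N>& a, const Vec<T, N>& b )
{
    T sum = T( 0 );
    for ( std::size_t i = 0; i < N; ++i )  // trip count known at compile time
        sum += a.v[i] * b.v[i];
    return sum;
}

// dot() works for any element type and dimension, yet for Vec<float, 3>
// the compiler unrolls and inlines it into the three multiplies and two
// adds you would have written by hand -- the abstraction costs nothing.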

C++11 has also improved template programming significantly when it comes to ease of use and readability.



