
Zlodo

Member Since 05 Jan 2009
Offline Last Active Sep 26 2014 01:52 AM

#5175441 So... C++14 is done :O

Posted by Zlodo on 22 August 2014 - 04:40 AM

My feeling is that if the committee can't write clean, efficient, proper C++ code, how the hell are we supposed to be able to?  If the committee can't write a simple vector implementation, then they need to get back to the drawing board and figure out why they cannot, rather than just pass the buck onto the vendors. I mean the whole reason Boost in its current form exists (and I love Boost) is simply because it's the standard library that C++ needs.


Most of the people who work on compiler implementations and their corresponding STL implementations are part of the standards committee, and they work on implementations of proposals while those are still under discussion, to help nail down design issues.

The point of standards is that multiple people can independently implement them. A reference implementation would defeat the purpose.
If everyone used the same implementation, every quirk and defect of said implementation would de facto become part of the standard, as people would end up writing code unwittingly relying on them.


#4997625 How were Commodore 64 games developed?

Posted by Zlodo on 05 November 2012 - 10:58 AM

I reckon that in the first few years of the C64's life games were written on the C64 itself, but in its later years they were written on a more powerful machine such as the Commodore Amiga. Just a guess though...

I don't know if it happened on machines such as the C64, but I do know that at some point Amiga games were developed using cross assemblers and debuggers running on Intel PCs.

I know that Reflections did this at least on Shadow of the Beast II and later games (they used to include interesting blurbs about the development process in the documentation of their games, and it was mentioned there, iirc).
Factor 5 had even developed their own Intel PC-based toolset called "Pegasus" that they used for all their Amiga and console games (probably even for their Atari ports too); I read about this in an interview somewhere.

Nowadays it doesn't make a lot of sense to use another PC to develop a PC game, but in those days it probably made a lot of sense for professional developers to turn to that kind of solution. Machines had small amounts of memory, which made it hard for a game and the development tools to coexist, and even though some OSes like the Amiga's had preemptive multitasking, they didn't provide any kind of memory protection or process isolation. That meant any unfortunate write through a bad pointer could bring the entire system down (or worse, result in filesystem or text editor buffer corruption, all kinds of fun things).

Also, since most games just clobbered the entire hardware, memory and interrupt handlers (because using the OS induced too much overhead, and the hardware was fixed anyway), it was likely much easier to use remote debuggers running on the Intel PC than to rig up hacks to let the game coexist peacefully with the OS during development.

As an example of the kind of thing that could happen in those days when developing directly on the actual target machine: the first game that Reflections developed on the Amiga was Ballistix. I can't remember how the hell I came across this in the first place, but actual portions of the game's assembler source code ended up lying around on some unused sectors of the floppy disk. Evidently that game wasn't yet being developed on a separate PC...


#4995971 Is it possible to draw Bezier curves without using line segments?

Posted by Zlodo on 31 October 2012 - 04:33 PM

Windows 7 performs font rasterization directly on the GPU.
This does exactly what you want: drawing triangles and shading each pixel according to an implicit equation derived from a Bezier curve.
No slow and messy subdividing and drawing lines, etc.


You can get a detailed description of how they do it in "Rendering Vector Art on the GPU" by Charles Loop and Jim Blinn.
It's free online: http://http.develope...gems3_ch25.html

That remains quite complex, though. For one, with cubic Beziers it's much, much easier to recursively split them into quadratics than to analyze them to find out whether they're self-intersecting or whatnot so they can be rasterized as in the paper. Quadratics are much easier to work with, and approximating a cubic Bezier with one or more quadratics is rather easy and doesn't really result in any noticeable curvature error, even when zooming in (while keeping the advantage of pixel-perfect curve rendering at any zoom level).
Another thing is that you need to find overlapping "quadratic triangles" and subdivide one of them. Plus, of course, you need to tessellate the whole thing too, but that's not really specific to using the GPU to rasterize curves.
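For reference, the quadratic case from the paper boils down to a one-expression test per pixel. A minimal CPU-side sketch, assuming the (u,v) coordinates are assigned to the three control points the way Loop and Blinn do ((0,0), (0.5,0), (1,1)) and interpolated across the triangle:

// Loop-Blinn inside test for one quadratic bezier "curve triangle".
// Under that uv assignment the curve maps to the implicit equation
// u^2 - v = 0, so the sign of u^2 - v tells which side a pixel is on.
bool insideQuadratic( float u, float v )
{
    return u * u - v <= 0.0f; // <= 0: filled side, > 0: outside
}

In a fragment shader the same expression drives a discard (or the output alpha) instead of returning a bool.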

A much easier approach to implement is to use the stencil buffer instead of doing tessellation at all. This also avoids the need to subdivide the "quadratic curve triangles". The downside is that each shape you render pretty much needs exclusive access to the stencil buffer, followed by a bounding rectangle pass that fills the actual inside of the shape using the stencil, so you need two draw calls for each independent shape you want to render (and if you do stroking, that's a second shape).
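Here's a minimal sketch of that two-pass stencil technique in old-school OpenGL; the two draw helpers are hypothetical, and the curve triangles would additionally run the quadratic inside test from above to discard pixels outside the curve:

// Pass 1: "stencil" - draw a triangle fan over the shape with INVERT,
// so pixels covered an odd number of times end up with stencil == 1.
glEnable( GL_STENCIL_TEST );
glColorMask( GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE ); // no color writes yet
glStencilFunc( GL_ALWAYS, 0, ~0u );
glStencilOp( GL_KEEP, GL_KEEP, GL_INVERT );
drawShapeTriangles( shape );   // hypothetical: fan + curve triangles

// Pass 2: "cover" - fill a bounding rectangle where the stencil is odd,
// zeroing the stencil as we go so it's clean for the next shape.
glColorMask( GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE );
glStencilFunc( GL_EQUAL, 1, 0x1 );
glStencilOp( GL_KEEP, GL_KEEP, GL_ZERO );
drawBoundingQuad( shape );     // hypothetical helper
glDisable( GL_STENCIL_TEST );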

And then there's this, although I suspect it's quite heavy on fill rate (plus things like gradients seem like they would be rather complex to do):
http://ivanleben.blo...r-graphics.html

Otherwise, there is an algorithm similar to Bresenham's but for Bezier curves, which I guess is closer to what the OP was looking for:
http://smartech.gate...art1?sequence=1


#4945248 Why is C++ the industry standard?

Posted by Zlodo on 01 June 2012 - 02:37 AM

C++'s meta-language just sucks; I disagree that it improves the programmer's productivity at all.

C++ template programming has a somewhat steep learning curve, but once you start being productive with it, it is very good. It lets you build powerful abstractions rather quickly that have no runtime cost compared to a more classical (and time-consuming) implementation. So yes, it does improve programmer productivity significantly.

C++11 has also improved template programming significantly when it comes to ease of use and readability.
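As one small illustration (my own sketch, not from the thread): variadic templates replace the old fixed-arity overload boilerplate, and everything still melts away at compile time:

// C++03 needed one overload per arity (or preprocessor tricks).
// C++11 expresses any arity in two overloads, with zero runtime cost:
template< typename T >
T sum( T v ) { return v; }

template< typename T, typename... Rest >
T sum( T first, Rest... rest ) { return first + sum( rest... ); }

// sum( 1, 2, 3, 4 ) inlines down to the same code as 1 + 2 + 3 + 4.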


#4942837 Why is C++ the industry standard?

Posted by Zlodo on 24 May 2012 - 02:35 AM

Most popular languages these days insist on running in a VM, even though the benefits for video games are questionable compared to the runtime cost.

Few languages are designed to be compiled natively while still offering a diverse set of high-level paradigms such as OO programming, metaprogramming and functional programming. Basically, only C++ and D fit this category, and only one of the two is a mature language with a proven design and a huge ecosystem of tools, compilers, libraries, and frameworks.

Template metaprogramming in particular allows C++ to combine very good runtime performance with very good abstraction capabilities. There's pretty much no other language that has both of those things along with a big ecosystem.


#4942494 C++ compile-time template arg name?

Posted by Zlodo on 23 May 2012 - 04:28 AM

Unfortunately I don't think you can do anything fancy in a static_assert.

It all comes down to the compiler; clang, for instance, will usually print out enough information on a static_assert failure to tell which template is being instantiated and with which types.

Edit: I just checked the standard; the second parameter of static_assert is a string-literal. So it can only ever be a plain string, and the only way you could concatenate anything into it would be with the preprocessor, so you don't have a lot of possibilities.
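So in practice you write a plain literal and lean on the compiler's diagnostics for the details. A minimal sketch (the names are made up):

#include <type_traits>

template< typename T >
struct Pool
{
    // The message must be a string literal, but the compiler's own
    // diagnostic supplies the context: clang, for instance, prints
    // the instantiation chain, which names the offending T for you.
    static_assert( std::is_trivially_destructible<T>::value,
                   "Pool<T> requires a trivially destructible T" );
};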


#4942167 OpenGL 4.2 headers, Mesa 3D, extensions - how does it fit together?

Posted by Zlodo on 22 May 2012 - 05:31 AM

On Windows, the extension loading API basically lets you request function pointers to the various OpenGL core and extension functions by name.

So in that case GLEW just defines a bunch of function pointers corresponding to them; upon initialization it uses the extension loading API to find out what's there, then fills out all those function pointers. I'm skimming over the details because I don't know them beyond that.

I know that GLEW uses auto-generated code to do all that, using information parsed from the spec files found at http://www.opengl.org/registry/
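For a single entry point, what that generated code boils down to on Windows looks roughly like this (a sketch; the real GLEW adds error handling and covers every function in the registry):

#include <windows.h>
#include <GL/gl.h>
#include <GL/glext.h> // PFNGL...PROC typedefs, generated from the spec files

// the function pointer GLEW would otherwise define for you:
PFNGLGENBUFFERSPROC glGenBuffers = 0;

void loadEntryPoints()
{
    // only valid with a current GL context; returns null if the
    // driver doesn't expose the function
    glGenBuffers = (PFNGLGENBUFFERSPROC)wglGetProcAddress( "glGenBuffers" );
}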

On Linux I'm pretty sure Mesa is not used at all if you use the proprietary AMD/Nvidia drivers. Distros usually have some mechanism in place that allows those packages to provide their own version of libGL.so; usually the one found in /lib is a symbolic link to the one provided by ATI/Nvidia, so they basically provide the entire OpenGL stack themselves.

As for the GL headers, I don't think there is one single source for them. I'm pretty sure that every OpenGL implementation around comes with its own headers. For ATI/Nvidia they're probably in the respective SDKs.


#4942152 Broken Game as an Anti-Piracy Measure

Posted by Zlodo on 22 May 2012 - 04:35 AM

In addition to the above, I'm not sure that intentionally letting your game seem buggy is a very useful method of retaliation against pirates.

The video game industry doesn't have a track record of releasing very stable games, especially on PC, and someone who pirates your game and finds themselves unable to do something they're supposed to be able to do is just going to assume your game is yet another buggy piece of crap. If anything, it might vindicate their decision to pirate it instead of purchasing it.


#4941898 Interpreting ASM of "a <cross> b" against "a.cross(b)...

Posted by Zlodo on 21 May 2012 - 06:54 AM

I think what he means by "no cmp" is that his switch statement is nowhere to be found in the generated assembly.

fastcall, I think your interpretation is correct. The code in each branch of the switch was probably exactly the same, so the branches were merged, which left the switch with all possible cases pointing to the same code and no default (hinted as unreachable with __assume(0)), so it was removed altogether.

And indeed, I would not expect the three ways of defining that cross operator to result in different code. In all three cases the compiler calls (and inlines) the same function, regardless of the specific syntax you use.

Another test you might want to do, if you want to double-check, is to remove the switch altogether and make three versions of your source: one with r = (p1 <dot> p2);, one with r = (p1 DOT_2 p2);, and one with r = p1.dot(p2);. Compile all three and compare the generated assembly, which should be the same.
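For what it's worth, an infix <dot> like that is usually built with the "named operator" idiom; here's a sketch of how it might be defined (I'm assuming the thread's macros work this way, I haven't seen them):

struct Vec3 { float x, y, z; };

// "p1 <dot> p2" parses as "(p1 < dot) > p2":
struct DotTag {};
const DotTag dot = {};              // the tag object the macro names

struct DotLhs { const Vec3& lhs; }; // captures the left operand

inline DotLhs operator<( const Vec3& lhs, DotTag )
{
    DotLhs r = { lhs };
    return r;
}

inline float operator>( const DotLhs& l, const Vec3& rhs )
{
    return l.lhs.x * rhs.x + l.lhs.y * rhs.y + l.lhs.z * rhs.z;
}

// both operators inline, so the optimizer sees the same multiply-adds
// as for a plain p1.dot( p2 ) member function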


#4892200 Cutscene system ruled just by time?

Posted by Zlodo on 09 December 2011 - 08:47 AM

Hi!

Well, it is true that I haven't laid out the requirements for my game here. The question was a bit vague and too general.

My game is a 2D platform game and doesn't need too many bells & whistles as far as cutscene capabilities go. I will be happy with a bunch of actions like walk from here to there, play an animation for a game object, create a game object at a position, say text, and little more. As I have said, I don't need a fully featured system. The problem with my system is that I need a class per action in order to be able to add it to the timeline; it is a bit bloated for me to have to wrap everything that can be added to the cutscene system into an action class. I don't really know if Lua with its coroutines could help with this. Any tips?

Lua coroutines can certainly help a lot.

I had to make a cutscene system once for a Nintendo DS game, and since there was only one guy on the team available to actually build the cutscenes, and less than a month to do it, the system had to be super simple both to implement and to use.
So I used Lua, made it so that each cutscene was one Lua script automatically wrapped in a coroutine, and provided a bunch of functions that would either install a behavior on an object that persisted until removed or changed (such as camera.lookat( actor )), or perform a long-running action (such as actor.followpath( somepathobject, duration )) and wait until it was done before continuing the script. This was achieved by having the function set up some state or controller that actually performed the action and then yield to suspend the coroutine, which was resumed once the action was complete.
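On the C++ side, the engine glue for such a long-running action can be tiny. A sketch against the plain Lua C API (Lua 5.1 signatures; the controller plumbing is hypothetical):

#include <lua.hpp>

// script-visible function behind actor.followpath( path, duration ):
// kick off a controller that moves the actor, then suspend the script
static int l_followpath( lua_State* L )
{
    // ... read the actor/path/duration arguments, start the controller,
    // and remember L so the engine can wake this coroutine up later ...
    return lua_yield( L, 0 );
}

// called by the engine once the controller reports completion
void resumeCutscene( lua_State* co )
{
    lua_resume( co, 0 ); // continues the script right after the yield
}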

To allow simultaneous actions, there were functions to group actions together into blocks. There was "beginblock_any()", which meant "beginning of a group of actions that ends when any of them ends", and "beginblock_all()", which was the same but ended once all of the actions were completed. A call to another function, "endblock()", would mark the end of the block and actually perform the yield to hand the execution flow back to the engine.

The block system turned out pretty useful for things like partial skipping of cutscenes: the scripter could start a long-running action in a block together with a waitbuttonPress() call, and if the player pressed the button before the end of the long-running action, it would jump to the next part of the script without waiting.

This system made it possible to construct cutscenes with very simple scripts describing a sequence of actions in a straightforward way, which I suppose is much the same as with the system you described that built an action tree.

But with the coroutine approach you don't have to build such a tree, so it's probably more straightforward to implement. And since you can use Lua control structures right inside your cutscene script, you can easily do more advanced stuff such as branching cutscenes (doing different things depending on the outcome of a dialog) or QTEs (press X to not die). For instance, we were able to loop the cutscene that served as the game's title screen just by putting its script into a normal Lua loop.


#4838433 best c++ foreach macro?

Posted by Zlodo on 21 July 2011 - 07:29 AM

Not necessarily. GCC supported auto as early as 4.4, but didn't support range-based for until 4.6

Which is why I said "auto AND lambdas": my reasoning is that lambdas are more complex to implement than the foreach construct, so it's likely that by the time a compiler gets around to implementing lambdas, it already supports the foreach construct (which is the case with clang, for instance).


#4838351 best c++ foreach macro?

Posted by Zlodo on 21 July 2011 - 02:40 AM

Use of "auto"? Use lambdas:


If you have a compiler whose C++11 support is far enough along to include auto and lambdas, chances are that it supports C++11's foreach syntax anyway:

for( auto& val : container )
{
}

GCC 4.6 supports this; I'm not sure about Visual Studio 2010. It was just added to clang too, iirc.


#4787604 catting requests and parsing results, are you kidding me?

Posted by Zlodo on 18 March 2011 - 12:28 PM

Warning, this is mostly rant, but I want to know if others feel the same way:

To parse XML in C++ using a common library, such as QtXmlQuery, at runtime you concatenate a bunch of strings together to form an XPath for what you want and send in the strings; the library parses out a set of instructions and returns the result, usually as a string, which your program must then parse.

Want a file format experience better than serializing your data into strings and writing a text file? Try SQLite. Your C program just cats together a string full of SQL commands, which SQLite parses and runs. Really? Doesn't "SELECT FOO WHERE X=" << x << " AND Z =" << z drive you nuts? You could use '?' and bind a variable, but that's almost as bad. You are now serializing your data, intermixed with SQL keywords.


That's using SQLite (or any other SQL database engine, for that matter) very poorly. The right way is to precompile the SQL statements you're going to use up front, then use variable bindings.
This way you don't parse the SQL multiple times, you don't write integers and such to strings that subsequently get reparsed to ints, and, more importantly, you don't run the query planner multiple times. In the context of a web app, you also avoid being vulnerable to SQL injection.
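A minimal sketch of that with the SQLite C API (error checking omitted; the table and column names are made up, and db is an already-open sqlite3*):

#include <sqlite3.h>

// compile once, up front:
sqlite3_stmt* stmt = 0;
sqlite3_prepare_v2( db, "SELECT foo FROM t WHERE x = ? AND z = ?;",
                    -1, &stmt, 0 );

// per query: bind values directly - no string formatting, no reparsing
sqlite3_bind_int( stmt, 1, x );
sqlite3_bind_int( stmt, 2, z );
while( sqlite3_step( stmt ) == SQLITE_ROW )
{
    int foo = sqlite3_column_int( stmt, 0 );
    // ... use foo ...
}
sqlite3_reset( stmt ); // keep the compiled statement around for reuse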

In SQLite's case, a statement is turned, after parsing, into bytecode that implements the query (it runs in a specialized VM that provides operations such as index accesses, loops, and column/row value retrieval), and the values of bound variables are accessed directly by the bytecode (so they are not converted to strings and back).

I am no fan of XML in any way, but I suppose any decent XPath engine would let you pre-parse XPath queries as well.

