
Matias Goldberg

Member Since 02 Jul 2006
Offline Last Active Today, 12:06 AM

#5150238 Do modern pc hardware based arcade cabinet games run on an os?

Posted by Matias Goldberg on 28 April 2014 - 09:09 PM

So, if it is really like developers building their own gaming computer (Buying and putting together their selected video card, motherboard, processor etc) then putting it inside of a cabinet connecting to arcade parts like the joysticks and running their own arcade game off of the hard drive...
Does that mean the games in those modern arcade cabinets are programs running on Windows? Or another operating system like Linux, or maybe the developer's own Linux distro?

Yes. There's a whole industry behind it. The Taito Type X2 is widely used today (BlazBlue fighting series comes to mind) and runs on Windows XP Embedded.


Basically what devs gain from this is a friendly, known environment (i.e. x86 processor, Windows XP, DirectX, Visual Studio) and stable, known hardware specs (like in console development), plus a few customizations. Most notably, the hard drive is encrypted and all data in and out is decrypted/encrypted on the fly, to prevent people from simply cloning the drive and distributing the game on pirate sites.

#5150158 SHA-1 Collisions

Posted by Matias Goldberg on 28 April 2014 - 01:12 PM

Assuming it's for non-cryptographic use, it's perfectly fine.

One good practice is, in Debug builds, to store the original string alongside its hash, and assert when comparing two strings that are binary-different yet collide.
When you hit the assert, all you need to do is change a single letter of the string (or add a space, etc.). Not a big deal in game development.
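A minimal sketch of that practice, with hypothetical names (`HashedString`, FNV-1a standing in for whatever hash the engine actually uses): the original text is kept only in debug builds, so a real collision trips an assert the moment two colliding-but-different strings are compared.

```cpp
#include <cassert>
#include <cstdint>
#include <string>

// Simple FNV-1a 32-bit hash, standing in for the engine's real hash function.
static uint32_t hashString(const std::string &s) {
    uint32_t h = 2166136261u;
    for (unsigned char c : s) { h ^= c; h *= 16777619u; }
    return h;
}

struct HashedString {
    uint32_t hash;
#ifndef NDEBUG
    std::string original;  // kept only in Debug builds
#endif
    explicit HashedString(const std::string &s)
        : hash(hashString(s))
#ifndef NDEBUG
        , original(s)
#endif
    {}

    bool operator==(const HashedString &other) const {
#ifndef NDEBUG
        // Same hash but different text = a genuine collision.
        // Fix it by renaming one of the strings slightly.
        assert(hash != other.hash || original == other.original);
#endif
        return hash == other.hash;
    }
};
```

In Release builds the struct collapses to a plain 32-bit hash, so the collision check costs nothing in shipping code.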

#5149724 MipMaps as relevant as before?

Posted by Matias Goldberg on 26 April 2014 - 04:55 PM

Are MipMaps relevant for games nowadays?

Mipmaps reduce bandwidth. Bandwidth is a very precious, limited resource; hence they're very relevant.

Anisotropic filtering seems something computable in real time.

If that's the only thing you're doing, sure. If you've got to handle thousands of lights, eight 2048x2048 shadow maps, HDR, SSAO, other postprocessing effects, global illumination, multiple simultaneous textures (diffuse, detail, specular, gloss, normal maps, parallax maps) and MSAA, then no: you're needlessly wasting computing time on something that can be preprocessed/baked and has no need to move to real time (unlike, e.g., spherical harmonic lighting, which when preprocessed means you can no longer move objects or light sources).

#5148785 How to debug a game more efficiently?

Posted by Matias Goldberg on 22 April 2014 - 12:12 PM

From the beginning of designing my game engine, I decided it would work deterministically. This approach works best when coded that way from the start, but it can still be implemented later in the process:

Because the game is deterministic, the engine journals all sources of nondeterminism (player keystrokes, random number generator seeds, etc.) and saves them to a file. The file is a mere few KB even after an hour of playing. Its size depends on the number of nondeterminism sources you need to track and the framerate you're running at (e.g. simulating at 60 Hz needs double the space of a 30 Hz simulation).


When the game is closed or crashes (I use SEH to catch the crash and save the journal), I can later replay the whole session. I can even fast-forward, though how fast depends on how powerful my system is, as it must simulate all frames up to the crash without rendering or waiting for vsync.
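The journal described above can be sketched roughly like this (all names hypothetical; a real engine would also serialize this to disk and track every other nondeterminism source it has): record the RNG seed once, then every input event tagged with its simulation frame, and feed exactly those events back per frame during replay.

```cpp
#include <cstdint>
#include <vector>

struct InputEvent {
    uint64_t frame;   // simulation frame the event arrived on
    int      key;     // key code (or any other tracked nondeterminism source)
    bool     pressed;
};

struct Journal {
    uint32_t rngSeed = 0;            // recorded once at startup
    std::vector<InputEvent> events;  // recorded as the session runs

    void record(uint64_t frame, int key, bool pressed) {
        events.push_back({frame, key, pressed});
    }
};

// During replay, feed back exactly the events recorded for each frame.
// All other simulation state is re-derived deterministically from these.
std::vector<InputEvent> eventsForFrame(const Journal &j, uint64_t frame) {
    std::vector<InputEvent> out;
    for (const InputEvent &e : j.events)
        if (e.frame == frame) out.push_back(e);
    return out;
}
```

Note why the file stays tiny: only the seeds and events are stored, never the simulation state itself, since the state can always be recomputed from them.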


This has proven to be an invaluable tool. I've caught so many crashes in no time (especially the hard-to-reproduce ones), experimented freely, and even watched through the debugger how a variable's value evolves frame by frame.


There are a few caveats:

  1. A few rare bugs may corrupt the journal, but when that happens it's usually blatantly obvious.
  2. If the bug is caused by an untracked source of nondeterminism (an uninitialized variable is the most common one), the replay will differ on every run. However, you can still use it to figure out at which moment your simulation starts to diverge and nail down the source of nondeterminism. You can also start removing stuff until the replay is simple enough.
  3. Sometimes the SEH handler fails to save the journal, and the replay session is lost.
  4. Changing code to fix one or more bugs may cause your replay session to diverge before the crash, so it's no longer useful for checking whether the fix actually works.


I can't tell you how to write your journal, as it is specific to each game engine. However, here are a few starting points:


http://gafferongames.com/networking-for-game-programmers/debugging-multiplayer-games/ -> keeps a journal for debugging networked games, but it's basically the same thing for single player, except you're journaling keystrokes rather than network packets.

#5148187 References or Pointers. Which syntax do you prefer?

Posted by Matias Goldberg on 19 April 2014 - 12:00 PM


  • Foo *myFoo. This could be an output variable. Or could be that you need to write to raw memory (i.e. gpu pointer) and/or do some pointer addressing math. Could be null (you may not be always expected to modify it). Could also be input. The most ambiguous of all.


So it is required to have write access to a GPU pointer?


? I did not understand the question.

Pointers can map to almost anything through virtual addressing. Normally it is mapped to system RAM; but a pointer can also use an address mapped to physical hardware in order to communicate with it (a GPU, a sound card, an ethernet card, etc).

There are no requirements for "Foo *myFoo"; and that's the thing. Foo *myFoo could be used for almost anything (input, output, putting stuff in RAM, self-modifying instructions, talking to hardware-mapped devices, etc.), and you have to rely on the documentation to tell you what the pointer is going to be used for. If you can use one of the other three variants, prefer those first.

#5147973 References or Pointers. Which syntax do you prefer?

Posted by Matias Goldberg on 18 April 2014 - 01:35 PM

It's not just personal taste. Reference & Pointers tell you about the code.

  • const Foo &myFoo means this is an input variable, which cannot be null. Probably it's a large struct and we're using a reference to avoid passing by value. If the reference is actually null, something really bad happened before entering the function. GDB & MSVC both support showing the address (just print &myFoo).
  • const Foo *myFoo means this is an input variable, but the pointer may be null. i.e. optional. Check the documentation to see if you can assume it cannot.
  • Foo &myFoo. This is an output variable. You're expected to modify it. Could also be input, but this is discouraged for many reasons.
  • Foo *myFoo. This could be an output variable. Or could be that you need to write to raw memory (i.e. gpu pointer) and/or do some pointer addressing math. Could be null (you may not be always expected to modify it). Could also be input. The most ambiguous of all.

Additionally, pointers can have a few more qualifiers that sadly references do not, such as __restrict, which is very powerful in code optimization.
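The four conventions above can be illustrated side by side (Foo and the function names are hypothetical):

```cpp
struct Foo { int value; };

// const Foo & : input, cannot be null.
int readOnly(const Foo &in) { return in.value; }

// const Foo * : input, may be null (optional argument).
int readOptional(const Foo *in) { return in ? in->value : 0; }

// Foo & : output, cannot be null; the caller expects it to be modified.
void writeTo(Foo &out) { out.value = 42; }

// Foo * : the ambiguous one; may be null, may or may not be written to.
void maybeWriteTo(Foo *out) { if (out) out->value = 42; }
```

Reading the signatures alone already tells the caller which arguments are optional and which will be modified, which is exactly the point being made.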

#5145499 where to start Physical based shading ?

Posted by Matias Goldberg on 08 April 2014 - 04:52 PM


I found this article useful, includes source code:

This is interesting! I've been looking a bit at that Cook-Torrance link, but from what I understand physical based shading is supposed to be "normalized", e.g. the amount of light reflected is less than or equal the amount of incoming light. Is the BRDF described there really normalized?


To the best of my knowledge, the formula described in that book is normalized. Not all of the other BRDFs in that book are normalized, though.
Note, however, that the Cook-Torrance code later adds the diffuse component "as is", when in fact you need to normalize that sum. A common cheap trick is to scale the diffuse term by the opposite of the Fresnel argument F0: "NdotL * (cSpecular * Rs + cDiffuse * (1 - F0))"
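As a sketch in C++ (all parameter names are illustrative; Rs is assumed to be the specular term already computed by the Cook-Torrance code, and f0 the Fresnel reflectance at normal incidence):

```cpp
// Cheap energy-conservation trick: scale the diffuse term by (1 - f0)
// so specular + diffuse doesn't reflect more energy than arrived.
float shade(float NdotL, float cSpecular, float Rs, float cDiffuse, float f0) {
    return NdotL * (cSpecular * Rs + cDiffuse * (1.0f - f0));
}
```

It's not physically exact (the true Fresnel term varies with the viewing angle, not just f0), but it's cheap and removes the most visible energy gain from naively adding the two terms.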

Frankly I'm not too interested in the math behind all this, but it'd still be really interesting to implement and see the results, and I need to understand it to some extent to be able to explain how to work with the lighting to our artists...

I'm afraid PBS is all about the math (the tech, which is what you're focusing on) and about feeding it realistic values (the art).
You asked whether the Cook-Torrance formula was normalized, but in fact we can't know just by looking at it (unless we already know, of course).
To truly check whether it's normalized, you have to calculate the integral, as in this website and in this draft. Either that, or write a Monte Carlo simulation.

Either approach takes more than just two minutes (and for some formulas it can actually be very hard, even for experienced mathematicians).


Edit: Fabian Giesen's site seems to be down. I've re-uploaded his PDF here.

#5143515 How to time bomb a beta?

Posted by Matias Goldberg on 31 March 2014 - 11:08 AM

any protection will ultimately depend on 3 system calls: get system time, get file time, and read sector. if those calls can found relatively easily, then its the code that uses them that must be obfusticated.

Self modifying code can indeed obfuscate those calls.

But most DRM approaches focus on checking whether the exe binary has been tampered with: multiple checksums at random events verify that the exe is still identical to the one you shipped (these are much harder to spot, because reading a file also looks like reading game data).
If a checksum fails, the exe has been tampered with, possibly to circumvent the time bomb. Just don't pop up a message saying "THIEF!". The checksum could also fail because a legit user has a virus that infected your binary.
Just stop the gameplay and display a courtesy pop-up saying this copy is from a beta, that they're playing past the expiration date and it may contain bugs, etc., with a link to buy the final version.

An innocent user may unknowingly get a circumvented version of your game, and a system like this can help you convert them into a paying customer.
A guilty user will just look for a more recent, fixed crack (where all your DRM schemes have already been broken).

#5142997 How to time bomb a beta?

Posted by Matias Goldberg on 28 March 2014 - 10:04 PM

You can't stop the game from being hacked and the time bomb circumvented.
All you can do is check the clock, keep a launch count, check file timestamps, phone home, obfuscate all of the former, and check multiple times. Then there are the more intrusive, annoying options: requiring an active internet connection to play, rootkits, or installing something in the MBR (if you did this on my machine, I would hunt you down).

But even then, all of this can be circumvented. Treat your customers well, not like thieves. Are you already a millionaire? Because if your game isn't worth it, it won't even be pirated, so you won't need to worry about it. If your game truly rocks, it will get pirated, but you will also make good money from honest customers, so you won't have to worry about it either.

Do some checks, but going to insane lengths may just damage your image.

#5142994 Building new PC...

Posted by Matias Goldberg on 28 March 2014 - 09:47 PM

nowadays that tends to be greater double-precisions support, sometimes ECC memory or larger framebuffers. It used to be things like hardware GL clip-planes in the past.

Indeed. None of these features is a "killer feature" for making a regular game.
To add to that list, perhaps the major difference between FirePro/Quadro and Radeon/GeForce is that GPU-to-CPU readback may not be as efficient, which matters when mixing GUI + 3D and when ray-picking using the Z buffer; both are commonly used by modeling products, but rarely by games. However, as Direct2D becomes more standard for UI, Linux renders its UI with OpenGL, and some GPGPU applications need fast transfers, the gap is getting smaller.

Nvidia is somewhat infamous for soft-downgrading or throttling their tech. In particular, the GeForce 4xx series had a HW flaw where GPU-to-CPU readback was extremely slow, causing most 3D modelling packages (Maya, Max, Blender, and even some games) to slow down so badly you'd end up with a crappy GPU outperformed by an old GeForce 8xxx series. The Quadro sister series, however, did not suffer from this flaw.

My recommendation is that if you go for regular consumer cards(*), first Google on Maya/Blender forums for that card model + "performance problems" or similar keywords, to see other people's experiences and avoid surprises.

(*) Personally, I would go for regular consumer cards.

Edit: Forgot to say (as Ravyne already mentioned), Quadro/FirePro are tuned for quality. This means "fast hacks" are disabled, and you get high-quality texture filtering (i.e. anisotropic) instead of blurry results. If you're an extremely talented artist it may matter to you, but for most it doesn't. Other quality differences may come in the RAMDAC (RAM Digital-to-Analog Converter), in case you still use VGA or another analog output. A high-quality RAMDAC is very important if you're doing professional video editing (with expensive equipment). Again, this isn't your case. Besides, if you're planning on using an HDMI or DVI cable, it doesn't even matter.

#5142984 [GLSL] NVIDIA vs ATI shader problem

Posted by Matias Goldberg on 28 March 2014 - 09:02 PM

spreading rumours that AMD has bugs...

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably NV's permissiveness that makes AMD look wrong.
But honestly, I despise what NV does on that front, because it means you can't rely on them (invalid code not only breaks on AMD; Intel's Mesa drivers on Linux are also quite good and will reject it too).

#5142206 a refresh rate problem in directx11

Posted by Matias Goldberg on 25 March 2014 - 11:04 PM

But why my rotation entity still spinning when i set the swap chain's refresh rate to 0

Probably because 0 is an invalid value and is being ignored, or it's just telling the driver to disable VSync.
Also, whatever number you provide, if VSync is off the GPU will try to render as fast as it can. That could be 10 Hz, 40 Hz, 300 Hz or any other number (but visible tearing will appear on the monitor).

Only a handful of refresh rate values are valid, and you have to ask DX11 to enumerate which ones are supported (this depends on the monitor and GPU running the program).

#5140152 Why is Candy Crush so Successful?

Posted by Matias Goldberg on 18 March 2014 - 05:49 PM

The game is already addictive and well executed. Points for that.


If you combine this with unethical viral marketing methods (unless extremely skilled, players are forced to either pay or spam all of their Facebook contacts with shares in order to advance), you pretty much get the recipe for short-term success(*).


(*) "Short term" here is measured not in time, but in a company's capacity to make its clients come back and buy again.

#5140147 Exactly what's the point of 'int32_t', etc.

Posted by Matias Goldberg on 18 March 2014 - 05:41 PM

I think the standards committee finally realized it was foolish to not make all types fixed size in the first place

Umm. No.
The standard guarantees that char is the minimum addressable unit a machine can handle (on x86, that's 8 bits), and that a short is at least 16 bits and at least as large as a char.
Back then there were machines whose register size in bits wasn't even a power of two, so this format made sense. This historical reason is also why signed integer overflow is undefined behavior, even though on all modern CPUs overflow behaves pretty much consistently.
The assumption that char = 8 bits was not a given back then, and certainly wasn't portable.
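That's exactly the gap the fixed-width types close. A quick sketch of what the standard does and doesn't promise:

```cpp
#include <climits>
#include <cstdint>

// The standard only promises minimums and orderings for the classic types...
static_assert(CHAR_BIT >= 8, "a byte is *at least* 8 bits, not exactly 8");
static_assert(sizeof(short) >= sizeof(char), "short is at least as large as char");
static_assert(sizeof(int) >= sizeof(short), "int is at least as large as short");

// ...whereas int32_t, where it exists, is exactly 32 bits on every platform.
static_assert(sizeof(int32_t) * CHAR_BIT == 32, "int32_t is exactly 32 bits");
```

On a platform where no type is exactly 32 bits, int32_t simply isn't defined, and code that needs it fails to compile rather than silently misbehaving.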

#5133389 deinstancing c-strings

Posted by Matias Goldberg on 21 February 2014 - 04:22 PM

A pointer's value cannot be known until the program is run,

Is this true? I don't think so. I know some pointers are rebased or something by the Windows loader, but in some way this pointer value is 'produced' at compile+link time, so I am not sure your suggestions are fully true here*

Aaaand you've proven your ignorance of architecture (not trying to offend you).

You're seeing that pointers equal integers on an x86 machine, probably running Windows or maybe even Linux.
Pointers are NOT integers. They're pointers.
An architecture could store, use and load pointers in a special-purpose register that cannot directly talk to integer or general-purpose registers.
Memory addresses could be laid out in a segmented memory model, or some other model different from the flat model.
The C & C++ standards account for that. They even account for architectures where a byte is composed of more than 8 bits (e.g. 9-bit bytes, on machines that haven't been produced in decades, btw).

Hence, when you say it "should be possible"... it is possible on the popular x86 arch running a flat memory model. But it's probably never going to be standard, because it would not work on radically different targets.
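The closest the standard comes is the optional uintptr_t type: an integer wide enough to round-trip a pointer on platforms that provide it. A small sketch (roundTrip is a hypothetical helper):

```cpp
#include <cstdint>

// Round-trip a pointer through an integer. Only uintptr_t (which the standard
// marks as *optional*) is guaranteed wide enough where it exists; on a
// segmented or special-register architecture it may simply not be provided,
// and the mapping between pointer bits and integer bits is
// implementation-defined either way.
int *roundTrip(int *p) {
    uintptr_t bits = reinterpret_cast<uintptr_t>(p);
    return reinterpret_cast<int *>(bits);
}
```

The guarantee is only that converting back yields the original pointer, not that the integer value is a meaningful "address" you can do arithmetic on.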