

Matias Goldberg

Member Since 02 Jul 2006

#5147973 References or Pointers. Which syntax do you prefer?

Posted by Matias Goldberg on 18 April 2014 - 01:35 PM

It's not just personal taste. References & pointers tell you something about the code.

  • const Foo &myFoo means this is an input variable, which cannot be null. It's probably a large struct and we're passing by reference to avoid a copy. If the reference is actually null, something really bad happened before entering the function. GDB & MSVC both support showing the address (just print &myFoo).
  • const Foo *myFoo means this is an input variable, but the pointer may be null, i.e. it's optional. Check the documentation to see whether you can assume it isn't.
  • Foo &myFoo. This is an output variable. You're expected to modify it. It could also be input, but that is discouraged for many reasons.
  • Foo *myFoo. This could be an output variable. Or it could be that you need to write to raw memory (e.g. a GPU pointer) and/or do some pointer arithmetic. It could be null (you may not always be expected to modify it). It could also be input. The most ambiguous of all.

Additionally, pointers can take a few more qualifiers that references sadly cannot, such as __restrict, which is very powerful for code optimization (see the sketch below).
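For what it's worth, here's a rough sketch of how those conventions read in practice. Foo, the function names and the parameters are made up for this illustration:

//Illustrative signatures only.
#include <cstddef>

struct Foo { float data[64]; };

//Input, cannot be null; passed by const reference to avoid the copy.
float sum( const Foo &myFoo );

//Optional input; callers may pass a null pointer.
void describe( const Foo *myFoo );

//Output parameter; the callee is expected to write to it.
void load( Foo &outFoo );

//Ambiguous: may be output, may be null, may be raw memory to write into.
//__restrict (exact spelling is compiler specific) promises dst and src don't
//alias, which lets the optimizer generate much better code.
void copy( float *__restrict dst, const float *__restrict src, std::size_t count );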




#5145499 where to start Physical based shading ?

Posted by Matias Goldberg on 08 April 2014 - 04:52 PM

 

I found this article useful, includes source code:
http://content.gpwiki.org/D3DBook:(Lighting)_Cook-Torrance

This is interesting! I've been looking a bit at that Cook-Torrance link, but from what I understand physically based shading is supposed to be "normalized", e.g. the amount of light reflected is less than or equal to the amount of incoming light. Is the BRDF described there really normalized?

 

To the best of my knowledge, the formula described in that book is normalized. Not all of the other BRDFs in that book are normalized though.
Note though that the Cook-Torrance code later adds the diffuse component "as is", but you in fact need to normalize that sum. A common cheap trick is to scale the diffuse term by one minus the Fresnel reflectance F0: NdotL * (cSpecular * Rs + cDiffuse * (1 - F0)). A sketch follows below.
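Purely as an illustration of that combination, written as plain C++ rather than shader code; the Vec3 type and the names are mine:

//cSpecular/cDiffuse are the material colours, Rs is the Cook-Torrance specular
//term, F0 is the Fresnel reflectance at normal incidence, NdotL = dot(N, L).
struct Vec3 { float x, y, z; };

Vec3 shade( const Vec3 &cSpecular, const Vec3 &cDiffuse,
            float Rs, float F0, float NdotL )
{
    //Scale the diffuse lobe by (1 - F0): energy that goes into the specular
    //lobe is removed from the diffuse one, keeping the sum roughly normalized.
    const float kd = 1.0f - F0;
    Vec3 outColour;
    outColour.x = NdotL * ( cSpecular.x * Rs + cDiffuse.x * kd );
    outColour.y = NdotL * ( cSpecular.y * Rs + cDiffuse.y * kd );
    outColour.z = NdotL * ( cSpecular.z * Rs + cDiffuse.z * kd );
    return outColour;
}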
 

Frankly I'm not too interested in the math behind all this, but it'd still be really interesting to implement and see the results, and I need to understand it to some extent to be able to explain how to work with the lighting to our artists...

I'm afraid PBS is all about the math (tech, what you're focusing on), and feeding it with realistic values (the art).
You asked whether the Cook-Torrance formula was normalized, but in fact we can't tell just by looking at it (unless we already know, of course).
To truly check whether it's normalized, you have to calculate the integral, like in this website and in this draft. Either that, or write a Monte Carlo simulation (a sketch follows below).

Either approach takes more than just two minutes (and for some formulas it can actually be very hard, even for experienced mathematicians).
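A minimal sketch of such a Monte Carlo check; the brdf callback is a placeholder that only depends on cos(theta) for brevity (a real BRDF also depends on the view direction):

//Integrate brdf * cos(theta) over the hemisphere with uniform sampling;
//a normalized BRDF keeps the result <= 1.
#include <random>

double hemisphereIntegral( double (*brdf)( double cosTheta ), unsigned numSamples )
{
    const double kTwoPi = 6.283185307179586;
    std::mt19937 rng( 1234u );
    std::uniform_real_distribution<double> uniform01( 0.0, 1.0 );

    double sum = 0.0;
    for( unsigned i = 0; i < numSamples; ++i )
    {
        //Uniform hemisphere sampling: cos(theta) is uniform in [0, 1] and the
        //pdf w.r.t. solid angle is 1 / (2*pi), hence the kTwoPi factor below.
        const double cosTheta = uniform01( rng );
        sum += brdf( cosTheta ) * cosTheta * kTwoPi;
    }
    return sum / numSamples; //Should not exceed 1 if the BRDF is normalized.
}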

 

Edit: Fabian Giesen's site seems to be down. I've re-uploaded his PDF here.




#5143515 How to time bomb a beta?

Posted by Matias Goldberg on 31 March 2014 - 11:08 AM

any protection will ultimately depend on 3 system calls: get system time, get file time, and read sector. If those calls can be found relatively easily, then it's the code that uses them that must be obfuscated.

Self-modifying code can indeed obfuscate those calls.

But most DRM approaches focus on checking whether the exe binary has been tampered with: multiple checksums at random events verify that the exe is still identical to the one you shipped (and these are much harder to spot, because reading a file looks just like reading game data).
If the checksum fails, the exe has been tampered with, possibly to circumvent the time bomb. Just don't pop up a message saying "THIEF!": the checksum could also fail because a legit user has a virus that infected your binary.
Instead, stop the gameplay and display a courteous pop-up explaining that this copy is a beta, that they are playing past the expiration date, that it may contain bugs, etc.; with a link to buy the final version. A minimal checksum sketch is shown below.
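A minimal sketch of such a check, assuming a simple FNV-1a hash and an expected value baked in at build time (the names are mine; a real scheme would hide both the expected value and the check itself, and spread several of them around the codebase):

#include <cstdint>
#include <cstdio>
#include <vector>

//FNV-1a over a byte buffer.
static std::uint32_t fnv1a( const std::vector<unsigned char> &data )
{
    std::uint32_t hash = 2166136261u;
    for( unsigned char byte : data )
    {
        hash ^= byte;
        hash *= 16777619u;
    }
    return hash;
}

bool binaryLooksUntampered( const char *exePath, std::uint32_t expectedHash )
{
    std::FILE *file = std::fopen( exePath, "rb" );
    if( !file )
        return false;

    std::fseek( file, 0, SEEK_END );
    const long size = std::ftell( file );
    std::fseek( file, 0, SEEK_SET );

    std::vector<unsigned char> data( static_cast<std::size_t>( size ) );
    std::fread( data.data(), 1, data.size(), file );
    std::fclose( file );

    //As far as the OS calls go, this is indistinguishable from loading game data.
    return fnv1a( data ) == expectedHash;
}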

An innocent user may unknowingly get a circumvented version of your game, and a system like this can help you convert him into a paying customer.
A guilty user will just look for a newer, fixed crack (where all your DRM schemes have already been broken).


#5142997 How to time bomb a beta?

Posted by Matias Goldberg on 28 March 2014 - 10:04 PM

You can't stop the game from being hacked and the time bomb from being circumvented.
All you can do is check the clock, keep a launch count, check the timestamps of your files, phone home, obfuscate all of the former, and check multiple times. And then there are the more intrusive, annoying options: requiring an active internet connection to play the game, or installing a rootkit or something in the MBR (if you did this on my machine I would hunt you down).

But even then, all of this can be circumvented. Treat your customers well, not like thieves. Are you already a millionaire? Because if your game isn't worth it, it won't even be pirated, so you won't need to worry about it. If your game truly rocks, it will get pirated, but you will also make plenty of money from honest customers, so you won't have to worry about it either.

Do some checks, but going to insane lengths may just damage your image.


#5142994 Building new PC...

Posted by Matias Goldberg on 28 March 2014 - 09:47 PM

nowadays that tends to be greater double-precision support, sometimes ECC memory or larger framebuffers. It used to be things like hardware GL clip planes in the past.

Indeed. None of these features are "killer features" for designing a regular game.
To add to that list, perhaps the major difference between Fire/Quadro and Radeon/GeForce is that GPU-to-CPU readback may not be as efficient on the consumer cards, which matters when mixing GUI + 3D and when ray-picking using the Z buffer. Both features are commonly used by modelling packages, but rarely by games. However, as Direct2D becomes more standard for UI, Linux renders its UI with OpenGL, and some GPGPU applications need fast CPU-GPU transfers, the gap is getting smaller.

Nvidia is quite infamous for soft-downgrading or throttling their tech. In particular, the GeForce 4xx series had a HW flaw where GPU-to-CPU readback was extremely slow, causing most 3D modelling packages (Maya, Max, Blender; and even some games) to slow down so badly that you ended up with a crappy GPU outperformed by an old GeForce 8xxx series. The Quadro sister series, however, did not suffer from this flaw.

My recommendation: if you go for a regular consumer card(*), first Google the Maya/Blender forums for that card model plus "performance problems" or similar keywords, to see other people's experiences and avoid surprises.

(*) Personally, I would go for regular consumer cards.

Edit: Forgot to say (as Ravyne already mentioned), Quadro/Fire cards are tuned for quality. This means "fast hacks" are disabled, and you get high-quality texture filtering (i.e. anisotropic) instead of blurry results. If you're an extremely talented artist, it may matter to you; but for most, it doesn't. Other quality differences may come from the RAMDAC (Random Access Memory Digital-to-Analog Converter), in case you still use VGA or another analog output. A high-quality RAMDAC is very important if you're doing professional video editing (with expensive equipment). Again, this isn't your case. Besides, if you're planning on using an HDMI or DVI cable, it doesn't even matter to you.


#5142984 [GLSL] NVIDIA vs ATI shader problem

Posted by Matias Goldberg on 28 March 2014 - 09:02 PM

spreading rumours that AMD has bugs...

Naah, the rumours started because AMD's OpenGL implementation used to be really, really, REALLY broken. But that isn't the case anymore.
Nowadays it's probably true that NV makes AMD look bad.
But honestly, I despise what NV does on that front, because it means you can't rely on them (invalid code doesn't only break on AMD; the Intel Mesa drivers on Linux, which are also quite good, reject it too).


#5142206 a refresh rate problem in directx11

Posted by Matias Goldberg on 25 March 2014 - 11:04 PM

But why is my rotating entity still spinning when I set the swap chain's refresh rate to 0?

Because 0 is probably an invalid value and is either being ignored or is just telling the driver to disable VSync.
Also, whatever number you provide, if VSync is off the GPU will try to render as fast as it can. That could be 10 Hz, 40 Hz, 300 Hz or any other number (but visible tearing will happen on the monitor).

Only a handful of refresh rate values are valid, and you have to ask DX11 to enumerate which ones are supported (it depends on the monitor + GPU combination running the program). A sketch is shown below.
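A rough sketch of that enumeration with DXGI (error handling omitted; it assumes the first adapter and the first output, and a specific back buffer format):

#include <dxgi.h>
#include <vector>

std::vector<DXGI_MODE_DESC> getSupportedModes()
{
    IDXGIFactory *factory = 0;
    CreateDXGIFactory( __uuidof(IDXGIFactory), reinterpret_cast<void**>( &factory ) );

    IDXGIAdapter *adapter = 0;
    factory->EnumAdapters( 0, &adapter );   //First GPU.

    IDXGIOutput *output = 0;
    adapter->EnumOutputs( 0, &output );     //First monitor attached to it.

    //First call retrieves the count, second call fills the list.
    UINT numModes = 0;
    output->GetDisplayModeList( DXGI_FORMAT_R8G8B8A8_UNORM, 0, &numModes, 0 );
    std::vector<DXGI_MODE_DESC> modes( numModes );
    output->GetDisplayModeList( DXGI_FORMAT_R8G8B8A8_UNORM, 0, &numModes, modes.data() );

    //Each DXGI_MODE_DESC carries RefreshRate.Numerator / RefreshRate.Denominator;
    //pick one of those pairs for the swap chain description.
    output->Release();
    adapter->Release();
    factory->Release();
    return modes;
}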


#5140152 Why is Candy Crush so Successful?

Posted by Matias Goldberg on 18 March 2014 - 05:49 PM

The game is already addictive and well executed. Points for that.

 

If you combine this with unethical viral marketing methods (unless they are extremely skilled, players are forced to either pay or pester all of their Facebook contacts with share requests in order to advance), you pretty much get the recipe for short-term success(*).

 

(*) Short term here is not measured in time, but rather in a company's capacity to make its customers come back and buy again.




#5140147 Exactly what's the point of 'int32_t', etc.

Posted by Matias Goldberg on 18 March 2014 - 05:41 PM

I think the standards committee finally realized it was foolish to not make all types fixed size in the first place

Umm. No.
The standard guarantees that char is the smallest addressable unit the machine can work with (in the x86 case, that's 8 bits), and that a short is at least as large as a char.
Back then there were machines whose register size in bits was not even a multiple of 8 (e.g. 36-bit machines); thus this format made sense. This historical reason is also why signed integer overflow is undefined behavior, even though on all modern CPUs an overflow behaves pretty much consistently.
The assumption that char = 8 bits was not a given back then, and certainly wasn't portable.
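As a small illustration of my own (not from the original post), the fixed-width types from <cstdint> make the size explicit, whereas the built-in types only come with minimum guarantees:

#include <cstdint>
#include <climits>

//int32_t is exactly 32 bits wherever it exists; on platforms that cannot
//provide it, the typedef simply isn't defined.
static_assert( sizeof(std::int32_t) * CHAR_BIT == 32, "int32_t is exactly 32 bits" );

//'int' is only guaranteed to hold at least 16 bits; on most desktop platforms
//it happens to be 32, but the standard does not promise it.
std::int32_t packedColour = 0;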


#5133389 deinstancing c-strings

Posted by Matias Goldberg on 21 February 2014 - 04:22 PM

A pointer's value cannot be known until the program is run,

 
Is this true? I don't think so. I know some pointers are rebased or something by the Windows loader, but in some way that pointer value is 'produced' at compile+link time, so I am not sure your suggestions are fully true here*

Aaaand you proved your ignorance of architecture (not trying to offend you).

You're seeing that pointers happen to look like integers on an x86 machine, probably running Windows or maybe even Linux.
Pointers are NOT integers. They're pointers.
An architecture could store, use and load pointers in a special-purpose register that cannot directly talk to integer or general-purpose registers.
Memory addresses could be laid out in a segmented memory model, or another model different from the flat model.
The C & C++ standards account for that. They even account for architectures where a byte is not composed of 8 bits (architectures that haven't been produced in decades, btw).

Hence, when you say "should be possible"... it is possible on the popular x86 arch running with a flat memory model. But it's probably never going to be standard, because it would not work on radically different targets. (A small illustration follows below.)
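Side illustration of my own (not from the thread): if you really do need an integer view of a pointer, std::uintptr_t is the sanctioned way, and it is optional in the standard precisely because of the exotic architectures described above:

#include <cstdint>

void example( const char *str )
{
    const std::uintptr_t asInteger = reinterpret_cast<std::uintptr_t>( str );
    const char *roundTripped = reinterpret_cast<const char*>( asInteger );
    //Only the round-trip back to the original pointer is guaranteed; doing
    //arithmetic on asInteger and expecting a meaningful address is not portable.
    (void)roundTripped;
}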


#5133162 deinstancing c-strings

Posted by Matias Goldberg on 20 February 2014 - 10:50 PM

Hodgman, I really like you and often agree with you, but that code you just posted is filled with ugliness: it's got std::set, std::string and globals. It can't get uglier than that.
 
To answer the OP's post, you'll have to use const variables with extern forward declarations, and define the string in one compilation unit:
 

//Definitions.h
extern const char *gCAT;
 
//Definitions.cpp
#include "Definitions.h"
const char *gCAT = "cat";
 
//Foo.cpp
#include <cstring>
#include "Definitions.h"
 
if( strcmp( myVar, gCAT ) == 0 )
{
    //myVar contains "cat"
}

Edit: Just to clarify what's "ugly" in Hodgman's code:

  • Common implementations of std::set are not cache friendly. Any advantage from string interning is obliterated by this (duplicated strings in the final compiled binary are simply going to be faster).
  • std::string creates unnecessary heap allocations (plus other minor misc. overhead).
  • It's not thread safe (the code allows adding more strings to the pool at runtime; thus if read/write access happens from multiple threads...).
  • It's got globals



#5131453 Auto update systems - yes or no

Posted by Matias Goldberg on 14 February 2014 - 08:22 PM

I like SourceTree's auto updater:
 

Every time there's an update, there are three buttons:

  • Update (the rest happens automatically)
  • Remind me tomorrow
  • Don't ask me ever again.

I'd say there's a missing "Always update automatically" option; those four options should cover everyone's tastes.




#5130707 GPU Gems 3 - Samples and source code?

Posted by Matias Goldberg on 11 February 2014 - 09:48 PM

Afaik, the reason you can read the book online through Nvidia's site is that they paid the bill for all of us in a deal they made.

But afaik, the deal didn't include the DVD.

 

So if you want the DVD legally, you'll have to purchase the book.




#5130583 Is today programming a games easier or harder than in 8,16- bit era?

Posted by Matias Goldberg on 11 February 2014 - 12:33 PM

This is all relative.

 

In the '80s and '90s, a successful "AAA" game that reached the masses could be made by one man or a team of 5 people. It was still considered "hard" in its time, though. Pac-Man, for example.

However, to reach the masses, you had to have a publisher or similar backing you up, since arcade HW, console licensing fees, and distribution & packaging costs were very high. Pac-Man was made by 3 men, but published by Namco, who took most of the profits and paid for most of the costs.

 

Today, making a successful "AAA" game requires 50+ people (just scroll through the credits of any AAA game; they're insane), but distribution costs are much lower.

 

Nonetheless, today it's possible for a small team to make decent-looking, competent titles that reach the masses even with a low budget, thanks to open source engines, Unity, UDK 4, Youtube, Wordpress; what the press today refers to as an "Indie" title. (Back in the '90s an Indie was a creepy guy in a garage with a passion for gamedev who rarely got acknowledged and shared their experience with other Indies over 56k modems; getting a rotating triangle rendering on screen used to feel like a major achievement.)

 

So, again... it's all relative.




#5130577 Naive Question: Why not do light culling in world space?

Posted by Matias Goldberg on 11 February 2014 - 12:15 PM

Light culling algorithms are common.

 

In Forward renderers you have a limited number of lights, so culling is a must. In Deferred renderers, lights are often just rendered geometry with special additive shaders, and thus can be culled like normal geometry.

 

It's worth noting you can use the standard algorithms that cull your geometry to also cull your lights, just as TheChubu says.

If you use a Quadtree to cull your geometry, you can use it for lights too.

 

Perhaps your case is special in that you already know how it will be used: most lights are static. Thus a custom solution that fits your particular needs is not crazy.

 

Be careful with quadtrees though. Trees often involve multiple dependent-read indirections, which means the GPU's memory latency can slow you down a lot (though that could perhaps be worked around with a few assumptions); and unless the memory layout is well planned, they're usually not cache friendly either.

Grids are often much better. They waste more RAM though. (A minimal sketch follows below.)
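A minimal sketch of binning lights into a uniform 2D grid; the types, names and layout are mine, just to illustrate the trade-off (a production version would flatten the per-cell lists into one index array plus per-cell offsets to be cache friendly):

#include <cstddef>
#include <vector>

struct Light { float x, z, radius; };

struct LightGrid
{
    float       originX, originZ, cellSize;
    std::size_t width, height;
    std::vector< std::vector<std::size_t> > cells; //Light indices per cell.

    void build( const std::vector<Light> &lights )
    {
        cells.assign( width * height, std::vector<std::size_t>() );
        for( std::size_t i = 0; i < lights.size(); ++i )
        {
            const Light &l = lights[i];
            //Insert the light into every cell its radius touches.
            const int x0 = toCell( l.x - l.radius - originX, width );
            const int x1 = toCell( l.x + l.radius - originX, width );
            const int z0 = toCell( l.z - l.radius - originZ, height );
            const int z1 = toCell( l.z + l.radius - originZ, height );
            for( int z = z0; z <= z1; ++z )
                for( int x = x0; x <= x1; ++x )
                    cells[z * width + x].push_back( i );
        }
    }

    int toCell( float distance, std::size_t cellCount ) const
    {
        int cell = static_cast<int>( distance / cellSize );
        if( cell < 0 ) cell = 0;
        if( cell >= static_cast<int>( cellCount ) ) cell = static_cast<int>( cellCount ) - 1;
        return cell;
    }
};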





