

Matias Goldberg

Member Since 02 Jul 2006
Offline Last Active Today, 12:59 PM

#5212298 Successful titles from non AAA studios (recent)

Posted by Matias Goldberg on 22 February 2015 - 12:26 PM

To answer OP's question... Flappy Bird.

Now I better run before I get shot and a war starts.


#5211726 Hiding savedata to prevent save backup

Posted by Matias Goldberg on 19 February 2015 - 12:39 PM

1. Just name the save "sound.bak" or something. Really simple but also very easy to "crack"!

Just mask it as an asset by exploiting a file format that allows putting extra data at the end of the stream, which regular file viewers will ignore (i.e. png, jpg, pdf), like AngeCryption does (see slides).
Just make sure you don't really depend on that asset in case saving the file leaves it corrupt.

2. Save the data to some silly folder like "C:/appdata/flashdata/fakecompany/sound.bak". But it's ugly to create a folder on the user's computer, and what if this folder gets cleaned out (since it's not supposed to be affiliated with the game)? Then the user will lose their progress.

If you do that, your program enters malware territory.

3. Save a timestamp to the savefile and keep a registry of the timestamps somewhere. If the savefile is replaced they will mismatch and you can refuse to load that savegame. But what if the player backs up the registry too? Which means I have to "hide" the registry file as well.

What happens if the clock goes kaputt? That's quite common if the battery has died. You'll just annoy your users.
Timestamps aren't reliable.

Also be aware that the process of safely saving a file (that is, following good practices) inherently involves performing an internal backup (assuming no obfuscation): you first rename your Save.dat to Save.dat.old, then write your new Save.dat, and finally delete Save.dat.old.
If the system crashes or the power goes off, on the next run you check whether Save.dat.old exists and verify that Save.dat is valid and doesn't crash when loaded. Once Save.dat is known to be OK, delete Save.dat.old; otherwise delete Save.dat and rename Save.dat.old back to Save.dat.
This way your user won't lose their entire progress, just the last bit of progress they made (the power went off while saving, after all).
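A minimal C++17 sketch of that procedure (names like validateSave are illustrative placeholders, not from any real codebase):

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

// Crash-safe save: keep the previous file as Save.dat.old until the new
// Save.dat is fully written, then drop the backup.
void writeSave(const fs::path &savePath, const std::string &data)
{
    const fs::path oldPath = savePath.string() + ".old";

    if (fs::exists(savePath))
        fs::rename(savePath, oldPath);

    {
        std::ofstream out(savePath, std::ios::binary | std::ios::trunc);
        out.write(data.data(), static_cast<std::streamsize>(data.size()));
    } // closed (and flushed) before the backup is deleted

    fs::remove(oldPath);
}

// On startup: if Save.dat.old exists, the previous run died mid-save.
// validateSave() is whatever integrity check your format supports.
void recoverSave(const fs::path &savePath, bool (*validateSave)(const fs::path &))
{
    const fs::path oldPath = savePath.string() + ".old";
    if (!fs::exists(oldPath))
        return; // last save completed normally

    if (fs::exists(savePath) && validateSave(savePath))
        fs::remove(oldPath);            // new save is fine, discard the backup
    else
    {
        fs::remove(savePath);           // new save is missing or corrupt
        fs::rename(oldPath, savePath);  // fall back to the previous one
    }
}
```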

Keep in mind that for solutions that rely on writing to two or more locations to verify the save hasn't been tampered with, you have to be very careful that writing to all those files ends up being an atomic operation; otherwise your "anticheat" technology will turn against honest users who just experienced a system crash or a power outage and now have a valid save file with a corrupt verification file.

Why prevent cheating on single player games? Cheating is part of the fun. Otherwise TAS Video communities wouldn't prosper.


#5211441 Strange CPU cores usage

Posted by Matias Goldberg on 18 February 2015 - 08:30 AM

If you check the docs of the libs you're using: audio in SDL is multithreaded.

 

Starting with Windows Vista, all audio is software based, unlike Windows XP, which could have hardware acceleration. This could easily explain the higher CPU usage.

Just check with a profiler or with Process Explorer which threads are active.




#5209331 glTexSubImage3D, invalid enum

Posted by Matias Goldberg on 07 February 2015 - 05:52 PM

@Chris_F
 
Then you've been using GL wrongly or you're out of touch with the driver team (following the devs on Twitter is also a good idea). They've often fixed my bug reports within a week and included the fix in the next driver update.
 
Yes, sRGB textures got broken in one of their releases and got fixed in the next driver release; that was a long time ago by now. I've been doing very bleeding edge OpenGL 4.4 and lots of low level management, and I haven't run into problems that weren't fixed after being reported.


#5208465 SDL2 and Linux [RANT]

Posted by Matias Goldberg on 03 February 2015 - 03:10 PM

Roots was correct; my anger was excessive considering it is free software.

 

However, a good part of that anger was fueled by the fact that one major bug (maximizing, resizing and restoring) was not only reported in 2012, but also had multiple patches proposed that were never applied. This made me question the will of the developers to push the software forward on the Linux platform.

Add that to the other bugs, and my anger went off the charts. I mean, a program that hangs if the video drivers aren't really, really up to date (i.e. we first try to create a GL context, and if that fails we try again with a lower spec) can't be deployed (the amount of bug reports would be too high), which means I would have to seriously reconsider using SDL.
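Roughly, the fallback I mean looks like this (a sketch only; the version list and logging are mine, not SDL's or my actual code):

```cpp
#include <SDL.h>
#include <cstdio>

// Try to create the most capable GL context available, stepping down the
// requested version instead of giving up (or hanging) on old drivers.
// Assumes 'window' was created with the SDL_WINDOW_OPENGL flag.
SDL_GLContext createBestContext(SDL_Window *window)
{
    const int versions[][2] = { {4, 4}, {4, 1}, {3, 3}, {2, 1} };

    for (const auto &v : versions)
    {
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, v[0]);
        SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, v[1]);

        SDL_GLContext ctx = SDL_GL_CreateContext(window);
        if (ctx)
        {
            std::printf("Created GL %d.%d context\n", v[0], v[1]);
            return ctx;
        }
    }
    return nullptr; // fail gracefully; show an error instead of hanging
}
```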

 

However, the fact that two of those major bugs (which strongly affect end-user deployment) were fixed within a day of this post restores my faith in the software; it's living up to its good reputation.




#5208403 Best practices for packing multiple objects into buffers

Posted by Matias Goldberg on 03 February 2015 - 08:46 AM

Hold on. Hold on. I see a lot of outdated advice.

 

In 2014 OpenGL 4 changed: AZDO (Approaching Zero Driver Overhead) was introduced. It fundamentally changed how the API should be managed, and most of these techniques can still be used on GL3-level hardware (as long as the driver is up to date and exposes the necessary extensions).

Unfortunately, it still contains a lot of backwards compatibility baggage; hopefully GLNext will address that issue.

 

The static / dynamic / streaming distinctions are gone. GL_ARB_buffer_storage was introduced in GL 4.4 and is the new way of creating buffer objects, using the function glBufferStorage instead of glBufferData. The "immutability" doesn't mean that the contents of the buffer can't be changed; it just means that the size of the object (and its access flags) can't be changed afterwards (just like in D3D11...), which is something glBufferData allowed.

 

The new access flags are much more low level. It's not streaming / static / dynamic anymore; it's just (see the sketch right after this list):

  1. Whether it can be written to by the CPU using glMap*. (CPU -> GPU)
  2. Whether it can be read from by the CPU using glMap*. (CPU <- GPU)
  3. Persistent mapping flags (only available to GL4/D3D11 hardware; on GL3/D3D10 hw you can still use GL_UNSYNCHRONIZED_BIT when mapping as an inferior but still very good workaround)
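For illustration, a rough sketch of those cases (my own example, assuming a GL 4.4 context and a function loader already set up; sizes are arbitrary):

```cpp
GLuint gpuOnlyBuf = 0, dynamicBuf = 0;
glGenBuffers(1, &gpuOnlyBuf);
glGenBuffers(1, &dynamicBuf);

// No CPU access flags: almost certainly ends up in dedicated GPU memory.
glBindBuffer(GL_ARRAY_BUFFER, gpuOnlyBuf);
glBufferStorage(GL_ARRAY_BUFFER, 128 * 1024 * 1024, nullptr, 0);

// CPU-write + persistent mapping (GL4/D3D11 hw): map once, keep the pointer.
const GLbitfield persistentFlags =
        GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
glBindBuffer(GL_ARRAY_BUFFER, dynamicBuf);
glBufferStorage(GL_ARRAY_BUFFER, 16 * 1024 * 1024, nullptr, persistentFlags);
void *mappedPtr = glMapBufferRange(GL_ARRAY_BUFFER, 0, 16 * 1024 * 1024,
                                   persistentFlags);

// GL3/D3D10 hw fallback: create the buffer with glBufferData instead and
// re-map it every frame with GL_MAP_WRITE_BIT | GL_MAP_UNSYNCHRONIZED_BIT.
```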

A buffer that has no read or write flags will 99% likely be allocated in dedicated GPU memory; the current recommendation is to keep very few pools of these (i.e. one big pool of 128MB for all your data: vertices, indices, texture buffer objects, uniform buffer objects, indirect buffer objects, etc.; beware that if you make it too big you may run out of GL memory due to fragmentation, just like with regular malloc on a resource-constrained device).

Adding CPU access flags, on the other hand, may force the driver to allocate the memory in a place where both the GPU and CPU can access it directly (most likely if you only use CPU -> GPU write flags), or where only the CPU can access it and the driver later copies it to/from the real GPU (somewhat likely if you include CPU <- GPU read flags, and almost certainly if you use both CPU <-> GPU access flags). These buffers should be kept small (between 4MB and 32MB each).

 

You can use these buffers with CPU access flags as intermediate buffers to upload data to your buffers in video memory, or as dynamic memory to write every frame. The difference lies in how you place your fences (i.e. for your "dynamic" buffers, you want to use just one fence for all dynamic buffers; one per frame) and how much memory you reserve (see the AZDO slides: dynamic content uses a triple buffer scheme, so you allocate 3x as much as you need); but the access flags passed to glBufferStorage are exactly the same.

 

Thanks to persistent mapping (or GL_UNSYNCHRONIZED_BIT), you will control the filling of the buffers with CPU access flags manually using fences.
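A minimal sketch of that fencing, following the triple buffer idea from the AZDO talk (names and layout are mine):

```cpp
// One fence per frame; region 'frameIdx' of the persistently mapped buffer
// is only written again once the GPU is done reading it (3 frames later).
GLsync frameFence[3] = { 0, 0, 0 };
int    frameIdx = 0;

void beginFrame()
{
    if (frameFence[frameIdx])
    {
        // Blocks only if the GPU is more than 3 frames behind.
        glClientWaitSync(frameFence[frameIdx], GL_SYNC_FLUSH_COMMANDS_BIT,
                         GLuint64(1000000000));
        glDeleteSync(frameFence[frameIdx]);
        frameFence[frameIdx] = 0;
    }
    // ...write this frame's data into region 'frameIdx' of the mapped pointer...
}

void endFrame()
{
    // Insert the fence after submitting the commands that read this region.
    frameFence[frameIdx] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
    frameIdx = (frameIdx + 1) % 3;
}
```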

The last section of the ARB_buffer_storage spec contains example code showing how to use a buffer with write access flags to upload data to a buffer with no CPU access flags; in other words, mimicking what you would do in D3D11 with a "staging buffer" to fill a "default buffer".

Note however that you can use persistent mapping to read from the GPU, but you can't use GL_UNSYNCHRONIZED_BIT to read from the GPU; that's the only gotcha.
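For reference, that staging pattern looks roughly like this (a sketch in the spirit of the spec example, not a copy of it; error handling omitted, <cstring> assumed for memcpy):

```cpp
// stagingBuf was created via glBufferStorage with GL_MAP_WRITE_BIT;
// gpuOnlyBuf was created with no CPU access flags (flags = 0).
void uploadViaStaging(GLuint stagingBuf, GLuint gpuOnlyBuf,
                      const void *src, GLsizeiptr bytes)
{
    glBindBuffer(GL_COPY_READ_BUFFER, stagingBuf);
    void *dst = glMapBufferRange(GL_COPY_READ_BUFFER, 0, bytes,
                                 GL_MAP_WRITE_BIT | GL_MAP_FLUSH_EXPLICIT_BIT);
    memcpy(dst, src, static_cast<size_t>(bytes));
    glFlushMappedBufferRange(GL_COPY_READ_BUFFER, 0, bytes);
    glUnmapBuffer(GL_COPY_READ_BUFFER);

    // GPU-side copy into the buffer that lives in video memory.
    glBindBuffer(GL_COPY_WRITE_BUFFER, gpuOnlyBuf);
    glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, bytes);
}
```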

 

By keeping all your static/immutable meshes in one big buffer object (basically, most of your data), you can use a single glMultiDrawIndirect (MDI for short) call to render all meshes that use the same shader and vertex format. Even if MDI isn't present (i.e. GL3 hardware), you can still use instancing and avoid switching VAOs most of the time (you only need to switch VAOs if the vertex format is different, or if the mesh lives in a different buffer object).

MDI can't be used to render two meshes whose data lives in different buffer objects.
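A rough sketch of what an MDI submission looks like (GL 4.3+; I'm assuming the indirect buffer was created with GL_DYNAMIC_STORAGE_BIT, and that per-draw constants are fetched via gl_DrawID or the baseInstance trick):

```cpp
#include <vector>

// Matches the layout glMultiDrawElementsIndirect expects.
struct DrawElementsIndirectCommand
{
    GLuint count;          // index count of this mesh
    GLuint instanceCount;  // usually 1
    GLuint firstIndex;     // offset into the shared index buffer
    GLuint baseVertex;     // offset into the shared vertex buffer
    GLuint baseInstance;   // handy as a per-draw id on GL4 hardware
};

void drawAllMeshes(GLuint indirectBuf,
                   const std::vector<DrawElementsIndirectCommand> &cmds)
{
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glBufferSubData(GL_DRAW_INDIRECT_BUFFER, 0,
                    GLsizeiptr(cmds.size() * sizeof(DrawElementsIndirectCommand)),
                    cmds.data());

    // One call for every mesh sharing the same VAO, shader and buffer object.
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, nullptr,
                                GLsizei(cmds.size()),
                                sizeof(DrawElementsIndirectCommand));
}
```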

 

This is basically low level management, which means it's not newbie friendly, and I haven't seen tutorials yet; so expect to bang your head against the wall a couple of times. But it pays off in the long term, and this is where modern GL is heading.

 

apitest is excellent reference code for modern GL programming practices. It shows how to efficiently wait for a fence and how to render with these kinds of buffers.




#5208204 SDL2 and Linux [RANT]

Posted by Matias Goldberg on 02 February 2015 - 09:42 AM

Whoa!

Credit is due where credit is due.

 

Yesterday the maximizing bugs and the GL context creation bug were fixed. Very fast response; that is really satisfying.

I have to try the latest version again now and see if the other bugs remain now that these have been fixed.




#5207957 ShadowMapping in the year 2015?

Posted by Matias Goldberg on 31 January 2015 - 04:49 PM

A Sampling of Shadow Techniques by MJP is still very up to date.




#5206195 no vsync means double buffering to avoid tearing, right?

Posted by Matias Goldberg on 23 January 2015 - 08:30 AM

 

single buffering = not possible on modern OS's

 

Sometimes I wish it were possible, if only to create less laggy GUIs...

 

Single buffering w/out VSync is the best way to reduce latency, but it will be glitchy as hell.

Double Buffer w/ VSync often achieves lower latency than Single Buffering w/ VSync.




#5205331 OpenGL samplers, textures and texture units (design question)

Posted by Matias Goldberg on 19 January 2015 - 11:10 AM

@Hodgman:

Oh, you want to start a war, don't you? :)

 

The thing is, DX has its trade-offs. GL may use more bandwidth in the worst-case scenario (though these descriptors often fit in the L1 cache, and wave occupancy also influences how often the data is refetched); but the DX style involves more instructions and more pointer chasing. GDDR is good at bandwidth, not at pointer chasing.

 

Timothy Lottes has two posts with a very thorough analysis of both styles on modern hardware. The short version is that there is no ultimate best solution; it depends on what your bottleneck is and on the characteristics of your scene.

Interestingly, Timothy Lottes ends up concluding that GL-style fetching is still superior for the general case. However, this will be an endless debate...




#5205189 glBindVertexArray structure/protocol

Posted by Matias Goldberg on 18 January 2015 - 07:59 PM

It should be noted that you can modify a bound VAO. You can unbind it. You can bind it again and the previous settings will be restored, but you can still modify it.

 

The idea is that you should create a VAO, bind it, modify it once, and never modify it again (i.e. you "bake" the parameters into each VAO). However, there are contradictory real-world performance results, and therefore some recommend just binding one global VAO, leaving it bound, and modifying it on the fly.
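A small sketch of the "bake once" approach (the attribute layout is just an example: interleaved position + normal):

```cpp
GLuint bakeVao(GLuint vbo, GLuint ibo)
{
    GLuint vao = 0;
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);

    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableVertexAttribArray(0);   // position
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                          (void *)0);
    glEnableVertexAttribArray(1);   // normal
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 6 * sizeof(float),
                          (void *)(3 * sizeof(float)));

    // The element array buffer binding is stored inside the VAO.
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

    glBindVertexArray(0);
    return vao;
}

// At draw time, just bind and draw; the VAO is never modified again:
//   glBindVertexArray(vao);
//   glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, nullptr);
```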

Good reads:

Cheers




#5203473 OpenGL samplers, textures and texture units (design question)

Posted by Matias Goldberg on 11 January 2015 - 08:50 AM

Indeed. It's as haegarr said.

 

Before the sampler objects extension, texture parameters lived inside the texture. If you wanted to use the same texture with different sampling parameters (i.e. clamp vs wrap, point vs bilinear filtering, etc.), you had to clone the texture and use twice as much GPU RAM.

 

The sampler objects extension addressed this issue: sampling parameters are now separate from the texture, and when a sampler object is bound to a texture unit, it overrides the texture's internal settings.
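For example (a sketch, GL 3.3+; myTexture is a placeholder): binding the same texture to two units with two different sampler objects gives each unit its own filtering/wrapping, overriding whatever the texture has stored internally.

```cpp
GLuint samplers[2];
glGenSamplers(2, samplers);

// Unit 0: trilinear filtering + wrap.
glSamplerParameteri(samplers[0], GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(samplers[0], GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(samplers[0], GL_TEXTURE_WRAP_S, GL_REPEAT);
glSamplerParameteri(samplers[0], GL_TEXTURE_WRAP_T, GL_REPEAT);

// Unit 1: point filtering + clamp.
glSamplerParameteri(samplers[1], GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(samplers[1], GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glSamplerParameteri(samplers[1], GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glSamplerParameteri(samplers[1], GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Same texture on both units; the bound sampler wins over the texture's state.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, myTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, myTexture);

glBindSampler(0, samplers[0]);
glBindSampler(1, samplers[1]);
```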




#5203391 OpenGL samplers, textures and texture units (design question)

Posted by Matias Goldberg on 10 January 2015 - 09:58 PM

Ohhh... you're in a world of pain.

Mostly because very advanced developers and engineers can't agree on whether the separation (DX11) or the merge (GL) is the better approach. Arguments about being faster/more efficient, more hardware friendly, clearer, and easier to use have been made for... both. Sometimes even the same reasoning has been used for both!
The only explanation is that some people simply prefer one method and others prefer the other.

Sampler objects are a nice way to deal with the issue, and they were made specifically with DX11 porting in mind. The extension is widely supported, so it's pretty convenient.
While sampler objects are bound to a texture unit (and hence, from the GLSL perspective, merged), from a C++ perspective you can treat textures and samplers as separate objects until it's time to assign them to a texture unit (i.e. merge them).

Edit: Personally, I think you're overthinking it, because it is very rare to see the same texture sampled with different filtering parameters / mip settings within the same shader in the same pass. Just bind the same texture to two texture units and use two GL sampler objects, one for each texture unit.


#5203390 About computer instruction in relation to RAM consumption

Posted by Matias Goldberg on 10 January 2015 - 09:42 PM

1) Why does a program (i.e. an Internet browser) get loaded into RAM (I think it is called the L1 cache of the CPU, correct me if I am wrong) from the hard disk? Why not just access it from the hard disk, since that is where the program originated after installation?
(...)
Edit: I hate anonymous down votes...I just want to learn on a deep level

I'm sorry, but that question is so basic that I can hardly believe you want to understand it "deeply". It shows you didn't even try to do some basic research on your own. LEARN HOW TO ASK QUESTIONS.
A quick Google shows an explanation for dummies.
With another quick Google search based on the information gathered in that link, you can reach a more profound explanation. There's also this resource.
 
Your question is akin to asking why a person would prefer to travel by plane instead of by car when he already owns a car.
 

I thought a nanosecond was a much longer time than a millisecond

If you're confused, maybe doing some research would help?

Sorry for being so rude; I hope you learn from your mistakes. Nobody likes someone who has no interest in making some effort of their own.


#5201568 Should modern games work within a fullscreen window?

Posted by Matias Goldberg on 03 January 2015 - 11:38 AM

However the implementation of UI overlays like in Steam (if Steam is used) and other useful tools makes windowed fullscreen support a waste of dev time for indie/hobby devs.

No.
You just said a couple of lines before that a borderless window is useful for debugging, which contradicts your "windowed fullscreen support is a waste of dev time for indie/hobby devs" statement. It is not a waste of time. It's very useful.

Furthermore, I know a couple of gamer fellows who will be very angry if you don't include borderless window support. It gives the fullscreen experience (covering the whole monitor) without the alt-tabbing issues, and it allows integration with other external programs (i.e. multiplayer VoIP like Ventrilo or Mumble).

A fullscreen app in a multi-monitor setup will take over one monitor. Trying to reach the windows on the other monitors will cause the fullscreen app to go crazy, which doesn't happen with a borderless setup.

Long story short, support all 3 options. A borderless window is just a small subset of windowed mode. It's very easy to support and your users will appreciate it.
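For example, with SDL2 all three modes boil down to a window flag (a sketch; the names are mine):

```cpp
#include <SDL.h>

enum class DisplayMode { Windowed, Borderless, Fullscreen };

SDL_Window *createGameWindow(DisplayMode mode, int w, int h)
{
    Uint32 flags = SDL_WINDOW_OPENGL;
    if (mode == DisplayMode::Borderless)
        flags |= SDL_WINDOW_FULLSCREEN_DESKTOP; // covers the monitor, no mode switch
    else if (mode == DisplayMode::Fullscreen)
        flags |= SDL_WINDOW_FULLSCREEN;         // exclusive fullscreen

    return SDL_CreateWindow("Game", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                            w, h, flags);
}
```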



