
Hodgman

Member Since 14 Feb 2007

#5174620 Checkstyle

Posted by Hodgman on 18 August 2014 - 10:17 PM

"The best way to get a bad law repealed is to enforce it strictly"
Writing unreadable code that nonetheless passes the readability guidelines seems a fair way to demonstrate that the rules are broken!


As for line lengths, while you can rotate monitors by 90° without too much effort these days, most guys I know work on an unrotated 16:10 or 16:9 monitor, so long lines are actually not that bad.

90° monitors / 9:16 aspect FTW!! I use a 1920*1080 and a 1440*2560.
Also, when using a 16:9 aspect, I always split the IDE down the middle so I can see two files at the same time.

No need to strictly stick to the traditional 80 characters (a limit inherited from coding on terminals), but it's still a good idea to break lines after some point.




#5174423 Why the texture blurred ? I need it keeps the same resolution of the source i...

Posted by Hodgman on 18 August 2014 - 05:57 AM

Are you trying to display it at actual size (one texture-pixel = one screen-pixel), or stretch it over the whole window?

If the first, you need to very carefully place your vertices at the corners of the screen pixels, which means shifting the vertices up+left by half a screen pixel.

If the second, you have to blur it, because you're resizing...
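For the first case, the half-pixel shift can be sketched like this (a minimal sketch assuming a D3D9-style rasterizer where pixel centres sit at half-integer coordinates; the struct and helper names are made up for illustration):

```cpp
#include <cassert>
#include <cmath>

struct Ndc { float x, y; };

// Map an integer pixel coordinate to normalized device coordinates,
// then apply the half-pixel shift (up+left) so texel centres line up
// with pixel centres when drawing a texture at actual size.
Ndc pixelToNdc(int px, int py, int width, int height)
{
    // Plain mapping: pixel (0,0) is the top-left of the screen.
    float x = (2.0f * px) / width - 1.0f;
    float y = 1.0f - (2.0f * py) / height;
    // Half a pixel is 1/width in NDC x and 1/height in NDC y.
    x -= 1.0f / width;   // shift left by half a screen pixel
    y += 1.0f / height;  // shift up by half a screen pixel
    return { x, y };
}
```

The two constants come straight from the NDC range being 2 units wide: half a pixel is (0.5 px) * (2 / width) = 1 / width.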


#5174369 Overhead with gl* Calls

Posted by Hodgman on 17 August 2014 - 11:37 PM


That being said, any thread making a gl* call will be halted while the CPU queries the GPU, waiting for a response. Is this correct so far?
No. Most gl calls will just do CPU work in a driver and not communicate with the GPU at all.

The GPU usually lags behind the CPU by about a whole frame (or more), so GPU->CPU data readback is terrible for performance (can instantly halve your framerate). glGet* functions are the scary ones that can cause this kind of thing.

 

Most gl functions are just setting a small amount of data inside the driver, doing some error checking on the arguments, and setting a dirty flag.

The glDraw* functions then check all of the dirty flags, and generate any required actual native GPU commands (bind this texture, bind this shader, draw these triangles...), telling the GPU how to draw things. This is why draw-calls are expensive on the CPU-side; the driver has to do a lot of work inside the glDraw* functions to figure out what commands need to be written into the command buffer.

These commands aren't sent to the GPU synchronously -- instead they're written into a "command buffer". The GPU asynchronously reads commands from this buffer and executes them, but like I said above, the GPU will usually have about a whole frame's worth of commands buffered up at once, so there's a big delay between the CPU writing commands and the GPU executing them.
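The state-caching/dirty-flag pattern described above can be sketched as a toy model in C++ (not real driver code; all names are invented for illustration):

```cpp
#include <cassert>
#include <string>
#include <vector>

// State-setting calls are cheap: they stash a value and set a dirty flag.
// "draw" is the expensive call: it resolves all dirty state into native
// commands, written into a buffer a GPU would consume later, asynchronously.
struct FakeDriver {
    int texture = 0, shader = 0;
    bool textureDirty = false, shaderDirty = false;
    std::vector<std::string> commandBuffer; // consumed "later" by the GPU

    void bindTexture(int t) { texture = t; textureDirty = true; } // cheap
    void bindShader(int s)  { shader = s;  shaderDirty = true; }  // cheap

    void draw(int triCount) // expensive: checks every dirty flag
    {
        if (textureDirty) { commandBuffer.push_back("SET_TEXTURE " + std::to_string(texture)); textureDirty = false; }
        if (shaderDirty)  { commandBuffer.push_back("SET_SHADER "  + std::to_string(shader));  shaderDirty = false; }
        commandBuffer.push_back("DRAW " + std::to_string(triCount));
    }
};
```

Note that binding the same texture twice generates no commands at all -- only the state that actually changed since the last draw gets written out.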




#5174366 How to draw multiple objects with Direct3D?

Posted by Hodgman on 17 August 2014 - 11:21 PM


On MSDN, I read that a GPU could only hold 16 vertex buffers... [within each draw call].
You can make use of 16 buffers simultaneously -- as in, when drawing one object, you can read data from up to 16 buffers.

The next object can use the same buffer(s), or a completely different set of 0-16 buffers.




#5174348 Thesis idea: offline collision detection

Posted by Hodgman on 17 August 2014 - 08:22 PM

A very clever compression mechanism would be required.... This is basically just a big table/dictionary, with a key representing the relative orientation/position of two objects (which you could probably squeeze down to ~100 bits), and then a boolean response: touching / not touching.

If the key were 72 bits, that's 2^72 (roughly 4.7×10^21) one-bit values....

However, all the values are either 0 or 1, so if you can find and exploit some kind of pattern that lets you organize the keys such that the values become sorted, it would compress incredibly well.

 

Maybe instead of storing boolean results, you could store intersection distances (a positive value means the objects are that far from touching; a negative value means they're penetrating by that distance). This would be a six-dimensional distance field. The field itself would be extremely smooth in most places = low rate of change = extremely low-frequency detail, which also means it will compress very well. It doesn't have to be lossless compression either -- lossy compression of distance fields is common in computer graphics, and is often good enough. You could throw away the majority of the data, keeping only a small fraction of points that define the field, and interpolate the rest from those sparse points.
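A toy 1-D version of that idea -- keep one sample every few units and lerp the rest (the field and stride here are arbitrary, purely for illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Signed distance from x to the interval [10, 20]:
// positive outside, negative inside ("penetrating").
float distanceField(float x)
{
    if (x < 10.0f) return 10.0f - x;
    if (x > 20.0f) return x - 20.0f;
    return -std::fmin(x - 10.0f, 20.0f - x);
}

// Lossy compression: keep only one sample every 'stride' units over [0, 30].
std::vector<float> sparseSamples(float stride)
{
    std::vector<float> s;
    for (float x = 0.0f; x <= 30.0f; x += stride)
        s.push_back(distanceField(x));
    return s;
}

// Reconstruct an arbitrary point from the sparse table by lerping.
float reconstruct(const std::vector<float>& s, float stride, float x)
{
    float t = x / stride;
    int i = (int)t;
    float f = t - i;
    return s[i] * (1.0f - f) + s[i + 1] * f;
}
```

Because the field varies slowly, the interpolated value stays close to the true distance even though most of the data has been thrown away -- the same intuition carries over to the six-dimensional case.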




#5174046 Loading Uncompressed Textures

Posted by Hodgman on 16 August 2014 - 12:01 AM

^As well as that, the old-school solution was to throw out the Z channel of your normal maps (because you can reconstruct it from the other two), and then put the X/Y channels into the Green and Alpha channels of a DXT5 (with nothing in Red/Blue).
This gives better quality than naively using RGB, because DXT5 compresses RGB and Alpha data separately, so by using G and A, you make sure there's no cross-talk during compression. The alternative to achieve the same memory savings is to halve the resolution in both width and height (1/4 the pixels).
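The Z reconstruction mentioned above is just Pythagoras on a unit-length normal (a minimal sketch; the clamp guards against compression noise pushing the value slightly negative):

```cpp
#include <cassert>
#include <cmath>

// For a unit-length tangent-space normal, x*x + y*y + z*z == 1 and
// Z points away from the surface (z >= 0), so:
//   z = sqrt(1 - x*x - y*y)
float reconstructNormalZ(float x, float y)
{
    float zz = 1.0f - x * x - y * y;
    return std::sqrt(zz > 0.0f ? zz : 0.0f); // clamp: lossy compression can make zz < 0
}
```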

If you're targeting hardware without much memory, then compression or low resolution is a requirement. We used config files in our build system to swap between the two to compare quality, and also we let the artists specify a choice per asset if required.


#5173867 Game Development or Design?

Posted by Hodgman on 15 August 2014 - 07:08 AM

You can't really just start being a *computer* game designer, because you'll need someone to actually develop your designs.
The only starting point there is to be a "self-implementing designer", by learning basic programming, a visual-scripting system, or some easy-to-use modding system.

Alternatively you can get a pen, some cardboard and scissors and start out designing board/card/tabletop game mechanics :)


#5173809 dx12 - dynamically indexable resources -what exactly it means?

Posted by Hodgman on 15 August 2014 - 01:17 AM

Yes, but at the same time, draw-calls will no longer be a source of excessive CPU-side overhead, as in current APIs (D3D9 < GL < D3D11 < D3D12/Mantle).

You can do that on some GL drivers at the moment by using MultiDrawIndirect and bindless resources.

 

Modern hardware doesn't actually have a fixed-function Input Assembler any more -- instead the logic for reading vertex data from buffers is implemented inside the vertex shader code (the driver prepends this code onto your shaders)... So in theory you could even still have stuff in different buffers if you wanted to, and then implement the logic to pick the appropriate buffer in the VS.
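In plain C++ standing in for shader code, that "vertex pulling" idea looks roughly like this (an illustrative sketch only; a real shader would load from bindless buffers or SSBOs selected by a per-draw value):

```cpp
#include <cassert>
#include <vector>

struct Vertex { float x, y, z; };

// The "vertex shader" indexes into one of several buffers itself,
// choosing the buffer from a per-draw value, rather than relying on a
// fixed-function input assembler to feed it attributes.
Vertex fetchVertex(const std::vector<std::vector<Vertex>>& buffers,
                   int bufferIndex, int vertexId)
{
    // In a real shader this would be a load from a bindless buffer/SSBO
    // selected by bufferIndex; here it's just an array lookup.
    return buffers[bufferIndex][vertexId];
}
```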




#5173782 What is expected straight out of college?

Posted by Hodgman on 14 August 2014 - 09:44 PM

You'll likely get thrown straight into a team, but with the understanding that you're a junior. You might get about a week or so to read documentation, etc, or you might be given programming tasks on the first day!

When hiring a junior developer, it's expected that one of your senior developers will be reduced to 50% efficiency for quite some time, because they'll be spending half their time answering questions and helping that new junior get accustomed to the role. You'll probably be assigned a senior 'buddy', or at least sat next to someone knowledgeable who can help you out. No one expects that an entry-level developer will be of any real use at first - everyone expects that you'll take a bit of time to get into the swing of things and become productive.




#5173767 Why does Deferred Lighting work with MSAA?

Posted by Hodgman on 14 August 2014 - 08:29 PM

You can fix most of the MSAA artifacts (to the point where you can pretty much say that it 'supports MSAA') by using more than one lighting sample and a depth-sensitive filter during pass #3.

 

Normally in pass #3, you sample the lighting buffer once and use that data to calculate the final shaded colour.

Instead, sample the lighting buffer and the depth buffer multiple times -- say the pixel you would normally sample, plus the pixels to the left/right/top/bottom of it, for a total of 5 samples. For each sample, compare the geometry's current depth to the sampled depth value -- if the difference is beyond some threshold, then mark this lighting sample as being 'rejected'.

If the traditional (center) sample is not rejected, then use it. Otherwise, average together all of the non-rejected samples to get an approximately correct lighting value.

If all of the samples are rejected, then you're in trouble, and are forced to return an incorrect result -- maybe the sample with the lowest depth difference, or just the average of all samples. This case should only occur when rendering extremely thin objects.
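The filter described above can be sketched like this (a hypothetical function, not from any real engine; index 0 is the centre sample, 1-4 are the left/right/top/bottom neighbours):

```cpp
#include <cassert>
#include <cmath>

// Depth-sensitive filter for pass #3: prefer the centre lighting sample;
// if its depth is too far from the geometry's depth, average the
// non-rejected neighbours; if everything is rejected, fall back to the
// plain average (a knowingly incorrect result).
float filterLighting(const float light[5], const float depth[5],
                     float geomDepth, float threshold)
{
    if (std::fabs(depth[0] - geomDepth) <= threshold)
        return light[0]; // centre sample is valid -- use it directly

    float sum = 0.0f;
    int count = 0;
    for (int i = 1; i < 5; ++i) {
        if (std::fabs(depth[i] - geomDepth) <= threshold) {
            sum += light[i]; // survivor: depth matches this surface
            ++count;
        }
    }
    if (count > 0)
        return sum / count; // average of the non-rejected samples

    // All samples rejected (e.g. extremely thin object): forced to
    // return something incorrect -- here, the average of everything.
    for (int i = 0; i < 5; ++i) sum += light[i];
    return sum / 5.0f;
}
```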




#5173510 Loading Uncompressed Textures

Posted by Hodgman on 14 August 2014 - 12:16 AM

Have the artists commit the textures in a lossless compressed form (TIF, PNG, ...) and optionally annotate them so you can differentiate between diffuse maps, normal maps, etc. Then write some automatic content-processing tool that compresses the textures for the individual platforms. Whenever you change your mind about if and how the textures should be compressed for a specific platform, or whenever you add a new platform, all you need to do is adapt the content-processing tool and rerun it overnight.

^This.
The data should be 'compiled' just like the code is, via an automated build system. I personally use RGB PNGs for almost every texture (with each logical bit of data - diffuse/alpha/roughness/etc - in its own file, instead of packed together), which are then automatically compiled into DXT/etc. If an artist saves out a new PNG, the build system on their PC immediately detects the change, converts it to DDS in the background, and if the game is running, sends a packet to the game telling it to reload the file.
The game always works with optimized formats, and the connection in the middle is mostly invisible automation that the artists need not worry about.

 

If there's special conversion requirements, then these can be expressed in the build rules -- e.g. textures named "*_raw.*" are always uncompressed, or textures named "*_nrm.*" use a special normal-map compression scheme...
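Such build rules might be sketched as a simple filename-pattern dispatch (the function and scheme names here are made up for illustration):

```cpp
#include <cassert>
#include <string>

// Pick a texture compression scheme from a filename convention,
// mirroring build rules like "*_raw.*" -> uncompressed and
// "*_nrm.*" -> normal-map compression.
std::string chooseCompression(const std::string& name)
{
    auto contains = [&](const char* s) {
        return name.find(s) != std::string::npos;
    };
    if (contains("_raw.")) return "uncompressed";
    if (contains("_nrm.")) return "normal-map"; // e.g. X/Y in DXT5 G/A
    return "dxt";                               // default rule
}
```

In a real build system these rules would live in data (config files or plugins) rather than code, so adding a new pattern doesn't require touching the engine.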

This lets you keep the game/engine extremely lightweight, instead of having tonnes of extra loading/processing baggage in the code-base... and instead you've got a plugin-based build system where you can keep adding new formats as required.

This is extremely important when you want to support a new platform -- e.g. you suddenly require big-endian files as well as little endian ones, or suddenly require PVRT as well as DXT format textures!
It's also extremely important as your requirements change. In my engine, if the graphics programmer suddenly decides that they want to move the translucency data out of the 'diffuse' texture's Alpha channel, and instead move it into the Red channel of an auxiliary texture (maybe along with roughness and specular-mask in Green and Blue), then they just have to edit a small configuration file, which results in the textures being automatically rebuilt.
If your artists are exporting DXT files by hand, then they'll have to re-export every texture, manually rearranging the channels as they go. Same for model formats, etc...
You never want to hear the line "We could do that, but it would require a re-export..."!

 

If you're going by strict asset budgets, the build system can spit out the reports that you need, or even automatically refuse to create files that exceed the budget.




#5173476 Revision control for games

Posted by Hodgman on 13 August 2014 - 08:02 PM

I'm trying to figure out a solution for my company to use, regarding revision control. There doesn't seem to be a good solution out of the box for me...
 
My requirements seem to be--
Artists / content: Want to use Dropbox. Don't care about commits, updates and merges. Just want to save files in a folder. No interruptions to workflow: check-ins and check-outs must be asynchronous, never leaving them stuck waiting for a check-in to complete before they can continue working. Also need to allow partial check-outs.
Programmers: Want to use Git. Nuff said.
Contractors / outsourcing: Need to be able to work off-site. Require ability to create regional mirrors.
IT: simple to host, configurable network ports, etc...
Management: need branching and tagging.
Continuous integration: No one is allowed to check-in/commit changes to the master/trunk except the automated integration PC. Every check-in is instead queued (in git, this would mean pushing to a temporary remote branch). An integration PC pops items from the queue, builds and tests them, and then pushes them through to the main branch (or not, if they fail). This ensures that the main branch is always stable and you never have to deal with broken builds, memory leaks, etc...
 
As far as I know, there's nothing that just does all this out of the box. I'm happy to build the CI system, as they usually require a lot of customization anyway, but I was hoping there would be an off-the-shelf version control system...
 
The first simplification is to accept that maybe code and content will exist in two different repositories. Once I accept that, then I can say, ok programmers get to use Git, and artists can use something else.
 
Dropbox: Yes for simplicity and asynchronicity, but no for branching/tagging/queued-CI (it's also scary that we don't have an on-site copy of every past version of every file).
Subversion: Queued-CI is an issue because it requires lightweight branching. Users will work in their own branch (I'll call it "workspace") and periodically merge from trunk->workspace. This is too complex for non-technical staff... unless I make a new client for them to use that hides this operation!
Subversion with a new/custom dropbox style assistant: Almost feasible -- the merge process requires network access (the required data can't be pre-downloaded prior to a merge), so it's an extreme interruption. If I could pre-download all the data required for the merge asynchronously, and then perform the merge very quickly, then it would be feasible. Also, check-ins cannot be done asynchronously, and regional mirroring only accelerates read access (write access is not sped up at all).
Perforce: Feasible, but is extremely complex when compared to dropbox! Also, not capable of an asynchronous workflow like dropbox -- artists have their day interrupted by waiting for network operations to complete... Regional proxying also has some issues.
Git: Not feasible for art files, as every user must store the full history (which may be terabytes).
Extended git: git-fat / git-media / git-annex / etc -- feasible, except that it's too complex for non-technical staff.
Extended git with a new/custom dropbox style assistant: Feasible!

 

But at this point, if I'm setting up git to not actually manage the files, but instead just palm them off to an rsync-based replication system, and have to make a custom dropbox-esque client to hide git's complexity... Maybe I should ditch the git part and just keep the extensions/custom parts?

I'm seriously considering making a new, very simple, lightweight version-control system, based around the concepts in git-fat, using an rsync server and a dropbox-esque custom client!

 

I don't really have a question here... just thinking out loud about building this new system...

What version control system do you use for your game projects?

Does anyone one else have to manage many gigabytes of game assets?

Does anyone else use a CI server?




#5173229 Unbinding resources?

Posted by Hodgman on 12 August 2014 - 06:28 PM

On some older APIs, it might not always be clear to the API that your "left-over" input-assembler bindings aren't actually used, so the IA ends up reading from your old buffers, even though the VS doesn't need that data, wasting a few cycles per vertex.
On any API with an IA config object (InputLayout/etc), this is no longer a concern.

While we're on this topic - does anyone do manual hazard tracking?
That is, when you have a texture bound, but then also bind it as an RT, the API will automatically unbind your texture to avoid a hazard (and log an error/warning). Does anyone track this themselves to avoid such logging?
AFAIK, on the 'next gen'/'low level' APIs, we will have to do this work ourselves or risk undefined behavior...


#5173225 What's the industry like?

Posted by Hodgman on 12 August 2014 - 06:11 PM

I have an ex-architect friend who's recycled their 3DS Max skills into becoming a game artist.
They're making an independent game, and doing contract work for about 3+ other local indie studios. So that kind of transition is definitely possible :D

As for full time work, it all depends on the company.
I've worked one place with an atmosphere of stress, tension, and hopelessness, where 10 hours of 'voluntary overtime' was expected, hours were mercilessly tracked with an in/out logging system, and pay-cheques were often one to two months late...
But I've also worked at another with the atmosphere of a giant trusting family, where there was never overtime, pay was always on time, HR was helpful, management were on our level, and hours were self-enforced ('if you have to pick up your kids at 3, that's fine, we trust you'll get your work done').
On that note, employers: simply trusting your staff and treating them as adults/friends does amazing things for morale/productivity!

Unfortunately, if you don't have enough confidence in your experience/talent for the role, or if you're the sole bread-winner for your family, then it's pot-luck as to which kind of company you'll end up in.
I would recommend interviewing the interviewers though - make sure to remember to ask them about frequency of overtime, culture/morale, etc... And once in a role, don't be afraid of standing up for your rights. A lot of toxic workplaces come about because peer-pressure makes injustice seem normal, but it can actually really brighten the whole team when a fresh face simply says "no, that's a bad idea, I'm going home now".


#5172460 Creating resources in D3D11 with alignment constraints

Posted by Hodgman on 09 August 2014 - 08:25 AM

FWIW, I would expect current drivers to allocate all textures using at least 256-byte alignment.





