

Ravyne

Member Since 26 Feb 2007

#5189569 General Programmer Salary

Posted by Ravyne on 27 October 2014 - 07:56 PM

The biggest problem with these numbers and that survey is that the numbers don't really tell you much, and they also tend to skew statistically high, I'd say, since I believe the survey is self-reported if it's the Gamasutra survey. It's also hard to say much because you don't know whether the respondents were specialty programmers (graphics or what-have-you), which commands a marked salary increase over the humble "gameplay programmer" most places. The industry also tends to be a place where there's a significant difference in salary between someone who's shipped at least one commercial game and someone who hasn't -- probably because of the relatively high wash-out rate among first-time game devs, versus the much smaller set of people who have shipped a title and come back for more.

 

I would say, at the very low end, a fresh college graduate at a smaller studio might make a salary as low as around 50k -- a number of years ago, not long after I was out of school, I was made an offer similar to that. People who have been around through multiple titles and have a specialty, or are very strong generalists, can top 100k relatively easily; 120k or more is less common, but not unheard of.

 

But in general, for the skills and hours demanded, the games industry almost always pays less than other places you could be employed as a programmer.




#5189479 Standard Project Structure Layout

Posted by Ravyne on 27 October 2014 - 01:58 PM

First, you can create file-system folders that mirror the project folders ("filters" in VS parlance) so that you don't have name collisions -- not that you should have any anyhow, but it helps with general organization. When you add a new file through the VS interface, just be sure you're creating it in the place where you want it to live on disk, and use the context menu of the filter you want it to appear under (right-click, Add > New Item / Add > Existing Item).

 

Other than that, it sounds like you're over-complicating things and experiencing paralysis by analysis. If this is your first real C++ undertaking, my recommendation is to not try to build some abstract engine. Yes, keep engine code separate from game code as best you can, but your first project, first engine, first anything in C++ is not likely to be something you'll do well enough that you won't want to do it over next time. Make your game, make a little-e engine to support it and only it -- don't try to make a Big-E Engine from the start.

 

 

A typical structure I have for a game project is three folders -- one for game assets, one for game code, and one for engine code. In a simple project, these are usually just folders/filters under a single VS project. In something a little more complex, these are each projects under a VS solution, with dependencies set accordingly to facilitate the correct build order. Game assets include graphics, sounds, music, maps, configuration data, and possibly scripts (I put them here; some people might consider them part of the game code), and I make a folder for each type of asset. Game code includes all code that implements game-specific features or logic (possibly with scripts); if you put scripts here, I'd make them a folder from the start, but for other code files I don't separate them out into folders by feature area until it seems necessary -- often, on moderate-sized projects, it never becomes necessary. Engine code is basically the same, but you generally have a better idea of what broad feature areas exist and can make some broad organizational bins early -- Graphics, Sound, Input, Platform, Math. A sketch of that layout follows.
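For concreteness, that layout might look something like this (the names are illustrative, not a prescription):

    MyGame/
        assets/
            graphics/   sounds/   music/   maps/   config/   scripts/
        game/       <- game-specific code (and scripts, if you count them as code)
        engine/
            graphics/   sound/   input/   platform/   math/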

 

In general, you should prefer to avoid deeply-nested folders as an organizational technique. Extra layers of folders can make things more complicated than they need to be if you ever have to write custom build rules. Try to stay relatively flat until you're convinced the extra layers give you value. You don't have to have the right, final structure from the start; moving things around is just another kind of refactoring.




#5188594 what's the principle behind the shader debugger?

Posted by Ravyne on 22 October 2014 - 02:48 PM


I think that if I created a virtual machine which supports asm shader code, maybe I can replay the pixel shader process

 

Yes, that's the gist of it, but it's a whole project in its own right. You might be able to piggy-back on an existing software renderer like Mesa, Direct3D WARP, or the DirectX reference device. There are limitations, though. For one, your debugging is only as accurate as your VM's adherence to the spec -- this can get very, very detailed, down to the level of exactly where a texel is sampled during boundary conditions, how floating-point values behave, and implementing 16-bit half-floats in software along with their conversions to and from more common types. Secondly, it's debugging with respect to a reference, so it can help you make sure that *you* are doing the right thing, but if the driver or device does the wrong thing it won't help you (it's still a great benefit to know the problem belongs to someone else).




#5188519 What is bad about a cache miss since a initial cache miss is inevitable

Posted by Ravyne on 22 October 2014 - 08:21 AM

Woe is he who uses virtual dispatch on his hot loops. -- ancient Chinese proverb

Yes, certainly the cult of OOP papers over its ongoing transgressions, and its leadership still encourages blind adherence to doctrine that can be harmful. But as Frob pointed out so well, at least you're getting what you're paying for. A wiser person would simply avoid virtual dispatch where it isn't necessary.

Data-Oriented Design techniques, it's worth noting, ought to positively impact cache efficacy for both data (increasing spatial/temporal locality, splitting data structures along usage lines) and code (sorting polymorphic collections such that all foo subclasses are processed at once, then bar, and so on).

There's performance to be gained, for sure, but you ought to be fairly well off if you haven't done something painfully naive. I'd pretty much exhaust optimizing D-cache behavior before examining I-cache behavior (though I would design from the start with both in mind).


#5188449 What is bad about a cache miss since a initial cache miss is inevitable

Posted by Ravyne on 21 October 2014 - 09:44 PM

Cache misses are inevitable -- you're speeding up memory accesses by mapping several gigabytes of memory space into just a few megabytes. In other words, cache lines are very limited resources, and every miss that causes a cache line to fill also means evicting whatever data was there before.

 

Instruction-cache misses are less of a big deal, since the structure of code lends itself pretty naturally to being cached. For the most part, all you have to worry about is that your hot loops, and any code they call out to, ideally fit inside the L1 instruction cache.

 

Data-cache misses are a much bigger deal, since many data structures are non-linear or are accessed randomly (e.g. indexing all over an array, or iterating a linked list). When you jump all over memory looking at random bytes, you evict what might otherwise be useful data that was already in the cache.

 

Think of it like this -- all your gigabytes of main memory are divided up into little chunks of 64 bytes, the same size as a cache line. Whenever you read a single byte from any chunk, the entire chunk is loaded into a CPU cache line. Now, most CPU caches are 8- or 16-way set associative, meaning there's a group of 8 or 16 cache lines (a 'set') that can hold any particular chunk, and every chunk spaced (cache size / associativity) apart in memory shares the same set. This might sound pretty good -- surely you won't be reading from 16 different, conflicting chunks at the same time, right? Well, guess what: with 8GB of RAM and, let's say, 4MB of 16-way set-associative cache, the cache has 4096 sets, and 32768 chunks of main memory compete for each set's 16 cache lines. Every 17th conflicting chunk you access, and every new one thereafter, necessarily boots some other chunk out of the cache. In the worst case it boots something you'll need again soon, and your access pattern continues in a way that causes it to be evicted again, repeatedly, so data ping-pongs back and forth between cache and main memory. You only get the benefit of the cache if you continue to access more bytes from a cache line while it's resident.
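To make those numbers concrete, here's the arithmetic as a quick sketch (the cache geometry is assumed, not queried from the hardware):

#include <cstdint>

// Assumed geometry: 64-byte lines, a 4MB 16-way set-associative cache, 8GB of RAM.
constexpr std::uint64_t kLineSize  = 64;
constexpr std::uint64_t kCacheSize = 4ull * 1024 * 1024;
constexpr std::uint64_t kWays      = 16;
constexpr std::uint64_t kRamSize   = 8ull * 1024 * 1024 * 1024;

constexpr std::uint64_t kSets         = kCacheSize / (kLineSize * kWays); // 4096 sets
constexpr std::uint64_t kSetStride    = kSets * kLineSize;                // chunks 256KB apart share a set
constexpr std::uint64_t kChunksPerSet = kRamSize / kSetStride;            // chunks competing per set

static_assert(kChunksPerSet == 32768, "32768 chunks compete for each set's 16 lines");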

 

And not only does it take a long time to load something from memory, but main-memory *bandwidth* is also a limited resource. I did some back-of-the-envelope calculations using DDR3-1866 on a modern x64 platform as an example. If your game runs at 60 frames per second and has absolutely pathological access patterns (that is, one byte is read from a cache line and that line is never touched again), your memory bandwidth and cache lines conspire to limit you to touching less than 8 megabytes of code and data per frame. 8 megabytes is not a lot! With fully-optimal access patterns you can touch about 500MB of code and data per frame (and you can read/write it multiple times essentially for free, as long as it stays in the cache), which is a world of difference. Even still, this is not a lot of data, and it's a big part of the reason why some games run different parts of their simulation at different rates (for example, you might run simulation logic at 30fps, or only update 1/4th of your AI entities each frame).
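For the curious, the arithmetic behind those figures (assuming two channels of DDR3-1866 at roughly 15 GB/s each): ~30 GB/s divided by 60 frames is about 500MB of memory traffic per frame. In the pathological case only 1 of every 64 bytes moved is useful, so 500MB / 64 is roughly 8MB of useful data per frame; in the optimal case all 64 bytes of every line get used, and you get the full ~500MB.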




#5187551 DLL-Based Plugins

Posted by Ravyne on 16 October 2014 - 07:56 PM


I put the try-catch block there in case there is an exception because I don't want the Editor to crash if the Game DLL causes an exception. Would this be considered as exception handling across binaries?

 

It would. If you were to stick with native plugins and a DLL architecture, one approach is to marshal exceptions across the DLL boundary. A simplistic way of doing this is to surround your plugin's entry points in a catch(...) so that no exception leaks from the DLL (you can also catch and handle specific exceptions in the same place) -- then just have your DLL function log the exception data as a dump, and pass back an integer value that indicates the type of the exception. Your client can look up the error code and present the user with a basic description of the exception, and point them to the log for more information. If necessary, certain error codes might tell your client to throw the indicated exception itself, as a means of propagating the error to where it can be handled correctly.
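A minimal sketch of what that boundary might look like -- the error codes, logging, and work function here are illustrative, not a fixed API:

#include <cstdio>
#include <exception>
#include <new>
#include <stdexcept>

enum PluginError { ERR_OK = 0, ERR_OUT_OF_MEMORY, ERR_STD_EXCEPTION, ERR_UNKNOWN };

static void LogException(const char* what)   // stand-in for writing a real dump/log entry
{
    std::fprintf(stderr, "plugin exception: %s\n", what);
}

static void DoWorkImpl()                     // the plugin's real work; may throw
{
    throw std::runtime_error("not implemented");
}

extern "C" int PluginDoWork()                // exported entry point -- no exception escapes it
{
    try {
        DoWorkImpl();
        return ERR_OK;
    }
    catch (const std::bad_alloc&)   { LogException("out of memory"); return ERR_OUT_OF_MEMORY; }
    catch (const std::exception& e) { LogException(e.what());        return ERR_STD_EXCEPTION; }
    catch (...)                     { LogException("unknown");       return ERR_UNKNOWN; }
}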




#5187546 DLL-Based Plugins

Posted by Ravyne on 16 October 2014 - 07:45 PM

I think, though, what everyone is trying to warn you off of is that native DLLs aren't straightforward. It's not easy like implementing an interface -- it's like that, except that the interface *also* consists of a dozen or three sort-of-hidden settings and dials that all have to be exactly in sync: the right calling conventions, the same ABI, (potentially) the same behavior settings for floating-point code, the same data packing on shared structures (or a mutually-agreed-upon serialization). A dozen compiler settings you probably never think to touch suddenly become critically important. In general, I'd stick my neck out as far as saying that native-code DLL plugins are largely out of favor -- people will put up with them where performance is a necessity, but they seem like too much trouble when it's not.

 

[EDIT] I realize now that the post above was probably in response to nypyren, and that you weren't justifying sticking to native DLLs in the face of my previous post. I'll leave this here for posterity, but apologize that the tone might be a bit harsh in retrospect.




#5187540 What a web I Weave !

Posted by Ravyne on 16 October 2014 - 07:22 PM


Tabs = bad. There is no justification for them (IMMHO).

 

Tabs to indent, spaces to align. Tabstops of 8 are hideous, though; 4 is plenty in curly-brace languages, and you can get away with just 2 if you're not a braces-on-their-own-line programmer.

 

3 is heresy, of course, because all programmers know that powers-of-two are faster :P




#5187539 DLL-Based Plugins

Posted by Ravyne on 16 October 2014 - 07:17 PM

As others are pointing out these difficulties, it might well be the better long-term solution to have such 'plugins' in the form of scripts. You could package each plugin -- resources, code, configuration, manifest -- inside a .zip file to keep it all together. That's basically what Visual Studio's .vsix extensions are, except they use .NET DLLs for code.

 

Scripting should be fine for UI extensions performance-wise, but it does introduce a dependency that you wouldn't have otherwise. Lua would probably make a good choice.
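For what it's worth, the hosting side of a Lua-based plugin is only a few calls. A minimal sketch using Lua's C API (the script path and hook name are made up for illustration):

#include <lua.hpp>
#include <cstdio>

void LoadUiPlugin()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);                                    // open the standard Lua libraries
    if (luaL_dofile(L, "plugins/my_plugin.lua") != 0) {  // load and run the plugin script
        std::printf("plugin error: %s\n", lua_tostring(L, -1));
        lua_close(L);
        return;
    }
    lua_getglobal(L, "on_editor_loaded");                // call a hook the plugin may define
    if (lua_isfunction(L, -1))
        lua_pcall(L, 0, 0, 0);
    lua_close(L);
}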




#5187516 How do you ace the interview?

Posted by Ravyne on 16 October 2014 - 05:12 PM

I don't know if you can expect to find Unity-specific examples of interview questions anywhere. In general, the team will probably put you through a few design or thought exercises, have a technical discussion with you, and probably have you do some whiteboard coding or code critiquing. Advice for all of this is relatively well known: listen, repeat, clarify before attacking a solution, think out loud, and do not hesitate to change directions on your solution if it seems you've gotten stuck. I once failed an interview by looking for a log-n solution that I was sure existed (and it did), but which was more complex than I imagined, and because of that complexity was actually no faster than the obvious linear solution. If you think you need to backtrack, it's actually an excellent time to bounce ideas off your interviewer -- "I went this way because X, but now I see Y. That makes X hard, but maybe Z is a better solution. What do you think?" They're not going to give you the solution, or probably even tell you whether they'd choose X or Z, but they will almost certainly help you puzzle things out some more, and it's not the kind of thing they'll dock you for if you're going about it intelligently. They aren't looking for people who know the solution to every problem cold; they're looking for people who can find solutions to problems they maybe haven't seen before.

 

Practice questions -- even, or perhaps especially, the non-technical ones. The goal isn't to come up with a script, but to ask yourself the questions and write down the answers, simply to bring relevant information from your experiences to the top of your mind. You don't want to wake up the next morning regretting that, under pressure, you failed to recall that really great story you have about solving a similar problem.

 

And as per usual, I'll recommend that you buy and read "Programming Interviews Exposed" -- it's in its second edition last I looked, and it's the best $30 a programmer can spend. It covers everything from resume tips, to typical soft and hard questions, to thought exercises, to how to dress, to how to negotiate a salary. Buy it now.




#5187479 Lack of Production From Team Members

Posted by Ravyne on 16 October 2014 - 02:40 PM


You are the lead coder, but not a leader... Often motivated, skilled people tend to fall into the 'I'm the only one who is skilled enough to save our butt right now'...
This has a very high potential of destroying the team's motivation! ...If you want to motivate your team, you need to learn to hold back, to delegate challenges to other members, to accept that others need to learn by failure, and to support them, because a good team performs much better than a single, skilled individual.

 

QFE; don't discount this advice. In younger days I have been both the perpetrator and the victim of this working style, and neither time was I very satisfied with the result. You cannot expect morale to remain high for the whole team when a minority of the team took ownership of the project early on. It is as much your fault as it is theirs, and as much theirs as yours -- but nevertheless, there is now a situation where only you and the front-end person (people?) are invested in the project, and there seem to be no challenges left in it for the others to find value in. They're probably thinking, "At this point, I can coast through with a C by doing only enough not to get kicked out, or I can work my butt off doing the boring remnants and maybe get a B." I suspect you would not be motivated by that either.

 

Your problem is not productivity, it's morale, and attacking it as a productivity issue will not solve it. Being older and wiser now, I would approach my past situations like this: set aside any feeling of superiority or indignation, acknowledge that there is a morale issue stemming from a lack of ownership by all team members, cede some of your remaining *interesting* feature work to them or find new features for them to own, and split documentation and other boring busywork tasks across the team.

 

Remember that in this setting there is no hope of promotion, which is usually what enables people to endure more menial tasks in their professional career -- here, to keep the team happy and productive, members have to share in both the interesting and menial work. 




#5187001 Golden era of the RPG

Posted by Ravyne on 14 October 2014 - 01:28 PM


What? No. Not at all. What you get is to do what before you just pretended to be doing. That's a pure win for me.
 
Complexity isn't always a good thing. And dice roll mechanics, albeit complex if you want to make them complex, are as shallow as there can be.

 

I'll stick my neck out and disagree -- for me, there's almost nothing more insufferable in modern games than the lock-picking mini-game. I find it tedious and unfulfilling; I'd much rather have some kind of lock-picking stat and just be done with it. And why are we picking a lock that looks like it's barely holding together through some miracle chemistry of rust and cockroach feces, anyhow? In real life you'd be loath to touch the thing for fear of tetanus or worse, and would just smash it apart with the butt of your rifle. But I'm probably biased; honestly, I don't really like CRPGs to begin with -- the massive number of stats, piles of loot, and the like just seem like too much tedium to me. jRPGs are more my speed.




#5186092 anyone have experience of selling Android game?

Posted by Ravyne on 09 October 2014 - 06:45 PM


now that's something interesting :-D
sure to be a great starting method if I have some bucks to spend

 

I wouldn't recommend you follow their lead. They write the expense off as marketing, but the practice isn't endorsed by the marketplaces, can get your app penalized or banned, and, what's more, doesn't guarantee success.

 


I don't think it was easier.
Back then, getting the equipment to develop was much harder. We didn't have the internet to learn how to make games, so we needed books etc.
I recommend listening to some of John Carmack's words on the days of Wolfenstein 3D actually for this. You'll notice that those that ended up developing games back then were taking huge risks (such as a lawsuit that would take over a decade to settle!).
 
It is easier to make games now, which means there are more games, but it doesn't go to say there wasn't any volume of luck back then, or that great titles didn't make it. It was a challenge then, and it is a challenge now!

 

An interesting dynamic is that the barriers to entry -- equipment, know-how, reach -- have been pushed down very far, especially at the point where people begin. All you need is a PC, which you probably already have, a mobile device, which you might already have, and $100 to get your app on the store. Knowledge is freely available if you have the time and ability to grok it. The rest is your own creativity and gumption.

 

The result is that almost no one fails before they hit the market, and combined with the apps gold-rush, one could only expect to see the kind of over-saturation and market dynamics that exist today. There are no natural forces keeping those doomed to fail out at an earlier stage (to be clear, I'm not advocating that the previous plutocracy was "better", but it definitely was different, and those who actually made it to market had a better shot at meaningful success). This leads to rather few people making money in what's become a very large but rather dysfunctional market.




#5185677 If you ever used a vector in c++ could you show...

Posted by Ravyne on 07 October 2014 - 09:02 PM

Figuring out what tiles are where is highly unlikely to be your bottleneck in any event.

 

That said, to be maximally efficient, I would create a vector, or possibly a std::array, with dimensions Width, Height, Depth, where Depth is the 3 layers you describe. I would walk all of (the visible portion of) layer 1 in row-order, then the same for layer 2, and for layer 3. With proper declaration of the vector or array, that will be as efficient as possible.

 

The declaration would look like this for std::array:

std::array<std::array<std::array<TILE_STRUCT, Width>, Height>, Depth> tiles;

 

Or for vector:

std::vector<std::vector<std::vector<TILE_STRUCT>>> tiles; // then populate the vectors appropriately, using reserve() to pre-allocate memory.

 

 

That's for a somewhat naive "I'm just going to carve out a square of this to draw" approach. If I really needed efficiency (say, with significantly more than 3 layers, or a variable number of layers), I would instead organize the data into a 2D grid of map sections a few tiles on a side (probably between 4x4 and 16x16), where the idea is that I only ever load the sections I have to draw. Each section would hold a link to a corresponding section in the next layer up, forming a list, so you can add as many layers as you need.

 

Then, when rendering, I would find all the sections that are in view, including their layers, then walk all of that per-layer, building up a draw list and adding an appropriate Z-depth based on the layer. Build the list from the upper-most layers down to the base to take advantage of early-Z rejection on the GPU.

 

The declarations would look something like this:

 

typedef std::array<std::array<TILE_STRUCT, 4>, 4> tiles_section; // create a type of a section of tiles that's 4x4;

 

std::array<std::array<std::forward_list<tiles_section>, MapSectionsWide>, MapSectionsTall> map_base_sections; // I'm going to hand-wave here -- the section counts must be compile-time constants, you can use containers other than forward_list, and you need to figure out where to store your upper sections and how you'll manage their lifetimes (e.g. with unique_ptr or scoping, etc.).

 

 

But that's if you really need the speed -- almost certainly the 'naive' way is sufficient for most uses.




#5185665 If you ever used a vector in c++ could you show...

Posted by Ravyne on 07 October 2014 - 07:51 PM


I guess this explains why performance might take a hit using vectors. Hopefully my game won't suffer accessing a list of a few hundred at a time.

 

xeyedmary's post might have been a bit misleading -- for sequential reads, a vector has the same performance characteristics as a plain old C-style array. Random reads pay a single extra indirection, which is going to be swallowed up and hidden by any modern CPU. The only performance issue you can get into with a vector is when you grow it by adding extra items. Even then, it's the same cost you would pay to grow an array (except that native arrays don't support growing at all -- you have to allocate a new one and copy, which is exactly what vector knows how to do), and when a vector grows it grows extra, so that it doesn't have to reallocate and copy every single time. What's more, whenever you know how much room you're going to use, you can ask the vector to reserve() that amount of memory ahead of time, so that it never has to reallocate and copy while you are adding items. Long story short, a vector is no slower than a plain old C array unless you're doing something evil, stupid, or ignorant, and since it does all that *and* allows you to grow it *and* handles all the dynamic memory management for you, there's no reason* not to use it.

 

* If you know the exact size you need and you'll never need to grow it, prefer std::array (which is an even thinner wrapper over an array than vector is, and, like an array, can't grow). For simple storage and lists, prefer std::array and std::vector by default, unless A) you need to insert items at the front or in the middle (then look to std::list or std::deque), or B) you have a different kind of problem altogether, like something suited to std::queue, std::map, or std::set in all their variations. See http://www.cplusplus.com/reference/stl/ for the list of containers you can use.
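To make the reserve() point above concrete, a minimal sketch (the tile type and count are illustrative):

#include <cstddef>
#include <vector>

struct Tile { int id; };

std::vector<Tile> MakeTiles(std::size_t count)
{
    std::vector<Tile> tiles;
    tiles.reserve(count);                              // one allocation up front...
    for (std::size_t i = 0; i < count; ++i)
        tiles.push_back(Tile{ static_cast<int>(i) });  // ...so no reallocation or copying here
    return tiles;
}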





