Promit

Member Since 29 Jul 2001

#5206245 College degree and Job in the game industry

Posted by Promit on 23 January 2015 - 01:49 PM

 

a degree from an Ivy League university such as Harvard?


I interviewed a guy from Harvard once (that was over 15 years ago). I saw his snooty snobby school as a minus, not a plus, but I tried to put that aside and just look at it as "just a degree."

I see things a bit differently now. I know that some of those big schools have outreach programs and give scholarships in underserved communities to deserving students. I realize now that a school can be seen by some as snooty but that doesn't mean its graduates are snobs.

 

Apparently Harvard in particular is especially vulnerable to this problem, to the extent that people avoid mentioning that they are alums.

http://www.boston.com/news/local/massachusetts/articles/2012/05/28/not_easy_for_harvard_grads_to_say_they_went_there/




#5206028 3 pm EST: Our interview with AIGameDev about AI/Animation in Shark Eaters

Posted by Promit on 22 January 2015 - 01:09 PM

Hopefully it's alright to post this here - at 3 PM EST (one hour from now), we are doing an interview with Alex Champandard of AIGameDev about our AI and physical animation work on the iOS game Shark Eaters.

http://aigamedev.com/broadcasts/session-shark-eaters/

This interview with Omar Ahmad looks at the animation technology in the mobile game Shark Eaters: Rise of the Dolphins. The game features a novel system for animating skeletal rigs, inspired by neurology and the learning of motor control. The result is smoothly animated fish and marine mammals whose behavior partly emerges from the animation.

 

It's primarily with my colleague who developed it, but I will be there too. Basically we'll be talking about how we do the animation in the game, which is entirely driven by physics simulation. We'll also be talking about how the AI is linked into the physics, both driving it and using physics data to control/derive enemy behavior.

 

Here is a review of the game: https://koffeeklaud.wordpress.com/2015/01/22/ios-review-experiencing-reincarnation-in-shark-eaters-rise-of-the-dolphins/

And some gameplay videos:




#5205883 Lone Poor developer protected from Mega companies

Posted by Promit on 21 January 2015 - 05:59 PM

The key is to understand that the AAA games produced by big companies are investments of at least $20 million, and comfortably north of $50 million for the big titles. These things need to make back their large investment, reliably, in order to be worth doing in the first place. So major companies' productions are carefully assessed for market, risk, marketing, etc., and set up appropriately. They're almost universally unwilling to take risks on new stuff that is niche, arthouse, creative, etc. That's simply not their role.

 

That's where our role as indies starts. Large team sizes work against corporate productions in many ways. The biggest indie successes happen when you take things in a completely different direction, one that would never have been greenlit for a full-scale production in the first place. Probably the most powerful modern example is Minecraft. Can you imagine that being created by a major company? No way would any publishing exec sign off on that.

 

Think waaaaay outside the box.




#5205442 Ambient Occlusion for Deforming Geometry

Posted by Promit on 19 January 2015 - 07:58 PM

 

Could ambient occlusion be used correctly on skinned models that are animating? It would seem that it only works on static objects that can be repositioned, rotated, and scaled as a whole, but not on geometry that is deforming, because the radiance map would have to be re-computed. Is this correct?

 

If you mean pre-baked ambient occlusion, then it's certainly used less on skinned models, but it is still used. Some games prebake it only on areas that move little relative to each other. World of Warcraft, for example, has always used it on character models. Areas like armpits can still make use of it, and it's probably a good idea for any game still relying on prebaked ambient occlusion.

 

I imagine that WoW and many other games are not prebaking AO but simply painting it into the diffuse/albedo maps by hand. That's something artists have done for decades. Centuries? Millennia?




#5202988 C# seems good, but....

Posted by Promit on 08 January 2015 - 08:53 PM

--While reading, keep in mind I am on a mobile device--
C# seems to be really popular here, but I have heard it is slow, similar to Java. I can read Java code, but can only kinda write it. I have messed around with C++ (which I kinda like so far), Python, and Lua. From what I know, when you begin you should stick with one language. Should I just stick with C++ and learn a "pro" language first, or continue on with Java? I have written a "black box" in Java before and would not mind doing it again in any other language. I want to get into game programming and would like to start off with a language that is versatile and that I can write quickly in (at the start of developing). What language should I start with, and are there any good ebooks/text tutorials online that I can use with the corresponding language? I am open to almost any language that will be around for a long time.
P.S. I really like GameDev so far because on most dev forums your posts get rejected for stupid reasons cough *stack overflow*

> I'm new to driving a car. A lot of people seem to like the Toyota Camry, but I've heard it's very slow like a Hyundai. Should I just stick with a Lamborghini and drive a "pro" car first, or continue with my Hyundai?

If you try to drive a Lamborghini fast as your first car, you will wreck it and look like an idiot in the process.

 

C#, Java, and C++ can all be very fast if you know how to use them properly - or very slow if you don't. Most of the people talking about how fast C++ is don't know jack squat about writing fast code in the first place. A few have some inkling but have never done it on a serious scale. Many are just blindly repeating what they heard in 2006. That said, I find Java's usability infuriating, and consider it easily the worst design of the traditional client languages. Graphics code in Java, in particular, just looks awful.

 

In general my recommendation for newbies is C# or Python, with a strong lean towards C# if you don't have trouble learning it. C# is also the language that I personally would recommend for somebody who is trying to complete an indie game, even if that person's day job is professional game development in C++.




#5202415 do most games do a lot of dynamic memory allocation?

Posted by Promit on 06 January 2015 - 05:27 PM

Rather than engaging directly in the valuable discussion here, I'm going to share a couple vignettes from 'in the trenches', so to speak.

 

The last time I worked in AAA, that engine was using a lot of modern C++ features and was, by all accounts, cutting edge code for 2007. Cutting edge enough to break some compilers, in fact. This meant a lot of STL containers, a lot of allocation, entity component systems, all the fun stuff except for exceptions. This was a 360/PS3 title and did run on PC, though that was not the intended target.

 

In the last couple of months, optimization work began in earnest. Allocation was a significant problem. First up? Tons of traffic in std::vector. A lot of the usual suspects - improper reservation sizes, unnecessary temporaries at function level, etc. Nothing terribly interesting, but a lot to go through in aggregate. Eventually std::vector was dropped in favor of a custom vector with broadly similar behavior but more tightly specified, pooled, and instrumented. After that and a few other pieces of low-hanging fruit, things were much better but not perfect. I think a lot of small allocations were cleaned up to deal with fragmentation issues, and ultimately the memory allocator itself was replaced with a well-known open-source third-party allocator. I don't remember the specific advantages, but it got us to shipping without fragmentation/OOM issues.
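
To make the "usual suspects" concrete, here is a minimal hypothetical sketch of the kind of fix involved: reusing a caller-provided vector instead of returning a temporary every frame, and reserving capacity up front. The Particle type and function name are made up for illustration.

#include <vector>

struct Particle { float x, y, z; };

// Hypothetical per-frame routine showing two common std::vector fixes.
void CollectVisibleParticles(const std::vector<Particle>& all,
                             std::vector<Particle>& visible)
{
    // Reuse the caller's vector rather than building a temporary every frame,
    // and reserve a sensible capacity so push_back doesn't keep reallocating.
    visible.clear();
    visible.reserve(all.size());

    for (const Particle& p : all)
    {
        if (p.z > 0.0f)   // stand-in visibility test
            visible.push_back(p);
    }
}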

 

I heard about another game in a relatively similar timeframe that used entirely static allocation for everything. It may have been Halo. The idea is not dissimilar to Norman's code, though obviously much more complex in practice. The usual issues arise - the game fails unceremoniously when designers exceed hard-coded limits, etc. But I have one simple point to make.

 

Let's assume you have a hard limit of 10 MB of memory for your new game, Aardvark Crossing. This memory has to be shared between Aardvarks and Zebras. Levels 1, 2, and 3 feature 4 MB of Aardvarks and 3 MB of Zebras. You set the pools at 4 and 3, and leave the rest open for later. Now your designer adds level 4 with way more Aardvarks - 8 MB of them. Problem! But the designer won't relent on Aardvarks, so now you resize the pools and Zebras have to be cut down to 2 MB in all other levels.

 

Things really become a problem, though, when you hit level 5. See, every fifth level is the Zebra Bonus Level. It's 9 MB of just Zebras! Or it would have been, if your engine could actually be reconfigured that way in the first place. There's no way you can make the Zebras fit, and there's no way to cut back the Aardvarks in a game called Aardvark Crossing. So now you have to dynamically choose your pool sizes when each level loads. One thing after another falls victim to the dynamic allocation virus, and by the end of it every type of object has its own pool allocator and you're juggling two dozen pools.
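
To illustrate that last step - choosing pool sizes at level load rather than compiling them in - here is a minimal sketch. The FixedPool and LevelBudget names are made up for illustration; a real engine would add alignment, tracking, and proper error handling.

#include <cassert>
#include <cstddef>

// A trivial bump-style pool carved out of a fixed overall budget.
class FixedPool
{
public:
    void Configure(void* base, std::size_t bytes)
    {
        m_base = static_cast<char*>(base);
        m_size = bytes;
        m_used = 0;
    }
    void* Allocate(std::size_t bytes)
    {
        assert(m_used + bytes <= m_size && "pool budget exceeded");
        void* p = m_base + m_used;
        m_used += bytes;
        return p;
    }
    void Reset() { m_used = 0; }
private:
    char*       m_base = nullptr;
    std::size_t m_size = 0;
    std::size_t m_used = 0;
};

struct LevelBudget { std::size_t aardvarkBytes, zebraBytes; };  // read from level data

// Split the fixed heap between the two pools when a level loads.
void ConfigureLevelPools(const LevelBudget& budget, char* heap, std::size_t heapSize,
                         FixedPool& aardvarks, FixedPool& zebras)
{
    assert(budget.aardvarkBytes + budget.zebraBytes <= heapSize);
    aardvarks.Configure(heap, budget.aardvarkBytes);
    zebras.Configure(heap + budget.aardvarkBytes, budget.zebraBytes);
}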

 

The truth is that dynamic allocation was not invented out of laziness, and static allocation is not foolproof. My personal opinion is that a mix of both is necessary, and that being able to track everything matters even more.




#5200000 Should I use std::cout or have using namespace std; ?

Posted by Promit on 25 December 2014 - 04:58 PM

Short version: Use using std::cout; at source-file level when appropriate, but avoid pulling in the whole namespace, and never put using statements in headers. Remember that using-declarations can be applied at function or class scope, not just file scope.
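
A minimal sketch of what that looks like in practice (the function name is just for illustration):

#include <iostream>

// In a header, avoid "using namespace std;" entirely - it leaks into every
// file that includes the header.

void PrintScore(int score)
{
    using std::cout;   // function-scope using-declaration: only cout is pulled in
    using std::endl;
    cout << "Score: " << score << endl;
}

int main()
{
    PrintScore(42);
    std::cout << "Explicit qualification works fine too.\n";
    return 0;
}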




#5199907 How does one start off in programming?

Posted by Promit on 24 December 2014 - 05:00 PM

Don't start with C++. You'll get there in time, but it's a terrible starting point. And don't trust anyone who says otherwise, frankly.

 

C# with Unity is an excellent way to get started nowadays; don't feel that you have to spend a lot of time getting a handle on C# first. A little, yes, but not a huge amount of time. Programming and game development are very much learn-by-doing, teach-yourself disciplines. Think of something reasonably sized you'd like to create, then figure out the individual elements you need to learn or accomplish in order to create that thing. Then simply set about learning each of those things and assembling it all together. Google is your absolute best friend, as are the various communities out there - the Unity3D forums, StackOverflow, etc.

 

If it sounds lax and unstructured, good. This is not like classes, where you follow a syllabus chapter by chapter. Don't worry about it. Just start making things.




#5199293 Difference between clock tick and clock cycle?

Posted by Promit on 20 December 2014 - 01:33 PM

There are many, many different clocks in any given computing device. You always need to be clear about which clock you're talking about, or the terminology is pointless. Clock cycles typically refer to the clock signal driving a processor, usually the CPU or occasionally the GPU, but those are not the only clocks around. 




#5199037 C++ how to declare something that isn't declared?!?

Posted by Promit on 18 December 2014 - 07:12 PM

It can be written simpler still:

struct X { struct X* ptr; };  // a pointer to X is allowed inside X's own definition
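
The same incomplete-type rule is what lets two types refer to each other as well; a minimal sketch (the Node and List names are just for illustration):

// Forward-declare Node so List can hold a pointer to it; a pointer to an
// incomplete type is fine, only dereferencing requires the full definition.
struct Node;

struct List { Node* head; };

struct Node
{
    Node* next;   // self-reference, same idea as above
    int   value;
};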



#5198376 Current-Gen Lighting

Posted by Promit on 15 December 2014 - 12:54 PM

Full conference proceedings are generally under paid subscriptions, not published for free. Not sure if SIGGRAPH in particular makes full recordings available; I know GDC does. But I strongly recommend you go through all the SIGGRAPH slides in detail, as well as looking up any materials they reference. There is a LOT in there. It might be easiest to start from 2012 and make your way forwards in time from there.

 

I also like the FilmicGames blog by John Hable.




#5198174 Getting Bounding Box For Sphere on Screen

Posted by Promit on 14 December 2014 - 01:41 PM


Either flip the culling mode when the viewpoint is inside the sphere, or else just use a regular full-screen quad for that case.

Nah, there's an easier way...


EDIT: Tried it and just remembered why I decided not to use it. I rendered the geometry with additive blending and it doubled up because the back was showing through. I enabled back-face culling, but when I'm inside the sphere it then culls out the sphere, so nothing is rendered. Any ideas?

Draw the back faces, not the front faces.
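
One way to do that in OpenGL is to keep face culling enabled but cull front faces while drawing the light sphere, so only its back hemisphere rasterizes - nothing doubles up, and nothing disappears when the camera sits inside the sphere. A minimal sketch, assuming a current GL context, GL headers/loader already set up, and a hypothetical drawSphereMesh() helper that issues the actual draw call:

void drawSphereMesh();   // hypothetical: binds the sphere's buffers and draws it

void DrawLightSphereAdditive()
{
    glEnable(GL_CULL_FACE);
    glCullFace(GL_FRONT);            // cull front faces, so only back faces render
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);     // additive blending, as in the question above
    drawSphereMesh();
    glCullFace(GL_BACK);             // restore the usual culling mode
}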




#5198083 Getting Bounding Box For Sphere on Screen

Posted by Promit on 14 December 2014 - 02:02 AM

Last time I did this, I just rendered a sphere. I mean, you're issuing a draw call anyway; who cares about a couple dozen extra polys...




#5197450 Physical Based Models

Posted by Promit on 10 December 2014 - 02:14 PM

Honestly, I haven't seen enough consistency in which texture maps physically based renderers expect to be able to easily produce generic stock models that work well. Albedo is already somewhat variable, but the wide variety of specular/reflectance/roughness maps in use nowadays is a tricky problem. A lot of engines use oddball encodings, and there isn't a particularly good way to distribute maps even as raw floating point.




#5196894 Does glMapBuffer() Allocate Client-Side Memory?

Posted by Promit on 07 December 2014 - 10:12 PM

Okay, let's break this down.


At first, I thought: "Great! Direct access to GPU memory."

Wrong.


I got the suspicion that glMapBuffer() is really copying whatever data to a client-side pool, then copying the modified data back, and destroying the temporary pool once glUnmapBuffer()'s called.

Close.


At first, I thought glMapBuffer() actually returned a pointer to the GPU's memory, but now it sounds like glMapBuffer()'s doing behind-the-scenes client-side copying, depending on the driver. Is my suspicion correct?

Mostly.

 

So here's the deal: MapBuffer returns a pointer which you can write to. What this pointer actually refers to is at the driver's discretion, but as a practical matter it's going to be client memory. (The platforms that can return direct memory won't do it through GL.) This may be memory that the GPU can access via DMA, which means the GPU can initiate and manage the copy operation without the CPU/driver's participation. The driver also doesn't necessarily need to allocate this memory fresh; it can keep reusing the same block of memory over and over, as long as you remember to Unmap.
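
A minimal sketch of the usual map/write/unmap pattern being described, assuming a current GL context and a loader (glad, GLEW, etc.) providing the GL 1.5+ declarations; the buffer name and helper are illustrative, and error handling is omitted:

#include <cstddef>
#include <cstring>
// GL declarations assumed to come from your loader of choice (glad, GLEW, ...)

void UploadVertices(GLuint vbo, const float* data, std::size_t bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // Orphan the old storage so the driver can hand back a fresh block
    // instead of stalling while the GPU may still be reading the old one.
    glBufferData(GL_ARRAY_BUFFER, bytes, nullptr, GL_STREAM_DRAW);

    // The returned pointer is whatever the driver chooses - in practice
    // client-side memory that the GPU can later pull from via DMA.
    if (void* dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY))
    {
        std::memcpy(dst, data, bytes);
        glUnmapBuffer(GL_ARRAY_BUFFER);   // hand the block back to the driver
    }
}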
 


I thought operating systems typically give ALL memory, regardless of where in the system it's located, its own unique range of memory addresses. For example, addresses 0x00000000 to 0x2000000 point to main system memory while 0x20000001 to 0x2800000 all point to the GPU's memory. These ranges are dictated by the amount of recognized system memory and GPU memory (including virtual memory stored in page files).

Not so much. Windows Kernel 6.x (Vista) gained the ability to map GPU memory into the virtual address space of a particular process, but that's more about internal management of multitasking with the GPU than having much to do with application code. It's not going to live in the same physical memory address space used for main system memory, though, and you can't read/write to it arbitrarily.





