

Question about graphics of an older game


10 replies to this topic

#1 Swordmaster   Members   -  Reputation: 217


Posted 05 December 2013 - 02:08 AM

There was an arcade game entitled 'War Gods' back in the 90s, and I remember it for having particularly photorealistic character models (for one character at least: Vallah), and this was at a time that some would consider the infant stages of 3D games.

 

Anyway, the following video shows some gameplay from said game: https://www.youtube.com/watch?v=Ro8icXuBkP8  To me at least, the character Vallah has quite realistic-looking skin and sheen for a game of that era. I mean, I look at some 3D games nowadays, and while I can see a step in the right direction... many triple-A games end up looking almost 'cartoonish' in their final product, intentional or not. Case in point: the most recent Mortal Kombat or Killer Instinct. I'm not saying this game looks 100% realistic either, but I think the developers did a pretty good job with the visuals considering its age and in comparison to the aforementioned games.

 

My question is this, though: technically speaking, how did the 3D modelers achieve this look? Did they do anything out of the norm compared to current games? I know 'video talent' is cited in the credits along with the actors that played the characters. What exactly is 'video talent', though?

I'm quite new to all the inner workings of 3D, so maybe someone can enlighten me on the matter. Also, I know this game may not be the best example, but it's the best one I can give.

 

Thank you to anyone who can offer me any advice.


Edited by Swordmaster, 05 December 2013 - 02:09 AM.



#2 Olof Hedman   Crossbones+   -  Reputation: 2688


Posted 05 December 2013 - 02:20 AM

It doesn't look that good to me? In-game, it's just textured polygons, although with pretty well-made textures.

 

The only thing that looks "photorealistic" to me is that picture at startup, but I'm pretty sure that's just a low-res actual photo, or pixel graphics based on a photo.


Edited by Olof Hedman, 05 December 2013 - 02:20 AM.


#3 C0lumbo   Crossbones+   -  Reputation: 2155


Posted 05 December 2013 - 02:20 AM

I'm not certain, but I think they're sprites, not 3D models. The sprites may have been generated from photographs which would explain the grainy, realistic look.



#4 Swordmaster   Members   -  Reputation: 217


Posted 05 December 2013 - 02:48 AM

Thank you both for your help. It occurred to me to look up the entry on Wikipedia, and to my surprise it mentions this under 'Development': "The in-game characters were created using a technology called "digital skin", which involved digitizing reference photographs of live actors and mapping them onto 3-D models"  http://en.wikipedia.org/wiki/War_Gods_(video_game)

 

Can anyone go into more detail about this? Did they use video cameras?

 

Yeah C0lumbo, that grainy look adds to the realism somehow.



#5 Hodgman   Moderators   -  Reputation: 28502


Posted 05 December 2013 - 03:01 AM

Instead of computing any lighting or shading, they've just used photos in the game.
This means the game graphics look like photos (because they are), but you can't change the lighting in your scene ever, because the 'phototextures' already have lighting 'drawn' into them.
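
To make the difference concrete, here's a minimal Python sketch (with made-up names, not how that game was actually coded) of a texel whose lighting is baked into the photo versus one that can be re-lit at runtime:

# Illustrative sketch only: a 'phototexture' already contains the studio
# lighting, so nothing in the scene can change how it looks.
def shade_baked(photo_texel_rgb):
    # Whatever lighting was in the original photograph comes back unchanged.
    return photo_texel_rgb

# With a lighting-free colour plus a surface normal, the engine can react
# to a light that moves around the scene (simple Lambert diffuse).
def shade_dynamic(albedo_rgb, normal, light_dir, light_rgb):
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(a * c * n_dot_l for a, c in zip(albedo_rgb, light_rgb))
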

#6 Swordmaster   Members   -  Reputation: 217


Posted 05 December 2013 - 03:16 AM

Thank you, Hodgman, for clearing that up. I don't know if it's fair of me to ask you this, but why can't more recent games look this real? Would it be correct to say that if I used the same sort of photo-mapping technique they used for this game, it would be a trade-off: giving up dynamic lighting in favor of this more 'photorealistic' look, and vice versa?



#7 BagelHero   Members   -  Reputation: 1344


Posted 05 December 2013 - 05:30 PM

In the end, though, there are reasons the bigger games DO NOT attempt photorealism; at best you have some games going for more of a hyper-realism streak.

Firstly, the very fact that a game cannot have dynamic lighting decreases the realism by an astounding amount! Everything would look out of place and seem a bit like a poorly photoshopped image unless you took the time to photograph the model under perfect conditions for every environment needed in your game, which would be a ridiculous amount of work. Without lighting, everything looks amazingly ugly.

Secondly, having more of a cartoonish style lets people keep up their suspension of disbelief a bit more (e.g. a guy with ridiculous proportions seems more able to jump ten feet in the air and do three backflips than a perfectly realistic soldier), and it also holds off the uncanny valley.
"Photorealistic" textures/models would be extremely prone to the uncanny valley, I think. In an era when facial movements weren't needed or possible, they were all right, but now mapping a photo to a model in a way where it could actually move would be really difficult. It's still achievable (people do similar things), but for game purposes, how would you map the eyes correctly? The eyelashes? The teeth, the tongue? The list keeps going, and that's before I even TOUCH modeling in a way that's perfectly accurate to a human body.

 

Lastly, it's kind of... ugly. Why go for photorealism, when we have so many styles and influences to draw from that follow established design rules and bend more easily to the compositions we strive for? When we can create characters from the ground up, instead of scanning in actors to play them? What we should be going for is believability, not strictly realism.

Just my 2 cents I suppose.

 



#8 sunandshadow   Moderators   -  Reputation: 4671


Posted 05 December 2013 - 05:43 PM

I don't understand the assertion that modern games don't attempt photorealism. Lots of them do, and one of the many techniques they use is capturing humanoid face/skin textures with photography, along with video-capturing humanoid and animal motion. Most games contain lots of models that don't correspond to any real object on Earth, though. You can't photograph what doesn't exist, and you can't then make models from nonexistent photographs. The perceived realism of, say, a dragon is always dependent on the skill of the artists involved (and often negatively affected by technical limitations and optimization requirements). And breaking photos up to map them onto 3D models is challenging to do well.

 

Also, a lot of people prefer an anime- or fantasy-styled world to a strictly photorealistic one. Real women are airbrushed to greater perfection in magazines all the time, and game art may aim to do this to the whole game world.




#9 Hodgman   Moderators   -  Reputation: 28502


Posted 05 December 2013 - 06:04 PM

I don't know if it's fair of me to ask you this, but why can't more recent games look this real? Would it be correct to say that if I used the same sort of photo-mapping technique they used for this game, it would be a trade-off: giving up dynamic lighting in favor of this more 'photorealistic' look, and vice versa?

You can ask whatever you want.
 
A simple example of the problem is -- imagine standing in a "T-pose" (arms outstretched sideways) outdoors in the sun, at midday. The backs of your hands are lit by the sun, but your palms are in shadow.
Let's say we then "photoscan" you like this and put you in a game.
When the game character animates so that their palms are facing upwards, their palms are still shadowed, and the backs of their hands (which are now facing downwards) are fully lit... which looks very wrong.
 
Another issue is that almost every real-world material is view-dependent -- this is a fancy way of saying that the appearance of the material is different, depending on the angle that you view it from.
e.g. if you look directly at a window, you can see through it, but if you look at it at a glancing angle, it starts to act more like a mirror (the Fresnel effect).
The extreme example of this is an actual mirror -- every photograph that you take of that surface is going to be completely different depending on where you place the camera!
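
For the curious, the usual cheap approximation of this view-dependence in shading code is Schlick's approximation to the Fresnel term. A small sketch (not any particular engine's implementation):

def fresnel_schlick(cos_view_angle, f0):
    # f0 is the reflectance when looking straight at the surface
    # (~0.02 for glass/water, ~0.03 for skin, close to 1.0 for metals).
    # As the view gets more glancing (cos_view_angle -> 0), the result
    # climbs toward 1.0 -- the 'window turning into a mirror' effect.
    return f0 + (1.0 - f0) * (1.0 - cos_view_angle) ** 5

print(fresnel_schlick(1.0, 0.02))   # head-on: ~0.02, mostly see-through
print(fresnel_schlick(0.1, 0.02))   # glancing: ~0.60, acting like a mirror
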
 
You might think this isn't a big deal for skin, but these view-dependent reflections make up about 3-4% of the total light that your eye receives from skin. It's a subtle detail, but still very important in making something look believable.
 
Those old photo-textured games looked very good at the time, but they don't actually look that great next to modern games any more. You can still use that technique, but only if you're happy with completely fake lighting. It doesn't come off as photorealism in the end, but a kind of weird hyperrealism, due to the incorrect specular highlights, and the incorrect directional lighting.
 
Modern games do actually still use a variation on this technique though!
e.g. Here's an actor standing in the middle of 72 high resolution cameras:
[image: actor surrounded by the 72-camera rig]
 
And here's a 3D model and a photographic texture reconstructed from those 72 photographs:
[image: reconstructed 3D model and photographic texture]
 
Or here's some skin rendered in a modern game engine, using fully dynamic lighting (also using a 3D model and skin colour texture captured from a 3D scan like above):
[image: skin rendered with fully dynamic lighting in a modern engine]
 
In these examples, the artists have to remove all the lighting information from the "phototexture" so it appears as if the object was standing in a white room where all the walls/roof/floor were white lights, or outside on a cloudy day. They also have to remove any "highlights"/"sheen", as that's the view-dependent part of the lighting. After that, they're left with a fairly flat and boring colour texture.
Then, they have to hand-author specular/roughness textures that determine what the highlights/sheen will look like, and be careful to get this to match real skin.
Then you put that colour texture, the specular/roughness textures, and the normal map (which you also got from photo-scanning) into the game engine, and it can "re-light" the model dynamically.
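
Very roughly, the engine then combines those maps per pixel along these lines. This is just a toy Python sketch with made-up names, and the simple Blinn-Phong-style highlight is only a stand-in for whatever BRDF a real engine uses:

import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = max(math.sqrt(dot(v, v)), 1e-8)
    return [x / length for x in v]

def relight(albedo, normal, view_dir, light_dir, light_rgb, spec, roughness):
    # Diffuse term uses the de-lit colour texture (no baked-in lighting).
    n_dot_l = max(0.0, dot(normal, light_dir))
    diffuse = [a * c * n_dot_l for a, c in zip(albedo, light_rgb)]
    # View-dependent highlight driven by the hand-authored spec/roughness maps.
    half_vec = normalize([v + l for v, l in zip(view_dir, light_dir)])
    shininess = 2.0 / max(roughness * roughness, 1e-4)
    highlight = spec * max(0.0, dot(normal, half_vec)) ** shininess
    return [d + c * highlight for d, c in zip(diffuse, light_rgb)]
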
 
Examples of the kind of software and services for capturing this data are below:
http://www.agisoft.ru/products/photoscan
http://ir-ltd.net/

Edited by Hodgman, 05 December 2013 - 06:24 PM.


#10 Swordmaster   Members   -  Reputation: 217


Posted 05 December 2013 - 11:31 PM

I don't understand the assertion that modern games don't attempt photorealism. Lots of them do, and one of the many techniques they use is capturing humanoid face/skin textures with photography, along with video-capturing humanoid and animal motion. Most games contain lots of models that don't correspond to any real object on Earth, though. You can't photograph what doesn't exist, and you can't then make models from nonexistent photographs. The perceived realism of, say, a dragon is always dependent on the skill of the artists involved (and often negatively affected by technical limitations and optimization requirements). And breaking photos up to map them onto 3D models is challenging to do well.

 

Also, a lot of people prefer an anime- or fantasy-styled world to a strictly photorealistic one. Real women are airbrushed to greater perfection in magazines all the time, and game art may aim to do this to the whole game world.

 

Do you mean using video capture as 'reference footage' for motion capture, or something else? Also, when it comes to photographing objects that don't exist in the real world, what are your thoughts on something like claymation? Does the game industry still use this technique? If you ever played the first Mortal Kombat, the character Goro was created using this technique, and it looked pretty realistic. http://www.joystiq.com/2009/06/22/mortal-kombats-goro-actual-size/

 

 

@Hodgman, thanks for the thorough insight. In regards to the problem of the T-pose example, what if instead you took different photos of the person? For example, one with the subject facing the sun or light, and another with the palms facing away. I don't know if this has been tried before, but even if it has, is there any hardware and game engine that could handle fluid, unnoticeable switching between said 'light and shadow maps' (sorry, I'm not sure of the technical terms yet) based on the orientation of the game's 'camera' and how the player sees the game objects? I'm sure this would be a lot of work though. No argument there.

 

Also, I'm probably just misunderstanding, but what is the purpose of using all-white walls as opposed to traditional green or blue walls? And what kind of lights are actually used to light the room? Aside from what you've explained to me, though, why is it that, as realistic as games are starting to look today (as evidenced by that in-engine render of the face you posted), real-time gameplay is still discernible to the naked eye from the real world (such as in the game Ryse)? Graphically speaking, of course. I imagine it has to do with the lighting and ray tracing, and correct me if I'm wrong here. Whatever the reason, real-time graphics have not been perfected when it comes to gameplay as of yet. At least in my eyes, as others may see things differently.

 

All of you have been very helpful though.  Thanks again.


Edited by Swordmaster, 05 December 2013 - 11:33 PM.


#11 Hodgman   Moderators   -  Reputation: 28502


Posted 06 December 2013 - 12:07 AM

Also, I'm probably just misunderstanding, but what is the purpose of using all-white walls as opposed to traditional green or blue walls?

The colour of the walls themselves doesn't matter, what matters is that the subject is being lit evenly from all sides.
e.g. with the T-pose example, the subject is lit from above, but shadowed from below. If you place the subject in a glowing white box before capturing them, then they'll be lit from every side, which makes the resulting photo-data easier to work with (it's less work for your artists to try and "un-paint" the lighting).
 

I don't know if this has been tried before, but even if it has, is there any hardware and game engine that could handle fluid, unnoticeable switching between said 'light and shadow maps' (sorry, I'm not sure of the technical terms yet) based on the orientation of the game's 'camera' and how the player sees the game objects? I'm sure this would be a lot of work though.

A device for capturing that data -- what a surface looks like for each viewing angle, and for each lighting angle -- is called a Goniophotometer. They're used in scientific research mostly.

In theory, if you could capture every part of an actor using one of these, then you could use that data for very realistic rendering! For each point, you basically have a 2D array of colour values, where one axis in the array is the viewing angle, and the other axis is the lighting angle. You have one of these 2D arrays for each point in the photo-data set, which gives you a massive 3D array. The amount of data in this array would be immense, so it's not at all practical... But in theory, it would allow you to have completely realistic lighting for your object, with the lighting code looking as simple as fetching the right colour out of the array:
litColour = photoData[position][viewingDirection][lightDirection]
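
A back-of-the-envelope estimate of why that full table is impractical (the resolutions below are made up purely for illustration):

texels = 2048 * 2048              # points on the character's surface
view_angles = 64 * 32             # discretized viewing directions
light_angles = 64 * 32            # discretized lighting directions
bytes_per_sample = 3              # one RGB colour per entry
total_bytes = texels * view_angles * light_angles * bytes_per_sample
print(total_bytes / 2**40)        # ~48 TiB for a single character
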

Mitsubishi Electric Research Laboratories actually has a database of material data like this that they've collected with their own Goniophotometer -- however, each material only has the equivalent of a single pixel/position captured, and most of their materials are different kinds of paints -- not human skin!
If you're trying to recreate realistic images of automotive paints, however, then their data is very useful.
[image: materials from the measured-BRDF database]
 
Because these kinds of data-sets are too big to feasibly use, we instead try to approximate them using mathematical formulas, which we call BRDFs. E.g. the typical, basic one for non-glossy surfaces, used in almost every game, is:
litColor = color * cos(lightAngle) * lightColor
That produces typical lighting results, but with no "specular reflections" (aka highlights, or sheen).
 
Back to your quote -- instead of using a full Goniophotometer, you could just capture the person once standing in a white room, then capture them again in the same room with the lights on only 10%, and then use some code like this to either choose one version or the other, depending on whether the light is above or behind the surface:
factor = cos(lightAngle)
litColor = (brightRoomColor * factor) + (darkRoomColor * (1-factor))
 
However, as above, this formula only takes the light-angle into account, not the viewer-angle, which means there's no view-dependent highlights/sheen/specular-reflections.
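
Applied across whole captured textures, that blend might look something like the per-texel NumPy sketch below (the arrays and values are hypothetical stand-ins for the two captures):

import numpy as np

# Stand-in data; real textures would come from photographing the subject
# twice in the white room, once at 100% lights and once at 10%.
bright_room = np.random.rand(256, 256, 3)
dark_room = np.random.rand(256, 256, 3) * 0.1

# Per-texel surface normals plus a single light direction (unit vectors).
normals = np.dstack([np.zeros((256, 256)), np.ones((256, 256)), np.zeros((256, 256))])
light_dir = np.array([0.0, 0.707, 0.707])

# factor = cos(lightAngle) = N . L, clamped so texels facing away from
# the light fall back to the dark-room capture.
factor = np.clip(normals @ light_dir, 0.0, 1.0)[..., None]
lit_texture = bright_room * factor + dark_room * (1.0 - factor)
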
 
I'm pretty sure in the game you linked to originally, they've just captured their actors under uniform lighting (e.g. in a white room), and then have added shadow gradients to them using the 'typical, basic BRDF' above. Any sheen/highlights are probably contained in the photographs, and don't move according to where the camera is.

 

 

 


why is it that, as realistic as games are starting to look today (as evidenced by that in-engine render of the face you posted), real-time gameplay is still discernible to the naked eye from the real world (such as in the game Ryse)? Graphically speaking, of course. I imagine it has to do with the lighting and ray tracing, and correct me if I'm wrong here. Whatever the reason, real-time graphics have not been perfected when it comes to gameplay as of yet.

Keep in mind that with that face demo, they were showing off a scene with just that head in it and nothing else -- this means they could dedicate 100% of the processing time to drawing the skin. Normally in a game you've got to spend some time drawing the environment, other characters, special effects, etc...

Depending on how much stuff a game has to draw, it will have to adjust the level of quality it can achieve on each object. There's only so much processing-time and memory to go around.

 

As for realistic images -- even with the best film-quality computer-graphics, you're often still able to spot that an image is computer-generated instead of real... And for film, you can spend an hour rendering out each frame using supercomputers, whereas with games, we've only got about 30 milliseconds to draw each frame!

 

[edit] You might also be interested in checking out LA Noire -- they use 3D mesh + colour reconstruction from photography, like the soldier above; however, instead of still photos, they used actual video to create animated captures of people's faces. They play these back as "video" files in the game, which recreates all the real colours and deformation of the actors' faces.

http://www.youtube.com/watch?v=ZY7RYCsE9KQ


Edited by Hodgman, 06 December 2013 - 12:37 AM.




