Blending actors and 3D models

Started by
1 comment, last by Thaumaturge 16 years ago
A comment on another forum sparked a thought: while modern 3D characters look very good indeed, they seem to me to nevertheless still lack some of the humanity of real actors - particularly, I think, in facial expression. On the other hand, FMV of real actors can be expensive, I believe, and the costs of filming over and over again can make the inclusion of much choice in the game problematic, leaving it potentially feeling a little low on interactivity.

But might there be a middle way? A way to film relatively few frames of a given character, and use them to inform representations of the character that can nevertheless be posed more or less freely? Such a technique might allow us to preserve some of the expression of a real actor with the malleability and openness to real-time variation of a 3D model.

The obvious way to do this is, of course, to texture a 3D model with images of an actor; my experience (admittedly limited though it is) leads me to feel that this doesn't often work very well, still looking a little fake.

I seem to recall coming across, a few years ago, research into generating 3D models (and, if I recall correctly, their accompanying textures) from photographs of buildings; might this, or some extension of it, be applied to actors? (I realise that this is essentially a version of the method mentioned in the previous paragraph, but I wonder whether it might not produce better results than modelling the character separately.)

On another track, I am reminded of a much older thought: might one find a way to generate an image of a character from an arbitrary angle, based on a set of photographs of them from various known angles, and presumed to be in the same pose? I would imagine that, if such a way were found, motion would be a problem. For example, if the reference images all had the character's arms out horizontal, I suspect that rendering a view of the character with arms held down would be problematic. Perhaps this might be remedied by making individual representations of each part, which might then be linked to a skeleton and blended where they meet.

Finally, the niceties of facial expression would probably call for the filming of separate expressions (including mouth shapes for lip-synching), which would be blended together based on the character's given mood and state. (For example, saying the letter 'o' while smiling and winking.)

So, any thoughts, and if so, what are they? Has any work been done on this? How might it be achieved? (I did perform a brief search of the forum for similar topics, but didn't find much; however, I'm not sure that the sets of terms that I came up with were the best for the task. ^^; )
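To make the "image from an arbitrary angle, given photographs from known angles" idea a little more concrete, here is a minimal sketch in Python/NumPy. All of the function and parameter names are invented for illustration; it simply cross-fades the two reference photographs nearest the requested angle, weighted by angular distance, which is only the crudest form of image-based rendering (a real system would warp the images using correspondences or depth rather than blending pixels directly).

```python
import numpy as np

def blend_nearest_views(reference_images, reference_angles, target_angle):
    """Approximate a view of the character at target_angle (in degrees) by
    cross-fading the two nearest reference photographs.

    reference_images : list of HxWx3 float arrays, all the same size, showing
                       the character in the same pose from different angles.
    reference_angles : list of yaw angles (degrees) at which the photographs
                       were taken.
    target_angle     : the arbitrary angle we want to synthesise.
    """
    angles = np.asarray(reference_angles, dtype=float)

    # Angular distance on a circle, so 350 and 10 degrees count as 20 apart.
    diffs = np.abs((angles - target_angle + 180.0) % 360.0 - 180.0)

    # The two reference views closest to the requested angle.
    first, second = np.argsort(diffs)[:2]

    # Weight each view by how close it is to the target angle;
    # the nearer view gets the larger weight.
    d1, d2 = diffs[first], diffs[second]
    total = d1 + d2
    w_first = 1.0 if total == 0.0 else d2 / total
    w_second = 1.0 - w_first

    return w_first * reference_images[first] + w_second * reference_images[second]
```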
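The "individual representations of each part, linked to a skeleton and blended where they meet" remedy is essentially what skinning already does for conventional 3D models. For comparison, a minimal sketch of linear blend skinning, assuming each vertex carries a weight per bone (again, the names are hypothetical rather than taken from any particular engine):

```python
import numpy as np

def linear_blend_skinning(rest_vertices, bone_transforms, vertex_weights):
    """Deform a mesh with a skeleton, blending parts smoothly where they meet.

    rest_vertices   : Nx3 array of vertex positions in the rest pose.
    bone_transforms : list of B 4x4 matrices mapping the rest pose of each
                      bone to its current pose.
    vertex_weights  : NxB array; row i holds vertex i's weight for each bone
                      (each row sums to 1, with several non-zero entries near
                      the joints, which is what produces the blending).
    """
    n = len(rest_vertices)
    homogeneous = np.hstack([rest_vertices, np.ones((n, 1))])  # Nx4

    result = np.zeros((n, 3))
    for b, transform in enumerate(bone_transforms):
        # Position of every vertex as if it followed bone b rigidly...
        moved = (homogeneous @ transform.T)[:, :3]
        # ...accumulated in proportion to how strongly the vertex follows bone b.
        result += vertex_weights[:, b:b + 1] * moved
    return result
```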
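And the "separate filmed expressions blended by mood and state" idea maps naturally onto morph targets (blend shapes): each captured expression is stored as a per-vertex offset from a neutral face, and the offsets are summed with weights. A minimal sketch, assuming the filmed expressions have already been reduced to vertex deltas, with illustrative names only:

```python
import numpy as np

def blend_expressions(neutral_vertices, expression_deltas, weights):
    """Combine captured facial expressions as weighted morph targets.

    neutral_vertices  : Nx3 array of the neutral face's vertex positions.
    expression_deltas : dict mapping expression name -> Nx3 array of
                        per-vertex offsets from the neutral face.
    weights           : dict mapping expression name -> blend weight,
                        typically in [0, 1], driven by mood and dialogue.
    """
    result = np.asarray(neutral_vertices, dtype=float).copy()
    for name, weight in weights.items():
        result += weight * expression_deltas[name]
    return result

# Saying the letter 'o' while smiling and winking, as in the example above:
# face = blend_expressions(neutral, deltas,
#                          {"mouth_o": 1.0, "smile": 0.6, "wink_left": 1.0})
```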

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan

In my opinion, we don't really need that kind of tech. Some developers are getting pretty close to having convincing facial expressions in their games - have you seen the E3 '06 presentation of a game called Heavy Rain?


IIRC it was done in a couple of months as a test, and they achieved a pretty good result: real-time facial expressions, tears, etc. Granted, it's not extremely realistic, but it's pretty damn good. It has been a couple of years since they showed that demo, and the final title is still under development, so I think we should keep an eye on that game.
That video looks interesting - thank you. ^_^

I watched the start of it (I didn't want to let the whole thing load, since I want to keep an eye on my bandwidth cap), and while the character did indeed look very good, her movements - in particular her mouth movements - were almost a little off-putting. I think that I've seen better lip-synching, although it may just be that it looked poor when used in so good a model, or that I was biased against it.

However, you say that the video is a few years old and that it was done in a fairly short period of time, and those factors may account for the issues that I noticed.

That said, I wonder how long it takes to produce a decent performance for such a character, and what artistic resources (including the number of actual artists) are called for.

I also wonder how the performance of a system such as the one I describe might compare to that of the system used in the video that you linked to. Perhaps it would turn out to be more efficient, and thus a good choice for games that target relatively low-spec machines.

(I doubt that my own machine would manage graphics such as those in the video, for example, it being a little on the old side now.)

MWAHAHAHAHAHAHA!!!

My Twitter Account: @EbornIan

