Some questions...

3 comments, last by amitN 11 years, 8 months ago
Hello,

1. Is there any way of animating lip sync on a 3D character based on an audio file (or direct text) on the go? I mean live, i.e. as the audio progresses it dynamically lip syncs, instead of relying on a saved animation pre-rendered to match the lips.

2. Is it possible to have an animated avatar (like the above, with the lip syncing and all) placed inside a generic form (as in a windowed application from Visual Studio .NET, or some Java-generated form from Eclipse/NetBeans) and have it follow the mouse movements with its eyes, for example, and be interactive to some extent?

Thanks.



1. Yes
2. Yes


Now, I'll guess that what you really wanted to ask is not "is there a way" or "is it possible", but rather how it is done.


This is where it gets tricky, very tricky. The easiest way would probably be to start by converting text to phonetics (any decent dictionary has the phonetics for all common words) and then pose the lips for each phoneme in a modelling tool. Then it's just a matter of animating between those states and syncing the phonetic data with the audio.
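A rough sketch of that first step in Java, assuming a hand-rolled pronunciation dictionary and a made-up set of viseme (mouth pose) names; real data would come from a phonetic dictionary such as CMUdict:

import java.util.*;

public class TextToVisemes {
    // Hypothetical pronunciation dictionary: word -> phoneme sequence.
    static final Map<String, String[]> DICTIONARY = Map.of(
            "hello", new String[] { "HH", "AH", "L", "OW" },
            "world", new String[] { "W", "ER", "L", "D" }
    );

    // Several phonemes share one mouth pose (viseme), so the pose set stays small.
    static final Map<String, String> PHONEME_TO_VISEME = Map.of(
            "HH", "rest", "AH", "open", "L", "tongue_up", "OW", "round",
            "W", "round", "ER", "open", "D", "tongue_up"
    );

    public static List<String> visemesFor(String sentence) {
        List<String> visemes = new ArrayList<>();
        for (String word : sentence.toLowerCase().split("\\s+")) {
            for (String phoneme : DICTIONARY.getOrDefault(word, new String[0])) {
                visemes.add(PHONEME_TO_VISEME.getOrDefault(phoneme, "rest"));
            }
        }
        return visemes;
    }

    public static void main(String[] args) {
        // "hello world" -> one mouth pose per phoneme, in speaking order.
        System.out.println(visemesFor("hello world"));
    }
}

Note that several phonemes collapse onto the same viseme, which is why you only need to model a handful of mouth poses.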

The hard part would be to do the same for an audio file. Voice recognition software goes through most of the process of converting audio to phonetics and words, though, so googling voice recognition will give you some hints. (Personally, if I were to do this for a game with static voice files, I would run existing voice recognition software over them during development and just include time-synced phonetic data with the audio.)
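To make the "time-synced phonetic data" idea concrete, here is a sketch of the runtime side: the data is just a list of (start time, viseme) entries generated offline, and each frame you look up the active entry from the audio playback clock. All names and timings here are illustrative, not tied to any particular engine or audio library:

import java.util.List;

public class VisemeTimeline {
    // One entry of the offline-generated, time-synced phonetic data:
    // the mouth pose (viseme) to show and the time, in seconds, at which it starts.
    record Keyframe(double startTime, String viseme) {}

    // What the renderer needs each frame: the pose we are leaving,
    // the pose we are moving toward, and how far along we are (0..1).
    record Sample(String from, String to, double blend) {}

    private final List<Keyframe> keys;

    VisemeTimeline(List<Keyframe> keysSortedByTime) {
        this.keys = keysSortedByTime;
    }

    // Look up the active keyframe from the audio playback clock.
    public Sample sample(double audioTime) {
        int i = 0;
        while (i + 1 < keys.size() && keys.get(i + 1).startTime() <= audioTime) {
            i++;
        }
        Keyframe current = keys.get(i);
        Keyframe next = keys.get(Math.min(i + 1, keys.size() - 1));
        double span = Math.max(next.startTime() - current.startTime(), 1e-6);
        double blend = Math.min(Math.max((audioTime - current.startTime()) / span, 0.0), 1.0);
        return new Sample(current.viseme(), next.viseme(), blend);
    }

    public static void main(String[] args) {
        VisemeTimeline timeline = new VisemeTimeline(List.of(
                new Keyframe(0.00, "rest"),
                new Keyframe(0.15, "open"),
                new Keyframe(0.30, "round"),
                new Keyframe(0.50, "rest")
        ));
        // Pretend the audio clock says we are 0.22 s into playback:
        // the mouth is partway between the "open" and "round" poses.
        System.out.println(timeline.sample(0.22));
    }
}

The blend value is what you would feed into the pose interpolation discussed further down.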

If you want to do this with user-provided voice samples, you could look at any of the voice recognition libraries out there (there are a bunch of both commercial and open-source ones you can use).
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
3. Alright, let's assume I was able to convert a text file to phonetics and got the timing right with regard to the audio. What software can I use to apply the lip sync to the model? Would Blender be OK?

4. And how do I integrate it inside a .NET or Java form? (For example, what format can I import from Blender, or does it need to be modelled within the form framework itself, like WPF for .NET?)

[The thing is, I know some programming, did some CAD modelling before, and have some basic animation knowledge, but I've never ventured into game programming. Now I've got a uni assignment in which knowing the above would be of great help, and I'm searching for ideas on how to do it.]



3. You can use Blender to create the poses or animations. (Because of 4, you don't have to try to get the syncing done in Blender; you just need poses for each phoneme, and some might share a pose.)

4. You export the model and animation from Blender to a format of your choice, then load it in your application and render the appropriate pose using a 3D API such as OpenGL, Direct3D, or XNA. (You probably want to interpolate between poses.)
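In code form, "interpolate between poses" can be as simple as a per-vertex linear blend between two exported mouth shapes. This is only a sketch under the assumption that each pose is stored as a flat array of x, y, z positions with the same vertex order; loading the exported file and the actual OpenGL/Direct3D draw calls are left out:

public class PoseBlend {
    // Linearly interpolate every vertex coordinate between two poses.
    // Both arrays hold x, y, z triples for the same vertices in the same order,
    // e.g. the "closed" and "open" mouth shapes exported from Blender.
    static float[] blendPoses(float[] poseA, float[] poseB, float t) {
        if (poseA.length != poseB.length) {
            throw new IllegalArgumentException("poses must have the same vertex count");
        }
        float[] result = new float[poseA.length];
        for (int i = 0; i < poseA.length; i++) {
            result[i] = poseA[i] + (poseB[i] - poseA[i]) * t;
        }
        return result;
    }

    public static void main(String[] args) {
        // Two toy "poses" of a single triangle, just to show the blend.
        float[] closed = { 0f, 0f, 0f,   1f, 0f, 0f,   0.5f, 0.2f, 0f };
        float[] open   = { 0f, 0f, 0f,   1f, 0f, 0f,   0.5f, 0.8f, 0f };
        System.out.println(java.util.Arrays.toString(blendPoses(closed, open, 0.5f)));
    }
}

The blended vertex array is what you would upload to the 3D API each frame, driven by the blend factor from the time-synced phonetic data.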
[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Thanks, this helped me a lot in understanding how it would be feasible.

This topic is closed to new replies.
