unrelated to what i’ve been talking about but there is definitely something people need to realize when it comes to animating faces for a game like andromeda vs. animating faces for a game like uncharted or the last of us
in a game like the last of us, the actors' facial performances are usually mocapped for the dialogue, so the full range of expression on the actor's face makes it into the game. games like the last of us are also smaller in scale and have a far more contained story.
however, in something like andromeda, the protagonist alone can have something like several thousand lines of dialogue once you take into account all the possible dialogue trees.
so in games like that, which also have character creation, what's going on in the backend is that individual vowel and consonant sounds get their own animations (these mouth shapes are called visemes): one for O, one for TH, one for OO, one for AH, one for AY, and so on. separately, different expressions are also animated: angry, confused, sad, happy, surprised. the code then strings those animations together based on the dialogue, and each line is tagged with a mood, like "a little angry" or "a little happy, very surprised," to decide which expression to layer on and how strongly. this is why you see so many repeating expressions in bioware games.
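a very rough sketch of what that kind of system could look like. every name and number here is made up for illustration, this is not bioware's actual code, just the general shape of the idea:

```python
# toy sketch of procedural dialogue animation: map each phoneme in a
# line to a viseme (mouth-shape) clip, then layer a mood-tagged
# expression on top with a blend weight. all names are illustrative.

# viseme library: one mouth-shape animation per sound
VISEMES = {"O": "mouth_O", "TH": "mouth_TH", "OO": "mouth_OO",
           "AH": "mouth_AH", "AY": "mouth_AY"}

# expression library: one facial pose per mood, blended by weight
EXPRESSIONS = {"angry": "face_angry", "happy": "face_happy",
               "sad": "face_sad", "surprised": "face_surprised"}

def animate_line(phonemes, mood_tags):
    """build an animation track for one line of dialogue.

    phonemes:  list like ["AH", "OO", "AY"] derived from the dialogue
    mood_tags: dict like {"angry": 0.3}, i.e. "a little angry"
    """
    # mouth track: one viseme clip per recognized sound, in order
    mouth_track = [VISEMES[p] for p in phonemes if p in VISEMES]
    # expression layer: (pose, weight) pairs blended over the mouth track
    expression_layer = [(EXPRESSIONS[m], weight)
                        for m, weight in mood_tags.items()
                        if m in EXPRESSIONS]
    return {"mouth": mouth_track, "expression": expression_layer}

# "a little happy, very surprised"
track = animate_line(["AH", "OO", "AY"],
                     {"happy": 0.25, "surprised": 0.9})
```

the point is that nobody hand-animates each line; the same small pool of viseme clips and expression poses gets recombined for thousands of lines, which is exactly why the same faces keep showing up.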
it is significantly harder to "realistically" animate a morphable 3D head model that can be customized in, technically speaking, a million different ways, than it is to animate like… nathan drake. and then to do it in a way that works for most of those potential iterations without the 3D model imploding and destroying itself in the process.
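to make the "million different ways" problem concrete: character-creator sliders are typically morph targets (blendshapes), i.e. per-vertex offsets added on top of a base mesh, and the same canned viseme animation has to be layered onto whatever combination of sliders the player picked. a toy sketch with made-up numbers (not andromeda's actual pipeline), with vertex positions reduced to single values for readability:

```python
# toy blendshape math: the final face is the base mesh, plus every
# player-chosen morph offset scaled by its slider, plus the current
# animation offset. vertices are single numbers here for simplicity.

BASE_MESH = [0.0, 0.0, 0.0]          # three "vertices" of a face

# character-creator sliders, each a per-vertex offset at full strength
MORPHS = {"wide_jaw": [0.0, 0.5, 0.0],
          "big_nose": [0.3, 0.0, 0.0]}

# a viseme animation, authored once against the default head
VISEME_O = [0.0, -0.2, 0.1]

def final_face(slider_values, anim_offset):
    """combine base mesh + weighted morphs + one animation frame."""
    verts = list(BASE_MESH)
    for name, weight in slider_values.items():
        for i, delta in enumerate(MORPHS[name]):
            verts[i] += weight * delta
    for i, delta in enumerate(anim_offset):
        verts[i] += delta
    return verts

# the ONE authored animation has to look right on every slider combo:
extreme_jaw = final_face({"wide_jaw": 1.0}, VISEME_O)
half_nose = final_face({"big_nose": 0.5}, VISEME_O)
```

the animator only ever authored VISEME_O against the default head, but it gets added blindly to every possible face. on extreme slider combinations that addition can push vertices somewhere the animator never saw, which is the "model imploding" failure mode.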
i do get the concerns about the face animations in andromeda, but honestly, let's not compare them to a naughty dog game, or any other game with maybe 20 hours of gameplay maximum and no face customization whatsoever, because the whole backend process behind those animations is so different.