Hey, so I’m curious about something. Is there a way for a 3.0 avatar to use lipsync in MMD motions? If so, I’d like to figure out how to make my models sing while an MMD motion plays.
If you can convert them from bone motions into blendshapes (fyi, Cats Blender Plugin has a function to do exactly that), then yes.
It’s done either with a bone-based mouth rig, or with facial shape keys named exactly to match the animation file.
So, the reason I’m asking is that there are some worlds where you can have your avatar sing and dance to different songs via motions. Most of my 2.0 avatars work in them, and I was curious whether my 3.0 avatars would work too. That’s what I’m trying to figure out.
Most of those worlds only lip-sync compatible avatars, matched either by blend shape names or by bone names.
Blendshapes for lip syncing are somewhat standardized, and MMD animations can include these blendshape animations.
The only thing that would need to be done is rename the ones in the animation to the format VRC uses.
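The renaming itself is normally handled by Cats inside Blender, but as a rough sketch of the idea, it boils down to a name mapping like the one below. Note the exact target names here (`vrc.v_aa` etc.) and which MMD morphs map to them are assumptions for illustration; check what the specific world and animation actually expect.

```python
# Illustrative sketch only: remap MMD-style Japanese morph names to
# VRChat-style viseme names. The mapping below is an assumption, not
# an authoritative list.
MMD_TO_VRC = {
    "あ": "vrc.v_aa",
    "い": "vrc.v_ih",
    "う": "vrc.v_ou",
    "え": "vrc.v_e",
    "お": "vrc.v_oh",
}

def rename_shape_keys(names):
    """Return the list with any recognized MMD morph names replaced;
    unrecognized names pass through unchanged."""
    return [MMD_TO_VRC.get(n, n) for n in names]

print(rename_shape_keys(["あ", "う", "Blink"]))
# → ['vrc.v_aa', 'vrc.v_ou', 'Blink']
```

In practice you’d apply a mapping like this to the shape keys on the mesh (or let Cats do it for you) rather than run a standalone script.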
What would I need to do for that?
@aytimothy already answered your question on how to do that in blender.
Yup, Cats will convert those.