Need help with something

Hi, I’m having a little bit of trouble figuring out how to lip sync my avatar. Here’s the thing: it’s not a human avatar, it’s an avatar similar to the characters from the game Five Nights at Freddy’s. I’m using an OC model from MMD, and I don’t know how to lip sync these kinds of models. It’s hard for me to use CATS because every part is in Japanese, and not all shapes are available. Can someone help me figure out how to lip sync these kinds of models?

To solve the issue of your shape keys (and any other parts of your model) being in Japanese, go to CATS -> Model Options -> Translate, then Translate All. That should make them readable for you.
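If Translate All ever misses some of the Japanese shape keys, you can also rename the common MMD mouth morphs yourself with a short Blender script. This is just a minimal sketch, assuming the standard MMD names あ/い/う/え/お and that your face/body mesh is the active object; adjust the mapping to whatever your model actually uses.

```python
import bpy

# Assumed mapping from common MMD mouth morph names to romaji.
# Change this to match the shape keys your model actually has.
MMD_MOUTH_NAMES = {
    "あ": "a",
    "い": "i",
    "う": "u",
    "え": "e",
    "お": "o",
}

obj = bpy.context.active_object  # select your face/body mesh first
if obj and obj.data.shape_keys:
    for key in obj.data.shape_keys.key_blocks:
        if key.name in MMD_MOUTH_NAMES:
            key.name = MMD_MOUTH_NAMES[key.name]
            print("Renamed shape key to:", key.name)
else:
    print("Active object has no shape keys")
```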

From the looks of things, the model already has blend shapes to move the mouth, since あ、え、い、お and う (a, e, i, o, u) are there. Once they're translated, you should easily be able to use CATS to generate the proper visemes, assuming those shape keys do in fact move the mouth.

But what should I do about the viseme that uses “CH”? My model does not have that.

You can do this the easy way, which may or may not be fine depending on how well it works, or you can do it the “harder” but proper way and make a new viseme that matches “CH”.

The easy way is to use a shape key close to CH (like E) when you have CATS generate the visemes, then test the visemes in Blender to see whether they work well or not.
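If you want to preview the generated visemes without leaving Blender, you can dial each one up from the Python console. A tiny sketch, assuming CATS produced the usual vrc.v_* shape keys on your active mesh (check your own shape key list for the exact names):

```python
import bpy

obj = bpy.context.active_object  # the mesh CATS generated the visemes on
keys = obj.data.shape_keys.key_blocks

# Reset every viseme, then turn one on to preview it in the viewport.
for key in keys:
    if key.name.startswith("vrc.v_"):
        key.value = 0.0

keys["vrc.v_ch"].value = 1.0  # name assumed; swap in whichever viseme you want to check
```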

The “hard” way: you create a new viseme that matches CH yourself. This can be pretty easy to do if your model has a bone for moving the mouth (there's a shape-key sketch after the link below too).
Here’s a reference sheet https://developer.oculus.com/documentation/unity/audio-ovrlipsync-viseme-reference
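In case it helps, here is one way to bake a custom CH shape key in Blender if you'd rather stick to shape keys than a bone: pose the mouth with whatever existing morphs get closest to the CH shape, then save that mix as a new key. A rough sketch, assuming your translated keys are named "i" and "e"; the mix values are guesses you'd tune by eye against the Oculus reference sheet.

```python
import bpy

obj = bpy.context.active_object  # your face/body mesh
keys = obj.data.shape_keys.key_blocks

# Pose the mouth using existing morphs (values are guesses, tune by eye).
keys["i"].value = 0.6
keys["e"].value = 0.3

# Bake the current mix into a brand-new shape key...
bpy.ops.object.shape_key_add(from_mix=True)
new_key = obj.data.shape_keys.key_blocks[-1]
new_key.name = "ch"  # point CATS at this key for the CH viseme

# ...then reset the source morphs so only the new key holds the pose.
keys["i"].value = 0.0
keys["e"].value = 0.0
```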

Ok. I’ll look into that. Thanks.

You may want to use both a jaw bone and blend shapes for those kinds of models; that part is set up inside Unity.

When I tried using the jaw bone with previous models, the jaw wouldn’t move up or down.

The times I tested a jaw bone, the avatar's mouth was always stuck open. Granted, I couldn't test with a microphone on those occasions, so I guess I need to test more.