(This post is now obsolete: Oculus has released their official version, available here: https://developer.oculus.com/documentation/audiosdk/latest/concepts/book-audio-ovrlipsync-unreal/)
After messing with PocketSphinx’s phoneme recognition and not getting the results I wanted, I looked back into porting Oculus’ LipSync plugin for Unity over to UE4, since the Unity plugin is really just a wrapper around a native DLL plus some examples of how to use it.
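For the curious, the core of the porting approach is just loading that same native library from UE4 and resolving its exports by hand. Here's a minimal sketch; the DLL name, the `ovrLipSyncDll_Initialize` export, and its signature are assumptions based on the Unity wrapper, so verify them against the binary you actually ship:

```cpp
// Minimal sketch of loading the OVRLipSync native library from UE4.
// The DLL name, export name, and signature below are assumptions
// taken from the Unity C# wrapper -- check them against your binary.
#include "HAL/PlatformProcess.h"

// Hypothetical function pointer type mirroring the Unity wrapper's init call.
typedef int (*OVRLipSyncInitializeFn)(int SampleRate, int BufferSize);

void* LoadLipSyncDll()
{
    // Assumed location: the DLL sitting next to the plugin's binaries.
    void* DllHandle = FPlatformProcess::GetDllHandle(TEXT("OVRLipSync.dll"));
    if (DllHandle)
    {
        // Resolve the initialization export by name.
        OVRLipSyncInitializeFn Initialize = (OVRLipSyncInitializeFn)
            FPlatformProcess::GetDllExport(DllHandle, TEXT("ovrLipSyncDll_Initialize"));
        if (Initialize)
        {
            Initialize(/*SampleRate=*/48000, /*BufferSize=*/1024);
        }
    }
    return DllHandle;
}
```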
I’m happy to report that I’ve got a basic version of the OVRLipSync plugin working in UE4, and it’s ready for people to use. I’ve also made an example project showing how to use the UE4 version of the plugin (it’s quite straightforward; see the example images below). The project includes an example mesh so you can see it in action, and it should work out of the box:
Example Project + Plugin Repo:
Plugin Only Repo:
For those who just want a quick rundown of how to use it without downloading the example project, here are some screenshots of the VisemeGenerationActor-derived blueprint class.
VisemeGenerationActor event graph setup:
As long as the mesh has the appropriate morph targets listed above, it will work decently well. I’m sure there are things I’m doing wrong, and bug reports are welcome!
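If you’re wiring this up to your own character in C++ rather than blueprint, the essence is just mapping each frame’s viseme scores onto matching morph targets. A rough sketch follows; the viseme names are Oculus’ standard fifteen-viseme set, and the `VisemeScores` array is a stand-in for whatever the plugin actually hands you each frame, not necessarily its real output type:

```cpp
#include "Components/SkeletalMeshComponent.h"

// Oculus' standard viseme set; the mesh's morph targets must use
// matching names for this one-to-one mapping to line up.
static const FName VisemeNames[] = {
    TEXT("sil"), TEXT("PP"), TEXT("FF"), TEXT("TH"), TEXT("DD"),
    TEXT("kk"),  TEXT("CH"), TEXT("SS"), TEXT("nn"), TEXT("RR"),
    TEXT("aa"),  TEXT("E"),  TEXT("ih"), TEXT("oh"), TEXT("ou")
};

// Apply one frame of viseme scores (0..1 per viseme) to the mesh.
// 'VisemeScores' stands in for the array the plugin produces per frame.
void ApplyVisemes(USkeletalMeshComponent* Mesh, const TArray<float>& VisemeScores)
{
    const int32 Count = FMath::Min<int32>(VisemeScores.Num(), ARRAY_COUNT(VisemeNames));
    for (int32 i = 0; i < Count; ++i)
    {
        Mesh->SetMorphTarget(VisemeNames[i], VisemeScores[i]);
    }
}
```

Calling something like this every time a new viseme frame arrives (e.g. from the event the VisemeGenerationActor fires) is all the "lip sync" there is on the rendering side; the DLL does the heavy lifting.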