Welcome to the Developer Update for February 13.
Today’s featured world is 憩い by 神羽 もえ.
Announcements
2025.1.2 is in Open Beta!
Yep! We have a new Open Beta out – you can check out the full notes here.
We’ll be talking about one of the features (Camera Dolly!) a little later on in this Dev Update, but you should check out the notes to see the big changes – notably, Main Menu updates, Age Verification changes, and a bunch of SDK updates that are sure to make creation in VRChat easier.
Stream! Again! Tomorrow!
We’ll be streaming (as usual) tomorrow at 2PM PST on Twitch! We’ll be checking out the aforementioned Camera Dolly. So come see it in action!
The Next Jam
We’ll be announcing our new World Jam on February 24 – that’s in 11 days!
Want to discuss your excitement or plans for the next Jam? Join our Discord and check out the #vrchat-jams channel!
Y’all Are Loud
As we go through and evaluate the data we gathered over the holidays, especially our busiest time right around New Year’s Eve, one graph in particular caught our attention:
This unsuspecting green line shows our global data throughput, specifically voice packets. Now, do you see all those spikes? How they all happen exactly at the hour? Yep, that’s all of you shouting “Happy New Year”, or whatever your local timezone’s version of it is!
See, your math classes lied to you - sometimes graphs can be fun!
Development Updates
Camera Dolly is in Open Beta!
The Camera Dolly is a new VRC+ exclusive feature for creators who want a little more oomph in their camera.
In short, it allows you to set a pre-defined path for the camera. Think of it like an in-client animation system for the camera, giving VRChat videographers a ton of extra power to do… well, whatever you can dream up!
Specifically, it adds:
- Path Management: Paths let you chain multiple camera animations in sequence, and new camera controls enable more fine-tuned camera usage. Camera parameters the dolly can currently animate include:
- Position
- Rotation
- Zoom
- Focal distance and aperture
- Look-at-me offsets (these are new!)
- Green screen color (this is new too!)
- We’ve included a number of other configuration options to give you even more control over the camera’s movement and behavior. How fast does it move? How does it transition between states? Does it loop? You can do a lot here.
- …and it all works with OSC.
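Since the dolly is OSC-controllable, you can drive it from outside the client. Here's a minimal, stdlib-only sketch of sending an OSC message to VRChat's default OSC port. Note that `/dolly/Speed` is a placeholder address for illustration only; check the Camera Dolly docs for the real address list.

```python
import socket
import struct

def osc_message(address: str, value: float) -> bytes:
    """Encode a single-float OSC 1.0 message (address + ",f" type tag)."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# VRChat listens for OSC on UDP port 9000 by default.
# "/dolly/Speed" is a hypothetical address used for illustration.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/dolly/Speed", 1.5), ("127.0.0.1", 9000))
```

Libraries like python-osc wrap this encoding for you; the point is just that anything that can emit OSC packets can script the dolly.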
Camera Dolly is likely to remain in beta after 2025.1.2 ships, as it’s fairly big and complicated.
For more details, we’d strongly suggest reading the docs!
Also, we have a demo world! Go check it out.
Build & Test Avatars on Quest and Android
You can now build and test avatars on Quest and Android using SDK beta version 3.7.6 and Android/Quest client beta version 2025.1.2.
This makes iteration on avatars much quicker as you can instantly see changes to your avatar in the game client with a single click from the SDK. No need to wait for uploads anymore. Follow the instructions for setting up Android/Quest build and test here.
Impostors 1.2.0 is Live!
We’ve made generation more reliable and fixed quite a few bugs. Here’s the changelog!
- Fixed the most common cause of impostors being stuck in a t-pose.
- Actually, these impostors were completely disconnected from their animators, so their skeletons wouldn’t animate at all.
- This mainly affected avatars with multiple animators.
- PhysBones are now simulated briefly before capture.
- Fixed an issue that caused impostors’ size/scale not to match the original avatar.
- Impostors will now fail to generate if all parts would be invisible.
- Fixed several bugs that could lead to impostors missing body parts.
- Fallback reflections no longer cause wildly incorrect lighting/colors on impostors.
- Improved logic to be more forgiving of unusual hierarchies and extra parts.
- Fixed a few causes of visual inconsistencies on avatars with depth-based shader effects.
Since these are fixed on the generator side, they’ll only apply when impostors are (re-)generated.
We’ll do this automatically, but popular avatars will be prioritized and there are a lot of avatars! If you have an avatar that needs these fixes, please cut to the front of the line by regenerating impostors on the website.
Very Poor Avatars on Mobile
You might have noticed that some users were able to use and see Very Poor Avatars on Android mobile these past few weeks. This is one of the tests we’ve been running, and we’ve determined that it’s ready to roll out to everyone!
So what does this mean?
As an Android mobile user, you’re now able to see up to 4 Very Poor Avatars, including your own. You’ll need to manually show other users to see them, but your own Very Poor Avatar will be visible regardless.
This works on a rotation when viewing others: once the cap of 4 is reached, manually showing another user wearing a Very Poor Avatar sets the oldest shown one back to “Use Safety Settings”. These overrides only last for the current session; upon relaunching VRChat, all users you had shown wearing Very Poor Avatars revert to “Use Safety Settings”.
Avatar visibility below Very Poor is unaffected by this change.
Basically, avatar visibility on Android mobile now works like how it does on Quest, but with a cap of 4 Very Poor Avatars shown at once. Previously, users on Android mobile could not see or use any Very Poor Avatars.
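The rotation described above behaves like a small fixed-capacity queue with oldest-first eviction. Here's an illustrative model of that behavior (this is a sketch of the described rules, not VRChat's actual implementation):

```python
from collections import OrderedDict

CAP = 4  # maximum Very Poor Avatars shown at once, per the rollout

class VeryPoorRotation:
    """Session-only model of the show/evict behavior described above."""

    def __init__(self) -> None:
        # user_id -> None, in show order (oldest first)
        self.shown = OrderedDict()

    def show(self, user_id: str):
        """Show a user's Very Poor Avatar. Returns the user evicted back
        to 'Use Safety Settings' if the cap was already reached, else None."""
        evicted = None
        if user_id not in self.shown and len(self.shown) == CAP:
            evicted, _ = self.shown.popitem(last=False)  # evict oldest shown
        self.shown[user_id] = None
        return evicted

rotation = VeryPoorRotation()
for user in ("a", "b", "c", "d"):
    rotation.show(user)
# Showing a 5th user bumps the oldest ("a") back to safety settings.
rotation.show("e")
```

Because the state lives only in memory here, "relaunching" (constructing a new `VeryPoorRotation`) naturally resets everyone to safety settings, matching the per-session behavior.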
We tried this as a test first to ensure that there was negligible impact on performance and crash rate, which turned out to be the case when limited to 4 Very Poor Avatars shown at a time.
For context: The vast majority of Android avatars are Very Poor, which made finding avatars exceptionally difficult for new and existing users. This was one of mobile’s biggest feature requests.
While this is only available on Android for now, we’re aiming to bring this feature to iOS in the near future.
We’ve Fixed Linux Support for Building Worlds!
In an upcoming SDK release you’ll be able to upload worlds using the VRChat SDK and Unity for Linux.
We have the technology!
Built-in SDK Avatar Optimization is Coming Soon
We’re working on a built-in avatar optimizer! You can see it in action here:
In short, our optimizer does the following:
- Mesh merging
- The tool will try to merge all meshes it can. Some meshes are excluded due to animations, although this might be improved in the future.
- Blendshape baking
- Blendshapes that are being used by animators or visemes will be excluded. This includes MMD blendshapes! Users can also define custom blendshapes to be excluded, if they’d like.
- Texture atlasing
- Texture atlasing will require a shader with an atlasing variant. If a shader meets certain criteria, the tool will be able to combine and atlas its textures properly. This works with our Standard Lite shader, to start.
- Animation mapping
- After all other changes have been made, the animation remapper will remap existing animations to ensure they are still compatible with the new structure.
- This means toggles should still work after the process finishes.
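At its core, the remapping step rewrites the object paths that animation curves bind to, so curves that targeted a pre-merge mesh point at the merged result instead. A simplified sketch of that idea, where a binding is a (path, property) pair and all names are illustrative only:

```python
def remap_animations(bindings, merge_map):
    """Rewrite each animation binding's object path using the merge map,
    so existing animations keep targeting valid objects after merging.
    Paths not affected by the merge pass through unchanged."""
    return [(merge_map.get(path, path), prop) for path, prop in bindings]

# Hypothetical example: two clothing meshes were merged into one.
merge_map = {"Body/Shirt": "Body_merged", "Body/Pants": "Body_merged"}
bindings = [
    ("Body/Shirt", "blendShape.Smile"),  # remapped to the merged mesh
    ("Hair", "m_IsActive"),              # untouched, Hair wasn't merged
]
remapped = remap_animations(bindings, merge_map)
```

The real remapper has to handle much more (blendshape renames, material slots, and so on), but this is the shape of the problem: a lookup table from old paths to new ones, applied to every curve.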
In addition, the optimizer includes a fallback system for materials on mobile platforms. On those platforms, the tool will attempt to change materials to a compatible shader when the avatar is uploaded.
The optimizer can be fully disabled or otherwise configured based on your needs! There’s also a preview function built into the SDK, allowing you to see what’s going to happen before you upload your avatar.
Conclusion
That’s it for this Developer Update! We’ll see you next time on February 27!