Developer Update - 13 February 2025

Welcome to the Developer Update for February 13.

Today’s featured world is 憩い by 神羽 もえ.

Announcements

2025.1.2 is in Open Beta!

Yep! We have a new Open Beta out – you can check out the full notes here.

We’ll be talking about one of the features (Camera Dolly!) a little later on in this Dev Update, but you should check out the notes to see the big changes – notably, Main Menu updates, Age Verification changes, and a bunch of SDK updates that are sure to make creation in VRChat easier.

Stream! Again! Tomorrow!

We’ll be streaming (as usual) tomorrow at 2PM PST on Twitch! We’ll be checking out the aforementioned Camera Dolly. So come see it in action!

The Next Jam

We’ll be announcing our new World Jam on February 24 – that’s in 11 days!

Want to discuss your excitement or plans for the next Jam? Join our Discord and check out the #vrchat-jams channel!

Y’all Are Loud

As we go through and evaluate the data we gathered over the holidays, especially our busiest time right around New Year’s Eve, one graph in particular caught our attention:

This unsuspecting green line shows our global data throughput, specifically voice packets. Now, do you see all those spikes? How they all happen exactly on the hour? Yep, that’s all of you shouting “Happy New Year”, or whatever your local timezone’s version of it is!

See, your math classes lied to you - sometimes graphs can be fun!

Development Updates

Camera Dolly is in Open Beta!

The Camera Dolly is a new VRC+ exclusive feature for creators who want a little more oomph in their camera.

In short, it allows you to set a pre-defined path for the camera. Think of it like an in-client animation system for the camera, giving VRChat videographers a ton of extra power to do… well, whatever you can dream up!
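As a mental model, a dolly path is just a set of keyframes interpolated over time. Here's an illustrative sketch (not VRChat's actual implementation) using plain linear interpolation between position waypoints; the real feature has its own transition and easing options:

```python
def sample_path(keyframes, t):
    """Sample a camera path at time t.

    keyframes: list of (time, (x, y, z)) tuples, sorted by time.
    Returns the linearly interpolated position at t, clamping to
    the first/last keyframe outside the path's time range.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, p0), (t1, p1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)  # normalized progress between keyframes
            return tuple(c0 + (c1 - c0) * a for c0, c1 in zip(p0, p1))
    return keyframes[-1][1]
```

The same idea extends to the other animatable parameters (rotation, zoom, aperture, and so on) by interpolating each one alongside position.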

In short, it adds:

  • Path Management: paths let you chain multiple camera animations and play them in sequence. New camera controls have also been added that enable more fine-tuned camera usage. Camera parameters the dolly can currently animate include:
    • Position
    • Rotation
    • Zoom
    • Focal distance and aperture
    • Look-at-me offsets (these are new!)
    • Green screen color (this is new too!)
  • We’ve included a number of other configuration options to give you even more control over the camera’s movement and behavior. How fast does it move? How does it transition between states? Does it loop? You can do a lot here.
  • …and it all works with OSC.
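To give a rough idea of what driving the dolly over OSC could look like, here's a minimal sketch that hand-encodes a single-float OSC message and sends it to VRChat's default OSC port (9000). The address `/usercamera/dolly/speed` is a made-up placeholder — check the Camera Dolly docs for the actual endpoints.

```python
import socket
import struct

def osc_message(address, value):
    """Encode a single-float OSC message: padded address string,
    padded ",f" typetag, then a big-endian 32-bit float."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte boundaries.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

# Hypothetical address -- see the Camera Dolly docs for real OSC paths.
packet = osc_message("/usercamera/dolly/speed", 0.5)

# VRChat listens for OSC on localhost:9000 by default.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(packet, ("127.0.0.1", 9000))
```

In practice you'd likely use a library like python-osc instead of raw packets; the sketch above just shows there's no magic involved.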

Camera Dolly is likely to remain in beta after 2025.1.2 ships, as it’s fairly big and complicated.

For more details, we’d strongly suggest reading the docs!

Also, we have a demo world! Go check it out.

Build & Test Avatars on Quest and Android

You can now build and test avatars on Quest and Android using the SDK beta version 3.7.6 and Android/Quest client beta version 2025.1.2.

This makes iteration on avatars much quicker, as you can instantly see changes to your avatar in the game client with a single click from the SDK. No need to wait for uploads anymore. Follow the instructions for setting up Android/Quest Build & Test here.

Impostors 1.2.0 is Live!

We’ve made generation more reliable and fixed quite a few bugs. Here’s the changelog!

  • Fixed the most common cause of impostors being stuck in a t-pose.
    • Actually, these impostors were completely disconnected from their animators, so their skeletons wouldn’t animate at all.
    • This mainly affected avatars with multiple animators.
  • PhysBones are now simulated briefly before capture.
  • Fixed an issue that caused impostors’ size/scale not to match the original avatar.
  • Impostors will now fail to generate if all parts would be invisible.
  • Fixed several bugs that could lead to impostors missing body parts.
  • Fallback reflections no longer cause wildly incorrect lighting/colors on impostors.
  • Improved logic to be more forgiving of unusual hierarchies and extra parts.
  • Fixed a few causes of visual inconsistencies on avatars with depth-based shader effects.

Since these are fixed on the generator side, they’ll only apply when impostors are (re-)generated.

We’ll do this automatically, but popular avatars will be prioritized and there are a lot of avatars! If you have an avatar that needs these fixes, please cut to the front of the line by regenerating impostors on the website.

Very Poor Avatars on Mobile

You might have noticed that some users were able to use and see Very Poor Avatars on Android mobile these past few weeks. This is one of the tests we’ve been running, and we’ve determined that it’s ready to roll out to everyone!

So what does this mean?

As an Android mobile user, you’re now able to see up to 4 Very Poor Avatars, including your own. You’ll need to manually show other users to see them, but your own Very Poor Avatar will be visible regardless.

This works in rotation when viewing others: once the cap of 4 is reached, manually showing another user’s Very Poor Avatar reverts the oldest shown one to “Use Safety Settings”. This only lasts for that session – upon relaunching VRChat, all users you manually showed with Very Poor Avatars revert to “Use Safety Settings”.
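The rotation described above is essentially a small "least recently shown" list capped at 4. An illustrative sketch (not VRChat's actual code) of that behavior:

```python
from collections import OrderedDict

class VeryPoorShowList:
    """Tracks manually-shown Very Poor avatars with a cap of 4.
    Showing a fifth user reverts the oldest shown one."""

    CAP = 4

    def __init__(self):
        self._shown = OrderedDict()  # user_id -> True, oldest first

    def show(self, user_id):
        """Show a user's Very Poor avatar. Returns the user reverted to
        'Use Safety Settings' if the cap was reached, else None."""
        if user_id in self._shown:
            self._shown.move_to_end(user_id)  # refresh, nothing reverted
            return None
        reverted = None
        if len(self._shown) >= self.CAP:
            reverted, _ = self._shown.popitem(last=False)  # drop oldest
        self._shown[user_id] = True
        return reverted

    def shown(self):
        return list(self._shown)
```

Your own avatar sits outside this list, since it's always visible regardless.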

Avatar visibility below Very Poor is unaffected by this change.

Basically, avatar visibility on Android mobile now works like how it does on Quest, but with a cap of 4 Very Poor Avatars shown at once. Previously, users on Android mobile could not see or use any Very Poor Avatars.

We ran this as a test first to ensure there was negligible impact on performance and crash rate, which turned out to be the case when limited to 4 Very Poor Avatars shown at a time.

For context: The vast majority of Android avatars are Very Poor, which made finding avatars exceptionally difficult for new and existing users. This was one of mobile’s biggest feature requests.

While this is only available on Android for now, we’re aiming to bring this feature to iOS in the near future.

We’ve Fixed Linux Support for Building Worlds!

In an upcoming SDK release you’ll be able to upload worlds using the VRChat SDK and Unity for Linux.

We have the technology!

Built-in SDK Avatar Optimization is Coming Soon

We’re working on a built-in avatar optimizer! You can see it in action here:

In short, our optimizer does the following:

  • Mesh merging
    • The tool will try to merge all meshes it can. Some meshes are excluded due to animations, although this might be improved in the future.
  • Blendshape baking
    • Blendshapes that are being used by animators or visemes will be excluded. This includes MMD blendshapes! Users can also define custom blendshapes to be excluded, if they’d like.
  • Texture atlasing
    • Texture atlasing will require a shader with an atlasing variant. If a shader meets certain criteria, the tool will be able to combine and atlas each texture properly. This works with our Standard Lite shader, to start.
  • Animation mapping
    • After all other changes have been made, the animation remapper remaps existing animations to ensure they remain compatible with the new format.
    • This means toggles should still work after the process finishes.
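The animation-mapping step can be pictured as a path rewrite: curves that targeted a mesh that got merged away are repointed at the combined mesh. A simplified sketch (the data shapes here are invented for illustration, not the SDK's actual types):

```python
def remap_animation_paths(curves, merge_map):
    """Repoint animation curves after mesh merging.

    curves:    {(object_path, property): keyframes} -- the original animation.
    merge_map: {old_object_path: new_object_path} -- produced by the merger.
    Paths not in merge_map (e.g. untouched objects) pass through unchanged.
    """
    remapped = {}
    for (path, prop), keys in curves.items():
        remapped[(merge_map.get(path, path), prop)] = keys
    return remapped
```

This is why toggles keep working: the animation still drives the same logical property, just on the merged object.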

In addition, the optimizer includes a fallback system for materials on mobile platforms. On those platforms, the tool will attempt to change materials to a compatible shader when the avatar is uploaded.

The optimizer can be fully disabled or otherwise configured based on your needs! There’s also a preview function built into the SDK, allowing you to see what’s going to happen before you upload your avatar.

Conclusion

That’s it for this Developer Update! We’ll see you next time on February 27!

17 Likes

I love it xD

5 Likes

BUILT IN AVATAR OPTIMIZER AAAAAAA
Excuse the screaming, I’m so happy. Dolly then this, y’all are on a roll.

Could we get VRC constraints simulated like this too? I’ve got avatars with “Fake” arms that follow the real ones via constraints in order to be animated while maintaining IK.

11 Likes

Any plans for decimation for the avatar optimizer?

10 Likes

Super excited about the automatic atlasing. It’s something I’ve had shower thoughts about before, but without a single standard it seemed almost impossible to do well.

Is this something where the community can make atlasing compatible versions of their shaders?

5 Likes

nice :3

1 Like

Yes, the idea is that custom shaders on PC will be able to opt-in to the optimizer. Exact details to be determined :)

Any plans for decimation for the avatar optimizer?

Not right now, although we did explore some avenues for it. It’s a tough area to get right.

6 Likes

Now that there is age verification, and we are close to the avatar marketplace being a thing, is it possible to adjust the terms of service on NSFW content? Right now people are still existing in a grey area.

1 Like

The avatar optimization is looking very exciting. Will it be enabled by default if an avatar is ranked a certain way, say “Poor” or “Very Poor”, when the rank isn’t caused by polycount?

1 Like

I can’t really argue with that, but I feel a bit sad to see that it’s another feature for VRC+ and still nothing for non-VRC+ users :/

I completely get that customization features can be behind a “paywall”, as they’re purely personal/cosmetic, but putting features like the dolly there feels a bit like forcing VRC+ …

4 Likes

Instead of copying 3rd party solutions for avatar optimization, have you considered making a toggle system that doesn’t rely on animators?

3 Likes

Haha, all that time to get my avatar to poor so I could see myself on mobile, and I’ll have to do it again. But at least this time I’ll be able to properly atlas the materials together with the new optimizer and I’ll much more easily hit poor or even maybe medium this time.

1 Like

Thanks guys!

The avatar optimizer and Linux support for the SDK is a very pleasant surprise.

3 Likes

Something I feel should be asked: what if we had two separate settings for the Quest/Mobile builds, because Quest 2 and Quest 3 should have different settings? AFAIK Quest 3 should, for example, support post-processing.

As in native Linux support (Vulkan) or just DXVK?

this

“Texture atlasing will require a shader with an atlasing variant. If a shader has certain criteria met, the tool will be able to combine and atlas each texture properly. This works with our standard lite texture, to start.”

Sounds a bit weird – all shaders support an atlas by default, since it’s just a combined texture and a UV remapping. However, it seems like a cool thing if people actually use it. I’ve been trying hard with a few friends to push optimizing avatars for the past 5 years lol.

For reference, I don’t think an avatar should have more than 8 textures at worst. It’s rarely necessary to even have 8.
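The "combined texture and a UV remapping" point can be shown concretely. A minimal sketch, assuming a simple 2×2 atlas layout: each source texture gets a quadrant, and the mesh's UVs are scaled into it. (Shader support still matters for features like tiling, which break once UVs no longer span the full 0–1 range.)

```python
def remap_uv(u, v, slot):
    """Map a UV coordinate into one quadrant of a 2x2 texture atlas.

    slot 0..3 selects the quadrant: 0=bottom-left, 1=bottom-right,
    2=top-left, 3=top-right (column-major within each row).
    """
    col, row = slot % 2, slot // 2
    return (u * 0.5 + col * 0.5, v * 0.5 + row * 0.5)
```

Real atlasers also have to handle padding between islands, mipmap bleed, and differing texture sizes, which is where the "atlasing variant" of a shader earns its keep.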

I don’t know if you guys are working on it, but deleting unused stuff makes a really big difference.

My workflow is basically throwing stuff at the wall, disabling everything I don’t want, and leaving the rest to dark’s avatar optimizer.

It’s really cool, especially when your avatar has like a bazillion PhysBone components.

2 Likes

From what I understand, uploading things from Linux without having to download the Windows build support?

Edit: This thing

The avatar optimizer is super exciting, stuff like texture atlasing especially. A few questions:

  1. Could this support smart decimation a la CATS/Polytool?
  2. Do toggles still work after mesh merging?
  3. Is this non-destructive in case something goes wrong?
  4. Could UV tile discard also be implemented? Like, if an avatar’s wearing a shirt the underbody mesh could be discarded while the shirt is toggled on, or deleted entirely if there’s no toggle.

I’d also highly recommend working with the devs of third party tools, especially VRCFury, Polytool, and Poiyomi/Liltoon to ensure compatibility/interoperability.

3 Likes

This fix just allows users to upload worlds using the VRChat SDK on Linux. It’s not about supporting a new platform – rather, a bug was fixed that makes uploading from Linux possible, the same way you can already upload VRChat worlds from a Mac.

2 Likes

Okay :frowning_face: Hopefully we can get Vulkan someday.