Developer Update - 2 March 2023

Welcome to the Developer Update for 2 March 2023! This is the seventh Developer Update of 2023.

Thank you to Lura_ for their comfy and cozy world Abstract:Echo, which graces the Dev Update thumbnail this week! Hang out in the fire pit and talk about life, the universe, and everything. Or maybe just the Dev Update. Hey, can you bring some marshmallows?

If you’d like to catch up, you can read our previous Developer Update from February 23.

Creator Companion Reminder

We’re making our final preparations for the new “Web Creator Companion”, which uses tech that is much faster and easier to maintain, and works across platforms! This version will replace the current VCC, which is a Unity application.

We’ve recently updated the SDK to 3.1.11 and, as previously noted, removed the older UnityPackages. However, you can still download new VPM-compatible UnityPackages, which extract into your Packages folder. These packages are meant to support users who can’t use the VCC due to platform or other limitations, and they can only be used on new projects.

:warning: WARNING: These new UnityPackages CANNOT be used to upgrade non-VCC-migrated projects. Importing one of these new UnityPackages to a non-VCC project will break that project. Please migrate your project using the VCC instead.

As previously announced, SDK2 UnityPackages are no longer available for download. At a future date (undetermined, but on the scale of “months away”), SDK2 worlds and avatars will no longer be permitted for upload. We will support existing SDK2 content within VRChat for as long as possible.

If you are maintaining an SDK2 world project and want to migrate, CyanLaser maintains CyanTrigger, which provides an SDK2-like interface for Udon. He has also created an SDK2 migrator, which can auto-convert SDK2 to SDK3, including complicated assets like the Standard Assets package that SDK2 supported.

Unity Version Upload Blocking

We are now actively blocking uploads from Unity versions older than 2019. This has affected an extremely small number of creators!

Users will see a message in their Unity console if their SDK is too old and they’ve been blocked:

Any users still on a version before 2019 should follow the upgrade guides below and work their way up to the Currently Supported Unity Version.

VRChat 2023.1.2 Open Beta

We’ve just released 2023.1.2 into Open Beta! If you want to give it a shot, hop on our Discord and go to the #open-beta-info channel.

The full patch notes can be read in our documentation, but can also be found below.

VRChat 2023.1.2 Open Beta Patchnotes

Client

Improvements

  • Avatar texture memory usage is now considered a ranked performance stat, and will affect your avatar’s performance rank. An upcoming SDK update will reflect this in the editor.
    • PC:
      • Excellent: 40MB or less
      • Good: 75MB or less
      • Medium: 110MB or less
      • Poor: 150MB or less
      • Very Poor: Above 150MB
    • Quest:
      • Excellent: 10MB or less
      • Good: 18MB or less
      • Medium: 25MB or less
      • Poor: 40MB or less
      • Very Poor: Above 40MB
    • If you are curious about the details behind these numbers, how we arrived at them, and how many avatars this will affect, check out our Dev Update from 16 February: Developer Update - 16 February 2023. (A small sketch of how these thresholds map to ranks follows this list.)
  • Completely overhauled internal handling of portals!
    • This should provide much better stability and security to portals, preventing many cases of portals disappearing or desyncing between instance members
    • In some circumstances it may be possible for a portal to be dropped too close to you. In this case, the portal will be surrounded by an orange overlay and you won’t be able to interact with it until you move away first
      • Previously this would have made the portal disappear for everyone, or worse, desync
      • You are still prevented locally from dropping a portal too close to someone
  • Friends+ portals now default to unlocked
  • Earmuffs range can now take the shape of a cone, allowing you to focus on conversations in front of you
    • You can adjust the size and shape of this cone with earmuff settings
    • We’ve been calling this “cozy earmuffs” mode
  • Nameplates will show an icon when a player has earmuffs enabled
  • Avatar “Hide by Distance” and “Hide by Count” visual aid and toggle
    • Adjusting distances will now show a visual aid similar to earmuffs
    • Sliders now have a toggle to turn their effect on or off without adjusting the value, similar to “mute” on audio sliders
  • PhysBones have been adjusted to use and maintain the initial hand-to-bone offset as your “grab position” instead of a fixed relative position per bone
    • This means that grabbed PhysBones won’t move or “snap” when you initially grab. They will only move when you move the grabbing hand
  • Added a Report User option to report users for using a modified client or “hacking”. You can find this under “Report User → Behavior”.
    • Reporting people using modified clients in this way is very helpful to our Trust and Safety and Security teams! Thank you for your reports!
  • Improved internal handling and performance of nameplates
    • Includes minor offset fixes to group banners and blue voice rings around user icons
    • This is not a fix for the alignment issues you have to deal with when making Group banners. We know about that issue and will work on improving it going forward
  • The action menu size and opacity can now be set by percentage with a radial control instead of fixed steps
    • Scaling will be reset to medium when initially launching this build (and switching between live and beta)
  • You can now take pictures with the stream camera!
  • Added constraints as a non-ranked avatar performance stat with a recommended maximum of 15 constraints of any type
  • Greatly improved Udon loading performance while entering worlds
  • Improved internal handling and future-proofing of avatar parameters
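
To make the texture memory thresholds concrete: rating this stat is just a comparison against a per-platform cutoff table. Here’s a minimal sketch in Python; the table and function names are ours for illustration, and the real check lives in the client and SDK.

```python
# Illustrative only: map avatar texture memory usage to the new ranked stat.
# Thresholds are the PC/Quest numbers from the list above.

RANK_THRESHOLDS_MB = {
    # platform: [(rank, inclusive upper bound in MB), ...], checked in order
    "pc":    [("Excellent", 40), ("Good", 75), ("Medium", 110), ("Poor", 150)],
    "quest": [("Excellent", 10), ("Good", 18), ("Medium", 25), ("Poor", 40)],
}

def texture_memory_rank(platform: str, usage_mb: float) -> str:
    """Return the performance rank implied by texture memory alone."""
    for rank, limit_mb in RANK_THRESHOLDS_MB[platform]:
        if usage_mb <= limit_mb:
            return rank
    return "Very Poor"  # anything above the largest cutoff

assert texture_memory_rank("pc", 75) == "Good"
assert texture_memory_rank("quest", 41) == "Very Poor"
```

Keep in mind this is only one stat among several; an avatar’s overall rank is determined by its worst ranked stat.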

Fixes

  • VRCat now speaks in the past tense about the early supporter badge. Thanks for your support!
  • Fixed camera display staying white when closing immediately after taking a photo
  • Fixed PhysBones jittering with “isAnimated” enabled in some situations
  • Fixed nameplates not showing up in pictures taken via the “Screenshot” option in the Quick Menu
  • Fixed “Clear Local Profile Data” not correctly resetting some options
  • Fixed tunneling and holoport options breaking when using “Clear Local Profile Data”, and potentially not saving between client restarts
  • Desktop users can select tunneling options in Comfort menu again
  • Fixed some issues with user locations not showing up in correct instances
    • We know there are still some issues here; we’re working on more fixes related to this issue!
  • Fixed desktop movement getting stuck after closing camera with Escape key
  • Fixed PhysBone colliders on fingers not respecting custom capsule heights
  • Fixed thumbstick scrolling in one-handed mode
  • Fixed multi-layer camera not being accessible for non-VRC+ subscribers
  • Fixed an issue that could cause crashes when disconnecting multiple audio devices at once
  • Fixed last valid portal placement marker getting stuck in place when looking at invalid surfaces
  • Fixed portal placement in situations with no surface available at all blocking user input
  • Fixed shuffle button in “Random” worlds menu not working
  • Fixed issues with icon camera and newly taken user icons failing to render in menu
  • Fixed text fields across the UI erroneously having rich-text enabled
    • The action menu specifically remains exempt from this restriction
  • Fixed images in groups tab not having proper loading placeholders
  • Fixed various issues with group visibility and representation settings not updating correctly
  • The default “search by” parameters for worlds now include Tags
  • Various smaller UI fixes
  • Various smaller groups fixes
  • Safety and Security improvements

Creators

  • Added “VelocityMagnitude” avatar parameter, similar to the existing VelocityX/Y/Z ones, but it gives the total magnitude of the velocity
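
If you want to sanity-check the new parameter from outside the client, VRChat already mirrors avatar parameters over OSC (output on port 9001 by default). Here’s a small sketch using the python-osc library; we’re assuming the built-in velocity parameters are mirrored at /avatar/parameters/<Name> like other parameters:

```python
# Sketch: check that VelocityMagnitude ~= sqrt(Vx^2 + Vy^2 + Vz^2) using
# VRChat's OSC output (default port 9001). Assumes the built-in velocity
# parameters are mirrored at /avatar/parameters/<Name>.
import math

from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

velocity = {"VelocityX": 0.0, "VelocityY": 0.0, "VelocityZ": 0.0}

def on_param(address: str, value: float) -> None:
    name = address.rsplit("/", 1)[-1]
    if name in velocity:
        velocity[name] = value
    else:  # VelocityMagnitude
        expected = math.sqrt(sum(v * v for v in velocity.values()))
        print(f"magnitude={value:.3f} expected={expected:.3f}")

dispatcher = Dispatcher()
for name in (*velocity, "VelocityMagnitude"):
    dispatcher.map(f"/avatar/parameters/{name}", on_param)

BlockingOSCUDPServer(("127.0.0.1", 9001), dispatcher).serve_forever()
```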

Ongoing Development

Native Eye Tracking

Native eyetracking support is on the way! All existing avatars uploaded with the “fake” eyelook set up in the SDK will already have support! We’ll support both OSC data input as well as the Meta Quest Pro natively in standalone mode.

When you provide eyetracking data, avatar eyes already set up via the SDK will be driven by the incoming eyetracking data.

The data is synced at “IK rate” with built-in tweening, and slightly faster tweening for eyelids to better support blinking.

Currently, the avatars SDK only allows setting up both eyelids as a single blink rather than as individual left/right winks, so for now only blinking is supported. Winking will come with a later update to the SDK.

Here it is working on Quest Pro in standalone mode:

There are a few new options in the Tracking & IK Main Menu page. You can display a “Debug View” that shows your look target, as well as a “Force Eyetracking Raycast” option. When active, “Force Eyetracking Raycast” will force the eyes to converge at a distance determined by a raycast hit. You can use this to compensate for imperfect eyetracking and make sure your avatar always appears to focus at the correct distance for the surfaces you’re looking at.
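
Conceptually, convergence just means both eyes aim at the same world-space focus point; with “Force Eyetracking Raycast” enabled, the distance to that point comes from the raycast hit rather than from the eye data. A rough sketch of the math (ours, not the client’s):

```python
# Rough illustration: converge both eyes on one world-space focus point.
# With "Force Eyetracking Raycast", `distance` would come from a raycast
# hit along the center gaze instead of the eyetracking data.
import numpy as np

def converge(eye_l, eye_r, gaze_origin, gaze_dir, distance):
    """Return unit look directions for each eye toward the focus point."""
    focus = gaze_origin + gaze_dir * distance
    look_l = focus - eye_l
    look_r = focus - eye_r
    return look_l / np.linalg.norm(look_l), look_r / np.linalg.norm(look_r)

# Example: eyes 64 mm apart focusing 0.5 m ahead; directions toe inward.
left, right = converge(np.array([-0.032, 0.0, 0.0]),
                       np.array([+0.032, 0.0, 0.0]),
                       np.array([0.0, 0.0, 0.0]),
                       np.array([0.0, 0.0, 1.0]),
                       0.5)
print(left, right)  # x components point inward (opposite signs)
```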

As for OSC support, we’ll accept data in a variety of formats. Here are the OSC addresses in their current form; they’re subject to change before release, though!

/tracking/eye/EyesClosedAmount
0~1 value for how closed the eyes are.

In addition to the EyesClosedAmount, you can send data to one of the addresses below depending on the format you’d like to send:

/tracking/eye/CenterPitchYaw
Pitch value and yaw value in degrees for a single “center” eye look direction. Because no distance is defined here, this mode will always use a raycast in-world to find the convergence distance.

/tracking/eye/CenterPitchYawDist
Same as above but with an added distance value in meters to define the convergence distance.

/tracking/eye/CenterVec
“Center” eye x,y,z directional normalized vector local to the HMD. The vector is normalized, so this mode will always use a raycast to find the convergence distance.

/tracking/eye/CenterVecFull
“Center” eye x,y,z directional vector local to the HMD. The length of this vector (in meters) will determine the convergence distance.

/tracking/eye/LeftRightPitchYaw
(In degrees) left pitch, left yaw, right pitch, right yaw.

/tracking/eye/LeftRightVec
HMD local normalized directional vectors for each eye (left x,y,z right x,y,z).
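
To make these formats concrete, here’s a minimal sketch of sending eyetracking data with the python-osc library, using VRChat’s default OSC input port of 9000. In practice you’d pick just one of the directional formats; two are shown for illustration, and since the addresses above are in-development, treat everything here as provisional:

```python
# Provisional sketch: feeding eyetracking data to VRChat over OSC.
# Default input port is 9000; the addresses may change before release.
import time

from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)

# Eyelids: 0 = fully open, 1 = fully closed.
client.send_message("/tracking/eye/EyesClosedAmount", 0.0)

# Center gaze as pitch/yaw in degrees plus a convergence distance in meters.
pitch_deg, yaw_deg, distance_m = -5.0, 12.0, 1.5
client.send_message("/tracking/eye/CenterPitchYawDist",
                    [pitch_deg, yaw_deg, distance_m])

# Same gaze as an HMD-local vector; for CenterVecFull the vector's length
# (in meters) encodes the convergence distance. We assume a Unity-style
# +z-forward convention here.
forward = [0.0, 0.0, 1.0]
client.send_message("/tracking/eye/CenterVecFull",
                    [c * distance_m for c in forward])

# A quick blink.
for closed in (1.0, 0.0):
    client.send_message("/tracking/eye/EyesClosedAmount", closed)
    time.sleep(0.1)
```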

We must reiterate that this is a sneak peek at how it’s currently functioning while in development and is all subject to change.

Native eyetracking support will not require you to use up synced avatar parameters, and in general should “just work” for most uploaded avatars that have moving eyes.

Avatar Download Prioritization, Part II

As promised, a before-after comparison of the new Download Prioritization feature in action!

In this video and the below graphic, network download speed has been limited to 8Mbit/sec. It’d take you an additional 3 minutes to discover that everyone here is doing lunges. :melting_face:

In the below graph, the blue line is Download Prioritization On, and the red line is with Download Prioritization off. The instance size is about 15 people. By the time you’ve got one avatar loaded with the old download method, over half the instance has loaded with the new one!

Furthermore: the above test only shows off Size prioritization, so it is a worst-case scenario for the feature. In both videos above, all avatars are in sight (we also asked people to wear high-filesize avatars).

With Distance prioritization, the gap will be even bigger! If you imagine that there were another 25 avatars out of view, the bottom video would take 2.6x longer: an additional 22 minutes on a slow connection!
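
For intuition, the feature boils down to the ordering of the download queue. Here is one way such a priority could be expressed; this is our guess at a plausible ordering for illustration, not VRChat’s actual key:

```python
# Our illustration of a plausible priority ordering, not VRChat's actual key:
# load what you can see first, nearby before far, small before large.
from dataclasses import dataclass

@dataclass
class PendingAvatar:
    name: str
    in_view: bool
    distance_m: float
    size_mb: float

def prioritize(queue: list[PendingAvatar]) -> list[PendingAvatar]:
    return sorted(queue, key=lambda a: (not a.in_view, a.distance_m, a.size_mb))

queue = [
    PendingAvatar("far-big",    in_view=False, distance_m=30.0, size_mb=180.0),
    PendingAvatar("near-big",   in_view=True,  distance_m=2.0,  size_mb=180.0),
    PendingAvatar("near-small", in_view=True,  distance_m=2.0,  size_mb=12.0),
]
print([a.name for a in prioritize(queue)])
# ['near-small', 'near-big', 'far-big']
```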

Event Execution Order Documentation

We’ve recently added documentation for Event Execution Order that world creators and Udon programmers will likely find very useful! It has a very nice graphic showing the exact execution order of events, where loops occur, and how disabling or destroying an object works with regard to event firing.

We hope this will make Udon development just a bit easier!

Conclusion

That’s all for this week! Keep an eye out for the Video Patchnotes, due out with the full release of 2023.1.2!

Thanks!


Yesss, native eye tracking!

Here’s to hoping you find a way to implement face tracking as well, and then I can have all my dang parameters back!


Thank you!!! I’m so excited to have Eye tracking available on most avatars! I’m really excited for this!

If possible, I’d love to have stand-alone mouth tracking too (on specialized/setup avatars) but I am super happy with having eyes (there’s lots of emotion that can be displayed with a simple side look)

Thank you!!!


That’s gonna be amazing! Great work guys.

Wondering, while the team is doing avatar work, if they plan to look at the performance drops that occur when an avatar finishes loading? Canny


Native Eye Tracking

Native eyetracking support is on the way! All existing avatars uploaded with the “fake” eyelook set up in the SDK will already have support! We’ll support both OSC data input (which, to be clear, does not have the update rate limitations that avatar parameters have) as well as the Meta Quest Pro natively in standalone mode.


That’s nice; that will be an interesting experience.


I feel like it is maybe a little too late to add this now. But well, I guess better late than never.


The data is synced at “IK rate” with built-in tweening, and slightly faster tweening for eyelids to better support blinking.

Also a bit of added detail on this: the built-in tweening for eyelook is the same speed at which the current “fake” eyelook blends between look targets when you see your avatar looking around. So the look-direction data itself isn’t tweened (allowing fast snapping of eye movements), but the SDK eyelook being driven by the incoming data is constantly locking on to it using the existing technique for realistic eye movements.

The incoming blinking data is tweened, however, at a faster rate than body pose data, to better support quick blinks.
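
To illustrate the difference described above (our sketch, with made-up constants, not VRChat code): the gaze target is handed to the existing SDK eyelook raw, while blink values are smoothed toward the incoming data at a faster rate than body pose would be:

```python
# Illustration only: raw gaze target vs. tweened eyelids. Rates and helper
# functions are hypothetical; the real logic lives in the client.
import math

BODY_POSE_RATE = 8.0   # hypothetical tween rate used for body pose data
EYELID_RATE = 20.0     # faster, so quick blinks still read as blinks

def exp_smooth(current: float, target: float, rate: float, dt: float) -> float:
    """Frame-rate-independent exponential smoothing toward a target."""
    return target + (current - target) * math.exp(-rate * dt)

def drive_sdk_eyelook(gaze_target) -> None:
    pass  # stub: the avatar's existing eyelook chases this target itself

def set_eyelids(amount: float) -> None:
    pass  # stub: drive the avatar's blink blendshape/bones

eyelid = 0.0

def on_eye_data(gaze_target, eyes_closed: float, dt: float) -> None:
    global eyelid
    drive_sdk_eyelook(gaze_target)  # not tweened: fast eye snaps come through
    eyelid = exp_smooth(eyelid, eyes_closed, EYELID_RATE, dt)
    set_eyelids(eyelid)             # tweened, faster than body pose would be
```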


Would be cool if it also supported eye widening and squeeze; with that, it should cover the capabilities of most eye tracking headsets.


Definitely looking forward to these. Any updates on fixing shader fallbacks? Standard, Standard Specular, and Autodesk Interactive-based shaders don’t fall back properly, even with the VRCFallback tag set correctly. It does work with Toon, but not with any other built-in shader. See the updated Canny for more info.


Post won’t go through on Canny, for some reason. Any chance the changes to the stream camera in the beta might be removed? They’re a big quality-of-life reduction for people who use the stream camera to stream or record video using multiple pins. In the beta, any time you use the trigger to cycle pins, you generate an unwanted photo and the loud camera noise. You can already take photos with the stream camera active by pressing the take-photo button, so there’s no need for the change. If the change must be pushed through anyway, there really needs to be a way to silence the camera so it doesn’t interfere with streaming or recording, which is what stream mode is for. (Camera audio should be on its own slider anyway, and not tied to world volume, but that’s a different issue.)


avatar scaling would be um, would be cool I think :person_in_lotus_position:


Any chance we’re also going to see native face tracking in the near-ish future? I assume there’s some tradeoffs syncing that much data at IK rate, but also there aren’t that many people with face tracking…


It’s nice to have Event Execution Order documentation!
So, could you please add Layers docs too? We’ve been waiting a long time for that. Canny


I hope the Pimax eye tracking will be supported too!


I also need VelocityXZ (or VelocityMagnitudeHorizontal) for horizontal movement…

Native Face Tracking next?


Native face tracking is something on our radar, obviously, but there are far fewer potential users of it compared to eye tracking. That doesn’t discount it, but it does have a bit of a chicken-and-egg problem.

I can’t honestly say if it’s soon or not, but it definitely is “Eventually” status.


I had trouble posting comments on Canny for the last few days. Hopefully the issue is fixed soon.

More eyetracking than face? I’ve seen the opposite, since face tracking can be slapped onto anything.

NATIVE EYE TRACKING WOOOOOOO!!! This is honestly so great. My lazy butt never wants to set up all the blend shapes and whatnot on new avatars, so this is 10/10, would blink again!

Edit: I’ll be looking forward to the day native face tracking is implemented as well :eye::lips::eye: