Developer Update - 29 June 2023

Do the fallback avatars and the grey blocked-avatar robot have GPU instancing enabled? For users like me who see 70% of the users in a world as fallbacks, I imagine that would be galaxies better than having it off.

The same question could extend to every non-static mesh that VRChat uses for avatars, such as:

  • Diamond materials for distant avatars.
  • Blue materials for loading avatars.
  • Nameplates and similar group banners (I know those are not culled at all, for reasons unknown).
  • The blue bubble when selecting an avatar.

This requires more than just GPU instancing to batch correctly: for Unity's dynamic batching, meshes can't exceed roughly 300 vertices, and shadows, reflections, and pixel lights MUST be disabled on every material.
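For reference, a minimal sketch of the client-side flag this involves; `Material.enableInstancing` is standard Unity API, the component name here is made up, and shaders still need instancing support compiled in:

```csharp
using UnityEngine;

// Minimal sketch: turn on GPU instancing for every material under a
// fallback avatar so identical meshes can be combined into instanced
// draw calls (the shader must also compile with instancing support).
public class EnableFallbackInstancing : MonoBehaviour
{
    void Start()
    {
        foreach (var renderer in GetComponentsInChildren<Renderer>(true))
        {
            foreach (var material in renderer.materials)
            {
                material.enableInstancing = true;
            }
        }
    }
}
```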

1 Like

Is there anything that can be done about the “Friend Join Notifier”? I’ve not had a single notification appear since the feature launched, even though I have the settings fully enabled.

Super glad to finally see back straightening being addressed. I know many, many people used IKTweaks for this exact purpose.


Would be nice if it could be made network-compatible sooner; there's not much incentive to long-term test something if you can't hang around normally while testing it.



Seeing as we're getting more UX functionality for users and avatars, and I know there are possible plans for adding things like native flying as well, my concern is that there has been some sentiment toward emphasizing that collider flight is a "bug" that will be patched whenever a built-in fly system is added. I don't think that's good. If native flying is added, collider behaviour should still remain as it is now.

A couple of reasons for this: for more advanced users it can be used to move very fast in extremely large worlds, but more interestingly, colliders can be controlled very precisely to achieve specific kinds of motion, such as strafing, or even tricks like an avatar that can dash in a direction for a short distance. Short-distance dashing and limited thrust vectoring are things I had very much intended to take advantage of with avatars.

Advanced mobility is a big thing for me; I hate being stuck to the ground walking slowly all the time, and it's unfair to be expected to go to a specific world for it. Just adding Unreal Engine-style flying is not versatile or fun at all. It's very cool when you can build mobility into an avatar that suits it, such as pogo shoes or a short-burst jetpack; having only the choice to walk or fly around GMod-style would subtract from creativity.

1 Like

Those are pretty good improvements :+1:

I hope the Quest Standard Lite texture improvements are good :+1:

Can a global physics collider be implemented?

The rig limit on avatars is too strict.

Can there be a limit on the number of scripts for each avatar, like there is for seats?

I want to add to this: many people have gotten FSR 2.2 working in Unity, even in the built-in pipeline on DX11, so we should be able to use it. The only thing those FSR2 ports lack is VR support, but that could be because their priorities are focused on desktop gaming. If FSR2 can be implemented, and shader makers are given the necessary parameters for what to update, it should work.

1 Like

Which games are you talking about that use FSR 2.2 in Unity3D?

Is it on the built-in pipeline?

Be aware that there will be problems: VR generally uses MSAA and forward lighting, and without TAA and its related processing, the result will be a mess.

Moreover, this kind of upscaling runs into problems like temporal handling of mipmaps and transparency, which leads to visible artifacts that need correcting. So far, nobody has handled that well.

And it is limited to saving pixel overhead. With a large number of avatars, the load skews toward geometry, which upscaling does nothing about, so the burden remains.
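To put rough numbers on that: FSR 2's Quality mode renders at 1/1.5 of the output resolution per axis, so pixel work drops to roughly (1/1.5)² ≈ 44% of full cost, while vertex and skinning work are completely untouched. A frame dominated by avatar geometry therefore sees far less than the ideal roughly 2× speedup.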

I think what you mentioned is obviously "FSR 2 - Upscaling for Unity".

The author provides a TAA hooked into the built-in pipeline.

But has it been tested in detail? Using TAA in VR, even with advanced filtering and correction algorithms or AI, still has many problems.

He uses compute shaders to do as much of the work as possible,

and it currently has an issue where it doesn't work with multiple cameras, although it looks like that could be fixed.

But it will blur the UI, especially if the UI isn't rendered as a camera overlay.

In addition, where in the frame it inserts its work is unclear, which will cause problems with post-processing.

You might need to change shaders, or even have world post-processing adapt to it.

I think the difficulty is no less than replacing the built-in pipeline with URP.

Several difficult problems:

Stereo rendering can't work normally with temporal techniques like TAA; it either throws errors or simply doesn't work. And if stereo rendering is turned off, VR performance can drop substantially, possibly by close to half, which makes the whole thing pointless.

There's still no good solution so far, so this upscaler can't run and actually deliver gains in VR mode.

The buffers TAA requires, such as the Z-buffer and per-frame geometry data, put a heavier load on the geometry pipeline (vertex shaders), though it depends on the implementation; after all, there's more than one kind of TAA.
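For context, in the built-in pipeline the camera has to opt in to those extra buffers, and the motion-vector pass is exactly where that added vertex cost comes from. A minimal sketch using standard Unity API (the component name is made up):

```csharp
using UnityEngine;

// Minimal sketch: request the depth and per-object motion-vector buffers
// that a TAA implementation needs. The motion-vector pass re-renders
// moving geometry, which is the extra vertex-shader load mentioned above.
[RequireComponent(typeof(Camera))]
public class EnableTaaBuffers : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.depthTextureMode = DepthTextureMode.Depth | DepthTextureMode.MotionVectors;
    }
}
```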

I made some simple test scenes and profiled them with Nsight:


  • 405 fps, no AA
  • 312 fps, MSAA 8x
  • 261 fps, TAA

TAA here is Unity's standard implementation, and the MSAA is Unity's built-in one as well.
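If anyone wants to reproduce this kind of comparison, here is a minimal sketch of the toggles involved, assuming TAA comes from the Post Processing Stack v2 package (the switcher class itself is made up):

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Minimal sketch: switch between the three configurations tested above.
public class AATestSwitcher : MonoBehaviour
{
    public PostProcessLayer postProcessLayer; // the layer on the camera

    public void UseNoAA()
    {
        QualitySettings.antiAliasing = 0;
        postProcessLayer.antialiasingMode = PostProcessLayer.Antialiasing.None;
    }

    public void UseMSAA8x()
    {
        QualitySettings.antiAliasing = 8; // hardware MSAA, forward rendering
        postProcessLayer.antialiasingMode = PostProcessLayer.Antialiasing.None;
    }

    public void UseTAA()
    {
        QualitySettings.antiAliasing = 0;
        postProcessLayer.antialiasingMode =
            PostProcessLayer.Antialiasing.TemporalAntialiasing;
    }
}
```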

Seriously, FSR and DLSS are more about solving the blur caused by TAA and making up some of its performance loss: they reduce render resolution to buy performance back while keeping the blur bounded.

That way the performance is comparable to running without AA, but only when geometric overhead isn't large. I've used DLSS before and was disappointed; it's not the panacea people imagine, and it even has side effects.

However, when there's a lot of pixel overhead the effect is much better, though only with a small number of avatars can it beat no-AA by a wide margin. After all, you're trading down to a lower internal resolution: you lose some detail, but the overall image stays reasonably sharp and anti-aliased.

It's just that sometimes losing too much detail means shader effects no longer look the way they're meant to.

TAA is blurrier than MSAA and doesn't necessarily solve geometric flickering.

Also, MSAA is ideal with forward shading. If your game mixes deferred and forward shading to handle transparency and still applies MSAA, then MSAA gets much more expensive and TAA would be the better fit.

But that setup obviously doesn't apply to VRChat, because it's really quite expensive; it has to run at high resolution, and it only makes sense for AAA-level titles with complicated rendering pipelines.

Also, the comparison point here is MSAA 8x, which has higher overhead; MSAA 4x would cost less. Its anti-aliasing is worse, but considering TAA's blur… it's a hard choice.


The test above used about 50k polygons per avatar, 32 avatars in total, with each skinned mesh additionally using an outline effect (Unity-chan).
Resolution was 1440p; a higher resolution would shift some of the proportion away from geometry, but it's still very heavy.

VRChat's GPU cost usually comes from lighting and shadows (especially real-time lights, which add geometry overhead and a lot of low-level bandwidth cost), or from HDR plus other post-processing causing too much ROP overhead.

Of course, some avatars go overboard with clothing design, excessive tessellation shaders, and modelling every little detail, which pushes the geometric overhead to the equivalent of 300k-500k active polygons.

Dense geometry does raise GPU utilization, especially on multi-GPC/SE GPUs: quadrupling the polygon count roughly doubles utilization, until at hundreds of thousands of polygons skinned meshes reach 50-60% utilization (versus the usual 10-20%), and static meshes can reach 75% (URP can reach 80-90%, since no skinning is involved).

But because the skinning work isn't merged (it runs per mesh; vertex shaders can't share work across meshes, and compute-shader skinning is only really efficient when merged), utilization is often just 33-40%.

50-60% is the ideal case. In practice, more complicated factors affect the geometry pipeline, and it's hard to raise skinning utilization further; the parallelism is simply poor.

So that many polygons end up processing at only about half the usual per-polygon speed compared with 50,000-100,000, plus an extra 10-20% of pixel overhead.

If there are many active shape keys, each shape key with a value greater than 0 is effectively one more skinning pass, and it interrupts compute-shader skinning.
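A small hypothetical helper (standard Unity API) to gauge that: it counts the blendshapes with nonzero weight on a SkinnedMeshRenderer, since each of those effectively adds skinning work:

```csharp
using UnityEngine;

// Hypothetical helper: count active (nonzero-weight) blendshapes per
// SkinnedMeshRenderer, since each one effectively adds skinning work.
public static class BlendShapeAudit
{
    public static int CountActive(SkinnedMeshRenderer smr)
    {
        int active = 0;
        int total = smr.sharedMesh.blendShapeCount;
        for (int i = 0; i < total; i++)
        {
            if (smr.GetBlendShapeWeight(i) > 0f)
                active++;
        }
        return active;
    }
}
```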

As a reminder, the avatar details panel is completely useless in this regard.

Not to mention that shape-key counts on marketplace avatars (Booth, for example) have started rising out of control. With the growth of layered clothing and independently skinned meshes, the active polygon count will cause a large number of pipeline interruptions for unmerged compute-shader skinning. At current populations and usage, with an estimated 10-20 people each carrying around 10 active skinned meshes, GPU efficiency drops to about 70-80% of what it was, making things even slower.


What I want to say is that I, too, had hoped this kind of technology could solve some of these problems, but the reality after experimenting is always disappointing.

In some cases, improving assets and changing pipelines yields more.

Dev Update this week is being delayed until next week! We don’t have anything to talk about this week, but we will next week.

Next Dev Update is on July 20th.

7 Likes

Nice, looking forward to any jam demos.

Will there be any future improvements to Toon Lit and MatCap Lit? In particular, MatCap Lit does not respect non-directional and not-important light sources the way Standard Lite does, and Toon Lit in my experience can hardly be lit at all, looking flatter than one might want. I was able to create a shader variant of MatCap Lit (for PC) that adds emission and incorporates not-important lights without additional render passes (using the nearest 4 not-important lights), but I'm not an experienced shader dev and have no idea how to fix the brightness issues at close range to light sources, nor how to make it respect the shape of the lights.
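To illustrate the "nearest 4" idea: this is not my shader variant, just a hypothetical C# sketch of selecting the same lights the built-in pipeline feeds to vertex-lit shaders through the unity_4LightPosX0/unity_4LightAtten0 arrays:

```csharp
using System.Linq;
using UnityEngine;

// Hypothetical sketch: find the four closest "Not Important" (vertex) point
// lights to this object, mirroring what the built-in pipeline exposes to
// shaders through unity_4LightPosX0/Y0/Z0 and unity_4LightAtten0.
public class NearestVertexLights : MonoBehaviour
{
    public Light[] FindNearestFour()
    {
        return FindObjectsOfType<Light>()
            .Where(l => l.type == LightType.Point
                     && l.renderMode == LightRenderMode.ForceVertex)
            .OrderBy(l => (l.transform.position - transform.position).sqrMagnitude)
            .Take(4)
            .ToArray();
    }
}
```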

Realistically, a more robust MatCap with emission would be very much appreciated, and a Toon Lit with optional simple "cel shading" and adjustable falloff would be welcome (that's sort of what I expected Toon Lit to be, rather than… flat).

1 Like

Personally I'm happy with whatever happens with the mobile shaders, as long as I can use them on PC to see what avatars would look like on Quest. If they start paying attention to lights on PC while most Quest worlds have very simplified lighting, it might get weird.

I do like having a shadow when relevant.

Edit: last-ish post, or will the time limit be extended? Find out next time on Dragon Ball Z.

This topic was automatically closed after 14 days. New replies are no longer allowed.