Developer Update - 13 April 2023

Plus, what if someone buys an avatar base? Which is the real avatar, and which ones are allowed?

1 Like

Materials are actually a pretty big deal for performance due to the impact of draw calls on the CPU; in fact, material count is tied with texture memory as the first number I go to if I’m trying to get a feel for how much an avatar will bog down other players.
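
If you want that number at a glance, here’s a minimal editor-script sketch (the menu path is made up, and the “passes per material” note is only a rough rule of thumb, since mirrors, shadow passes, and extra lights multiply it):

```csharp
using UnityEngine;
using UnityEditor;

// Rough material overview for the selected avatar root.
// Every material slot on every renderer is at least one draw call per
// camera, before mirrors, shadow passes, and extra lights multiply it.
public static class AvatarMaterialReport
{
    [MenuItem("Tools/Avatar Material Report")] // hypothetical menu path
    private static void Report()
    {
        GameObject root = Selection.activeGameObject;
        if (root == null) return;

        int materialSlots = 0;
        int skinnedMeshes = 0;

        foreach (Renderer r in root.GetComponentsInChildren<Renderer>(true))
        {
            materialSlots += r.sharedMaterials.Length;
            if (r is SkinnedMeshRenderer) skinnedMeshes++;
        }

        Debug.Log($"{root.name}: {materialSlots} material slots " +
                  $"(~{materialSlots}+ draw calls per camera), {skinnedMeshes} skinned meshes");
    }
}
```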

As for the poly count idea, I agree the restriction seems a bit tight for the poor / very poor boundary in particular, but there are a few edge cases (e.g. high-poly meshes with blendshapes) where poly count can significantly impact performance. I feel that these cases will need to be explicitly identified in the performance rankings before it will be wise to raise the poly count restriction.

2 Likes

The idea is that there’s some leeway for a Poor-rated avatar on PC: either 10k extra polys or 2 extra materials. The poly-to-material trade is intentionally a “bad deal”. If I had to point at specific hardware, I’d say it’s targeting integrated graphics.

In my limited experience with 3D modeling, I think it’s easier to combine materials than to reduce polygon count, so most people would end up taking the polygons. At least I would.

The Image Loading problem does not occur in the Unity editor, and if it is not fixed by the time the Quest 1 is obsoleted, it will be much more difficult for world creators both to become aware of the problem and to confirm that the workaround is working.

I agree with the poly count. The sudden jump from good to very poor at 70k tris seems a bit rough. I wouldn’t call a 75k avatar good, but I wouldn’t call it very poor either.

But 12 materials are so many materials. I can’t think of anything you could do with 14 that you couldn’t do with 12.

A bug has recently become apparent to me and a few other players where you can no longer change your fallback avatar to a custom one. Do you have any plans to fix this soon, or will it be tackled later?

2 Likes

Here is the bug report for fallbacks, if you’re interested in watching it.

I think someone else has already said this, but you cannot do a real-time performance check because everyone’s system is different. It also isn’t linear, because different things consume performance in ways that affect systems differently. It makes more sense to know how your own system handles individual factors and to gauge whether an avatar will perform well based on the performance-ranking requirements or just its individual stats.

The server-side performance stats are fantastic. Calculating everything locally causes a serious momentary hitch: if your internet connection is fast enough that everyone’s avatar performance data has to be calculated at the same time, you will freeze up.
On top of that, I hope VRChat comes up with a better performance-stats formula (only 1,000 polygons but 10 audio sources, yet rated Very Poor?). If the current formula is kept as-is, a lot of avatars that barely cost any performance will be blocked, and I don’t know how many people like seeing robots.
Now that server-side stats have been implemented, if a new performance formula is difficult to implement officially, perhaps players could be allowed to limit more attributes themselves. We can already filter by performance rank and avatar download size; adding custom limits for polygon count, mesh count, and dynamic bone count, and letting players block avatars based on those, could be one way to do it.

Translated with DeepL

1 Like

Since the avatar check is now being done on the server, is it possible to look at the internal files? If an avatar was stolen, the internal files are in a shredded form.

1 Like

That makes me wonder if avatar audio sources take up RAM when I have them muted.

Some people would use 10 short clips (the Wilhelm scream), while others would pack full-length MMD routines into an avatar.

Unity has a limit of 255 simultaneous audio sources. At 10 audio sources you are limited to only 25 avatars, completely regardless of all other statistics.

So 10 audio sources being considered Very Poor seems reasonable.
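
For reference, those voice limits can be read back from Unity directly; here is a quick sketch of the arithmetic (255 is the configurable maximum for real voices, and the project default is typically lower, around 32):

```csharp
using UnityEngine;

// Back-of-the-envelope check: how many avatars fit under the voice limit
// if each one carries a given number of audio sources playing at once.
public class VoiceBudgetCheck : MonoBehaviour
{
    void Start()
    {
        AudioConfiguration cfg = AudioSettings.GetConfiguration();
        int sourcesPerAvatar = 10; // the threshold discussed above

        Debug.Log($"Real voices: {cfg.numRealVoices}, virtual voices: {cfg.numVirtualVoices}");
        Debug.Log($"At {sourcesPerAvatar} sources each, only ~{cfg.numRealVoices / sourcesPerAvatar} " +
                  "avatars can all be audible at the same time.");
    }
}
```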

2 Likes

I don’t think that would necessarily be the case in all scenarios. What if someone rips a base, but brings it into Blender, makes some modifications, and starts it all in a new project? How do you prove they didn’t do the same thing with the base itself?

I just don’t know if this level of checking is worth it right now when there is so much opportunity for false positives.

I agree, plus the polygon count is currently a hard barrier at 70k. There is no Medium rating for it, and Poor is explicitly 70,000 exactly.

In larger instances, the game suffers from a bottleneck that often is not the GPU. It is often either CPU time or memory bandwidth/latency, and no amount of GPU compute, VRAM, or avatar hider will help there. There is a good chance that this mostly comes down to draw calls (materials) and animator performance.

The thing is that an avatar’s performance cost from both materials and polys depends on the complexity of the shader and on active blendshapes (most of the time, in my experience). I think it’s fair to say that most shaders aren’t really an issue, but the number of shaders (materials) people use is an issue.

Most games optimize characters to use only a few materials, if that. Many players in VRChat have avatars with 8-20+ active materials each. I bet if we had in-game debugging tools, we’d get a really good idea of how much CPU time each avatar is taking up, especially since different avatars can’t be batched together. For example, I was able to improve my animator CPU performance by 60-80% by moving almost everything to a blend tree (0.5 ms down to 0.1-0.2 ms). That doesn’t seem like a lot, but it scales to 20 ms versus 4-8 ms of CPU time with 40 avatars.
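
For anyone curious what “moving almost everything to a blend tree” looks like in practice, here is a hedged UnityEditor sketch of the general community technique (the controller path, parameter names, and clip paths are all placeholders): a single layer whose Direct blend tree drives every toggle clip from a float parameter, instead of one animator layer per toggle.

```csharp
using UnityEditor;
using UnityEditor.Animations;
using UnityEngine;

// Sketch: collapse many one-toggle-per-layer setups into a single layer
// driven by a Direct blend tree. Each child clip's weight is controlled
// by its own float parameter (0 = off, 1 = on).
public static class DirectToggleTreeSketch
{
    [MenuItem("Tools/Build Direct Toggle Tree")] // hypothetical menu path
    private static void Build()
    {
        var controller = AnimatorController.CreateAnimatorControllerAtPath("Assets/Toggles.controller");
        controller.AddParameter("HatToggle", AnimatorControllerParameterType.Float);
        controller.AddParameter("JacketToggle", AnimatorControllerParameterType.Float);

        var tree = new BlendTree { name = "Toggles", blendType = BlendTreeType.Direct };
        AssetDatabase.AddObjectToAsset(tree, controller);

        // Placeholder clips; swap in your own toggle animations.
        tree.AddChild(AssetDatabase.LoadAssetAtPath<AnimationClip>("Assets/Hat_On.anim"));
        tree.AddChild(AssetDatabase.LoadAssetAtPath<AnimationClip>("Assets/Jacket_On.anim"));

        // children is returned by value, so edit a copy and assign it back.
        var children = tree.children;
        children[0].directBlendParameter = "HatToggle";
        children[1].directBlendParameter = "JacketToggle";
        tree.children = children;

        var state = controller.layers[0].stateMachine.AddState("Toggles");
        state.motion = tree;
        AssetDatabase.SaveAssets();
    }
}
```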

I do wish that, similar to Thry’s tool, VRChat could do a benchmark of sorts and then calculate avatar performance based on it. Surely there is a way to calculate a performance score from some common factors, using the benchmark on a user’s system as the relative scale.

Sure, people have said that someone could modify the SDK or somehow get around that. But I counter with: they can already do that with avatar performance ranks. Why don’t they just make their avatars rated Good or Excellent?

1 Like

Any attempt to benchmark avatar performance runs the risk of producing a system that’s esoteric to everyone but graphics developers. Performance depends on so many factors that are often not obvious to most content creators.

For example: rendering several avatars, each with one light and each in range of the others, multiplies the number of draw calls, since with the built-in forward renderer every per-pixel light adds another pass over every material it touches; the cost grows roughly quadratically as more lit avatars show up. This is not intuitive to people who are not familiar with how a traditional forward renderer handles lighting.
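
A back-of-the-envelope version of that, assuming the built-in forward renderer where each per-pixel light adds one more pass over each material slot it touches (and ignoring the pixel-light-count cap and any culling, so real numbers will be lower):

```csharp
// Rough forward-rendering pass estimate: every material slot gets a base
// pass plus one additional pass per per-pixel light affecting it.
public static class ForwardPassEstimate
{
    public static int Estimate(int avatars, int materialSlotsPerAvatar, int pixelLightsInRange)
    {
        return avatars * materialSlotsPerAvatar * (1 + pixelLightsInRange);
    }

    // 10 avatars, 5 material slots each:
    //   no avatar lights:            Estimate(10, 5, 0)  ->  50 passes
    //   each avatar brings a light:  Estimate(10, 5, 10) -> 550 passes
}
```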

Draw calls and animators are not the cause of the performance bottlenecks.

I tried using a profiler (back before EAC, and in VRChat test mode).

Tracing back, I found that a large number of checks inside the game engine were the cause.

Have you tried to reproduce and profile?


I did an experiment where I used a very resource-intensive public model.

In a world without Udon, it dropped from 500~600 fps to 105 fps.

Total main-thread load was 64%, which is roughly equivalent to two full cores.

41.77% was the game engine, and 19.47% was game logic added by VRChat itself.

The render part only occupied 4% of overall CPU, and it runs on its own thread.

This exact situation isn’t very common in VRChat friends+ instances or public worlds,

but the ratio of resource consumption won’t be too far off.

In a public world with more than ten people I measured 60% main-thread load, with 40~50% coming from the game engine.

Rendering only occupied 10~11%, and that includes things like system calls; on a 5600X that comes out to only 45 fps.

Tracing back, the results pointed to physics simulation and bone-related work.


Thinking that the problem might be how VRChat executes your own avatar from your own perspective, I decided to open two clients for an experiment.

VRC_test_02

Seen from another player’s perspective, the overhead is relatively low.

But at the very least, remember not to use an avatar that consumes too many resources yourself; it will make you miserable first!

With many people around, the main-thread load only comes down to 25~33%, not lower.


Udon’s own interpreter accounts for about 3~5% of the overall overhead, but could the related implementation be causing a lot of the CPU overhead?

Have a standard benchmark test that gets the performance of your system. Then get the performance of the avatar. This can be done with the performance profiler.

Then the relative performance between them can help derive a general score or ranking that better represents the performance of an avatar. The relative performance should be similar across a lot of systems, especially if they account for hardware and adjust the scores as more people do the performance test.
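
A hedged sketch of what that relative score could look like; the idea is simply “how much frame time does this avatar add compared to a reference scene measured on the same machine”, and every number and threshold below is invented for illustration:

```csharp
// Express an avatar's measured frame cost relative to a baseline scene
// measured on the same hardware, so the score travels between systems
// better than raw milliseconds do.
public static class RelativeAvatarScore
{
    // e.g. avatarFrameMs = frame time with the avatar minus frame time without it.
    public static double Score(double avatarFrameMs, double baselineFrameMs)
    {
        return avatarFrameMs / baselineFrameMs; // 1.0 = "costs as much as the reference scene"
    }

    public static string Rank(double score)
    {
        if (score < 0.25) return "Excellent";
        if (score < 0.50) return "Good";
        if (score < 1.00) return "Medium";
        if (score < 2.00) return "Poor";
        return "Very Poor";
    }
}
```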

This seems like a huge edge case. But either way, lights are already heavily counted towards Very Poor avatar status; a single light is already a Poor rating.

Except we can’t see API calls here, so we can’t say for certain what is causing that. It could be the way that VRChat runs avatars, which includes these heavy animators, and that could show up as a Unity DLL call. And are draw calls exclusively a render-thread load? Meaning: couldn’t the CPU time they take also show up under the Unity DLL? Also, your test was with a public model, and you mention bones, but was this before PhysBones, when people were still running Dynamic Bones?

Not trying to doubt you, but I am pointing out that we can at least do performance profiling on our own avatars, which should represent pretty well what is happening in game. And since there’s a good chance avatars are not instanced/batched on the CPU or GPU unless people are using the same one, things like animator performance and anything else in the profiler should scale additively. So if everyone’s avatar has a 0.5 ms animator on average, that’s 20 ms of CPU time at 40 avatars.

It’s no secret that VRChat’s code is inherently kind of slow, compared to that other social VR game that also runs (way better) on Unity 2019. However, I have been in many places where I can maintain 72 FPS (Quest 2 refresh rate) with about 10 people, many of them with Very Poor avatars, even on a 3950X, which is notoriously weak for games. Meanwhile, other worlds will drop me to 50 FPS with just me in them.

I’ve also been to events with 20-40 people where I might get 20 FPS or less with 20 people, and 30 FPS or more with 30-40 people. It really depends.


At the very least, even if not to produce a general score, it would be useful to have a performance profiling tool and suggested metrics for certain aspects, similar to how Thry’s VRAM calculator already has a warning/guide on what VRAM target to hit.
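
As a rough idea of what such a tool measures, here is a sketch that just sums the runtime memory of every texture the selected avatar’s materials reference (the menu path is made up, and the number it prints is an approximation of what community VRAM calculators report, not an official metric):

```csharp
using UnityEngine;
using UnityEditor;
using System.Collections.Generic;

// Sum the runtime memory of every unique texture referenced by the
// selected avatar's materials.
public static class AvatarTextureMemory
{
    [MenuItem("Tools/Avatar Texture Memory")] // hypothetical menu path
    private static void Report()
    {
        GameObject root = Selection.activeGameObject;
        if (root == null) return;

        var seen = new HashSet<Texture>();
        long bytes = 0;

        foreach (Renderer r in root.GetComponentsInChildren<Renderer>(true))
        foreach (Material m in r.sharedMaterials)
        {
            if (m == null) continue;
            foreach (int id in m.GetTexturePropertyNameIDs())
            {
                Texture t = m.GetTexture(id);
                if (t != null && seen.Add(t))
                    bytes += UnityEngine.Profiling.Profiler.GetRuntimeMemorySizeLong(t);
            }
        }

        Debug.Log($"{root.name}: ~{bytes / (1024f * 1024f):F1} MB of texture memory");
    }
}
```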

The tests were done after PhysBones and before EAC.

Draw calls are on a separate thread and do not affect the main-thread load.

I have experimented with seventy to one hundred animator layers; they are less expensive than a few extra skinned meshes.

Even for the same number of draw calls, skinned meshes are much more expensive than static meshes, which in turn are much more expensive than extra materials.

I have done many Unity3D experiments.

I must say that reverse engineering is very tiring. Even so, there is no way to pinpoint the cause of the problem accurately; you can only see the end result.

Tracing back, physics simulation, checks, and scene searches are why more than a dozen items cost 1-2% or even 3% each, accumulating to more than 30%.

I can only guess that it is the PhysBones or Udon implementation logic, because when reproducing the related CPU overhead, they often show up together.


I must say that different architectures, and even different core counts, give different percentage results.

In particular, Unity3D’s implementation is not very good; with URP there would be many improvements.

The same number of draw calls shows up in Nsight, but the results from profiling on AMD versus Intel CPUs are often quite different.

There is so much to tell here that I can hardly fit it all in.

For example: the influence of polygons and GPCs, the number of avatars in the FOV, how lighting effects and shadows are done... you can only introduce a piece of reference hardware, get the score on it, and use the values measured on that reference hardware as an estimate.

Getting it right requires running a lot of combinations, which is not a realistic thing to do.

1 Like