Developer Update - 7 December 2023

Quest Standalone is showing as Android. :robot:
Are you removing movie worlds? :cinema:
And as always, stay out of my shed. But for real, stop being so invasive, you creeps. :identification_card:
Security Checks work so well that you'll remove EAC? lol.
The only thing good to come out of all of this was the Clock. So whoever did that, thank you.
As for the rest of you, :pouting_cat: behave yourselves.

I guess all the standalone and phones show as Android now.

Kind of like desktop and PCVR are identical with no identifier.

I’m tempted to just call the phoners “deskies” until a proper term shows up.

As one of those weird snout-havers, I’m especially thankful y’all chose to revert the constraint order-of-operation changes for the time being. And I’m excited to see an officially supported feature for modifying how your head is visible in first-person!

I have one gripe with the head-chop component, however: the inability to animate HOW MUCH the affected bones are ‘un-chopped.’ The simple ability to turn it on and off is good, but for all my critter-brained friends that want to see their pretty snouts, it’s also a big part of the workflow to be able to decide - in game - exactly how much of a certain bone is being scaled up or down.

I’m sure this has other use-cases too, but for the specific example of the first-person visible animal snout, this is important because we’re talking about vertices that are REALLY close to the player’s face. What most of us are doing is scaling the snout back up to 100%, but then having a SECOND bone with a smooth weight paint gradient on it that descends from the brow down onto the snout/face. This bone we can then animate in-game with a radial slider to gradually, carefully pull that obstructing part of the face back until it’s just barely out of the way of the camera… but without heavily distorting the rest of the face, or creating an aggressive, sudden “wall of flesh” right at the bottom of your viewport where your snout starts.

This, in combination with differing near clipping plane distances in various worlds (forced camera near distance is a godsend, but very buggy in its current state), means that the back-and-forth between Unity and Blender to get the weight painting JUST RIGHT is a huge obstacle and annoyance, if we lose the ability to manually adjust the exact scale of transforms on the face in first person. I would really, really love to see the Head Chop component - or something similar - include an animatable property from 0 to 100% for the scale of the un-chopped bone in first person.

Cheers! :sparkles:

5 Likes

A ton of the FT community still uses animator-based eye tracking due to the current limitations of the native eye blink, one of the major ones being per-eye blinking.

2 Likes

Why is group creation still locked behind VRC+? Should change early next year.

VRS -

Will it be possible to use VRS with DIY eye-tracking solutions, for example on the Index or the Bigscreen Beyond?

That would be amazing, thanks

1 Like

So wait, I’m confused: one sentence says that a disabled component shows the bone, but then the next sentence says that disabling all components hides all bones? Seems to contradict itself.

If there are any enabled head chop components on the avatar, then those specific bones will be scaled away as part of head chopping. If there are no head chop components or all of them are disabled, that means custom head chopping isn’t being used, so we fall back to just scaling the head away as usual.
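To make the selection logic above easier to follow, here is a minimal sketch in Python. This is not VRChat's actual implementation; the `HeadChopComponent` class and `bones_to_scale_away` function are hypothetical stand-ins that just model the described behavior: enabled components pick specific bones, and with none enabled, the whole head bone is scaled away as usual.

```python
from dataclasses import dataclass, field

@dataclass
class HeadChopComponent:
    """Hypothetical stand-in for the real head-chop component."""
    enabled: bool
    target_bones: list = field(default_factory=list)

def bones_to_scale_away(avatar_components, head_bone="Head"):
    # The system searches the whole avatar hierarchy, not just the head.
    enabled = [c for c in avatar_components if c.enabled]
    if not enabled:
        # No components, or all disabled: custom head chopping isn't in
        # use, so fall back to scaling the head bone away as usual.
        return [head_bone]
    # Otherwise, only the bones named by enabled components get chopped.
    return [bone for c in enabled for bone in c.target_bones]
```

So a disabled component contributes nothing on its own, and disabling every component restores the default head-scaling behavior, which is consistent with both sentences quoted above.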

Just a reminder that, to replace the functionality of scale constraints, creators need both the ability to show something that is a child of the head and the ability to hide things that are not children of the head.

This system will search the entire hierarchy of the avatar for head chop components, not just the head itself, so both of these cases should be supported!

It’s still in active development at the moment, but with the current design, the custom head chopping system will just change which specific bones get scaled away, similarly to what happens to the head bone normally.

1 Like

Thanks very much for your feedback - this is something we can look into!

6 Likes

Well that answers this question lol. W

Will we see more official components for things like placing objects in world space, which is currently done with constraints?

1 Like

Will there be a fix for multiple Quest-friendly avatars not being usable on mobile?

For example, my personal avi is Quest-friendly, but its rating is “too poor” for mobile; however, when I loaded in, it briefly showed said avi. It’s annoying that I can’t even use my own avi, with no way to enable it, especially when my phone has the option of boosting my RAM.

Here is the Canny requesting that Very Poor be added back into the phone version:

https://feedback.vrchat.com/mobile-alpha/p/desperately-need-a-very-poor-allowance-on-mobile

I recently fought a bunch with Blender and made Poor and Fallback versions of my avatar for Quest. I would like to see the polygon count for Poor raised a bit; if it were raised from 20k to 21k, I’d be tempted to try again.

1 Like

Practically speaking, animators (FX animators in particular) are the worst offenders. Besides that, I find it’s also VRAM. PhysBones are incredibly performant. By the way, I just learned that Gesture Manager can pull from the profiler to tell you exactly how laggy your animators are: I just compared a 0.22 ms animator to a 3.61 ms animator. Significant difference at scale.
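To illustrate why a few milliseconds per animator matters at scale, here is a rough back-of-the-envelope sketch. The 0.22 ms and 3.61 ms figures are the profiler numbers from above; the 90 FPS budget and the 20-avatar instance size are illustrative assumptions.

```python
# Frame budget at 90 FPS (common PCVR refresh rate): ~11.1 ms per frame.
FRAME_BUDGET_MS = 1000 / 90

def total_animator_cost(ms_per_avatar: float, avatar_count: int) -> float:
    """Naive total CPU time if every avatar's animator costs the same."""
    return ms_per_avatar * avatar_count

lean = total_animator_cost(0.22, 20)   # lean animators across 20 avatars
heavy = total_animator_cost(3.61, 20)  # heavy animators across 20 avatars

print(f"20 lean animators:  {lean:.1f} ms ({lean / FRAME_BUDGET_MS:.1f} frame budgets)")
print(f"20 heavy animators: {heavy:.1f} ms ({heavy / FRAME_BUDGET_MS:.1f} frame budgets)")
```

The lean case fits comfortably inside one frame, while the heavy case alone burns several entire frame budgets, which is why a per-animator difference that looks small in isolation dominates in a full instance.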

Udon is also bad, and I hope it’s improved, but it often still comes down to people not being great programmers. No fault of their own, but there’s only so much you can do with inefficient code.

1 Like

Udon lacks many data structures; performance would be easier to improve if it had more containers and iterators.

More control over memory operations also helps maximize performance, which is part of why C/C++ are so important.

As long as the physical basis of computers does not change, caching will remain a fundamental problem: as scale doubles or grows exponentially, the cost of interaction between different elements keeps growing.

At present, the energy and die-area cost spent on registers, caches, and the rest of the memory hierarchy is more than 20 times that of the pure compute units.

Stateful code cannot be measured accurately in isolation: a program’s performance when stripped out on its own differs from its performance when combined with everything else. This is why performance data should come from the profiler rather than from blindly writing a reproduction.

Taking a basic benchmark as an example, it divides into three parts: computation, branching, and cache, the latter two of which are affected by state.

This is not meant to be confusing, but to emphasize that computers exist to manipulate data; instructions are just coupled to the machine for efficiency. Data structures and algorithms, for understanding, constructing, and designing the problem, are always the core. A stable foundation, good and intuitive data operations, and a large number of packaged libraries are a major guarantee of development efficiency and performance, and make things easier to maintain.

1 Like

I’m not actually convinced that cache coherency is as much of a problem on a practical VRChat level, even though there is truth to what you say.

Many poorly performing avatars look like something out of an early 2000s game. They aren’t performing poorly due to some fundamental low-level bottleneck; they perform poorly because they use 20 physbones per whisker, and each whisker is 5k triangles and uses a different material.

You can improve performance by just not doing that. Computers have been more than capable of achieving the perceived graphical fidelity and functionality seen in VRChat for a very long time. But many of the creators are relatively inexperienced and don’t know how to do this. This isn’t intended as criticism – it’s just inherent to the kind of thing VRChat is.

I also (perhaps cynically, I admit) suspect that people would respond to increases in low-level efficiency by using even more physbones, materials, and triangles, until the performance ends up functionally the same i.e. this is less a technological problem and more a sort of psychological one.

VRChat’s performance problem is a Wicked Problem.

2 Likes

The polygon issue is a GPU issue, whereas what I described is a CPU issue.

In fact, it is no longer a big problem since the 256-component limit.

The problems caused by polygons would warrant another article, but they are not the subject of this section.

After all, GPU problems can often be solved by buying a higher-end card, while CPU problems are much harder to solve. Polygons on GPUs are something of an exception: they are a problem of real-time rendering design.

A closely related problem is the VRAM usage caused by textures and meshes, and texture streaming has not been turned on yet.


In a badly made world, Udon itself can create considerable load, and it is more susceptible to the data load caused by a large number of avatars, leading to cache pollution.

If components such as animation constraints and PhysBones can increase CPU time by 10~20% by doubling the miss rate, then Udon’s cost is multiplied not by two but by more. This is due to the characteristics of DRAM: when a larger amount of bandwidth is accessed at once, the latency is longer.

So we can’t ignore the performance harm Udon causes in these worlds.
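The miss-rate point above can be illustrated with the standard average-memory-access-time (AMAT) model. All latencies and miss rates here are invented for illustration, not measurements from VRChat or Udon.

```python
def amat(hit_ns: float, miss_penalty_ns: float, miss_rate: float) -> float:
    """Average memory access time: hit cost plus expected miss cost."""
    return hit_ns + miss_rate * miss_penalty_ns

# Illustrative numbers: 1 ns cache hit, 100 ns DRAM miss penalty.
base    = amat(1.0, 100.0, 0.02)  # 2% miss rate
doubled = amat(1.0, 100.0, 0.04)  # miss rate doubled to 4%

print(f"AMAT: {base:.1f} ns -> {doubled:.1f} ns ({doubled / base:.2f}x)")
```

Doubling the miss rate well more than doubles the portion of time spent waiting on memory, and real DRAM behaves worse than this simple model because latency grows further under heavy bandwidth demand, which is the effect the post attributes to Udon-heavy worlds.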


To head off unnecessary arguments, let me say something first.

Large polygon counts generally come from hair tessellation shaders.

Most of the problems come from ignorance or misuse. After all, a tessellation shader can use the screen’s pixel density to control the triangle fill rate, which makes it controllable and efficient.

However, some assets (blankets, for example) use it in a fairly crude way, generating a large fixed amount of geometry instead of following density changes.

Some people think tessellation is less efficient than vertex shaders, and I have read similar articles on Reddit, but judging from profiler results, patterns that use tessellation appropriately, without being overly complex, are often more efficient.

A vertex-shader path also has to consider whether the mesh is static or skinned, and there is an efficiency difference between the two.

If you use a primitive, inefficient vertex-shader path without trying tessellation, or the more advanced mesh shaders or compute shaders, then a large polygon count not only wastes triangle fill rate but also causes relatively low hardware parallelism.
(The bandwidth available to the constrained geometry front-end depends on the specific architecture rather than on scale, so the geometry performance of mid-range and high-end cards is sometimes almost unchanged, and the clock frequency has to rise instead.)

(Digression: even with the advanced methods above, polygon parallelism is still relatively poor. For example, 5 GPCs can process 4.5 triangles per cycle, while 11 GPCs only reach 7 per cycle; that is the 4070 Ti vs the 4090.)

I know many people are quite wasteful with polygons, which creates a huge GPU load when many people are around. Coupled with the inefficiency of Unity’s built-in real-time shadows and mirrors, the frame rate is appalling.

By the way, large polygon counts are also the main cause of low-level bandwidth waste. In most relatively balanced scenarios, more than half, sometimes 70%, of the bandwidth to L2 and VRAM is caused by geometry.

This is because geometry access patterns are poor and don’t sit well in the higher-level caches, unlike textures or other operations that run comfortably out of L1. Features such as (static) reflective shaders may run out of L2 and add considerable load.

For example, once Unity optimizes a mesh at runtime, each polygon still needs three vertices. Those three vertices occupy 60~72 bytes in VRAM, and processing them occupies more than 144 bytes of bandwidth on L2 (divide by 0.8~0.9 for the actual occupancy). These costs can be reduced by reusing geometry and by generating geometry with tessellation.
(VRAM may serve this from cache with roughly 30~40 bytes of bandwidth, or even less.)

By comparison, the framebuffer’s 0.3~4 bytes per texel/pixel (depending on format and hardware processing) is far less and caches well.
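To make that arithmetic concrete, here is a rough sketch in Python. The 60~72-byte VRAM figure, the 144-byte L2 figure, and the 0.8~0.9 efficiency come from the numbers above; the 20~24-byte vertex layout and the 100k-triangle avatar are illustrative assumptions.

```python
# Assumed compact vertex layout (~20-24 bytes): position + normal + UV.
BYTES_PER_VERTEX_MIN = 20
BYTES_PER_VERTEX_MAX = 24
VERTS_PER_TRI = 3  # worst case: no vertex reuse between triangles

vram_min = BYTES_PER_VERTEX_MIN * VERTS_PER_TRI  # bytes per triangle in VRAM
vram_max = BYTES_PER_VERTEX_MAX * VERTS_PER_TRI

# L2 runs at ~0.8-0.9 efficiency, so actual occupancy = raw bytes / efficiency.
L2_RAW_BYTES = 144
l2_actual_min = L2_RAW_BYTES / 0.9
l2_actual_max = L2_RAW_BYTES / 0.8

tri_count = 100_000  # hypothetical avatar
print(f"VRAM per triangle: {vram_min}-{vram_max} bytes")
print(f"L2 occupancy per triangle: {l2_actual_min:.0f}-{l2_actual_max:.0f} bytes")
print(f"Geometry VRAM traffic for {tri_count:,} tris: "
      f"~{tri_count * vram_max / 1e6:.1f} MB per frame")
```

The point of the worst-case `VERTS_PER_TRI = 3` assumption is exactly the post’s remedy: reusing vertices between adjacent triangles, or generating geometry on-chip with tessellation, cuts the per-triangle traffic well below these figures.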

1 Like

I’m sure it’s just a language-barrier issue, so no offence intended, but what? I’m having such a hard time parsing your post.

But I think you may have missed my point, which has nothing to do with polygon throughput and everything to do with people and how they act. You can successfully increase the efficiency of the software by orders of magnitude through clever engineering and still have exactly the same problem.

There is one point I’m curious about:

Generally, a large number of polygons are used, mostly due to the hair tessellation shader.

How common are hair tessellation shaders in VRChat? Because I don’t think I have actually seen them used, and the avatars that I based my post on certainly don’t use them. They just upped the subdivision modifier in Blender way too high, which I am pretty sure is a more common issue across general VRChat avatars than tessellation-shader abuse.

But the situation you describe, where a single hair root has 5,000 polygons, is really rare; it’s an example that shouldn’t occur at all.

And hair shaders are not common? Maybe what I see is different from what you see. Have you ever seen a tail with many subdivisions?

I know what you mean, but the efficiency of different methods varies greatly. If you propose a workflow that only subdivides hair in Blender and only uses vertex shaders, of course there will be problems.

I don’t think a person would pick up Blender and start creating for the game without even looking at the details of VRChat.

This is not the first time I’ve seen the idea that higher efficiency will just make people more wasteful. But I don’t think it’s realistic: as people use Blender for longer, they come into contact with normal baking and other methods and become more well-rounded.

I think this is more of a Blender-newbie problem.

If they just read some problematic articles and built things that simply, causing many performance problems, then they should be informed and educated.

Moreover, many people fixate on the numbers in the avatar stats table. If that many resources are not active at the same time, there is no need to worry.

There will always be a lot of poorly produced assets, and over time people throw them away.

However, I think your concern is really about LOD. The object you consider wasteful may simply be meant to be detailed, but from where you stand you often can’t see that detail; you may need a lower LOD level or a fallback.

Stuff like this is very common and is the primary source of performance problems.

One avatar I saw recently had dozens of physbones per whisker, with dozens of whiskers. Not only is this ludicrous performance-wise, it also doesn’t look very good: the whiskers move about in bizarre ways that don’t resemble how whiskers (which are fairly stiff) actually move.

This is normal in VRChat.

You don’t need good cache coherency or tessellation shaders to achieve good performance and great visuals in this case and many others. Just use fewer bones and they will perform and look better.

Fewer materials, sane bone, physbone, and constraint counts, and your avatar can perform absolutely fine and look great.


Your post is an extremely engineer-focused look at the issue (which, as an engineer, I do empathise with: eking out more performance is super rewarding and fun), to the point that I feel you are underappreciating the human aspects of the problem. People aren’t going to throw out their poorly performing avatars based on technical arguments, because they identify with them on an emotional level. And that emotional connection, though perhaps problematic from a strict engineering point of view, is undeniably a major part of VRChat.

I’m going to make a controversial statement: VRChat is reasonably optimised for what it is. It’s not perfect, but improving its performance requires more than technical solutions.

If VRChat wants to improve the end-user experience, there is lower-hanging (and juicier) fruit than tessellation and caching concerns. The addition of a simple VRAM limit did tons more than a complex tessellation shader ever could.

If content creators want to improve the performance of their avatars, tessellation and cache coherency should be very, very low on the list of things to look at; games have managed great-looking hair with nothing more than some billboards and an eye for quality art for decades.

2 Likes