Developer Update - 28 September 2023

There are heaps of resources on how to improve coding habits, streamline code, simplify systems, and so on. If the developers aren’t well versed in the subject, it can be learned, or a dedicated team can be made responsible for cleaning things up.

Blaming user-generated content is an excuse, not a solution. The main offender in user-generated content performance cost is egregious animator controllers, which could be severely mitigated by making an SDK2-esque simplified avatar creation pipeline (limited and pre-made) the standard, leaving full animator control for advanced users. User content would change to work around this. The largest performance expenses I’ve seen on VRChat’s end are Udon and the menu, but overall many things have just gotten heavier over the years. Computers that could run content just fine in 2018 now struggle in the same world with the same avatars and user counts present.

Yes, it’s a lot of work, but as countless companies have demonstrated, it is very necessary work; they have spent months to years on total rewrites to achieve it. Companies have spent absurd amounts of time and taken massive financial losses in order to ensure the smoothest, most streamlined experience possible.

User generated content is only a major performance issue in circumstantial cases where there’s a very specific bug or issue with particular content that the user needs to solve or work around.


about time the user page on the site got updates, blocks and mutes page next?

I wish you would incorporate an event listing in the in world panel. I keep asking, and I will keep asking since I believe it’s important. Nobody can find our events each week, because there is no event listing in the panel. Is it hard to add something like this? This would make VRChat a lot stronger of a platform.

I know for Japanese speakers there are two or three event calendars that are shown in various worlds, maybe someone needs to do the same for English speakers?

A non ideal system would be a good motivator for the devs :stuck_out_tongue_winking_eye:

If you knew that texture streaming has never been successfully enabled in VRChat so far, and the (very funny) reason why, you would be dumbfounded.

Perhaps their main focus is not to optimize resource usage as much as possible, analyze current hardware costs and user configurations, and improve visibility and experience, but rather to hope to acquire more users through improved marketing.

You must understand that the current cost of VRAM is not low. Due to changes in hardware growth trends and constraints on the technical roadmap, it remains very high.

But maybe the target is not just PCVR users; they want it to run on Quest/mobile too, and so hope to solve the problem through a more versatile path.

However, they have a ready-made, feasible, and mature solution, and instead of using it first, they are pursuing a technical path that is more difficult in practice and full of problems.

All I can say is you’re praying for a silver bullet.

It’s got nothing to do with silver bullets. It’s extremely important, and the more clearly that’s made, the more inclined they will be to put effort in. Performance struggles affect Quest more than everyone else, which is why minimizing the overhead of the client itself is paramount, so that user content is less painful and less of a retention deterrent.

Why did they choose the impostor solution?

Because they cannot otherwise deal with the overhead and impact of object preloading and execution, nor the difficulty of optimizing visible polygons.

That’s why they chose to build a solution like this. However…

Auto-generating LODs may not even be a good solution anymore, let alone generating something that looks like the original but saves more resources.

Other than that, I don’t see too many attempts to go down any other path.

Why do I describe it as a silver bullet? Even if we used AI style transfer to create a new type of robot avatar, we couldn’t restore the original look and features; it is extremely difficult to implement and carries huge uncertainty.

It can easily become something that is described as very powerful, but in reality has to be wildly discounted over and over again in order to ship.

Once it’s out there, maybe people won’t want to use it, and it becomes a redundant cost.

That’s how I expect it to go down.


Impostors can solve many problems in one go: CPU/RAM/SSD/GPU/VRAM/network.

Other solutions need to be addressed individually.

  1. Streaming textures
    It effectively solves the problem of huge RAM/VRAM usage and frees up avatar and world budget for baked lighting and other effects. The cost can be much lower and the efficiency higher, but it occupies more network bandwidth, and it offers no great improvement for high-cost features such as dynamic lighting.
    Trading download size and SSD capacity for quality, visual effects, and efficiency, especially VRAM cost, is undoubtedly a good deal; compare the specifications and price differences of computers commonly sold on the market.

  2. Replace with an SRP
    The pain period would probably last three or four years or more, and it is hard to pull off. It would effectively solve the efficiency problem of static mesh rendering and offer a better future; let’s not even talk about Unity’s ridiculous, constantly changing implementation.
    Scenes would be cheaper, and shadows and reflections would be more efficient in some cases because of visibility, but the implementation is unimaginably difficult.

  3. Automatically generate LOD versions
    Can’t elaborate much here; LODs are a big rabbit hole.

Maybe there are more solutions that I haven’t thought of and seen.

I would have liked an update on the previously announced shader updates on Quest/Android. It seems to be taking much longer than I assumed it would, but hopefully it comes around soon enough, as it would enable some really useful upgrades in visual fidelity on standalone platforms.

I found out that the shown-users list is missing from the site now. That’s creating a security problem with crashers for anything event-related. I would really like to have that feature back.

Showing / Hiding Avatars is all local now. This was actually changed a while ago. ^^

Most of the responsibility for optimisation rests on the content creator, by necessity, since it’s the content that takes up most of the system resources.

Please this. I had to distract someone away from touching their interactionOn section, as they were mixing it up with the blocked section.

Impostors are supposed to be for far away. That’s why they were originally invented in game design. I remember when they pioneered a realtime bake system for Assassin’s Creed so they could have correctly lit but 2D buildings in the distance.

No, it’s not impossible. I don’t want to be the one to use harsh words here, but that’s just plain ignorant. There are thousands of solutions to problems that people have come up with over the decades, and again the issue is much less with user-generated content at large, and specifically with VRChat’s systems and the unfiltered, unrestricted, clumsy use of bloated animators. Again, give people that don’t know how to optimize pre-optimized, fool-proof templates to use, and let advanced users, who are more likely to consider the relevance of optimization, take a crack at manual graphing.

Realtime LoD generation, as opposed to distance impostors, is an extremely taxing system and is one of the main reasons why UE5 struggles to hit acceptable framerates without upscalers. Using impostors as distance-agnostic fallbacks is going to rapidly degrade the UX in VRChat, and honestly should be saved for mobile phone use only.

LoDs are not the issue, unless you have lower LoD versions with reduced material counts and shader complexity.

No idea what you are getting at here… Baked lighting is usually a single 4K map encoded in BC6H. Streaming of textures is already done; they just aren’t frustum-streamed/discarded straight off disk the way Unreal does it (I always hated that, because it caused hitching when turning too fast, especially without an SSD). Avatars don’t load all of their texture and shader memory into your VRAM and just sit there being fat. Materials and their associated textures are offloaded from working memory and allowed to be replaced at any time by other assets if necessary, which may push them out to system RAM; if they do not need replacing, they can quickly return to working memory when the associated mesh renderer is reactivated. It is already quite optimized, and this is why VRChat’s VRAM metric is not how much memory the avatar uses at any given time but the worst-case scenario if every texture sampler were active at the same time. Assets are decompressed off disk and handled by native memory management, and to get rid of leftover memory there is manually coded garbage collection, which purges things after the avatar or world has been gone for a while.
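As a side note on why mip handling matters so much for memory: each mip level is a quarter the size of the one above it, so a full chain is only about a third larger than the base level, and dropping the top levels shrinks memory by roughly 4x per level dropped. A quick sketch in Python (the 4K uncompressed texture is illustrative, not VRChat’s actual budget):

```python
def mip_chain_bytes(width, height, bytes_per_pixel, skip_mips=0):
    """Approximate GPU memory for a full mip chain, optionally dropping
    the largest `skip_mips` levels (what mip streaming effectively does)."""
    total = 0
    w, h, level = width, height, 0
    while True:
        if level >= skip_mips:
            total += w * h * bytes_per_pixel
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
        level += 1
    return total

# A 4096x4096 RGBA texture at 4 bytes/pixel (uncompressed, for simplicity):
full = mip_chain_bytes(4096, 4096, 4)                 # ~85.3 MiB
capped = mip_chain_bytes(4096, 4096, 4, skip_mips=2)  # ~5.3 MiB
print(full / 2**20, capped / 2**20)
```

Streaming out just the top two mips cuts this texture’s resident memory by roughly 16x, which is why the worst-case-sampler metric and the actual at-rest footprint can differ so wildly.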

Shader complexity contributes significantly to VRAM cost, which is why this is so much more of an issue now than it was, say, 5 years ago; back then people used very simple shaders (unless they used CubedParadox’s, which was unnecessarily complex). If you have issues with GPU performance, it’s mostly due to overuse/abuse of Poiyomi and lilToon shaders, which are both overdesigned.

Not gonna happen. There are many reasons, some of which TCL explained in detail in the past (though I forget the specifics), why BIRP is actually a better choice than an SRP for VRC’s use case.

They already created a workaround for this with the avatar hider system, which was previously implemented (honestly in a slightly more effective manner) by a mod. Again, the biggest performance hit on avatars is their animators, and LoDs won’t optimize their animators. Having impostors instead of diamonds I suppose isn’t terrible, but tbh I dunno how I feel about seeing heaps of low-res, janky sprite puppets everywhere.

Overall, I’ve already presented possible solutions that would mitigate community content and discourage people from abusing/overusing taxing systems. But when it comes to the most consistent performance costs, those are entirely VRChat’s problem. User content performance issues are anecdotal and depend on conditions: sometimes I can be in a lobby with 40 people and still have decent frames, sometimes I’m in a lobby with 40 people and the frames are garbage (though a lot of the time that is due to bad animators). Udon is too slow/taxing, the UI is too slow/taxing, and many things under the hood need to be streamlined to reduce their compute time.

I have an R9 3950X, 32 GB RAM, and an RX 6700 XT 12 GB, and I still struggle to get 80 fps in most places even with just me and a couple of others, due to Udon and VRChat’s internal systems, even when our avatars are well optimized (this applies to most others as well, unless they have a monster CPU like a 7800X3D). In SDK2 worlds I get 80 fps easily, mind you, because the CPU and I/O have room to breathe. I gave up on a lot of world creation plans due to how heavy Udon is, and VRC dumping SDK2.

I’m very skeptical that you’ve actually benchmarked and profiled all the parts.

I’m already a bit skeptical.

Avatars are indeed fully loaded and do not use streaming textures, and the same goes for worlds.

Experiment with multiple uploads and local exports to see for yourself.

Some of the mega-worlds are over 1 GB in size, and even some graphics cards won’t run them smoothly, not because of the computational load, but because of repeated VRAM releases.

Even if you make your own project and turn on texture streaming, you will realize it’s not the same at all. When was it ever enabled?

Also, did you not even read the announcement and the past descriptions of the impostors’ use? Sharing a name does not make them the same thing, okay? UE5 and VRChat are two completely different things!

Also, I have monitored VRAM and main memory, and content was not unloaded in the past; I would even call that claim nonsense.

Tell me, have you ever learned to use decompilers, or even monitoring tools? Have you ever used 6 GB and 8 GB graphics cards with different performance?
Do you understand how drivers and engines work?
Do you know the difference in experience between a 3060 with 6 GB and one with 12 GB of VRAM, even when general benchmark performance is the same?

My God, I suspect I’m living in a parallel world that operates on completely different rules; things are so different it’s beyond belief.

Also, have you actually experimented with the animator to find out whether it is the cause of the performance problems, building detailed test scenes or even bringing in tools to find out why?
It’s not just a matter of watching someone on Twitter do an experiment.
I’ve used many people’s public avatars, and I’m more than aware that this is not the cause of the problem at all.
I don’t want to be harsh, but I don’t think it’s possible to reach such vastly different conclusions if you really spend a lot of time researching and repeatedly experimenting over a long period.


VRChat doesn’t have streaming textures enabled; that’s pretty obvious. It has been the same situation this year, as answered in forum discussion.

I clearly explained the ridiculous reason, but as a result, some people turned a blind eye and assumed this functionality already existed.

You know what some people found when they experimented with a super-sized world? Your RAM and VRAM both fill up!

And when you get close to the peak, GPU usage also gets pinned at 100% (through PCIe access), and the behavior then becomes VRAM repeatedly reallocating.

The behavior here is different from the usual case, where full VRAM is offloaded to RAM and keeps using PCIe until hotspots frequently accessing RAM cause a lot of waste.
Through monitoring, you can find that VRChat does not actually transfer the contents of VRAM to RAM; Windows transfers some resources, but only a few hundred MB.
Eventually, when it cannot be resolved, in extreme cases the driver will have problems and other applications that need VRAM will crash. Before that, Unity tries to reallocate its own VRAM instead of using RAM; in VRChat, Unity cannot migrate VRAM contents to RAM.
If you find that RAM usage is abnormally high, that is a problem caused by Unity’s resource-management cycle, and it is also related to VRChat’s settings (and built-in check options). It is not RAM carrying the content from VRAM; that is a major misunderstanding.
If you are a developer, you should quickly notice the obvious difference and should not make this mistake.

If VRAM truly overflowed to RAM, you could verify it yourself: it would cause Windows to use shared main memory, and PCIe usage would rise from 10-15% to about 50%.
Some games do behave this way, so it’s obvious once you test it.
And sometimes it is not I/O at all, but the behavior of the driver and game engine running on the CPU, that causes the contents of VRAM to be replaced and reloaded.
If you have texture streaming, textures briefly appear blurred and are then replaced with high-resolution versions. This effect is very noticeable, but VRChat has never had it.

Actual operating efficiency plummets, such that an avatar with obviously little active mesh but a size of 100 MB or more, or one occupying up to 1.5-2 GB of texture memory, will make you lag or even become unresponsive.

Enabling it on the texture alone has no effect; you also have to turn on the global texture streaming switch.

Other things need to be experienced to be understood. No matter how much you talk about it, nothing compares to one actual test.

When VRAM is full, fps drops rapidly and CPU frametime skyrockets for a period of time. Do you have a tool like fpsVR to notice it?

Fax: VRChat currently doesn’t support streaming mipmaps. It’s something we might decide to add at some point, but some issues would need to be addressed first.


Thanks for this, but that’s in-world. I can always add a meeting-times list in my world, and it does have one, but an event system in the panel in VRChat would make it so people can actually find events. Right now you have to look online, and really have to dig through Discord, to find the online events. Most people aren’t going to go on the web and do a Google search for VRChat events. This is something Altspace had, and it made the platform strong until they decided to kill it. VRChat has so many amazing things about it that Altspace didn’t have, and this would make VRChat a much stronger platform. It’s really the only thing missing, aside from prohibiting freakishly heavy avatars that make people wonder why the world they are in is so heavy. LOL. Thanks everyone for listening. I love VRChat, and I just want to make it better.

Something seems wrong about this. I have far weaker hardware and generally everything runs smooth, even when people start using poor performance avatars. If there’s a performance problem, it’s usually caused by one exceptionally poor avatar, and the problems go away once that avatar is blocked.

Could you elaborate on this? In my experience, Poiyomi is reasonably well optimised. Keep in mind that Poiyomi is a classic uber shader, so it looks very complex and heavy in the Unity Editor, but it compiles only the necessary functionalities into the final optimised shader variant at build time; testing in the editor is not a valid way to measure its performance.

It comes across as quite a well-designed and elegant shader system overall.

Any info on the creator economy? There hasn’t been info since May.

I think you are confusing what type of stream.

Most worlds that are that big have crazy light bakes and a lot of audio, animations, and mesh data that adds up. Textures too, of course, but only the mips rendered in the scene from your current position and the LoD config are actively rendered and kept in active VRAM. A lot of world creators also compensate further by properly configuring culling or manual triggers for loading/unloading objects.

I’ve been informed that streaming mipmaps should be active in VRChat, just perhaps not implemented well, as in it only applies to extremely far-away stuff. I’ll have to test this; I’m thinking of using a stupidly fast avatar, yeeting myself at a cliff or wall that is far away, and looking for mip pop-in. Unity terrain also reduces its detail at distance in realtime, and it probably manages the streaming more properly.

No, Unity already has impostor systems; impostors are not UE5, impostors are universal and have been around for a very long time. I mentioned Assassin’s Creed, which is Anvil, because they did realtime impostor baking. Unity has a package that does this called Amplify Impostors (there are probably others too), which does more or less the same thing. VRChat’s impostors are not industry-standard impostors; they’d have to use a weird lightfield-esque sprite bake of each bone and then rig up a puppet dummy that renders these perspective sprites as non-skinned meshes, and that’s exactly what it appeared to be doing in the examples.

I feel like you don’t even know how Unity and C# work here. VRChat has many times done work on the very necessary garbage collection, which purges unused memory, because Unity doesn’t automatically expire it; you have to tell it to do so. Many people I know have tested, and I have also found it to be the case myself, that mesh renderers that are not active do offload themselves out of working memory, and if your memory fills up they will be evicted in favour of more readily needed memory. This can cause those meshes to hitch your system as it squeezes them back into VRAM off of your system RAM; and if your system RAM has also filled, it has to fetch them from the pagefile into system RAM and then back into VRAM, which can be a laggy process.

VRChat has more than once announced in patch notes that they’ve improved garbage collection, and recently made it so avatar assets that have not been in the scene (world) for a set time (I forget whether they changed it to 5 minutes or 1 minute) get purged from memory. Leaving a world also triggers a memory purge of everything that was in that rendered scene.

I used to be on an RX480 4GB, and frequently got that fps bug where I’d get corrupted memory from having Oculus Dash, and sometimes just a mirror or a grabpass shader, on. That issue went away when I moved to an RX580 8GB, but as I upgraded, over those same timespans, people kept getting more lazy and careless with their content as 3.0 made its headway and Booth assets kept being churned out and piled onto said avatars. That framerate issue from memory corruption had been a topic of discussion for a long time, and it wasn’t until some really in-depth conversations with very intelligent and knowledgeable people that we concluded it is probably corruption due to overflow that causes it, and it borks frame pacing.

Many, many people have. And yes, I have, and I’ve also gone over a lot of very specific egregious cases in detail with a number of people, including ones that Unity covers in GDC lectures and the like: near-but-non-zero floats cause performance issues, too many transitions cause perf issues, transitioning to self via Any State (especially on a complex layer) is very heavy, every layer incurs a cost because it has to run in parallel with all the other layers, actively animated blendshapes (especially non-zero ones) are very taxing, and animation speed affects the frame-time cost of the animator as well. All kinds of stuff like this adds up. These things are well known; you just have to find the right resources and/or talk to the right people.

You can’t just use your own system as a benchmark, because if you have a CPU that happens to play nice with VRChat due to its architecture, or is high-end, then the benchmark is moot because it’s anecdotal. The CPU cost of animators especially hurts because Udon is already clogging the CPU, Unity itself clogs the I/O with all its metadata in GameObjects and so on, and the UI and other systems are not streamlined enough. The CPU gets bottlenecked hard on most systems. VRChat only runs effectively on 4 cores (or 4 hyperthreads), and a majority of its legwork is done on the main thread (excluding stuff like PhysBones, which runs on the multithreaded Jobs system).

Depends what you mean by super-sized. I go to gigantic worlds all the time. Two of the most optimized worlds I’ve ever been to were absolutely massive: one called Moria, and another an acquaintance made where they combined something like all of the Doom games into one (that one had thousands of draw calls, by the way, and it rendered in about 1 ms or less, back on my old RX480).

There’s heaps of misinformation that goes around all the time.


I think you might not quite understand what I mean. The assets are loaded into memory, and then loaded from there into VRAM for rendering. When VRAM is full, data has to be evicted and re-loaded back into VRAM to render the next thing; this is what causes the stutters, because the GPU is waiting for the information it needs to render the next frame, and how long that takes depends on the transfer speed.
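For a sense of scale of why those eviction stutters hurt, the re-upload is bounded by bus bandwidth. A back-of-envelope sketch in Python; the ~12 GB/s effective PCIe 3.0 x16 figure and the 1.5 GB asset size are assumptions for illustration, not measurements:

```python
def stall_ms(bytes_to_upload, bus_gb_per_s):
    """Lower-bound time to re-upload evicted assets over the bus,
    ignoring driver and decode overhead (so real stalls are worse)."""
    return bytes_to_upload / (bus_gb_per_s * 1e9) * 1000

# Re-uploading 1.5 GB of evicted textures over ~12 GB/s of assumed
# effective PCIe bandwidth:
print(stall_ms(1.5e9, 12))  # 125.0 ms, i.e. over ten dropped 90 Hz frames
```

Even this optimistic lower bound is an order of magnitude longer than a single 11 ms VR frame, which is why one big eviction reads as a visible hitch rather than a gradual slowdown.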

The SDK literally screeches at you if you don’t have streaming mipmaps enabled.

fpsVR is not accurate; how inaccurate it is depends on the situation. Software-based latency metrics will always be wrong to some degree, and some cases can cause them to read radically wrong. It’s better than nothing, but it’s not the be-all and end-all. Graphical rendering is extremely complex.

My VRAM on my previous GPUs was always instantly full. On my current 12 GB GPU it doesn’t fill that much, or at least takes time. I have never experienced fps dropping due to full VRAM since my 4 GB card that had the corruption issue, which was usually triggered by arbitrary rendertextures: grabpass, camera, mirror, etc.



Depends on the CPU. I run better now that I’m on the R9 3950X, but I used to be on an R5 2600X, and it was much worse. The poorest-performing avatars for me have always been animator-heavy, primarily. Some avatars even have bugged animators where frametime continuously increases over time, dragging fps down, until you reload the avatar.

Ever since I upgraded to this GPU, my shader problems mostly went away. When I was on my RX480 and even the RX580, I constantly had to hide shaders, and the offenders were almost always Poiyomi. I did benchmarking back when Poi was on v7, and it was more than 6x worse than UTS2. I’m hoping to see UTS3 around when we jump to Unity 2022.

a) the “lock in” is mostly unnecessary; it just reduces how many shader variants are stored in the package, and local keywords should accomplish the same thing performance-wise.
b) that implies you aren’t using a lot of the systems
c) systems like sparkles in Poi use Voronoi noise (expensive)
d) the base lighting is complex, based on research into industry practice and the methods of other VRChat shader creators who have gone down this path; it’s much more complex than it needs to be for a social platform and for the types of models that use it. Most games don’t use shaders nearly as complex, especially if they aren’t made for photorealism; and photorealistic rendering uses a great many shortcuts built into game engines, which atypical or stylized shading does not benefit from.

It’s been improved a lot over time, but if you ran the shader by industry devs, you’d quickly find that it is very overdesigned and relies on a lot of tricks to compensate, like the UV tile discard (which doesn’t actually discard anything; it just offsets polygons to oblivion). It helps, but it shouldn’t be necessary.

ps: this is not an argument that they are subjectively bad; it’s just a standard of shading that is much heavier than is ideal for a context in which people are likely to need several materials per skinned mesh, with many skinned meshes likely to be present (skinned mesh = RIP batching and GPU instancing).

pps: shaders like Poi, and pretty much all the ones people use, are pixel-lit. That means they take realtime lights at the pixel level instead of the vertex level, and they do so in additive shader passes, which means the shader has to load into VRAM and render an entire extra pass for every realtime light (the cost is less noticeable with lightweight shaders, however). This is not necessary; I built my own shader to be single-pass and vertex-lit to avoid it.

Everything you listed is much newer and faster than what I’m running.

There are advantages to discarding at the geometry stage via degenerate triangles vs. the pixel stage. The tile discard you described sounds like the sort of hack you would expect to see in any game. That’s how games get developed.

If your draw calls are exploding, you probably need to rein in your submeshes. The worst case for a well-optimised avatar with under four submeshes and four pixel lights (the max that VRChat uses) would be 4 * 5 = 20 draw calls, which is nothing.


Nah, games program a lot of these things in code. You could just write systems to handle stuff like that, or better yet, make it not necessary. I was trying to explain that Poi’s poly discard isn’t discarding anything; it yeets the polygons beyond the frustum and far plane so that the later stages aren’t running on them (I dunno if this actually works how they hope; I’d like to see actual tests of it rather than just claims). You’d just write a system that does that manually and directly on the geometry instead.
It’s a clever idea though.

About Lights

Every single light adds another draw call for every material’s additive pass, and if shadows are on, there’s another call for the shadow. I can get away with more draw calls because my shader is cheaper (plus I can get away with more effects in the shader), which makes developing vastly less painful. VRChat most certainly does not max out at 4 pixel lights. Unity maxes out at 4 vertex lights, and it populates a table sorted by attenuation, intensity, and colour, which is always calculated, so it’s free. Most people do not and will not effectively merge materials down below about 20 even on a relatively simple avatar, unless they’re specifically trying to meet the demands of a club event. Turn on 3 lights in a world or on avatars, and those 20 materials that already had probably 3 passes each at least are now 3x20x3=180. Also, calls aren’t a flat cost; it depends what’s in them. 1000 SetPasses of a material that returns rgb(1,1,1) for every pixel and nothing else will be way less costly overall than a material draw that has to calculate the equivalent of a retro game for every pixel on the screen.
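That multiplication can be written out as a toy cost model. This follows the post’s own materials x passes x lights arithmetic; in Unity’s built-in forward path only the additive pass actually repeats per light, so treat this as the pessimistic upper bound the post is describing:

```python
def draw_calls(materials, passes_per_material, realtime_lights):
    """Toy model from the post: every pass re-runs once per realtime
    light, e.g. 20 materials * 3 passes * 3 lights = 180 calls."""
    return materials * passes_per_material * max(realtime_lights, 1)

print(draw_calls(20, 3, 0))  # 60 calls with no realtime lights
print(draw_calls(20, 3, 3))  # 180 calls with three realtime lights
```

The point of the model is the scaling, not the exact numbers: cost grows multiplicatively with material count and light count, which is why unmerged materials plus a few flashlights blow up so quickly.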

I’m talking about if you are in a world that has a lot of realtime point lights/spotlights. Or if a bunch of people turn on flashlights in the world or on avatars.



Telling everyone “just optimize your avatars bro, just combine all your meshes, just atlas your materials in blender” … reality: ain’t nobody got time fo dat.
The more bulk systems are optimized, the less these things drain on everyone’s performance.

That responsibility falls upon those who make the things that the vast majority of people use; i.e. Booth assets, common shaders like Poiyomi and lilToon, the VRC SDK, and the client itself. That is why I keep trying to push for bulk system streamlining and the implementation of features that encourage people to bloat their content less.

Udon, animators, draw calls, UI, IK, and a zillion other client features are all bombarding the CPU. The GPU just has to render geometry, materials, shadows, rendertextures (including cameras), and post-processing (and video, if you aren’t using the software decode launch option). All of the GPU work has to wait for the CPU to hand it the calls in order to render the materials and, eventually, the final render; and if Udon, animators, decompressing/decoding assets to be shipped to VRAM, and everything else are clogging the only threads Unity uses, then there’s no room left for draw calls, they take longer, and hence the framerate is lower. If the other CPU work is cheaper, then draw calls don’t need to be strangled as much. This is also why polycount is not that big of a deal: more polygons don’t have to wait any extra for the CPU; they’re in the same call (well, except back in old Unity, where >65535 vertices got diced into additional calls).