Developer Update - 4 May 2023

Is it possible to provide a texture streaming function now that VRChat is on Unity 2021? I think it is very important, especially for the majority of graphics cards with only 8GB, 10GB, or 12GB of VRAM.

In addition, could VRChat provide a profiling interface, or expose profiling data some other way? With EAC in place, it is impossible to capture an accurate profile without effectively trying to crack it, which makes it very hard to find the cause of performance problems.


VRAM usage can be roughly broken down into framebuffer + shader buffers + texture overhead.

At present, large worlds are almost unavoidable, and they can take up 2-3GB of VRAM on their own. With only 6-8GB of VRAM, that makes them almost unplayable.

At per-eye resolutions above 2K, 4x MSAA can consume around 1GB, and 8x anti-aliasing can require 2GB.
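To put rough numbers on those anti-aliasing figures, here is a back-of-the-envelope sketch. The buffer formats are assumptions (an HDR RGBA16F color target and a 4-byte depth target); a real engine allocates additional render targets on top of this, so treat it as a lower bound:

```python
def msaa_framebuffer_bytes(width, height, eyes=2, samples=4,
                           color_bytes=8, depth_bytes=4):
    """Estimate the size of an MSAA color + depth target pair.

    color_bytes=8 assumes an HDR RGBA16F color target; depth_bytes=4
    assumes a D24S8/D32 depth target. Post-processing and resolve
    targets are not counted here.
    """
    pixels = width * height * eyes
    return pixels * samples * (color_bytes + depth_bytes)

# ~2.5K per eye with 4x MSAA:
gib = msaa_framebuffer_bytes(2560, 2560, samples=4) / 2**30
print(f"{gib:.2f} GiB")  # about 0.59 GiB; 8x MSAA doubles it
```

With extra render targets for post-processing, mirrors, and cameras, the total easily approaches the 1-2GB figures quoted above.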

Large numbers of meshes in a world can take up hundreds of MB, even GB, of buffer memory (mostly because of the materials and the many independent meshes, not the mesh data itself).


Allow me to set meshes aside: although meshes can also take up hundreds of MB, Unity currently has no easy way to stream them.

Currently, more than 90% of asset VRAM comes from textures, and the remaining 5-10% comes from meshes.

These ratios do not include the framebuffer, its ancillary buffers, or the buffers used by shaders.


Some avatars consume quite a lot of VRAM: 200-300MB is common, and some reach 1.5GB or even 2.5GB.

However, very few textures really need to be fully loaded into VRAM. For a texture as large as 4096×4096, Unity's mipmap streaming usually only needs the 64×64 to 512×512 mips resident, or 2048×2048 at most, for what is actually sampled and displayed.

Depending on the skinned meshes and their active material slots in the field of view, only an additional 4-5MB, or at most 15-20MB, is needed.

Why do we have to pay a huge RAM and VRAM overhead for so many barely used textures?
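The potential savings are easy to estimate. This sketch assumes uncompressed RGBA32 (4 bytes per texel); block-compressed formats such as BC7 are 4-8x smaller but scale the same way, so the ratio between full residency and streamed residency holds either way:

```python
def mip_chain_bytes(size, bytes_per_texel=4, top_mip=None):
    """Total bytes for a square texture's mip chain.

    bytes_per_texel=4 assumes uncompressed RGBA32; compressed
    formats shrink every mip by the same factor. top_mip limits
    the largest resident mip, as mipmap streaming would.
    """
    total = 0
    while size >= 1:
        if top_mip is None or size <= top_mip:
            total += size * size * bytes_per_texel
        size //= 2
    return total

full = mip_chain_bytes(4096)                   # every mip resident
streamed = mip_chain_bytes(4096, top_mip=512)  # top mips streamed out
print(full / 2**20, streamed / 2**20)  # ~85.3 MiB vs ~1.3 MiB
```

Keeping only the 512×512-and-below mips resident drops a 4096×4096 texture from ~85 MiB to ~1.3 MiB, which is where the "barely used textures" overhead above comes from.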

I would like to see an event listing in the UI. I think this would really help make VRChat much more useful for people like me who run ongoing events during the week. Nobody can find us, because they don’t know about the online event listings. Also, why on Earth isn’t there a clock in the UI? That’s a basic, no-brainer thing every platform has. Sure, I can put an Udon clock in my world, but what about worlds that have no clock? The biggest thing I want to see, though, is an event system built into the UI.

I don’t understand why flight is such a big deal. Altspace had flight with no issues. Is it an Udon scripting conflict that keeps it from being implemented, or a security risk? I’m not a programmer, so I don’t know why flight is such a hard thing to implement. I have a rudimentary understanding of coding, but it seems like it would be easy to do. Maybe having to do it for both Quest and PCVR is an issue? I personally would like to see the Quest 2 die. I hate developing worlds that are dumbed down and look flat. No depth on water assets, and the limitations on what it can render drive me bats$#t. Most Quest 2 users have no idea what the worlds really look like, unless they come in on PC in 2D. I keep hoping for a console equivalent of PCVR in mobile. The technology to make a headset like this exists, but it isn’t happening, and I don’t get why.

Different games and programs take different approaches to certain things. Flight is already possible: worlds that want it can add a component for it. I’ve seen the flight simulator world just use the controllers. Other worlds have flying vehicles to travel around; sit in one, point in the direction you want to go, and whee.

PC avatars can kind of fly around already; it’s not perfect.

For Quest and PC, I personally like comparing the differences. I went to an island in the middle of an ocean, and on PC there is a tugboat parked very sloppily. No tugboat on Quest.

Or for some of the Ratchet & Clank levels: to fit within the Quest limits, the author deleted objects in the level, so on PC it looks normal but on Quest there is a lot of emptiness.

Personally I find the clock thing hilarious. I have an Oculus Rift, and probably the last software change it got was a “lab” feature that adds a watch to the wrist. So I have a watch in VR.

I did find out that OSC-controlled analog watches can be made to show up on Quest. It’s just tricky to set up the animation, since the watch needs to be attached to the wrist instead of constrained to it.
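For anyone curious how the OSC side of such a watch works, here is a minimal sketch using only the standard library. It encodes the current time as float avatar parameters and sends them to VRChat's default OSC port (9000). The parameter names `WatchHour` and `WatchMinute` are made up for the example; they have to match whatever the avatar's animator actually expects:

```python
import socket
import struct
import time

def osc_message(address, value):
    """Encode a single-float OSC message: padded address, ",f" tag, big-endian float."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary.
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)

def send_watch_time(sock, host="127.0.0.1", port=9000):
    """Send normalized hour/minute hand rotations as avatar parameters.

    The parameter names are hypothetical; match them to your animator.
    """
    t = time.localtime()
    hour = (t.tm_hour % 12 + t.tm_min / 60) / 12  # 0..1 hand rotation
    minute = t.tm_min / 60
    sock.sendto(osc_message("/avatar/parameters/WatchHour", hour), (host, port))
    sock.sendto(osc_message("/avatar/parameters/WatchMinute", minute), (host, port))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_watch_time(sock)
```

Run that on a timer (say once a minute) and the animator can drive the watch hands from the two float parameters.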

I guess back to hating on Quest, there are a lot of people happily ignoring Quest. Don’t hate the player, hate the game? I dunno.

Because the cost would be very high, and it is not realistic.

You must understand one thing: the space in a mobile platform is limited, and so is its power budget.

Semiconductor chips cannot keep improving their energy efficiency indefinitely just by lowering frequency and voltage.

Power consumption, volume, and heat dissipation are all unsolvable problems.

The largest gap between PC and mobile platforms usually comes from the GPU.

The hardest problem for a GPU is that, in order to process huge amounts of data, it needs a powerful memory and cache system feeding it, which complicates the architecture and leads to a huge die area.

At present, the technology is already bottlenecked; it is impossible to scale performance up effectively under such a limited power budget to get anywhere close to PC-level results.

Even baked lighting, which works well within a fixed area, still requires a lot of computation and bandwidth, although much less than real-time lighting.

If you add more properties to the materials on a mesh, you consume more fill rate and bandwidth, and there is no way around that.


Here let me add some caveats about things like FSR or DLSS.

First of all, the game must properly support TAA, so that it can later be modified to output a motion vector buffer; the related shaders need to be modified for this.

Secondly, these techniques work at the resolution and pixel level. The extra polygon overhead of having a large number of avatars cannot be effectively solved by them, and they can even have performance side effects.

And if you want to apply at least DLSS 3.0 or FSR 3.0, which are not widely available yet, there will be more problems, such as added frame input latency.
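To illustrate the second point with rough numbers (a sketch; real upscaler presets and costs vary): rendering at a reduced internal resolution cuts the pixel-shading work, but the per-vertex and skinning work for every avatar in view is unchanged.

```python
def shaded_pixels(width, height, render_scale):
    """Pixels actually shaded before upscaling back to native resolution."""
    return int(width * render_scale) * int(height * render_scale)

native = shaded_pixels(2560, 2560, 1.0)
quality = shaded_pixels(2560, 2560, 0.67)  # a typical "quality" preset scale
print(quality / native)  # ~0.45: pixel-shading cost roughly halves...
# ...but vertex/skinning cost per avatar is untouched, so a crowded
# instance stays geometry/CPU bound no matter how hard you upscale.
```

That is why an upscaler helps a fill-rate-bound world but does little for an instance packed with heavy avatars.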

Is there any potential future for a props system? I recognize it’d be… a lot of work. And that a lot of pre-requisite systems that you’ve already mentioned wanting would need to be built out before something like props could exist, but I’m just interested in whether it falls into the “that sounds neat and is something we’d like to have some day” bucket.

Additionally, Hai’s canny post about the IK and the accompanying world showing off what could be possible seem really neat, any hopes that we could see this become a native function in the future?

avatar scaling when

It’s probably not terribly complicated to implement. But there are a virtually unlimited number of features people want to see, both simple and complex. There’s only so many they can do at once.

You would have to justify why flight should have priority over the other features that other people want.

The trick is to work backwards—design for Quest. There are plenty of great-looking games on the Quest that demonstrate it’s powerful enough to make some great-looking things.

I swear that the PC has spoiled people by letting them spew polygons and textures with zero regard for efficiency :laughing:.

The Quest was successful based on its low entry price and practicality. People could afford it, and it didn’t crush their necks, nor did it require a plethora of cables and a convoluted setup like other systems. Nobody wants to pay hundreds of dollars more for a Super Quest with a 180-watt GPU that fries their face and has a five-minute battery life.

Just because it could be done doesn’t mean it’s a good idea.

They are perfectly capable of implementing flight. But there are more things to consider than just being able to do it. You have to think about how it impacts current worlds and creators.

Right now, if a world creator wants flight, there’s a simple prefab. And if they don’t, there’s no native flight.

Why spend the development resources to have functionally the same thing?

I read this in a commercial voice, I don’t know why.

Thanks for the reply. I finally put a clock in my worlds that need one. I need it for events that run an hour, and having to take off my headset to look at my phone is a pain. It just seemed a no-brainer to have a UI clock, but I get why people have different opinions on this. Flight is fun, but very clunky when you have to grab a tool to fly. I just thought it would be an easy thing, but with Udon, it’s already there. I get this. Thanks.

Thank you everyone for enlightening me on these issues I brought up. I really appreciate you taking the time to answer them.


I still really hope they don’t move to Steam Audio. The current audio is pretty solid, and every implementation I’ve seen of Steam Audio has been broken or doesn’t work like it really should.

On another note, I just really hope that granular safety settings are a thing we can have. Getting real tired of having to block Very Poor avatars that aren’t actually that bad.

I know it’s a bit late in the dev update cycle to comment, but I just now found https://notes.sleightly.dev/benchmarks/ (a remarkably thorough guide to VRC performance) and noticed that “Constraints” has a jump around 680, as well as a smaller jump at half that and a larger one at twice that. I’m wondering if this may be due to an internal data structure representation that doubles in size at specific points (such as a variable-size list). Just tossing this out there in case it helps the dev team figure out what’s going on.
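The doubling pattern in that benchmark is at least consistent with a container that grows geometrically. As a purely illustrative sketch (not VRChat's actual internals), a doubling-growth array produces cost spikes at predictable element counts; a base capacity other than a power of two would shift the spikes to a spacing like 340/680/1360 while keeping the same doubling shape:

```python
def realloc_points(max_n, initial=4):
    """Element counts at which a doubling-growth array must reallocate."""
    points, cap = [], initial
    while cap < max_n:
        points.append(cap + 1)  # the append that overflows the old capacity
        cap *= 2
    return points

# Each reallocation copies the whole array, so per-append cost spikes here:
print(realloc_points(2000))  # [5, 9, 17, 33, 65, 129, 257, 513, 1025]
```

If the benchmark's jumps line up with such points, the capacity growth (and the copy or re-iteration it triggers) would explain the steps.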

Similar tests have been done for a long time, but personally I don’t think a real cause was ever found.

The VRChat team is also well aware of constraints and their associated performance overhead.

Just have multiple controllers, not a single controller with so many layers.

The current mechanism is able to use more than enough threads to cover the increase in latency.

This topic was automatically closed after 14 days. New replies are no longer allowed.