Should We Limit Mesh Memory Size?

Recently, there has been discussion regarding whether to limit polygon counts. However, we want to avoid overly strict constraints that might stifle creativity.

Given that triangles can be toggled on and off at runtime, and that mesh memory currently cannot be easily unloaded, it is more practical to set limits on mesh memory itself than on polygon counts.

This approach would help reduce triangle counts, discourage the abuse of blendshapes (shape keys), and encourage users to familiarize themselves with optimization tools.
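To see why mesh memory is the better metric, here is a rough back-of-the-envelope estimate. The byte sizes are illustrative assumptions (real engines vary), but the shape of the math holds: Unity-style blendshapes store per-vertex deltas for every shape key, so memory grows with vertices × blendshapes, which is exactly the "blendshape abuse" pattern.

```python
# Illustrative mesh-memory estimate. The per-vertex sizes below are
# assumptions for the sketch, not engine-confirmed values:
#   bytes_per_vertex ~ position/normal/tangent/UVs/bone weights
#   bytes_per_delta  ~ position + normal + tangent deltas (3 x float3 = 36 B)

def mesh_memory_bytes(vertices: int, blendshapes: int,
                      bytes_per_vertex: int = 64,
                      bytes_per_delta: int = 36) -> int:
    """Base vertex buffers plus one delta buffer per blendshape."""
    base = vertices * bytes_per_vertex
    shape_deltas = vertices * blendshapes * bytes_per_delta
    return base + shape_deltas

# A 70k-vertex avatar with 200 blendshapes:
mb = mesh_memory_bytes(70_000, 200) / 1024**2  # roughly 485 MB
```

Under these assumptions, the base geometry is only a few megabytes; nearly all of the ~485 MB comes from the blendshape deltas, which is why a mesh-memory cap discourages shape-key bloat far more directly than a triangle cap does.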

I propose a hard cap of 200MB for mesh memory, with controls similar to those for uncompressed file sizes, integrated into the standard performance rank system.

For a file with a 500MB uncompressed size, 200MB would be allocated to mesh memory, while the download size limit would also be set at 200MB.

The goal is to solve multiple issues simultaneously through a well-designed hard cap.

This would subtly push the market toward better practices, preventing cases where a single piece of clothing consumes tens of megabytes in mesh memory.

By establishing a substantial yet firm limit, creators will be incentivized to optimize their products for general market compatibility.


You could make a feature request on Canny to get the developers' attention: Feature Requests | VRChat

Similar topics have been proposed previously:

Please note that one of those accounts is mine (it is no longer in use).

The core objective is to get VRChat staff and the wider community to prioritize this issue again, and to achieve the maximum benefit at the minimum cost given the constraints and limitations of the existing system design.

I am raising this here again because a simple triangle count alone does not provide effective coverage.

Personally, I would vote for giving users more avatar culling options rather than putting hard limits on what you can upload. Options such as:

(These options could apply pre-download, or work like culling does currently, just expanded. My vote is for a pre-download check: if any of these limits are enabled and a remote user's avatar exceeds the local user's limit, the avatar is simply never downloaded for the local user.)

  • Limit based on download size
  • Limit based on Mesh/Texture memory size
  • Limit based on Light/Material/Particle count
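The pre-download check described above can be sketched as follows. All type and field names here are hypothetical; VRChat exposes no such API, and the avatar stats would have to be advertised by the server before download:

```python
# Sketch of a per-user pre-download limit check (hypothetical names).
from dataclasses import dataclass
from typing import Optional

@dataclass
class AvatarStats:              # stats advertised before download
    download_mb: float
    mesh_memory_mb: float
    texture_memory_mb: float
    material_count: int

@dataclass
class UserLimits:               # None means "limit disabled"
    max_download_mb: Optional[float] = None
    max_mesh_memory_mb: Optional[float] = None
    max_texture_memory_mb: Optional[float] = None
    max_materials: Optional[int] = None

def should_download(stats: AvatarStats, limits: UserLimits) -> bool:
    """True only if every enabled limit is satisfied."""
    checks = [
        (limits.max_download_mb, stats.download_mb),
        (limits.max_mesh_memory_mb, stats.mesh_memory_mb),
        (limits.max_texture_memory_mb, stats.texture_memory_mb),
        (limits.max_materials, stats.material_count),
    ]
    return all(cap is None or value <= cap for cap, value in checks)
```

With this shape, a default limit set for all users (raisable later, as suggested below) is just a pre-populated `UserLimits` instance, and disabling a limit is setting the field to `None`.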

Pretty much all avatar performance metrics could be limited by each individual user. Some users have space-age PCs that can render massive avatars packed with performance-costly features and want to, while others on lower-end PCs (or Quest) would want to limit the areas that are most costly for them.

I'm sure this could be done in an at least semi-seamless way; it would remove the need for any hard limits and give control to individual users. As a quick note, while this may be annoying to set up at first, a default limit could be applied to all users initially to prevent confusion, and users could raise it themselves.

I am more inclined toward providing hard limits and guiding metrics at the same time.

This would encourage people to think about improvement while preventing performance crashes caused by excessive resource consumption.

Since the mainstream still consists of graphics cards with around 8GB of VRAM, we should consider reducing the mesh memory limit to a maximum of 200MB.

Mesh memory can currently reach up to 500MB in theory, although in practice 250 to 300MB is already extremely high.

Mesh memory must reside permanently on the graphics card, making its impact even more significant than that of textures.
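Because mesh memory stays resident per visible avatar, per-avatar costs multiply by the number of users shown. A quick illustrative calculation (the instance size and average are assumptions for the sketch) shows why 8GB cards are the concern:

```python
# Back-of-the-envelope VRAM budget: resident mesh memory scales linearly
# with the number of avatars shown (illustrative numbers).

def instance_mesh_vram_mb(shown_avatars: int, avg_mesh_mb: float) -> float:
    """Total resident mesh memory for all shown avatars, in MB."""
    return shown_avatars * avg_mesh_mb

# 40 shown avatars averaging 200 MB of mesh memory each:
total = instance_mesh_vram_mb(40, 200)  # 8000 MB
```

That is the entire VRAM of a typical 8GB card consumed by avatar meshes alone, before textures, the world, or frame buffers, which is why even a 200MB per-avatar cap only works together with per-user show limits.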

Furthermore, this would encourage optimization to some extent and bring down bloated download sizes.

Avoiding the large resource spikes required to fully load and view an avatar would also prevent other unpleasant issues; appropriate constraints are better than none.

It would also lead people to focus on saving mesh memory, aiming for around 50 or 100MB, thereby improving the overall gaming environment.

Without any hard limits, and given how chaotic the community currently is, people simply will not put thought into optimization, leading to even more extreme consumption.

At the same time, this could curb certain types of malicious crashes, such as the combination of high polygon counts and tessellation shaders.

I strongly hope that the official staff will take notice, or that the community becomes aware of this and pushes for implementation.

The VRChat community is far too lax about this.