Everyone has different requirements.
This is especially true because topology and viewing distance cause large variations in how both the geometric structure and the visual value of an object are judged.
These factors make assessment or automated simplification extremely difficult.
It is similar to the belief that baking can replace all dynamic scenes or dynamic calculations.
However, dynamic scenes require additional vertex-data retrieval to calculate positions with greater precision.
They simply cannot iterate efficiently enough to produce reasonably accurate multi-bounce diffuse reflections, among a host of other effects.
Let me give a very common example. Deferred lighting can process lights in large batches (by itself, deferred lighting does not handle shadows or reflections), which greatly improves efficiency.
However, it cannot handle transparency; extensive use of transparent materials makes it inefficient instead.
Nevertheless, transparency can create a more detailed and realistic material feel, and it avoids spending excessive triangles on building certain structures.
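To illustrate why deferred lighting batches so well, and why transparency breaks it, here is a minimal cost model. All the numbers (overdraw factor, light count, resolution) are illustrative assumptions, not engine measurements: forward shading runs every light on every rasterized fragment, including overdraw, while deferred shading lights each screen pixel once per light from the G-buffer. Transparent surfaces cannot be stored in a single-layer G-buffer, so they must be shaded forward anyway.

```python
# Rough lighting-cost model (hypothetical counts, not engine data).

def forward_cost(fragments: int, lights: int) -> int:
    # Every rasterized fragment (including overdraw) evaluates every light.
    return fragments * lights

def deferred_cost(screen_pixels: int, lights: int,
                  transparent_fragments: int) -> int:
    # Opaque geometry: one G-buffer lighting evaluation per screen pixel
    # per light, independent of overdraw.
    # Transparent geometry: falls back to forward shading.
    return screen_pixels * lights + forward_cost(transparent_fragments, lights)

pixels = 1920 * 1080
overdraw = 4          # assumed average opaque overdraw
lights = 16

opaque_forward = forward_cost(pixels * overdraw, lights)
mostly_opaque = deferred_cost(pixels, lights, transparent_fragments=0)
heavy_transparency = deferred_cost(pixels, lights,
                                   transparent_fragments=pixels * 3)

# With no transparency, deferred is ~4x cheaper in this model;
# heavy transparency erodes the advantage entirely.
```

Under these assumptions, the mostly opaque deferred frame costs a quarter of the forward frame, while the transparency-heavy frame loses the entire advantage.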
I am merely using an analogy here.
Although many might not understand what I have described above, my point is that value and properties are very difficult to assess.
Otherwise, it would not be so impossible to reach a consensus.
The biggest problem lies in the failure to provide an effective static metric.
Dynamic variables change too much based on the scene, making it impossible for people to have a stable understanding.
This can even create more performance vulnerabilities and increase the burden on maintainers.
Decision-making for static values is much harder.
Setting them too high is unacceptable, and setting them too low is also problematic.
Everyone makes decisions based on their own desired standards, ignoring actual needs and instead emphasizing restrictions.
For example, if the goal were truly to maximize performance optimization, why not turn everyone into 2D paper cutouts? But would that be right?
Furthermore, the Unity editor contains too many components that cannot be detected during the upload process.
This is not to mention that real scenes might feature planar reflections on water combined with dynamic soft shadows, shaders paired with outlines, and translucent materials.
Even the most powerful rendering pipeline would struggle and require painful modifications without finding an effective solution, especially when supporting VR alongside complex issues like TAA and MSAA.
This makes technical improvements difficult.
To put it simply, if an avatar has 200K triangles, applying an outline material makes it 400K.
Subsequently, turning on dynamic lighting with soft shadows brings the load to roughly 1.6M rather than 1.2M.
Then, enabling mirror effects results in a load as high as 3.2M, even if one is not directly looking into the mirror.
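The arithmetic above can be sketched as a simple multiplier model. The multipliers are taken from the example itself and are illustrative assumptions, not engine-accurate constants: an outline pass duplicates the mesh, dynamic lighting with soft shadows roughly quadruples the submitted triangles (extra lighting and shadow passes re-render the geometry), and a mirror renders the scene a second time.

```python
# Back-of-envelope triangle-load model using the multipliers from the
# example above. Illustrative assumptions, not measured constants.

def submitted_triangles(base: int, *, outline: bool = False,
                        soft_shadows: bool = False,
                        mirror: bool = False) -> int:
    load = base
    if outline:
        load *= 2      # outline pass duplicates the mesh
    if soft_shadows:
        load *= 4      # extra lighting and shadow passes re-render geometry
    if mirror:
        load *= 2      # mirror renders the scene a second time
    return load

# The 200K-triangle avatar from the example:
assert submitted_triangles(200_000, outline=True) == 400_000
assert submitted_triangles(200_000, outline=True,
                           soft_shadows=True) == 1_600_000
assert submitted_triangles(200_000, outline=True, soft_shadows=True,
                           mirror=True) == 3_200_000
```

The point of the model is not the exact constants but how quickly independent doublings compound: three innocuous-looking toggles turn 200K triangles into 3.2M.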
While a few people might have GPUs powerful enough to maintain a high frame rate, once the number of users increases and various loads are added, very little performance remains.
Technically, there are too many trade-offs to consider, and these are difficult to quantify numerically.
When people see something that can be quantified numerically, they fall into an illusion: the illusion that the problem is simple.
But is it truly simple? The simplest-seeming things often have the greatest impact.
In fact, this is much harder than it appears. Therefore, it is difficult to satisfy everyone when establishing reasonable values.
___
Here are actual values, measured with a single dynamic light source (rather than multiple) and one dynamic soft shadow applied.
In the default configuration with outlines, mayo comes to 335K triangles, and enabling dynamic soft shadows brings it to 836K.
Of course, this value and the proportions will vary depending on the specific circumstances.
Mirror effects will then double this to 1.6M.
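These measurements can be sanity-checked with simple arithmetic, treating the mirror as a full re-render that doubles the load, as stated above. Note how the soft-shadow step is scene-dependent: here it is roughly a 2.5x increase, not a fixed multiplier.

```python
# Measured values quoted in the text (mayo, one dynamic light,
# one dynamic soft shadow): 335K with outlines, 836K with shadows on.
base_with_outline = 335_000
with_soft_shadows = 836_000

# The soft-shadow cost depends on the light setup: ~2.5x in this case.
shadow_multiplier = with_soft_shadows / base_with_outline
assert 2.4 < shadow_multiplier < 2.6

# Assuming the mirror simply re-renders the scene, it doubles the load.
with_mirror = with_soft_shadows * 2
assert with_mirror == 1_672_000   # reported in round numbers as 1.6M
```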
In fact, based on my recent usage and observations on Booth, avatars with 200K or even 250K to 300K triangles already exist.
Furthermore, their sales volume is massive.
Both these default triangle counts and the activity metrics are very high: the count of active triangles can reach 80% to 90% of the total.