Considering impostors were mentioned, I don't think it will die. People will eventually be able to generate a Quest-compatible version at the click of a button.
With the whole avatar-ripping situation going on, what's your opinion on anti-rip tools like Avatar Protection System for VRChat being on avatars? Are they secretly massive performance hogs?
Just in case you have missed the toggle, the enter button can be automatically skipped.
And while VRChat could just fade to black, loading an already-preloaded world can still take a while sometimes, especially noticeable in certain worlds. I wouldn't want to stand in complete darkness for up to 30 seconds without knowing whether anything is happening.
It is very easy to hit Poor. You don’t need 30 outfits on a single avatar, a million particle systems, or jewelry made of 200k polys.
At the very least, you can make a “base” version of the model that shows up on Android.
Could you view the instance from an invite if the world was invite only?
20k is absolutely NOT an easy target. That's hours if not days of retopology work, and likely having to completely start over from scratch, or abandon an avatar if its look is too complex.
Impostors are a good point, actually, and might be an acceptable stopgap. Non-PC content still needs some changes, though; I think 20k is a bit hard to hit for just Poor.
And if we want people to not just rely on impostors, I think we need to finally talk about shaders, because VRChat's built-in shaders just aren't cutting it anymore compared to what Poiyomi and the like can do.
I know and understand we can't use custom shaders on Quest for performance and platform-compatibility reasons, but extending the feature set of the built-in shaders to cover some of the aesthetic possibilities custom shaders provide on PC would be killer. At the very least, make the toon shader more closely match how Poiyomi and the like look on PC, and maybe add some sort of metallic and transparent-texture support.
I don’t think you’ll see transparency support any time soon; it’s very difficult on Quest.
That said, I would like some kind of control of unlit/min brightness, as well as toon outlines (on both the Toon and Matcap shaders… matcaps being a way to approximate shadow ramps for an MMD-esque look).
Oh so it’s even more restricted than Quest then. (Or at least last I checked.) Interesting.
I suppose it makes sense during the beta: ensure the software is stable first (issues people perceive as the app's fault could just be the avatars; very poor avatars could increase strain on already-starved resources).
Hopefully showing individual people's avatars makes its way to mobile closer to release. A lot of tricks can't be done under the current ranking system, and VRChat has yet to let creators mark elements of an avatar as "optional" (which would go a long way for avatars with complex toggles and features that aren't critical to the base presentation).
I think 70k would be easy enough to support if VRChat supported tessellation or even mesh shader features (let's leave aside geometry shaders, which are inefficient, though maybe some low-SIMD-width GPUs handle them reasonably well?).
Or maybe it would be possible to write C# that uses compute shaders, but that's too much of a security risk for VRChat's technology stack. Don't worry, we'll get to that below.
Because in practice, raw polygon throughput is a very inefficient thing on any GPU, especially through vertex shaders, which are subject to a lot of GPU front-end limitations and inefficiencies.
It may be hard to imagine, but a GPU has multiple front ends, and a GPU running at 1-2 GHz should theoretically be able to push 6 to 12 billion polygons per second, or even more.
In practice, meshes don't pack into waves/warps efficiently; there's huge latency and low parallelism, plus the overhead of index handling (and indexing of indices).
Realistically rendering tens of billions of polygons per second? Even with all the GPU culling and optimization, and a final output of 6-7 million polygons per frame, a 3060 barely manages 200fps on the vertex-shader side, and only 100fps once you add the equivalent pixel overhead.
I’m sure a lot of people will choose the 12GB 3060 for VR mode.
For the 3060: theoretically up to 6 billion polygons per second, but in practice only about 1.2 billion, which can double or triple with mesh shaders, or with tessellation put to good use instead of wasted.
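As a back-of-the-envelope check on those figures (the GPC count and clock speed here are assumed round numbers, not official specs):

```python
# Back-of-the-envelope triangle throughput for an RTX 3060.
# Assumed round figures: 3 GPCs (raster engines), ~1.9 GHz clock,
# 1 triangle per raster engine per clock.
gpcs = 3
clock_hz = 1.9e9
theoretical_tris = gpcs * clock_hz      # ~5.7e9/s, i.e. "about 6 billion"

practical_tris = 1.2e9                  # the practical figure quoted above
utilization = practical_tris / theoretical_tris
print(f"{theoretical_tris / 1e9:.1f} Gtris/s theoretical, "
      f"{utilization:.0%} achieved in practice")
```

So the practical number works out to roughly a fifth of the theoretical peak, which is where the front-end inefficiency argument comes from.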
You may say that 6-7 million polygons per frame is a lot, but in reality it isn't much once lighting and shadows get involved.
First of all, the avatar stats panel reports 200,000 polygons; when it actually runs in VRChat that becomes about 400-500,000, and after GPU culling it's back down to roughly 200,000 or so on screen.
Turning on hard shadows pushes that straight to 400k polygons; soft shadows are less efficient, roughly equivalent to a 500k-polygon cost (they only draw ~400k, but with less parallelism).
With many overlapping real-time lights and shadows, the per-avatar polygon overhead exceeds a million, so you can only support five or six such avatars.
Open a mirror and the overhead doubles again, from 100fps down to about 50fps (usually the mirror view isn't fully lit, so more like 60-70fps).
If your resolution is above 1440p the pixel overhead grows, and it's generally much higher in VR mode. On top of that, dense meshes often have overlapping surfaces that can't be culled well, which causes overdraw; in VR (at binocular Quest resolution) you might get only 30fps, and overlapping surfaces can drag that down to 23-25fps.
You might say this is just the 3060 plus poorly optimized worlds and avatars, but Android phones actually don't lose much in pixel fill rate or architecture; scaling the numbers shows that the bottleneck from internal GPU bandwidth and the front end is the more serious one, and that's generally caused by high polygon overhead.
With the 8gen2, current Qualcomm SoCs are comparable to NVIDIA's GTX 1050 in GPU terms (at least the benchmark scores are close).
Putting aside other possible bottlenecks: against the 1050, the 3060 is 3.75x its performance, and from benchmark scores we can round that to roughly 4x.
And the 8gen2 is also the basis of the XR2 Gen 2, the chip in the next-generation Quest, aka Quest 3.
The GPUs in the latest low-to-mid-range phones are roughly one-third to one-half of that SoC, so we can estimate the 3060 is eight to twelve times as powerful as them.
A 200,000-polygon avatar costs roughly ten times a 20,000-polygon one in raw triangle count, but dense polygons draw more efficiently thanks to increased GPC parallelism; accounting for some details, the drawing efficiency is roughly quadrupled, so in the end it's only about 2.4x slower overall.
(Re-examining: the efficiency gain is about three times rather than four.)
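As a sketch of that estimate (the efficiency factors are rough; the 3.2 is back-derived so the net figure lands near the ~3.1x mentioned later):

```python
# Cutting an avatar from 200k to 20k triangles reduces the count 10x,
# but dense meshes rasterize more efficiently per triangle, so the
# real-world speedup is smaller. Efficiency factors are rough estimates.
count_ratio = 200_000 / 20_000          # 10x fewer triangles

net_initial = count_ratio / 4.0         # first estimate: ~2.5x (the "2.4x")
net_revised = count_ratio / 3.2         # revised: ~3.1x net speedup

print(f"net speedup: {net_initial:.1f}x initially, {net_revised:.1f}x revised")
```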
With newer Unity (2020), since skinning moved from vertex shaders to compute shaders, merging meshes raised overall efficiency by more than 40%, and accounting for some details the merge gain is about 55% on low-polygon meshes. But we're still talking about Unity 2019 here; either way, too much skinning in a single scene still drags total efficiency down.
Incidentally, Unity doesn't seem to handle thread grouping for compute shaders very well: too many skinned meshes makes GPU efficiency drop dramatically.
First of all, phone GPUs are also relatively wide, with multiple front ends, roughly equivalent to the 3 GPCs of a 3060, so we can assume efficiency falls off in a similar way there; otherwise this comparison can't proceed.
First, the limit drops from 200k to 20k, which only translates to about a 2.4x speedup. (6-7M -> 600-700k on screen, but only 2.4x.)
(Correction: that figure included pixel cost; polygon-only it should be about 3.1x.)
Then cut most real-time lighting and shadows: allow only one real-time light with shadows, or one real-time light plus three to five baked light/shadow sources of equivalent cost.
That way a 20k-polygon avatar only doubles to an effective 40k, or doesn't increase at all. (4x -> 2x.)
Mirrors are still needed, so no change there. On the pixel side, going 1440p -> 1080p, let's assume for the moment that lowering resolution doesn't hurt utilization (it usually does, but the smaller GPU offsets it, so assume little attenuation).
Pixel-wise that makes us 1.78x faster, ignoring that large numbers of small polygons may over-rasterize (estimated to affect 10-20% of pixels).
On Quest/Android the polygon side then runs 4.8x faster (6.2x with the updated 3.1x base figure) and the pixel side only 1.78x faster. Polygon and pixel load are assumed equal to start with, so the estimated load is 50%/4.8 + 50%/1.78 = 38.5%.
(Updated to 36.1%: 50/6.2 + 50/1.78.)
Updated figures: 100fps / 0.361 = 277fps, which on an 8gen2 at a quarter of the 3060's performance comes out to about 69fps.
Newer low-to-mid-range phones would run about 23-46fps. (The Quest 2 can run about 30fps.)
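Putting the whole estimate chain above in one place (every number here is a ballpark estimate, not a measurement):

```python
# Polygon side: 200k->20k nets ~3.1x, doubled again by cutting
# real-time lights/shadows down to one -> 6.2x total.
poly_speedup = 3.1 * 2

# Pixel side: 1440p -> 1080p.
pixel_speedup = (2560 * 1440) / (1920 * 1080)    # ~1.78x

# Polygon and pixel work are each assumed to start at 50% of the frame budget.
load = 0.5 / poly_speedup + 0.5 / pixel_speedup  # ~0.362 here; 0.361 with the rounded 1.78

fps_3060 = 100 / load          # ~277 fps from the 100 fps baseline
fps_8gen2 = fps_3060 / 4       # 8gen2 at ~1/4 of a 3060 -> ~69 fps
print(f"load {load:.1%}, 3060 {fps_3060:.0f} fps, 8gen2 {fps_8gen2:.0f} fps")
```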
From this mix of experience and estimation, the conclusion: a well-made world can support at least six people in front of a mirror, if everyone uses a 20k-polygon model.
From my own runs on Quest: a ~17k model using an anime-style shader, in front of a mirror, got about 40-45fps on Quest v55.
A ~70k-polygon model is roughly 40% slower (+40% frame time), but that can be compensated with compute-shader mask optimizations, which is partly why I mentioned them earlier; smaller GPUs may lose a larger percentage, though, because their baseline utilization is already high.
(The polygons take less than 100% extra time; averaged with the pixel cost, it's only ~40% slower.)
Please note that the numbers above assume an anime-style shader at 1080p; change the conditions and you'll see performance differences of ten percent or more.
In addition, a mobile SoC can't run at full load for long because of power constraints. Although the 8gen2 is 2.5x the S865 in theory and in benchmarks, at similar power draw it's only about 2.1x.
Semiconductor process progress isn't fast; the gains come from scale, and chips have to clock down for energy efficiency, so it's hard to sustain peak.
So in practice, the fps you actually get will often be lower.
__
Due to pixel overhead, using lilToon and Unity-chan costs more than the Quest-version shader, so there is some distortion in the data.
…An embarrassing fact came out: the Quest shader only gained about 5% frame rate going from Unity 2019 to 2021.
Although this was tested on the PC side, the 2021 pipeline analysis is relatively fragmented, so I don't know how efficient it actually is on Quest… it needs more testing.
I don't know what's going on with compute shaders in Unity 2021 anymore; when I try to use more lights it's even slower than in 2019…
Personally, I don't like them, because they turn avatars into monstrosities if your safety settings hide shaders.
haha funny decimate button.
not the best solution, but if you can’t be bothered doing it properly, it’s passable.
I’ve used the decimate modifier for my Quest versions; just keep the hands a bit less decimated, for obvious reasons.
UVs won’t be perfect and whatnot, but you can’t expect 1:1 with minimal effort.
@tupper Do you happen to know if any changes were made to the StringLoading/ImageLoading clients in this update that weren’t included in the changelog? Starting a few days ago people were reporting an image failing to load in my world, and I noticed I was getting an error on one of the image requests in VRChat itself, yet it continued to work fine in the Unity ClientSim. After some testing, it appears to be caused by the lack of a cookie being set that was used to differentiate initial/subsequent requests in that session. If it’s intentional that’s fine (if unfortunate), I’m just curious to know if it was an intended change or possibly an unintended side effect.
Yeah, it’s a hard target to hit. Developing quality 3d art for a VR device with an obsolete mobile chip is inherently hard. Spending hours or days on a retopo seems pretty normal for quality mobile 3d art development, at least if you’re working with a sculpt or an already subdivided mesh.
The Quest already struggles, with the Canny full of people complaining about performance and OOM problems. It’s very unlikely VRChat will increase the thresholds—they are arguably already too high.
I’m surprised cookies were being accepted.
It’s funny to me that no one usually mentions the un-subdivide option on the decimate modifier. If the mesh's topology is already good and has proper quads (or can at least be converted to quads), un-subdivide often retains most of the shape while noticeably reducing the poly count. It may not be as effective as the decimate modifier's normal options, but it can be a less destructive one.
They were reasonably limited: isolated to the VRChat client and cleared on every new launch of the game. That made them useful for things like having a single image URL in the world where the cookie increments on each visit and a different image is returned, so you could rotate through multiple banners without having to update the world each time to add new image links.
This change was intentional.
Due to the fact that many avatars are modifications of already-subdivided meshes, the unsubdivide modifier can often produce high-quality base meshes that deform and bake well.
Absolutely worth a shot.
but… you already KNEW that it was causing performance issues - that’s exactly why it was an opt-in separate build.
If your reason is only the following:
just say so. it’s a perfectly valid reason to drop it on its own.
now it just looks like you’re lying.
(unless somehow having this code/dependencies in a separate build caused performance issues in the main build, but you haven’t made that clear)