Is Quest avatar audio ever gonna be a thing?
Not any time soon: Developer Update - 11 April 2024 - #36 by JessicaOnMain
People who say that stuff have no idea how much of their info is stored everywhere…
Politely: you are lacking brain power if you think all that info isn’t accessible to people. I’ve had to show so many of you “adults” on VRChat that your info isn’t as private as you claim. I can always find people’s names, IDs, socials, etc. If you think that shit’s private, you are living in a river in Egypt. De-Nile.
I make shaders from scratch quite consistently (the ones on my GitHub need work; they’re demos). I deal with their overheads on a regular basis and have benchmarked plenty of shaders with the Unity Profiler (including my own, while isolating parts of the code). Lighting is one of the main heavy costs of shading: if an object is pixel-lit, every single realtime light stacks on a complete additional pass, which means a SetPass call (different from a draw call) and re-computation of the add pass’s fragments every frame. I don’t have screenshots and test-environment data to show you right now, because it’s mainly for my own development research.
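As a rough illustration of why each extra pixel light is expensive in BIRP, here is a minimal ShaderLab skeleton (structure only, pass bodies omitted; the shader name is made up): the ForwardAdd pass is run again for every additional per-pixel realtime light, each time as its own SetPass call.

```hlsl
// Minimal BIRP forward-lighting pass layout (sketch, not a working shader).
Shader "Examples/ForwardAddCost"
{
    SubShader
    {
        Pass
        {
            Tags { "LightMode" = "ForwardBase" } // main directional light + ambient
            // ... vertex/fragment program for base lighting ...
        }
        Pass
        {
            Tags { "LightMode" = "ForwardAdd" }  // executed once PER extra pixel light
            Blend One One                        // additively stacked onto the base pass
            // ... full re-rasterisation + fragment shading of the same mesh ...
        }
    }
}
```

So a mesh lit by one directional light plus three point lights renders four times with this layout, which is where the per-light SetPass and fragment cost comes from.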
We can’t use compute shaders in vrchat so that bit is a moot point.
Watch the YouTube video “Unity Blendshapes Stress Test” by ZnelArts for example (can’t link because the demo model is naked).
The only example I have of potential blendshape performance issues is an animator bug that was probably causing a loop of non-zero micro-decimal values on transitions or Any State. It was a weird bug an acquaintance and I discovered and attempted to investigate, which caused a gradual increase in frame time and decline in FPS, but it was largely elusive.
The shortcomings of BIRP forward lighting can only be solved by supporting deferred shading, but not only is shader support for it in BIRP problematic, VRChat itself has not considered allowing players to perform deferred shading.
The best way to avoid the heavy vertex-shader cost of repeated shading is to use URP, and if you want to solve the pixel-shader overhead of repeated shading then Forward+ is a better choice.
(Deferred shading is more efficient at reducing redundant computation, at the expense of VRAM and L2 bandwidth.)
Also, there are still some issues in the game that cause opaque objects to fail hardware culling (the mesh cost still counts either way, but the pixel-shader overhead should be culled by the depth test, and here it isn’t).
Incidentally, if you want to cull small particles based on their final rasterised screen-space area: if they are not semi-transparent, you can enable depth writing and depth culling. However, this runs into a series of problems with PROP, RAS, ROP writes to L2, and so on, which result in excessive pipeline latency. The bottleneck that profiling tools detect is then not that a particular unit is over-utilised, but that high unit latency leads to overall low utilisation (unrelated to registers, caching, etc.).
So it’s hard to optimise particle overhead through culling, even when the particles are opaque. By the way, turning on MSAA increases sample counts a lot, which increases latency and then decreases overall utilisation, and adds a moderate amount of extra threading (roughly 1–4× or more, scaled by resolution and the degree of overdraw); from this you can estimate that turning on MSAA 8x slows things down by about 10–30% in different scenarios.
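For reference, the opaque-particle depth-write setup discussed above comes down to a few ShaderLab render-state lines (a sketch of the render state only, not a full particle shader):

```hlsl
// Render state for an opaque particle shader that writes depth, so later
// overdraw can be rejected by early-Z -- the approach discussed above,
// with the latency caveats it carries.
Tags { "Queue" = "Geometry" "RenderType" = "Opaque" }
ZWrite On    // write depth so subsequent fragments behind it can be culled
ZTest LEqual // standard depth test
Blend Off    // opaque: no alpha blending, which would rule out ZWrite
```

Semi-transparent particles cannot use this, since alpha blending requires back-to-front drawing without depth writes.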
While it is important to think about how to reduce overdraw and unnecessary costs, most of the important parts sit in the core of the pipeline, and it is difficult to keep reducing costs without modifying it and adding more adjustable parts.
This is because your savings in computational cost may come at the expense of a drop in utilisation, ultimately lowering the frame rate rather than raising it.
For example, in URP you can save shadow cost by eliminating depth passes, and as long as shadows are not visible you can also avoid the cost of mirror rendering in some cases. (Not as good as the current VRChat implementation, but that implementation comes at a cost.)
Since BIRP’s current commercial shaders are generally mature, there is not much room for improvement once you consider VRChat’s inability to use compute shaders, its inability to implement more advanced culling, and its reliance on dynamically generated mipmaps for what it does do. (Commercial games cleverly use masking, downsampling, temporal tricks, and so on, integrated into the pipeline so they don’t have to iterate on every shader, ultimately reducing development cost and improving performance.)
Someone I know, kilerbomb, and I have both done deferred lighting in VRChat (I’m sure some others have done so quietly as well), so this information is incorrect. You cannot accomplish it on non-batched/instanced avatar objects independently beyond Unity’s four vertex lights, but anything you can batch or instance (worlds) can 100% be deferred.
About Achievable Deferred Lighting in VRC
This is achieved by having all your lights programmed into the shader as a fixed-size data table (which is the proper way to do optimised programming anyway), where each “light” is just a vector with a colour value as a property (you could put, say, 16 lights in your table if you don’t need more than that). These can be driven by scripts, making this a fully deferred method with single-pass lighting, because it bypasses the use of native Unity lights entirely. You can easily include additive passes to allow Unity lights to function on top of this, however. Once you bake/batch/instance all of your world assets, they can share the material for the lighting computation, and even use Poiyomi-style UV tiling with mega-atlases; though this would be best if you could use proper UDIMs and such.
Methodology like this, where you can also use arbitrarily shaped light sources, means you could potentially skip the need for Unity lightmaps entirely as well, provided your lighting need not be too complex with bounces; though technically you could program those into the shader too if you wanted. This isn’t as cheap as fully baked light, but it is entirely realtime-lit, meaning you can animate all of it if you want, or cut costs by fixing a lot of values, and it is significantly cheaper than standard forward lighting because it doesn’t require ForwardAdd passes.
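As a sketch of the light-table idea described above (all names, the table size, and the falloff model are illustrative assumptions, not taken from any particular shader), the core is just a fixed-size array of material properties evaluated in a single pass:

```hlsl
// Hypothetical single-pass light table. Udon scripts or animators can
// drive these material properties at runtime to animate the lights.
#define MAX_LIGHTS 16
float4 _LightPosRadius[MAX_LIGHTS]; // xyz = world position, w = radius
float4 _LightColor[MAX_LIGHTS];    // rgb = colour, a = intensity

float3 ShadeLightTable(float3 worldPos, float3 normal, float3 albedo)
{
    float3 result = 0;
    [unroll] // fixed-size table: the loop fully unrolls at compile time
    for (int i = 0; i < MAX_LIGHTS; i++)
    {
        float3 toLight = _LightPosRadius[i].xyz - worldPos;
        float  dist    = length(toLight);
        // simple smooth falloff, clamped to zero at the light's radius
        float  atten   = saturate(1.0 - dist / _LightPosRadius[i].w);
        float  ndotl   = saturate(dot(normal, toLight / max(dist, 1e-4)));
        result += albedo * _LightColor[i].rgb * _LightColor[i].a
                  * ndotl * atten * atten;
    }
    return result;
}
```

Every batched/instanced object sharing the material evaluates the same table, so all lighting happens in one pass regardless of light count, with no ForwardAdd re-rendering.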
You can cheat deferred lighting onto avatars as well if you are willing to sacrifice people’s normal maps, using a projector (AudioOrbs)… or by realtime-transferring the light data into spherical harmonics (SH9, which essentially every lit Unity shader uses these days), such as with Unity’s realtime GI (which is actually very performant, by the way, because it’s animated lightmaps and probes rather than heavy brute-force raytracing).
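For the SH9 route, the relevant hook in a BIRP shader is `ShadeSH9` from `UnityCG.cginc`: any lit shader that samples the SH coefficients automatically picks up whatever light has been injected into the probes (by realtime GI or otherwise), which is why this reaches avatars without touching their shaders.

```hlsl
#include "UnityCG.cginc"

// Probe/ambient light arrives as 9 spherical-harmonic coefficients
// (unity_SHAr..unity_SHC). ShadeSH9 evaluates that SH9 basis for a
// given world-space normal direction.
float3 AmbientFromProbes(float3 worldNormal)
{
    return ShadeSH9(half4(worldNormal, 1.0));
}
```

Since stock avatar shaders already make this call (directly or via Unity’s lighting macros), driving the probe data is enough; no cooperation from the avatar shader is needed.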
I have intended to make a fully deferred-dominant-lit VRChat world for quite some time (capable of both deferred and forward light simultaneously, for avatar light support), but I have been sitting somewhat impatiently waiting for Udon 2 to ship.
But this discovery of optimised hybrid deferred lighting inside the forward pipeline has made me rather strongly opposed to true deferred rendering, due to its lack of MSAA capability and unnecessary design traits that severely constrain it. Numerous game devs have started finding more creative ways to program lighting and shadowing, and have been moving back to forward pipelines recently for similar reasons.
As for BIRP’s shaders being “complete” or “mature”: this broadly misses the fact that HLSL is a full (I’m fairly sure Turing-complete) programming language, which means there is a potentially infinite scope of possibilities for what can be constructed with it. The limitations are in one’s resourcefulness and creativity, not in time or “maturity”; the latter merely means the standard fare has stagnated.
Or they could have an option to upload an optimised version for PC besides the impostors. I want to have the best-looking avatar for myself, but not lag every other PC player who can see my avatar.
What I mean is that VRChat can certainly exploit loopholes to use deferred shading.
However, it will actually cause many problems. If you try to use a shader such as lilToon, it will cause rendering exceptions.
Secondly, the “maturity” I referred to was always about efficiency, not functionality.
I think your test portfolio should cover more, because some cases are still unconsidered. As long as shaders misbehave in operation, the problem is difficult to solve and will lead to a worse experience for participants.
This problem is not as simple as RenderQueue.
Furthermore, the game currently lacks an easy-to-obtain, unified set of shaders with a consistent style covering all applications, which leads to flaws in artistic presentation. (This limits visual quality to some extent, so creators may prefer designs that reduce problems at the expense of detail.)
Personally, I think there is no need to spend too much time pursuing such efficiency. It is better to try to solve the problem that stylised shaders struggle to maintain a consistent look; the scene lacks a highly consistent and efficient design across its various effects.
It’s impossible to be more efficient than pipeline modification or faking it… (There are some improvements, such as merging passes and using variants that reduce the amount of computation, but they only apply to the scene itself, since those shaders are controllable; the setup is cumbersome and the shaders are complex and difficult to write.)
Of course, this is not to say that deferred shading is unimportant, but the cost is disproportionate to the benefits, given the problems that must be dealt with to keep it working properly.
This topic was automatically closed after 14 days. New replies are no longer allowed.