Upcoming changes to Standard Lite (MonoSH, Specularity hacks, optimizations, oh my!)

Hey y’all!

As you may have seen in the latest Developer Update(s), we’re currently working on integrating some overhauls into Standard Lite you may be interested in. I’m reaching out here to get some opinions on these upcoming features, especially one in particular.

As you may be aware, Standard Lite currently disables specular highlights in 99% of cases - since it uses #pragma surface ... noforwardadd, it doesn’t do any real processing of specular highlights, except in the quite rare case that you have a single realtime directional light in your world on Mobile. This renders it pretty underpowered for most use cases; in fact, I did some quick digging in worlds and found that it was equivalent to Lightmapped in a huge number of cases. You do (now) get specular reflections if you have reflection probes, but that still leaves a pretty big gap in the reactiveness of the environment.

One thing we’re implementing to mitigate that is MonoSH, an extremely efficient way to pack a single direction and spread into what would ordinarily just be directional lightmaps, giving you a method to bake in full lightmap specularity (with some caveats).
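
For anyone unfamiliar with the encoding, here’s a rough numpy sketch of the general idea (the actual MonoSH layout, e.g. Bakery’s, differs in its details - the function and names here are made up for illustration): instead of baking three RGB L1 maps, you keep the usual RGB lightmap plus a single shared direction and spread, and reconstruct an L1-style response from that.

```python
import numpy as np

def decode_mono_sh(l0_rgb, direction, spread, normal):
    """L1-style diffuse response from a single direction + spread.

    spread in [0, 1]: 0 = fully ambient, 1 = fully directional.
    This is only the shape of the idea, not Bakery's exact math."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    # Ambient (L0) term modulated by how well the surface normal
    # aligns with the single stored dominant direction.
    return np.asarray(l0_rgb) * (1.0 + spread * np.dot(d, np.asarray(normal)))
```

With zero spread you get plain lightmapping back; with full spread and an aligned normal you get up to double the ambient value, which is the directional response the specular hack later leans on.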


I’ve spent the better part of the last two weeks crunching down the Standard Lite shader to allow this to be introduced without incurring too much performance debt and I’m happy to report the worst case is only about 9% worse: ~6600usec to ~7225usec! The average case is also quite a bit nicer, 20-30% faster.

The thing I stumbled upon while experimenting with this was essentially:

  • We can take encoded spherical harmonics from lightmaps and turn them into specular highlights, though not as detailed
  • Lightprobes are encoded spherical harmonics
  • what uh, what if I did both?


The result is a pretty effective emulation of specularity from realtime lights, although to my understanding Unity’s lighting model is supposed to treat lightprobes as strictly diffuse light (hence why I’m calling it a hack). The light looks fairly close to realtime specular highlights, though it only ever emulates one light source, so it has the same caveats as MonoSH and friends - you only get a max of one highlight.

The feedback I’m looking for from world creators here is, how intrusive would you expect this to be? The proposed rollout right now would have this replace the (effectively currently no-op) Specular Highlights flag in Standard Lite. The failure cases for it generally concern ultra-bright lightprobes (intensity of 5+ or something, blows out the lighting) and unbaked lighting, which I intend to resolve dynamically once I can identify some specific cases. I don’t want a million avatars to show up in your world suddenly brighter than expected, but I also don’t want users who uploaded avatars with the expectation of specular highlights to have to go back and re-upload them again.

Here’s my avatar (filamented standard, the above hacked in, on PC) in a sunbeam, showing the ‘too bright’ case:

Here’s some quick pictures of my avatar on Midnight Rooftop, using the above:


(tried to do a video but it appears our ask forums aren’t up to it :C)

Incidentally if anyone’s interested in the lightprobe hack code ahead of time, I can bundle it up here!

Can we have access to the shader with the proposed changes implemented? I’d like to be able to provide quantitative feedback rather than explaining how I feel about the vibe of the idea.

I do have to say, at first blush these changes seem really powerful for improving environmental lighting representation.

Very nice!

I think we arrived at a similar-ish hack for baked light probe specular — it’s definitely a subtle thing, but with decent light probes it really helps make dynamic objects integrate into environments and can improve perceptual fidelity as it makes lighting influence on normal detail more pronounced.

Here’s an animated visual of a sphere with just the specular contribution coming from light probes to help folks visualize what it’s adding to the sauce. Not sure if your implementation is identical to mine, but I imagine it’s similar!

Hyped to see this coming to VRC natively.

Here’s the relevant changes, more or less:

    #elif UNITY_SHOULD_SAMPLE_SH
        ... sample diffuse from SH ...

        #if !defined(_SPECULARHIGHLIGHTS_OFF)
            half3 L0 = half3(unity_SHAr.w, unity_SHAg.w, unity_SHAb.w);
            half3x3 L1 = half3x3(unity_SHAr.x, unity_SHAg.x, unity_SHAb.x,
                                 unity_SHAr.y, unity_SHAg.y, unity_SHAb.y,
                                 unity_SHAr.z, unity_SHAg.z, unity_SHAb.z) * 2;
            
            half3x3 nL1 = half3x3(L1[0] / L0,
                                  L1[1] / L0,
                                  L1[2] / L0);
            half3 dominantDir = mul(nL1, lumaConv); 
            half perceptualRoughness = SmoothnessToPerceptualRoughness (smoothness);
            half roughness = PerceptualRoughnessToRoughness(perceptualRoughness);

            half specularTerm = ComputeSpecularGGX(dominantDir, eyeVec, normalWorld, perceptualRoughness, roughness);
    
            half3 sh = L0 + mul(dominantDir, L1);
    
            o_gi.indirect.specular = max(specularTerm * sh, 0.0); 

In this we convert the SHA values from color-wise to xyz-component-wise, extract a dominant direction by dot producting it out to get an average direction (lumaConv is just .333 .333 .333), then pass that in to ComputeSpecularGGX, which is just Unity’s fast BRDF2 lighting model, to get specular. It effectively bubbles up the data to the end BRDF the same way we would for any individual light!
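
As a sanity check on that extraction, here’s a small numpy sketch with synthetic data (ignoring Unity’s exact SH normalisation constants): build the per-channel L1 vectors for a single light from a known direction, transpose from colour-wise to component-wise, and dot out the average direction with lumaConv.

```python
import numpy as np

# Synthetic per-channel SH for one light from a known direction: each
# colour channel's L1 vector is (intensity * direction), which is the
# shape of the data unity_SHAr/g/b.xyz carries, up to constants.
true_dir = np.array([0.3, 0.8, 0.52])
true_dir /= np.linalg.norm(true_dir)
color = np.array([1.0, 0.8, 0.6])       # per-channel light intensity

L1_rows = color[:, None] * true_dir     # rows ~ unity_SHAr/g/b.xyz

# Colour-wise -> xyz-component-wise, then average with lumaConv.
lumaConv = np.array([1/3, 1/3, 1/3])
dominant = L1_rows.T @ lumaConv         # == mean of the three L1 rows
dominant /= np.linalg.norm(dominant)    # recovers true_dir
```

Note that normalising the plain sum of the three rows lands on the same direction, since the sum is just the lumaConv average scaled by 3.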

Yep, that looks pretty similar!

I’ll check it out, thank god the existing directional lightmap stuff just works for this. On that note, was any consideration given to fleshing out support for the built-in directional lightmapping mode? MonoSH is great, but it kinda requires a lightmapper that bakes to monoSH, of which I know only one commercially available.

Does the memory savings from not needing an alpha channel actually manifest here? I’m actually not familiar with how ASTC handles the presence/lack of certain color channels.

Some nitpicks…
You’re multiplying the L1 contribution vectors by 2, so your lumaConv should be divided by 2 for energy conservation.
L1 is an offset against L0 values, so I don’t see why you’d want to divide L1 by them? L0 contains the ambient intensity, so you’d be shifting ambient intensities around the L0 value rather than getting the luminance.
You can get the dominant direction value with

normalize((unity_SHAr.xyz + unity_SHAg.xyz + unity_SHAb.xyz) + FLT_EPS);

where FLT_EPS is your favourite minimum float epsilon value. That should save some calculation too.
So if you were to take

        #if !defined(_SPECULARHIGHLIGHTS_OFF)
            half3 L0 = half3(unity_SHAr.w, unity_SHAg.w, unity_SHAb.w);
            half3x3 L1 = half3x3(unity_SHAr.x, unity_SHAg.x, unity_SHAb.x,
                                 unity_SHAr.y, unity_SHAg.y, unity_SHAb.y,
                                 unity_SHAr.z, unity_SHAg.z, unity_SHAb.z);
            half3 dominantDir = normalize((unity_SHAr.xyz + unity_SHAg.xyz + unity_SHAb.xyz) + FLT_EPS);

            half perceptualRoughness = SmoothnessToPerceptualRoughness (smoothness);
            half roughness = PerceptualRoughnessToRoughness(perceptualRoughness);

            half specularTerm = ComputeSpecularGGX(dominantDir, eyeVec, normalWorld, perceptualRoughness, roughness);
    
            half3 sh = L0 + mul(dominantDir, L1);
    
            o_gi.indirect.specular = max(specularTerm * sh, 0.0); 

…I think you’d get a better result, cheaper.

Personally, I’ve never been a big fan of doing this - it tends to look bad on highly reflective or metallic surfaces, and if it overlaps with reflection probe highlights you get an ugly double highlight effect. On PC there’s rarely an excuse to not have reflection probes. But for Quest it makes sense. I’d say the highlight should probably be faded out when the perceptual smoothness is above 0.9ish to avoid weird pinpoint highlights or sparkles.

You mention that the case where you have a single, realtime directional light is quite rare, have you actually checked? A single realtime directional light is always calculated as part of the base pass even when lighting is fully baked. If you use baked shadowmask mode for lighting, Unity will store light occlusion as part of the light probes and pass it through to dynamic objects as a multiplier against the light intensity, making it a really good option for having the benefits of a realtime light while also supporting shadows without the performance penalty of realtime shadows. I use this mode even for Quest maps and it works fine there. Wouldn’t this change break specular highlights on avatars for those lighting setups?

The memory savings in question are versus a full RGB SH setup, which uses one L0 map and three L1 maps. In practical terms with ASTC you’re roughly even with dominant-direction, but with nicer specularity and diffuse.

You’re multiplying the L1 contribution vectors by 2, so your lumaConv should be divided by 2 for energy conservation.

Good catch! I think I had it in my head that L1 was in the range [-.5, .5] in unity_SHA but that doesn’t… really make much sense.

You can get the dominant direction value with

normalize((unity_SHAr.xyz + unity_SHAg.xyz + unity_SHAb.xyz) + FLT_EPS);

Not sure why that didn’t occur to me actually. Let me give that a shot!

You mention that the case where you have a single, realtime directional light is quite rare, have you actually checked?

Specifically, on Android content yes - I checked a couple dozen maps and didn’t see any using them. I can dig again though!

If we’re expecting this to conflict with realtime lights, I could potentially make it dynamic based on whether there’s any lights - you’d only get the specularity in worlds without it.

if it overlaps with reflection probe highlights you get an ugly double highlight effect

This is the case for existing specular highlights - they come from realtime lights, whereas reflection probes can have the same light in a slightly different place since it’s baked. This frequently results in a subtle double-lighting if your reflection probe highlights are based on emissive objects. Another method we were considering to mitigate this (on the creator side) was to integrate something akin to zulubo’s SpecularProbes (GitHub - zulubo/SpecularProbes) into the SDK - it bakes specular highlights into Unity Reflection Probes, allowing baked lights to cast sharp specular highlights for free. It’s basically the baked version of the above.

Hey, I’m one of the SLZ devs, this is almost exactly what we did for Bonelab! Silent’s recommendation of just normalizing the sum of the L1 coefficients is pretty spot on. To add to it, I generally avoid doing the standard normalize function on half-precision vectors as it is very inaccurate and can return vectors that are significantly shorter/longer than 1. It’s better to either cast the vector to float before normalizing or to write your own “safe” normalize function for half vectors where you cast up to full float before taking the rsqrt as that is the part that is significantly inaccurate with halfs.

half3 SafeHalf3Normalize(half3 value)
{
    // Clamp the squared length away from zero, then do the rsqrt at full
    // float precision - that's the part that's badly inaccurate in half.
    float lenSqr = max((float)dot(value, value), FLOAT_MIN);
    return value * (half) rsqrt(lenSqr); 
}
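
To make that concrete, here’s a toy numpy illustration of one way an all-half normalize falls apart - the squared length overflowing half’s ~65504 max. (On real GPUs the coarse half rsqrt is the bigger everyday problem, which this float16 emulation doesn’t capture.)

```python
import numpy as np

def naive_normalize_h(v):
    # Emulate normalize() done entirely in half precision: the products
    # and the running sum of the dot product stay float16 and overflow.
    v = v.astype(np.float16)
    len_sqr = np.float16(0.0)
    for x in v:
        len_sqr = np.float16(len_sqr + np.float16(x * x))
    return v / np.sqrt(len_sqr)   # sqrt(inf) -> inf, v / inf -> all zeros

def safe_normalize_h(v):
    # Cast up to float for the length; only the final scale is half.
    v = v.astype(np.float16)
    len_sqr = max(float(v.astype(np.float32) @ v.astype(np.float32)), 1e-30)
    return v * np.float16(1.0 / np.sqrt(len_sqr))

v = np.array([300.0, 300.0, 300.0])   # 300^2 = 90000 > half max ~65504
```

The naive version returns a zero vector for this input, while the upcast version stays within a fraction of a percent of unit length.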

You can save some calculations by not re-evaluating the light probe intensity in the average direction of the L1 coefficients and instead just directly use the light-probe value calculated for the diffuse. Not as “correct” but none of this is physically accurate anyways.

Also, you’re gonna need to apply a falloff function to the specular highlight, as GGX actually has a second specular peak in the opposite direction of the light source. You normally never see it with point sources as the light intensity is 0 on that side, but with probes the light coming from the opposite side is extremely unlikely to be 0. If you’re evaluating the specular intensity from the probe in the direction of the average L1, just pretend it’s a directional light and multiply by NdotL. For my case, where I’m evaluating from the probe diffuse directly, there’s already a factor of NdotL baked in as part of the lambert diffuse precalculated into the probe. So I made up some bullshit factor based on the square of the inverse of NdotL to get a sharper falloff curve.
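
A quick numpy illustration of why the falloff matters (this is a minimal GGX-distribution-only term, not Unity’s actual BRDF2): nothing in the half-vector math knows whether the light is above the surface’s horizon, so a below-horizon direction can still produce a positive specular term, and multiplying by a clamped NdotL is what kills it.

```python
import numpy as np

def ggx_spec(n, v, l, perceptual_roughness):
    """GGX distribution term only (no Fresnel/visibility) - just enough
    to show it stays positive for lights under the horizon."""
    h = v + l
    h = h / np.linalg.norm(h)
    ndoth = float(np.dot(n, h))
    a2 = perceptual_roughness ** 4
    d = ndoth * ndoth * (a2 - 1.0) + 1.0
    return a2 / (np.pi * d * d)

n = v = np.array([0.0, 0.0, 1.0])
l_below = np.array([0.5, 0.0, -0.866])        # light under the horizon

spec_raw = ggx_spec(n, v, l_below, 0.5)       # still positive!
spec_faded = spec_raw * max(float(np.dot(n, l_below)), 0.0)  # clamped NdotL
```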

One trick I did is to evaluate the specular only once for either the probe or the main directional light, switching which one the specular is being calculated for depending on whether the main light exists. Just determine if the main light intensity is greater than some small value; if it is, calculate the specular from the main light’s direction, otherwise use the probe direction. Then multiply by either the probe intensity or the main light. That saves a lot of calculations.
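
The selection logic is roughly this (a hedged Python sketch - the names are made up, and a real shader would express the choice branchlessly or with a uniform branch rather than an if):

```python
import numpy as np

EPS = 1e-4  # "some small value" threshold for the main light check

def pick_specular_source(main_color, main_dir, probe_color, probe_dir):
    """One specular evaluation total: feed it the main light's direction
    if the light is actually contributing, else the probe dominant dir."""
    main_lum = float(np.dot(main_color, [0.299, 0.587, 0.114]))
    if main_lum > EPS:
        return np.asarray(main_dir), np.asarray(main_color)
    return np.asarray(probe_dir), np.asarray(probe_color)
```

Whatever comes back feeds the single GGX evaluation, so the cost is the same whether the world has a realtime light or not.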

That aside, I’m actually not a fan of faking specular highlights. It’s a crutch for level designers using point sources without a “physical” source. Ideally you’d only use mesh light sources with physically correct emission intensity, but for VRC I understand you can’t assume that. Unfortunately, it’s a tradeoff where you make bad lighting in levels not suck as much in exchange for making levels with properly set up lighting look worse.

Also, please do not just force on specular highlights for baked lighting from unity’s dominant direction map. Make it optional. I know it’s in several popular shaders and we even did it for Bonelab though we’re moving away from doing that, but it very often looks like complete garbage. The dominant direction maps were not meant to be used like that and don’t contain enough information to get a plausible highlight in most situations. It makes all smooth surfaces look warped and melted due to small scale variations in the average lighting direction as well as just noise in the map, puts obvious creases around shadows and strong gradients, and in situations dominated by bounce lighting it produces overly bright and obviously wrong highlights coming from the surface normal. Also, it completely fails to handle mesh light sources correctly. Parallax corrected cubemaps are cheaper and usually produce less obviously wrong results.

Hey, I’m one of the SLZ devs, this is almost exactly what we did for Bonelab! Silent’s recommendation of just normalizing the sum of the L1 coefficients is pretty spot on. To add to it, I generally avoid doing the standard normalize function on half-precision vectors as it is very inaccurate and can return vectors that are significantly shorter/longer than 1. It’s better to either cast the vector to float before normalizing or to write your own “safe” normalize function for half vectors where you cast up to full float before taking the rsqrt as that is the part that is significantly inaccurate with halfs.

I’ll have to try to come up with some edge test cases for all of this where I can, but I imagine we’ll lean on an open beta in practice. I’ll implement this change where I can though! We use Unity_SafeNormalize in a few places already, but that one isn’t half-oriented.

Also, you’re gonna need to apply a falloff function to the specular highlight, as GGX actually has a second specular peak in the opposite direction of the light source. You normally never see it with point sources as the light intensity is 0 on that side, but with probes the light coming from the opposite side is extremely unlikely to be 0. If you’re evaluating the specular intensity from the probe in the direction of the average L1, just pretend it’s a directional light and multiply by NdotL. For my case, where I’m evaluating from the probe diffuse directly, there’s already a factor of NdotL baked in as part of the lambert diffuse precalculated into the probe. So I made up some bullshit factor based on the square of the inverse of NdotL to get a sharper falloff curve.

hmm, I’ll add some debug code to blow out areas with that so I can identify anywhere where that might be happening. Good callout though!

One trick I did is to evaluate the specular only once for either the probe or the main directional light, switching which one the specular is being calculated for depending on whether the main light exists. Just determine if the main light intensity is greater than some small value; if it is, calculate the specular from the main light’s direction, otherwise use the probe direction. Then multiply by either the probe intensity or the main light. That saves a lot of calculations.

Frequently in VRC there’s no main light ever, so one of my changes is to dynamically branch based on if one exists - this saved ~350 usec in my testing for the general case. If I can instead swap in the specular light that would be even faster, avoiding the dynamic branch entirely.

That aside, I’m actually not a fan of faking specular highlights. It’s a crutch for level designers using point sources without a “physical” source. Ideally you’d only use mesh light sources with physically correct emission intensity, but for VRC I understand you can’t assume that. Unfortunately, it’s a tradeoff where you make bad lighting in levels not suck as much in exchange for making levels with properly set up lighting look worse.

Yeah… One thing I was considering is also having this auto-disabled if actual reflection probes exist in the scene, and only making it the fallback for the no-reflprobe no-rtlight case. Then we can introduce the SpecularReflectionProbes code I mentioned above, to pretty much always be using the best-case scenario.

It’s an unfortunate truth that we need the crutch, as VRC has a lot of unavoidable CPU frametime, and as I mentioned above even our stripped-down Standard equivalent uses about half of the frametime on GPU if you use all of its slots.

Also, please do not just force on specular highlights for baked lighting from unity’s dominant direction map. Make it optional. I know it’s in several popular shaders and we even did it for Bonelab though we’re moving away from doing that, but it very often looks like complete garbage. The dominant direction maps were not meant to be used like that and don’t contain enough information to get a plausible highlight in most situations. It makes all smooth surfaces look warped and melted due to small scale variations in the average lighting direction as well as just noise in the map, puts obvious creases around shadows and strong gradients, and in situations dominated by bounce lighting it produces overly bright and obviously wrong highlights coming from the surface normal. Also, it completely fails to handle mesh light sources correctly. Parallax corrected cubemaps are cheaper and usually produce less obviously wrong results.

We aren’t! The shader allows you to select plain lightmap, directional lightmap, MonoSH or MonoSH (no specular highlight). For world creation shaders like this we don’t force anything, creators are free to use whatever they like - intent here is just to curate some features we’re pretty sure are fast enough for whatever you need on Quest.

I should actually take a swing at implementing parallax-corrected cubemaps, as yeah - those get a lot of use, would be good to have it curated!

Thank you for pointing this out, turns out my calculation only accidentally worked correctly in some situations. Now it works like, everywhere. Hooray!

A single realtime directional light is always calculated as part of the base pass even when lighting is fully baked. If you use baked shadowmask mode for lighting, Unity will store light occlusion as part of the light probes and pass it through to dynamic objects as a multiplier against the light intensity, making it a really good option for having the benefits of a realtime light while also supporting shadows without the performance penalty of realtime shadows. I use this mode even for Quest maps and it works fine there. Wouldn’t this change break specular highlights on avatars for those lighting setups?

In my testing, a BRDF pass is always run, but is frequently effectively a no-op if there’s no lighting - it creates a ‘light’ with light.color = (0, 0, 0). This change doesn’t break that, though I did add some dynamic branching to skip the unused parts of the BRDF if we have no use for them.

Confusingly, we’re supposed to support the ‘single directional light specularity’ case with Standard Lite (and we do!) but it doesn’t work if UNITY_SHOULD_SAMPLE_SH is enabled… even if we disable my logic, must be some weird part of noforwardadd. I’ll keep investigating, but I feel like I know even less now…

Could you point me in the direction of a map you’re using that does this? It’d be nice to have a closer look to make sure I’m not breaking anything!

Actually on further reading, I’m really not sure I understand how this is used - Standard Lite explicitly doesn’t support shadows or shadowmasks:
#pragma surface surf StandardVRC vertex:vert exclude_path:prepass exclude_path:deferred noforwardadd noshadow nodynlightmap nolppv noshadowmask

If you have a world you could link that’ll probably help me understand better!

Sorry, let me clarify, I never even considered using a full SH setup on mobile. I meant the memory saving of MonoSH over Dominant direction - where MonoSH doesn’t need that alpha channel on L1.

Why choose Mono SH over Dominant Direction? Is there any planned DD support? I understand the lighting representation benefits of MonoSH over DD, but DD is just so much more widely available to the average creator. It just doesn’t seem like it’s in the best interest of the creator community at large to support a directional lightmapping mode that is just straight up not available without paid addons without also supporting the - although technically inferior, still perfectly usable - directional lightmapping mode that is literally bundled with the engine. If VRC MSL already supports DD, I just straight up missed that, please correct me if that’s the case.

Huh, I was under the impression that DD doesn’t give any specularity? The worlds with lightmap specularity that I checked on PC were all using some form of SH (specifically, of the ones I could identify, always MonoSH, though through different shaders). The exception is Midnight Rooftop, where I couldn’t quite figure out what it was using.

Alrighty! Integrated a bunch of this feedback, and ended up accidentally optimizing some more stuff while digging. New Standard Lite runs just as fast as old Standard Lite in the worst case (of both), while looking like this:

In the most common case, it’s way faster! It now only tries to do the fake-probe-light thing when there are no realtime lights in the scene, so if you’ve got a real light it won’t change anything. I opted to keep it enabled even when reflection probes are present, since it’s meant to be a stand-in for realtime lights - see the discussion above.

Still left to do:

  • I’d already handled a number of cases where half could under/overflow, but I’ll be going through with a fine-tooth comb to hopefully catch any more.
  • Double check that it works identically to old Standard Lite in shadowmasked worlds as @Silent described, except that now it’ll use non-linear probe handling.
  • Highlight falloff function as described by @error.mdl
  • Try to implement some parallax cubemaps, though that may become a future development deal.

Alright, I think it’s in mostly a satisfactory state. Only lingering concern is that we would still eventually like to use something like Specular Reflection Probes, and this would pretty much always end up enabled, meaning you have two sources of (what’s specifically intended to be) specular highlights. We could disable them if you have reflection probes, but that would look weird without specular reflection probes - or at least get us back to square one.

As it currently stands, this won’t apply retroactively to world shaders, only avatars. So there won’t be any changes to pre-this-SDK world uploads, beyond avatars shading more dynamically within.

@error.mdl Incidentally here’s what I ended up with for my SH function:

            half3 L0 = half3(unity_SHAr.w, unity_SHAg.w, unity_SHAb.w);
            half3x3 L1 = half3x3(unity_SHAr.x, unity_SHAg.x, unity_SHAb.x,
                                 unity_SHAr.y, unity_SHAg.y, unity_SHAb.y,
                                 unity_SHAr.z, unity_SHAg.z, unity_SHAb.z);

            half3 dominantDir = VRC_SafeNormalize(unity_SHAr.xyz + unity_SHAg.xyz + unity_SHAb.xyz);
            half specularTerm = ComputeSpecularGGX(dominantDir, eyeVec, normalWorld, smoothness);
    
            half energyFactor = mul(dominantDir, normalWorld) / 2 + .5;
            half3 sh = (L0 + mul(dominantDir, L1)) * energyFactor;
            DEBUG_VAL(energyFactor);
    
            o_gi.indirect.specular = max(specularTerm * sh, 0.0); 

Here’s what EnergyFactor ends up being over the whole mesh:


sh:

Result:

Without the factor:

Not completely sure if I’m satisfied with this, I’ll come back to it in the upcoming week.

Pondering if it might make sense to lerp between length(L0) and length(L1) as like, a ‘fully directional’ to ‘nondirectional’ factor, and use that to determine how much we take the energyFactor into account.

Edit: Did some digging on the above and found some other folks in the literature doing that, so I’ll give that a shot. Looking like this will need more time in the oven, regardless!
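
For reference, one possible reading of that idea in numpy (everything here is an assumption about the eventual implementation, not shipped math): measure how directional the probe is from the L1 magnitude relative to L0’s luminance, then fade between no falloff and the wrap-style energyFactor from the snippet above.

```python
import numpy as np

LUMA = np.array([1/3, 1/3, 1/3])

def directionality(l0_rgb, l1_rows):
    """0 = fully ambient probe, approaching 1 = strongly directional.
    l1_rows: per-colour-channel L1 vectors, shape (3, 3)."""
    l0_lum = float(np.dot(l0_rgb, LUMA))
    l1_lum = float(np.linalg.norm(l1_rows.T @ LUMA))
    return float(np.clip(l1_lum / max(l0_lum, 1e-6), 0.0, 1.0))

def faded_energy_factor(n_dot_dir, directional):
    base = n_dot_dir * 0.5 + 0.5   # the energyFactor from the snippet above
    # Fully ambient probes get no wrap falloff (factor 1.0); strongly
    # directional probes get the full effect.
    return (1.0 - directional) * 1.0 + directional * base
```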

I think I’m leaning towards making this exclusively something that kicks in if there’s no realtime lights/reflection probes whatsoever, since there’s better ways to represent highlights in reflection probes we can develop instead.