I’m asking this here since it might be useful to other people. I’ve seen a couple of worlds in VRChat that use multiple cameras to render the world’s geometry from two or more perspectives at once. I’ve been looking into this for days now and it has me scratching my head: I can get it to work in Unity, but I’m missing something when it comes to translating that into VRChat.
In Unity you can use a camera’s Depth field to control the order it’s drawn in at runtime: cameras with a lower Depth render first, so a higher-Depth camera draws on top of them. This allows, say, rendering a HUD with a minimap, or preventing clipping on props, among other things. You can get cool effects by combining this with Clear Flags and Culling Masks in creative ways.
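For anyone unfamiliar with the technique, here’s a minimal sketch of the kind of setup I mean. The component, layer name, and viewport values are made up for illustration; only the Camera properties themselves are the real Unity API:

```csharp
using UnityEngine;

// Illustrative setup script, not from my actual scene:
// a main view plus a minimap-style overlay camera.
public class CameraStackSetup : MonoBehaviour
{
    public Camera mainCamera;     // the primary world view
    public Camera overlayCamera;  // e.g. a minimap camera

    void Start()
    {
        // Lower Depth renders first; the main view fills the frame.
        mainCamera.depth = 0;
        mainCamera.clearFlags = CameraClearFlags.Skybox;

        // Higher Depth renders afterwards, on top of the main view.
        overlayCamera.depth = 1;
        // Depth-only clear keeps the main view's colour buffer visible underneath.
        overlayCamera.clearFlags = CameraClearFlags.Depth;
        // Only draw objects on a hypothetical "Minimap" layer.
        overlayCamera.cullingMask = LayerMask.GetMask("Minimap");
        // Confine the overlay to the top-right corner of the screen.
        overlayCamera.rect = new Rect(0.7f, 0.7f, 0.3f, 0.3f);
    }
}
```

The same values can of course be set in the Inspector instead of a script; the point is just how Depth, Clear Flags, and Culling Mask interact.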
Just as an example of what I mean by “I can’t get it to work”:
I have this scene set up with two cameras: one on the ground, which is my Reference Camera in the VRC Scene Descriptor, and another in the air, looking down. Both are able to see the green cube.
Here are my parameters for the Reference Camera (Depth: 1) and the Overhead Camera (Depth: 0).
I have the Overhead Camera set to render before the Reference Camera, and when I run it in Unity I see the following, which is what I expect:
When I run it in VRChat, even in non-VR mode, I see this instead:
Is there any way I can deal with this? I’m stumped, and I haven’t seen anything about camera restrictions in VRChat either.