DLSS isn’t a replacement for FSR, as it only runs on Nvidia cards.
That aside, tl;dr:
- FSR isn’t implementable
- VRS has potential but requires a huge amount of work
- DLSS isn’t possible because every user-made shader would have to support it, which effectively means all existing shaders in VRC would break
Unfortunately, none of the tech we talked about is currently making headway, for a variety of reasons. All text that follows is me relaying information; I don’t know the super technical details and can’t expand the response beyond what’s provided here.
AMD FSR is a no-go: proper integration following AMD’s recommendations isn’t possible due to incompatibilities with the Post Processing Stack and other limitations of Unity’s Built-in Render Pipeline.
Nvidia VRS is potentially usable, but the Nvidia VRWorks VRS-based foveated rendering API doesn’t support Unity’s implementation of Single Pass Stereo Instancing (SPS-I), so a proper implementation that stays compatible with future engine upgrades would require too much development work. We’d need to use the lower-level VRS API to re-implement everything the Nvidia VRS foveated rendering API does.
Re-implementing the VRS foveated rendering API is doable, but it’s a bunch of C++ work to write a native rendering plugin.
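For anyone curious what that native-plugin route looks like, here’s a minimal sketch (mine, not VRChat’s), assuming Unity’s standard native rendering plugin headers (IUnityInterface.h / IUnityGraphics.h). The actual low-level VRS binding is left as a placeholder comment, since that part depends on the vendor API and is exactly the “bunch of C++ work” being described:

```cpp
// Minimal Unity native rendering plugin skeleton (sketch only, not VRChat code).
// Requires the IUnityInterface.h / IUnityGraphics.h headers that ship with Unity.
#include "IUnityInterface.h"
#include "IUnityGraphics.h"

static IUnityInterfaces* s_UnityInterfaces = nullptr;
static IUnityGraphics*   s_Graphics        = nullptr;

static void UNITY_INTERFACE_API OnGraphicsDeviceEvent(UnityGfxDeviceEventType eventType)
{
    // Create/release graphics-API-specific resources here (e.g. a per-eye
    // shading-rate lookup surface for foveation) on device init/shutdown.
}

extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
UnityPluginLoad(IUnityInterfaces* unityInterfaces)
{
    s_UnityInterfaces = unityInterfaces;
    s_Graphics = s_UnityInterfaces->Get<IUnityGraphics>();
    s_Graphics->RegisterDeviceEventCallback(OnGraphicsDeviceEvent);
    OnGraphicsDeviceEvent(kUnityGfxDeviceEventInitialize);
}

extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API UnityPluginUnload()
{
    s_Graphics->UnregisterDeviceEventCallback(OnGraphicsDeviceEvent);
}

// Runs on the render thread when the managed side calls
// GL.IssuePluginEvent(GetRenderEventFunc(), eventId) before drawing the eyes.
static void UNITY_INTERFACE_API OnRenderEvent(int eventId)
{
    // Placeholder: this is where the lower-level VRS calls would go, e.g.
    // binding a shading-rate image so each eye's periphery is shaded at a
    // coarser rate than its foveal region.
}

extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API GetRenderEventFunc()
{
    return OnRenderEvent;
}
```

The skeleton itself is the easy part; re-creating what the foveated rendering helper does (shading-rate surface management, per-eye rate patterns, keeping it working across graphics APIs and engine upgrades) is where the real effort would be.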
We could use the VRS helper API, but it doesn’t support SPS-I, which renders both eyes into a single render texture array.
We thought we had it worked out, but it just renders the left eye twice. Nvidia’s demo even has an instanced stereo mode, but it still uses a double-wide render target rather than a texture array.
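To make the double-wide vs. texture-array distinction concrete (my illustration, not theirs), here’s how the two eye-target layouts would be described in D3D11; the resolutions are made-up example numbers and the snippet only builds the descriptions, it doesn’t create the textures:

```cpp
// Illustration only (Windows SDK / d3d11.h): the two stereo render-target layouts.
#include <d3d11.h>
#include <cstdio>

// "Double-wide" stereo: one surface, twice the width, eyes packed side by side.
// This is the layout Nvidia's instanced-stereo demo still renders into.
D3D11_TEXTURE2D_DESC DescribeDoubleWide(UINT eyeWidth, UINT eyeHeight)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = eyeWidth * 2;   // both eyes in one wide texture
    desc.Height           = eyeHeight;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;              // a single slice
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D11_USAGE_DEFAULT;
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
    return desc;
}

// Single Pass Stereo Instancing (SPS-I): a texture *array* with two slices,
// one per eye. This is the layout the VRS helper API reportedly can't consume.
D3D11_TEXTURE2D_DESC DescribeSpsInstanced(UINT eyeWidth, UINT eyeHeight)
{
    D3D11_TEXTURE2D_DESC desc = DescribeDoubleWide(eyeWidth, eyeHeight);
    desc.Width     = eyeWidth;   // per-eye width, not doubled
    desc.ArraySize = 2;          // slice 0 = left eye, slice 1 = right eye
    return desc;
}

int main()
{
    D3D11_TEXTURE2D_DESC dw  = DescribeDoubleWide(2016, 2240);
    D3D11_TEXTURE2D_DESC sps = DescribeSpsInstanced(2016, 2240);
    std::printf("double-wide: %ux%u, %u slice(s)\n", dw.Width, dw.Height, dw.ArraySize);
    std::printf("SPS-I array: %ux%u, %u slice(s)\n", sps.Width, sps.Height, sps.ArraySize);
    return 0;
}
```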
We did some lower-level investigation and even contacted a few other VR studios, unfortunately without much promise.
DLSS is not being investigated, as it requires reliable motion vectors. We cannot guarantee those exist, because they must be implemented in each shader, and basically nobody’s shaders are going to have that implementation. Things as simple as a scrolling texture require custom motion vectors, or they’ll end up with a ton of ghosting artifacts.
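A quick sketch of why that is (illustrative only, plain C++ standing in for the per-pixel shader math, nothing from the DLSS SDK): reprojection-based motion vectors only capture geometric motion, so a static quad with a scrolling texture reports zero motion even though its content moves every frame.

```cpp
// Illustrative sketch only -- plain C++ mirroring per-pixel shader math.
#include <cstdio>

struct Float2 { float x, y; };

// Geometric motion vector: where was this pixel's surface point last frame?
// ndcNow/ndcPrev are the surface point projected with this frame's and last
// frame's object + view-projection transforms (after the perspective divide).
Float2 GeometricMotionVector(Float2 ndcNow, Float2 ndcPrev) {
    return { ndcNow.x - ndcPrev.x, ndcNow.y - ndcPrev.y };
}

// A scrolling texture moves its *content* without moving any geometry, so the
// geometric term above stays zero on a static quad. Only the shader author
// knows the scroll speed, so only they can supply this extra motion term.
Float2 ScrollContentMotionUV(Float2 scrollSpeedUVPerSec, float deltaTime) {
    return { scrollSpeedUVPerSec.x * deltaTime, scrollSpeedUVPerSec.y * deltaTime };
}

int main() {
    // Static quad: the geometry didn't move, so the geometric motion vector is zero...
    Float2 geo = GeometricMotionVector({ 0.25f, 0.25f }, { 0.25f, 0.25f });
    // ...but the texture scrolled, so what's on screen actually did move.
    Float2 content = ScrollContentMotionUV({ 0.5f, 0.0f }, 1.0f / 90.0f);
    std::printf("geometric: (%.4f, %.4f)  content (UV space): (%.4f, %.4f)\n",
                geo.x, geo.y, content.x, content.y);
    // A temporal upscaler fed only the geometric term reprojects stale texels
    // for the scrolling surface, which shows up as ghosting/smearing.
    return 0;
}
```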
There are some delicate lines we have to walk here, but blocking/blocked behavior is something we’ve been looking at recently.
No news yet, but almost certainly nothing this year. I believe the current priority is eye tracking first, as we think it’s the “low-hanging fruit”.
SDK2 content will continue to work for as long as we can possibly maintain it. Uploading it, however, will eventually become impossible.
Notably, there are SDK2 → SDK3 avatar converters that work pretty well. SDK2 functionality is so limited that replicating it in SDK3 is pretty easy nowadays.