Why? LS Media shares pirated content. Simple as that. I mean, I use LS Media because it’s great, but just saying. VRC technically already supports pirated content, so sharing the NYE VOD shouldn’t be a worry.
@codel1417 @VoicesShiature
Just gonna leave this here, and hope that VRC doesn’t implement it.
This image is from April 2022; it has since been removed from the modulateai website.
In this case, it would take the voice audio from VRC instances (no idea whether this would be public instances only, or all of them) and run it through an AI to determine moderation actions. I also believe the audio clips are stored temporarily for a human to listen to in cases where the AI finds a match.
ohhh okay!
DLSS works on Nvidia graphics cards and FSR on AMD graphics cards, if I remember correctly, so there's no benefit to removing FSR and replacing it with DLSS.
DLSS isn’t a replacement for FSR, as it only runs on Nvidia cards.
That aside, tl;dr:
- FSR isn’t implementable
- VRS has potential but requires a huge amount of work
- DLSS isn’t possible because every shader would have to support it (via motion vectors), and since basically no shaders in VRC do, they’d all break (ghosting)
Unfortunately, none of the tech we talked about is making headway currently, for a variety of reasons. All text that follows is me relaying information; I don’t know the super technical details and can’t expand the response beyond what’s provided here.
AMD FSR is a no-go as proper integration following AMD’s recommendations is not possible due to incompatibilities with the Post Processing Stack and other limitations of Unity’s Built-in Render Pipeline.
Nvidia VRS is potentially usable, but because the Nvidia VRWorks VRS-based foveated rendering API doesn’t support Unity’s implementation of Single Pass Stereo Instancing, a proper implementation that’s compatible with future engine upgrades would require too much development work. We’d need to use the lower-level VRS API to re-implement everything the Nvidia VRS foveated rendering API does.
Re-implementing the VRS foveated rendering API is doable, but it’s a bunch of C++ work to write a native rendering plugin.
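For context, here’s roughly the shape of that C++ work: a minimal sketch assuming Unity’s standard `IUnityInterfaces` native plugin API. The actual NVAPI VRS calls are omitted, and `OnRenderEvent` is just an illustrative name.

```cpp
// Minimal sketch of a Unity native rendering plugin, assuming Unity's
// standard plugin headers. The VRS work itself (marked below) would go
// through NVAPI and is omitted here.
#include "IUnityInterface.h"
#include "IUnityGraphics.h"

static IUnityInterfaces* s_UnityInterfaces = nullptr;
static IUnityGraphics*   s_Graphics        = nullptr;

// Called by Unity when the plugin is loaded.
extern "C" void UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
UnityPluginLoad(IUnityInterfaces* unityInterfaces)
{
    s_UnityInterfaces = unityInterfaces;
    // Kept around for device-event callbacks in a full implementation.
    s_Graphics = s_UnityInterfaces->Get<IUnityGraphics>();
}

// Rendering callback executed on the render thread.
static void UNITY_INTERFACE_API OnRenderEvent(int eventID)
{
    // This is where the lower-level VRS work would live: bind a
    // shading-rate surface for each eye of the SPS-I render texture
    // array, then restore state afterwards. (Omitted: NVAPI calls.)
}

// Handed back to the managed side so Unity can invoke OnRenderEvent.
extern "C" UnityRenderingEvent UNITY_INTERFACE_EXPORT UNITY_INTERFACE_API
GetRenderEventFunc()
{
    return OnRenderEvent;
}
```

The managed side would pass the result of `GetRenderEventFunc()` to `CommandBuffer.IssuePluginEvent` so the VRS state changes land at the right point in Unity’s frame.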
We could use the VRS helper API, but it doesn’t support SPS-I using a render texture array.
We thought we had it worked out, but it just uses the left eye twice. Nvidia’s demo even has an instanced stereo mode, but it still uses a double-wide render target.
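To illustrate the mismatch being described, here’s a sketch in plain D3D11 (not VRChat’s actual code): SPS-I renders both eyes into slices of one texture array, while the instanced demo mode still renders into a single double-wide target. A VRS helper that only understands the latter can’t bind a shading-rate surface per slice.

```cpp
#include <d3d11.h>

// Create an eye render target in one of the two layouts discussed:
// SPS-I uses a Texture2D with ArraySize = 2 (one slice per eye);
// the double-wide layout uses a single texture at twice the width.
ID3D11Texture2D* CreateEyeTarget(ID3D11Device* device,
                                 UINT eyeWidth, UINT eyeHeight,
                                 bool spsInstanced)
{
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width     = spsInstanced ? eyeWidth : eyeWidth * 2; // double-wide if not SPS-I
    desc.Height    = eyeHeight;
    desc.MipLevels = 1;
    desc.ArraySize = spsInstanced ? 2 : 1;                   // one slice per eye for SPS-I
    desc.Format    = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage     = D3D11_USAGE_DEFAULT;
    desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* tex = nullptr;
    device->CreateTexture2D(&desc, nullptr, &tex);
    return tex;
}
```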
We did some lower level investigation and even contacted a few other VR studios with not too much promise, unfortunately.
DLSS is not being investigated, as it requires reliable motion vectors. We cannot guarantee these exist, because they have to be implemented in each shader, and basically nobody’s shaders are going to have that implementation. Something as simple as a scrolling texture requires custom motion vectors, or it’ll end up with a ton of ghosting artifacts.
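To make the ghosting point concrete, here’s a toy illustration (plain C++ standing in for what would really be HLSL inside each shader; every name is illustrative): a static quad with a scrolling texture reports zero geometric motion, so unless the shader adds the texture-space scroll itself, DLSS reprojects the previous frame’s texels to the wrong place.

```cpp
#include <cstdio>

struct Float2 { float x, y; };

// Screen-space motion of a fragment: previous position minus current
// position, in UV space. A motion-vector buffer of this kind is what
// DLSS consumes to reproject the previous frame.
Float2 MotionVector(Float2 uvNow, Float2 uvPrev)
{
    return { uvPrev.x - uvNow.x, uvPrev.y - uvNow.y };
}

int main()
{
    // A static quad: the geometry doesn't move between frames, so the
    // default (vertex-derived) motion vector is zero...
    Float2 geomMotion = MotionVector({0.5f, 0.5f}, {0.5f, 0.5f});

    // ...but a scrolling texture on that quad moves anyway. Unless the
    // shader adds this texture-space scroll to the motion buffer itself,
    // the scroll smears into ghosting artifacts under DLSS.
    const float scrollSpeed = 0.25f;    // UV units per second (made up)
    const float deltaTime   = 1.0f / 90.0f;
    Float2 textureMotion = { scrollSpeed * deltaTime, 0.0f };

    std::printf("geometric motion: (%.4f, %.4f), texture motion: (%.4f, %.4f)\n",
                geomMotion.x, geomMotion.y, textureMotion.x, textureMotion.y);
    return 0;
}
```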
There’s some delicate lines we have to walk here but blocking/blocked behavior is something we’ve been looking at recently.
No news yet, but almost certainly nothing this year. I believe the current priority is eye tracking first, as we think it’s the “low-hanging fruit”.
SDK2 content will continue to work as long as we can possibly maintain it. Uploading it, however, will be eventually impossible.
Notably, there are SDK2 → SDK3 avatar converters that work pretty well. SDK2 functionality is so limited that replicating it in SDK3 is pretty easy nowadays.
I am not sure whether the rumors of implementing Toxmod are accurate, but I just want to put out there that it is not something that fits this game whatsoever. If something like that were ever implemented, I’d hope it would be limited to Public instances and not Friends or Invite instances. It would be very awkward knowing that every conversation, especially in what are supposed to be private spaces on a wonderful platform like this that transcends your average video game, would be monitored by a speech-to-text algorithm and scrutinized by AI. I understand it for Public instances, as they are difficult to moderate as is, but please keep it out of private or semi-private instances.
Translated from Japanese by DeepL machine translation.
I sent several bug reports to Canny a while ago, but after more than a week, my posts have not received any tags.
I understand that the resources of the development and support teams are not plentiful, but unless some kind of tag such as “TRACKED BUG,” “TRACKED INTERNALLY,” or “NEEDS MORE INFORMATION” is attached to the report, the user has no way to know whether the report was accepted, has reached the development team, was missed for some reason, or describes intended behavior.
Even if you find a bug or suspicious behavior, getting no response to your reports saps the motivation to keep reporting.
Can’t you do something more about this?
For a bug report to be considered on Canny, it needs to get the developers’ attention: it should be well written with clear evidence, not a duplicate of existing reports, and popular with many other users.
If other users are enthusiastic about a bug report, showcase it on social media so people can vote on it. I know the developers see all reports, but if a report isn’t clear, it’s pretty hard for them to comment on it and start work on a fix.
I would totally dig this myself; we do absolutely massive amounts of recording of VRC at Virtual Market for all our releases. Automated movement without having to hold a controller is something that can be done via OSC with VRC Lens, but it’s way beyond most users to implement something like that, and as much as I want to, I’m too busy to do it myself right now.
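In case it helps anyone attempt it, here’s a rough sketch of the OSC side, assuming VRChat’s default OSC input on UDP port 9000. The `CameraDolly` parameter name is hypothetical; substitute whatever float parameter your camera rig actually exposes. POSIX sockets are used here; on Windows you’d swap in Winsock.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>
#include <chrono>
#include <thread>

// Append a string to an OSC packet, null-terminated and padded to 4 bytes.
static void PushPadded(std::vector<char>& buf, const std::string& s)
{
    buf.insert(buf.end(), s.begin(), s.end());
    buf.push_back('\0');
    while (buf.size() % 4 != 0) buf.push_back('\0');
}

// Build a minimal OSC message carrying a single big-endian float argument.
static std::vector<char> OscFloat(const std::string& address, float value)
{
    std::vector<char> buf;
    PushPadded(buf, address);
    PushPadded(buf, ",f");
    uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);
    bits = htonl(bits);
    const char* p = reinterpret_cast<const char*>(&bits);
    buf.insert(buf.end(), p, p + 4);
    return buf;
}

int main()
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port   = htons(9000);              // VRChat's OSC input port
    inet_pton(AF_INET, "127.0.0.1", &dest.sin_addr);

    // Sweep the (hypothetical) dolly parameter from 0 to 1 over ~5 s,
    // i.e. automated movement without holding a controller.
    for (int i = 0; i <= 100; ++i)
    {
        auto msg = OscFloat("/avatar/parameters/CameraDolly", i / 100.0f);
        sendto(sock, msg.data(), msg.size(), 0,
               reinterpret_cast<sockaddr*>(&dest), sizeof dest);
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
    close(sock);
    return 0;
}
```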
I actually implemented a Cinemachine camera fullscreen override on my last music vket party map, with 3 paths, for this very same reason: we needed an autocamera for our streams. Shelter’s current map _Reality does this as well (as do some other maps), but having to rely on a world feature for recording pathing (which significantly increases render load for at least the recorder, or for the entire audience if done improperly) is not ideal.
I did a lot of multi-pass green-screen recording to do more motion-control-type effects, to be able to add in things like fog and other layered effects.
I did a lot of that sort of thing with mods on this music video. No clue how I would do a lot of these effects now.
We do not have a guaranteed time to acknowledge bugs posted to the Canny. There are thousands there! We try our best to parse them, but it is a large number of bugs.
Following the advice posted by @lhun seems reasonable.
We know users want this feature. It’s currently on our list to return to when we have time, but right now I do not have an estimate for when that time will be.
Same text as above-- we know users want this feature! We’ll get to it when we have time, but I don’t know when that will be.