Developer Update - 10 November 2022

I know what it does; that's why I call it toxic and horrible for privacy. And knowing typical VRChat logic, they'll implement something like that for Friends+ instances too, and you won't be able to play without hiding in private.

I’m hopeful that it costs more to monitor even Friends+ instances, and that the cost difference helps VRChat do the right thing. Unrelated, I actually kicked someone from a Friends+ instance last night. Worked perfectly.

Just want to say I think you guys are crazy (and words I can't say here) for actually wanting SDK2 avatars to be dead. I sure hope my old avis that I paid money for will keep working. Otherwise it's just more BS, just like PhysBones.

“knowing stupid vrchat logic they will implement something like that for friends+ instances too”

From a business POV, do you think monitoring public instances makes more sense for the moderation team, or are Friends+ instances more of a problem?

If you look at the posters on the login screen and other communication from the VRChat team, when they say “public” it usually means “public and Friends+”. They actually state this:


Public worlds == public and friends+

And the link you added… does not say anything good about it. It decides on its own to send data when it thinks something is bad, there's no way to check without sending it, and the data can be included in training sets. Just horrible for privacy.

Better solution: let people apply, then have users vote on the applicants. This gives you better representation of the communities and of the users who have an actual impact on VRChat, especially those who actually want to attend the NYE event.

At the very least, I think permission for a world to record should be requested on each world join by default. It's a bit weird to me that there are a lot of features missing from VRChat (even just a clock, for god's sake), but this feature is already being tested without concerns over privacy and safety.

@codel1417 @VoicesShiature
Just gonna leave this here, and hope that VRC doesn’t implement it.

This image is from April 2022. It has since been removed from the modulateai website.

In this case, it would take the voice audio from VRC instances (no idea if this would be public-only or all) and run it through an AI to determine moderation actions. I also believe the audio clips are stored temporarily for a human to listen, in cases where the AI found a match.

You could host the video on a server rather than YouTube. I mean, LS Media exists on VRC, so I don’t think you guys would have an issue with sharing your NYE VOD, lol.

Oh yeah, once again I am asking for persistent dynamic near-clip settings in VRChat pretty please, with sugar on top.

Also, please add a clock. This response was hilariously bad. How does a clock have a negative effect on social experiences?



I would recommend against using that as a reason for anything

Why? LS Media shares pirated content. Simple as that. I mean, I use LS Media because it’s great, but just saying. VRC technically already supports pirated content, so sharing the NYE VOD shouldn’t be a worry.


ohhh okay!

DLSS only works on Nvidia graphics cards, while FSR is vendor-agnostic, so there's no benefit in removing FSR and replacing it with DLSS.

DLSS isn’t a replacement for FSR, as it only runs on nVidia cards.

That aside, tl;dr:

  • FSR isn’t implementable
  • VRS has potential but requires a huge amount of work
  • DLSS isn’t viable because every shader would need to supply motion vectors, and user-made shaders in VRC won’t

Unfortunately, none of the tech we talked about is making headway currently, for a variety of reasons. All text that follows is me relaying information; I don’t know the deep technical details and cannot expand the response beyond what’s provided here.

AMD FSR is a no-go as proper integration following AMD’s recommendations is not possible due to incompatibilities with the Post Processing Stack and other limitations of Unity’s Built-in Render Pipeline.

Nvidia VRS is potentially usable, but the Nvidia VRWorks VRS foveated rendering API does not support Unity’s implementation of Single Pass Stereo Instancing, so a proper implementation that stays compatible with future engine upgrades would require too much development work. We’d need to use the lower-level VRS API to re-implement everything the Nvidia VRS foveated rendering API does.

Re-implementing the VRS foveated rendering API is doable, but it’s a bunch of C++ work to write a native rendering plugin.

We could use the VRS helper API, but it doesn’t support SPS-I using a render texture array.

We thought we got it worked out, but it just uses the left eye twice. Nvidia’s demo even has an instanced stereo mode but it still uses a double wide render target.

We did some lower level investigation and even contacted a few other VR studios with not too much promise, unfortunately.
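For the curious, the general idea behind VRS-based foveated rendering can be sketched roughly like this. This is an illustrative toy in Python, not the NVAPI or the VRWorks API; the distance thresholds, rate names, and function names are all made up for the example:

```python
# Illustrative only: how a foveated-rendering layer might assign coarser
# shading rates to screen tiles farther from the gaze point. The thresholds
# and rate strings are hypothetical, not real VRS API values.
import math

def shading_rate(tile_center, gaze, inner=0.25, outer=0.6):
    """Pick a shading rate for a tile from its normalized distance to the gaze."""
    d = math.dist(tile_center, gaze)
    if d < inner:
        return "1x1"   # full-rate shading in the foveal region
    if d < outer:
        return "2x2"   # quarter-rate in the mid-periphery
    return "4x4"       # sixteenth-rate in the far periphery

def build_rate_grid(width_tiles, height_tiles, gaze):
    """Build the per-tile rate grid a VRS-style API would consume."""
    grid = []
    for y in range(height_tiles):
        row = []
        for x in range(width_tiles):
            center = ((x + 0.5) / width_tiles, (y + 0.5) / height_tiles)
            row.append(shading_rate(center, gaze))
        grid.append(row)
    return grid

# 8x8 tile grid with the gaze at screen center:
grid = build_rate_grid(8, 8, gaze=(0.5, 0.5))
```

The hard part VRChat describes isn't this math, of course; it's feeding such a grid into the GPU per eye when the engine renders both eyes into one texture array via Single Pass Stereo Instancing.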

DLSS is not being investigated as it requires reliable motion vectors. We cannot guarantee these exist, because they must be implemented in the shader, and basically nobody’s shaders are going to have that implementation. Something as simple as a scrolling texture requires custom motion vectors, or it’ll end up with a ton of ghosting artifacts.
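To make the scrolling-texture example concrete, here's a minimal sketch (my own toy, not VRChat's or Nvidia's code; every function name is hypothetical) of why the reported motion is zero for a static quad unless the shader adds the UV scroll itself:

```python
# Illustrative only: a temporal upscaler like DLSS reprojects last frame's
# pixels using per-pixel motion vectors. A static quad with a scrolling
# texture has zero geometric motion, so the scroll component must be
# reported by the shader itself or reprojection ghosts.

def geometric_motion(prev_pos, curr_pos):
    """Screen-space motion from vertex movement (zero for a static quad)."""
    return (curr_pos[0] - prev_pos[0], curr_pos[1] - prev_pos[1])

def scrolling_uv_motion(scroll_speed, dt):
    """Extra UV-space motion introduced by a scrolling texture."""
    return (scroll_speed[0] * dt, scroll_speed[1] * dt)

def total_motion(prev_pos, curr_pos, scroll_speed, dt):
    """What the shader should report: geometric plus texture-scroll motion."""
    g = geometric_motion(prev_pos, curr_pos)
    s = scrolling_uv_motion(scroll_speed, dt)
    return (g[0] + s[0], g[1] + s[1])

# Static quad, texture scrolling at 0.5 UV units/second, one 60 fps frame.
# A naive shader would report (0, 0) here and DLSS would smear the texture.
mv = total_motion((0.2, 0.2), (0.2, 0.2), (0.5, 0.0), 1 / 60)
```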

There’s some delicate lines we have to walk here but blocking/blocked behavior is something we’ve been looking at recently.

No news yet, but almost certainly nothing this year. I believe the current priority is eye tracking first, as we think it's the “low-hanging fruit”.

SDK2 content will continue to work as long as we can possibly maintain it. Uploading it, however, will eventually become impossible.

Notably, there are SDK2 → SDK3 avatar converters that work pretty well. SDK2 functionality is so limited that replicating it in SDK3 is pretty easy nowadays.

I am not sure if the rumors about implementing ToxMod are accurate, but I just want to put out there that it is not something that fits this game whatsoever. If something like that were ever implemented, I'd hope it would be limited to Public instances and not Friends or Invite instances. It would be very awkward knowing that every conversation, especially in what are supposed to be private spaces, on a wonderful platform like this that transcends your average video game, would be monitored by a speech-to-text algorithm and scrutinized by AI. I understand it for Public instances, as they are difficult to moderate as is, but please keep it out of private or semi-private instances.


Translated from Japanese by DeepL machine translation.

I sent several bug reports to Canny a while ago, but after more than a week, I have not received any tags on my posts.

I understand that the resources of the development and support teams are not plentiful. That being said, unless some tag such as “TRACKED BUG,” “TRACKED INTERNALLY,” or “NEEDS MORE INFORMATION” is attached to the report, the user has no way of knowing whether the report has been accepted, has reached the development team, has been missed for some reason, or describes intended behavior.

Even when you find a bug or suspicious behavior, you lose the motivation to report it if your reports get no response.
Can't something more be done about this?


For a bug report to be considered on Canny, it should be brought to the developers' attention by being well written with clear evidence, not a repeat of existing reports, or very popular among many other users.
If other users are enthusiastic about a bug report, showcase it on social media so people can vote on it. I know the developers see all reports, but if a report isn't clear, it's pretty hard for them to comment on it and start implementing a fix.


I would totally dig this myself; we do absolutely massive amounts of recording of VRC at Virtual Market for all our releases. Automated movement without having to hold a control is something that can be done via OSC with VRC Lens, but it's way beyond most users to implement something like that, and as much as I want to, I'm too busy to do it myself right now.
I actually implemented a Cinemachine camera fullscreen override on my last music Vket party map, with 3 paths, for this very same reason: we needed an autocamera for our streams. Shelter's current map _Reality does this as well (as do some other maps), but having to rely on a world feature for recording pathing (which significantly increases render load for at least the recorder, or for the entire audience if done improperly) is not ideal.
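As a rough illustration of the autocamera idea, here's a minimal waypoint-interpolation sketch. It's a generic toy, not VRC Lens's or Cinemachine's actual API; in practice the resulting positions would be sent to the camera rig over OSC (e.g. with a library like python-osc), which isn't modeled here:

```python
# Illustrative only: a generic autocamera path, linearly interpolated
# between timed waypoints. Waypoint values and the dolly move below are
# invented for the example.

def interpolate_path(waypoints, t):
    """Camera position at time t along a list of (time, (x, y, z)) waypoints."""
    if t <= waypoints[0][0]:
        return waypoints[0][1]          # clamp before the first waypoint
    if t >= waypoints[-1][0]:
        return waypoints[-1][1]         # clamp after the last waypoint
    for (t0, p0), (t1, p1) in zip(waypoints, waypoints[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)    # fraction through this segment
            return tuple(a + (b - a) * f for a, b in zip(p0, p1))

# A 10-second dolly move past a stage:
path = [
    (0.0,  (0.0, 1.6, -5.0)),
    (5.0,  (2.0, 1.8,  0.0)),
    (10.0, (0.0, 2.0,  5.0)),
]
pos = interpolate_path(path, 2.5)  # halfway through the first segment
```

A real setup would evaluate this every frame and smooth with an easing curve; linear segments are just the simplest thing that demonstrates the pathing.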


I did a lot of multi pass green screen recording to do more motion control type effects. To be able to add in things like fog and other layered effects.

I did a lot of that sort of thing with mods on this music video. No clue how I would do a lot of these effects now.


When?

We do not have a guaranteed time frame for acknowledging bugs posted to the Canny. There are thousands there! We try our best to parse them, but it is a large number of bugs.

Following the advice posted by @lhun seems reasonable.

We know users want this feature. It’s currently on our list to return to when we have time, but right now I do not have an estimate for when that time will be.

Same text as above-- we know users want this feature! We’ll get to it when we have time, but I don’t know when that will be.