Developer Update - 22 November 2023

I could see us implementing that. Due to how free-form the Marketplace is, it’s not possible to detect if a world is “locked behind” a product or not. We can only tell if a world is using those systems or not.

I suppose we could trust authors to tag those?

1 Like

When all of this is written clearly and completely in the ToS, people, myself included, will feel better and we can start playing again. This really needs to be fixed! We can’t agree to something so vague and dangerous.

- The ToS has to be clear that private instances are still private and that our data won’t be given to services we don’t use ourselves.
- The ToS has to be clear that only public instances, and people who are being reported, have their privacy diminished, in LIMITED SITUATIONS, and not everyone, unless what you are saying isn’t true.
- If the acquirer changes, we would have to accept it again, they would have to ask for our permission again, and there should be a clear description of the process. We’re not exchanging bread, but actual human data.

These things should be in the ToS so we can agree to it. Also a list of who sees and uses our data, because apparently it won’t only be the VRChat team handling it. Something like a website that lets you opt out of what isn’t needed, and see what is needed.

3 Likes

Something is better than nothing! :heart:
I think author tags would be good in this case.
Maybe a required option where they’re given a choice to say if it’s paid/free, if that doesn’t work out.

All of this info is in the TOS, but in a generalized form. We have to keep it generalized for the reasons I noted in the section you partially quoted.

We’re also considering running some kind of automatic voice moderation or detection system in public instances and Group public instances

God no. Fuck that. The nice thing about VRChat is its minimal proactive moderation. If I want to jokingly call my friends slurs or discuss culturally sensitive matters in public instances (like The Roost of The Great Pug), I will, and I don’t want some stupid, biased automod breathing down my neck.

If people like @Skuld weren’t already a good example of what happens when you overmoderate then I don’t know anymore.

2 Likes

The issue I have with this is in my prior post, as there is already an issue with deceptive reporting in VRChat:

It’s like when people use out of context discord screenshots to make people they don’t like look as incriminating as possible. People can and will abuse the system to do exactly this.



Will this include stuff like dark-skinned people using the n-word amongst their fellow dark-skinned brethren? Because that’d be similar to banning “bro” because non-caucasians decided it was an insult or something. I dunno, I find the “hate speech” war to be a very stupid one personally; people should toughen up instead of becoming more sensitive to words. You cannot be harmed by words, you have the ability to stop listening (muting, blocking, plugging your ears, whatever) and are thereby immune to the funny sounds that escape people’s mouths. “Hate speech” and such should only apply to politically influential contexts, not just random conversation.

Perhaps in the context of auto-mod in public instances, limit it to setting the ‘offender’ to a default-muted state which people can choose to override by unmuting them (no penalization otherwise). That would be the least disruptive way of implementing it I can think of, so long as there is clear and obvious indication of it. Or even better, just have a profanity tick box in the settings that you can choose to enable, which essentially negates the filter.
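
Just to sketch what I mean, something roughly like this on the client side (hypothetical names, obviously not VRChat’s actual API): the auto-mod only flags a player, and each client decides locally whether that flag results in a default mute, honoring the profanity toggle and any manual unmute.

```python
from dataclasses import dataclass

@dataclass
class RemotePlayer:
    user_id: str
    flagged_by_automod: bool = False   # hypothetical flag set when the filter triggers
    manually_unmuted: bool = False     # local override chosen by this client

@dataclass
class LocalSettings:
    allow_profanity: bool = False      # the proposed "profanity tick box"

def should_start_muted(player: RemotePlayer, settings: LocalSettings) -> bool:
    """Default-mute flagged players locally, with no penalty beyond the mute."""
    if settings.allow_profanity:
        return False                   # the toggle negates the filter entirely
    if player.manually_unmuted:
        return False                   # the user's manual unmute always wins
    return player.flagged_by_automod
```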

I, and many others, still don’t like the idea in general. The desire to clean up public instances is a noble one, but it’s hard to say what will actually help and what will just cause other problems.



I just worry because I’ve seen multiple people get falsely banned before, even after very thorough investigation by numerous people (sometimes investigation to the dox level of thoroughness) showed them to be at the very least innocent enough not to warrant a ban, yet the ban was upheld because the evidence happened to be convincing enough and moderation didn’t seem to want to put in the effort to fully evaluate the case. And if people can stage super convincing scenarios for the timing of the video snapshots given to moderation, it would be even harder to disprove. Plenty of people say things arbitrarily, or when drunk, or whatever, that could be construed as malicious or even dangerous, and a two-minute video clip might look like a closed case, even though that person might have just been having a bad day and said some less-than-clean words at the time or implied some questionable things. People could send report recordings whenever convenient to them, especially if it’s just a click of a button rather than having to go through recording software and hosting files.

I think what people want more than anything is for the terms to specify the restraint. If the intent is to record only isolated packets for moderation reports, should the terms not say that this is not active moderation? Like I said before, the way the terms were worded, whether or not VRChat currently does so, there was nothing saying that it won’t record everything in private instances; there is no liability in the documentation inhibiting such an overreaching practice. By its current wording, VRChat could just choose to stalk anyone for however long they like and say “it’s for safety and security”. There is a reason search warrants exist irl.

The issue people are gonna have is that you can say you don’t or won’t, but that’s a “just trust me bro” scenario. It doesn’t legally say you don’t or won’t.

4 Likes

Quick question on the plans for video recording~ Do you know how it’s going to be done yet? My one worry is that it could have a performance impact on our end. I would imagine the buffer would be replay data instead of actual video ~ taking the voice, tracking data, avatar IDs, world IDs, etc. so moderators could replay the reported scene.

Could have a trickle-down effect of giving the creator community tools to record replays for filmmaking, but that’s just wishful thinking.
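
Purely to illustrate what I mean (this is just my speculation, not anything VRChat has described; all names are hypothetical): a short rolling buffer of lightweight per-frame records that only gets serialized when a report is actually filed.

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ReplayFrame:
    timestamp: float
    world_id: str
    avatar_ids: Dict[str, str]                              # player id -> avatar id
    head_positions: Dict[str, Tuple[float, float, float]]   # coarse tracking data
    voice_chunks: Dict[str, bytes] = field(default_factory=dict)  # compressed audio

class ReplayBuffer:
    """Fixed-length rolling buffer; old frames fall off as new ones arrive."""

    def __init__(self, max_frames: int = 30 * 120):  # e.g. ~2 minutes at 30 fps
        self.frames: deque = deque(maxlen=max_frames)

    def push(self, frame: ReplayFrame) -> None:
        self.frames.append(frame)

    def snapshot(self) -> List[ReplayFrame]:
        # Only serialized and attached when the user actually files a report.
        return list(self.frames)
```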

1 Like

Nope, engineering hasn’t even started on it, but when you’re working on something that impacts your TOS and PP like this, you want to get ahead of it.

1 Like

I thought I would share my personal thoughts in regards to the extensive data collection and monitoring that occurs online through social media platforms and technologies in general:

Most already know this, but virtually ALL online activities and metadata are tracked by companies in order to power personalized services, target ads, and detect illegal behavior. The basic point that many, perhaps most, AI systems rely on privacy violations at a massive scale is important to underscore.

While privacy policies outline broad tracking and use of personal data, the full extent and implications may not be clear to all users. I remember the first time I read Facebook’s privacy policy I was horrified and immediately demanded that they delete my account forever (which is hysterical to look back on, now that I agree to let Meta track my body and face movements in order to play VRChat :rofl:). If you take the time to fully read any major tech company’s TOS or Privacy Policy, you find that personal data is routinely collected, linked, analyzed, and used in ways most would not expect and would find very invasive.

Those of us in the technology industry can attest to the depth of data collection and analysis conducted behind the scenes. For the average user concerned primarily with enjoying online services, a degree of data sharing and monitoring may be an unavoidable tradeoff. A mini deal with the devil :smiling_imp:, if you will… However, individuals seeking complete anonymity, or those engaged in harmful or unlawful acts, have more cause for concern about a lack of privacy protections, and in my personal opinion, rightfully so. Especially when it comes to online predators, of which there is a countless and growing number… :no_entry:

I see above some very valid concerns that VRChat appears to be establishing very broad terms of service that could enable highly intrusive monitoring systems without proper user consent. However, based on their public statements, some factors indicate their intent may not be as nefarious as it seems:

→ Representatives have clarified they do not actively monitor or directly record users in private instances. Any future expansion would require reconsidering user expectations of privacy.
→ Proposed systems like audio buffers are intended to provide evidentiary context for reports, not ubiquitous surveillance. This suggests addressing a valid trust & safety problem, not merely expanding control.

VRChat has a vested interest in building user trust, as invasion of privacy could severely damage their platform. Allowing overreach would likely not be strategically beneficial long-term. While the terms could certainly be narrowed and future plans more clearly outlined, VRChat’s actions so far also demonstrate genuine care and concern for users’ privacy. I appreciate Tupper being on here (on a holiday, and sick) personally addressing some of these concerns, and providing the opportunity to give feedback and discuss issues. As long as this transparency and open dialogue can continue, I think it may help ensure new measures respect privacy while enabling platform integrity. Ultimately, VRChat’s priority should be mutual understanding with users. Broad terms alone leave too much open to undesirable interpretation.

Finally, I think it is important to note that the establishment of privacy rights in the virtual space is an uncharted frontier. Navigating how to establish appropriate player privacy protections within complex digital domains will require creative thinking and nuanced policy-making to balance openness with justified security concerns.

3 Likes

It’s explicitly stated that the tracking data is only used for transmitting your avatar position to other players, and that it’s not stored. This has been the case literally forever, and the only alternative would be playing the game in desktop mode as a static, permanently T-posing mute.

All this ToS drama is nothing more than absurd fearmongering, spread by people who aren’t remotely qualified to understand legal document jargon, yet decide to freak out and sensationalise it regardless.

The only data rights issue in the ToS that’s remotely notable is the part about audio/video snapshots, which is only triggered when reports are made and used solely for better moderation. Which is something that’s been a major request from the community for years now.

I don’t understand why everyone is suddenly acting like VRChat has some sinister ulterior motive that they’re trying to sneak past us all, and how publicly informing us about ToS changes and data handling is apparently somehow the big accidental reveal of their conspiracy. You guys know that they’ve ALWAYS had this data for the entire life of the platform, right? Why suddenly act concerned now?

Why do we keep having this ridiculous panic EVERY SINGLE TIME the ToS is ever mentioned? Can we please STOP freaking out over generic, routine legal documents? This is like the 12th time by now, and it’s been baseless, ignorant, panicky nonsense every time.

2 Likes

The ToS is vague and not clear enough, not generalized. Right now it’s pretty dangerous to accept those terms. It needs to be professional and clear.

If what you say is genuine, it needs fixing.

1 Like

Unless large corporations like AWS are able to demand private data for purposes like AI, and drastically reduce storage and bandwidth costs for those who are willing to hand it over.

While it is true that our uploads, which are the server’s inbound downloads, are nearly or entirely free, bandwidth usually takes up a lot of a server’s costs; in other words, storing the data itself is still pretty cheap, because it’s a much lower percentage of the total.

Otherwise I can’t think of any reason to monitor everyone for long periods of time, especially since it costs a lot of money to keep this data for a long time.

As long as you don’t leave the monitoring on when you close the game, I’m fine with it.

99.9% of that is just bulk statistical data to be sold to advertisers; they aren’t spying on you and recording videos of you.

I firewall the outbound traffic from the Oculus client; as far as it’s concerned, I’m playing offline.



see:


I’m sure most of us appreciate it, but it’s not Tupper that needs to be convinced of anything; it’s whoever is writing the legal text and planning the related systems.




That’s not even true lol. If anything it’s likely you are the one that doesn’t understand legalese and is trying to point fingers at people who do understand it to an extent and see the available loopholes.

The ToS doesn’t say that it’s limited to snapshots; that’s just Tupper’s word. The ToS should specify the limits of its usage.

Nobody’s saying that it’s a sinister conspiracy besides you. People are concerned that it opens the doors for possible future corruption, misuse, abuse, instability, etc.

I dunno what you’re on about with “every time the ToS…”; the only two times there was any kind of major backlash over it were EAC and now, both of which were perfectly justified responses. People didn’t just change their minds about EAC; people simply tolerate it because it’s not bad enough. When a change drops is the appropriate time to respond loudly if necessary; that’s when the most people are willing to speak up, and when VRChat is listening most. It’s not hysteria or fearmongering, it’s people expressing their concerns.



That’s not true. Massive AWS plan costs can go up quite a lot. VRChat needs a combination of investor money AND revenue if they hope to keep VRChat growing and not just existing.

4 Likes

Just a reminder that there’s a service that summarizes legalese.

2 Likes
  1. Where do Friends and Friends+ fit into this? Are they considered public or private?

  2. Who gets to decide what is toxic/hate speech? No one in the world can agree on such a definition thus you need to clearly define what constitutes hate speech on your platform so it can be enforced objectively according to your rules and not subjectively according to a potentially hypersensitive “victim”.

  3. Remind people that they can always BLOCK someone. The power is literally in their hands!!

5 Likes

This.

The ToS needs to be fixed, because this is what matters most, and right now the ToS is the issue. There are so many gaps and holes in it that people end up consenting away their actual freedom for no clear reason (hunting hate speech? I’m not even sure that’s a reason at all to spy on adults and their kids 24 hours a day).

Some people and corporations are not dumb; they don’t like holes. The only thing people are asking for is their own rights, and respect.

Like someone said before, don’t work against your community and use them as numbers; work with them to build a healthy and secure place.

6 Likes

I’ll start by saying that this clarification is appreciated. My specific concern was regarding automated monitoring systems within private instances, and the assurance that any recording within private instances would be handled actively by Trust and Safety agents addresses that concern. However, I disagree that the privacy policy clearly states this. While the policy addresses the use of personal information for legal compliance and terms enforcement, it doesn’t distinctly outline the active involvement of Trust and Safety agents or detail the role of future automated monitoring systems. While I recognize and support the intention to implement automated monitoring in areas outside of private instances, as mentioned in previous sections, clarifying these points in either the policy or the Terms of Service, particularly with the upcoming introduction of automated monitoring, would enhance transparency and reassure users about how their data is being handled and safeguarded.

5 Likes

We don’t want our personal information to leak, and we don’t want to be monitored in real time.

It’s also not safe. If something goes wrong, personal information may be leaked. How will you take responsibility if such a situation happens and personal information is leaked? Of course, other game companies monitor as well.

But that’s something you only do in a place that isn’t a social game. Social games don’t monitor; the reason is to protect personal information and users’ private lives. And VRChat is a social VR game.

That’s where the big problem comes in. It’s like all of our conversations and behavior being handed over to VRChat, and VRChat operators will be watching it.

We don’t like it. We’re not your TV show. I want to share our memories and actions with other people; I don’t want them exposed to you.

There are also many good approaches other than monitoring. Why do you think monitoring will solve everything? Please think again. If it goes on like this, VRChat will disappear.

Who wants to play a game that peeks at their daily life? No one in the world. Already, many community sites are starting to say that VRChat is a game that peeks at users’ daily lives and personal information.

Word is going to spread more and more soon. If that happens, who would want to play this game? Please find another way. I love VRChat; that’s why I’m speaking up. Please do not ignore this post.

3 Likes

Public only certainly seems like a fair thing to implement! I doubt anyone is truly expecting privacy within a public instance. The issue for me is that the new terms specify that this extends to private instances, which to me is concerning!

The way I contextualize it is that if I go to a public instance, I am in a public area. I’m in someone’s store, a park, or a government building. I have precisely zero expectation of privacy, anyone may be recording me, and I will conduct myself as such. However, if I’m in a private instance, that’s more akin to visiting someone’s house. Now, the owner of that house and other guests in that house may choose to record things, but I wouldn’t expect the landlord of that property to be making recordings without permission.

And if I’m seeking privacy, then going to someone’s home is precisely what I would do! The way the terms are written though, that’s simply not on the table.

2 Likes

In some places Friends+ is considered public; other times it’s not. Having this decided upon and finalized would be ideal. Under the old system, Friends+ had been functionally identical to Public except for being unlisted. The instance access permission update changed that, so they ought to no longer be considered public.

5 Likes