There are still some potential things here that I won't like, and the TOS is so vague that you could still technically do all the things we fear, but thank you for clarifying and making me feel more comfortable.
My question here is: does this mean that if a user creates a report while in a private instance, you receive a recording? Or do you simply receive a report, and then a Trust and Safety agent must make the recording themselves? There’s a big difference in my mind between someone directly submitting a recording with the ease of pressing a button versus someone pressing a button that simply says “Hey, come check this out”.
VRChat really should have an elected council of representatives that directly partake in development decisions: people who represent various communities and demographics of vrchat, who can sign NDAs and actually sit in on dev meetings to discuss the implications of topics such as this.
Every time there’s a major feature update, it feels like it’s coming out of an echo chamber. The common opinions people find wandering around vrchat and talking to people through all kinds of avenues tend to be vastly different from what vrchat decides, or even from what it thinks the community wants. Sure, there is SOME listening, but like I mentioned before with how vrchat needs to implement surveys, there is not enough broad representation.
No matter how broad vrchat might think its perspective on the community is, every governing body makes this mistake until they have real representation provided to them. No, you do NOT have nearly as wide a perspective as you think you do. A handful of vocal dancer, world designer, furry, etc. communities, plus random people who speak up or happen to go out of their way to look at the canny, is not adequate.
Don’t just listen to me, or just listen to a vrc twitter space, or a popular canny post or screeching on the discord, listen to everyone.
If you want an example of a game that uses an elected council, EVE Online does this, and to great success.
ps: this is not to detract from the serious ethical concerns of surveillance, but a committee would be able to bring these things to attention much sooner and with more weight.
Precisely.
The worst thing is that kids, the ones that really don’t care about or understand these things, will be among the most affected by such a ToS; we adults are usually more aware of modern times.
They really need to revert and fix some stuff.
If they don’t, it means…
I don’t understand why they want to hurt the community and the platform as well. Well, I do… it’s called money, I suppose.
For real. I personally think it has to be because they focus on investors rather than the majority of the community, or its security/health. I do understand the need for money, but not the, apparently, shady ways of approaching things.
A social game where people can chill can easily be turned into a virtual ID collector jail with just a few changes, depending on who gets their hands on such things. A social platform, even if current laws still aren’t really ready for it, is miles different from other games.
Private places should be freaking Private, most of it anyways.
We can all agree that Public is not private.
People need clarity and transparency. Talk to them. Play with them. Don’t use them.
I actually agree with this. It’s being blown way out of proportion. Although the ToS and PP could’ve been worded better here, the audio recording for moderation purposes is nothing new in the online world. If you’re online using any kind of communication application, there’s an absolute guarantee your audio is being recorded solely for the purpose of moderation, and in most cases, isn’t even used at all unless some kind of investigation occurs. There really isn’t much to worry about here imo and even Tupper assured us that any recordings won’t end up being abused.
I’m sure everyone has seen the cringe that occurs in public instances. Having AI identify a very toxic situation in public instances will be very beneficial. People blatantly screaming the N-Word here have no place in VRChat and thus I can definitely see how much of a help this could be in removing these people. Private instances are private and don’t need automated intervention, we have moderation tools here in case someone comes in to be a jerk.
What’s stopping VRChat Inc from dropping the hammer on subjects they consider questionable but I want to discuss with someone in a public instance?
I sure as hell do not want proactive moderation that can and WILL be abused to high holy hell in the name of buzzword bullshit like ‘toxicity’ or ‘hate speech’. If words offend you, mute or block, or better yet, just leave VRChat as a whole if you’re that sensitive.
Actively moderating truly toxic (that word again) people like Soliouss on the other hand, I can live with that. What I do not accept is being automatically silenced just because I discuss a sensitive matter in public.
Nobody can actually be hurt by some annoying miscreant screaming the n-word, just mute or block them, end of story.
Everyone has a problem with the fact that you could be recorded at anytime anywhere, no matter how sensitive your conversations may be, no matter how they could be misconstrued to your detriment without your knowing…
Do you want to have to be ultra paranoid 24/7 to make sure you never say the smallest thing that someone who doesn’t like you could get recorded with a report at the lazy click of a button?
Manually going out of your way to record with your own software is effort, and having to upload it somewhere is more effort, this makes it a strong deterrent. But all it takes is a random prick to click report and some moderator that decides to be ultra strict on vague guidelines to ban you and decide it’s permanent, even if you were having a perfectly benign conversation with someone on sensitive subjects.
It’s got nothing to do with whether it’s new or not; it’s never acceptable for surveillance to invade private spaces, ever. People always fight and ridicule these systems, it’s just hard to win against overpowering corporate entities.
Anyone who thinks they’d be immune to this is wrong, because all it takes is one disgruntled person waiting for the perfect opportunity to send in the most unluckily timed report, and anyone could get hit. Nobody is safe from this. If there is no video, it’s harder for these types of people to prove that you have indeed done something wrong, so there is potential to appeal due to lack of evidence; but if there’s false evidence, it’s a lot more difficult to resolve and success is increasingly unlikely, since moderation usually prefers to be final on reports instead of spending all day reinvestigating each one.
It’s not blowing it out of proportion, in fact it’s the opposite. Telling people to stop making a fuss about it is just saying “let companies do whatever they want, don’t bother fighting it cuz it’s not a big enough deal”… See, the thing is, by the time it fills your criteria of “being a big enough deal”, it’s already too far gone and there is no way to save it. At that point it’s doomed to die.
Also, tupper’s words alone don’t mean much here, the tos still claims the ability to do so.
ps: if anyone’s curious, i don’t really have stuff to hide either, i just don’t want to have my existence and conversations constantly questioned and scrutinized by sensitive people who don’t understand them. i keep to my own spaces for a reason (and put a lot of effort into them), but this allows peoples’ spaces to be invaded unknowingly. i’m sure most people share a similar sentiment.
@tupper Every online game and platform has had to deal with volumes of bs reports since the dawn of online communication and moderation infrastructure. It comes with the job, and is an unavoidable consequence of dealing with the internet, because people are crazy and ignorant and whatever else. Trying to “lighten the load” or “make their jobs easier” almost always detriments everyone else and the platform, because it further complicates matters. (sure it’s a lot easier to just ‘shoot’ everyone and not think too hard)
This is not just an issue of online moderation, the same struggles apply to nation governance; people are crazy and the more you try to employ systems to “clean up the crime” the less effective that ends up being, and often times it actually creates crime where there wasn’t prior, and makes everything more complicated and harder to be certain of.
Having a recording doesn’t always make it easier, often times it puts everything a layer deeper and also presents the opportunity for false certainty. These things are not really solutions, nor are they likely to help much if at all, and is more likely to give the false impression of being effective due to there being more people punished for misbehaviour (which is the trap even governments fall into). There may never be actual solutions. Moderation needs accountability just as much as everyone else (lest we fall into the dictatorial hellholes that are things like subreddits and various discord communities with mods that ban people simply because they don’t want to deal with it or someone said something they didn’t like).
The most effective thing for cleaning up environments at large scales is to solve the core problems that generate them in the first place; unfortunately vrchat has no access to this, because it involves the quality of life of the individuals who find themselves misbehaving (poor home life, declining economy, other things causing depression or other behaviours that result in projection or otherwise). VRChat cannot fix peoples’ livelihoods, so it cannot prevent the formation of delinquency.
We live in the age of cancel culture and everyone being hypersensitive to everything, nobody should be enabling them to do as much more than they already do, nor should anyone be rewarding their disruptive behaviour by fulfilling their every wish and letting them punish those that bother them (because everyone bothers somebody somehow, and if everyone was able to remove everyone that bothered them, there’d be nobody left). Gang reporting, framing people as p-words and other stuff are already a huge problem.
If you want to implement ToxMod or something similar for just public worlds, where people are filtered against a profanity list while present (and nothing more), and it stays in public only, then that is acceptable, but free-rein recording anywhere and everywhere is absolutely, unequivocally unacceptable.
“Manually going out of your way to record with your own software is effort, and having to upload it somewhere is more effort, this makes it a strong deterrent.” - Not for those running PCVR either on screen or through a tethered headset. Gamer graphics/audio drivers have independent one-touch recording and streaming built-in. The barrier is so low, it’s scary.
The Quest headset also has a single button for recording and taking screenshots, and another single button for sharing them on just about any social media site on the internet.
Not really, they still have to go get the file and upload that content somewhere. It’s also not built into the client, so only people that care more than normal will do so; they also have to configure said software. I’m talking about random 4head that is hypersensitive and probably not overly tech savvy pressing a button that vrchat provides internally.
After researching ToxMod a little (the tool vrchat previously investigated for automated moderation, which caused controversy), my friends and I have discovered from other platforms that it is kind of what we’d fear: it appears to be snowflake moderation like an elementary school playground, where nobody’s supposed to say swear words or imply anything even a little derogatory, ever (inadvertently or not). This is by far overkill. Maybe you want this for Quest due to the younger demographic, but otherwise it’s just overkill. I seriously hope vrchat never goes this route (even with a different but similar tool).
*This message is responding to the TOS changes in this thread
*before the message, i’d like to say that i am new to this side of the VRChat website, so i didn’t format anything, but i hope the message still comes through
I, as i’m sure many others as well, really appreciate this clarification, but, as other people said above, it should be clear in the Terms of Service what is and isn’t monitored, where it is and isn’t, and how it is being done. I do not want any of my potentially “toxic” talk to be the reason i get banned for a while, even tho i was just drunk, talking to a friend i talk like that with all the time, for no reason.
I am super concerned about such audio/video recordings, as it is really hard to draw a line between what is just some drunk people saying some things to a friend, and what is someone actually harassing someone. I’ve seen countless times how (and i’m sorry to say this but) VRChat customer service managed to be absolutely useless, not wanting to discuss anything about the reason someone was banned, how it may be a mistake, etc. If anyone can make a report based on some random stuff i or someone else happens to say, i will rack up years of bans in no time, even tho i am just calling my friends names for fun, in a way that we both agree is fine between each other.
Another thing i am concerned about is the definition of a “private instance”, and how it may be monitored.
So far it’s clear that invite and probably invite+ instances are considered private, but as someone stated above, it is not clear where group, group+, friends, friends+ instances lie on this scale. In a public lobby i’d never expect to have privacy, just as i don’t expect it out on the street irl, but in a friendly hangout in a Friends+, i would not want everything to be recorded and that i could get banned for such “offenses”.
Another thing i am concerned about, which was not mentioned before, is the way these recordings may be captured. It was mentioned that there may be a performance impact, but that’s not what i’m worried about. The thing i’m worried about is more so the problem of malicious users possibly forging scenarios/replays, through some way of cheating EAC and other things. EAC is not a bulletproof anti-cheat solution, i have seen people bypass it many times, there are cheat clients, etc, and if something like that were possible, it’d really mess with the whole safety of this Trust and Safety system addition.
Minecraft is a game with a huge modding community, and in it lie the few that to this day can circumvent any kind of defense Microsoft implemented to make Chat Reporting safe, to the point where you can be falsely banned for saying No, and then Yes, to random questions. That is the reason i only play that game with people i know, not random people.
If VRChat wants to implement such a feature, it’ll have to be engineered in a way that it can’t be bypassed, cheated, etc, as otherwise it would make the whole situation even worse than it already is. Not only would you be harassed by a guy with 14 modded bots in a public, but they could also report you and get you banned with fabricated replays or the like.
Also, i understand that you have to keep things vague in the TOS as a cover-yourself move for the company, but it saying in legal jargon that VRChat can on a whim record whatever it wants (movement, avatar data, voice, etc) in any instance, under any circumstances, with the only thing saying otherwise being an unofficial post by one employee, is, at the very least, a feature-abuse problem just waiting to happen, and the victim in such a case would have no recourse but to walk away, with someone on the other side knowing everything about them.
I know it is all a very, very fear-mongery and dystopian thing to say all this, but if the rules and definitions are not clear enough, someone will abuse the loophole, and i do not want to risk being in the middle of it.
What i do and say privately to friends or members of a group of like-minded people, i want to stay private, not something that’s in the hands of someone at random, with no indication of them actively using it, who can abuse these things whenever, and however they please.
I hope the community and VRChat can come to an agreement of how this could be handled without invading privacy, without being unsafe/risky to use, and work as intended by the devs.
It has the potential to be very useful in coming to conclusions about reports, when both parties could replay said event and talk through how it may be a false positive, but that assumes that the system would
A: work as intended, (AKA people not being able to fabricate reports), and
B: that the system is used as intended by all parties, both of which could very well not be the case, and so we are in this limbo state where i couldn’t trust such a system in its current state with either my privacy, and thus private information i may be saying out to a friend in a one-on-one conversation, or the safety of my account.
I know i focused very heavily on the negatives in this reply, but the addition of such a feature is genuinely a very good idea; there are certainly tons of predators and other bad people that need to be caught, but it needs a lot of ironing-out still, and i hope VRChat is game to do that in cooperation with the community.
*tho with all that said i’d love to see such a system in place for creating media inside this behemoth of an application as a creative feature.
I had mentioned not so much people hacking the system and forging doctored content, but rather people reporting others at conveniently inconvenient times, or manipulating them into saying incriminating things, etc (which has the same effective result).
Blessed would be the day when a company actually moderates fairly and treats a case as a mediation, instead of just flipping through files and quickly stamping guilty or not based on however the moderator happens to interpret the rules.
There’s predators everywhere, and they will always continue to crop up no matter how many you smack down… there’s no realistic solution for them that could actually help the problem, besides educating people to protect themselves from becoming a victim of a predator on the internet. Nobody can force you to do anything. If we have to resort to watching everyone on the toilet to make sure nothing illegal is happening in there, then what is society anymore?
Whether or not their intentions in actual practice are as gross or far-reaching as people are concerned about, even initially, the concern is the potential evolution to that state if nothing legally restrains it from happening.
But yeah, the worries about false positives are very real, it absolutely will happen, and nobody wants to be the unlucky one.
Cross-posting from Twitter:
I do not feel comfy with the new ToS, granting unfettered monitoring of private instances.
I appreciate you’re trying to clean up public instances; group publics were a super useful addition to that. However…
VRC isn’t just a game; it’s where we are the most intimate with each other. The new ToS threatens this emotional closeness.
Because we can never be sure if someone is watching, we can never allow ourselves to express in non-conforming ways.
There are many, many authorities around the world, with highly conflicting views on what is okay (see e.g. the UK’s Online Safety Act, or contrast Japan’s and the US’s stances on specific anime).
By implementing pervasive video recording tech, you are opening up yourself to pressures from authorities to get realtime feed into a sanctuary place.
This is very similar to Apple’s controversial client-side scanning, in that the existence of the capability enables third-party authorities to commit privacy invasion at massive scale. To put it differently: regardless of your initial intentions, if capability is given, authorities will use this as a tool for other purposes.
History has shown us, that the regime’s appetite for conformity is insatiable. This will be exploited for any number of reasons; and because it’s T&S, users will have no knowledge on the frequency, and level of surveillance occurring.
In the physical world, this would be the equivalent of living in a panopticon. No reasonable person would find this acceptable. Even for email, we require court warrant for search. VRC is much, much more intimate than that.
For this reason, we strongly recommend: omitting the dashcam for private instances, and committing in ToS a requirement for a court warrant (with reasonable suspicion -just like any phone, or email) for any searches and intrusions in private instances.
We understand that lawyers advise for the ToS to be as general as possible. Credible precommitment, however, is a major defense against government overreach; if you don’t set up clear access guidelines, authorities will interpret it as carte-blanche access.
Thank you kindly.
-Kitten
I didn’t think about japan. In japan you aren’t even allowed to take a photo of someone without their permission. There’s a lot of legal things that extend into the online space that are illegal (i forget the details right now) but i think that readily available monitoring would impinge upon JP laws as well. They are very privacy-centric.
I feel it necessary to add that opinions of those on the discord are going to be biased towards less concern with privacy, because everyone who is capable of talking there has had to verify their phone. People who care about privacy refuse to give discord their phone number, and thus are incapable of speaking there.
While people are entitled to their opinions, the widespread “everywhere else is worse, just deal with it” mentality is not helping vrchat or anyone in it. It is difficult for people to counterpoint these statements and discuss the topic properly when the conversation is randomly spattered all over in unknown or inaccessible places.
Hey Tupper, Could you better explain this to me in layman’s terms?
A: VRC will record you to try and catch you breaking rules? (This is Bad)
B: VRC will record you to use as evidence if a report is made? (This is Good)
So basically any time someone feels like it, with no limits, liability, responsibility, or rules? You will not even know that someone decided to take a peek, either because they felt like it, or because someone made a few reports on you that might not even be valid, without them even being in the instance… That’s just horrible… Zero privacy, as usual.
We’re also considering running some kind of automatic voice moderation or detection system in public instances and Group public instances.
hey, what would be the minimum for angering the automod? can we joke around, or…? honest question, because there’s language (that i CAN say) that i use around friends (that are comfortable with it). i’d be very upset if i got instabanned by an AI because i said something i can reclaim around friends who can do the same. @tupper