Open Discussion on Transparency in VRC

While we get detailed explanations about marshmallow items and stickers, hundreds to thousands of users are asking for more accessible age verification, coherent NSFW content policies, and meaningful feature development instead of gimmicks. The years-long silence on these issues, set against constant chatter about minor content additions and patches, creates a disconnect that visibly frustrates the community.

There’s something particularly frustrating about digital platform dynamics where the people with actual power to address problems often seem to disappear behind walls of automated responses, policy pages, and radio silence.

You see it everywhere - users reporting genuine issues, security vulnerabilities, accessibility problems, or even basic bugs, and instead of any human acknowledgment, they get algorithmic responses or just… nothing. Meanwhile, problems that could have been small fixes become major incidents, and users who started out wanting to help become adversaries.

It’s especially baffling because the cost of basic transparency is so low in digital contexts. A simple “we see this, we’re looking into it” from an actual human can buy enormous goodwill. But instead there’s often this fortress mentality where any acknowledgment is seen as admitting liability or opening floodgates.

The irony is that this conflict avoidance often creates way bigger conflicts. Someone who might have been satisfied with a brief explanation or timeline becomes someone documenting everything, posting publicly, or finding creative ways to force attention to the issue.

It seems like many digital system owners haven’t learned that a little proactive transparency and human engagement can prevent so many of the fires they end up having to fight later. The fear of engaging seems to override basic risk assessment about what actually makes problems worse.

This is honestly inexcusable when broken down like this. The economics are trivial - we’re talking about maybe $15-20/hour for someone who could handle the basic human triage that prevents most escalations.

Even a part-time intern could:

  • Read actual user reports instead of auto-routing them
  • Route legitimate issues to the right teams
  • Send a real “we got this, here’s roughly what to expect” response
  • Catch the obvious false positives that automated systems miss constantly

Instead, platforms would rather deal with:

  • Public relations disasters when automated systems fail spectacularly
  • Legal costs when ignored issues become bigger problems
  • Developer time spent firefighting issues that could have been caught early
  • User exodus and reputation damage from feeling completely dehumanized

The math doesn’t even make sense from a pure business perspective. One minimum-wage human doing basic triage prevents more expensive problems than they cost within their first week.

But there’s this almost ideological commitment to “scale” and automation that seems to override basic common sense. Like you’d rather have a system that fails 1% of the time catastrophically than pay someone $30K a year to catch the obvious problems.

It really does feel like willful negligence at this point. The tools and knowledge exist. The cost is negligible. The only thing missing is the basic acknowledgment that some human judgment in the loop is worth the trivial expense.

The economics make even more sense when you consider that most platforms are dealing with the same basic categories of issues repeatedly. A small paid team develops expertise and institutional knowledge. They know the patterns, they can spot the edge cases, they can make consistent decisions instead of the wildly variable enforcement you get from volunteer teams or algorithms with different interpretations of rules.

The fact is you’ve seen this work across a variety of platform types - from old-school forums to Discord servers to web games - this isn’t some unsolvable technical challenge. It’s just a willingness to spend what amounts to pocket change (for most successful platforms) on basic human infrastructure.

A few dozen threads per week on VRChat’s forum - for the biggest VR platform in the world. That’s what, maybe 5-10 threads per day on a busy day?

I’ve seen fewer than 30 new posts since I first joined the forum.

That’s legitimately less activity than some hobbyist forums for niche interests. You could moderate that while doing literally any other job. Check in a couple times a day, spend maybe 30 minutes total handling any issues that come up.

The fact that this level of activity - from a platform with millions of users - somehow can’t justify daily human moderation is just… it perfectly illustrates how disconnected platform thinking has become from reality. We’re talking about what amounts to a small neighborhood’s worth of daily conversation, spread across an entire week.

This really drives home the point about willful negligence. If VRChat can’t manage basic human oversight for a forum that gets less traffic than a decent-sized Discord server, then the problem isn’t scale or economics - it’s just a complete abandonment of the idea that community spaces deserve any human attention at all.

No wonder the forum feels dead: Why would anyone bother posting when it’s clear nobody’s home?

And this is from the biggest social VR platform in the world. If VRChat’s forum is that manageable, it really exposes how manufactured the “we can’t afford human moderation” problem is for most platforms. We’re not talking about millions of posts per minute like Twitter - we’re talking about community spaces where actual human engagement is totally feasible.

It’s almost like platforms have convinced themselves that any user-generated content automatically means they need massive automated systems, when the reality is that most communities outside the social media giants have very manageable volumes of actual moderation work.

When users can’t get basic human responses to legitimate issues, they just… stop trying to engage through official channels.

When a user posts “Hey, this feature is broken” and three days later a dev pops in with “Yeah, we see the issue, pushed a fix to testing, should be live next week” - that transforms the entire relationship.

It’s not even about solving every problem immediately. It’s about that moment when someone realizes there’s actually a human being who read their report and cares enough to give them a real update. That user goes from feeling like they’re shouting into the void to feeling like they’re part of a community where their feedback matters.

And for the dev, it’s literally five minutes. They’re already working on the issue anyway - the marginal cost of posting a quick update is almost nothing. But the return on investment is enormous. Instead of 20 angry follow-up posts from frustrated users, you get people actually helping test the fix and providing useful immediate feedback.

The current model seems to be: users report issues → issues get fixed in silence → users never know if their report helped or if the timing was coincidental → users stop bothering to report things → platforms lose their best source of real-world testing feedback.

It’s such a self-defeating cycle. The platforms that do maintain that human connection - even sporadically - tend to have much more engaged and helpful communities. People actually want to contribute when they feel heard.

The absolute worst is when platforms create a perfect Catch-22: they don’t provide proper channels for legitimate issues, then ban people for “misusing” the only channels that actually work.

When the only way to get a human response is to post in some general discussion area or contact form meant for something else, and then you get hit with “this is not the appropriate channel” or “abuse of the reporting system.” Meanwhile, the “appropriate” channels are either non-existent, broken, or disappear into a black hole of dozens of similar links.

It’s this bizarre victim-blaming approach where platforms absolve themselves of responsibility for their own communication failures. “Well, you should have used the proper process” - what proper process? The one that doesn’t exist or doesn’t work?

It gets even more perverse when users figure out that the only way to get attention is to make enough noise that it becomes a bigger problem. Then suddenly the same issue that was being ignored through “proper channels” gets immediate attention when it threatens to become a PR problem.

So the system literally trains users to escalate inappropriately because that’s the only thing that produces results. Then punishes them for doing exactly what the broken system incentivized them to do.

It’s like designing a building with no doors and then arresting people for climbing through windows.

The threshold seems to be something like “viral outrage plus media coverage plus potential regulatory attention” before anything moves. A single user getting screwed over and posting about it publicly? Ignored. Dozens of users with the same problem creating threads and videos? Still ignored. Major tech YouTuber makes a video about the pattern of abuse? Maybe some intern gets told to look into it six months later.

By the time they actually address it, you’ve got users who’ve been dealing with broken systems for months or years, who’ve tried every available channel, who’ve escalated appropriately and inappropriately, who’ve documented everything, and who’ve basically become accidental experts on the platform’s dysfunction.

And then the “solution” is often some blanket policy change that doesn’t actually help the people who suffered through the broken system, just prevents some future cases. The users who got punished for “abusing” channels while trying to report legitimate problems rarely get any kind of acknowledgment that the platform failed them.

Even the cold economic calculation falls apart when you think it through properly.

Empathy and human responsiveness create loyalty, word-of-mouth growth, reduced churn, better feedback loops that prevent expensive mistakes, and communities that essentially provide free quality control and content. Users who feel heard become advocates. Users who feel ignored become active detractors or just leave.

Systems built on suppressing human agency and ignoring feedback consistently collapse because they cut themselves off from the information they need to function. They optimize for short-term control at the expense of long-term sustainability.

The platforms that claim human oversight is impossible at their scale are essentially admitting they’ve built systems too large to be accountable to the humans they serve. At that point, maybe the systems shouldn’t even exist.

There’s something deeply dystopian about digital platforms that have become so “successful” that they can’t actually engage with their users as humans. It’s like they’ve accidentally built digital feudalism where the scale itself becomes justification for treating people as abstract data points rather than individuals with legitimate concerns.

The wealth and engagement flows upward while the basic dignity of human interaction gets sacrificed to “efficiency” - but it’s not even efficient if it’s constantly hemorrhaging trust and creating adversarial relationships with the very people who make the platform valuable.

VRC DOES have someone whose job it is to manage the community and optics, but the system is set up so that the only way to get their attention for legitimate issues is to flag content for “moderation” - which gets the poster or user in trouble for “abusing” the system.

So you’ve got the perfect catch-22: There’s literally a human being paid to help, but the communication pathways are so broken that you can’t actually communicate with them without getting punished. It’s like having a customer service desk behind a door marked “EMPLOYEES ONLY - VIOLATORS WILL BE PROSECUTED.”

And meanwhile this person is probably sitting there wondering why the community seems so dysfunctional, not realizing that users who need help can’t actually reach them without risking getting flagged themselves.

This is exactly the kind of systems design failure that creates the adversarial dynamic described earlier. Users learn that the official processes don’t work, so they find workarounds, then get punished for the workarounds, which teaches them that the platform is fundamentally hostile to their needs.

The community managers probably have no idea that there are users with legitimate problems they can’t address because those users can’t adequately contact them. It’s like VRC accidentally built a system that prevents its own support staff from doing their jobs effectively.

One button. One additional option in the existing interfaces. “Flag for support” right next to “Flag for moderation.”

The infrastructure is already there - VRC has the systems, the human managers, the notification pathways. Adding one more category to the existing report systems would probably take a developer about 20 minutes to implement per stack.
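To make the “20 minutes of work” claim concrete, here’s a minimal sketch of what that one extra category could look like, assuming a report pipeline keyed on a category field. All the names here (`ReportCategory`, `routeReport`, the queue strings) are hypothetical illustrations, not VRChat’s actual code:

```typescript
// Hypothetical sketch - illustrative names only, not VRChat's real API.
type ReportCategory = "moderation" | "support";

interface Report {
  category: ReportCategory;
  userId: string;
  message: string;
}

// Route each report to the right queue instead of funneling everything
// into the enforcement pipeline.
function routeReport(report: Report): string {
  switch (report.category) {
    case "moderation":
      return "moderation-queue"; // existing enforcement path
    case "support":
      return "community-manager"; // new: reaches a human, not enforcement
  }
}
```

The point isn’t the code itself; it’s that distinguishing “I need help” from “punish this person” is one branch in an already-existing system, not a new system.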

But instead VRC created this system where users have to choose between:

  1. Being ignored
  2. “Abusing” the system and risking punishment
  3. Just giving up

When the fix is literally just… acknowledging that sometimes people need help rather than enforcement. It’s such a basic UX oversight that it almost feels intentional, except that would require more malice than makes sense for a platform this small.

It’s the perfect microcosm of everything stated prior. The solution is trivial, the cost is negligible, the benefit is enormous, and yet… nothing. You’re probably not even aware it’s a problem because the current system actively prevents you from learning about VRC’s broken communication pathways.

Meanwhile users are getting frustrated, the devs are probably confused about why community engagement and optics are so poor, and problems that could be solved in minutes end up festering for weeks.

All because nobody thought to add one button.

Some honest admission would be absolutely transformative. “Hey, we’re stuck on this one and could use some community input” or “This is trickier than we thought - anyone have experience with X?”

Users would go from adversaries to collaborators instantly. Most people actually WANT to help when they feel like their input is valued. The VR community especially tends to be full of technical people who’ve probably solved similar problems in other contexts.

But there’s this weird corporate pride thing where admitting you don’t know something is seen as weakness, when in reality it shows you’re taking the problem seriously enough to seek good solutions rather than just implementing whatever comes to mind first.

The irony is that “we don’t know what we’re doing” is often the most competent thing you can say about genuinely complex problems. It shows you understand the complexity instead of rushing to oversimplified solutions that create more problems.

And for VRChat specifically - we are literally living in virtual worlds, we understand technical limitations and creative problem-solving. Many of us would be thrilled to help think through thorny implementation issues if asked genuinely rather than having solutions imposed on us.

Instead of “this is how it is” it becomes “this is what we’re trying to achieve, here’s what we’ve tried, what are we missing?” Completely different dynamic.

The same fundamental lack of confidence that creates these problems is showing up everywhere - VRC can’t articulate a coherent NSFW policy because they’re seemingly too scared to actually take a stance, they stumble through novel implementations without explaining the reasoning, they handle PR disasters by going silent instead of engaging.
While this has been changing lately, it doesn’t look great.

It’s like the entire platform is paralyzed by the fear of making the “wrong” choice, so they end up making no clear choices at all, which is actually the worst possible outcome. Users can work with clear policies they disagree with, but they can’t work with mysterious, inconsistent, or non-existent policies.

The NSFW thing is probably the perfect example - instead of saying “here’s our policy and here’s why” there’s some vague, constantly-changing approach that nobody understands, including VRC themselves. So users are left guessing what’s allowed, getting arbitrary enforcement, and the devs are probably stressed constantly about potential backlash and regulation concerns.

When you’re conflict-averse, every problem becomes an existential crisis because you don’t have frameworks for making decisions and standing by them. So instead of “we decided X because of Y, and we’re open to feedback on Z aspects” it’s just… silence and hope nobody notices when things break.

This confidence issue creates this spiral where indecision and a lack of communication makes problems worse, which makes platforms even more gun-shy about taking clear positions.

As a counterpoint: they might actually know exactly what stance they want to take but are choosing to maintain ambiguity because it gives them flexibility to enforce selectively, or because they’re trying to thread some impossible needle between different stakeholder groups.

The communication failures might not all be from lack of confidence. Some of it could be calculated opacity - if you never clearly state your policies or reasoning, you can’t be held accountable when you change direction or make exceptions.

It’s likely a mix: some genuine “we don’t know what we’re doing”, but more willful “we don’t want to commit to anything”. Maybe they genuinely don’t see the breakdown, or maybe they prefer having users jump through hoops rather than making it easy to reach them with complaints.

Hard to tell from the outside which problems are competence issues versus strategy issues.
They’re creating a vacuum that gets filled with the worst possible interpretations. When you don’t communicate your reasoning or process, people assume malice or incompetence - and often the most inflammatory version spreads fastest.

“Hey, we’re not ready to take a solid stance on this yet, feel free to keep giving us your ideas” is better than the “VRC IS PROMOTING PREDATORS BY NOT AGE LOCKING NSFW CONTENT” that you can see in 90% of the quest store reviews.

Honesty is actually a much better look than appearing to make arbitrary decisions or having no policy at all. Users can respect the complexity of balancing different concerns, but they can’t respect what looks like negligence or hidden agendas.

The transparency doesn’t even have to include the final answer - just showing that there IS a thoughtful process happening would defuse most of the argument. But instead the community speculates wildly while VRC figures things out in private.


All good.

Rockerboys and ’runners don’t need to read. Feel free to go do an Arasaka tower instead.

Honestly I think most of your argument is moot or flawed just by looking at their social media and developer updates - they’ve extensively talked about and covered many of the exact issues you bring up, and actively, WITH HUMANS, respond to a lot of the criticism/reports levied towards them. Not that this defends them as perfect in any way; I’m also a pretty vocal critic of how they handle many, MANY situations and processes behind the scenes, but again, they do address things personally and have done so increasingly over the last year or so.

As for “gimmick” additions over palpable updates - idk what to tell you, that’s largely subjective. Much of what they’ve added has been requested by the community (besides the more monetary stuff, though they’ve also used that to push some extra things people have asked for). Them not adding the specific features YOU find valuable doesn’t make them not valuable to others.

Again, not like I’m saying they’re perfect. Seemingly dragging their feet on Soba after what was evidently an almost-finished “Udon 2” will always be a stinging point for me, especially after diminishing what it originally set out to do. And safety features like age verification I am LIVID they’ve decided not to put more emphasis on - to me it is imperative to do and get right, so I’m disappointed in them on that front.

But by and large, what you’ve mentioned as the source or example of these issues is, imo, largely inaccurate to how they actually handle things, and an oversimplification of what you consider they SHOULD be doing, likely due to a lack of knowledge of what it takes to run a service like this and all its facets. Not to say it all has no merit, but much of it could use some heavier consideration, or I may just fundamentally disagree in some regards.

I did take the time to read it though lol.


I would like to see some links to some of the social media posts regarding NSFW clarification in your claims if you have time to track them down for me. I only follow the content posted on the site, and now on youtube with the QnA being added recently. I don’t use conventional social media as a source of reliable news generally.

While I agree that communication overall has gotten increasingly better in recent times, a comment I made myself in my post: Neither that fact nor soba delays have anything to do with what the post actually discussed, nor does that diminish the fact that yet more can still be done.

A very simple cursory search will show the lack of communication on the topics I actually put forward - the topics of actual concern to relevant stakeholders of the platform and within the community. I’m well aware of the tightrope that has to be walked in communications with the public like these: that’s kind of the whole point of the post in its entirety. What is being done addresses the wrong markets (people who don’t really care about platform health), and what is not being done has detrimental consequences that pile up the longer they go ignored (examples are numerous and visible everywhere that non-VRC users encounter the onboarding process; i.e. app reviews, public blogs, ex-user opinions, etc.).

I don’t recall any part of my post making demands or implying current processes should be changed regarding what does or does not get implemented. If people like and want to pay for gimmick content: That’s good for the platform.

Prove to me that’s actually real though, because what data I can gather as a lone individual shows me that very few people are using the current item implementation nor do others have a desire to. People want to MAKE items, which brings its own problems that, once again: VRC has only vaguely alluded to, likely regarding user misuse, I’m sure. That’s to address the only remark toward this type of content I referenced, which seems to have been a hyperfixation of the point you made in return. It’s not about the content, it’s about the communication.

I know that implementing items opens up an entire box of potential exploit vectors and they’re right to be hesitant on implementing user uploaded items without extensive validation. It would be cool if the platform could be up-front about that obvious consideration though, rather than, and I’m paraphrasing Strasz here: “Eh, we still don’t really know how to handle items yet, we’ll get around to it when we get to it.”

You’re right that I was overly simplistic in my opening - if that’s your only genuine concern with my content and argument, there was a point to that. I’m not trying to make a shareholder argument that reads like a thesis to change minds at scale. I’m simply pointing out what I see as a clear problem so the community can weigh in and add to that perspective from alternative lenses. Whether anyone agrees with the specifics of my statements is mostly irrelevant when even you seem to agree on the bulk considerations I’m trying to narrow down for a broad audience to interpret: plenty is being done, but those aren’t the problems that need solving. If that’s a question to you, then please provide a logical and philosophical argument explaining what so many people in this userbase are feeling and unable to otherwise expound. I’m just putting words to vibes.

If you or they want actual solutions, VRC has my resume sitting in their inbox. People usually get paid to provide real answers, and they have an extensive value proposition to look over before they consider what I have to offer them practically. They already know what they really need to do to both provide growth to the platform while solving their current backlog of issues.

I appreciate your input. It usually takes more work to find the right meme to represent your laziness than it does to just respond with malice or argument, and it’s usually pretty easy to simply engage on equal terms with content you have no stake in. There’s no reason for the dig at the prior poster, I expected at least a few more of those before this conversation went anywhere, and it wasn’t an insult toward me just toward themselves, so your reply was a welcome addition to my day. Sorry if my rebuttal comes off disjointed and rushed: It was.

There is one vote with a stop sign. There are two votes with like signs. What personal information did I share? What members of mine do you speak of? If you don’t like my conversation topic being here, flag it for a moderator. This is the most confusing reply I’ve read all month.

They mentioned on one of the dev streams that they might create a one-time purchase of $2-5, as an alternative to needing VRC+. But it would likely remain free with VRC+ still.

I don’t think it has been mentioned anywhere else, but that seems like the most likely direction that will go with it.

Any security vulnerabilities or exploits should be reported here: https://help.vrchat.com/hc/en-us/requests/new?ticket_form_id=1500001130621

Bug reports, feature requests, and general feedback should be posted here: https://feedback.vrchat.com/

I do agree with this a little. But knowing that a bug report is tracked in their system is usually enough for me.

But I assume you are talking about them acknowledging larger issues?

As much as I miss the good old days of forums, I think people just generally use Discord more nowadays. I only saw this thread because it appeared in my email as a “[VRChat Ask Forum] Summary”, because I like looking at these every so often.

Isn’t that just what making a support ticket does?
Here: https://vrch.at/support

I wish you had started your thread with a TL;DR version. It was a chore to read, and even so I only skimmed the second half since it just got so boring and long-winded.


The flag for support part was referring to the posts themselves here on the forum and feedback pages.

It took me like 10 minutes to write that post and less than five to read it back to myself. I really can’t help the Western literacy problem.

I know. But you want a button that just automatically converts a form post into a support ticket? Or do you want VRChat to hire someone to reply to all those posts with “make a support ticket here” proactively?

I am dyslexic and read slowly. Long texts like your opening post are freaking intimidating to start reading. My curiosity won and I started anyway, but as said, it was just so boring to read that I had to give up halfway through.

I will say you could have written it worse. At least it looks nicely structured and such. But I think you could have separated it into separate sections with headlines, so it could be read in chunks instead of all at once.

Yeah, unfortunately discussions on logistics, legal and otherwise nuanced and complicated issues usually are pretty boring. The type of things that change the world rarely make good media content until after the fact.

I’m pretty sure VRC *does* hire someone to make those exact types of posts though.