Recent changes to Abuse Reporting, Trust and Safety

tupper has responded to another user in another Canny: Re-open Moderation Tickets for certain malicious avatars known as “Crashers”. | Voters | VRChat

My personal take on, and experiences with, the rest of your comment (not an employee of or affiliated with VRChat Inc.).

I currently have MULTIPLE crasher and NSFW avatars reported via the in-game/website ‘Report’ option which remain publicly available weeks after being reported.

Delays still occur at times: a response may be late or a report seemingly forgotten, even within batches of similar reports for identical content submitted on the same date (presumably sitting until lunch time / end of shift). Often a small nudge from a friend, or deleting and resubmitting the report in-app, helps if reviewing the content is not a large or time-consuming effort.

I’ve seen that action on weekends can be slower at times, as was the case with a prolific crasher avatar this past weekend. (The uploader was eventually permabanned and deranked to Visitor this Monday, with the offending content removed.) This wasn’t a case of a VRCHeadChop requiring two clients to test the avatar, aside from a Keyframe toggle (which displayed a static screenspace FX image texture).

Human mistakes in review also happen. I’ve seen a total of five or so reports from three different occasions get closed silently in-app or left open because a public NSFW avatar model shared a name with a different public NSFW avatar model in a batch of similar avatar reports from the same hour, or because the review required numerous Expression menu steps in a specific order in a non-English Expression menu. To avoid that confusion, I’ve learned to space out these “Alice 1” and “Alice 2” reports (fictional names, for example) and to avoid mixing them in between other reports. Anecdotally, I think Trust and Safety looks at each avatar individually and carefully, even in batches of similar reports.

Public avatars which expose their explicit content only via parameters and OSC (no Expression menu entry, or a “security PIN” in the menu), or public avatars which have too many Expression menus, have roughly 50/50 odds of ending up in “open reports” hell. But I’ve also seen both of these types actioned, with short but detailed steps in the in-app reason.

Anecdotally, if an avatar has no mesh for bits but has leftover explicit textures that can be exposed by replacement shaders (shader magic) in a world, the report has been closed silently in-app without action more often than not, so I’ve stopped reporting these kinds of avatars almost entirely. (This is an anecdote.)

Avatars which have the mesh and sexually explicit textures for bits but utilize backface culling are, anecdotally, actioned about 50/50 in my experience, especially if there are no toggles to clearly expose the mesh, even if the report reason includes a test world ID where the explicit textures can be exposed easily. (This is an anecdote.)

Avatars with meshes, backface culling off, and explicit textures get actioned most often, even if there are no toggles for those. I’ve also seen a few exceptions to this, but those exceptions are rare and for hard reasons understandable to a reasonable person. (This is still an anecdote.)

For avatars, it seems more streamlined to change into the avatar and look at it in-app than to produce unnecessary screenshots / videos and tickets, for both the reporter and the Trust and Safety agent.

Anecdotally, my experience tells me that my reports about worlds set to Public status (or Private worlds which do not indicate acquiring consent for provocative content via notices at the respawn point or in content warnings) may require additional evidence (a short video under 2 minutes) via a web form ticket before they are closed in-app, even with detailed Community Guidelines / Creator Guidelines reasoning, reproduction steps in the in-app reason, and a reference to a ticket # for “video evidence”; otherwise the world reports stay open in-app for weeks. A video may take less actual time to watch than waiting for a world to download and deciphering the text instructions in the in-app reason, but at the same time, reports are directed to be made in-app whenever that’s sufficient for evidence.

For private avatars/worlds/content that may be provocative, I personally also take into consideration where that content was used, using Developer Update - December 18 2025 and the Creator / Community Guidelines as a reference. If it was in a private space among other consenting adults, I generally don’t even bother to report it.

Understandably, the job of a Trust and Safety agent is very difficult too, with numerous factors at play.

And to share some anecdotal numbers with you (not counting web form tickets): I have 303 open avatar reports (an estimated ~90 opened since January 27, 2026) and 391 closed avatar reports in-app (the majority of these were submitted in-app after January 27, 2026), plus 50 open world reports (24 of these opened since January 27, 2026) and 2 closed world reports in-app.

Jan. 08, 2026 https://help.vrchat.com/hc/en-us/requests/653874

Without visibility into the ticket, I assume this web form ticket was closed for age (submitted before January 27, 2026). Replying to re-open the ticket may simply get it ignored (or auto-closed after 7 days), because it has already been assigned to the agent who decided to close it for its age, rather than to one of the several Trust and Safety agents who could look into your ticket.

In my case, resubmitting a user behavior ticket as a new ticket with “(resubmission / evidence)” in the title got my month-old ticket actioned, avoiding the issue of the ticket remaining assigned to the single agent who had closed the former ticket (probably following an instruction from someone higher up, based on the original submission date). That said, this was an oddball report which took a total of 3 in-app reports (1 closed silently as non-actioned) and 2 web form tickets (1 closed due to the original submission date) to resolve, for numerous unfortunate factors at play.

For open web form avatar reports older than 6 months with evidence in the tickets, I’ve deleted the in-app reports and/or resubmitted them as in-app reports with a short reason and a ticket # in the reason. In each of these cases, the open web form ticket # mentioned in the in-app reason was ignored and the web form ticket was left open, but the content was actioned and the in-app report was set to “closed report”. I then responded to those web form tickets myself to have them automatically closed as solved for age, as they were no longer required to be actionable.

A more recent ticket was to report a KKK avatar, in which I pointed out that the avatar was too new to appear in the standard avatar search, making the in-game/website report function impossible to use in this circumstance. So I included log files which show the avatar’s name and the username of the uploader.

Generally speaking, a user ID or an avatar ID is best to have, preferably the latter. Or, as tupper emphasized in the Canny issue above: “Please do your best to include the avatar ID”.

Sometimes, if an avatar is not on Prismic’s Avatar Search, it may already be on avtrDB. The latter allows searching by username, avatar name, user ID, or avatar ID.

If the avatar is not in the avatar search databases, you can attempt to dump an avatar ID into full output logs (see my comment in that Canny), follow up by changing into that public avatar from the VRChat Home website with URL manipulation, and then, if necessary, report it in-app. If it’s a private avatar and I can no longer view it in-app for reporting, in my experience I’d still need to use the web form with evidence, an avatar ID, and a clear indication of why in-app reporting couldn’t be used; or I’ll report the uploader’s account in-app instead, if the in-app report reasoning is sufficient to avoid the web form ticketing.
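As a rough sketch of that log-scraping step (my own approach, not an official VRChat tool): avatar IDs follow the pattern avtr_ plus a UUID, so once the ID lands in the full output logs it can be grepped out. The sample log line, the log path in the comment, and the exact website URL shape below are my assumptions and may vary by client version.

```shell
# Made-up sample line; with full debug logging enabled, lines like this
# appear in the output logs when you load or encounter an avatar.
sample='[Behaviour] Switching to avatar avtr_12345678-90ab-cdef-1234-567890abcdef'

# Extract any avatar IDs (avtr_<UUID>) from the text. Against real logs,
# replace the printf with something like:
#   cat "$HOME/AppData/LocalLow/VRChat/VRChat"/output_log_*.txt
id=$(printf '%s\n' "$sample" \
  | grep -oE 'avtr_[0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}' \
  | sort -u)

# The avatar's page on the VRChat website follows this URL shape, which is
# what the "URL manipulation" above refers to:
echo "https://vrchat.com/home/avatar/$id"
```

From that page you can view (and, for public avatars, change into) the avatar even when it doesn’t show up in the in-app search yet.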

Feb. 06 2026 https://help.vrchat.com/hc/en-us/requests/662660

This received the standard boilerplate response and closure of the ticket literally seconds after it was submitted.

Again, I don’t have visibility into the ticket, but I am not discouraged if I have an open in-app report for an avatar/user/world referencing a closed web form ticket # for evidence in the in-app reason. The importance and guidance given here seems to be to have an in-app report.

For me, an in-app closed report is the final decision on the matter. Web form ticketing is just for attaching/submitting additional evidence when in-app isn’t enough.

Similarly, when I’ve submitted private full output logs to a Support ticket in order to reference that Support ticket from a public Canny bug report, the ticket is closed by a Support agent because that’s how their ticketing system works, albeit I think the Support team’s response has better clarity. It’s as follows (full quote):

Thank you for submitting this. We will keep this on file if it’s needed for investigation. In the meantime, this ticket will be closed and marked as “solved” due to the way our ticketing system works.

Meanwhile, I understand that the ticket closure response I’ve seen from Trust and Safety can read as the report being ignored (especially if evidence was attached to the OP), even if that was not the intent of the response.

I decided to try moving it up the chain, and submitted what you see above to the Help Desk to report that I was having trouble with the website.

I recognize that the Support and Trust and Safety teams are separate, with different people handling the tickets. I think the response you’ve received, about your Trust and Safety ticket reaching the incorrect Support channel, seems to have been an appropriate response from Support.

This direct feedback may not reach Jun Young Ro at VRChat Trust and Safety when filed with Support. I’ve had success making a moderation request via the website form with the content/user ID described as “Numerous (N/A)”, with “Feedback to Jun Young Ro” in the ticket title and a request in the body to forward the feedback to Jun. That ticket was assigned to a Trust and Safety agent and presumably then forwarded to Jun directly. (This is the process Jun has requested, on a Voices of VR podcast: #1636: Planned Improvements of VRChat’s Trust & Safety from New Lead Jun Young Ro.)

This new “smooth transition into a newer, more streamlined report system” is more akin to a brick wall if the report doesn’t fall into a very narrow set of defined violations, with no room for special-case outliers (outliers that are far too common). It is utterly demoralizing for those who want to help make VRChat better/safer.

In general, the new policy works for me (and it works better than web form ticketing) after adapting to the new processes and instructions given.

I’ll do what I want to and can do in-app to help (for free) if something gets on my nerves under my understanding of the Community Guidelines, but in the end, Trust and Safety has the final judgement on whether to take action, delay making a (hard) decision, or take no action after the concern is passed on to them.
