Developer Update - 23 March 2023

I feel like there are two opposing sides to this.

One side believes everyone should have the ability to teleport to another player.

The other believes they should not.

But I believe there can be an in-between solution that everyone would agree on: do a similar thing to the way VRChat approaches jumping, and make it a tickable option for world creators.

This way, games aren’t ruined, and players that are in big maps can enjoy not having to walk 50 million kilometers to get to where they need to be.


2FA’d my way here to say thanks for the update! Fingers crossed for that video editor app :wink:

I personally can’t wait for UdonUI and all the cool things people will make for it.
Vowgan and the rest of us from GT put together a “bracelet system” for this kind of thing at the last Music Vket as a stopgap, precisely for the reasons others mentioned: things like sliders for post-processing and performance, resyncing video, and all that are just all-around hard to find. I love how each VRChat world will soon feel like its own little “minigame” with themed menus that make sense. Love it.


Get a job at VRChat and help make it better I guess?

There’s only so far you can take something before it stops being an avatar altogether.

Someone should tell that to the person who made the entire Half-Life intro animation as an avatar. :stuck_out_tongue:


:open_mouth: This seems like it would be useful!!

If the Half-Life intro is ever converted into a Quest-compatible world, I’ve got a few people who need to see it. Good educational material.

What? Unless things are just designed badly, there should only be one playing per source, on each end locally. Again, most games have priority systems (better than Unity’s native one, which ignores distance) that handle all of this and cull sounds that are either stacking the same clip or low in audibility while other things are playing. Avatars are just objects slapped into the scene that is the world, with networked player updates - an audio source on an avatar should behave the same as one in a world unless otherwise specified.
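For illustration, here’s a minimal sketch of that kind of distance-aware priority culling in plain Unity C# - the class name, scoring formula, and voice budget are all my own illustrative assumptions, not VRChat’s or any engine’s actual system:

```csharp
// Illustrative sketch: mute all but the N highest-scoring sources,
// folding distance into the score (Unity's built-in priority doesn't).
using System.Collections.Generic;
using UnityEngine;

public class AudioPriorityCuller : MonoBehaviour
{
    public int maxAudibleVoices = 8; // assumed voice budget
    public List<AudioSource> managedSources = new List<AudioSource>();

    private void LateUpdate()
    {
        if (Camera.main == null) return;
        Vector3 listener = Camera.main.transform.position;

        // Highest score first: loud, close, high-priority sources win.
        managedSources.Sort((a, b) =>
            Score(b, listener).CompareTo(Score(a, listener)));

        for (int i = 0; i < managedSources.Count; i++)
            managedSources[i].mute = i >= maxAudibleVoices;
    }

    private static float Score(AudioSource s, Vector3 listener)
    {
        if (!s.isPlaying) return float.NegativeInfinity;
        float distance = Vector3.Distance(s.transform.position, listener);
        // AudioSource.priority runs 0 (highest) to 255 (lowest).
        return s.volume * (256 - s.priority) / (1f + distance);
    }
}
```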



Huh? The billion-dollar industry does all the hard work, and everyone else gets the fruits of their labour for essentially free because they can study it. A lot of things aren’t as difficult to actually build as they are made out to be, just difficult to process or manage. For instance, a modular prop system for VRChat wouldn’t need to be vastly different from existing systems, just adapted: a hybrid between avatars and worlds for loadable Udon objects. Or they could have made Udon entirely modular, behaving more like ECS in how you use it in scenes in tandem with native functions, which would have made it vastly more flexible and future-proofed. VRChat has well over a hundred employees (and surely many more commissions/unofficial contributors); that’s bigger than Bungie was, and they took over the planet. Valve (famously low employee count) has also innovated things that are used industry-wide, back in their days of working on Half-Life 2 even. That’s not an excuse.



This would be helpful in some instances, but not necessarily transformative. Turning off one audio source and then turning on another should cost more or less the same, aside from the small overhead of toggling GameObjects. Unless they let us use AudioSource.PlayOneShot - then we could play one shot, swap the clip, and play one shot again. My understanding is that PlayOneShot is more optimized than instantiating audio sources, but can still overflow? Hard to find detailed information on the topic.
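A minimal sketch of what that could look like, assuming plain Unity C# rather than anything Udon currently exposes (class and method names are hypothetical):

```csharp
// One always-active AudioSource plays different clips back to back
// via PlayOneShot, instead of toggling a separate GameObject per clip.
using UnityEngine;

public class OneShotSwap : MonoBehaviour
{
    public AudioSource source; // single source, never toggled off
    public AudioClip reload;
    public AudioClip fire;

    public void PlayReloadThenFire()
    {
        source.PlayOneShot(reload);
        // No GameObject toggling needed; just hand it another clip.
        source.PlayOneShot(fire);
        // Caveat: one-shot voices overlap on the same source and
        // can't be stopped individually once started.
    }
}
```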



I’m not a coder and never will be; my skills are in studying and understanding design. I’m not useful if my input isn’t considered valuable. Plus, design input is arbitrary, not something that works well for payroll unless you’re a manager or director.



Sorry, I wasn’t intending to give that impression. I was spit-balling examples of where things are going, and saying that people need to consider following in their wake. People on the edge of the curve should have been anticipating, or even racing toward, realizing similar concepts anyway. I’ve seen in-depth discussions of metaverse innovations in VRChat, but the response is usually that they’re outside of VRChat’s scope, even though they could have capitalized on them with huge investment potential, even if their attempts were crude.
PS: Unreal’s metaverse plans are only in early stages anyway, but they’re building a proof of concept for their desired scale by engineering a new coding language.

“Because immersion” is an excuse they’ve given for things in the past though, lol. Immersion is extremely important for VR because it’s VR’s biggest selling point, so no, that is a very valid and strong argument. Audio is the second most important aspect after visuals for immersion in a medium; there’s a reason millions are spent on movie and game audio engineering. This is also why VRChat has been trying to get Steam Audio - the Oculus spatialization system they use now is kinda bad.

Continuing the discussion from Developer Update - 16 March 2023:

To address this, I actually did give examples this time (I was too slow to reply before and missed the thread closure)… I mentioned props, transformations, and foley (foley is like clothes rustles and other nameless sounds).

Personally, my issue is mostly with props and weapons. I want to make weapons that I can dynamically fight my friends with, or funny props with jokes, and be able to use reactive particles (particle-collision triggered) and dynamic sounds. I want to be able to build prebaked environmental sounds for things that are supposed to be louder. In the example of a gun, you have a lot of sounds to consider: the handling, the magazine loading, the slide, the muzzle blast, cycling, environmental reverb, bullet cracks, and the biggest one: impact sounds. The only way to have a sound happen on a surface you aim something at currently is FinalIK (or some cursed stop-action world-constraint nonsense). I don’t expect craploads of people to be making guns and such, but they are far from rare; there are huge communities, especially RP ones, that make wide use of firearms on avatars. The difficulty with anecdotal cases is that it’s easy to just say “well, we don’t do that or don’t think it’s necessary” and ignore their relevance. What matters more than current demand is what opening doors to opportunity can create in the future: all the ideas we haven’t thought of yet.
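Outside VRChat’s current avatar limits, the generic Unity approach to that last one is simple, which is part of the frustration. A hedged sketch (plain Unity C#; names and the range value are illustrative):

```csharp
// Raycast from the muzzle, move a single AudioSource to the hit
// point, and play the impact clip there.
using UnityEngine;

public class ImpactSound : MonoBehaviour
{
    public Transform muzzle;
    public AudioSource impactSource; // repositioned per shot
    public AudioClip impactClip;
    public float maxRange = 300f;    // assumed effective range

    public void Fire()
    {
        RaycastHit hit;
        if (Physics.Raycast(muzzle.position, muzzle.forward, out hit, maxRange))
        {
            impactSource.transform.position = hit.point;
            impactSource.PlayOneShot(impactClip);
        }
    }
}
```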

The argument “just go to a world that has guns then” is a bad one, because a) there are hardly any options for polished, well-designed worlds that have these conveniently (and it limits individual expression to that of the world creator), and b) there is an absurd barrier to entry for designing worlds now that they are removing SDK2, as SDK3 is NOT newbie friendly (even with CyanTrigger) and is very deterring for people with limited mental energy, limited attention span, and limited TIME. I’ve made countless well-rounded logical arguments for why this is a problem, and all of them just get brushed off with “just git gud, go learn Udon” (which is a terrible counterargument, obviously).


※ I bring up UE5 mostly because of how resolute they are in making the design process easier, faster, and more newbie-friendly while also being very versatile, and how important that is, especially when observing the growth of AI systems democratizing content creation for people who are otherwise hindered from it. VRChat seems to be doing the opposite, making it harder. It’s important that VRChat creators can make more cool things, because those benefit interest in the platform, and therefore its market value.

PlayOneShot doesn’t let you stop an audio clip after starting it, does it?

And yeah, the soundbank wouldn’t improve performance, but it would mean you aren’t forced into a VeryPoor avatar when you’re playing under 8 sources simultaneously (and yes, I know it’s capped lower than that for simultaneous playback).
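For clarity, a sketch of the soundbank idea as I read it (plain Unity C#; the class is hypothetical): one counted AudioSource fronts many clips, so clip variety doesn’t multiply the source count the rank sees.

```csharp
using UnityEngine;

public class SoundBank : MonoBehaviour
{
    public AudioSource source; // the single counted source
    public AudioClip[] clips;  // the bank

    public void Play(int index)
    {
        if (index < 0 || index >= clips.Length) return;
        // Same caveat as above: one-shots can't be stopped mid-play.
        source.PlayOneShot(clips[index]);
    }
}
```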

I, and most people I’m around, have chosen to just ignore the performance system, as it’s extremely difficult to avoid VeryPoor while making anything quality and interesting (especially because it measures as if everything were enabled at the same time). I and those I know encourage proper performance-profiling practice and learning valuable optimization techniques to make sure avatars are relatively low cost at runtime, even if they have a lot of stuff on them.

For example, we’ve been pushing setting audio sources to Compressed In Memory instead of Decompress On Load, avoiding texture crunching, reducing texture channel overuse, limiting animator layers and conditions, swapping to lighter-weight shaders, etc. for ages (I was a primary advocate against CubedParadox even back in 2018 because of its performance cost, and Poiyomi is much worse). The performance system gives us a big thumbs down and a poop emoji despite all the effort to balance quality with performance.
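For anyone wanting to apply that audio setting in bulk, here’s a rough editor sketch (the menu path and class name are my own; the importer APIs are standard Unity):

```csharp
// Editor utility: set selected AudioClips to Compressed In Memory
// instead of Decompress On Load. Place in an Editor/ folder.
using UnityEditor;
using UnityEngine;

public static class AudioLoadTypeFixer
{
    [MenuItem("Tools/Set Selected Audio To Compressed In Memory")]
    private static void SetCompressedInMemory()
    {
        foreach (Object obj in Selection.objects)
        {
            string path = AssetDatabase.GetAssetPath(obj);
            var importer = AssetImporter.GetAtPath(path) as AudioImporter;
            if (importer == null) continue;

            AudioImporterSampleSettings settings = importer.defaultSampleSettings;
            settings.loadType = AudioClipLoadType.CompressedInMemory;
            importer.defaultSampleSettings = settings;
            importer.SaveAndReimport();
        }
    }
}
```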

Well yeah, otherwise they’d have to constantly do perf checks at runtime, which would hit performance for ALL avatars.

There is no perfect solution; the system has to protect against bad actors (and it can’t even do that perfectly).

The easier solution would be to NOT run a metric on avatar performance as a ranking (just let it be count totals), but instead have a system built around the profiler (which is completely accessible data in the VRChat client, by the way; a debug build runs the profiler always), where avatars are rated on their performance by their REAL frame-time impact in ms.

This would be an all-round solution, because it would also account for bugs and other issues that aren’t detected by the avatar performance scanner.

The added benefit would be that you’d see the performance cost of everything based on your own hardware locally, instead of having a socially centralized metric people can use to discriminate.
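As a rough illustration of “rate by real frame time”, Unity’s ProfilerRecorder API can sample main-thread time cheaply - though note it’s a Unity 2020.2+ API, newer than the Unity version VRChat shipped on at the time, so treat this as a sketch of the concept, not something Udon exposes. The 2 ms budget echoes the figure mentioned later in the thread:

```csharp
// Samples main-thread time per frame and flags frames over a budget.
using Unity.Profiling;
using UnityEngine;

public class FrameTimeSampler : MonoBehaviour
{
    private ProfilerRecorder _mainThreadTime;

    private void OnEnable()
    {
        _mainThreadTime = ProfilerRecorder.StartNew(
            ProfilerCategory.Internal, "Main Thread", 15);
    }

    private void OnDisable()
    {
        _mainThreadTime.Dispose();
    }

    private void Update()
    {
        // LastValue is in nanoseconds; convert to milliseconds.
        double ms = _mainThreadTime.LastValue * 1e-6;
        if (ms > 2.0) // assumed per-frame budget
            Debug.Log($"Frame cost above 2 ms budget: {ms:F2} ms");
    }
}
```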

Not familiar enough with the perf impact of running the profiler all the time, or the reliability of its numbers across scenarios, but it could be a possible way to do it?

You aren’t understanding: it’s already running, and always has been.
From what I’ve seen and learned from ex-modders as well as other technically knowledgeable people, VRChat has always shipped a debug build, not a final build from Unity.

You can view a bit of the metrics with the Shift + ` + number keys (if you enable the debug overlays at launch), but it’s not the most useful layout (graphs aren’t labelled properly), plus it’s a partially hidden feature.

Regarding the debug build you mentioned:

Post-edit:
Alright, wasn’t aware of that.
Would need a dev to confirm that it’s still running, I suppose.
At least Udon timing is shown with --enable-debug-gui; dunno if that information is still calculated without the launch option active.
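For reference, launch options like that go on the client’s command line (e.g. via Steam’s Launch Options field); the executable path here is illustrative:

```
VRChat.exe --enable-debug-gui
```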

I’m pretty sure these reasons are why VRChat is choosing to focus on SDK3 and retire SDK2.

For the sounds thing, the limits might be taking into consideration a full instance of users all going ham on their audio sources. I’ll be keeping avatar sounds firmly muted, but I know that other people really like them. 80 × X sources of audio: how high can X be?


They mean that avatar audio sources have to account for 40+ avatars in an instance. The “only” 2 extra audio sources can quickly add up to 160 more in a totally maxed instance (2 × 80 users).


I don’t follow. Multiple audio sources require multiple audio sources. A priority system is not some magic solution. The performance ranking system is built to measure the worst case, something a priority system wouldn’t actually change.

From a technical point this is somewhat true. But there are many other non-technical considerations that have to be taken into account. I would say that the biggest challenges are non-technical in nature.

Well, if reports on current investment trends are to be believed, anything “metaverse” is going nowhere :rofl:

Investors soured after Zucc promised the world and with tens of billions delivered this. VRChat doesn’t have too much to sweat about when it comes to competition.

I think you misunderstood. “Immersion” is a vague term with a hugely varying definition person-to-person. Saying something should be done “because immersion” is basically saying nothing at all.

This makes much more sense. It’s a fair and reasonable thing to want.

But don’t you think this would be better served by some sort of prop system? Increasing the audio source count, or developing a different audio system, just seems like a hack to make pseudo-props work.

I would love to see props be implemented in some form.

This is a very bad idea, and not one you will ever see implemented. The overhead from running the profiler is very high; it would reduce performance greatly on PC and make the Quest version borderline unplayable. I very much doubt VRChat is releasing debug builds.

The profiler does not report on the cost of rendering individual “things” such as avatars, because the GPU does not render avatars one by one. It only reports on how long specific stages of rendering (such as shadow maps or forward-additive lighting) take. The information you can pull from the profiler would not be of much use for automatically measuring avatar performance in real-world scenarios.

In addition: it would make developing avatars very complicated. Avatar creators (who are generally not software developers) would have even less of an idea of who their avatars are compatible with than they do now. The performance system has problems, but it’s at least simple.

The profiler is not a practical solution.

Have fun with the waterfall of reports from people who, with a system like that, would see every single profiler error popping up each frame that is normally hidden or systematically ignored.

Unity in general is not stable enough to build a performance system/limiter on top of a profiler metric.

If you think Unreal is more capable of handling an app like VRChat… then just take a look at the handful of failed attempts; Unreal’s massively networked multiplayer aspects are even worse.

No, Zuck is just an idiot. Epic Games will be vastly more successful. Also, their metaverse doesn’t require VR; it’s a REAL metaverse that includes all things being able to mutually communicate with a common programming language.

No, you misunderstood. Immersion is FAR from subjective. It has subjective weight biases, but human psychology is human psychology.

What? No, it can assess the costs of materials and things like mesh skinning on individual SetPass calls, etc. The point is that if it’s there, you should be able to just flip it on, check, and flip it back off, or have it flip on momentarily when someone loads an avatar. Big ms render time = bad; it’s a big red flag if you’re costing around 2 ms or more. Nobody needs to know complex stuff, just basic raw costs on the associated SetPass calls.
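A sketch of that “flip it on momentarily” idea in plain Unity C# (the wrapper class is hypothetical; note Profiler.enabled only has an effect in development builds, which loops back to the debug-build question above):

```csharp
// Momentarily enable the profiler around an avatar-load step,
// bracket it with a named sample, then switch it back off.
using UnityEngine;
using UnityEngine.Profiling;

public class MomentaryProfile : MonoBehaviour
{
    public void ProfileAvatarLoad(GameObject avatar)
    {
        Profiler.enabled = true;
        Profiler.BeginSample("AvatarLoadCheck");
        avatar.SetActive(true); // stand-in for the real load step
        Profiler.EndSample();
        Profiler.enabled = false;
    }
}
```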

Prior to UE5, I was very firm about UE being too fat and problematic, but it’s really been shaping up to be a huge step up.