Developer Update - 23 March 2023

I noticed another issue with audio - VRChat doesn't process surround sound anymore. The Spatial Audio Source changes the sound destructively (and doesn't work in world reverb zones), and with spatialization disabled I only get a 180-degree stereo pan range, not full 5.1/7.1 directionality (I have capable speakers/headphones). This didn't use to be the case. I don't know when it changed, probably a long time ago, but years back I spent a lot of time messing with directional audio, and VRChat did 100% process surround sound before without requiring the spatializer to be active.


Huh? SDK3 is significantly harder to learn and takes way more time and attention span to work with. SDK2 is extremely simple, uses native Unity systems and functions, and requires comparatively very little for them to develop and maintain on their end. Udon is a massive headache for everyone.

This never happens. In 40+ user instances, you rarely see more than half a dozen avatars with even one audio source playing at a time each. The main count of audio sources is everyone's voices, then there's a reserve for worlds, and the rest is just hard-capped globally on the assumption that everyone is going to abuse audio (reality check: most people don't use any audio), so they hard limit for a worst-case scenario that essentially never occurs.

iirc VRChat has 256 channels for audio? If they have it set lower than that, I have to just express a massive WHY… As someone who grew up on RTSes, and modded a lot for them, I'm very familiar with priority ducking and culling systems that prevent channel overflow - this was the case for games even in the 90s. Such systems are standard issue in that genre, and in modern FPSes as well. It's not fancy modern tech that's a pain to develop; it's been standard procedure since days of yore. I don't understand why VRChat doesn't have such a basic system.

Take volume as a float (you could even convert it to an integer to keep it clean), offset it by distance attenuation, and you get an effective-loudness value for each source out of the total. Take a count of the total active audio sources, and when too many are playing, cull the ones with the lowest volume-over-distance value.
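To illustrate, a minimal Unity-style sketch of that culling idea (the cap, the scoring formula, and the class name are placeholders I made up, not anything VRChat actually does):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch of the priority culling described above.
// MaxActive and the scoring formula are illustrative placeholders,
// not anything VRChat actually uses.
public class AudioSourceCuller : MonoBehaviour
{
    const int MaxActive = 64;   // illustrative global cap
    public Transform listener;  // the local player's head

    void LateUpdate()
    {
        var sources = new List<AudioSource>(FindObjectsOfType<AudioSource>());

        // Sort loudest-at-the-listener first.
        sources.Sort((a, b) => Score(b).CompareTo(Score(a)));

        // Keep the top MaxActive sources, mute the rest.
        for (int i = 0; i < sources.Count; i++)
            sources[i].mute = i >= MaxActive;
    }

    // Volume offset by a simple inverse-distance attenuation.
    float Score(AudioSource s)
    {
        float dist = Vector3.Distance(listener.position, s.transform.position);
        return s.volume / Mathf.Max(1f, dist);
    }
}
```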

What? No, it can assess the costs of materials and stuff like mesh skinning on individual SetPass calls, etc. The point is that if it's there, you should be able to just flip it on, check, and flip it back off. Or have it flip on momentarily when someone loads an avatar or something. Big ms render time = bad; it's a big red flag if you are costing something like 2 ms or more. Nobody needs to know complex stuff, just the basic raw costs on the associated SetPass calls.
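To be concrete about "flip it on, check, flip it back off", here's a rough sketch using Unity's Recorder profiling API (the sampler name is an assumption - available names vary by Unity version and render pipeline):

```csharp
using UnityEngine;
using UnityEngine.Profiling;

// Rough sketch of "flip it on, check, flip it back off".
// "Camera.Render" is an assumed sampler name; available samplers
// vary by Unity version and render pipeline.
public class RenderCostProbe : MonoBehaviour
{
    Recorder renderRecorder;

    void OnEnable()
    {
        renderRecorder = Recorder.Get("Camera.Render");
        renderRecorder.enabled = true;   // flip it on
    }

    void Update()
    {
        // ~2 ms or more on a single pass is the kind of red flag meant above.
        float ms = renderRecorder.elapsedNanoseconds / 1000000f;
        if (ms >= 2f)
            Debug.LogWarning($"Camera.Render cost {ms:F2} ms last frame");
    }

    void OnDisable()
    {
        renderRecorder.enabled = false;  // flip it back off
    }
}
```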

The profiler does not tell you the cost of individual materials or meshes. Materials are abstract concepts that get broken down into several rendering steps; there is no simple "material cost". The profiler will only tell you the total time to skin all skinned meshes.

The profiler tool is useful for measuring the performance of avatars in the Unity Editor only. The issues above aren't a problem in the editor, but they are in a complex environment like an actual level. It's not viable to use it to measure the performance of avatars in VRChat itself.

Plus… It’s not there.

I’m starting to become convinced that you have never seriously used the profiler outside the context of VRChat avatars. Am I right?

Immersion is FAR from subjective

This isn’t true and doesn’t matter. “Do this thing because it’s more immersive” tells people nothing.

Zuck is just an idiot

No argument there :+1:

Udon is a massive headache for everyone.

I think you're generalising a bit. Coming from general Unity development, Udon is basically bog-standard Unity development, exactly the same. I picked it up with no effort.

Udon’s great.

More real/convincing/detailed = more immersive. Makes me think you may need to go read a dictionary or something on what the word means… I already explained what I meant contextually AND gave examples, but whatever.

Only if you are a programmer already. Everyone else is expected to learn absurd amounts of things. Also no, Udon# is not the same as doing C#, because there's a lot of proprietary jank and limitations to contend with. The graph is completely proprietary.

edit:

I meant functions that VRC doesn't let you use, variables you don't have access to, etc. - stuff that doesn't allow 1:1 translation. Also, even just the factor of having to go to that level of work just to create some art in a game is obscene. Importing and modding assets into any other Unity game I know of that checks for asset bundles is as easy as slapping together a basic scene and compiling, then just adding an entry to a table that the game uses to reference them in-game. I want to make cool worlds, not code games - if I wanted to do that, I'd just make my own indie game that does the game world I'm making better.
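For reference, the pattern in those games usually looks roughly like this sketch (the path, asset name, and loader class are hypothetical, purely for illustration):

```csharp
using UnityEngine;

// Sketch of the "compile a bundle, reference it from a table" modding
// pattern described above. The bundle path and asset name are
// hypothetical; each game defines its own lookup table.
public static class ModAssetLoader
{
    public static GameObject LoadProp(string bundlePath, string assetName)
    {
        // Load the compiled AssetBundle the modder dropped into a folder.
        AssetBundle bundle = AssetBundle.LoadFromFile(bundlePath);
        if (bundle == null)
        {
            Debug.LogError($"Failed to load bundle at {bundlePath}");
            return null;
        }

        // The game's reference table just maps an entry name to an asset.
        return bundle.LoadAsset<GameObject>(assetName);
    }
}
```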

Yes they do; they're inferred conditions for the definitions to be valid.

needless semantics

A) The only dictionary that has authority in the English language is the true Oxford (not the bootlegs that only have Oxford branding).
B) I literally said real/convincing/detailed, the /'s being and/or. The more vividness/lucidity there is to occupy the brain, the more immersed it will be; this is universal. Again, movies and games invest, iirc, 10% or more of their total budget on sound alone, depending on the production. I could spend hours here explaining the absurdly complex immersion strategies games and movies employ in their visual and auditory presentation.

No you can't, from everything I've seen; you are forced to work within the Udon VM. If you want to do stuff that is SDK2-ish you need to use CyanTrigger, but that has its issues and is nowhere near as simple, nor as easy to troubleshoot or adapt. Even simply adding a mirror toggle or jumping requires going through a whole process, or learning how to make sense of the graph's jargon-heavy methodology.

They tend to use injectors where you can just pile your stuff into folders, like VRChat modding was. If it's just characters, props, maps, etc., then it's super simple: use a package editor maybe, compile some assets a specific way, and voila, it works. The tools are pre-made by much-appreciated people and are usually fairly easy to use without any technical knowledge.

No definition of immersion requires any of those things. But you have missed the point. "I want lots of audio sources for immersion" says nothing; "I want more audio sources for props on my avatar" is much more useful.

That's fair. Perhaps Udon is not the most accessible thing for people who are not used to programming.

I'm not sure what the solution is, but SDK2 was undoubtedly limiting.

Yeah. Unity development. That’s what I said :rofl:

I'm not actually being serious when I say "exactly". That's exaggeration for emphasis.

No, it obviously doesn't literally allow 1:1 translation. But be just a teeny-weeny bit flexible, and it's easy to pick up for anyone familiar with Unity development.

Cambridge Dictionary:

  • to involve someone completely in an activity:

The example given:

  • She immersed herself wholly in her work.

Things that can be immersive include:

  • Reading a book about magic and other unrealistic and made up things
  • Daydreaming
  • Working on hobbies
  • other abstract but engrossing things

Importing and modding assets into any other Unity game I know of that checks for asset bundles is as easy as slapping together a basic scene and compiling

Correct me if I’m wrong, but is that not how VRChat works? Like, if your goal is just to add assets to a world… you just can. Without Udon, right? Am I missing something?

Modding game functionality beyond assets requires an IL decompiler such as dnSpy, or in the case of IL2CPP games, low-level tools such as disassemblers and debuggers.

Immersion

A) What on Earth?
B) What on Earth?

No you can't, from everything I've seen; you are forced to work within the Udon VM. If you want to do stuff that is SDK2-ish you need to use CyanTrigger, but that has its issues and is nowhere near as simple, nor as easy to troubleshoot or adapt. Even simply adding a mirror toggle or jumping requires going through a whole process, or learning how to make sense of the graph's jargon-heavy methodology.

Could prebuilt prefabs help, perhaps? Prefabs can do things without touching Udon. I got my first toggle-mirror working by just pulling it from the example scene, and Avatar Pedestals are already Udon. No need to touch the noodles to get those working.

Perhaps more example prefabs like that would be of use? For example, for commonly wanted things like video players, seats, doors, etc.

Making world creation accessible is of course a legitimate concern (just in case some of my posts sound too dismissive).

Prefabs are very bad for creativity. If you've paid any attention to creative development in any avenue that promotes overuse of prefabs for convenience, especially where quality original content has a high barrier to entry (due to difficulty, paywalls, or other causes), you'll see it severely suffocates originality. The largest complaints about Unreal have always been how easy it is to get photorealism and how much of a pain it is to stylize anything, which causes the overall density of uniqueness to decrease.

VRChat has been suffering from this severely, as more and more content is saturated with barely altered (or in some cases completely unaltered) prefab asset flips. This issue has been plaguing avatars since the addition of AV3.0 as well. As amazing as it is to have access to the mecanim graph, that's a large learning curve for making something that not only functions as intended, but is also optimized in doing so.

My concern is that if VRChat doesn't stay ahead of the curve, other platforms and environments are going to supersede it in the future, because it didn't stay small-creator-centric and keep capitalizing on accessibility (which is what made it get big in the first place).

. . .
If there is something that needs a prefab, it's a basic system that checks the distance between the player object origin and the head camera position (e.g. Loli Sanctuary), so that game-world creators can use it to anti-cheat their worlds, and VRChat can let us have avatar scaling without needing a silly "allow X feature in world" toggle - which won't even work anyway, because playspace movement and switching avatars are still a thing (not that I am convinced avatar size changing is much of an issue).
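A minimal Udon# sketch of that check, assuming the SDK3 player API (the 3 m threshold and the logged response are arbitrary illustrations, not a real anti-cheat):

```csharp
using UdonSharp;
using UnityEngine;
using VRC.SDKBase;

// Sketch of the distance check described above: compare the player's
// origin to their head camera position and flag implausible heights.
// The threshold and the response are arbitrary illustrations.
public class HeadHeightCheck : UdonSharpBehaviour
{
    const float MaxHeadHeight = 3f; // illustrative limit

    void Update()
    {
        VRCPlayerApi player = Networking.LocalPlayer;
        if (player == null) return;

        Vector3 origin = player.GetPosition();
        Vector3 head = player.GetTrackingData(VRCPlayerApi.TrackingDataType.Head).position;

        if (Vector3.Distance(origin, head) > MaxHeadHeight)
        {
            // A world-specific response would go here, e.g. teleporting
            // the player back to spawn or disabling pickups.
            Debug.Log("Player exceeds the expected avatar height");
        }
    }
}
```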


I was trying to say that the VRChat developers (as in Tupper, not you or me) can only do so much.

With SDK3 being non-native, they can more easily dig themselves into, and then later out of, a metaphorical hole. You can dig upwards when programming; don't worry about it.

Native Unity functions in the editor are great, except that between the editor, the world being compiled, and actually pressing the button in VRChat, I'm pretty sure your native Unity objects get tinkered with, once by the SDK and again when loading in. A component whitelist might not necessarily be just a yes or no; it probably has to inspect the parameters given.
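Purely hypothetical sketch of what I mean (none of this is VRChat's actual validation code, and the limits are invented):

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of why a component whitelist can't be a simple
// yes/no: even an allowed component may need its parameters inspected.
// The allowed set and the limits are invented for illustration.
public static class ComponentValidator
{
    static readonly HashSet<Type> Allowed = new HashSet<Type>
    {
        typeof(AudioSource), typeof(Light), typeof(Animator)
    };

    public static bool IsAcceptable(Component c)
    {
        // The simple yes/no part: is the component type whitelisted?
        if (!Allowed.Contains(c.GetType()))
            return false;

        // The parameter-inspection part: an AudioSource is allowed,
        // but only within sane (invented) limits.
        if (c is AudioSource audio)
            return audio.maxDistance <= 100f && !audio.bypassEffects;

        return true;
    }
}
```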

I haven't had a chance to try CyanTrigger yet - does it add its own objects, or does it switch things around temporarily when you ask for a build?

For prefabs, if some people don't tweak them, I think it's unfair to blame the prefab. There is a purchasable world called "the observatory", and I looked at every single version that was uploaded - 30 different people. It's a world with two areas separated by an in-world portal that you walk through.

One person moved the transition points so that visitors can see more pictures on the wall as they walk up the stairs; another person took the components in the project and added a second area to visit. I'd argue that the components of such a map fit the definition of a prefab. A third person changed the time of day to daytime instead of night.

All of the other versions are varying levels of customization. The usual order seems to be photos on the wall, then the building textures, and only lastly the default video URL. Barely anyone got to the URL, so I heard the same song many times.

Oh, and two versions are Quest compatible, but both had issues for the people I dragged in, kicking and screaming.

I came into VRChat without any 3D or other game-engine experience, and on one hand, yes, those maps are all asset flips, but on the other hand they show people having success with using the Unity editor and actually editing something. They all have the same world photo, so it's easy enough to scroll past.


I've avoided SDK3 for worlds entirely. What I know about it is from conversations with many people who use it.

No, there's nothing wrong with the prefab itself; it's that the prefab is so easy to get, while creating unique content has been gated behind being very notably technically inclined.

Customization of these prefabs is usually very limited. People change pictures, add props, and on rare occasions they'll combine a couple of prefabs together, but you rarely see any transformative change to them.

It does show success in modifying things slightly, but it was vastly easier to do so in the past with SDK2, and thus it was more common to see unique model ports, original designs, and transformative remakes (of both avatars and worlds). There might be more of them in total now, simply because there are more total users, but from what I can see they are a much smaller percentage.