Developer Update - 23 February 2023

A way to do what you’re describing - sending data packets representing MIDI notes to everyone as fast as possible, without polling - would be to create a separate event for each note and use SendCustomNetworkEvent for each one.
However, the delay will be different for each connected player, and may not match the audio delay.
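For illustration, a per-note event setup might look roughly like this (a minimal UdonSharp sketch; the behaviour and event names are hypothetical, and since custom network events carry no parameters, each note needs its own event):

```csharp
using UdonSharp;
using VRC.Udon.Common.Interfaces;

// Hypothetical sketch: one parameterless network event per MIDI note.
public class NoteBroadcaster : UdonSharpBehaviour
{
    // Udon's built-in MIDI input event, fired when a note is pressed locally.
    public override void MidiNoteOn(int channel, int number, int velocity)
    {
        // Custom network events carry no parameters, so each note gets
        // its own event name: "PlayNote60", "PlayNote61", and so on.
        SendCustomNetworkEvent(NetworkEventTarget.All, "PlayNote" + number);
    }

    // One public handler per note; receivers trigger the matching sound.
    public void PlayNote60() { /* play middle C */ }
    public void PlayNote61() { /* play C#4 */ }
    // ...and so on for every note you want to support.
}
```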

You could instead use the new Midi Playback system to sync the audio and MIDI events (recorded ahead of time). Then you only need to sync a single float for the playback time to all participants, so their worlds can play the audio and MIDI events in sync with each other.
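A minimal sketch of syncing that one float, assuming UdonSharp (the class and field names are made up):

```csharp
using UdonSharp;
using UnityEngine;
using VRC.SDKBase;

// Hypothetical sketch: the owner advances one synced float; everyone
// else seeks their local audio + MIDI playback to match it.
public class PlaybackSync : UdonSharpBehaviour
{
    [UdonSynced] public float playbackTime;

    void Update()
    {
        if (Networking.IsOwner(gameObject))
        {
            playbackTime += Time.deltaTime; // advance on the owner only
        }
        // Non-owners receive playbackTime via sync and can seek their
        // AudioSource / MIDI playback whenever they drift too far.
    }
}
```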

3 Likes

I’m on PC, and those video posters usually take a long time to load in, especially if the first load attempt is blocked because I’m joining late and the video player got first crack.

I wonder how well Quest handles a 4000x4000 poster. Hopefully you test on hardware before saying it’s okay and publishing. I’ve seen a fair number of Quest-compatible worlds with 3D scans of small living areas, and these just crash on Quest. Yes, they fit into the asset bundle and uploaded, but they still have too many polygons. It’s even more of a problem when I look at the description and image and can’t tell it’s a 3D scan.

I’ve seen 1920 width split into three posters. If other people can split a video into three, maybe you can split a texture into three.

That’ll probably come automatically once we get avatar scaling, which comes after another feature that allows world authors to set up permissions for what can and cannot be used in their worlds, if I remember correctly.

You can indeed do a sprite sheet.
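For reference, selecting one cell of a sprite sheet is just a UV tiling/offset change on the material. A minimal Unity C# sketch (the 4×4 grid size is an assumption):

```csharp
using UnityEngine;

// Hypothetical sketch: display cell (col, row) of a 4x4 sprite sheet
// by adjusting the material's texture tiling and offset.
public class SpriteSheetCell : MonoBehaviour
{
    public Material mat;
    public int columns = 4;
    public int rows = 4;

    public void ShowCell(int col, int row)
    {
        mat.mainTextureScale = new Vector2(1f / columns, 1f / rows);
        mat.mainTextureOffset = new Vector2(col / (float)columns, row / (float)rows);
    }
}
```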

1 Like

Thanks for replying!

The playback feature certainly DOES take care of this for the pre-recorded case.
I am really glad I didn’t have to work that part out on my own.

The part I’m still figuring out is for the “live performance” use case.
I wanna make a musical instrument the size of a building, one that I can play with my real keyboard, live, for my friends.

At any rate, thanks so much for the timestamps! I was delighted to see that the delay math I needed to do was suddenly possible. I might look into some of the SDK2-style SendCustomNetworkEvent deep magic; a creator the other day told me a story of them setting up a bunch of those with a clock signal to send data.

I might also try the “read data out of a barcode texture in a Twitch stream” approach, since that would be a way to ensure the data in the texture is in sync with the audio.

1 Like

Just sending data for this purpose can be achieved with manual sync and RequestSerialization. The main requirement that makes some use cases difficult is that you need ownership of the object to send data from it, which is why many people still want a more raw network message like the one you’re suggesting. But in your case that shouldn’t be an obstacle at all, because only a small number of people (or even just one) will be playing at a time.
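For the sending side, that might look roughly like this (a minimal UdonSharp sketch, not a definitive implementation; names are hypothetical):

```csharp
using UdonSharp;
using VRC.SDKBase;

// Hypothetical sketch: the performer grabs ownership once, then pushes
// each note out through manual sync with RequestSerialization.
[UdonBehaviourSyncMode(BehaviourSyncMode.Manual)]
public class NoteSender : UdonSharpBehaviour
{
    [UdonSynced] public int noteNumber;
    [UdonSynced] public int noteVelocity;

    public override void MidiNoteOn(int channel, int number, int velocity)
    {
        if (!Networking.IsOwner(gameObject))
            Networking.SetOwner(Networking.LocalPlayer, gameObject);

        noteNumber = number;
        noteVelocity = velocity;
        RequestSerialization(); // manual sync: send the new values now
    }
}
```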

The network timestamp stuff is pretty much exactly what you need; I would recommend working with that further. What you’d end up doing is sending MIDI data through a manually synced behaviour, and when other players receive it, they store that data plus its send time in a ring buffer (an array of fixed size that you loop through over and over). Then they play back the buffer at the appropriate location.
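The receiving side could look roughly like this (a sketch assuming the DeserializationResult overload of OnDeserialization provides the send time; the buffer size and names are mine):

```csharp
using UdonSharp;
using VRC.Udon.Common;

// Hypothetical sketch: buffer each received note plus its send time
// in a fixed-size ring buffer for delayed, evenly-timed playback.
public class NoteReceiver : UdonSharpBehaviour
{
    [UdonSynced] public int noteNumber;

    private const int BufferSize = 64;
    private int[] bufferedNotes = new int[BufferSize];
    private float[] bufferedSendTimes = new float[BufferSize];
    private int writeIndex;

    public override void OnDeserialization(DeserializationResult result)
    {
        bufferedNotes[writeIndex] = noteNumber;
        bufferedSendTimes[writeIndex] = result.sendTime;
        writeIndex = (writeIndex + 1) % BufferSize; // wrap around
    }

    // Elsewhere, a playback loop walks the buffer and plays each note
    // once the (delayed) playback clock passes its send time.
}
```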

You could use SimulationTime as the basis for where the playback is, but SimulationTime is frequently adjusted and may zip around a bit. For music you’d want something more stable, so you could perhaps use a steady clock that is always half a second behind.
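For example (a sketch; half a second is just the figure above, and it’s worth verifying that this time base lines up with the buffered send times):

```csharp
// Hypothetical sketch: a steady playback clock trailing shared server
// time by a fixed amount, instead of the jittery SimulationTime.
double GetPlaybackClock()
{
    return Networking.GetServerTimeInSeconds() - 0.5;
}
```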

The reason why you’d want a buffer like this at all is because messages are not guaranteed to all have exactly the same latency. Any dip in network stability will cause them to bunch up and then come back in a flood. So you need to be able to play back the data with a reliable timer rather than just as soon as possible. Also, manual sync tends to be faster than network events.

For further help on that it’d probably be best to move over to another thread, but I hope that helps to get you started.

4 Likes

To clarify, in the most common case it’ll be one by one! Previously all downloads started simultaneously, so you’d get 20 avatars at 1/20th the speed each. Now it’ll go in priority order, with some toggleable options, but by default:

  • nearby, smallest → largest
  • far away, smallest → largest

It keeps track of things dynamically though, so if a new download comes up that’s higher priority than all existing downloads, it’ll start right away concurrently. So if you’re downloading a 200 MB avatar and two people join at 1 MB and 2 MB, the 1 MB will start, followed by the 2 MB.
In practice it never went above 3 concurrent downloads, but almost always it’ll stick to 1 (rough sketch of the ordering below).
I do hope it’ll help with lag on join for slow connections!
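Very roughly, that priority ordering behaves like this (a generic sketch, not the actual implementation):

```csharp
using System.Collections.Generic;
using System.Linq;

// Generic sketch of the described ordering: nearby avatars before far
// ones, and smaller downloads before larger ones within each group.
public class PendingDownload
{
    public bool Nearby;
    public long SizeBytes;
}

public static class DownloadQueue
{
    public static List<PendingDownload> Prioritize(IEnumerable<PendingDownload> items)
    {
        return items
            .OrderByDescending(d => d.Nearby) // nearby group first
            .ThenBy(d => d.SizeBytes)         // smallest -> largest
            .ToList();
    }
}
```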
1 Like

I’ll note that video codecs and decoding are quite different from image loading! I don’t think that’s why 2K was chosen, but it’s important to keep in mind.

In the case that you need multiple images, it’d probably be better to just load N images instead of pulling N out of a 4K video frame. Differences in memory are likely fairly trivial.

What are you doing that requires this?

1 Like

An old feature from mods was changing size without reloading: your view would scale seamlessly as you got bigger or smaller.

OK, understood.

The reason for atlasing is simply to get around the 5-second loading limit.

1 Like

I am currently involved in a daily event for Quest users.
The Quest 2 has never crashed, at least on 4000×4000 px videos, but the Quest 1 has crashed in some cases. Therefore, I have made the Quest 1 load a 2000×2000 px video when interacted with.

It would be smarter not to make it an atlas in the first place, but I made it an atlas to avoid problems, especially on Quest. This was back when external loading of images was not yet implemented: Quest cannot process multiple videos at the same time, so if the instance had a video player, there was no option to load multiple videos for the posters over time.

If you’re using GoGo Loco, you just need to switch the avatar off and on, or change your IK. Not ideal of course, but it’s a temporary workaround until this gets officially added.

Oh boi, everyone get ready for redacted :eyes:
Looking forward to seeing some more cool standalone MIDI concert worlds :smiley:

One thing I’ve wondered:
Are there currently any plans to change the Collider Capsule for players?
It would be really nice if it was attached to the hip instead of the head, for all the FBT users who want to lean over a small object, or look down a cliff.

Would be a small effort (I think) for a huge QoL improvement :smiley:

4 Likes

I disagree. If someone can make a 2048x2048 video with 24 frames of images, that would be orders of magnitude faster than waiting N × 5 seconds (120 seconds in this case) with the image loader.

Is there a reason it cannot be, say, X requests per Y amount of time, the way many rate limits work? 5 seconds per request is a lot. If it were 4 requests in 20 seconds, that would be a lot better. Though even then, I really fail to see why it couldn’t be something like 20-40 requests per 10 seconds.
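To illustrate the difference: a flat per-request delay serializes everything, while a windowed limit lets short bursts through. A generic sliding-window sketch (not VRChat’s actual limiter; the numbers are the ones suggested above):

```csharp
using System;
using System.Collections.Generic;

// Generic sketch of a sliding-window rate limiter: allow up to
// `limit` requests within any rolling `window` of time.
public class SlidingWindowLimiter
{
    private readonly int limit;
    private readonly TimeSpan window;
    private readonly Queue<DateTime> timestamps = new Queue<DateTime>();

    public SlidingWindowLimiter(int limit, TimeSpan window)
    {
        this.limit = limit;
        this.window = window;
    }

    public bool TryRequest()
    {
        DateTime now = DateTime.UtcNow;
        // Discard timestamps that have aged out of the window.
        while (timestamps.Count > 0 && now - timestamps.Peek() > window)
            timestamps.Dequeue();

        if (timestamps.Count >= limit) return false; // over the limit
        timestamps.Enqueue(now);
        return true;
    }
}

// e.g. new SlidingWindowLimiter(4, TimeSpan.FromSeconds(20)) would
// implement the "4 requests in 20 seconds" policy suggested above.
```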

The modern web is designed around this sort of bandwidth. Many websites will grab 20-100+ images within a few hundred milliseconds. Granted, browsers have caching, but still. If a web server/CDN can be overwhelmed by a few dozen requests per client in a busy VRC world, then there is something wrong with that web server/CDN.

Many CDNs, DNS providers, and other web services even offer their own caching. Heck, anyone can use Cloudflare for free, and it will cache all of their requests so the web server itself is not abused. This is what I do with a few of my home-hosted servers. Cloudflare even blocks DDoS attacks! Ignore the 8% cache rate here, as I have some services that are not behind the Cloudflare proxy. If it were a service made specifically for VRC with images and text, there is a good chance it would be 95%+.

[screenshot: Cloudflare dashboard showing the cache rate]

I am obviously grateful that we have image and string loading in the first place. But this aggressive rate limit is going to significantly limit what creators can do, and they will have to engineer solutions around it. Just think about LS Media, which has thousands of images for posters. Right now the world is 200 MB, and most of the thumbnails are incredibly poor quality. Are you suggesting that a world like LS Media should have to create its own atlasing solution in code to work around the request limit, rather than hosting a generic CDN that they could probably run out of their home behind Cloudflare?

You are preventing a problem that does not exist yet.

Even a rate limit of 1 request per 100 ms would be plenty to stop DDoS attacks or other abuse. And even then, what is stopping VRC from having a dynamic rate limit and an auto-block/ban feature for worlds that send too many requests? That could be programmed, no?

I would at least hope a better rate limit can be achieved for worlds that need many custom images loaded, or data from different sources loaded with the string loader. If I want to load a few images and a few strings, does this mean I have to supply loading screens and make the user wait 30-60 seconds for the world to fully load?

Here is a Canny asking to improve the rate limit.

4 Likes

It’s great to hear that the issue of multiple avatars downloading at the same time causing problems for those with slower connections has been resolved. Previously, each download would receive fewer MiB per second, causing slower download speeds and increased ping, particularly when downloading multiple large avatars simultaneously. In addition, multiple avatars initializing at the same time would cause performance hitches. However, I’m definitely curious whether, even on a fast connection, a room of 20 avatars would finish downloading faster one by one or simultaneously.

1 Like

sad PCVR user looking down at his still untracked hands
Come on, it’s been four months now…

1 Like

Lots of new stuff to try out here. :+1:

With a Quest 2 you won’t get full hand tracking on PC any time soon, not with Link or AirLink anyway. (Meta doesn’t give hand tracking data to PC, and if they did it would probably be through OpenXR)

OSC hand tracking would let people add whatever data they can access on their PC, so something like a Leap Motion module, etc.

In my testing the difference was negligible, but I should have some pretty graphs for next week’s update!

1 Like

The way club worlds usually sync their light effects to the music is by having a livestream with special pixel locations embedded in it, where the R, G, and B values encode data. This ensures the lighting data always arrives at the same time as the music.
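Conceptually, the reading side samples a known pixel out of the video’s render texture every frame. A plain Unity sketch (the Udon specifics differ, and the coordinates and decoding protocol are assumptions):

```csharp
using UnityEngine;

// Hypothetical sketch: read the designated data pixel out of the
// video player's RenderTexture each frame and decode its RGB values.
public class StreamDataReader : MonoBehaviour
{
    public RenderTexture videoTexture; // target of the video player
    public int dataX = 0;              // where the encoded pixel lives
    public int dataY = 0;

    private Texture2D readable;

    void Start()
    {
        readable = new Texture2D(1, 1, TextureFormat.RGBA32, false);
    }

    void Update()
    {
        RenderTexture.active = videoTexture;
        // Copy just the 1x1 region containing the data pixel.
        readable.ReadPixels(new Rect(dataX, dataY, 1, 1), 0, 0);
        readable.Apply();
        RenderTexture.active = null;

        Color c = readable.GetPixel(0, 0);
        // c.r, c.g, c.b each carry one channel of lighting data,
        // decoded however the world's protocol defines.
    }
}
```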

1 Like