I sure hope that REDACTED is doing fine in development. Can’t wait to see how REDACTED shapes up in the future.
What is this 5-second limit based on, and what kind of abuse cases does it prevent? One 2k image per 5 seconds can’t be a bandwidth issue. I assume the client requests the file directly from the external host, so it shouldn’t be a big strain on VRChat’s API either.
Let’s say I use this feature to load in textures for objects, it would take 30 seconds to load textures for just 6 objects. If the limit was 1 second per image, it would take 6 seconds. The limit is producing a real chilling effect on creativity here.
Some criticism for this update, if I may:
1. The 5 second delay does NOT prevent abuse cases! If you really wanted to do that, you’d implement an actual rate limit based on size (e.g. 100 KB/s), as this would also encourage people to use compressed image formats. One can still absolutely kill someone’s internet connection by downloading a 2048x2048 uncompressed image every 5 seconds (16 megabytes per image, or 192 megabytes per minute).
The only thing this limit serves to do is prevent people from making cool things like dynamic Art Galleries that update regularly. Imagine telling people “oh, it takes about 10 minutes for all the art in here to load, please be patient”
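For what it’s worth, the numbers in the point above check out; a quick back-of-envelope sketch (assuming a 32-bit uncompressed RGBA texture, which is where the 16 MB figure comes from):

```python
# Sanity check of the bandwidth math above. Assumes a raw 32-bit RGBA
# texture with no compression -- an assumption, not a confirmed
# VRChat internal format.
BYTES_PER_PIXEL = 4  # RGBA, 8 bits per channel

def uncompressed_size_mb(width: int, height: int) -> float:
    """Raw image size in megabytes (1 MB = 1024 * 1024 bytes)."""
    return width * height * BYTES_PER_PIXEL / (1024 * 1024)

per_image = uncompressed_size_mb(2048, 2048)   # 16.0 MB per image
per_minute = per_image * (60 // 5)             # one image per 5 s
print(per_image, per_minute)                   # 16.0 192.0
```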
2. PLEASE allow us to put arbitrary request parameters into URLs! Not asking for much here. Just the ability to insert one or two Udon script variables as request parameters. So many more amazing things could be done if the server responding to the request had an idea of what the state of the instance was at the moment.
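To illustrate how little is being asked for here, the composition itself is trivial; a sketch in Python (the endpoint and parameter names are made up, since nothing like this exists in Udon today):

```python
# Sketch of runtime URL composition with a couple of script variables
# appended as query parameters. The base URL and parameter names are
# hypothetical, for illustration only.
from urllib.parse import urlencode

def build_request_url(base: str, player_count: int, room_id: str) -> str:
    """Append instance state as query parameters to a base URL."""
    params = urlencode({"players": player_count, "room": room_id})
    return f"{base}?{params}"

url = build_request_url("https://example.com/state", 12, "gallery-3")
print(url)  # https://example.com/state?players=12&room=gallery-3
```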
3. Extend the String Loader site whitelist. Pretty much everything on there is just used for hosting static content, and at that point, why am I making a web request to fetch a const string? Granted, pushing to a GitHub repo is easier than re-uploading a whole world to update said string, but that makes the feature only a little less than useless without begging players to turn on loading from untrusted URLs. You mention loading weather data into worlds, but I want you to tell me how to do that using just GitHub and Pastebin, without getting into the complicated mess of setting up a script to push to a repo automatically.
Can you please provide an example of loading dynamic real-world weather data from unprogrammable Pastebin and/or a static GitHub website? Such an example would greatly help to understand how to deliver dynamic data from a completely static website!
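To be concrete about what “setting up a script to push to a repo automatically” actually involves, here is a minimal sketch; the weather endpoint, response fields, and git workflow are all illustrative assumptions, not anything VRChat has documented:

```python
# Sketch of the automation step in question: a scheduled job fetches
# weather data and commits it as a static JSON file that a whitelisted
# host (e.g. a GitHub Pages repo) then serves to the world.
# The API endpoint, response fields, and git workflow are illustrative.
import json

def to_world_payload(raw: dict) -> str:
    """Reduce a raw weather API response to the fields the world reads."""
    return json.dumps({
        "temp_c": raw["main"]["temp"],
        "condition": raw["weather"][0]["main"],
    })

def update_repo():  # run this from cron/CI every hour (not called here)
    import subprocess, urllib.request
    raw = json.load(urllib.request.urlopen(
        "https://api.example.com/weather?city=Tokyo"))  # hypothetical URL
    with open("weather.json", "w") as f:
        f.write(to_world_payload(raw))
    subprocess.run(["git", "commit", "-am", "update weather"], check=True)
    subprocess.run(["git", "push"], check=True)
```

In other words, the “completely static website” only works if something off-platform keeps rewriting it, which is exactly the point being made above.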
To be fair: Most worlds that have any sort of URL loading, even just for video, already require untrusted URLs. I would imagine that the majority of VRChat players have untrusted URLs turned on.
With more URL stuff to trust, maybe there should be more “moderation” options available?
For example instead of having a global “allow untrusted URL”, maybe have “allow untrusted URLs in this world”?
I wrote a quick Canny on this.
More options for “Untrusted URLs” | Voters | VRChat
I think this would allow a bit more “sanity” checking in things, and potentially reduce chances of exploits with malicious web servers.
Are you guys still working on the Avatar Dynamics specifically for clothing? I remember it was mentioned some time ago and I’m curious if it’s still being worked on.
Camera bluescreen mask, group banner slimming, and customizing the quick menu are what I’m hoping for. Gonna make a list and video on all the things I want, it’s not much.
Yes, keep going!
Load in an atlas with lower res version, prioritize images closest to the player, and recalculate this during the 5 second delay. The world design would influence the aspect ratios of the content, and locked aspect ratios makes atlasing viable.
Also you could put some of the art inside the world.
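The re-prioritization step described above could look something like this (positions and file names are hypothetical):

```python
# Sketch of the distance-based load ordering suggested above: during
# each 5-second cooldown, re-sort the pending images by distance to
# the player and download the nearest one next.
import math

def next_image_to_load(player_pos, pending):
    """pending: list of (url, (x, y, z)) tuples; returns nearest url."""
    def dist(item):
        _, pos = item
        return math.dist(player_pos, pos)
    return min(pending, key=dist)[0]

pending = [("far.png", (30.0, 0.0, 0.0)), ("near.png", (2.0, 0.0, 1.0))]
print(next_image_to_load((0.0, 0.0, 0.0), pending))  # near.png
```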
I feel like in this scenario specifically, the artists might not want their art to be displayed at a low resolution as part of an atlas. Distance-based load priority would help, but not in a world where there are a lot of images. Ultimately, it is still just simpler to use a video player to quickly stream in dozens of frames in just a few seconds, using a camera to capture them. In the end, that’s not even the point. This is also just one example, and as I pointed out, the 5 second timeout is an attempt to stop malicious scripts, but it’s actually completely ineffective in doing so. The only thing it does is limit people’s creativity.
Remember that rate limits are not only about download size though.
Invoking the download and handling it (especially images) probably comes with a cost to client performance.
And additionally, quite a lot of providers do not like it when you hammer them with requests over a long time.
It’s fine with a high burst of requests over, say, up to 5 minutes (up to web hosts to decide what is sane).
But most do not like it when you hammer their endpoints over several hours with a high volume of requests, which could happen if you have a world doing downloads at short intervals where people hang out for hours.
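A size-based limit that still allows bursts, as suggested earlier in the thread, is usually implemented as a token bucket; a sketch with illustrative numbers (nothing here is an official VRChat figure):

```python
# Sketch of a byte-budget (token bucket) rate limiter: bursts are
# allowed up to a cap, but sustained throughput is bounded. The
# 100 KB/s figure mirrors the suggestion above and is illustrative.
class ByteBucket:
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps          # sustained refill, bytes per second
        self.capacity = burst_bytes   # maximum burst size
        self.tokens = burst_bytes     # start with a full budget

    def tick(self, elapsed_s: float) -> None:
        """Refill the budget for elapsed wall-clock time."""
        self.tokens = min(self.capacity, self.tokens + self.rate * elapsed_s)

    def try_download(self, size_bytes: int) -> bool:
        """Spend budget if available; otherwise the caller must wait."""
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return True
        return False

bucket = ByteBucket(rate_bps=100_000, burst_bytes=500_000)
# A burst of five 100 KB images goes through immediately...
burst_ok = all(bucket.try_download(100_000) for _ in range(5))
# ...but the sixth has to wait for the budget to refill.
print(burst_ok, bucket.try_download(100_000))  # True False
```

This would satisfy both sides: galleries get their burst at load time, while a world cannot hammer a host for hours at full speed.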
aaaaaahhhhhhhhhhhh the suspense is killing me i just want proper avatar scaling XD
Hey guys!
First off let me say that I’m a long time user and content developer for VRC (primarily world building), and I do admire all the work you’ve put in, especially recently.
That being said, while I was very excited to see the release of string and image loading, as it stands these features are extremely limited. I was quite confused by the implementation and I honestly don’t understand the reasoning behind some aspects of it.
Specifically, why are you not allowing us to compose URLs at runtime? The biggest advantage of this whole string loading feature would be that I can save world state per user and then load it back in automatically and seamlessly. While this is technically possible in the current implementation, it requires the user to fill in a URL manually using the data provided by a script (copy the URL generated by the script and paste it into the URL field manually), which is fairly cumbersome on the user side.
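For reference, the copy/paste workaround described above amounts to round-tripping state through a token; a sketch (the state fields are made up for illustration):

```python
# Sketch of the copy/paste flow described above: per-user world state
# is packed into a URL-safe token that the player currently has to
# move by hand. The state fields are illustrative.
import base64, json

def state_to_token(state: dict) -> str:
    """Serialize state and encode it so it can ride inside a URL."""
    return base64.urlsafe_b64encode(json.dumps(state).encode()).decode()

def token_to_state(token: str) -> dict:
    """Decode a token back into the original state dict."""
    return json.loads(base64.urlsafe_b64decode(token.encode()))

state = {"room": 4, "unlocked": ["door_a", "door_b"]}
token = state_to_token(state)
print(token_to_state(token) == state)  # True
```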
If this is meant as a security feature, it feels quite unnecessary. Why don’t you just show a disclaimer in worlds that exchange data with a server so the user can opt in or out of this data exchange? If they opt in, then they’ve accepted the risk. Even better, just have an option in settings: “Allow data exchange between worlds and 3rd party servers”. Quite frankly though, how could you possibly harm a user by allowing script-built, runtime URLs? You can’t collect information, as far as I know, that is in any way sensitive. Even if you could, your script can just generate a URL that users are then instructed to copy and paste into a URL field manually; they won’t know what’s encoded in that URL. They wouldn’t be any wiser than if it happened automatically.
So in the end all you’ve achieved, in my opinion, by not allowing URLs to be generated at runtime, is to seriously limit the usage of this great new feature, for no real security benefit. I’m not even that concerned about the rate limit, although as others have stated above I feel there would’ve been better ways of rate limiting, such as x number of requests per x amount of time, instead of a fixed 5 sec interval.
So please, PLEASE consider this and other comments above and let us automatically communicate (both ways, with the data included in the URL) with a server, and use URLs that we can generate at runtime. My company chose not to use your platform for our products mostly because of the lack of the ability to do this (the ability for automatic data exchange between worlds and a server).
Another note, a smaller thing but… I don’t see the point of blocking “untrusted URLs” (I never really understood this for videos either). You can host any sort of harmful or offensive content on any of the trusted hosts, since you aren’t checking actual content. Why block any URL then? They can’t do any worse than the trusted ones; as far as I know you can’t load malicious code onto a computer through VRC or steal actual sensitive user data, etc… It feels like that whole option is completely pointless since you can’t do any harm anyway (please do correct me if I’m wrong in this assumption).
Again, all of this comes from a good intention, because believe me, I speak for everyone who wants to build worlds that can deliver a customized and persistent experience to their users (per user), or have proper access control to worlds.
Thank you for all the work you do, and carry on!
I do not know the exact download components/http client used by VRChat/Unity in this case.
But it is entirely possible that there exist vulnerabilities in these libraries that could be exploited through this feature by connecting to a malicious web host.
The risk isn’t harmful content in the sense of people seeing something they do not want to see (there’s not really any good way of protecting against that).
But rather code exploits that could allow attacks on a user machine.
Edit: On protecting from harmful content, by restricting to known hosts you can kind of lean on the whitelisted content provider to handle the harmful content.
A 2k image every 5 seconds is not a lot… I hoped this feature would open up the possibility of making lightweight worlds with images streamed to the client as they enter rooms, or just lazy-loaded, so you can start enjoying the world faster.
Though it would be nice to see a “cached” property then too, to be able to tell VRChat to cache this image with the world so it doesn’t repeat the request each time, preferably with some control of the TTL.
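A cached property with a TTL could behave roughly like this sketch (the fetch callback stands in for the actual image download):

```python
# Sketch of the suggested TTL cache: re-download only when the cached
# copy is older than the TTL. The fetch callback stands in for the
# actual image request.
import time

class TTLCache:
    def __init__(self, ttl_s: float, fetch):
        self.ttl = ttl_s
        self.fetch = fetch           # called on miss or expiry
        self.value = None
        self.stamp = -float("inf")   # force a fetch on first access

    def get(self):
        if time.monotonic() - self.stamp > self.ttl:
            self.value = self.fetch()
            self.stamp = time.monotonic()
        return self.value
```

A world-author-controlled `ttl_s` would let static galleries cache aggressively while frequently-updated content stays fresh.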
But for that a higher resolution is needed and/or more requests, as you can’t really atlas that many images onto a 2k texture, and I don’t see any reason to limit it like that; it’s not like it prevents worlds from doing anything bad.
Sorry, but
like creating a dynamic art gallery that changes from day to day
is just bull… you can’t create that with the current system, unless you add a 1-3 minute waiting room before entering the world, and at that point it’s more efficient to just automate re-uploading the world every day
Those are fair points, but I really don’t see how those vulnerabilities would work through VRChat. I mean, VRChat would then need to be able to execute code on the user’s machine (beyond the functions of the viewer) via in-world scripting, with a non-modded client. Also, you could exploit this just as easily by building a world in which you hard-code this malicious behaviour, if it were possible; you really don’t need an external server for it.
Also, I very highly doubt any of those trusted hosts are checking actual user content. As a matter of fact they don’t have to at all, as they are not responsible for it. Most of these probably work on a report basis, or maybe random checks for EULA violations, if they’ve specified anything. Looking at Pastebin’s privacy statement, they state that they don’t generally search for content. You could have any sort of potentially malicious code in there even if they did check, since they’d have no way of knowing what a bunch of code does (and besides, you can encode anything and then decode it on the world side). So no, you can’t really rely on the content provider to filter for such things (malicious code, offensive content, etc).
So yea I really don’t see the point of not allowing pretty much any URL to be honest, it doesn’t seem to make any difference.
To be fair I can live with this (kinda pointless) feature (meaning the “untrusted URLs” setting), because I could ask users to simply enable untrusted URLs to allow my own server to communicate with my world(s). My biggest issue is definitely the fact that URLs can’t be generated at runtime… that just horribly cripples this feature for no apparent security benefit (even logically it baffles me, because then why have the trusted URL thing if you don’t allow a script to send data to a “trusted” URL automatically without the user manually sending a URL).
Exploit what, exactly? It’s a string/image being put into an Udon script, which is running in a sandboxed environment already. If there were a vulnerability in Unity that could be triggered by just reading, not even parsing, a string or image, that’d be BIG news and it would receive a patch within hours.
Also, you can find literal malware being hosted on GitHub and pastebin RIGHT NOW. They can’t check every single upload either.
Quite honestly, instead of “untrusted URLs” they could just have an “Allow worlds to communicate with 3rd party servers” option, which would make a lot more sense if we are so concerned about privacy and security… I can understand if somebody doesn’t want to be potentially “tracked” as being in a certain world (since you could transmit display names to your server). Then they should allow runtime-generated URLs, get rid of the “untrusted URLs” setting, and with that new setting anyone can opt out if they choose to, simple as that. If they want to be extra careful, add a popup/disclaimer warning the user that “This world passes data to a 3rd party server” which you could either accept or deny… (then the client would remember your choice).
Why not fix current bugs before implementing new features?