I noticed that the Image Loader and String Loader have a rate limit of one request per 5 seconds.
What is the reasoning behind this? Is it possible to have this reduced in the future, or have some way to batch requests?
This could be inconvenient in scenarios where you need to download 2 or 3 images at once, or make multiple API requests to different services, in order to provide a good user experience.
Yes, but the overhead should be basically negligible. If the loader is implemented properly (as it should be, and probably already is), the request is asynchronous. The most expensive part is loading images into memory and VRAM, but that happens regardless and shouldn't be more than a small hiccup. Really, the rate limit or bandwidth limit should be something the user can set for themselves.
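To illustrate what a user-configurable limit could look like on the client side, here is a minimal sketch of an asynchronous download queue with an adjustable interval. This is not the platform's actual loader API; it assumes a generic async HTTP client (aiohttp), and the 5-second default simply mirrors the current limit.

```python
# Minimal sketch of a user-configurable download queue, not the platform's
# actual loader API. Assumes aiohttp; the 5-second default mirrors the
# current limit and could be lowered by the user.
import asyncio
import aiohttp

async def download_queue(urls, min_interval=5.0):
    """Fetch URLs one at a time, waiting at least `min_interval` seconds
    between requests. Returns the raw bytes for each URL."""
    results = []
    async with aiohttp.ClientSession() as session:
        for i, url in enumerate(urls):
            if i > 0:
                await asyncio.sleep(min_interval)  # respect the rate limit
            async with session.get(url) as resp:
                results.append(await resp.read())
    return results

# Example: three thumbnails, with the limit relaxed to 1 second.
# asyncio.run(download_queue(["https://example.com/a.png",
#                             "https://example.com/b.png",
#                             "https://example.com/c.png"],
#                            min_interval=1.0))
```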
If a user wants to consume multiple APIs, or an API requires multiple requests, they either have to run their own web server to aggregate the requests or wait out the limit.
And what happens if I need to load 3 images from a CDN that I do not host and whose images I have no direct control over? Am I really expected to write my own webserver/aggregation solution that combines the images into an atlas and then serves it to the client?
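For context, this is roughly the kind of self-hosted aggregation server the rate limit pushes people toward: fetch N remote thumbnails, paste them into a single atlas, and serve it as one image. A rough sketch using Flask, Pillow, and requests; the URLs, endpoint name, and 256px cell size are all illustrative.

```python
# Sketch of a self-hosted aggregation/atlas server. Flask + Pillow + requests;
# the /atlas route, query parameter, and 256px cell size are illustrative.
import io
import requests
from flask import Flask, request, send_file
from PIL import Image

app = Flask(__name__)
CELL = 256  # assumed thumbnail size in the atlas

@app.get("/atlas")
def atlas():
    # e.g. /atlas?u=https://cdn.example.com/a.jpg&u=https://cdn.example.com/b.jpg
    urls = request.args.getlist("u")
    if not urls:
        return ("no urls given", 400)
    sheet = Image.new("RGB", (CELL * len(urls), CELL))
    for i, url in enumerate(urls):
        img = Image.open(io.BytesIO(requests.get(url, timeout=10).content))
        sheet.paste(img.resize((CELL, CELL)), (i * CELL, 0))
    buf = io.BytesIO()
    sheet.save(buf, format="PNG")
    buf.seek(0)
    return send_file(buf, mimetype="image/png")
```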
There aren’t currently any examples because the tech is brand new.
The easiest example of what will likely exist is LS Media, which has 40,000+ thumbnails that are currently extremely low quality, and a world size of 200 MB. There's a good chance people never see 99% of those thumbnails, so why load them at all? If all the images could be network loaded, I'd expect the world size to drop below 60 MB. It would be much easier to load higher-quality 256px or 128px thumbnails on demand, and why atlas those at all?
If you need 3 thumbnails and they are all on different 2048px atlases, you're loading 63 other 256px thumbnails per atlas, times 3, that you do not need. It is much more efficient to make 3-6 network requests for individual 128px or 256px images than to load 2048px atlases into VRAM and never use the majority of their contents.
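The waste is easy to quantify. A back-of-the-envelope comparison for that 3-thumbnail case, assuming 256px cells in 2048px atlases and uncompressed RGBA32 storage in VRAM (the sizes are assumptions, not measurements from LS Media):

```python
# Back-of-the-envelope math for the 3-thumbnail case (assumed sizes):
# a 2048x2048 atlas holds an 8x8 grid of 256x256 thumbnails.
per_atlas = (2048 // 256) ** 2       # 64 thumbnails per atlas
wasted = (per_atlas - 1) * 3         # 63 unused thumbnails per atlas, x3 atlases = 189
atlas_bytes = 3 * 2048 * 2048 * 4    # ~50 MB of uncompressed RGBA32 pulled into VRAM
single_bytes = 3 * 256 * 256 * 4     # ~0.8 MB if only the 3 needed 256px images load
print(wasted, atlas_bytes / 2**20, single_bytes / 2**20)
```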
For the atlas to actually be inefficient in that example, the user would have to look at a very specific category and then do no further browsing, because any further browsing would probably make good use of the atlases already downloaded.
I guess a search could return just 3 items.
I think it's still a good idea to put textures in worlds, even for LS Media, since textures in an asset bundle can use VRAM compression: a 2048x2048 DXT1 texture is about 2.7 MB, while uncompressed RGB is about 16 MB.
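Those numbers line up with the usual rough VRAM math, assuming DXT1 at 4 bits per pixel with a mip chain adding about a third, and "RGB" ending up stored as 4-byte RGBA32:

```python
# Rough VRAM math behind those numbers (assumptions: DXT1 is 4 bits per
# pixel, mipmaps add ~33%, and uncompressed "RGB" is stored as RGBA32).
pixels = 2048 * 2048
dxt1 = pixels * 0.5 * 4 / 3 / 2**20   # ~2.7 MB with mip chain
rgba32 = pixels * 4 / 2**20           # 16 MB without mips
print(round(dxt1, 1), round(rgba32, 1))
```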
All I can say is that web browsers already handle this and it works incredibly well, so I can't understand why it can't be done the same way here (small images that lazy load as they're needed).
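The browser-style behavior being described is essentially load-on-first-use with a cache. A tiny sketch of that idea (the `requests` library, the cache policy, and the commented-out `visible_thumbnail_urls`/`show` names are all illustrative, not anything the platform provides):

```python
# Sketch of lazy loading: a thumbnail is only fetched the first time
# something actually needs to display it, then reused from an in-memory
# cache. Names and caching policy are illustrative only.
import requests

_cache = {}

def get_thumbnail(url):
    """Return image bytes for `url`, downloading at most once."""
    if url not in _cache:
        _cache[url] = requests.get(url, timeout=10).content
    return _cache[url]

# Only the thumbnails that actually come into view ever hit the network:
# for url in visible_thumbnail_urls:
#     show(get_thumbnail(url))
```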