Need some help with shaders and RenderTexture for a painting world

Hi. I’m trying to use RenderTextures to do some custom drawing for a painting world, but unfortunately I’m running into some snags.

Here’s what I have so far: the brush tip gets rendered to a RenderTexture, which is then rendered onto the canvas textures. The canvas would be two RenderTextures that alternate, each sampling the other as a tex2D so new strokes can be added on top of the existing painting.
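Roughly, the alternation I have in mind looks like this (plain Unity C# for illustration; I’m not sure yet how much of this Udon actually exposes, and all the names are mine):

```cs
using UnityEngine;

// Ping-pong canvas sketch: each stroke reads the current canvas texture
// and writes the blended result into the other one, then swaps roles.
public class PingPongCanvas : MonoBehaviour
{
    public RenderTexture canvasA;
    public RenderTexture canvasB;
    public Material brushBlendMaterial; // blends the brush mask over _MainTex
    bool aIsCurrent = true;

    public void ApplyBrushStroke()
    {
        RenderTexture src = aIsCurrent ? canvasA : canvasB;
        RenderTexture dst = aIsCurrent ? canvasB : canvasA;
        Graphics.Blit(src, dst, brushBlendMaterial); // src arrives as _MainTex
        aIsCurrent = !aIsCurrent; // swap roles for the next stroke
    }
}
```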

I have so many questions.

  1. How can I use the inverse of the ortho camera depth in my shader to affect the opacity of the brush render mask? It’ll be 0.0-1.0 along the depth of the view volume, right? The camera depth is 0.2, which I think should be enough world space for decent opacity control.
  2. Is there any way to initialize a RenderTexture programmatically?
  3. I’d like to use UdonSharp, but can’t get intellisense working in VS Code. So I would be programming in the dark and don’t want to do that. Is it possible to set it up?
  4. I can’t seem to find RenderTexture.active, which I’d need in order to call ReadPixels to convert the RenderTexture to a Texture2D so I can sync it as a Color array. Is it not exposed? And is syncing textures this way a bad idea?

I am posting again to answer some of my own questions and ask more.

  1. I wound up not using depth, instead setting the opacity of the brush tip based on distance to the active canvas plane (rough sketch after this list). That works okay.
  2. There is no way to initialize a RenderTexture programmatically, so every canvas needs its own hardcoded, project-level unique asset created on disk.
  3. I was able to get UdonSharp up and running with Visual Studio Community 2019. Intellisense works, it’s great.
  4. How to sync textures is still undetermined.
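For anyone curious, item 1 boiled down to something like this (a minimal sketch; _Opacity and maxPaintDistance are just my names):

```cs
using UdonSharp;
using UnityEngine;

// Fade the brush tip out as it moves away from the active canvas plane.
public class BrushOpacity : UdonSharpBehaviour
{
    public Transform canvasPlane;     // the active canvas
    public Material brushTipMaterial; // material rendered into the brush mask
    public float maxPaintDistance = 0.2f;

    void Update()
    {
        // Unsigned distance from the brush tip to the canvas plane.
        Plane plane = new Plane(canvasPlane.forward, canvasPlane.position);
        float distance = Mathf.Abs(plane.GetDistanceToPoint(transform.position));

        // Full opacity at the surface, fading to zero at maxPaintDistance.
        float opacity = 1f - Mathf.Clamp01(distance / maxPaintDistance);
        brushTipMaterial.SetFloat("_Opacity", opacity);
    }
}
```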

Okay, so now my new question. I got the painting working with a CustomRenderTexture for the canvas that takes the brush mask (a RenderTexture on a camera that renders brush tips in an area in front of the canvas) and blends it with itself (double buffered). This works fine when I press Play in the editor: no errors, and I can paint by moving the brush around in the Inspector. However, if I Build & Test in the VRC client, the brush no longer paints.
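For reference, the way I’m driving the canvas looks roughly like this (names are mine, details simplified):

```cs
using UdonSharp;
using UnityEngine;

// Double-buffered CustomRenderTexture canvas: its material blends the
// brush mask over the previous frame of itself (_SelfTexture2D).
public class CanvasDriver : UdonSharpBehaviour
{
    public CustomRenderTexture canvas; // Double Buffered enabled in the asset
    public RenderTexture brushMask;    // rendered by the brush camera

    void Start()
    {
        canvas.updateMode = CustomRenderTextureUpdateMode.OnDemand;
        canvas.material.SetTexture("_BrushMask", brushMask);
        canvas.Initialize();
    }

    void Update()
    {
        canvas.Update(); // run one step of the blend shader
    }
}
```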

Is something I’m using secretly not supported by VRC? Why might something work in-editor but not in-game?

Thanks to the debug logs I was able to figure it out: setting an object’s layer required an int instead of the return value of a layer-by-name function (example below). Painting with opacity in VR is functional now. To anyone silently following along, expect a showcase thread as this project comes together.
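In code terms, the fix was along these lines (layer 22 is just an example index):

```cs
// Before (silently broke in the client): assigning the layer via a name lookup.
// brushObject.layer = LayerMask.NameToLayer("BrushMask");

// After: assign the layer index as a plain int.
brushObject.layer = 22; // whatever index the project uses for the brush mask
```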

In the meantime, does anyone know a good way to sync a CustomRenderTexture? Converting to a Texture2D and then to a synced Color array was my first guess, but I’m not sure the conversion to Texture2D is exposed.

There are a few people who have successfully synced render textures. The most successful that I’m aware of is bd_; you can see their work here: https://twitter.com/bd_j/status/1399514183343874049

I haven’t done this myself, but from what I understand it’s quite easy to sync a small image, because all the necessary functions are exposed to go from texture to color array and back to texture. The difficult part is that images are simply a lot of data: while Udon manual sync can handle around 60 kB per serialization, that’s still not enough to send an image in one go. So, if I understand correctly, bd_ splits it into a bunch of smaller serializations, and you can watch it work its way across the image. I think they also apply some compression.
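I’d imagine the splitting looks something like this in UdonSharp (purely a guess at the shape of it; the chunk size and all names are made up):

```cs
using UdonSharp;
using UnityEngine;

// Hypothetical sketch: send packed image bytes as a series of small
// manually-synced chunks instead of one giant serialization.
[UdonBehaviourSyncMode(BehaviourSyncMode.Manual)]
public class ChunkedImageSync : UdonSharpBehaviour
{
    [UdonSynced] byte[] chunk;   // the chunk currently being synced
    [UdonSynced] int chunkIndex; // which chunk it is

    byte[] fullImage;   // packed RGBA bytes; allocated to the image size
    int nextChunk;      // sender-side progress
    const int ChunkSize = 8192; // well under the per-serialization limit

    public void SendNextChunk()
    {
        chunkIndex = nextChunk;
        int offset = chunkIndex * ChunkSize;
        int length = Mathf.Min(ChunkSize, fullImage.Length - offset);
        chunk = new byte[length];
        System.Array.Copy(fullImage, offset, chunk, 0, length);
        RequestSerialization(); // push this chunk to remote players
        nextChunk++;
    }

    public override void OnDeserialization()
    {
        // Receiver: copy the chunk back into place in the local buffer.
        System.Array.Copy(chunk, 0, fullImage, chunkIndex * ChunkSize, chunk.Length);
    }
}
```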

From what I understand @bdunderscore actually implemented a form of DXT compression in Udon :smiley:

Good to hear what’s possible; thanks for the replies. I will make an attempt. bd_’s world is what inspired me to create this, and I have a lot of respect for their work. Perhaps I will also attempt segmented, compressed synchronization.

The hover-distance-based opacity of the current prototype makes me want to turn this project into a spray-can graffiti world rather than simply a digital painting world. I think that would be fun. See this video of the early stages of the prototype. I can already see some issues to resolve, but it’s getting there…

There have been a few graffiti worlds and they’ve been really, really fun! It’d be cool to see a late-joiner-syncable one.

I talked to jynbug directly, but I guess I should probably post some information here as well.

Capturing a RenderTexture in Udon isn’t too hard: attach an UdonBehaviour to the same GameObject as a Camera; in the OnPostRender event, the render texture attached to the camera will be active, so you can do ReadPixels there. To control when you read the texture, disable the Camera component and call Render explicitly. But… that’s the easy part.
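In UdonSharp terms, the capture side looks roughly like this (a simplified sketch; the 512x512 size is just an example):

```cs
using UdonSharp;
using UnityEngine;

// Lives on the same GameObject as the Camera that renders into the
// target RenderTexture.
public class TextureCapture : UdonSharpBehaviour
{
    Camera captureCamera;
    Texture2D readback; // CPU-side destination for ReadPixels

    void Start()
    {
        captureCamera = GetComponent<Camera>();
        captureCamera.enabled = false; // only render when we ask
        readback = new Texture2D(512, 512, TextureFormat.RGBA32, false);
    }

    public void Capture()
    {
        captureCamera.Render(); // explicit render; OnPostRender fires after
    }

    void OnPostRender()
    {
        // The camera's RenderTexture is the active render target here,
        // so ReadPixels reads from it.
        readback.ReadPixels(new Rect(0, 0, 512, 512), 0, 0);
        readback.Apply();
        Color32[] pixels = readback.GetPixels32(); // ready to pack and send
    }
}
```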

The difficult parts about syncing textures are 1) the limited bandwidth available, 2) desyncs when the texture changes mid-transfer, and 3) multiple writers. Drawing Retreat takes a first stab at 1), but it’s far from solved (and these limitations are the primary reason I haven’t packaged up the texture sync system as it stands today).

Bandwidth and Compression

First, the limited bandwidth issue. There’s currently about 11 kilobytes/s of bandwidth available per sender for Udon manually-synced objects. Without compression, a 512x512 RenderTexture takes about 93 seconds to transmit, best case (512 × 512 pixels × 4 bytes ≈ 1 MB, divided by 11 kB/s). In practice you’re competing for bandwidth with other objects, and a backoff mechanism is necessary to keep latency from growing without bound, so it could take closer to 2 minutes for a 512x512 image.

To improve on this for Drawing Retreat, I implemented a modified version of DXT5 compression, based on the technique described in the paper “Compressing Dynamically Generated Textures on the GPU” (Alexanderson 2006), with a run-length encoding layered on top. The DXT5 format was modified by rearranging its fields to place all alpha data first, followed by base color data, followed by indices; this improves the efficiency of the RLE pass.
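Schematically, the rearrangement looks like this (a simplified CPU-side sketch; the real version runs in a shader and differs in the details):

```cs
// Given a stream of standard 16-byte DXT5 blocks
//   [alpha0][alpha1][6B alpha indices][color0][color1][4B color indices]
// regroup like fields across all blocks so similar bytes end up adjacent.
byte[] RearrangeBlocks(byte[] dxt5, int blockCount)
{
    byte[] output = new byte[dxt5.Length];
    int alphaOut = 0;               // alpha endpoints: 2 bytes/block
    int colorOut = blockCount * 2;  // color endpoints: 4 bytes/block
    int indexOut = blockCount * 6;  // alpha + color indices: 10 bytes/block

    for (int b = 0; b < blockCount; b++)
    {
        int src = b * 16;
        // bytes 0-1: the two alpha endpoints
        output[alphaOut++] = dxt5[src + 0];
        output[alphaOut++] = dxt5[src + 1];
        // bytes 8-11: the two RGB565 color endpoints
        for (int i = 8; i < 12; i++) output[colorOut++] = dxt5[src + i];
        // bytes 2-7: alpha indices; bytes 12-15: color indices
        for (int i = 2; i < 8; i++) output[indexOut++] = dxt5[src + i];
        for (int i = 12; i < 16; i++) output[indexOut++] = dxt5[src + i];
    }
    return output;
}
```

Fully transparent or flat-color regions then produce long byte runs, which is exactly what the RLE pass wants.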

The RLE encoding operates on pairs of pixels. On the encode side, a shader computes, for each pixel, how many of the following pixels are either part of a run starting at that pixel, or not part of any run longer than a threshold; an Udon script uses this information to skip over most pixels, reducing Udon overhead. On the receive side, an Udon script builds a lookup table recording the input and output offsets of each repeating or non-repeating run, and a shader binary-searches this table to decode the RLE-encoded data.
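Simplified, the receive-side table build looks something like this (a sketch; the real version works on pixel pairs and is split between Udon and the shader):

```cs
// For each run, record where it starts in the compressed input and in
// the decoded output. A decode shader can then binary-search
// outputOffsets for its pixel index.
void BuildRunTable(int[] runLengths, bool[] isRepeat,
                   int[] inputOffsets, int[] outputOffsets)
{
    int input = 0, output = 0;
    for (int i = 0; i < runLengths.Length; i++)
    {
        inputOffsets[i] = input;
        outputOffsets[i] = output;
        output += runLengths[i];                  // every run emits its length
        input += isRepeat[i] ? 1 : runLengths[i]; // repeats read one value
    }
}
```

With that table, each pixel’s decode finds the run whose output range contains it, then either repeats the run’s single input value or indexes into the literal stretch.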

In practice, the DXT5 encoding is reasonably effective (a fixed 4:1 compression ratio), but the RLE encoding isn’t very effective on anything other than fully-transparent areas, because the brushes in the world have a noise texture that RLE can’t compress efficiently. A Huffman or arithmetic code would probably work much better, but would be much more complex to implement in a fragment shader.

Having access to a built-in compression method that can quickly pack a Color32[] to a byte[] would make most of this work unnecessary and would enable syncing much larger textures over Udon.

Desyncs

Transmitting image data takes quite some time, even with compression. If a player draws while a transmission is in progress, a desync occurs: the transmitter keeps sending the older image while local brush rendering is disabled on the receiver. There’s some logic to detect this (using a collider on the brush) and restart, but in practice either the detection doesn’t fire (because of a bug where the collider is only on the handle, not the brush tip), or it causes the sync to never complete.

Further, if you’ve been watching for a while, Object Sync has been replicating brush positions and the strokes have been rendered locally. This works okay, but in practice there’s a significant loss of fidelity. There’s a “Send” button to force a resync when you’re done, but people often don’t know to press it (and pressing it while you’re drawing is a bad idea, for the reasons above).

In the future, I would like to break the canvas up into smaller segments so that dirty segments can be continually retransmitted as they’re drawn to, but that requires a more-or-less complete rewrite of the texture sync system…
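Conceptually it would be something like a grid of dirty flags with round-robin retransmission (hand-waving, since none of this exists yet):

```cs
// Track which tiles of the canvas have changed and keep resending
// dirty ones, one per available send slot.
bool[] dirty = new bool[8 * 8]; // e.g. an 8x8 grid of 64x64-pixel tiles
int cursor;

void MarkDirty(int tileX, int tileY) { dirty[tileY * 8 + tileX] = true; }

// Called whenever there's bandwidth for another chunk.
int NextTileToSend()
{
    for (int i = 0; i < dirty.Length; i++)
    {
        int t = (cursor + i) % dirty.Length;
        if (dirty[t]) { dirty[t] = false; cursor = t + 1; return t; }
    }
    return -1; // nothing dirty right now
}
```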

Multiple writers

This isn’t so much an issue for Drawing Retreat, but on very large canvases (such as you might find in a graffiti world), multiple players will be drawing on the same canvas at the same time. Keeping things in sync becomes very complex: whose view is authoritative for late joiners, for example? Can we do segmented retransmission between two players who are drawing at the same time, but on different areas of the texture? It’s not an easy problem.