WASM’s heap and the Mono runtime increased memory usage by one gigabyte in some worlds.
Like existing worlds? Is this using the backwards-compatibility path? Did you make any attempt to understand how memory usage went up by 1 GB? This doesn’t sound right at all. Are you sure there wasn’t a memory leak here? This should be something you can look at and debug in a day or a few, not something you pivot the whole project off of. FWIW, there is a heap snapshot tool in Mono (HeapShot | Mono) that should work for debugging this, not that it’s very helpful now.
so we would have needed to invest additional development time into making WASM work on iOS - and its performance would be lower than on other platforms.
This is wrong: precompiling to avoid the JIT is a straightforward set of API calls in the Rust APIs your engineer is supposedly skilled with (Engine in wasmtime - Rust). Most popular Wasm VMs have some form of precompiled Wasm modules.
It does not have degraded performance.
edit: To elaborate, wasmtime of course doesn’t have iOS support, at least not yet; it seems to just need some build-target changes, since Cranelift already supports aarch64-apple-darwin. Wasmer is what you’d probably want to use for that instead. I could see confusion arising, since Wasmer recently announced iOS support by way of an interpreter. That is a more ‘complete’ form of support, where you can dynamically load new Wasm. However, this is not needed for Blazor to function, so you can use precompiled modules and cross-compilation, which have been in Wasmer since 2.1. Here are some relevant posts about this feature.
Wasmer has since updated their APIs a bit: the dylib engine mentioned in their iOS headless sample has been removed and merged into their “universal engine”, so the iOS sample does not work out of the box, since it hasn’t been updated. A basic working example of getting a precompiled module for an iOS build can be found here. It was thrown together by combining the cross-compilation example with the target triple from the iOS example, along with the serialization of the module from their headless example.
This was a well-known thing; I made sure of it, because it was VRC’s #1 concern before I even pitched Udon 2.
While developing Udon 2, we realized how many worlds depend on unspecified behavior, which would be difficult to recreate in Soba. We want to maintain backward compatibility with Udon.
Soba was not needed for this; if your engineers didn’t understand the implicit behavior, you could’ve just left old Udon compatibility in place, as you already are.
The WASM runtime added enormous complexity to our code. It added edge cases that hadn’t been fixed yet, made our development process slower, and would have distracted us from adding feature requests that the community had been asking for.
Instead you decided to spend years slowly approaching C# with a bunch of inherent jank and bugs, all the while building up a VM that you depend on one person to maintain, as opposed to being able to benefit from the Mono team’s work over 20 years.
So basically what I’m reading is: you ran some profiles in worlds, didn’t make the effort to understand why the heap size was so large, and pivoted to another project entirely because you didn’t know how to fix the issues.
Compared to using WASM, Soba’s development has been very fast. We’ve carefully limited the number of initial features and reused existing code whenever possible, making Soba small and easy to work on.
Of course it’s fast to develop; it’s a reimplementation of Udon, slightly different and with a few new features. You’d likely get most of the way to what Soba is with a few features and changes to Udon.
But hey, credit where credit’s due: you posted the issues, even if some aren’t valid and others are dubious at best. It only took most of the creator community asking why you were making a big decision that directly impacts them without any actual info.
(In fact, the initial announcement contained many promises we shouldn’t have made at the time!)
edit: Going to add here: VRC likes to say “oh no, we shouldn’t have communicated this thing, now people’s hopes are up.” This is not the problem. The problem is that VRC always manages to cancel and delay major projects for years. When VRC deflects with “we shouldn’t have shared this,” it’s pretty much never a communication problem; it’s a delivery problem.
edit: Noting here, as I said below: I understand this overall to some degree as “better the devil you know.” If you are not confident in shipping Udon 2, that sucks but makes some sense. However, I’d really rather you didn’t represent it as “we found fundamental show-stopping problems in these software packages that have been widely used for years to decades.” Udon 2 was ultimately just glue between these packages, with some handling for fast Unity-specialized marshaling.