I was wondering whether there is a way to read the amount of GPU memory used by an OpenFL + Starling project that targets Android and iOS.
There is openfl.system.System.totalMemory, but that reports only regular RAM usage. If I load a bitmap via Starling's AssetManager, which uploads it to the GPU as a texture, I see a momentary increase in RAM (for a 2048 x 2048 px image at 4 bytes per pixel for RGBA, that's exactly 16 MB, as confirmed by System.totalMemory). Within 2-4 seconds, however, the memory drops back to where it was before the load, because Starling frees the bitmap from regular RAM (it can be reloaded from the original source file if the Context3D is lost and the GPU texture needs to be recreated). So the image goes off the radar, yet of course it still occupies GPU memory, and I have no way to monitor that.
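In the absence of a real GPU query, one workaround I've considered is doing the accounting myself with the same arithmetic as above (width x height x 4 bytes, plus roughly one third extra if mipmaps are enabled), hooked into the places where textures are created and disposed. A minimal sketch of that idea, where the class and method names (`GpuMemoryTracker`, `track`, `untrack`) are my own invention, not a Starling API, and only the `Texture` properties (`nativeWidth`, `nativeHeight`, `mipMapping`) come from Starling:

```haxe
import starling.textures.Texture;

// Hypothetical helper: manual accounting of estimated GPU texture memory.
// Call track() after creating a texture and untrack() before disposing it.
class GpuMemoryTracker {
    static var bytesUsed:Int = 0;

    public static function track(texture:Texture):Void {
        bytesUsed += estimate(texture);
    }

    public static function untrack(texture:Texture):Void {
        bytesUsed -= estimate(texture);
    }

    static function estimate(texture:Texture):Int {
        // 4 bytes per pixel for RGBA; use the native (possibly power-of-two
        // padded) size, not the logical size. Mipmaps add roughly one third.
        var bytes = Std.int(texture.nativeWidth * texture.nativeHeight * 4);
        return texture.mipMapping ? Std.int(bytes * 4 / 3) : bytes;
    }

    public static function report():String {
        return (bytesUsed / (1024 * 1024)) + " MB estimated on the GPU";
    }
}
```

This only estimates what the app itself uploaded (compressed texture formats would need different math), but it would at least let me correlate our crashes with the texture footprint.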
There is Context3D.totalGPUMemory (which used to work in AIR), but it defaults to zero, and I'm not aware of any code that would set it anywhere. What would it take to read the used GPU memory? Some OpenGL C++ code placed in the implementation backend of openfl or lime (and if so, how)? Or a custom native extension built for this purpose alone? Any help or advice would be greatly appreciated.
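For reference on the native-query route: as far as I know, the only OpenGL mechanisms for this are vendor extensions (GL_NVX_gpu_memory_info on NVIDIA, GL_ATI_meminfo on AMD), which exist on desktop drivers but are generally absent on Android/iOS GPUs. So a sketch like the following, going through lime's GL bindings, would mostly help when profiling a desktop build of the same project (the class name is mine; the two constants are the standard enum values from the NVX extension spec, which lime's GL class does not define):

```haxe
import lime.graphics.opengl.GL;

// Sketch only: queries free GPU memory via GL_NVX_gpu_memory_info,
// a desktop-NVIDIA extension that typical mobile GPUs do not expose.
class GpuMemoryQuery {
    // Enum values from the GL_NVX_gpu_memory_info extension spec.
    static inline var GPU_MEMORY_INFO_TOTAL_AVAILABLE_MEMORY_NVX = 0x9048;
    static inline var GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX = 0x9049;

    // Returns available GPU memory in KB, or null if the extension is absent
    // (the common case on mobile, where manual accounting is the fallback).
    public static function availableKB():Null<Int> {
        var extensions = GL.getSupportedExtensions();
        if (extensions == null
            || extensions.indexOf("GL_NVX_gpu_memory_info") == -1)
            return null;
        return GL.getParameter(GPU_MEMORY_INFO_CURRENT_AVAILABLE_VIDMEM_NVX);
    }
}
```

If that's a dead end on mobile, I assume the remaining options are platform tools rather than in-app code (Xcode's GPU report on iOS, for instance), which is why I'm asking whether anyone has done this from inside an OpenFL app.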
We're pursuing this because our game crashes at random, most often on older devices with less RAM, with no causal indication in the logs. The crashes go away if we artificially reduce the app's memory footprint, so they appear to be caused by excessive memory consumption, and we're trying to confirm that.