How to use the ANGLE library on old machines

Hi guys,

Is there any chance that ANGLE support will appear “out of the box” soon?

I noticed that the new Lime build 2.4.0 (via haxelib upgrade) added build support for ANGLE.
But it is still not clear what needs to be done to compile an application that uses ANGLE.
Please give us detailed instructions :slight_smile:

Try lime rebuild windows -DLIME_SDL_ANGLE

Unfortunately, my haxelib can’t rebuild (the console window opens and closes immediately when I run ‘lime rebuild windows’).
So ‘detailed instructions’ for dummies like me means a really detailed how-to from the beginning :slight_smile:
I.e. how to install the proper version of haxelib, etc.
Can anybody describe in detail all the steps?

There are detailed instructions on how to get a development version of Lime and how to build it here: https://github.com/openfl/lime#building-from-source

Oh, I almost forgot: you cannot rebuild a haxelib release version, you need to clone from source to get all the submodules
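In short, something like this (a sketch, assuming Git and the C++ toolchain are already set up as described in that README):

```
git clone --recursive https://github.com/openfl/lime
haxelib dev lime lime
lime rebuild windows -DLIME_SDL_ANGLE
```

The `--recursive` flag is what pulls in the submodules, and `haxelib dev` points your haxelib at the cloned copy instead of the release version.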

Any progress on implementing this without rebuilding? Is it possible to make the app choose at startup based on the machine’s configuration?

No, I’m pretty sure it’s all or nothing, so we would have to be ready to enable it by default


Are you sure enabling it by default will not affect performance on machines that were previously running without ANGLE?

It is supposed to use the built-in support for a GLES context that newer cards allow. There is a performance hit, but nothing too severe. The key issue is that we would need to distribute the ANGLE binaries, which we might not have in perfect order at the moment (I had trouble getting it to work). If we get that sorted out properly, we could perhaps enable ANGLE by default; it might be ideal for broadening desktop support

I am a little worried about the performance hit, since I have intensive custom shaders in my project… How will the performance compare to WebGL?

Sure, in any case the performance will be good.
I cannot imagine a well-formed OpenFL game that would drag down the performance of a modern system.
Anyway, there is no real alternative, so I’m waiting for ANGLE by default :slight_smile:

There might be a slight difference in draw call overhead, though I have never measured it in a benchmark.
In other respects, ANGLE performs as well as raw OpenGL, at least in my test cases.

If you have questions about the ANGLE implementation, you may find the answers here: http://www.seas.upenn.edu/~pcozzi/OpenGLInsights/OpenGLInsights-ANGLE.pdf

Compared to WebGL? Currently no browser on Windows uses real OpenGL by default.
Chrome and Firefox already use ANGLE for WebGL, and it seems pretty well optimized. Internet Explorer uses some library for WebGL rendering too, but it’s not ANGLE.

Thanks a lot for the useful document.

Also, one question on shaders. Maybe it’s not really on-topic, but I don’t want to open another thread, so I’m asking it here, since ANGLE will increase draw call overhead… :stuck_out_tongue:

Let’s say I am doing a fake-HDR glare shader (you can check my profile background for the effect I’m going for). It includes one pre-pass to extract the bright parts of the image, then five separate stages, each blurring the image with a different radius (http://kalogirou.net/2006/05/20/how-to-do-good-bloom-for-hdr-rendering/). For each blur stage I use five passes: four for the blur (two vertical and two horizontal linear blurs), using a low-resolution texture for the large radii, plus one more pass to sample the downscaled texture back into the main texture. That is 26 passes in total for this effect, and it is not really efficient on some laptops.
I heard about tile batching and somehow assumed that one pass sampling the image 10 times is always faster than 10 passes each sampling once. Here, multiple passes help optimize the blur, but I’m not sure whether I should use even more passes (for example, instead of sampling the image 10 times in one pass, use two passes where the first samples twice and the second five times), or fewer passes with a fast Gaussian blur (which I don’t know how to write yet). I have seen this kind of glare in commercial games; it felt far more efficient than my algorithm and doesn’t show the rectangular artifacts of my box blur. I wonder if I can do anything to make this better…
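(If I have the separable-blur arithmetic right: a naive 2D blur with kernel size k = 2r + 1 costs k² texture samples per pixel, while splitting it into one horizontal and one vertical 1D pass costs only 2k. For a radius of 7 that is 15² = 225 samples versus 2 × 15 = 30 per pixel, so the extra pass pays for itself quickly; merging passes only wins when the kernel is tiny.)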

Thanks a lot!

Any news on ANGLE support for legacy? The awkward situation I’m in is that I have a framework and part of my game built on legacy, and porting it to next or hybrid crashes EVERYWHERE. Without next I lose ANGLE support, potentially asset packing support in the future, and there are bugs like the right mouse event and buggy sound loading… I don’t know what to do now. :’(

I think I just found the cause of the right mouse event regression. Usually I want to leave legacy alone, in order not to introduce new problems; the goal is to keep pushing Lime 2 so we reach parity (and exceed it) as we push forward

With your help, perhaps we can continue to try and whittle away at the list of problems you have

Thanks for the reply. I have worked out a way to solve both the asset packing problem and the sound loading problem: I edit the byte array of the OGG and save it in a custom format, then load it, decode it, and use Sound.loadCompressedDataFromByteArray(). I used a similar method for bitmap data. This works fine for now, except that the memory usage went SKY HIGH… probably because things are being loaded multiple times?
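A stripped-down sketch of the loading side (the asset path and extension here are made up, and this version only shows the decompression step, assuming the packed file is just a zlib-deflated OGG):

```haxe
import openfl.Assets;
import openfl.media.Sound;
import openfl.utils.ByteArray;

class PackedSound {
    public static function load(path:String):Sound {
        // e.g. "assets/music.snd" -- hypothetical custom extension
        var packed:ByteArray = Assets.getBytes(path);
        packed.uncompress(); // inflate back to the original OGG bytes
        var sound = new Sound();
        sound.loadCompressedDataFromByteArray(packed, packed.length);
        return sound;
    }
}
```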

I moved 3 posts to an existing topic: Sound.hx:101: Error: Could not load “my music.ogg”

Thanks a lot for looking at the sound loading problem. My problem list only has ANGLE support left! For the custom asset type, all I did was load the individual assets into a byte array, compress them with GZIP, and save them somewhere (I think this is enough to keep some people out). I also think we could build custom asset package files this way. I’m afraid I don’t have enough knowledge to integrate it into the command-line tool, though.
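The packing step is basically just this, run as a little sys-target tool (file names made up; note that haxe.zip gives you zlib rather than true GZIP, which is what ByteArray.uncompress() expects on the loading side):

```haxe
import haxe.zip.Compress;
import sys.io.File;

class PackAsset {
    static function main() {
        var raw = File.getBytes("music.ogg");  // original asset (hypothetical name)
        var packed = Compress.run(raw, 9);     // zlib deflate, max compression
        File.saveBytes("music.snd", packed);   // saved under a custom extension
    }
}
```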

For ANGLE, I think once the stable default ANGLE build is out for next, it would be easy to port it to legacy. Same principle, I guess, plus some modifications to the shaders.

I am investigating the high memory usage. I used a tile sheet that is 4096 by 4096, and it took a lot of memory for some reason. After some investigation I found that for each Tilesheet instance (with a different BitmapData), as long as you have called drawTiles on it once, there is extra memory usage exactly the size of the BitmapData, which is pretty big already. I’ll wait until I’m back at my desktop and check whether the problem is caused by the individual machine or by the new Lime version. Once I’ve done that, I’ll probably open a new thread.
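Roughly, a minimal repro of what I’m seeing would look like this (legacy Tilesheet API; the sizes and names are just illustrative):

```haxe
import openfl.display.BitmapData;
import openfl.display.Sprite;
import openfl.display.Tilesheet;
import openfl.geom.Rectangle;

class TilesheetMemoryRepro extends Sprite {
    public function new() {
        super();
        // 4096 x 4096 ARGB is ~64 MB of pixel data on its own
        var sheet = new BitmapData(4096, 4096, true, 0xFFFF0000);
        var tilesheet = new Tilesheet(sheet);
        var tileID = tilesheet.addTileRect(new Rectangle(0, 0, 256, 256));
        // after this first drawTiles call, memory grows by roughly
        // the size of the BitmapData again
        tilesheet.drawTiles(graphics, [0.0, 0.0, tileID]);
    }
}
```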