Render sprites to texture for GL targets

How can I render sprites to texture for GL targets (native and maybe -Dwebgl)?

BitmapData.draw() is not what I want, because it uses Cairo and does not apply shaders.

P.S. OpenGLView is probably not suitable either, because I want to render a regular Sprite.

+1, I just wanted to ask the same question. Or is there any possibility to implement BitmapData.draw() using hardware rendering?

I have this working right now, but I would love to get some help if anyone has ideas on what it should be called.

I added bitmapData.readable, which currently can be set to false to enable the use of a GL framebuffer instead of traditional memory. I don’t feel completely happy with this; I think that bitmapData.readable may be better as a getter (similar to transparent), and some other method (perhaps) should change the behavior (similar to bitmapData.dispose(), but intended for disposing the software memory and making it use a texture in hardware only).
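For reference, a minimal sketch of the intended usage (illustrative only, since the naming and behavior may still change):

// Experimental API, subject to change: back the BitmapData with a GL
// framebuffer instead of software memory, then draw a display object into it
var source = new Sprite ();
source.graphics.beginFill (0xFF0000);
source.graphics.drawRect (0, 0, 100, 100);

var target = new BitmapData (100, 100, false, 0x000000);
target.readable = false; // drop the software buffer, use a GL texture/framebuffer
target.draw (source); // rendered with the GL renderer rather than Cairo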

Thank you :slight_smile:


Wow, this is probably the biggest news about OpenFL for me in a long time. In my game I’m doing a lot (if not most) of the things “off screen” (procedurally generating textures, lights, etc.), so the ability to render Sprites to a framebuffer is absolutely essential for me, as I realised last week while trying to work around this.

Thank you very, very much! I’m going to look at the changes right now and start testing (if they are already on GitHub). If you need any specific testing, feel free to send me a message; I’m able to test on macOS, Windows, and probably Linux and iOS too.

No idea what it should be called though; readable seems fine to me, but probably someone with more rendering experience than me should weigh in on this.

Two problems I’ve found so far (Mac target). The square in the middle of the screen is a Bitmap, into whose bitmapData I’m drawing the sprite, which is either in the display list (shifted far to the right, so it is not visible on screen) or not added to the display list at all.

1. Using Cairo, the bitmapData.draw(sprite) method effectively renders only the part of the sprite from (0,0) to (bitmapData.width, bitmapData.height), as expected:

Using the new GL framebuffer method (bitmapData.readable = false) renders the whole… I’m not exactly sure what, and squishes it down:

It seems to be related to the size of the window / stage. This is how it looks in a window with a lower width:

I’ll try to figure out what causes it.


2. This is probably unrelated, because it also happens in Cairo, but sprite.scaleX / scaleY and also sprite.transform.matrix.scale() do not update the children of the sprite properly when the sprite is not in the display list. This is how it looks when the sprite is on the display list (with Cairo; using the framebuffer it looks the same, but squished down):

and this is how it looks when it’s not on the display list:

Other transformations, such as translation or rotation, seem to work fine.


UPDATE:

I managed to fix issue 1 with this, or at least as far as I’ve tested it.

Issue 2 remains unresolved for now; as it is not related to this, I will look into it next.


I added bitmapData.readable, which currently can be set to false to enable the use of a GL framebuffer instead of traditional memory. I don’t feel completely happy with this; I think that bitmapData.readable may be better as a getter (similar to transparent)

To my taste, readable is confusing naming, because if you don’t know that this is related to the GL renderer, you can’t guess it from the name.

What about layerType as the getter name? It could be an enum with values FRAMEBUFFER and SOFTWARE.
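Just to illustrate the idea (a hypothetical sketch, nothing like this exists in OpenFL yet):

// Hypothetical API sketch, not actual OpenFL code
enum BitmapDataLayerType {

    FRAMEBUFFER; // backed by a GL framebuffer, no software pixel access
    SOFTWARE;    // traditional in-memory buffer, getPixel/setPixel available

}

// the layer type would be chosen at creation time and exposed via a read-only getter
var bitmapData = new BitmapData (256, 256, true, 0, FRAMEBUFFER);
trace (bitmapData.layerType); // FRAMEBUFFER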

and some other method (perhaps) should change the behavior

From my point of view, there is no need to change the layer type at runtime, so it could be set during BitmapData creation.

Also, it should be a no-op for non-GL targets.


BitmapData uses an in-memory buffer, plus allocates a texture when using OpenGL. By deleting the buffer, we lose all support for reading or writing – getPixel and setPixel become non-operative right now. This is a clear compromise that is only feasible if a developer chooses to limit their use of it. The Stage3D Texture class follows this same pattern: you upload a BitmapData, ByteArray, or Vector, and then you don’t access the pixels anymore. However, because BitmapData is used so commonly, having a way to earmark a BitmapData as hardware-only (and lose read/write) makes sense, though perhaps being able to still use ‘draw’ makes that paradigm a little less clear.


By deleting the buffer, we lose all support for reading or writing – getPixel and setPixel become non-operative right now.

Maybe changing the flag name from readable to buffered (or unbuffered) would make more sense then?

From my point of view, there is no need to change the layer type at runtime, so it could be set during BitmapData creation.

I think it would be great to be able to change from readable=true (or buffered) to readable=false at runtime. That would allow creating a BitmapData/texture programmatically and then “locking” it before using it as a GL framebuffer. But I agree the reverse (the possibility to switch from readable=false to readable=true) is not essential (and probably difficult to achieve anyway).

I agree, this can halve the amount of memory required, but it isn’t an optimization OpenFL can really make automatic :slight_smile:

I think that readable is a difficult thing to name, but if we support hardware-optimized texture formats in the future (such as PVRTC or ETC1), in principle you would call Assets.getBitmapData and receive a bitmapData object that is non-readable, not totally different from getting a JPEG without transparency support.

Perhaps buffer is a good term: “preserveBuffer”, “disposeBuffer”… it deserves more thought :thinking:

Good point, I have not thought about it in that way.

But I agree the reverse (the possibility to switch from readable=false to readable=true) is not essential (and probably difficult to achieve anyway)

Why not? We can achieve that by rendering the texture to the framebuffer, then using glReadPixels to get the actual pixel data.
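For example, roughly like this (just a sketch, assuming Lime’s lime.graphics.opengl.GL bindings and lime.utils.UInt8Array, and that the framebuffer backing the BitmapData is currently bound; I haven’t verified it):

// read the rendered pixels back from the currently bound framebuffer
var width = 100;
var height = 100;
var pixels = new UInt8Array (width * height * 4);

GL.readPixels (0, 0, width, height, GL.RGBA, GL.UNSIGNED_BYTE, pixels);

// pixels now holds RGBA data that could be copied back into a software image buffer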


Another issue – children masking isn’t working for me when using GL framebuffer rendering. I’m trying to fix it; if anybody could point me to where the problem might be, it would be a huge help.

Thanks for the pull request – I think I found another way to solve the scaling issue when rendering to a framebuffer that is not the same size as the stage.

What is the issue you are seeing with masking? Would you mind sharing sample code? Thanks :slight_smile:

@singmajesty Great news :slight_smile:

I’ll try to post test case code for the masking issue tomorrow. In short, when I render a Sprite whose children have masks to a framebuffer, everything just “disappears”, while the same code renders properly with Cairo (Mac target). It’s not a pressing issue for me right now, as I wrote a more efficient solution that doesn’t need this, but at least for me it’s not working.

Any idea about scaling DisplayObjects when they are not in the display list? I noticed that scaling of children works differently when the parent is on the display list and when it isn’t (for example when it is only used for rendering with bitmapData.draw()); other transformations I tried (translation, rotation) seem to work fine. I can write some test code for this too.

Ok, technically it is tomorrow here :smiley:. Here is the example code for the masking bug:

        var drawLayer = new Sprite();
        var maskLayer = new Shape();

        var mtx:Matrix = new Matrix();
        mtx.createGradientBox(60, 60, 0, 0, 0);

        drawLayer.graphics.beginGradientFill(GradientType.RADIAL, [0xFFFFFF, 0xFFFFFF], [1,0], [0, 255], mtx);
        drawLayer.graphics.drawRect(0, 0, 60, 60);
        drawLayer.graphics.endFill();

        maskLayer.graphics.beginFill(0x000000);
        maskLayer.graphics.moveTo(30, 20);
        maskLayer.graphics.lineTo(40, 40);
        maskLayer.graphics.lineTo(20, 40);
        maskLayer.graphics.lineTo(30, 20);
        maskLayer.graphics.endFill();

        drawLayer.mask = maskLayer;

        // Cairo
        var bitmapDataCairo = new BitmapData(60, 60, false, 0x330000);
        bitmapDataCairo.draw(drawLayer);
        var bitmapCairo = new Bitmap(bitmapDataCairo);
        addChild(bitmapCairo);

        // GL Framebuffer
        var bitmapDataFB = new BitmapData(60, 60, false, 0x003300);
        bitmapDataFB.readable = false;
        bitmapDataFB.draw(drawLayer);
        var bitmapFB = new Bitmap(bitmapDataFB);
        bitmapFB.x = 70;
        addChild(bitmapFB);

I could not reproduce the scaling bug yet; probably a false alarm.

It looks like the framebuffer creation was discarding the old image color; as a result, the background color was lost and the white blended in with the background.

I’ve just committed improvements that fix this. Now your sample works perfectly (except for differences in GL masking, since it supports only rectangular masks at the moment).

If we get framebuffers solid as well as reliable cacheAsBitmap behavior, we’ll use that to implement complex masks (such as the triangle) using shaders :wink:


Works for me with native targets :slight_smile:
Does not work with html5 -Dwebgl :frowning:

One question - is it possible to have some method that will enable the framebuffer when possible, and be a no-op in other cases?

Currently I do bitmapData.disposeImage(), and this works well for native and probably will work well for html5 -Dwebgl. But for other targets (-Dcairo, html5 -Ddom, html5 -Dcanvas) it just renders nothing.

#if ((native && !cairo) || webgl)
    ...
#end

This does not work either, because as far as I know the Cairo and canvas renderers can be used as fallbacks when GL is not available.

Another option - can I check GL availability at runtime?

if (GL.context != null) {
   bitmapData.disposeImage();
}

should probably work, but I haven’t tried it.
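For instance, a small helper along these lines (purely hypothetical, assuming GL.context is available as in the snippet above, and that disposeImage() is the method that drops the software buffer):

// Hypothetical helper, not an OpenFL API: only drop the software buffer
// when a GL context is actually available, so Cairo / canvas / DOM keep rendering
function useFramebufferIfAvailable (bitmapData:BitmapData):Bool {

    if (GL.context != null) {

        bitmapData.disposeImage ();
        return true;

    }

    return false;

}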


Hi again :slight_smile:

I have been losing sleep over the above APIs, because of several issues, such as how they would not be able to handle an application with multiple GL contexts.

Adobe manages this workflow in the Stage3D API, but since the display list in Flash Player is run in software, there’s an (obvious) divorce between the APIs.

I just committed the following; I am curious about your thoughts:

var texture = stage.stage3Ds[0].context3D.createTexture (100, 100, BGRA, true);
var bitmapData = BitmapData.fromTexture (texture);

var sprite = new Sprite ();
sprite.graphics.beginFill (0xFF0000);
sprite.graphics.drawRect (0, 0, 100, 100);

bitmapData.draw (sprite);

This will result in a BitmapData object that has an internal framebuffer, and does not allocate a software buffer.

I have also made changes in the development builds to initialize Context3D automatically when using an OpenGL renderer. We already initialize the real OpenGL context, so it seems reasonable that the Context3D object would immediately be available.

That makes it simpler to use something like the above example code.

Thoughts?

(EDIT: BitmapData.fromTexture would return null on Flash)
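So on Flash, application code would need a fallback, for example something like this (illustrative only):

var bitmapData = BitmapData.fromTexture (texture);

if (bitmapData == null) {

    // Flash target: no texture-backed BitmapData available, fall back to software
    bitmapData = new BitmapData (100, 100, true, 0);

}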
