[HELP] OpenGL FXAA Shader on BitmapData (with alpha channel) [SOLVED]

Hello,

My objective is to apply FXAA (Fast Approximate Anti-Aliasing), or any other anti-aliasing technique, to a sprite.
I don’t need to update it often (e.g. on ENTER_FRAME); I just want to save a generated bitmap to a file.
I render the bitmap with the normal graphics.drawXXX calls and I want the picture to be smoother, or maybe to apply different kinds of shaders.

I was looking around and I found this https://github.com/bmfs/glslTest_openfl
It works perfectly, but I can’t reach my goal because of two problems:

  1. Apply a different shader
    I’ve tried vertex and fragment shaders found around the web (like this one https://github.com/v002/v002-FXAA/blob/master/v002.FXAA.frag) but they don’t work when I use them in the example: I just get a BLACK OpenGL view.
    I’ve never used OpenGL and I don’t really understand how it works. For example, I don’t understand why the example uses two framebuffers, the first with a horizontal blur and the second with a vertical blur. Can I use only one? How do I have to modify the FXAA shader to make it work in the example environment?

  2. save the resulting OpenGL view to a BitmapData (or ByteArray)
    I’ve tried GL.readPixels but it crashes.

I don’t need a working example (though if you’d like to provide one, you are very welcome).
Just some hints about where I should look to solve my problem.

Thanks
yup

If you want to apply a shader to a BitmapData and get back a BitmapData I made this https://github.com/ibilon/BitmapDataShaders

It’s probably (most likely) more complex than what you need, but it should help.
If you have any problem with it, don’t hesitate to ask for help :wink:

2 Likes

This is awesome :slight_smile: Does it work well on Android and iOS?

Ok, after too many hours I give up!

Anyway, your project is AWESOME. It should be wrapped up and published as a haxelib.
It works beautifully, and I like how easy it is.

I still can’t completely understand GLGS, though.

I understand perfectly the difference between a vertex shader and a fragment shader.
I found a good FXAA shader. It looks like exactly what I need: https://dl.dropboxusercontent.com/u/11542084/FXAA

Honestly, I can’t manage to implement it in your project’s example.

I’ve tried everything I could.
I think I have to switch
uniform sampler2D bgl_RenderedTexture; //rendered scene texture
with
uniform sampler2D uImage0;

About the resolution, I really don’t know. How can I get it in the fragment shader?

Also, more generically, which variables arrive at the fragment and vertex shaders?
Is it all custom, or is there some fixed stuff?
I see in your example you handle some attributes and uniforms. Is that the way you pass things to the vertex and fragment shaders?
I’m googling for a nice introduction to GLGS, but I can’t find anything clear.

I already have to really thank you, because you solved the “BitmapData” problem for me and also gave me an amazing tool.
I’m just asking for a little more to send me in the right direction.
I’m struggling quite a bit with testing; all I could do is make easy fragment shaders that change colors, or vertex shaders (I made your project customizable on the vertex part as well) that translate or scale the texture… but this bloody FXAA looks very hard for me to implement.

Hi,
Don’t give up, you are pretty close…

http://glslsandbox.com/

This is a very good site (the successor of heroku shaders) for trying out GLSL (OpenGL Shading Language; it’s not called GLGS) shaders: it provides instant compilation and you can see your changes in real time.

http://www.lighthouse3d.com/tutorials/glsl-core-tutorial/glsl-core-tutorial-index/

This is an OpenGL tutorial if you are interested, which I think is pretty good and easy to understand.

Besides this you can look into the HerokuShaders example in openfl-samples as well.

I like post-processing effects and stuff (well, you can check out my profile background), so I’ve looked into GL shaders. I’m no pro, but I’ll share what I’ve learned.

https://github.com/bmfs/glslTest_openfl This example is for rendering (basically) full-screen post-processing effects: it uses OpenGLView and calls the render function every frame (not what you need).
How it works is that it basically adds the Before and After classes (etc.) in such a way that, for every frame, it will:
-set the render target to texture1 (the Before class),
-draw the sprite onto texture1 (OpenFL does this as it renders the sprite, but since the render target has been altered it draws onto the texture instead of the screen),
-render texture1 onto texture2 with a shader (the RenderToTextureLayer class),
-render texture2 onto texture1 with another shader (the RenderToTextureLayer class),
-finally render texture1 to the screen (the After class).

A framebuffer (object) is basically a buffer with a texture attached; it acts as a render target when you want to render to the attached texture.

The reason it needs 2 textures is that box blurring can be done more efficiently in 2 passes that blur horizontally first and then vertically (for example, to blur with 10 samples per axis, instead of 100 calculations you can do it in just 20; and many complex effects require multi-pass shaders anyway). Also, when OpenGL is drawing onto a texture you can’t read from that same texture, as it causes undefined behavior, so you need at least 2 textures if you want to render multiple passes.
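If you want to convince yourself that the two-pass trick gives the same result as a single 2D pass, here is a plain-Python sketch (just an illustration, not OpenGL code) comparing an n×n box blur with a horizontal pass followed by a vertical pass:

```python
# Illustration only: an n*n box blur equals a horizontal n-tap pass
# followed by a vertical n-tap pass (edges clamped, like CLAMP_TO_EDGE).

def box_blur_2d(img, n):
    """Naive 2D box blur: n*n samples per pixel."""
    h, w = len(img), len(img[0])
    r = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy = min(max(y + dy, 0), h - 1)
                    xx = min(max(x + dx, 0), w - 1)
                    acc += img[yy][xx]
            out[y][x] = acc / (n * n)
    return out

def box_blur_1d(img, n, horizontal):
    """One separable pass: only n samples per pixel."""
    h, w = len(img), len(img[0])
    r = n // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for d in range(-r, r + 1):
                yy = y if horizontal else min(max(y + d, 0), h - 1)
                xx = min(max(x + d, 0), w - 1) if horizontal else x
                acc += img[yy][xx]
            out[y][x] = acc / n
    return out
```

For an n-tap blur that is n + n texture samples per pixel instead of n * n, which is where the speedup comes from.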

Now, speaking of GLSL and shaders, I think @ibilon 's library https://github.com/ibilon/BitmapDataShaders is a very useful tool. It uploads the bitmap to the shader, renders it and gets the image back using GL.readPixels().

In order to pass stuff to shaders (a number or an image etc) one of the ways is to use uniform.

imageUniform = GL.getUniformLocation (program, "uImage0"); returns a location in the GPU to store your uniform, named "uImage0";
GL.texImage2D, together with some other functions, uses your bitmap’s pixel data to create a texture which is bound to texture unit 0; and finally
GL.uniform1i (imageUniform, 0); uploads the number 0 (the unit where your texture is stored) to the uniform location you got earlier.

Once these are set up, inside your fragment shader (basically a pixel shader) you will need
uniform sampler2D uImage0; which tells the shader “I am going to be using the uImage0 that I just uploaded”; put another way, it creates a variable named uImage0 holding the value you just uploaded. Then, inside the shader’s main function, you use texture2D(uImage0, [the coordinate I want to get a color from]) to get the pixel color (a vec4 storing RGBA) from the image using a vec2 (XY) coordinate.
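As a rough CPU analogy (plain Python, not real GL; all the names here are only illustrative), a uniform is just a value every fragment invocation can read, and texture2D is a lookup with normalized [0, 1] coordinates:

```python
# CPU analogy of the uniform / texture2D mechanism (illustration only).

def texture2D(tex, uv):
    """Nearest-neighbour lookup; uv is a (u, v) pair in [0, 1]."""
    h, w = len(tex), len(tex[0])
    x = min(int(uv[0] * w), w - 1)
    y = min(int(uv[1] * h), h - 1)
    return tex[y][x]

def fragment_shader(v_tex_coord, u_image0):
    """Called once per output pixel, like main() in a fragment shader."""
    return texture2D(u_image0, v_tex_coord)  # plays the role of gl_FragColor

# a 2x2 RGBA "texture" (the uniform sampler)
tex = [[(1.0, 0.0, 0.0, 1.0), (0.0, 1.0, 0.0, 1.0)],
       [(0.0, 0.0, 1.0, 1.0), (1.0, 1.0, 1.0, 1.0)]]

# the "rasterizer" calls the shader for every pixel of a 2x2 target,
# passing interpolated texture coordinates (pixel centers)
out = [[fragment_shader(((x + 0.5) / 2.0, (y + 0.5) / 2.0), tex)
        for x in range(2)] for y in range(2)]
```

Of course the GPU does this for millions of pixels in parallel, but the data flow is the same: uniforms in, one color out per fragment.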

I hope this helps you understand GLSL a little better. As for the FXAA shader you found ( https://dl.dropboxusercontent.com/u/11542084/FXAA ), it seems it needs two more uniform values: the width and height of the texture. ibilon’s library gives you a way to upload uniform values, as shown in his example Main.hx:

var c1_bitmap = new Bitmap (ShaderCompositing.compositePerLayerParams (c1, shader, [[{name: "param", value: [0.2, 0.5, 0.2], type: Float3}],[{name: "param", value: [0.2, 0.2, 0.2], type: Float3}],[{name: "param", value: [0.5, 0.5, 0.2], type: Float3}]]));
var c2_bitmap = new Bitmap (ShaderCompositing.compositeParams (c2, shader, [{name: "param", value: [0.2, 0.5, 0.2], type: Float3}]));

And you will need to upload 2 floats named "bgl_RenderedTextureWidth" and "bgl_RenderedTextureHeight", with values equal to the width and height of the texture, and type Float. Something like this:

var bitmap = new Bitmap (ShaderCompositing.compositeParams(c1, shader, [{name: "bgl_RenderedTextureWidth", value: myBitmapdata.width, type: Float},{name: "bgl_RenderedTextureHeight", value: myBitmapdata.height, type: Float}]));

And I am pretty sure you can achieve what you want with just a little more arrangement. It seems that if you only upload 1 image you can name it whatever you want inside the shader, so you don’t need to mess with "uniform sampler2D bgl_RenderedTexture;"; but if something goes wrong you can just Ctrl+F all the "bgl_RenderedTexture" occurrences in your FXAA shader and change them to "uImage0".

Hope this helps and good luck with shaders!

Yes! It works, I did it!!!
Actually, only with your big help: you made me understand better how GLSL works!
I think it is not too crazy once you get introduced to it.

This is my result (FXAA 8x)

And this is the code

var bitmap = new Bitmap(fxaa(bitmapdata,8));


public function fxaa(bitmapdata:BitmapData, passes:Int=1):BitmapData {
	if (!OpenGLView.isSupported) {
		trace("Couldn't get OpenGL view");
		return bitmapdata; // GL not available: return the input unchanged
	}

	var shader = "
		/*
		FXAA fragment shader by Timothy Lottes
		http://timothylottes.blogspot.com/
		GLSL version by Geeks3D
		http://www.geeks3d.com/
		modified and adapted to BGE by Martins Upitis
		http://devlog-martinsh.blogspot.com/
		modified by Simone Cingano
		http://akifox.com
		*/

		#version 120
		varying vec2 vTexCoord;

		uniform sampler2D uImage0; //rendered scene texture
		uniform float uImage0Width; //texture width
		uniform float uImage0Height; //texture height

		float width = uImage0Width;
		float height = uImage0Height;

		float FXAA_SUBPIX_SHIFT = 1.0/4.0;
		vec2 rcpFrame = vec2(1.0/width, 1.0/height);
		vec4 posPos = vec4(vTexCoord.st,vTexCoord.st -(rcpFrame * (0.5 + FXAA_SUBPIX_SHIFT)));

		vec3 FxaaPixelShader(vec4 posPos, sampler2D tex, vec2 rcpFrame)
		{
		  //posPos   // Output of FxaaVertexShader interpolated across screen
		  //tex      // Input texture.
		  //rcpFrame // Constant {1.0/frameWidth, 1.0/frameHeight}
		  /*---------------------------------------------------------*/
		  #define FXAA_REDUCE_MIN   (1.0/128.0)
		  #define FXAA_REDUCE_MUL   (1.0/8.0)
		  #define FXAA_SPAN_MAX     8.0
		  /*---------------------------------------------------------*/
		  vec3 rgbNW = texture2D(tex, posPos.zw).xyz;
		  vec3 rgbNE = texture2D(tex, posPos.zw + vec2(1.0,0.0)*rcpFrame.xy).xyz;
		  vec3 rgbSW = texture2D(tex, posPos.zw + vec2(0.0,1.0)*rcpFrame.xy).xyz;
		  vec3 rgbSE = texture2D(tex, posPos.zw + vec2(1.0,1.0)*rcpFrame.xy).xyz;
		  vec3 rgbM  = texture2D(tex, posPos.xy).xyz;
		  /*---------------------------------------------------------*/
		  vec3 luma = vec3(0.299, 0.587, 0.114);
		  float lumaNW = dot(rgbNW, luma);
		  float lumaNE = dot(rgbNE, luma);
		  float lumaSW = dot(rgbSW, luma);
		  float lumaSE = dot(rgbSE, luma);
		  float lumaM  = dot(rgbM,  luma);
		  /*---------------------------------------------------------*/
		  float lumaMin = min(lumaM, min(min(lumaNW, lumaNE), min(lumaSW, lumaSE)));
		  float lumaMax = max(lumaM, max(max(lumaNW, lumaNE), max(lumaSW, lumaSE)));
		  /*---------------------------------------------------------*/
		  vec2 dir;
		  dir.x = -((lumaNW + lumaNE) - (lumaSW + lumaSE));
		  dir.y =  ((lumaNW + lumaSW) - (lumaNE + lumaSE));
		  /*---------------------------------------------------------*/
		  float dirReduce = max(
		    (lumaNW + lumaNE + lumaSW + lumaSE) * (0.25 * FXAA_REDUCE_MUL),
		    FXAA_REDUCE_MIN);
		  float rcpDirMin = 1.0/(min(abs(dir.x), abs(dir.y)) + dirReduce);
		  dir = min(vec2( FXAA_SPAN_MAX,  FXAA_SPAN_MAX),
		      max(vec2(-FXAA_SPAN_MAX, -FXAA_SPAN_MAX),
		      dir * rcpDirMin)) * rcpFrame.xy;
		  /*--------------------------------------------------------*/
		  vec3 rgbA = (1.0/2.0) * (
		  texture2D(tex, posPos.xy + dir * (1.0/3.0 - 0.5)).xyz +
		  texture2D(tex, posPos.xy + dir * (2.0/3.0 - 0.5)).xyz);
		  vec3 rgbB = rgbA * (1.0/2.0) + (1.0/4.0) * (
		  texture2D(tex, posPos.xy + dir * (0.0/3.0 - 0.5)).xyz +
		    texture2D(tex, posPos.xy + dir * (3.0/3.0 - 0.5)).xyz);
		  float lumaB = dot(rgbB, luma);
		  if((lumaB < lumaMin) || (lumaB > lumaMax)) return rgbA;
		  return rgbB;
		}

		vec4 PostFX(sampler2D tex, vec2 uv)
		{
		  vec4 c = texture2D(tex, uv.xy);
		  vec2 rcpFrame = vec2(1.0/width, 1.0/height);
		  c.rgb = FxaaPixelShader(posPos, tex, rcpFrame);
		  //c.a = 1.0; //set alpha to 1.0
		  return c;
		}

		void main()
		{
		  gl_FragColor = PostFX(uImage0, vTexCoord);
		}";

	ShaderCompositing.init (bitmapdata.width, bitmapdata.height);
	var bp = bitmapdata.clone();
	for (el in 0...passes) {
		var composite = ShaderCompositing.uploadLayers ([bp]);

		bp = ShaderCompositing.compositeParams(composite, shader,
			[{name: "uImage0Width", value: bitmapdata.width, type: Float},
			 {name: "uImage0Height", value: bitmapdata.height, type: Float}]);

		composite.delete();
		composite = null;
	}
	ShaderCompositing.clean ();
	return bp;

}

Compared with the original GLSL code, I had to turn the alpha on (get the alpha value from the texture) and
also to debug (almost line by line) to finally find out I had to substitute texture2DLod with texture2D.

Once you understand that it is better to add a small part of the code at a time (so you can check that it works and then go on), it becomes easy to implement other people’s GLSL code. If you add it all at once you easily get a compiler error, but you don’t know where the problem is.


Thanks to all of you for the great help (the tips from tommy and the code from ibilon), you definitely made my day :wink:

2 Likes

Actually, my solution for the alpha is not good, because the pixels with alpha 0 around the edges remain at alpha 0.
I’m trying to figure out how to smoothly fade them to alpha 0 (all the pixels modified by FXAA around the edges).

EDIT:
I can’t make it work for bitmaps with an alpha channel.
I’ve tried different approaches, but what looks strange to me is the blending.
In my code it uses the background, and this is right: you can see the black from the GL background.

I’ve tried changing the alpha following the luma, but nothing happens. It still uses the BLACK background.
If I set all the pixels’ alpha to 1.0, the background becomes white. Why?

Anyway, I’m trying to use this one instead, which seems to use the alpha channel in its calculations.

I’ll be back later when easter is done :wink:

No idea how FXAA works,
but the black could come from my library: GL.clearColor (0.0, 0.0, 0.0, 0.0); https://github.com/ibilon/BitmapDataShaders/blob/master/Source/ShaderCompositing.hx#L170

Maybe the clear color shouldn’t have the alpha at 0?

There were also some problems with the blending function: GL.blendFunc (GL.SRC_ALPHA, GL.ONE_MINUS_SRC_ALPHA); https://github.com/ibilon/BitmapDataShaders/blob/master/Source/ShaderCompositing.hx#L174

Good to know it helped, :wink:

The blending error could come from: 1. trying to blend transparent images, or 2. the FXAA shader. I will explain both.

1-

The default alpha blend function is this: GL.blendFunc (GL.SRC_ALPHA, GL.ONE_MINUS_SRC_ALPHA);.

What this means is: when you put image2 on top of image1, the final color will equal image2 * (first parameter) + image1 * (second parameter).
For this blending we multiply image2’s RGBA by image2’s alpha value, and multiply image1’s RGBA by 1 minus image2’s alpha value. This makes sense, but it creates a problem here if image2 has alpha 0.5 and image1 has alpha 1.0. As you can see, 0.5 * 0.5 + 1.0 * 0.5 = 0.75, which is less than 1.0, and that exposes the background color (black by default) when it’s not supposed to.

To correctly blend alpha you will need GL.blendFuncSeparate, which I am not sure OpenFL supports. Try going to ibilon’s demo’s ShaderCompositing.hx and changing all the blendFunc calls to

GL.blendFuncSeparate(GL.SRC_ALPHA, GL.ONE_MINUS_SRC_ALPHA, GL.ONE, GL.ONE_MINUS_SRC_ALPHA);

Presumably this will make the program use the first 2 parameters for RGB and the last 2 for alpha, preventing the source alpha from multiplying itself.
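You can check the arithmetic outside of GL with a tiny plain-Python sketch (illustration only; the factor tuples mirror the blendFunc parameters):

```python
# Checking the blend arithmetic in plain Python (illustration only).

def blend(src, dst, src_factors, dst_factors):
    """Per channel: result = src * src_factor + dst * dst_factor."""
    return tuple(s * sf + d * df
                 for s, sf, d, df in zip(src, src_factors, dst, dst_factors))

src = (0.8, 0.8, 0.8, 0.5)  # image2: half transparent
dst = (0.2, 0.2, 0.2, 1.0)  # image1: fully opaque
sa = src[3]

# blendFunc(SRC_ALPHA, ONE_MINUS_SRC_ALPHA): same factors for all channels
naive = blend(src, dst, (sa, sa, sa, sa), (1 - sa, 1 - sa, 1 - sa, 1 - sa))
# alpha: 0.5 * 0.5 + 1.0 * 0.5 = 0.75 -> the background shows through

# blendFuncSeparate(SRC_ALPHA, ONE_MINUS_SRC_ALPHA, ONE, ONE_MINUS_SRC_ALPHA):
# the alpha channel uses ONE as its source factor
separate = blend(src, dst, (sa, sa, sa, 1.0), (1 - sa, 1 - sa, 1 - sa, 1 - sa))
# alpha: 0.5 * 1.0 + 1.0 * 0.5 = 1.0 -> stays fully opaque
```

With the plain blendFunc the result alpha drops to 0.75 even though the bottom layer is opaque; with the separate factors it stays at 1.0.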

2-

FXAA is a post-process anti-aliasing technique designed to be executed on the entire screen, which means that (as in most post-processing techniques) the alpha channel is ignored.

The new FXAA ( https://github.com/mattdesl/glsl-fxaa ) is not so different from the previous one; in fact I suggest you keep using the previous one, since you already got it working. Neither blurs the alpha channel, and the newer one keeps the bitmap’s original alpha channel. After you commented out the c.a = 1.0; line in the previous one, it uses the alpha channel of the original bitmap as well.

What this means is that the shader blurs the colors of the image around the edges, and by doing so it picks up some of the black color that was supposed to have alpha 0. But since the alpha channel is left unchanged, some of that color, which wasn’t supposed to be seen, bleeds into the edges.

To cope with this (kind of) you can perform a simple blur on the alpha channel by adding these lines after //c.a = 1.0;

c.a = texture2D(tex, posPos.xy).a;
c.a += texture2D(tex, posPos.xy + vec2(1.0,1.0)*rcpFrame.xy).a;
c.a += texture2D(tex, posPos.xy + vec2(-1.0,1.0)*rcpFrame.xy).a;
c.a += texture2D(tex, posPos.xy + vec2(-1.0,-1.0)*rcpFrame.xy).a;
c.a += texture2D(tex, posPos.xy + vec2(1.0,-1.0)*rcpFrame.xy).a;
c.a *= 0.2;

Not sure if this will work, but I’ll try it sometime.

Lastly, set bitmap.smoothing to true if you haven’t already.

Good luck!

1 Like

Yes, I’ve seen that. I actually tried changing it, and the BLACK edges changed to the color I set (obviously).

Interesting, this totally makes sense. (I’m an OpenGL newbie, as you can see.)
I’ve changed the line and got no error, but at the same time the same result. Pretty sad.

Brilliant. This is actually something I was thinking about while driving a few hours ago:
try to distribute the alpha around the edges.
It works, but the result is still bad because of the BLEND problem.

Does it really matter? This should only have an effect when the bitmap is scaled.
I’ve applied it but, as I supposed, no difference at all.


Here’s what I’ve got now with your suggestions
https://dl.dropboxusercontent.com/u/683344/akifox/shaders/BitmapDataShaders-yupswing.zip

If you have time and you want to take a look.
I’ll try some more blending test to see what happens.
(I’m running it with lime test mac/windows -Dv2 -Dlegacy because neko is a lot slower and openfl 3 gives errors on GL)

As I said before, thanks a lot for all your efforts to help me. I really appreciate this.

The blending problem is still there, BUT I may have found a way to get around it.

First, set the clearColor and go back to the previous blending (the separate one had no effect):

GL.clearColor (0.5, 0.5, 0.5, 0.0);
GL.blendFunc(GL.SRC_ALPHA, GL.ONE_MINUS_SRC_ALPHA);

so it is the same distance from white as from black (exactly in the middle).
After setting the c.a factor a bit higher, it looks good, with just a small outline (sometimes almost impossible to see):

c.a *= 0.26;

This is the result (FXAA 8x)

(obviously the background is the STAGE background and not the original image)

It looks quite good to me.

Almost forgot… you are really great guys!!!
I could never do this without your help.

PS: I need to learn this bloody OpenGL and GLSL. It is quite complicated for me but, bloody hell, it is amazing!

1 Like

Interesting, this totally makes sense. (I’m an OpenGL newbie, as you can see.)
I’ve changed the line and got no error, but at the same time the same result. Pretty sad.

Then I think this problem is not related to the blend function. Anyway, it may still be useful in the future.

(I’m running it with lime test mac/windows -Dv2 -Dlegacy because neko is a lot slower and openfl 3 gives errors on GL)

Just a little note: you only need one of the v2 or legacy flags to use OpenFL legacy. For OpenFL 3, one thing I have found different is the render function of OpenGLView: in legacy, “render” is a method to override, while in 3.0 “render” is a property you assign a function to.

after setting the c.a factor a bit higher, it looks good, with just a small outline (sometimes almost impossible to see)

That is smart! The result looks amazing!
So it looks like your problem has been solved. I think I’ll take your example and put it in my engine. Nice work!

All right!

The solution was all right, but not perfect.
It was creating an outline (the same color as the GL background, or as other alpha-0 pixels in the picture, usually 0,0,0).
tommy’s solution was blurring the edges and then cutting some pixels away. We found a good compromise, but the picture was losing pixels on the edges and keeping a little outline.
At the same time it was losing the FXAA filter on convex edges (my test case above had only concave shapes).
Worst of all, that solution was not compatible with alpha values other than 0 or 1: everything in between became 1. Pretty bad (see below).

The outline looked cool, actually, so I made an FXAA that exploits it (and fixes the problems) to produce a proper outline effect;
it is published here.

But I was not happy. I wanted an FXAA that could be applied to a picture with alpha.
I had been thinking about this for quite a while with no solution at all.
I’ve googled a lot, but with no response.

Today I had an idea.
Since FXAA uses the surrounding pixels to “blend”, why not change the color (not the alpha) of the pixels around the edges? If they are not black, but the same color as the alpha-1 pixels around them (a 3-pixel radius right now), FXAA will use them for the anti-aliasing instead of the default color.
It was the right track.
After that I changed the blending mode to
GL.blendFunc(GL.ONE_MINUS_DST_ALPHA, GL.DST_ALPHA);
which makes the alpha really behave like alpha (I don’t know how to explain it better; see the picture below).

This is the result.

The two fragment shaders are available on GitHub (the preparation “edge expander” here and the modified FXAA here).

Now, unless somebody wants to improve my not-very-elegant “edge expander”, I can really declare the case closed!

:wink:

Very impressive! By the way, the huge black outline is there because you increased the factor at the end. I have modified your pre-pass to average the edge colors instead of taking only one of them, and reduced it to only 4 samples. Let me know if it doesn’t compile or anything! :smiley:

varying vec2 vTexCoord;
uniform sampler2D uImage0; //input texture
uniform float uImage0Width; //texture width
uniform float uImage0Height; //texture height

vec2 rcpFrame = vec2(1.0/uImage0Width, 1.0/uImage0Height);

void main()
{
    vec4 color = texture2D(uImage0, vTexCoord.xy);
    if (color.a == 0.0) {

        vec4 near = texture2D(uImage0, vTexCoord.xy + vec2(1.0,1.0)*rcpFrame.xy);
        near.a = ceil(near.a);
        float divisor = near.a;
        color.rgb = near.rgb*near.a;

        near = texture2D(uImage0, vTexCoord.xy + vec2(-1.0,1.0)*rcpFrame.xy);
        near.a = ceil(near.a);
        divisor += near.a;
        color.rgb += near.rgb*near.a;

        near = texture2D(uImage0, vTexCoord.xy + vec2(1.0,-1.0)*rcpFrame.xy);
        near.a = ceil(near.a);
        divisor += near.a;
        color.rgb += near.rgb*near.a;

        near = texture2D(uImage0, vTexCoord.xy + vec2(-1.0,-1.0)*rcpFrame.xy);
        near.a = ceil(near.a);
        divisor += near.a;
        color.rgb += near.rgb*near.a;

        if (divisor > 0.0) { // avoid 0/0 when all 4 neighbours are transparent
            color.rgb /= divisor;
        }
   }
   gl_FragColor = color;
}

1 Like

It compiles perfectly.
Code very similar to one I wrote before (like Yoda I talk today).

Anyway, mine was not working, so in the end I opted for a very simple “copy the last pixel with some alpha around you, thanks”. Ugly but effective.

Your code is way better. But with only a 3x3 grid (+1+1, -1+1, -1-1, +1-1) some black remains.
It needs a 5x5 grid to really make the black around the edges (all the alpha-0 pixels are black, or any other bg color depending on the settings) go away and be replaced.

So I’ve changed your algorithm to a 5x5 grid and the result is ALMOST the same.
The overall difference is only 10 pixels (I’ve counted them).
It is an improvement anyway: in elegance for sure, but also in the result. Very tiny, but it’s there.

I’ve tried a 7x7 grid as well (+3+3 to -3-3), but it does too much for the FXAA (which needs differences to do something ;))
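For reference, the idea of the pre-pass over an n×n grid can be sketched on the CPU like this (plain Python, not the actual GLSL; only the concept is the same). For every fully transparent pixel it averages the colors of the non-transparent neighbours in the window, so FXAA has something other than black to blend with:

```python
# CPU sketch of the "edge expander" pre-pass (illustration only).
# Pixels are (r, g, b, a) tuples; only the hidden RGB of alpha-0
# pixels changes, their alpha stays 0.

def expand_edges(img, n):
    h, w = len(img), len(img[0])
    r = n // 2
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            if img[y][x][3] != 0.0:
                continue  # visible pixel: leave it alone
            acc = [0.0, 0.0, 0.0]
            count = 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w and img[yy][xx][3] > 0.0:
                        for c in range(3):
                            acc[c] += img[yy][xx][c]
                        count += 1
            if count:
                out[y][x] = (acc[0] / count, acc[1] / count,
                             acc[2] / count, 0.0)
    return out
```

A bigger n spreads the hidden color further from the edge, which is why a 5x5 grid removes the remaining black while a 7x7 one starts flattening the differences FXAA relies on.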

here’s the test

as always, thanks a lot for your precious hints :wink:

EDIT: code updated https://github.com/yupswing/TileCraft/commit/90ad3d69cb52bf04c7182b3f0c7fe4af59aa2fbc

Nice result! I suggest removing the four ±1 ±1 passes, since they may be redundant and cost some performance. You can add them back if you see black edges anyway.

How complex would it be to add FXAA to the standard renderer?

Good hint! It is practically the same, with less computation!

I’m quite a newbie… no idea :wink: also because I don’t know what the standard renderer’s code looks like.