I’d like to add some simple particle-style visual effects to my game project (e.g., sparkles, hit flashes, coin fly animations) without overcomplicating things or introducing platform-specific code.
My requirements:
Must work in both HTML5 (I’m stuck with canvas) and C++ targets
Should not depend on WebGL-specific or DOM-based solutions
Ideally minimal dependencies
So, what are the current best practices in the OpenFL ecosystem for this use case?
Any libraries or examples you’d recommend for simple, performant FX that work across HTML5/C++ builds?
With canvas, accessing individual pixels is not going to be fast, so a particle engine may not be as efficient as just using tile frames. So, say, you create an effect in something like After Effects, then export the frames to a suitable atlas and render them with Tilemap.
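The Tilemap approach above can be sketched roughly like this. This is only an illustration; the asset path, frame count, and frame size are placeholder assumptions, and you'd swap in your own exported atlas layout:

```haxe
// Sketch: plays a pre-rendered effect exported as a horizontal strip of
// 8 frames, each 64x64, in "assets/sparkle.png" (all placeholder values).
import openfl.display.Sprite;
import openfl.display.Tile;
import openfl.display.Tilemap;
import openfl.display.Tileset;
import openfl.events.Event;
import openfl.geom.Rectangle;
import openfl.utils.Assets;

class SparkleEffect extends Sprite {
    var tile:Tile;
    var frame:Float = 0;

    public function new() {
        super();
        var tileset = new Tileset(Assets.getBitmapData("assets/sparkle.png"));
        for (i in 0...8) {
            tileset.addRect(new Rectangle(i * 64, 0, 64, 64));
        }
        var tilemap = new Tilemap(64, 64, tileset);
        tile = new Tile(0);
        tilemap.addTile(tile);
        addChild(tilemap);
        addEventListener(Event.ENTER_FRAME, onEnterFrame);
    }

    function onEnterFrame(_):Void {
        // Advance half a frame per tick; wrap around at the last frame.
        frame = (frame + 0.5) % 8;
        tile.id = Std.int(frame);
    }
}
```

Because Tilemap batches tiles from a single tileset, the same technique scales to many simultaneous effects without per-pixel work on the canvas renderer.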
You’re better off using WebGL. For instance, with Heaps (the engine) you might use something like:
or, in OpenFL, a particle engine like:
With GL abstractions you can use a mix of particles and tilemaps: one pass for, say, a tilemap, then render some particles in a second pass (a single pass may be possible).
For the HTML5 and C++ targets of pixel engines, use WebGL/OpenGL/other C++ shaders. OpenFL (or another engine) will have code to deal with the GL abstractions, and on HTML5 it normally drops down to canvas if WebGL is not supported. So you don’t have to deal with that, except for explaining to customers that the fallback on lower-end computers will not be as fast.
If you are using a particle engine on GL, you can emulate a point particle with two overlapping equilateral triangles at a 60 degree offset, like a star.
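As a sketch of that star geometry, a helper like the following (hypothetical names) computes the six corner positions, ready to feed into a triangle batch:

```haxe
// Sketch: vertex positions for a "star" point particle built from two
// equilateral triangles, the second rotated 60 degrees relative to the first.
class StarParticle {
    // Returns a flat array [x0, y0, x1, y1, ...] with 6 vertices
    // (2 triangles x 3 corners), centered on (cx, cy).
    public static function vertices(cx:Float, cy:Float, radius:Float):Array<Float> {
        var out = new Array<Float>();
        for (tri in 0...2) {
            var offset = tri * Math.PI / 3; // 60 degrees for the second triangle
            for (i in 0...3) {
                var angle = offset + i * 2 * Math.PI / 3 - Math.PI / 2;
                out.push(cx + radius * Math.cos(angle));
                out.push(cy + radius * Math.sin(angle));
            }
        }
        return out;
    }
}
```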
I would happily use WebGL if it weren’t for the blurry borders and fonts on Windows when the system scale is > 100% (which is widespread nowadays).
It just pisses me off, and I don’t want to ship a game that looks like that.
If you know a real workaround (not just “ignore it”), I’d appreciate it.
Have you tried setting <window allow-high-dpi="true"/> in your project.xml? This setting allows rendering at a higher quality without blurring. Vector graphics are automatically much more crisp without any additional changes. For bitmaps, you’d need to load higher quality bitmaps and then scale them down so that they appear crisp.
(To be clear, this currently applies to HTML5 only on Windows. But you were talking about WebGL, so I assumed that was what you specifically meant. When using a native target like C++ or HashLink, allow-high-dpi doesn’t work on Windows yet, but it will in the future.)
Yes, I’ve always had <window allow-high-dpi="true"/> enabled.
It helps a lot, sure — and I don’t really have issues with vector graphics (SVG-style) or bitmaps, either in WebGL or canvas rendering.
The problem is specifically with font rendering and borders (like the ones drawn with graphics.lineTo) in HTML5 targets on Windows browsers with WebGL rendering.
It’s not a critical difference, but it’s definitely not pixel-perfect — and once you notice it, it’s hard to unsee.
I have not used Windows with Haxe since the Win7 → 8 days. It should be possible to use multiple contexts: render text and draw borders with canvas via Haxe JS, and draw them above an OpenFL WebGL context. Typically, I think, OpenFL draws lineTo etc. to a bitmap and then renders that to WebGL. It is perfectly possible to implement your own lineTo pixel drawing,
or you can draw lineTo etc. with triangles.
From memory, with Kha on HTML5 it was ideal to draw fonts at 4x the size and scale down. So you could try something similar by putting a TextField inside a scaled-down sprite.
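That 4x-and-scale-down idea might look something like this in OpenFL (a sketch only; the padding values and 4x factor are assumptions, not a tested recipe):

```haxe
// Sketch: rasterize the text at 4x the intended point size, then scale the
// container down to 0.25 so glyphs are drawn with more detail.
import openfl.display.Sprite;
import openfl.text.TextField;
import openfl.text.TextFormat;

class CrispText extends Sprite {
    public function new(text:String, size:Int) {
        super();
        var field = new TextField();
        field.defaultTextFormat = new TextFormat(null, size * 4);
        field.text = text;
        field.width = field.textWidth + 8;   // small padding, arbitrary
        field.height = field.textHeight + 8;
        addChild(field);
        scaleX = scaleY = 0.25; // draw big, display small
    }
}
```

The trade-off is memory: the text texture is 16x the pixel count of a 1x rendering, so this is best reserved for text that actually looks blurry.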
Just ideas; probably none of them specifically good, but perhaps they’ll let you think more about the problem. I am not really sure clients care as much about pixels as you do!
I wonder if it would help if you snapped the x and y positions of the text and graphics with borders to the nearest pixel. It’s possible that the blurring you are seeing might simply be caused by texture smoothing from being placed on a partial pixel like 1.9 or 2.1 instead of 2.0 (it’s worth mentioning that, in OpenFL, Graphics and text are both drawn in software, and then uploaded as a texture to the GPU).
Note that I’m talking about the nearest pixel in stage coordinates. So both the border/text, its parent, all the way up to the stage could potentially all be a bit offset from the nearest pixel by differing amounts. To be sure that you have it right, you could check the x and y values of textField.localToGlobal(new Point(0.0, 0.0)). If they’re both integers, then it should be correctly snapped to the nearest pixel.
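A minimal sketch of that snapping check, assuming no scaling between the object and the stage (with scaling, the local nudge would need to be divided by the accumulated scale):

```haxe
// Sketch: nudge a display object so its origin lands on a whole stage pixel,
// compensating for fractional offsets accumulated up the parent chain.
// Assumes scaleX/scaleY of 1.0 all the way up to the stage.
import openfl.display.DisplayObject;
import openfl.geom.Point;

class PixelSnap {
    public static function snap(target:DisplayObject):Void {
        var global = target.localToGlobal(new Point(0.0, 0.0));
        // Shift by the fractional remainder so the global position is integer.
        target.x += Math.round(global.x) - global.x;
        target.y += Math.round(global.y) - global.y;
    }
}
```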
It’s also worth mentioning that the thickness you pass to lineStyle() ends up being centered on the line’s coordinates. So, half will render above the line and half will render below the line (using a horizontal line, as an example). For a 1 pixel line, this means that it may actually be partially included within two different pixels, so it may not be super crisp (this should affect both Canvas and WebGL, though, so it may not make a difference to change this).
The top of the line will actually be at y position -0.5, and the bottom of the line will end at y position 0.5.
I sometimes compensate for this behavior just for aesthetic reasons. I like the full thickness of the line to fall fully within the bounds of the shape that I’m drawing, and not extend half the line thickness out further. However, it might be interesting to see if this technique might help a bit with your rendering issue too.
In this case, the top of the line will be at 0.0, and the bottom of the line at 1.0. So it should be fully contained within 1 pixel, which may help fix the appearance of blurring.
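That half-thickness compensation might be sketched like this (illustrative only; the class name is made up):

```haxe
// Sketch: draw a 1px horizontal line offset by half its thickness so it
// falls entirely within one pixel row instead of straddling two.
import openfl.display.Sprite;

class CrispBorder extends Sprite {
    public function new(width:Float) {
        super();
        var thickness = 1.0;
        graphics.lineStyle(thickness, 0x000000);
        // Without the offset the stroke is centered on y = 0, covering
        // -0.5..0.5; shifting down by thickness / 2 keeps it inside 0.0..1.0.
        graphics.moveTo(0, thickness / 2);
        graphics.lineTo(width, thickness / 2);
    }
}
```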
Thanks, Josh — this is a really interesting deep dive.
What you’re describing is actually very familiar — we used to do similar adjustments back in the Flash days to avoid blurry lines and text. We haven’t tried applying those same techniques in OpenFL yet, since with the canvas renderer and HaxeUI it kind of solves itself magically. But your explanation makes a lot of sense, especially in the context of how things get uploaded as textures in WebGL.
It’ll definitely require digging down to that level of abstraction, but when we get around to refining rendering further, we’ll absolutely revisit this and see how much we can improve things.
Really appreciate you taking the time to explain it so clearly.