Feedback about Lime (in reply to singmajesty)

This is a response to @singmajesty regarding my thoughts on what Lime is like to use without OpenFL.

It’s a great question! Lime is an excellent tool, but it takes a low-level approach and isn’t documented, which makes it difficult to jump into.

First I should say: the reason I’m using it instead of OpenFL is because my Flash games were primarily rendered with BitmapData operations, which seem to be slow and blurry on web canvas and OpenFL. So I’m making myself a GL engine that will get better performance with that sort of thing as I port my old projects to Haxe.

In direct response to sing’s questions, here are a few nitpicks that come to mind; addressing them would, I think, improve my workflow in Lime:

  • Lime has a glyph rendering function, but it seems OpenFL handles the rest. Something a bit more unified for text rendering would be amazing! I could be missing something about it though.
  • Converting from GL.readPixels() to an Image seems harder than it should be: Bytes -> UInt8Array -> ImageBuffer -> Image (I sketch the current chain right after this list). A dedicated function on Image might be more convenient for this. Basically, Image.fromBytes() expects the Bytes to be in an encoded image format (like PNG) rather than raw ImageBuffer data. At least, I couldn’t find an easy way to edit an Image’s buffer without just creating a new Image. The relationship between image.data and image.buffer isn’t quite clear to me.
  • I don’t think I would have been able to figure out that I had to enable the depth buffer manually without jgranick’s help. I wonder why this isn’t on by default?
  • GL.texImage2D() (at least) should be able to accept a null DataPointer parameter in HTML5, so I think it should be an optional parameter.
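
To illustrate the readPixels point above, here is roughly the chain I go through today. This is a minimal sketch meant to live inside a render handler; `width` and `height` are placeholders, and the exact constructor signatures may vary a little between Lime versions.

```haxe
import lime.graphics.Image;
import lime.graphics.ImageBuffer;
import lime.graphics.opengl.GL;
import lime.utils.UInt8Array;

// Read the current framebuffer back into a typed array of RGBA bytes...
var pixels = new UInt8Array(width * height * 4);
GL.readPixels(0, 0, width, height, GL.RGBA, GL.UNSIGNED_BYTE, pixels);

// ...then wrap the bytes in an ImageBuffer, then wrap that in an Image.
var buffer = new ImageBuffer(pixels, width, height, 32);
var image = new Image(buffer);
```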

So really my biggest wish has been for some documentation, but I totally get why it’s not the focus right now haha.

Of course, once I figured out how to set up what I needed, Lime has been really powerful and useful. Plus, I enjoy using it. I think it does what it claims to do very well and conveniently. I mean, I very rarely have to write compile-time conditionals for my targets, thanks to Lime!

I’m more than happy to discuss more! I’ve only been using it on and off for the past year though.


Hello! I thought I’d pitch in with some references, to help both you and Joshua :smiley:

You’re totally right that unification would be ideal, and here is the issue on OpenFL: https://github.com/openfl/openfl/issues/1693

As for not being able to figure out the stencil/depth buffer settings, there are some great resources at openfl.org; the XML project format docs show stencil-buffer (http://www.openfl.org/lime/docs/project-files/xml-format/), and there is hxp as well.
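
For reference, the relevant bit of project.xml looks roughly like this (a minimal sketch based on the XML project format docs; attribute support may vary by Lime version):

```xml
<!-- Enable the depth and stencil buffers for the main window -->
<window depth-buffer="true" stencil-buffer="true" />
```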
I would love to see improved documentation as well: specifically, API notes updated away from Flash for certain functions and classes, and a greater presence for the openfl.org documentation. For Lime, I think it would be a good idea for the documentation to cover all of the features listed on http://www.openfl.org/lime/:

  • Windowing
  • Input
  • Events
  • Audio
  • Render contexts
  • Network access
  • Assets

Examples of all of the possible Lime renderers:

  • Cairo
  • Canvas
  • DOM
  • Flash
  • GL

For OpenFL, it’d be nice to have the same listing as Lime, and when you go to the docs it would give a selection of OpenFL, Lime, project files, core architecture, and tools.

Anyways, I hope this helped. Cheers!


Thanks for pitching in some references!

Ah, yes, this would have solved it had I found it at the time. It’s not organized quite how I’d imagine docs for that sort of thing would be, but the information is there nonetheless. If documentation standards were set up, I wonder how quickly the community could pull something together!

I agree that documenting the main areas would be the best starting point. There are a lot of utilities that are great to discover too, of course. And as you said, a collection of tiny use-case examples always helps.

Also, perhaps another outcome of all this is that this particular point could be solved:

I appreciate how lightweight Lime is at this point, but for the sake of convenience, I can probably port each of my games to OpenFL code within a couple of days. It’s just that the performance on WebGL makes it an unreasonable solution at this moment. The performance in cpp is surprisingly good though, which makes me think I’m doing something wrong haha


I do wonder sometimes whether Lime should have its own website and establish itself more outside OpenFL. The potential problem there is stretching too thin, but it’s an important issue.

BitmapData on HTML5 uses canvas or UInt8Array operations directly, which is slow. Perhaps we should look at using WebAssembly for just some of these operations.

On C++, we can hand-write a native version of the same code, to loop through our pixels in a more optimal way.

We do have a convenience method called renderer.readPixels, which takes a Lime Rectangle object. This should call GL.readPixels for you and convert the result to an Image.
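
A minimal usage sketch of that, assuming you already hold `renderer` and `window` references (the names here are illustrative):

```haxe
import lime.graphics.Image;
import lime.math.Rectangle;

// Grab the current framebuffer contents as a Lime Image.
var rect = new Rectangle(0, 0, window.width, window.height);
var snapshot:Image = renderer.readPixels(rect);
```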

Here’s the idea behind both Image and ImageBuffer:

ImageBuffer holds the actual data for an image. Depending on the platform and the source of the data, this could hold a Flash BitmapData, an HTML5 ImageElement or CanvasElement, or a UInt8Array of bytes. The former values are accessible using imageBuffer.src, and the latter is available through imageBuffer.data.

An Image is all or a sub-rectangle of the source ImageBuffer. Being able to use multiple images from the same ImageBuffer is an intended feature that probably is not fully supported in all the APIs in practice :slight_smile:

Image is a higher-level object. ImageBuffer is what you need fundamentally to render, but Image contains the higher-level BitmapData-style APIs, as well as some convenience features like being able to convert to/from premultiplied alpha, force power-of-two, etc. The image.src or image.data properties are getters/setters over its buffer (so it’s like calling image.buffer.src or image.buffer.data).

Rather than creating a new Image by hand, perhaps you could create a new ImageBuffer first, then create a new Image (imageBuffer). ImageBuffer should just let you set the data, width, height, pixel format, and whether it is already transparent or premultiplied.
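
A sketch of that approach; field and enum names here are from memory and may differ slightly between Lime versions, and `pixels`, `width`, and `height` are placeholders:

```haxe
import lime.graphics.Image;
import lime.graphics.ImageBuffer;
import lime.graphics.PixelFormat;
import lime.utils.UInt8Array;

// Describe the raw data you already have...
var buffer = new ImageBuffer(pixels, width, height, 32, PixelFormat.RGBA32);
buffer.transparent = true;
buffer.premultiplied = false;

// ...then wrap it in the higher-level Image for the BitmapData-style APIs.
var image = new Image(buffer);
```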

The idea behind disabling the stencil or depth buffer is to save memory, and OpenFL didn’t use either at the time, so we had them disabled by default (because why not?). Perhaps they should both be enabled, and frameworks can optionally disable them.

So I’m open to suggestions if we think there’s a clearer way to do this, but here’s some detail on the GL API.

The GL and GLRenderContext APIs are static and instance-based APIs of the same concept. Each is meant to be a sort of universal GL API, branching across WebGL, OpenGL ES, and some desktop GL. We have room for adding more desktop-GL-only APIs, but due to the cross-platform focus, we have primarily been doing GLES APIs (which overlap as a common ground between the web, desktop, and mobile APIs).

How this looks is that you’ll find things like gl.clearColor that are consistent everywhere, but in some cases you will find additional versions (similar to GL vendor extensions) where the behavior or parameters might be a bit different. You gave gl.texImage2D as an example; if you look, there is also gl.texImage2DWEBGL in the API, which should look like the WebGL version. Contrary to how it sounds, this should be implemented to work on desktop and mobile, but it’s the WebGL style of the API, which takes a CanvasElement or a typed array, whereas the desktop/mobile style takes a pointer (maybe it should be an optional parameter, though, like you said).
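
For example, uploading an Image’s pixels with the GLES-style signature might look like this (a sketch; `image` is assumed to be a lime.graphics.Image with RGBA data, and on native the typed array converts to a DataPointer, while the WEBGL-suffixed variant would instead accept a canvas/image element or typed array directly):

```haxe
import lime.graphics.opengl.GL;

// Upload raw RGBA pixels from a Lime Image into the currently bound texture.
GL.texImage2D(GL.TEXTURE_2D, 0, GL.RGBA, image.width, image.height, 0,
    GL.RGBA, GL.UNSIGNED_BYTE, image.data);
```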

We had some difficulty deciding how to support multiple API versions. We could support everything, and then you (the user) need to know the minefield of what is supported and what is not. Using GL directly in Lime is sort of like this, but we added gl.type and gl.version to help make it easier to do runtime checking (if (gl.type == GLES && gl.version > 2) ...)

But we also have abstracts that lock the API down to follow a certain flavor of OpenGL. Contrary (again) to how it sounds, this does not mean it only works on a certain platform, though making WebGL behave entirely like GLES is mostly (but not fully) implemented.

So you can cast from GL or GLRenderContext to WebGLContext to lock it down to a WebGL-style (and compatible) context, or to WebGL2Context if you prefer. You can also convert at any time, so you can call an API that prefers to use GL like GLES (and send pointers) and then write code that works with typed arrays.
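
Putting the runtime check and the cast together, a rough sketch based only on what is described above (the package paths and enum names here are from memory and may differ by Lime version):

```haxe
// NOTE: import paths are assumptions; adjust for your Lime version.
import lime.graphics.GLRenderContext;
import lime.graphics.WebGLContext;

class ContextExample {
    public static function clearScreen(gl:GLRenderContext):Void {
        // Runtime check, in the style of the inline example above.
        if (gl.type == GLES && gl.version > 2) {
            trace("running on a GLES 3+ context");
        }

        // Lock the same context down to a WebGL-flavored view of the API.
        var webgl:WebGLContext = gl;
        webgl.clearColor(0, 0, 0, 1);
    }
}
```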


Thanks again man, this is very helpful. It’s nice to see the design intentions here!

The convenience of having OpenFL’s BitmapData match or exceed Flash’s runtime speed would simply be incredible, haha :smiley: I would owe you guys so much for the time saved. I owe you a ton already.

The renderer.readPixels seems handy! It looks like it does the same chain of conversions as mine, but with the added convenience of HTML5 acceleration and support for reading from the base renderer. Thanks!

I was not aware of the differences between Image and ImageBuffer. I’ll have to investigate how I can use them better. Also, I haven’t yet gotten into the stuff Image can do, and I’m curious how its performance is.

If I’m going to be using Image to perform some of the operations (like fillRect for example), I will have to bake the render, readPixels from GL to the Image, perform the Image operations, then draw a textured quad back to the GL context.
I assume it’d be more convenient for me to just write GL functions that do these things for me (since I can queue calls up without sending data back and forth). Then again, Image's functions might be faster than that if I’m using it excessively.
Anyway, things like this have me wondering how to get Lime to help me out instead of reinventing the wheel for small performance boosts :slight_smile:
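
For what it’s worth, the roundtrip I mean looks something like this (a sketch tying together the pieces above; `renderer`, `rect`, and `texture` are assumed to already exist, and the fillRect signature is from memory):

```haxe
import lime.graphics.PixelFormat;
import lime.graphics.opengl.GL;
import lime.math.Rectangle;

// 1. Bake the current GL output into an Image.
var image = renderer.readPixels(rect);

// 2. Do the BitmapData-style work on the CPU side.
image.fillRect(new Rectangle(16, 16, 64, 64), 0xFF0000FF, PixelFormat.RGBA32);

// 3. Upload the result back to a texture and draw a quad with it.
GL.bindTexture(GL.TEXTURE_2D, texture);
GL.texImage2D(GL.TEXTURE_2D, 0, GL.RGBA, image.width, image.height, 0,
    GL.RGBA, GL.UNSIGNED_BYTE, image.data);
```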

For legacy’s sake, I understand keeping the depth buffer disabled makes sense. Though if OpenFL uses it now then it seems like having it disabled by default would only cause confusion (since OpenFL is arguably the default use case). Either way, I just think it’s an important variable that’s more hidden than it should be. That was only my experience though!
I’m unclear whether changing these works during runtime with window.config as well.

Thanks for the explanation about how you approached the GL APIs. So far I’ve been just using the GL API, and honestly it hasn’t even felt like a minefield to me. But that’s really cool that casting to a specific version of the context works. I’ll have to investigate that!

Thanks again for your discussion about this! It helps me understand a lot more how the whole thing is organized.