Mixing 2D and 3D

I really want to make a game that is primarily 2D (i.e. sprite-based) but has some 3D lighting effects, specifically having sprites colored differently depending on the lights in the scene. I know there are techniques that use normal (or depth) maps to achieve this, but they all end up passing information to something like OpenGL.

I’m trying to figure out the best way to do this. I don’t really want to use actual OpenGL (because it adds another dependency), so ideally I’d find some kind of fancy trick … But that sounds like it would require, well, hand-coding lighting equations, which (even if I get the math right) will certainly hurt performance …

Anyway. So my thought was - if I were to use OpenGL, how easy is this to do with OpenFL? I understand that OpenFL is based on the Flash API, and that Flash is 2D … However, my understanding is also that OpenFL uses OpenGL to draw? Or am I misunderstanding? Does it … just use that on certain platforms?

My sense is that what I’m trying to do doesn’t make sense - OpenFL is based on the Flash API and you are stuck with that functionality. I know there is OpenGLView, but (I’m struggling to find any documentation on it) it seems like it’s a separate entity - I presume you create a separate OpenGL context and do your stuff in it, but that will be separate from any of the other, 2D, Flash-based graphics … ?

So again, and I’m just answering my own question here - do you really only have the choice of … going full 3D (using OpenGLView, or perhaps Away3D …) and doing all the blending and filtering effects on your own (i.e. totally ignoring the Flash graphics API)? Or, well … that’s it really … All or nothing …

I appreciate that this, well, seems obvious to an extent but I was just hoping to get some feedback - Do you agree that, well, you really have to choose … ?

I suppose you could create an … OpenGLView for each sprite? (I haven’t really understood how OpenGLView works.) And then … I dunno, push the view onto a BitmapData … ? Jeepers, I’m grasping at straws here …

OpenGL is used automatically when your application compiles to native platforms, so whatever drawing code you write will be translated to OpenGL calls by OpenFL.

To make 3D effects on 2D sprites (essentially 2.5D), there is the option to use two bitmaps (one as a light source, another as a shadow) for your main sprites.

You would use the light source’s location to determine the length and depth of the shadow bitmap, which you cast based on the location of the sprite the shadow supposedly falls from (if that makes sense). Reflections are also fairly simple to implement, either using the Graphics API to produce a gradient from transparent to full opacity over the supposed reflection, or using a bitmap with the reflection.
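To make the shadow idea concrete, here is a minimal sketch of positioning a shadow bitmap from a light’s location using a skew matrix. The class and function names (`ShadowCaster`, `layoutShadow`) and the exact skew/scale formulas are my own invention for illustration - the real relationship between light distance and shadow shape is a design choice.

```haxe
import openfl.display.Bitmap;
import openfl.geom.Matrix;

class ShadowCaster {

    // Positions `shadow` under a sprite, skewed away from the light.
    // `lightX`/`lightY` are the light's stage coordinates; `baseX`/`baseY`
    // are the point the sprite touches the ground (the shadow's anchor).
    public static function layoutShadow(shadow:Bitmap,
                                        lightX:Float, lightY:Float,
                                        baseX:Float, baseY:Float):Void {
        var dx = baseX - lightX;
        var dy = baseY - lightY;

        // Skew the shadow horizontally, away from the light; the steeper
        // the light angle (large |dy|), the smaller the skew.
        var skew = dx / Math.max(Math.abs(dy), 1.0);

        var m = new Matrix();
        m.c = skew;        // horizontal skew away from the light
        m.d = 0.5;         // flatten the shadow to half height (arbitrary)
        m.tx = baseX;
        m.ty = baseY;

        shadow.transform.matrix = m;
        shadow.alpha = 0.4;  // soften the shadow
    }
}
```

Calling `layoutShadow` each frame as the light or sprite moves keeps the shadow tracking the light, all with plain 2D display-list features.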

There are many ways to do it, all of which may require some thought.

Hi. I’m not sure if there is an easier way in OpenFL > 2.1, but what you usually do is create an object that extends OpenGLView or, alternatively, one that extends Sprite and contains an OpenGLView. Then use the “render” function to set up your draw calls and call GL.drawArrays. Afterwards it is important to disable/unbind whatever you have used, so you don’t affect the OpenFL render.
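A skeleton of that pattern might look like the following. This is a sketch against the OpenFL 2.x-era API, where `OpenGLView` exposes a `render` callback that receives the view’s bounds each frame; the shader/buffer setup is elided, and the import paths may differ between OpenFL versions.

```haxe
import openfl.display.OpenGLView;
import openfl.geom.Rectangle;
import openfl.gl.GL;

class GLLayer extends OpenGLView {

    public function new() {
        super();

        // `render` is invoked once per frame with the view's bounds.
        render = function(rect:Rectangle):Void {
            GL.viewport(Std.int(rect.x), Std.int(rect.y),
                        Std.int(rect.width), Std.int(rect.height));

            // ... bind your shader program, vertex buffers and textures,
            // then issue the draw call, e.g.:
            // GL.drawArrays(GL.TRIANGLES, 0, vertexCount);

            // Unbind everything so OpenFL's own renderer isn't affected.
            GL.bindBuffer(GL.ARRAY_BUFFER, null);
            GL.bindTexture(GL.TEXTURE_2D, null);
            GL.useProgram(null);
        };
    }
}
```

The unbinding at the end is the “disable/unbind what you have used” step - leaving a buffer or program bound can corrupt the display list rendering that happens after your callback.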

There is a cool function, bindBitmapDataTexture, that lets you bind a texture loaded through OpenFL. You can also use the localToGlobal function to find the current position.

Yes and no - there are several effects I know about. Shadows I understand. You can also approximate a spotlight by brightening the colors of everything under its cone … These, I agree, are worth investigating.

What I was really turned on by, though, was what Megasphere was doing http://www.antonkudin.me/megasphere/ It wasn’t just a shadow or brightening up an entire section of the screen - here the changes in colour of the sprite are localised - they affect small portions of the sprite at a time, and differently depending on added information (a normal map) … I am trying to figure out a ‘trick’ way of doing this (i.e. without using 3D accelerated hardware) but have come up short so far …

The reflections you mention are another cool idea, though … I am interested in as many ideas as possible. But right now I’m also fixated on the effects in Megasphere - It just makes things engrossing and dynamic, somehow …

Will need to actually try this to understand what you’re saying better (and whether I’m able to do what I want, which is to keep the Flash graphics capabilities but still have dynamic, 3D-accelerated lighting), but it’s definitely a great start for me … Will get on it. Thank you.

All the implementations I’ve seen use a normal map in tandem with an OpenGL shader. You could try rolling your own software renderer to replace the GPU-accelerated shader, but that will likely be very slow.
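For what it’s worth, here is what that software version would look like - a per-pixel CPU loop over a `BitmapData`, decoding the normal map and computing N · L against a point light. The names (`SoftwareLighting`, `lightSprite`) are made up for the sketch, and this is exactly the kind of code the warning above is about: a nested per-pixel loop in Haxe will be orders of magnitude slower than a fragment shader.

```haxe
import openfl.display.BitmapData;

class SoftwareLighting {

    // CPU approximation of normal-mapped lighting: for each pixel,
    // decode the normal from the normal map, compute N . L toward a
    // point light at (lx, ly, lz), and scale the sprite's color by it.
    public static function lightSprite(albedo:BitmapData, normals:BitmapData,
                                       lx:Float, ly:Float, lz:Float):BitmapData {
        var out = new BitmapData(albedo.width, albedo.height, true, 0);
        for (y in 0...albedo.height) {
            for (x in 0...albedo.width) {
                // Unpack RGB in [0, 255] to a normal in [-1, 1].
                var n = normals.getPixel(x, y);
                var nx = ((n >> 16 & 0xFF) / 127.5) - 1.0;
                var ny = ((n >> 8 & 0xFF) / 127.5) - 1.0;
                var nz = ((n & 0xFF) / 127.5) - 1.0;

                // Direction from this pixel to the light, normalized.
                var dx = lx - x;
                var dy = ly - y;
                var len = Math.sqrt(dx * dx + dy * dy + lz * lz);
                var dot = Math.max(0, (nx * dx + ny * dy + nz * lz) / len);

                // Scale the albedo color by the diffuse term.
                var c = albedo.getPixel32(x, y);
                var a = c >>> 24;
                var r = Std.int((c >> 16 & 0xFF) * dot);
                var g = Std.int((c >> 8 & 0xFF) * dot);
                var b = Std.int((c & 0xFF) * dot);
                out.setPixel32(x, y, (a << 24) | (r << 16) | (g << 8) | b);
            }
        }
        return out;
    }
}
```

For small sprites, or lighting that only needs recomputing when the light moves, this might be tolerable; for full-screen, per-frame lighting it is the slow path the post describes.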

I believe there was some buzz about GLSL bindings for OpenFL a while back. Try pinging Josh for an update on that side of things.

I think there is a SimpleOpenGLView sample, and the HerokuShaders sample.

Yes, the common OpenFL graphics won’t be affected, but OpenGLView will not work on the Flash target.

Yes, that is what I assumed. Unless I can find a very fast (and clever) way, it’s probably a non-starter without going OpenGL …

Sorry for being ridiculous, but by ‘ping’ do you mean email ?

Ok, this makes sense - common Flash doesn’t have OpenGL … Which is a bummer. Being able to publish to Flash is useful.

I’m thinking of just using the OpenFL port of Away3D … It sort of falls within the same line of APIs (i.e. Adobe’s) … Am going to do some experimentation and post what I come up with (basically, how cross-platform a solution mixing things like 2D filters and 3D lighting turns out to be).

I have sort of just realised that what I’m trying to do is fundamentally impossible. You see, I’ve been looking around for visual styles that I like and I’ve found two that I’m trying to mix - the 2D overlays at deepnight.net and the normal-mapped lighting at http://www.antonkudin.me/megasphere/. But it doesn’t make sense - once you give a 3D accelerator (weird wording …) a set of triangles and textures, that’s it - you can’t then come in and mess with the full-screen pixels afterwards … I mean, you don’t have access to the final pixel buffer (like you do in 2D games) …

I’m right, right ? As far as I understand GPU acceleration and hardware, you really don’t get to insert yourself in between the GPU and the screen - you send stuff to the GPU and it talks to the screen directly … It’s not going to, after figuring out what the screen looks like, come back to you and say “hey - you want to do anything more to this?” …

My confusion came with the whole “OpenFL uses OpenGL” thing - because what it’s doing there is just … giving you graphics buffer(s) to play with, and then once you’re done it sends them off to the GPU as a texture to take advantage of some speedy thing over there that I don’t really understand …

Anyway. If anyone thinks I’m wrong I’d be happy to hear but … Starting to feel like what I was hoping to do (mix these two … graphics techniques) is sort of never going to happen by definition …

Email, Tweet, PM, etc. … it’s meant as a catch-all phrase.

From what I understand, that’s what shaders allow you to do … apply transformations per vertex and per pixel (fragment), working against the color, depth and stencil buffers …
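As a rough illustration, here is what the per-fragment version of the normal-map lighting discussed above looks like - a minimal GLSL fragment shader kept as a Haxe string constant, the way shader source is typically embedded. The uniform and varying names (`uAlbedo`, `uNormals`, `uLightDir`, `vTexCoord`) are placeholders and would have to match whatever your vertex shader and binding code use.

```haxe
class Shaders {

    // Minimal GLSL fragment shader: runs once per fragment, decodes the
    // normal from the normal map, and scales the sprite color by N . L.
    public static inline var FRAGMENT:String = "
        varying vec2 vTexCoord;
        uniform sampler2D uAlbedo;
        uniform sampler2D uNormals;
        uniform vec3 uLightDir;   // normalized direction toward the light

        void main() {
            // Unpack [0,1] texture values to a [-1,1] normal.
            vec3 n = texture2D(uNormals, vTexCoord).rgb * 2.0 - 1.0;
            float d = max(dot(normalize(n), uLightDir), 0.0);
            vec4 albedo = texture2D(uAlbedo, vTexCoord);
            gl_FragColor = vec4(albedo.rgb * d, albedo.a);
        }
    ";
}
```

The GPU runs this for every pixel of the sprite in parallel, which is why the shader route is so much faster than the per-pixel CPU loop.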

Got it :wink:

Yeah, that’s what I thought … Though I’m not too brushed up on shaders … I’ve tried several times and I just can’t get my head around it. Going to have to dig a little deeper … Thanks.

I think that you want to do “post-processing”. You render to another buffer instead of to the back buffer. It is tricky to implement. Check also Effects on cpp targets
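The render-to-texture setup behind post-processing can be sketched like this, assuming the WebGL-style GL bindings OpenFL exposes on native targets. The helper name (`PostProcess.createTarget`) is invented for the sketch; the full-screen quad pass that samples the texture is only described in a comment.

```haxe
import openfl.gl.GL;
import openfl.gl.GLFramebuffer;
import openfl.gl.GLTexture;

class PostProcess {

    // Creates an offscreen render target: a texture plus a framebuffer
    // that draws into it. Per frame you bind `fbo`, render the scene,
    // unbind, then draw a full-screen quad sampling `tex` through your
    // post-processing shader into the back buffer.
    public static function createTarget(width:Int, height:Int)
            :{ fbo:GLFramebuffer, tex:GLTexture } {

        // Allocate an empty texture the size of the screen.
        var tex = GL.createTexture();
        GL.bindTexture(GL.TEXTURE_2D, tex);
        GL.texImage2D(GL.TEXTURE_2D, 0, GL.RGBA, width, height, 0,
                      GL.RGBA, GL.UNSIGNED_BYTE, null);
        GL.texParameteri(GL.TEXTURE_2D, GL.TEXTURE_MIN_FILTER, GL.LINEAR);
        GL.texParameteri(GL.TEXTURE_2D, GL.TEXTURE_MAG_FILTER, GL.LINEAR);

        // Attach the texture to a framebuffer as its color target.
        var fbo = GL.createFramebuffer();
        GL.bindFramebuffer(GL.FRAMEBUFFER, fbo);
        GL.framebufferTexture2D(GL.FRAMEBUFFER, GL.COLOR_ATTACHMENT0,
                                GL.TEXTURE_2D, tex, 0);

        // Leave the default framebuffer bound again.
        GL.bindFramebuffer(GL.FRAMEBUFFER, null);
        return { fbo: fbo, tex: tex };
    }
}
```

This is the step that answers the “you don’t get to insert yourself between the GPU and the screen” worry above - with a framebuffer, the GPU renders to a texture you control, and you get one more shader pass before anything reaches the screen.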

That’s interesting, thanks.

I also see that Away3D has filters … And you can write your own, which look to be post-processing … I don’t know what they actually are (shaders?).

Also I read somewhere you can render to texture ? And then … I dunno, do another pass ?

I will do some investigating and report back.