I’m currently implementing my own OpenGL renderer using OpenGLView, because I’m making a custom 3D game engine for my game. I recently finished most other parts of the engine (I’d been using Tilesheet up until now).
My question is: can I get a general outline of how to get graphics on screen efficiently? I can currently get my quads to render, but when I start to zoom in I see slow-down from what I believe to be the fill rate of my graphics card. The confusing part is that I can fill the screen with graphics using Tilesheet with no problem, but I get slow-down (around 25–40 fps) with my own OpenGLView-based renderer. I just want some guidance on optimizing for fill rate, or on how Tilesheet is implemented.
- I’m using VBOs and EBOs.
- All vertices are batched.
- I checked my shaders and they are not the cause of the slow-down.
- The scene I’m testing with only contains 100 quads.
– Thanks, BluFedora
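To put rough numbers on the fill-rate theory: with the same 100 quads, the pixels shaded per frame scale with the on-screen area of each quad, so a 2x zoom roughly quadruples fill cost even though the vertex count never changes. A sketch of the arithmetic (all figures are hypothetical, and Python is used just to illustrate the model):

```python
# Rough fill-cost model: pixels shaded per frame grow with zoom^2,
# while vertex work stays constant. All numbers are made up.

def pixels_filled(num_quads, quad_size_px, zoom, overdraw=1.0):
    """Approximate pixels shaded per frame for square, axis-aligned quads."""
    side = quad_size_px * zoom          # zooming scales each quad's on-screen size
    return num_quads * side * side * overdraw

base = pixels_filled(num_quads=100, quad_size_px=64, zoom=1.0)
zoomed = pixels_filled(num_quads=100, quad_size_px=64, zoom=2.0)
print(zoomed / base)  # 4.0 -> a 2x zoom is ~4x the fill cost
```

This is why zooming in can hurt even with only 100 quads: the GPU shades every covered pixel, not every vertex.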
It could be a lot of different things. Does your code have a lot of array iteration or looping through data? Are there places where variables are being created over and over every frame that could be created once and re-used throughout? Are you using the Z buffer? I don’t think Tilesheet does. These are just a few things to check.
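The “created once and re-used” point can be sketched like this (Python, purely illustrative; the buffer layout is an assumption, not the engine’s actual format): allocate the vertex buffer once and overwrite it in place each frame instead of rebuilding it.

```python
import struct

# Toy sketch: reuse one pre-allocated vertex buffer across frames
# instead of building a new one every frame.

FLOATS_PER_QUAD = 8          # 4 corners x (x, y) -- hypothetical layout
NUM_QUADS = 100

# Allocated once, outside the frame loop.
vertex_data = bytearray(NUM_QUADS * FLOATS_PER_QUAD * 4)

def write_quad(buf, index, x, y, w, h):
    """Overwrite one quad's corner positions in the shared buffer."""
    struct.pack_into("8f", buf, index * FLOATS_PER_QUAD * 4,
                     x, y, x + w, y, x + w, y + h, x, y + h)

for frame in range(3):                    # stand-in for the frame loop
    for i in range(NUM_QUADS):
        write_quad(vertex_data, i, i * 10.0, 0.0, 8.0, 8.0)
    # ...then upload vertex_data in one glBufferSubData-style call
```

The idea is that the per-frame cost becomes overwriting bytes, not allocating and garbage-collecting fresh arrays.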
It isn’t a CPU-bound problem: if I make the window smaller I get over 120 fps, but maximizing the window while somewhat ‘zoomed in’ makes the fps drop. When the vertices are off screen (i.e. the camera is moved away) the fps goes back up. I may need to check the Z buffer, but no variables are being created over and over, and there isn’t much array iteration, since this is in my “Sandbox” for testing.
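The experiment above (small window fast, maximized window slow) can be turned into a rough estimate: if frame time ≈ fixed CPU/driver overhead + pixels / fill rate, then frame times measured at two window sizes pin down both terms. A sketch with made-up numbers (the model and figures are assumptions, not measurements from this engine):

```python
# Sketch: separate per-frame overhead from fill cost using frame
# times measured at two window sizes. All numbers are hypothetical.

def solve_fill_model(pixels_a, time_a, pixels_b, time_b):
    """Fit time = overhead + pixels / fill_rate to two measurements."""
    per_pixel = (time_b - time_a) / (pixels_b - pixels_a)
    overhead = time_a - pixels_a * per_pixel
    return overhead, 1.0 / per_pixel   # seconds, pixels per second

# e.g. a 640x480 window at 120 fps vs a 1920x1080 window at 30 fps
overhead, fill_rate = solve_fill_model(640 * 480, 1 / 120,
                                       1920 * 1080, 1 / 30)
```

If the overhead term comes out small compared to the pixel term at full size, that supports the fill-rate theory.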
Maybe it’s just a viewport thing. In the maximised window there are more pixels to calculate and push to the screen. If you maximise the window in your game, do you get to see more of the world, or does it just show the same picture but larger? If you’re showing more of the world, then there are more vertices to consider too.
I’m not doing any special scaling right now (probably later), so it’s the same image but bigger.
Often, slow-downs are caused by blitting graphics that don’t need to be blitted because they’re outside the field of view: they still get blitted to a virtual part of graphics memory, and then the graphics card gets stuck in a loop trying to figure out what to do with that memory. That’s the only thing I can think of when you’re zooming in.
Graphics cards will sometimes just dump the memory if it’s not needed, but that puts strain on them, and you want to prevent that as much as possible. I’ve had no experience with OpenGLView, though, so perhaps it doesn’t cause that strain. However, I am curious as to why zooming in would slow down rendering; if anything, it should speed things up, because there’s less to render.
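If off-screen geometry is being submitted, one cheap safeguard is to cull quads against the camera rectangle on the CPU before batching, so the GPU never sees them. A minimal sketch (the function names and quad layout are hypothetical, not taken from the engine discussed here):

```python
# Sketch: skip quads that are entirely outside the camera rectangle
# before they are batched, so no off-screen geometry reaches the GPU.

def rects_overlap(ax, ay, aw, ah, bx, by, bw, bh):
    """Axis-aligned overlap test: False means fully off screen."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def cull_quads(quads, cam_x, cam_y, cam_w, cam_h):
    """Keep only quads (x, y, w, h) that touch the camera rect."""
    return [q for q in quads
            if rects_overlap(q[0], q[1], q[2], q[3],
                             cam_x, cam_y, cam_w, cam_h)]

quads = [(0, 0, 32, 32), (500, 500, 32, 32), (-40, -40, 32, 32)]
visible = cull_quads(quads, 0, 0, 100, 100)  # only the first quad survives
```

With only 100 quads this won’t change much, but it rules out the off-screen theory cheaply.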
I’m confused about the zooming-in role reversal in this situation too. I tried making the graphics smaller, but if I zoom in so they end up the same size, I still get the slow-down. I’ll try to check all parts of my renderer, but I’m not sure what to check; I have little experience with GPU-based problems.
Is it possible to send us a video so we can see what you’re seeing?
Ok I will when I get home (high school student).
It’s not perfect (open to optimization ideas!), but take a look at BunnyMark in the Lime samples. It uses something similar to drawTiles (but in raw OpenGL code) and should be worth looking at.
@singmajesty On a side note, is it possible to use the OpenFL display list in a Lime application? I’m thinking about converting my game engine to use just Lime as the core.
You could use OpenFL as a layer over Lime; the “HandlingInputEvents” sample shows how that might look.
Thanks a lot. I’m going to make a renderer with Lime (since it gives greater control) and see if I run into any problems related to my OpenGL rendering code. I’ve been testing out Lime and I love its structure. I only need OpenFL for a certain UI library. I’ll get back to this thread in a few days.