Does this still work? I've been trying to use WebM but so far unsuccessfully. Does it work with the latest Haxe/OpenFL/Lime? If so, would you mind doing me a huge favour and sharing how you got it to work?
When you say "without audio", do you mean the audio needs to be stripped from the video beforehand, or that it simply doesn't play the audio of a video which would otherwise have sound?
Thanks!
Just in case anyone else is interested: I apologise, WebM does still work. You need to make a small change to the code due to some recent changes in the BytesData and ByteArray classes. It does work; however, as Mike says, there is no sound playback.
From a licensing point of view, Theora is free of charge even in commercial apps (BSD-style license), but it gives poor-quality video. FFmpeg can be compiled with support for Theora/Ogg (enabling just these two codecs reduces the size greatly) to get a fast YCbCr-to-RGB transform (or, omitting FFmpeg, with SDL, displaying YCbCr quickly using YUV surfaces)…
Eventually this may be worth considering: http://xiph.org/daala/
I think Theora is a good first step in terms of a safe license and being simple to build, and (I think) it helps pave the path to APIs around Lime VideoBuffers and other APIs for managing video. Then, with some of this groundwork in place, we can look at optimizing, either in performance or with newer/smarter codecs. Theora is pretty good (in the tests I've seen), though not as good as some competing codecs, but I think it would be a great step forward.
What data should such a Lime video implementation receive from the decoder? A raw RGB array?
And I have a question about integrating the decoder: should it work in a non-blocking way, for example where every frame of the game asks for the corresponding frame of the movie, or should a separate thread be created for that? And how is the current audio engine handled? Is it streamed, or is all the data kept in RAM? I'm trying to understand the audio system (I have programmed in C with OpenAL and PortAudio before, with Ogg and Theora as well), but I'm lost in the connection between Haxe and C.
On the native side we currently use OGG and WAV decoders; these are implemented as a one-time, fully synchronous operation. We should be able to use threads to decode OGG "as we go" in the future; we had this before, it just needs to be implemented again. Lime has OpenAL exposed to Haxe completely, so under the hood the Lime "AudioSource" API (the simple "play a sound for me, please" convenience layer) uses OpenAL, but OpenAL is also available directly.
I think these are the questions we need to answer. Having an API where you request a specific frame seems reasonable, but we may need to allow for raw YUV data rather than assuming an RGB format. If you use OpenGL shaders, you should be able to use the YUV planes directly on the GPU rather than doing a software conversion to RGBA.
For sound, I assume we would probably have a matching AudioBuffer that corresponds with playback of the video. I think first steps in this space will help us feel out a good API.
Thank you for the detailed reply.
I've tried to create a Theora decoder; actually, I copied the idea from AudioBuffer.cpp. I am now able to decode a frame and retrieve the video length, height and an RGBA array of video data from lime.ndll. The RGBA data is passed in the same way as the audio data from AudioBuffer, but I have two questions about it:
Is the data, in the form of Bytes, passed from Lime as a pointer, or is it all copied?
And what would be the best way of displaying this array? I've created a Lime VideoBuffer.hx, which gets the exported video information from lime.ndll (tracing the video width and height from the compiled OpenFL app shows correct values), but I have a problem with displaying the RGBA array. I tried to push it into an OpenFL Bitmap or BitmapData somehow, but didn't succeed. Hopefully it can be done without copying pixels.
In my own C lib, I decoded frames into an RGBA array and drew them with OpenGL's glDrawPixels.
I'll check the shader option also :)
Edit:
Ah, I saw the openfl-webm example; I'll try to add displaying the data based on that project.
You can modify the data pointer directly (if you use the image), then you need to flag image.dirty as true to see the changes. Note that we use BGRA premultiplied surfaces by default, though opaque video shouldn't be affected by premultiplication.
Oh, it worked, thank you :) I'm trying to create an Android app as well, but haven't succeeded yet. I'll clean up the code and post it; maybe it can be reused somehow.
Basically I wanted to add Android support as well, and a proper way of quitting ;), but for now it looks like this:
It is a demo for Windows, with the Big Buck Bunny trailer.
lime source:
and example source:
What's missing: timers (the movie is decoded as fast as it can be), audio decoding (there are a few ways it could be done, so I left it untouched for now), a sensible and useful API, faster frame decoding, and fixing the MSVC, Mac and iOS compiler flags (I've tested on Linux and MinGW only; other targets may need some tweaks in theora/files.xml)…
And there is a problem with memory allocation: the application's memory usage grows constantly (probably VideoFile.hx allocates new data for every frame; in lime.ndll the decoding is done in place). If anyone has a solution for this, it would be good.
I've tried adding libyuv from the WebM package to speed up the YUV-to-RGB conversion, but it turned out to take about the same time as my own naive conversion function. So it could be sped up by going to SSE/assembly (but I don't know about the portability of SSE code) or to shaders. Since I don't know either of those two approaches, I'll probably leave it as it is.
I've sat on it a little bit more and done simple rendering with an SDL texture. SDL is fast, so I put Big Buck Bunny here in 1080p (about 1900 x 1000) for eventual tests. You can check it too; of course the movie is still rendered as fast as it can be, and there is no sound yet, but no memory leaks.
(I renamed the folder limevid.) Is this something you recognize, or is this a problem on my end? I have run "lime setup windows" and "lime rebuild windows". I get this error after running "lime rebuild windows":
Thanks again - so pleased you've made so much progress with video, and that you're so willing to share.