I am reading through the source for the Video and VideoTexture objects. For the Canvas and WebGL/OpenGL cases I can see how there is a method available for the renderer to cause a frame to be copied from the non-displayed HTML5 video element to either the canvas or a GL texture. What I am not finding is what causes the renderer to request a redraw, whether on a time base or when a new frame becomes available. I have waded through the source for those two objects, NetStream, and the internal video rendering methods. Can someone point me towards the right place?
Side note, I was sure I’d seen something like this that was listening to the timeupdate event for the video element, but now I can’t find it, even looking at different branches in GitHub.
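For reference, the pattern I thought I had seen (copying a frame to the canvas or texture from a `timeupdate` listener, only when `currentTime` has actually advanced) would look roughly like the sketch below. This is just my illustration of the idea, not code from the library; `frameAdvanced` is a hypothetical helper name:

```typescript
// Pure helper: a new frame is only worth copying when the video's
// currentTime has moved since the last copy.
function frameAdvanced(lastTime: number, currentTime: number): boolean {
  return currentTime !== lastTime;
}

// Illustrative DOM wiring (not exercised here), assuming a hidden
// HTML5 video element feeding a canvas or GL texture:
//
// const video = document.createElement("video");
// let lastCopied = -1;
// video.addEventListener("timeupdate", () => {
//   if (frameAdvanced(lastCopied, video.currentTime)) {
//     lastCopied = video.currentTime;
//     // copy the frame, e.g. ctx.drawImage(video, 0, 0)
//     // or gl.texImage2D(..., video) for the texture case
//   }
// });
```

Note that `timeupdate` fires at a browser-dependent rate (often only ~4 times per second), so an implementation might instead poll from the render loop or use `requestVideoFrameCallback` where available.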
Anyway, I am trying to understand how this works to see whether it will work well for a project we have in mind.