Bug 933 - Force GLMediaPlayer not to depend on machine time
Summary: Force GLMediaPlayer not to depend on machine time
Status: CONFIRMED
Alias: None
Product: Jogl
Classification: JogAmp
Component: video
Version: 3.0.0
Hardware: All
OS: All
Importance: P4 enhancement
Assignee: Sven Gothel
URL:
Depends on:
Blocks:
 
Reported: 2013-12-29 23:00 CET by Ertong
Modified: 2023-07-12 01:41 CEST
CC: 1 user

See Also:
Type: FEATURE
SCM Refs:
Workaround: ---


Description Ertong 2013-12-29 23:00:39 CET
I'm trying to build a video renderer using jogl.

The target FPS of the video is fixed (25, for example).
But rendering speed can be different.
So, for example, if rendering FPS is 100, I can render 60 sec video in 60*25/100=15 sec.

In this case I cannot rely on the system time, and my display() method should calculate its own "time".
E.g. if we are rendering frame #100, display() should build the scene that is expected to be seen at the 100/25=4th second of the resulting video.
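
A minimal sketch of that idea (class and method names here are
illustrative, not JOGL API): the scene time is derived purely from the
frame counter, never from the machine clock.

  // Fixed-FPS time base: scene time comes from the frame counter.
  public class FrameTimeBase {
      private static final int TARGET_FPS = 25; // destination frame rate
      private long frameNo = 0;

      // Scene time in ms for the frame being rendered,
      // e.g. frame #100 -> 4000 ms at 25 fps.
      public long sceneTimeMillis() {
          return frameNo * 1000L / TARGET_FPS;
      }

      // Advance once per rendered frame, regardless of wall-clock speed.
      public void nextFrame() {
          frameNo++;
      }
  }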

Well, now I'm trying to use GLMediaPlayer to include an embedded video in the scene.
But getNextTexture() is hard-coded to use system time.
As a result, the embedded video in the rendered output plays at the wrong speed.

There is a setPlaySpeed() method, but it helps only if the rendering FPS is constant, which is not true in practice.

I've tried to use seek(), but it does not work properly when called on every display().

I've tried to subclass FFMPEGMediaPlayer, but there are too many private fields.

In this case it would be logical to implement something like getNextTexture(long time), where time is the time from the beginning of the embedded video.

Original forum thread: http://forum.jogamp.org/How-force-GLMediaPlayer-not-to-depend-on-machine-time-td4031077.html
Comment 1 Sven Gothel 2013-12-29 23:19:57 CET
I see you have reused the same description,
and I don't fully understand your use case.

The player shall give you the frame matching real time at the moment
getNextFrame() is called, i.e.:
  display: 100 fps
  video:    20 fps

  display call # 100 @ 1s -> video frame # 20
  display call # 200 @ 2s -> video frame # 40

Hence video frames will be redisplayed 'naturally' to keep 
the video in sync with real time.

This use case is already working properly.
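
In code terms, that mapping is plain proportional arithmetic (example
values as above):

  long elapsedMillis = 2000;  // 2 s of real time, ~200 display calls
  int videoFps = 20;
  long videoFrameNo = elapsedMillis * videoFps / 1000;  // -> frame #40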

Please elaborate on your use case.

+++

For editing, or for using a different time base, all we can do here IMHO
is to not consider time at all in getNextFrame() and simply deliver the
next consecutive frame. The time of this frame will be included in the
TextureFrame itself, hence the caller can take care of any required actions here.
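
A rough caller-side sketch of that approach (assuming the frame exposes
its presentation timestamp via TextureFrame.getPTS(); the method and
sceneTimeMillis are illustrative):

  // Pace the stream from the frame's own timestamp instead of the
  // wall clock; sceneTimeMillis comes from the caller's time base.
  void updateVideoTexture(GLMediaPlayer player, GL gl, long sceneTimeMillis) {
      TextureSequence.TextureFrame frame = player.getNextTexture(gl);
      if (frame != null && frame.getPTS() <= sceneTimeMillis) {
          frame.getTexture().bind(gl);  // frame is due at caller's time
      }
  }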

One problem with this approach may be audio, since audio playback will
always block until buffers become available - if used.
We may either need a different AudioSink (memory only) or disable it.

The above will be mostly usable for editing IMHO,
but should be feasible for other 'time base issues' as well?

+++
Comment 2 Ertong 2013-12-29 23:47:30 CET
I'll try to describe the use case in different words.

I'm trying to render OpenGL frames to a different video.
For example, I have a movie, play it on a plane, and save the result to a new video. As a result, I'm interested only in the resulting video. The rendering itself can even be done offscreen.

In this case:
Source video: 25 fps.
Rendering: 100 fps.
Destination video: 25 fps.

If you tie the movie to 100 fps, then when playing the destination video, the original one will play 4 times faster.

----
>>is to not consider time at all in getNextFrame()
>>and simply deliver the next consecutive frame.
As far as I understand, you mean getNextTexture().
Hm ... I didn't know that TextureSequence.TextureFrame has a timestamp.
It will help.
But it would be really useful to have a time-independent implementation of getNextTexture().

>>One problem with this approach may be audio
In this case it is not necessary to keep audio in sync while playing.
At this stage, I'm planning to manage audio afterwards.
So, audio will be turned off while rendering.
But this is only my case; maybe somebody will have different needs. For example, it could be useful to enumerate audio frames along with video ones.
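
For the "audio turned off" part, a sketch of disabling audio decode at
initialization (the exact initStream signature has varied across JOGL
versions, so treat the parameters and the player/movieUri variables as
illustrative; the STREAM_ID_* constants are part of GLMediaPlayer):

  // Request the default video stream and no audio stream at all.
  player.initStream(movieUri,
                    GLMediaPlayer.STREAM_ID_AUTO,  // auto-select video
                    GLMediaPlayer.STREAM_ID_NONE,  // disable audio
                    GLMediaPlayer.TEXTURE_COUNT_DEFAULT);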
Comment 3 Sven Gothel 2013-12-29 23:58:06 CET
(In reply to comment #2)
> I'll try to describe the use case in different words.
> 
> I'm trying to render OpenGL frames to a different video.
> For example, I have a movie, play it on a plane, and save the result to
> a new video. As a result, I'm interested only in the resulting video.
> The rendering itself can even be done offscreen.
> 
> In this case:
> Source video: 25 fps.
> Rendering: 100 fps.
> Destination video: 25 fps.
> 
> If you tie the movie to 100 fps, then when playing the destination
> video, the original one will play 4 times faster.

The current player would render 25 frames per second
if GL rendering is above that, i.e. 100 fps.

So you are saying you want to play it 4 times faster, i.e. at maximum
renderable speed? (As described in my forum reply .. 'video editing').

Sorry for not fully understanding your case here.

> 
> ----
> >>is to not consider time at all in getNextFrame()
> >>and simply deliver the next consecutive frame.
> As far as I understand, you mean getNextTexture().
> Hm ... I didn't know that TextureSequence.TextureFrame has a timestamp.
> It will help.
> But it would be really useful to have a time-independent implementation
> of getNextTexture().

Surely we could simply turn off time-based a/v sync;
then getNextTexture() would deliver the next consecutive frame,
blocking until it becomes available. We may turn off the StreamWorker thread here.
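
Until such a mode exists, a caller-side approximation might look like
this (again assuming TextureFrame.getPTS(); nextFrameBlocking and the
polling interval are illustrative, not JOGL API):

  // Emulate a blocking "next consecutive frame": re-query until the
  // delivered frame's PTS differs from the last one consumed.
  static TextureSequence.TextureFrame nextFrameBlocking(
          GLMediaPlayer p, GL gl, int lastPts) throws InterruptedException {
      TextureSequence.TextureFrame f = p.getNextTexture(gl);
      while (f == null || f.getPTS() == lastPts) {
          Thread.sleep(1);  // yield until the decoder has a new frame
          f = p.getNextTexture(gl);
      }
      return f;
  }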

There will always be a 'time base' in motion pictures,
i.e. at least the sequence between I-frames, of course.
Hence a method like getNext(long time) will not be too useful,
as your experimentation with seek() turned out ..

> 
> >>One problem with this approach may be audio
> In this case it is not necessary to keep audio in sync while playing.
> At this stage, I'm planning to manage audio afterwards.
> So, audio will be turned off while rendering.
> But this is only my case; maybe somebody will have different needs.
> For example, it could be useful to enumerate audio frames along with
> video ones.
Comment 4 Ertong 2013-12-30 00:04:11 CET
>>Surely we could simply turn off time-based a/v sync;
>>then getNextTexture() would deliver the next consecutive frame,
>>blocking until it becomes available. We may turn off the
>>StreamWorker thread here.
This ability will be enough for my case.
Thanks.
Comment 5 Ertong 2013-12-30 13:41:58 CET
And in addition to that ... can you implement a blocking version of seek()?
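
Pending such an API, a caller-side approximation (seek() and
getVideoPTS() are existing GLMediaPlayer methods; the tolerance and
the polling loop are illustrative):

  // Issue the asynchronous seek, then poll the reported video PTS
  // until it lands near the target position.
  static void blockingSeek(GLMediaPlayer p, int targetMillis)
          throws InterruptedException {
      p.seek(targetMillis);
      while (Math.abs(p.getVideoPTS() - targetMillis) > 100) { // ~0.1 s
          Thread.sleep(5);
      }
  }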