Coverage Report

Created: 2026-02-24 07:11

/src/gstreamer/subprojects/gst-plugins-base/gst-libs/gst/video/gstvideodecoder.c
Line | Count | Source
   1 |       | /* GStreamer
   2 |       |  * Copyright (C) 2008 David Schleef <ds@schleef.org>
   3 |       |  * Copyright (C) 2011 Mark Nauwelaerts <mark.nauwelaerts@collabora.co.uk>.
   4 |       |  * Copyright (C) 2011 Nokia Corporation. All rights reserved.
   5 |       |  *   Contact: Stefan Kost <stefan.kost@nokia.com>
   6 |       |  * Copyright (C) 2012 Collabora Ltd.
   7 |       |  *  Author : Edward Hervey <edward@collabora.com>
   8 |       |  *
   9 |       |  * This library is free software; you can redistribute it and/or
  10 |       |  * modify it under the terms of the GNU Library General Public
  11 |       |  * License as published by the Free Software Foundation; either
  12 |       |  * version 2 of the License, or (at your option) any later version.
  13 |       |  *
  14 |       |  * This library is distributed in the hope that it will be useful,
  15 |       |  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  16 |       |  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
  17 |       |  * Library General Public License for more details.
  18 |       |  *
  19 |       |  * You should have received a copy of the GNU Library General Public
  20 |       |  * License along with this library; if not, write to the
  21 |       |  * Free Software Foundation, Inc., 51 Franklin St, Fifth Floor,
  22 |       |  * Boston, MA 02110-1301, USA.
  23 |       |  */
  24 |       |
  25 |       | /**
  26 |       |  * SECTION:gstvideodecoder
  27 |       |  * @title: GstVideoDecoder
  28 |       |  * @short_description: Base class for video decoders
  29 |       |  *
  30 |       |  * This base class is for video decoders turning encoded data into raw video
  31 |       |  * frames.
  32 |       |  *
  33 |       |  * The GstVideoDecoder base class and derived subclasses should cooperate as
  34 |       |  * follows:
  35 |       |  *
  36 |       |  * ## Configuration
  37 |       |  *
  38 |       |  *   * Initially, GstVideoDecoder calls @start when the decoder element
  39 |       |  *     is activated, which allows the subclass to perform any global setup.
  40 |       |  *
  41 |       |  *   * GstVideoDecoder calls @set_format to inform the subclass of caps
  42 |       |  *     describing input video data that it is about to receive, including
  43 |       |  *     possibly configuration data.
  44 |       |  *     While unlikely, it might be called more than once if changing input
  45 |       |  *     parameters require reconfiguration.
  46 |       |  *
  47 |       |  *   * Incoming data buffers are processed as needed, as described in Data
  48 |       |  *     Processing below.
  49 |       |  *
  50 |       |  *   * GstVideoDecoder calls @stop at the end of all processing.
  51 |       |  *
  52 |       |  * ## Data processing
  53 |       |  *
  54 |       |  *   * The base class gathers input data, and optionally allows the subclass
  55 |       |  *     to parse this into manageable chunks, typically
  56 |       |  *     corresponding to and referred to as 'frames'.
  57 |       |  *
  58 |       |  *   * Each input frame is provided in turn to the subclass' @handle_frame
  59 |       |  *     callback.
  60 |       |  *   * When the subclass enables subframe mode with `gst_video_decoder_set_subframe_mode`,
  61 |       |  *     the base class will provide the same input frame, with
  62 |       |  *     different input buffers, to the subclass' @handle_frame
  63 |       |  *     callback. During this call, the subclass needs to take
  64 |       |  *     ownership of the input_buffer as @GstVideoCodecFrame.input_buffer
  65 |       |  *     will have been changed before the next subframe buffer is received.
  66 |       |  *     The subclass will call `gst_video_decoder_have_last_subframe`
  67 |       |  *     when a new input frame can be created by the base class.
  68 |       |  *     Every subframe will share the same @GstVideoCodecFrame.output_buffer
  69 |       |  *     to write the decoding result. The subclass is responsible for
  70 |       |  *     protecting access to it.
  71 |       |  *
  72 |       |  *   * If codec processing results in decoded data, the subclass should call
  73 |       |  *     @gst_video_decoder_finish_frame to have decoded data pushed
  74 |       |  *     downstream. In subframe mode
  75 |       |  *     the subclass should call @gst_video_decoder_finish_subframe until the
  76 |       |  *     last subframe, where it should call @gst_video_decoder_finish_frame.
  77 |       |  *     The subclass can detect the last subframe using GST_VIDEO_BUFFER_FLAG_MARKER
  78 |       |  *     on buffers or using its own logic to collect the subframes.
  79 |       |  *     In case of decoding failure, the subclass must call
  80 |       |  *     @gst_video_decoder_drop_frame or @gst_video_decoder_drop_subframe,
  81 |       |  *     to allow the base class to do timestamp and offset tracking, and possibly
  82 |       |  *     to requeue the frame for a later attempt in the case of reverse playback.
  83 |       |  *
  84 |       |  * ## Shutdown phase
  85 |       |  *
  86 |       |  *   * The GstVideoDecoder class calls @stop to inform the subclass that data
  87 |       |  *     parsing will be stopped.
  88 |       |  *
  89 |       |  * ## Additional Notes
  90 |       |  *
  91 |       |  *   * Seeking/Flushing
  92 |       |  *
  93 |       |  *     * When the pipeline is seeked or otherwise flushed, the subclass is
  94 |       |  *       informed via a call to its @reset callback, with the hard parameter
  95 |       |  *       set to true. This indicates the subclass should drop any internal data
  96 |       |  *       queues and timestamps and prepare for a fresh set of buffers to arrive
  97 |       |  *       for parsing and decoding.
  98 |       |  *
  99 |       |  *   * End Of Stream
 100 |       |  *
 101 |       |  *     * At end-of-stream, the subclass @parse function may be called some final
 102 |       |  *       times with the at_eos parameter set to true, indicating that the element
 103 |       |  *       should not expect any more data to be arriving, and it should parse any
 104 |       |  *       remaining frames and call gst_video_decoder_have_frame() if possible.
 105 |       |  *
 106 |       |  * The subclass is responsible for providing pad template caps for
 107 |       |  * source and sink pads. The pads need to be named "sink" and "src". It also
 108 |       |  * needs to provide information about the output caps, when they are known.
 109 |       |  * This may be when the base class calls the subclass' @set_format function,
 110 |       |  * though it might be during decoding, before calling
 111 |       |  * @gst_video_decoder_finish_frame. This is done via
 112 |       |  * @gst_video_decoder_set_output_state
 113 |       |  *
 114 |       |  * The subclass is also responsible for providing (presentation) timestamps
 115 |       |  * (likely based on corresponding input ones).  If that is not applicable
 116 |       |  * or possible, the base class provides limited framerate-based interpolation.
 117 |       |  *
 118 |       |  * Similarly, the base class provides some limited (legacy) seeking support
 119 |       |  * if specifically requested by the subclass, as full-fledged support
 120 |       |  * should rather be left to an upstream demuxer, parser or the like.  This simple
 121 |       |  * approach caters for seeking and duration reporting using estimated input
 122 |       |  * bitrates. To enable it, a subclass should call
 123 |       |  * @gst_video_decoder_set_estimate_rate to enable handling of incoming
 124 |       |  * byte-streams.
 125 |       |  *
 126 |       |  * The base class provides some support for reverse playback, in particular
 127 |       |  * in case incoming data is not packetized or upstream does not provide
 128 |       |  * fragments on keyframe boundaries.  However, the subclass should then be
 129 |       |  * prepared for the parsing and frame processing stage to occur separately
 130 |       |  * (in normal forward processing, the latter immediately follows the former).
 131 |       |  * The subclass also needs to ensure the parsing stage properly marks
 132 |       |  * keyframes, unless it knows the upstream elements will do so properly for
 133 |       |  * incoming data.
 134 |       |  *
 135 |       |  * The bare minimum that a functional subclass needs to implement is:
 136 |       |  *
 137 |       |  *   * Provide pad templates
 138 |       |  *   * Inform the base class of output caps via
 139 |       |  *      @gst_video_decoder_set_output_state
 140 |       |  *
 141 |       |  *   * Parse input data, if it is not considered packetized from upstream.
 142 |       |  *      Data will be provided to @parse, which should invoke
 143 |       |  *      @gst_video_decoder_add_to_frame and @gst_video_decoder_have_frame to
 144 |       |  *      separate the data belonging to each video frame.
 145 |       |  *
 146 |       |  *   * Accept data in @handle_frame and provide decoded results to
 147 |       |  *      @gst_video_decoder_finish_frame, or call @gst_video_decoder_drop_frame.
 148 |       |  */
 149 |       |
 150 |       | #ifdef HAVE_CONFIG_H
 151 |       | #include "config.h"
 152 |       | #endif
 153 |       |
 154 |       | /* TODO
 155 |       |  *
 156 |       |  * * Add a flag/boolean for I-frame-only/image decoders so we can do extra
 157 |       |  *   features, like applying QoS on input (as opposed to after the frame is
 158 |       |  *   decoded).
 159 |       |  * * Add a flag/boolean for decoders that require keyframes, so the base
 160 |       |  *   class can automatically discard non-keyframes before one has arrived
 161 |       |  * * Detect reordered frames/timestamps and fix the pts/dts
 162 |       |  * * Support for GstIndex (or shall we not care ?)
 163 |       |  * * Calculate actual latency based on input/output timestamp/frame_number
 164 |       |  *   and if it exceeds the recorded one, save it and emit a GST_MESSAGE_LATENCY
 165 |       |  * * Emit latency message when it changes
 166 |       |  *
 167 |       |  */
 168 |       |
 169 |       | /* Implementation notes:
 170 |       |  * The Video Decoder base class operates in 2 primary processing modes, depending
 171 |       |  * on whether forward or reverse playback is requested.
 172 |       |  *
 173 |       |  * Forward playback:
 174 |       |  *   * Incoming buffer -> @parse() -> add_to_frame()/have_frame() ->
 175 |       |  *     handle_frame() -> push downstream
 176 |       |  *
 177 |       |  * Reverse playback is more complicated, since it involves gathering incoming
 178 |       |  * data regions as we loop backwards through the upstream data. The processing
 179 |       |  * concept (treating incoming buffers as containing one frame each, to simplify
 180 |       |  * things) is:
 181 |       |  *
 182 |       |  * Upstream data we want to play:
 183 |       |  *  Buffer encoded order:  1  2  3  4  5  6  7  8  9  EOS
 184 |       |  *  Keyframe flag:            K        K
 185 |       |  *  Groupings:             AAAAAAA  BBBBBBB  CCCCCCC
 186 |       |  *
 187 |       |  * Input:
 188 |       |  *  Buffer reception order:  7  8  9  4  5  6  1  2  3  EOS
 189 |       |  *  Keyframe flag:                       K        K
 190 |       |  *  Discont flag:            D        D        D
 191 |       |  *
 192 |       |  * - Each Discont marks a discontinuity in the decoding order.
 193 |       |  * - The keyframes mark where we can start decoding.
 194 |       |  *
 195 |       |  * Initially, we prepend incoming buffers to the gather queue. Whenever the
 196 |       |  * discont flag is set on an incoming buffer, the gather queue is flushed out
 197 |       |  * before the new buffer is collected.
 198 |       |  *
 199 |       |  * The above data will be accumulated in the gather queue like this:
 200 |       |  *
 201 |       |  *   gather queue:  9  8  7
 202 |       |  *                        D
 203 |       |  *
 204 |       |  * When buffer 4 is received (with a DISCONT), we flush the gather queue like
 205 |       |  * this:
 206 |       |  *
 207 |       |  *   while (gather)
 208 |       |  *     take head of queue and prepend to parse queue (this reverses the
 209 |       |  *     sequence, so parse queue is 7 -> 8 -> 9)
 210 |       |  *
 211 |       |  *   Next, we process the parse queue, which now contains all un-parsed packets
 212 |       |  *   (including any leftover ones from the previous decode section)
 213 |       |  *
 214 |       |  *   for each buffer now in the parse queue:
 215 |       |  *     Call the subclass parse function, prepending each resulting frame to
 216 |       |  *     the parse_gather queue. Buffers which precede the first one that
 217 |       |  *     produces a parsed frame are retained in the parse queue for
 218 |       |  *     re-processing on the next cycle of parsing.
 219 |       |  *
 220 |       |  *   The parse_gather queue now contains frame objects ready for decoding,
 221 |       |  *   in reverse order.
 222 |       |  *   parse_gather: 9 -> 8 -> 7
 223 |       |  *
 224 |       |  *   while (parse_gather)
 225 |       |  *     Take the head of the queue and prepend it to the decode queue
 226 |       |  *     If the frame was a keyframe, process the decode queue
 227 |       |  *   decode is now 7-8-9
 228 |       |  *
 229 |       |  *  Processing the decode queue results in frames with attached output buffers
 230 |       |  *  stored in the 'output_queue' ready for outputting in reverse order.
 231 |       |  *
 232 |       |  * After we flushed the gather queue and parsed it, we add 4 to the (now empty)
 233 |       |  * gather queue. We get the following situation:
 234 |       |  *
 235 |       |  *  gather queue:    4
 236 |       |  *  decode queue:    7  8  9
 237 |       |  *
 238 |       |  * After we received 5 (Keyframe) and 6:
 239 |       |  *
 240 |       |  *  gather queue:    6  5  4
 241 |       |  *  decode queue:    7  8  9
 242 |       |  *
 243 |       |  * When we receive 1 (DISCONT) which triggers a flush of the gather queue:
 244 |       |  *
 245 |       |  *   Copy head of the gather queue (6) to decode queue:
 246 |       |  *
 247 |       |  *    gather queue:    5  4
 248 |       |  *    decode queue:    6  7  8  9
 249 |       |  *
 250 |       |  *   Copy head of the gather queue (5) to decode queue. This is a keyframe so we
 251 |       |  *   can start decoding.
 252 |       |  *
 253 |       |  *    gather queue:    4
 254 |       |  *    decode queue:    5  6  7  8  9
 255 |       |  *
 256 |       |  *   Decode frames in the decode queue, storing raw decoded data in the output
 257 |       |  *   queue: we take the head of the decode queue and prepend the decoded result
 258 |       |  *   to the output queue:
 259 |       |  *
 260 |       |  *    gather queue:    4
 261 |       |  *    decode queue:
 262 |       |  *    output queue:    9  8  7  6  5
 263 |       |  *
 264 |       |  *   Now output all the frames in the output queue, picking a frame from the
 265 |       |  *   head of the queue.
 266 |       |  *
 267 |       |  *   Copy the head of the gather queue (4) to the decode queue; the gather queue
 268 |       |  *   is flushed and can now store the next input buffer:
 269 |       |  *
 270 |       |  *    gather queue:    1
 271 |       |  *    decode queue:    4
 272 |       |  *
 273 |       |  *  When we receive EOS, the queue looks like:
 274 |       |  *
 275 |       |  *    gather queue:    3  2  1
 276 |       |  *    decode queue:    4
 277 |       |  *
 278 |       |  *  Fill decode queue, first keyframe we copy is 2:
 279 |       |  *
 280 |       |  *    gather queue:    1
 281 |       |  *    decode queue:    2  3  4
 282 |       |  *
 283 |       |  *  Decoded output:
 284 |       |  *
 285 |       |  *    gather queue:    1
 286 |       |  *    decode queue:
 287 |       |  *    output queue:    4  3  2
 288 |       |  *
 289 |       |  *  Leftover buffer 1 cannot be decoded and must be discarded.
 290 |       |  */
 291 |       |
 292 |       | #include "gstvideodecoder.h"
 293 |       | #include "gstvideoutils.h"
 294 |       | #include "gstvideoutilsprivate.h"
 295 |       |
 296 |       | #include <gst/video/video.h>
 297 |       | #include <gst/video/video-event.h>
 298 |       | #include <gst/video/gstvideopool.h>
 299 |       | #include <gst/video/gstvideometa.h>
 300 |       | #include <string.h>
 301 |       |
 302 |       | GST_DEBUG_CATEGORY (videodecoder_debug);
 303 |       | #define GST_CAT_DEFAULT videodecoder_debug
 304 |       |
 305 |       | /* properties */
 306 |     6 | #define DEFAULT_QOS                 TRUE
 307 |     2 | #define DEFAULT_MAX_ERRORS          GST_VIDEO_DECODER_MAX_ERRORS
 308 |     2 | #define DEFAULT_MIN_FORCE_KEY_UNIT_INTERVAL 0
 309 |     2 | #define DEFAULT_DISCARD_CORRUPTED_FRAMES FALSE
 310 |     6 | #define DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS FALSE
 311 |     6 | #define DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS (GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT | GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT)
 312 |       |
 313 |       | /* Used for request_sync_point_frame_number. These are out of range for the
 314 |       |  * frame numbers and can be given special meaning */
 315 |     2 | #define REQUEST_SYNC_POINT_PENDING G_MAXUINT + 1
 316 |    12 | #define REQUEST_SYNC_POINT_UNSET G_MAXUINT64
 317 |       |
 318 |       | enum
 319 |       | {
 320 |       |   PROP_0,
 321 |       |   PROP_QOS,
 322 |       |   PROP_MAX_ERRORS,
 323 |       |   PROP_MIN_FORCE_KEY_UNIT_INTERVAL,
 324 |       |   PROP_DISCARD_CORRUPTED_FRAMES,
 325 |       |   PROP_AUTOMATIC_REQUEST_SYNC_POINTS,
 326 |       |   PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
 327 |       | };
 328 |       |
 329 |       | struct _GstVideoDecoderPrivate
 330 |       | {
 331 |       |   /* FIXME introduce a context ? */
 332 |       |
 333 |       |   GstBufferPool *pool;
 334 |       |   GstAllocator *allocator;
 335 |       |   GstAllocationParams params;
 336 |       |
 337 |       |   /* parse tracking */
 338 |       |   /* input data */
 339 |       |   GstAdapter *input_adapter;
 340 |       |   /* assembles current frame */
 341 |       |   GstAdapter *output_adapter;
 342 |       |
 343 |       |   /* Whether we attempt to convert newsegment from bytes to
 344 |       |    * time using a bitrate estimation */
 345 |       |   gboolean do_estimate_rate;
 346 |       |
 347 |       |   /* Whether input is considered packetized or not */
 348 |       |   gboolean packetized;
 349 |       |
 350 |       |   /* whether input is considered as subframes */
 351 |       |   gboolean subframe_mode;
 352 |       |
 353 |       |   /* Error handling */
 354 |       |   gint max_errors;
 355 |       |   gint error_count;
 356 |       |   gboolean had_output_data;
 357 |       |   gboolean had_input_data;
 358 |       |
 359 |       |   gboolean needs_format;
 360 |       |   /* input_segment and output_segment are identical */
 361 |       |   gboolean in_out_segment_sync;
 362 |       |
 363 |       |   /* TRUE if we have an active set of instant rate flags */
 364 |       |   gboolean decode_flags_override;
 365 |       |   GstSegmentFlags decode_flags;
 366 |       |
 367 |       |   /* ... being tracked here;
 368 |       |    * only available during parsing or when doing subframe decoding */
 369 |       |   GstVideoCodecFrame *current_frame;
 370 |       |   /* events that should apply to the current frame */
 371 |       |   /* FIXME 2.0: Use a GQueue or similar, see GstVideoCodecFrame::events */
 372 |       |   GList *current_frame_events;
 373 |       |   /* events that should be pushed before the next frame */
 374 |       |   /* FIXME 2.0: Use a GQueue or similar, see GstVideoCodecFrame::events */
 375 |       |   GList *pending_events;
 376 |       |
 377 |       |   /* relative offset of input data */
 378 |       |   guint64 input_offset;
 379 |       |   /* relative offset of frame */
 380 |       |   guint64 frame_offset;
 381 |       |   /* tracking ts and offsets */
 382 |       |   GQueue timestamps;
 383 |       |
 384 |       |   /* last outgoing ts */
 385 |       |   GstClockTime last_timestamp_out;
 386 |       |   /* incoming pts - dts */
 387 |       |   GstClockTime pts_delta;
 388 |       |   gboolean reordered_output;
 389 |       |   /* If reordered_output, counts the number of output frames which have ordered
 390 |       |    * pts. This is to be able to recover from temporary glitches.  */
 391 |       |   guint consecutive_orderered_output;
 392 |       |
 393 |       |   /* FIXME: Consider using a GQueue or other better fitting data structure */
 394 |       |   /* reverse playback */
 395 |       |   /* collect input */
 396 |       |   GList *gather;
 397 |       |   /* to-be-parsed */
 398 |       |   GList *parse;
 399 |       |   /* collected parsed frames */
 400 |       |   GList *parse_gather;
 401 |       |   /* frames to be handled == decoded */
 402 |       |   GList *decode;
 403 |       |   /* collected output - of buffer objects, not frames */
 404 |       |   GList *output_queued;
 405 |       |
 406 |       |   /* Properties */
 407 |       |   GstClockTime min_force_key_unit_interval;
 408 |       |   gboolean discard_corrupted_frames;
 409 |       |
 410 |       |   /* Key unit related state */
 411 |       |   gboolean needs_sync_point;
 412 |       |   GstVideoDecoderRequestSyncPointFlags request_sync_point_flags;
 413 |       |   guint64 request_sync_point_frame_number;
 414 |       |   GstClockTime last_force_key_unit_time;
 415 |       |   /* -1 if we saw no sync point yet */
 416 |       |   guint64 distance_from_sync;
 417 |       |
 418 |       |   gboolean automatic_request_sync_points;
 419 |       |   GstVideoDecoderRequestSyncPointFlags automatic_request_sync_point_flags;
 420 |       |
 421 |       |   guint32 system_frame_number;
 422 |       |   guint32 decode_frame_number;
 423 |       |
 424 |       |   GQueue frames;                /* Protected with OBJECT_LOCK */
 425 |       |   GstVideoCodecState *input_state;
 426 |       |   GstVideoCodecState *output_state;     /* OBJECT_LOCK and STREAM_LOCK */
 427 |       |   gboolean output_state_changed;
 428 |       |
 429 |       |   /* QoS properties */
 430 |       |   gboolean do_qos;
 431 |       |   gdouble proportion;           /* OBJECT_LOCK */
 432 |       |   GstClockTime earliest_time;   /* OBJECT_LOCK */
 433 |       |   GstClockTime qos_frame_duration;      /* OBJECT_LOCK */
 434 |       |   gboolean discont;
 435 |       |   /* qos messages: frames dropped/processed */
 436 |       |   guint dropped;
 437 |       |   guint processed;
 438 |       |
 439 |       |   /* Outgoing byte size ? */
 440 |       |   gint64 bytes_out;
 441 |       |   gint64 time;
 442 |       |
 443 |       |   gint64 min_latency;
 444 |       |   gint64 max_latency;
 445 |       |
 446 |       |   /* Tracks whether the latency message was posted at least once */
 447 |       |   gboolean posted_latency_msg;
 448 |       |
 449 |       |   /* upstream stream tags (global tags are passed through as-is) */
 450 |       |   GstTagList *upstream_tags;
 451 |       |
 452 |       |   /* subclass tags */
 453 |       |   GstTagList *tags;
 454 |       |   GstTagMergeMode tags_merge_mode;
 455 |       |
 456 |       |   gboolean tags_changed;
 457 |       |
 458 |       |   /* flags */
 459 |       |   gboolean use_default_pad_acceptcaps;
 460 |       |
 461 |       | #ifndef GST_DISABLE_DEBUG
 462 |       |   /* Diagnostic time for reporting the time
 463 |       |    * from flush to first output */
 464 |       |   GstClockTime last_reset_time;
 465 |       | #endif
 466 |       | };
 467 |       |
 468 |       | static GstElementClass *parent_class = NULL;
 469 |       | static gint private_offset = 0;
 470 |       |
 471 |       | /* cached quark to avoid contention on the global quark table lock */
 472 |       | #define META_TAG_VIDEO meta_tag_video_quark
 473 |       | static GQuark meta_tag_video_quark;
 474 |       |
 475 |       | static void gst_video_decoder_class_init (GstVideoDecoderClass * klass);
 476 |       | static void gst_video_decoder_init (GstVideoDecoder * dec,
 477 |       |     GstVideoDecoderClass * klass);
 478 |       |
 479 |       | static void gst_video_decoder_finalize (GObject * object);
 480 |       | static void gst_video_decoder_get_property (GObject * object, guint property_id,
 481 |       |     GValue * value, GParamSpec * pspec);
 482 |       | static void gst_video_decoder_set_property (GObject * object, guint property_id,
 483 |       |     const GValue * value, GParamSpec * pspec);
 484 |       |
 485 |       | static gboolean gst_video_decoder_setcaps (GstVideoDecoder * dec,
 486 |       |     GstCaps * caps);
 487 |       | static gboolean gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
 488 |       |     GstEvent * event);
 489 |       | static gboolean gst_video_decoder_src_event (GstPad * pad, GstObject * parent,
 490 |       |     GstEvent * event);
 491 |       | static GstFlowReturn gst_video_decoder_chain (GstPad * pad, GstObject * parent,
 492 |       |     GstBuffer * buf);
 493 |       | static gboolean gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
 494 |       |     GstQuery * query);
 495 |       | static GstStateChangeReturn gst_video_decoder_change_state (GstElement *
 496 |       |     element, GstStateChange transition);
 497 |       | static gboolean gst_video_decoder_src_query (GstPad * pad, GstObject * parent,
 498 |       |     GstQuery * query);
 499 |       | static void gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full,
 500 |       |     gboolean flush_hard);
 501 |       |
 502 |       | static GstFlowReturn gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
 503 |       |     GstVideoCodecFrame * frame);
 504 |       |
 505 |       | static void gst_video_decoder_push_event_list (GstVideoDecoder * decoder,
 506 |       |     GList * events);
 507 |       | static GstClockTime gst_video_decoder_get_frame_duration (GstVideoDecoder *
 508 |       |     decoder, GstVideoCodecFrame * frame);
 509 |       | static GstVideoCodecFrame *gst_video_decoder_new_frame (GstVideoDecoder *
 510 |       |     decoder);
 511 |       | static GstFlowReturn gst_video_decoder_clip_and_push_buf (GstVideoDecoder *
 512 |       |     decoder, GstBuffer * buf);
 513 |       | static GstFlowReturn gst_video_decoder_flush_parse (GstVideoDecoder * dec,
 514 |       |     gboolean at_eos);
 515 |       |
 516 |       | static void gst_video_decoder_clear_queues (GstVideoDecoder * dec);
 517 |       |
 518 |       | static gboolean gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
 519 |       |     GstEvent * event);
 520 |       | static gboolean gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
 521 |       |     GstEvent * event);
 522 |       | static gboolean gst_video_decoder_decide_allocation_default (GstVideoDecoder *
 523 |       |     decoder, GstQuery * query);
 524 |       | static gboolean gst_video_decoder_propose_allocation_default (GstVideoDecoder *
 525 |       |     decoder, GstQuery * query);
 526 |       | static gboolean gst_video_decoder_negotiate_default (GstVideoDecoder * decoder);
 527 |       | static GstFlowReturn gst_video_decoder_parse_available (GstVideoDecoder * dec,
 528 |       |     gboolean at_eos, gboolean new_buffer);
 529 |       | static gboolean gst_video_decoder_negotiate_unlocked (GstVideoDecoder *
 530 |       |     decoder);
 531 |       | static gboolean gst_video_decoder_sink_query_default (GstVideoDecoder * decoder,
 532 |       |     GstQuery * query);
 533 |       | static gboolean gst_video_decoder_src_query_default (GstVideoDecoder * decoder,
 534 |       |     GstQuery * query);
 535 |       |
 536 |       | static gboolean gst_video_decoder_transform_meta_default (GstVideoDecoder *
 537 |       |     decoder, GstVideoCodecFrame * frame, GstMeta * meta);
 538 |       |
 539 |       | static gboolean gst_video_decoder_handle_missing_data_default (GstVideoDecoder *
 540 |       |     decoder, GstClockTime timestamp, GstClockTime duration);
 541 |       |
 542 |       | static void gst_video_decoder_replace_input_buffer (GstVideoDecoder * decoder,
 543 |       |     GstVideoCodecFrame * frame, GstBuffer ** dest_buffer);
 544 |       |
 545 |       | static void gst_video_decoder_copy_metas (GstVideoDecoder * decoder,
 546 |       |     GstVideoCodecFrame * frame, GstBuffer * src_buffer,
 547 |       |     GstBuffer * dest_buffer);
 548 |       |
 549 |       | static void gst_video_decoder_request_sync_point_internal (GstVideoDecoder *
 550 |       |     dec, GstClockTime deadline, GstVideoDecoderRequestSyncPointFlags flags);
 551 |       |
 552 |       | /* we can't use G_DEFINE_ABSTRACT_TYPE because we need the klass in the _init
 553 |       |  * method to get to the padtemplates */
 554 |       | GType
 555 |       | gst_video_decoder_get_type (void)
 556 |     8 | {
 557 |     8 |   static gsize type = 0;
 558 |       |
 559 |     8 |   if (g_once_init_enter (&type)) {
 560 |     2 |     GType _type;
 561 |     2 |     static const GTypeInfo info = {
 562 |     2 |       sizeof (GstVideoDecoderClass),
 563 |     2 |       NULL,
 564 |     2 |       NULL,
 565 |     2 |       (GClassInitFunc) gst_video_decoder_class_init,
 566 |     2 |       NULL,
 567 |     2 |       NULL,
 568 |     2 |       sizeof (GstVideoDecoder),
 569 |     2 |       0,
 570 |     2 |       (GInstanceInitFunc) gst_video_decoder_init,
 571 |     2 |     };
 572 |       |
 573 |     2 |     _type = g_type_register_static (GST_TYPE_ELEMENT,
 574 |     2 |         "GstVideoDecoder", &info, G_TYPE_FLAG_ABSTRACT);
 575 |       |
 576 |     2 |     private_offset =
 577 |     2 |         g_type_add_instance_private (_type, sizeof (GstVideoDecoderPrivate));
 578 |       |
 579 |     2 |     g_once_init_leave (&type, _type);
 580 |     2 |   }
 581 |     8 |   return type;
 582 |     8 | }
583
584
static inline GstVideoDecoderPrivate *
585
gst_video_decoder_get_instance_private (GstVideoDecoder * self)
586
4
{
587
4
  return (G_STRUCT_MEMBER_P (self, private_offset));
588
4
}
589
590
static void
591
gst_video_decoder_class_init (GstVideoDecoderClass * klass)
592
2
{
593
2
  GObjectClass *gobject_class;
594
2
  GstElementClass *gstelement_class;
595
596
2
  gobject_class = G_OBJECT_CLASS (klass);
597
2
  gstelement_class = GST_ELEMENT_CLASS (klass);
598
599
2
  GST_DEBUG_CATEGORY_INIT (videodecoder_debug, "videodecoder", 0,
600
2
      "Base Video Decoder");
601
602
2
  parent_class = g_type_class_peek_parent (klass);
603
604
2
  if (private_offset != 0)
605
2
    g_type_class_adjust_private_offset (klass, &private_offset);
606
607
2
  gobject_class->finalize = gst_video_decoder_finalize;
608
2
  gobject_class->get_property = gst_video_decoder_get_property;
609
2
  gobject_class->set_property = gst_video_decoder_set_property;
610
611
2
  gstelement_class->change_state =
612
2
      GST_DEBUG_FUNCPTR (gst_video_decoder_change_state);
613
614
2
  klass->sink_event = gst_video_decoder_sink_event_default;
615
2
  klass->src_event = gst_video_decoder_src_event_default;
616
2
  klass->decide_allocation = gst_video_decoder_decide_allocation_default;
617
2
  klass->propose_allocation = gst_video_decoder_propose_allocation_default;
618
2
  klass->negotiate = gst_video_decoder_negotiate_default;
619
2
  klass->sink_query = gst_video_decoder_sink_query_default;
620
2
  klass->src_query = gst_video_decoder_src_query_default;
621
2
  klass->transform_meta = gst_video_decoder_transform_meta_default;
622
2
  klass->handle_missing_data = gst_video_decoder_handle_missing_data_default;
623
624
  /**
625
   * GstVideoDecoder:qos:
626
   *
627
   * If set to %TRUE the decoder will handle QoS events received
628
   * from downstream elements.
629
   * This includes dropping output frames which are detected as late
630
   * using the metrics reported by those events.
631
   *
632
   * Since: 1.18
633
   */
634
2
  g_object_class_install_property (gobject_class, PROP_QOS,
635
2
      g_param_spec_boolean ("qos", "Quality of Service",
636
2
          "Handle Quality-of-Service events from downstream",
637
2
          DEFAULT_QOS, G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
638
639
  /**
640
   * GstVideoDecoder:max-errors:
641
   *
642
   * Maximum number of tolerated consecutive decode errors. See
643
   * gst_video_decoder_set_max_errors() for more details.
644
   *
645
   * Since: 1.18
646
   */
647
2
  g_object_class_install_property (gobject_class, PROP_MAX_ERRORS,
648
2
      g_param_spec_int ("max-errors", "Max errors",
649
2
          "Max consecutive decoder errors before returning flow error",
650
2
          -1, G_MAXINT, DEFAULT_MAX_ERRORS,
651
2
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
652
653
  /**
654
   * GstVideoDecoder:min-force-key-unit-interval:
655
   *
656
   * Minimum interval between force-key-unit events sent upstream by the
657
   * decoder. Setting this to 0 will cause every event to be handled, setting
658
   * this to %GST_CLOCK_TIME_NONE will cause every event to be ignored.
659
   *
660
   * See gst_video_event_new_upstream_force_key_unit() for more details about
661
   * force-key-unit events.
662
   *
663
   * Since: 1.20
664
   */
665
2
  g_object_class_install_property (gobject_class,
666
2
      PROP_MIN_FORCE_KEY_UNIT_INTERVAL,
667
2
      g_param_spec_uint64 ("min-force-key-unit-interval",
668
2
          "Minimum Force Keyunit Interval",
669
2
          "Minimum interval between force-keyunit requests in nanoseconds", 0,
670
2
          G_MAXUINT64, DEFAULT_MIN_FORCE_KEY_UNIT_INTERVAL,
671
2
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
672
673
  /**
674
   * GstVideoDecoder:discard-corrupted-frames:
675
   *
676
   * If set to %TRUE the decoder will discard frames that are marked as
677
   * corrupted instead of outputting them.
678
   *
679
   * Since: 1.20
680
   */
681
2
  g_object_class_install_property (gobject_class, PROP_DISCARD_CORRUPTED_FRAMES,
682
2
      g_param_spec_boolean ("discard-corrupted-frames",
683
2
          "Discard Corrupted Frames",
684
2
          "Discard frames marked as corrupted instead of outputting them",
685
2
          DEFAULT_DISCARD_CORRUPTED_FRAMES,
686
2
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
687
688
  /**
689
   * GstVideoDecoder:automatic-request-sync-points:
690
   *
691
   * If set to %TRUE the decoder will automatically request sync points when
692
   * it seems like a good idea, e.g. if the first frames are not key frames or
693
   * if packet loss was reported by upstream.
694
   *
695
   * Since: 1.20
696
   */
697
2
  g_object_class_install_property (gobject_class,
698
2
      PROP_AUTOMATIC_REQUEST_SYNC_POINTS,
699
2
      g_param_spec_boolean ("automatic-request-sync-points",
700
2
          "Automatic Request Sync Points",
701
2
          "Automatically request sync points when it would be useful",
702
2
          DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS,
703
2
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
704
705
  /**
706
   * GstVideoDecoder:automatic-request-sync-point-flags:
707
   *
708
   * GstVideoDecoderRequestSyncPointFlags to use for the automatically
709
   * requested sync points if `automatic-request-sync-points` is enabled.
710
   *
711
   * Since: 1.20
712
   */
713
2
  g_object_class_install_property (gobject_class,
714
2
      PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
715
2
      g_param_spec_flags ("automatic-request-sync-point-flags",
716
2
          "Automatic Request Sync Point Flags",
717
2
          "Flags to use when automatically requesting sync points",
718
2
          GST_TYPE_VIDEO_DECODER_REQUEST_SYNC_POINT_FLAGS,
719
2
          DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS,
720
2
          G_PARAM_READWRITE | G_PARAM_STATIC_STRINGS));
721
722
2
 722 |    |   meta_tag_video_quark = g_quark_from_static_string (GST_META_TAG_VIDEO_STR);
 723 |  2 | }
 724 |    |
 725 |    | static void
 726 |    | gst_video_decoder_init (GstVideoDecoder * decoder, GstVideoDecoderClass * klass)
 727 |  4 | {
 728 |  4 |   GstPadTemplate *pad_template;
 729 |  4 |   GstPad *pad;
 730 |    |
 731 |  4 |   GST_DEBUG_OBJECT (decoder, "gst_video_decoder_init");
 732 |    |
 733 |  4 |   decoder->priv = gst_video_decoder_get_instance_private (decoder);
 734 |    |
 735 |  4 |   pad_template =
 736 |  4 |       gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "sink");
 737 |  4 |   g_return_if_fail (pad_template != NULL);
 738 |    |
 739 |  4 |   decoder->sinkpad = pad = gst_pad_new_from_template (pad_template, "sink");
 740 |    |
 741 |  4 |   gst_pad_set_chain_function (pad, GST_DEBUG_FUNCPTR (gst_video_decoder_chain));
 742 |  4 |   gst_pad_set_event_function (pad,
 743 |  4 |       GST_DEBUG_FUNCPTR (gst_video_decoder_sink_event));
 744 |  4 |   gst_pad_set_query_function (pad,
 745 |  4 |       GST_DEBUG_FUNCPTR (gst_video_decoder_sink_query));
 746 |  4 |   gst_element_add_pad (GST_ELEMENT (decoder), decoder->sinkpad);
 747 |    |
 748 |  4 |   pad_template =
 749 |  4 |       gst_element_class_get_pad_template (GST_ELEMENT_CLASS (klass), "src");
 750 |  4 |   g_return_if_fail (pad_template != NULL);
 751 |    |
 752 |  4 |   decoder->srcpad = pad = gst_pad_new_from_template (pad_template, "src");
 753 |    |
 754 |  4 |   gst_pad_set_event_function (pad,
 755 |  4 |       GST_DEBUG_FUNCPTR (gst_video_decoder_src_event));
 756 |  4 |   gst_pad_set_query_function (pad,
 757 |  4 |       GST_DEBUG_FUNCPTR (gst_video_decoder_src_query));
 758 |  4 |   gst_element_add_pad (GST_ELEMENT (decoder), decoder->srcpad);
 759 |    |
 760 |  4 |   gst_segment_init (&decoder->input_segment, GST_FORMAT_TIME);
 761 |  4 |   gst_segment_init (&decoder->output_segment, GST_FORMAT_TIME);
 762 |    |
 763 |  4 |   g_rec_mutex_init (&decoder->stream_lock);
 764 |    |
 765 |  4 |   decoder->priv->input_adapter = gst_adapter_new ();
 766 |  4 |   decoder->priv->output_adapter = gst_adapter_new ();
 767 |  4 |   decoder->priv->packetized = TRUE;
 768 |  4 |   decoder->priv->needs_format = FALSE;
 769 |    |
 770 |  4 |   g_queue_init (&decoder->priv->frames);
 771 |  4 |   g_queue_init (&decoder->priv->timestamps);
 772 |    |
 773 |    |   /* properties */
 774 |  4 |   decoder->priv->do_qos = DEFAULT_QOS;
 775 |  4 |   decoder->priv->max_errors = GST_VIDEO_DECODER_MAX_ERRORS;
 776 |    |
 777 |  4 |   decoder->priv->min_latency = 0;
 778 |  4 |   decoder->priv->max_latency = 0;
 779 |    |
 780 |  4 |   decoder->priv->automatic_request_sync_points =
 781 |  4 |       DEFAULT_AUTOMATIC_REQUEST_SYNC_POINTS;
 782 |  4 |   decoder->priv->automatic_request_sync_point_flags =
 783 |  4 |       DEFAULT_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS;
 784 |    |
 785 |  4 |   gst_video_decoder_reset (decoder, TRUE, TRUE);
 786 |  4 | }
 787 |    |
 788 |    | static GstVideoCodecState *
 789 |    | _new_input_state (GstCaps * caps)
 790 |  3 | {
 791 |  3 |   GstVideoCodecState *state;
 792 |  3 |   GstStructure *structure;
 793 |  3 |   const GValue *codec_data;
 794 |    |
 795 |  3 |   state = g_new0 (GstVideoCodecState, 1);
 796 |  3 |   state->ref_count = 1;
 797 |  3 |   gst_video_info_init (&state->info);
 798 |  3 |   if (G_UNLIKELY (!gst_video_info_from_caps (&state->info, caps)))
 799 |  0 |     goto parse_fail;
 800 |  3 |   state->caps = gst_caps_ref (caps);
 801 |    |
 802 |  3 |   structure = gst_caps_get_structure (caps, 0);
 803 |    |
 804 |  3 |   codec_data = gst_structure_get_value (structure, "codec_data");
 805 |  3 |   if (codec_data && G_VALUE_TYPE (codec_data) == GST_TYPE_BUFFER)
 806 |  0 |     state->codec_data = GST_BUFFER (g_value_dup_boxed (codec_data));
 807 |    |
 808 |  3 |   return state;
 809 |    |
 810 |  0 | parse_fail:
 811 |  0 |   {
 812 |  0 |     g_free (state);
 813 |  0 |     return NULL;
 814 |  3 |   }
 815 |  3 | }
 816 |    |
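`_new_input_state` above follows GLib's manual ref-counting idiom: allocate zeroed, set `ref_count` to 1 for the caller's reference, and free on the error path before any reference escapes. A minimal self-contained sketch of that idiom in plain C follows; the `CodecState` type and function names are illustrative stand-ins, not the real GStreamer API:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical ref-counted state, mirroring GstVideoCodecState's
 * ref_count field; not the real GStreamer type. */
typedef struct {
  int ref_count;
  int width, height;
} CodecState;

static CodecState *
codec_state_new (int width, int height)
{
  CodecState *state = calloc (1, sizeof (CodecState));
  if (!state)
    return NULL;
  state->ref_count = 1;         /* caller owns the initial reference */
  state->width = width;
  state->height = height;
  return state;
}

static CodecState *
codec_state_ref (CodecState * state)
{
  state->ref_count++;
  return state;
}

static void
codec_state_unref (CodecState * state)
{
  if (--state->ref_count == 0)  /* last reference dropped: free */
    free (state);
}
```

The real type is additionally shared across threads, so GStreamer uses atomic operations where this sketch uses plain increments.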
 817 |    | static GstVideoCodecState *
 818 |    | _new_output_state (GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode,
 819 |    |     guint width, guint height, GstVideoCodecState * reference,
 820 |    |     gboolean copy_interlace_mode)
 821 |  0 | {
 822 |  0 |   GstVideoCodecState *state;
 823 |    |
 824 |  0 |   state = g_new0 (GstVideoCodecState, 1);
 825 |  0 |   state->ref_count = 1;
 826 |  0 |   gst_video_info_init (&state->info);
 827 |  0 |   if (!gst_video_info_set_interlaced_format (&state->info, fmt, interlace_mode,
 828 |  0 |           width, height)) {
 829 |  0 |     g_free (state);
 830 |  0 |     return NULL;
 831 |  0 |   }
 832 |    |
 833 |  0 |   if (reference) {
 834 |  0 |     GstVideoInfo *tgt, *ref;
 835 |    |
 836 |  0 |     tgt = &state->info;
 837 |  0 |     ref = &reference->info;
 838 |    |
 839 |    |     /* Copy over extra fields from reference state */
 840 |  0 |     if (copy_interlace_mode)
 841 |  0 |       tgt->interlace_mode = ref->interlace_mode;
 842 |  0 |     tgt->flags = ref->flags;
 843 |  0 |     tgt->chroma_site = ref->chroma_site;
 844 |  0 |     tgt->colorimetry = ref->colorimetry;
 845 |  0 |     GST_DEBUG ("reference par %d/%d fps %d/%d",
 846 |  0 |         ref->par_n, ref->par_d, ref->fps_n, ref->fps_d);
 847 |  0 |     tgt->par_n = ref->par_n;
 848 |  0 |     tgt->par_d = ref->par_d;
 849 |  0 |     tgt->fps_n = ref->fps_n;
 850 |  0 |     tgt->fps_d = ref->fps_d;
 851 |  0 |     tgt->views = ref->views;
 852 |    |
 853 |  0 |     GST_VIDEO_INFO_FIELD_ORDER (tgt) = GST_VIDEO_INFO_FIELD_ORDER (ref);
 854 |    |
 855 |  0 |     if (GST_VIDEO_INFO_MULTIVIEW_MODE (ref) != GST_VIDEO_MULTIVIEW_MODE_NONE) {
 856 |  0 |       GST_VIDEO_INFO_MULTIVIEW_MODE (tgt) = GST_VIDEO_INFO_MULTIVIEW_MODE (ref);
 857 |  0 |       GST_VIDEO_INFO_MULTIVIEW_FLAGS (tgt) =
 858 |  0 |           GST_VIDEO_INFO_MULTIVIEW_FLAGS (ref);
 859 |  0 |     } else {
 860 |    |       /* Default to MONO, overridden as needed by sub-classes */
 861 |  0 |       GST_VIDEO_INFO_MULTIVIEW_MODE (tgt) = GST_VIDEO_MULTIVIEW_MODE_MONO;
 862 |  0 |       GST_VIDEO_INFO_MULTIVIEW_FLAGS (tgt) = GST_VIDEO_MULTIVIEW_FLAGS_NONE;
 863 |  0 |     }
 864 |  0 |   }
 865 |    |
 866 |  0 |   GST_DEBUG ("reference par %d/%d fps %d/%d",
 867 |  0 |       state->info.par_n, state->info.par_d,
 868 |  0 |       state->info.fps_n, state->info.fps_d);
 869 |    |
 870 |  0 |   return state;
 871 |  0 | }
 872 |    |
 873 |    | static gboolean
 874 |    | gst_video_decoder_setcaps (GstVideoDecoder * decoder, GstCaps * caps)
 875 |  6 | {
 876 |  6 |   GstVideoDecoderClass *decoder_class;
 877 |  6 |   GstVideoCodecState *state;
 878 |  6 |   gboolean ret = TRUE;
 879 |    |
 880 |  6 |   decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
 881 |    |
 882 |  6 |   GST_DEBUG_OBJECT (decoder, "setcaps %" GST_PTR_FORMAT, caps);
 883 |    |
 884 |  6 |   GST_VIDEO_DECODER_STREAM_LOCK (decoder);
 885 |    |
 886 |  6 |   if (decoder->priv->input_state) {
 887 |  3 |     GST_DEBUG_OBJECT (decoder,
 888 |  3 |         "Checking if caps changed old %" GST_PTR_FORMAT " new %" GST_PTR_FORMAT,
 889 |  3 |         decoder->priv->input_state->caps, caps);
 890 |  3 |     if (gst_caps_is_equal (decoder->priv->input_state->caps, caps))
 891 |  3 |       goto caps_not_changed;
 892 |  3 |   }
 893 |    |
 894 |  3 |   state = _new_input_state (caps);
 895 |    |
 896 |  3 |   if (G_UNLIKELY (state == NULL))
 897 |  0 |     goto parse_fail;
 898 |    |
 899 |  3 |   if (decoder_class->set_format)
 900 |  3 |     ret = decoder_class->set_format (decoder, state);
 901 |    |
 902 |  3 |   if (!ret)
 903 |  0 |     goto refused_format;
 904 |    |
 905 |  3 |   if (decoder->priv->input_state)
 906 |  0 |     gst_video_codec_state_unref (decoder->priv->input_state);
 907 |  3 |   decoder->priv->input_state = state;
 908 |    |
 909 |  3 |   GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
 910 |    |
 911 |  3 |   return ret;
 912 |    |
 913 |  3 | caps_not_changed:
 914 |  3 |   {
 915 |  3 |     GST_DEBUG_OBJECT (decoder, "Caps did not change - ignore");
 916 |  3 |     GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
 917 |  3 |     return TRUE;
 918 |  3 |   }
 919 |    |
 920 |    |   /* ERRORS */
 921 |  0 | parse_fail:
 922 |  0 |   {
 923 |  0 |     GST_WARNING_OBJECT (decoder, "Failed to parse caps");
 924 |  0 |     GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
 925 |  0 |     return FALSE;
 926 |  3 |   }
 927 |    |
 928 |  0 | refused_format:
 929 |  0 |   {
 930 |  0 |     GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
 931 |  0 |     GST_WARNING_OBJECT (decoder, "Subclass refused caps");
 932 |  0 |     gst_video_codec_state_unref (state);
 933 |  0 |     return FALSE;
 934 |  3 |   }
 935 |  3 | }
 936 |    |
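The `caps_not_changed` path in `gst_video_decoder_setcaps` is a cheap equality check that spares the subclass a spurious `set_format` call when upstream re-sends identical caps. A self-contained sketch of that early-return shape, with caps modeled as plain strings rather than real `GstCaps` (all names here are illustrative):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative decoder stub: caps are plain strings here, unlike the
 * structured GstCaps type. Returns 1 if a (re)configuration happened,
 * 0 if the caps were unchanged and the subclass was left alone. */
typedef struct {
  char *input_caps;        /* currently configured caps, or NULL */
  int set_format_calls;    /* how often the "subclass" was reconfigured */
} Dec;

static char *
dup_str (const char *s)
{
  char *copy = malloc (strlen (s) + 1);
  strcpy (copy, s);
  return copy;
}

static int
dec_setcaps (Dec * dec, const char *caps)
{
  /* Same caps as before: skip reconfiguration, as the
   * caps_not_changed path above does. */
  if (dec->input_caps && strcmp (dec->input_caps, caps) == 0)
    return 0;

  free (dec->input_caps);
  dec->input_caps = dup_str (caps);
  dec->set_format_calls++;      /* stands in for klass->set_format() */
  return 1;
}
```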
 937 |    | static void
 938 |    | gst_video_decoder_finalize (GObject * object)
 939 |  4 | {
 940 |  4 |   GstVideoDecoder *decoder;
 941 |    |
 942 |  4 |   decoder = GST_VIDEO_DECODER (object);
 943 |    |
 944 |  4 |   GST_DEBUG_OBJECT (object, "finalize");
 945 |    |
 946 |  4 |   g_rec_mutex_clear (&decoder->stream_lock);
 947 |    |
 948 |  4 |   if (decoder->priv->input_adapter) {
 949 |  4 |     g_object_unref (decoder->priv->input_adapter);
 950 |  4 |     decoder->priv->input_adapter = NULL;
 951 |  4 |   }
 952 |  4 |   if (decoder->priv->output_adapter) {
 953 |  4 |     g_object_unref (decoder->priv->output_adapter);
 954 |  4 |     decoder->priv->output_adapter = NULL;
 955 |  4 |   }
 956 |    |
 957 |  4 |   if (decoder->priv->input_state)
 958 |  0 |     gst_video_codec_state_unref (decoder->priv->input_state);
 959 |  4 |   if (decoder->priv->output_state)
 960 |  0 |     gst_video_codec_state_unref (decoder->priv->output_state);
 961 |    |
 962 |  4 |   if (decoder->priv->pool) {
 963 |  0 |     gst_object_unref (decoder->priv->pool);
 964 |  0 |     decoder->priv->pool = NULL;
 965 |  0 |   }
 966 |    |
 967 |  4 |   if (decoder->priv->allocator) {
 968 |  0 |     gst_object_unref (decoder->priv->allocator);
 969 |  0 |     decoder->priv->allocator = NULL;
 970 |  0 |   }
 971 |    |
 972 |  4 |   G_OBJECT_CLASS (parent_class)->finalize (object);
 973 |  4 | }
 974 |    |
 975 |    | static void
 976 |    | gst_video_decoder_get_property (GObject * object, guint property_id,
 977 |    |     GValue * value, GParamSpec * pspec)
 978 |  0 | {
 979 |  0 |   GstVideoDecoder *dec = GST_VIDEO_DECODER (object);
 980 |  0 |   GstVideoDecoderPrivate *priv = dec->priv;
 981 |    |
 982 |  0 |   switch (property_id) {
 983 |  0 |     case PROP_QOS:
 984 |  0 |       g_value_set_boolean (value, priv->do_qos);
 985 |  0 |       break;
 986 |  0 |     case PROP_MAX_ERRORS:
 987 |  0 |       g_value_set_int (value, gst_video_decoder_get_max_errors (dec));
 988 |  0 |       break;
 989 |  0 |     case PROP_MIN_FORCE_KEY_UNIT_INTERVAL:
 990 |  0 |       g_value_set_uint64 (value, priv->min_force_key_unit_interval);
 991 |  0 |       break;
 992 |  0 |     case PROP_DISCARD_CORRUPTED_FRAMES:
 993 |  0 |       g_value_set_boolean (value, priv->discard_corrupted_frames);
 994 |  0 |       break;
 995 |  0 |     case PROP_AUTOMATIC_REQUEST_SYNC_POINTS:
 996 |  0 |       g_value_set_boolean (value, priv->automatic_request_sync_points);
 997 |  0 |       break;
 998 |  0 |     case PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS:
 999 |  0 |       g_value_set_flags (value, priv->automatic_request_sync_point_flags);
1000 |  0 |       break;
1001 |  0 |     default:
1002 |  0 |       G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec);
1003 |  0 |       break;
1004 |  0 |   }
1005 |  0 | }
1006 |    |
1007 |    | static void
1008 |    | gst_video_decoder_set_property (GObject * object, guint property_id,
1009 |    |     const GValue * value, GParamSpec * pspec)
1010 |  0 | {
1011 |  0 |   GstVideoDecoder *dec = GST_VIDEO_DECODER (object);
1012 |  0 |   GstVideoDecoderPrivate *priv = dec->priv;
1013 |    |
1014 |  0 |   switch (property_id) {
1015 |  0 |     case PROP_QOS:
1016 |  0 |       priv->do_qos = g_value_get_boolean (value);
1017 |  0 |       break;
1018 |  0 |     case PROP_MAX_ERRORS:
1019 |  0 |       gst_video_decoder_set_max_errors (dec, g_value_get_int (value));
1020 |  0 |       break;
1021 |  0 |     case PROP_MIN_FORCE_KEY_UNIT_INTERVAL:
1022 |  0 |       priv->min_force_key_unit_interval = g_value_get_uint64 (value);
1023 |  0 |       break;
1024 |  0 |     case PROP_DISCARD_CORRUPTED_FRAMES:
1025 |  0 |       priv->discard_corrupted_frames = g_value_get_boolean (value);
1026 |  0 |       break;
1027 |  0 |     case PROP_AUTOMATIC_REQUEST_SYNC_POINTS:
1028 |  0 |       priv->automatic_request_sync_points = g_value_get_boolean (value);
1029 |  0 |       break;
1030 |  0 |     case PROP_AUTOMATIC_REQUEST_SYNC_POINT_FLAGS:
1031 |  0 |       priv->automatic_request_sync_point_flags = g_value_get_flags (value);
1032 |  0 |       break;
1033 |  0 |     default:
1034 |  0 |       G_OBJECT_WARN_INVALID_PROPERTY_ID (object, property_id, pspec);
1035 |  0 |       break;
1036 |  0 |   }
1037 |  0 | }
1038 |    |
1039 |    | /* hard == FLUSH, otherwise discont */
1040 |    | static GstFlowReturn
1041 |    | gst_video_decoder_flush (GstVideoDecoder * dec, gboolean hard)
1042 |  0 | {
1043 |  0 |   GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (dec);
1044 |  0 |   GstFlowReturn ret = GST_FLOW_OK;
1045 |    |
1046 |  0 |   GST_LOG_OBJECT (dec, "flush hard %d", hard);
1047 |    |
1048 |    |   /* Inform subclass */
1049 |  0 |   if (klass->reset) {
1050 |  0 |     GST_FIXME_OBJECT (dec, "GstVideoDecoder::reset() is deprecated");
1051 |  0 |     klass->reset (dec, hard);
1052 |  0 |   }
1053 |    |
1054 |  0 |   if (klass->flush)
1055 |  0 |     klass->flush (dec);
1056 |    |
1057 |    |   /* and get (re)set for the sequel */
1058 |  0 |   gst_video_decoder_reset (dec, FALSE, hard);
1059 |    |
1060 |  0 |   return ret;
1061 |  0 | }
1062 |    |
1063 |    | static GstEvent *
1064 |    | gst_video_decoder_create_merged_tags_event (GstVideoDecoder * dec)
1065 |  0 | {
1066 |  0 |   GstTagList *merged_tags;
1067 |    |
1068 |  0 |   GST_LOG_OBJECT (dec, "upstream : %" GST_PTR_FORMAT, dec->priv->upstream_tags);
1069 |  0 |   GST_LOG_OBJECT (dec, "decoder  : %" GST_PTR_FORMAT, dec->priv->tags);
1070 |  0 |   GST_LOG_OBJECT (dec, "mode     : %d", dec->priv->tags_merge_mode);
1071 |    |
1072 |  0 |   merged_tags =
1073 |  0 |       gst_tag_list_merge (dec->priv->upstream_tags, dec->priv->tags,
1074 |  0 |       dec->priv->tags_merge_mode);
1075 |    |
1076 |  0 |   GST_DEBUG_OBJECT (dec, "merged   : %" GST_PTR_FORMAT, merged_tags);
1077 |    |
1078 |  0 |   if (merged_tags == NULL)
1079 |  0 |     return NULL;
1080 |    |
1081 |  0 |   if (gst_tag_list_is_empty (merged_tags)) {
1082 |  0 |     gst_tag_list_unref (merged_tags);
1083 |  0 |     return NULL;
1084 |  0 |   }
1085 |    |
1086 |  0 |   return gst_event_new_tag (merged_tags);
1087 |  0 | }
1088 |    |
1089 |    | static gboolean
1090 |    | gst_video_decoder_push_event (GstVideoDecoder * decoder, GstEvent * event)
1091 |  6 | {
1092 |  6 |   switch (GST_EVENT_TYPE (event)) {
1093 |  0 |     case GST_EVENT_SEGMENT:
1094 |  0 |     {
1095 |  0 |       GstSegment segment;
1096 |    |
1097 |  0 |       gst_event_copy_segment (event, &segment);
1098 |    |
1099 |  0 |       GST_DEBUG_OBJECT (decoder, "segment %" GST_SEGMENT_FORMAT, &segment);
1100 |    |
1101 |  0 |       if (segment.format != GST_FORMAT_TIME) {
1102 |  0 |         GST_DEBUG_OBJECT (decoder, "received non TIME newsegment");
1103 |  0 |         break;
1104 |  0 |       }
1105 |    |
1106 |  0 |       GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1107 |  0 |       decoder->output_segment = segment;
1108 |  0 |       decoder->priv->in_out_segment_sync =
1109 |  0 |           gst_segment_is_equal (&decoder->input_segment, &segment);
1110 |  0 |       decoder->priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
1111 |  0 |       decoder->priv->earliest_time = GST_CLOCK_TIME_NONE;
1112 |  0 |       GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1113 |  0 |       break;
1114 |  0 |     }
1115 |  6 |     default:
1116 |  6 |       break;
1117 |  6 |   }
1118 |    |
1119 |  6 |   GST_DEBUG_OBJECT (decoder, "pushing event %s",
1120 |  6 |       gst_event_type_get_name (GST_EVENT_TYPE (event)));
1121 |    |
1122 |  6 |   return gst_pad_push_event (decoder->srcpad, event);
1123 |  6 | }
1124 |    |
1125 |    | static GstFlowReturn
1126 |    | gst_video_decoder_parse_available (GstVideoDecoder * dec, gboolean at_eos,
1127 |    |     gboolean new_buffer)
1128 | 12 | {
1129 | 12 |   GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
1130 | 12 |   GstVideoDecoderPrivate *priv = dec->priv;
1131 | 12 |   GstFlowReturn ret = GST_FLOW_OK;
1132 | 12 |   gsize was_available, available;
1133 | 12 |   guint inactive = 0;
1134 |    |
1135 | 12 |   available = gst_adapter_available (priv->input_adapter);
1136 |    |
1137 | 15 |   while (available || new_buffer) {
1138 |  6 |     new_buffer = FALSE;
1139 |    |     /* current frame may have been parsed and handled,
1140 |    |      * so we need to set up a new one when asking subclass to parse */
1141 |  6 |     if (priv->current_frame == NULL)
1142 |  0 |       priv->current_frame = gst_video_decoder_new_frame (dec);
1143 |    |
1144 |  6 |     was_available = available;
1145 |  6 |     ret = decoder_class->parse (dec, priv->current_frame,
1146 |  6 |         priv->input_adapter, at_eos);
1147 |  6 |     if (ret != GST_FLOW_OK)
1148 |  3 |       break;
1149 |    |
1150 |    |     /* if the subclass returned success (GST_FLOW_OK), it is expected
1151 |    |      * to have collected and submitted a frame, i.e. it should have
1152 |    |      * called gst_video_decoder_have_frame(), or at least consumed a
1153 |    |      * few bytes through gst_video_decoder_add_to_frame().
1154 |    |      *
1155 |    |      * Otherwise, this is an implementation bug, and we error out
1156 |    |      * after 2 failed attempts */
1157 |  3 |     available = gst_adapter_available (priv->input_adapter);
1158 |  3 |     if (!priv->current_frame || available != was_available)
1159 |  3 |       inactive = 0;
1160 |  0 |     else if (++inactive == 2)
1161 |  0 |       goto error_inactive;
1162 |  3 |   }
1163 |    |
1164 | 12 |   return ret;
1165 |    |
1166 |    |   /* ERRORS */
1167 |  0 | error_inactive:
1168 |  0 |   {
1169 |  0 |     GST_ERROR_OBJECT (dec, "Failed to consume data. Error in subclass?");
1170 |  0 |     return GST_FLOW_ERROR;
1171 | 12 |   }
1172 | 12 | }
1173 |    |
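The `inactive` counter in `gst_video_decoder_parse_available` is an infinite-loop guard: a well-behaved `parse` vfunc must either consume adapter bytes or submit a frame, and two consecutive iterations with no progress are treated as a subclass bug. A self-contained sketch of that guard, with the adapter reduced to a byte count and illustrative names throughout:

```c
#include <assert.h>

/* Stand-in for the subclass parse vfunc: consumes from *available,
 * returns non-zero to stop (like a non-OK GstFlowReturn). */
typedef int (*ParseFunc) (int *available);

static int
parse_all (int *available, ParseFunc parse)
{
  unsigned inactive = 0;

  while (*available > 0) {
    int was_available = *available;

    if (parse (available) != 0)
      break;                      /* subclass asked to stop */

    if (*available != was_available)
      inactive = 0;               /* progress was made: reset the guard */
    else if (++inactive == 2)
      return -1;                  /* two no-op iterations: subclass bug */
  }
  return 0;
}

/* A buggy parser that never consumes anything. */
static int parse_noop (int *available) { (void) available; return 0; }
/* A well-behaved parser that consumes one byte per call. */
static int parse_one (int *available) { (*available)--; return 0; }
```

Resetting the counter on any progress means a subclass may legitimately need one idle iteration (for example, waiting for more data at a frame boundary) without triggering the error.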
1174 |    | /* This function has to be called with the stream lock taken. */
1175 |    | static GstFlowReturn
1176 |    | gst_video_decoder_drain_out (GstVideoDecoder * dec, gboolean at_eos)
1177 |  6 | {
1178 |  6 |   GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);
1179 |  6 |   GstVideoDecoderPrivate *priv = dec->priv;
1180 |  6 |   GstFlowReturn ret = GST_FLOW_OK;
1181 |    |
1182 |  6 |   if (dec->input_segment.rate > 0.0) {
1183 |    |     /* Forward mode, if unpacketized, give the child class
1184 |    |      * a final chance to flush out packets */
1185 |  6 |     if (!priv->packetized) {
1186 |  6 |       ret = gst_video_decoder_parse_available (dec, TRUE, FALSE);
1187 |  6 |     }
1188 |    |
1189 |  6 |     if (at_eos) {
1190 |  0 |       if (decoder_class->finish)
1191 |  0 |         ret = decoder_class->finish (dec);
1192 |  6 |     } else {
1193 |  6 |       if (decoder_class->drain) {
1194 |  0 |         ret = decoder_class->drain (dec);
1195 |  6 |       } else {
1196 |  6 |         GST_FIXME_OBJECT (dec, "Sub-class should implement drain()");
1197 |  6 |       }
1198 |  6 |     }
1199 |  6 |   } else {
1200 |    |     /* Reverse playback mode */
1201 |  0 |     ret = gst_video_decoder_flush_parse (dec, TRUE);
1202 |  0 |   }
1203 |    |
1204 |  6 |   return ret;
1205 |  6 | }
1206 |    |
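`gst_video_decoder_drain_out` shows the optional-vfunc dispatch the base class uses: at EOS it calls the subclass' `finish()` if present, otherwise it calls `drain()`, and a missing `drain()` merely earns a FIXME log. A self-contained sketch of that dispatch shape; the `DecClass` struct and return codes are illustrative, not the real `GstVideoDecoderClass`:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative vfunc table standing in for GstVideoDecoderClass. */
typedef struct {
  int (*finish) (void);
  int (*drain) (void);
} DecClass;

static int
drain_out (const DecClass * klass, int at_eos)
{
  if (at_eos) {
    if (klass->finish)            /* final drain at end of stream */
      return klass->finish ();
  } else {
    if (klass->drain)             /* intermediate drain, decoder keeps going */
      return klass->drain ();
    /* no drain(): the base class just logs a FIXME and carries on */
  }
  return 0;                       /* GST_FLOW_OK analogue */
}

static int finish_impl (void) { return 1; }
static int drain_impl (void) { return 2; }
```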
1207 |    | static GList *
1208 |    | _flush_events (GstPad * pad, GList * events)
1209 |  0 | {
1210 |  0 |   GList *tmp;
1211 |    |
1212 |  0 |   for (tmp = events; tmp; tmp = tmp->next) {
1213 |  0 |     if (GST_EVENT_TYPE (tmp->data) != GST_EVENT_EOS &&
1214 |  0 |         GST_EVENT_TYPE (tmp->data) != GST_EVENT_SEGMENT &&
1215 |  0 |         GST_EVENT_IS_STICKY (tmp->data)) {
1216 |  0 |       gst_pad_store_sticky_event (pad, GST_EVENT_CAST (tmp->data));
1217 |  0 |     }
1218 |  0 |     gst_event_unref (tmp->data);
1219 |  0 |   }
1220 |  0 |   g_list_free (events);
1221 |    |
1222 |  0 |   return NULL;
1223 |  0 | }
1224 |    |
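`_flush_events` drops a whole pending-event list but first re-stores sticky events (other than EOS and SEGMENT) on the pad, so stream state like CAPS or stream-scoped TAGs survives the flush. A self-contained sketch of that selection rule; the enum and flag field are stand-ins for the real `GstEvent` type and `GST_EVENT_IS_STICKY`:

```c
#include <assert.h>

/* Illustrative event model; not the real GstEvent type. */
typedef enum { EV_EOS, EV_SEGMENT, EV_TAG, EV_CAPS } EventType;

typedef struct {
  EventType type;
  int sticky;                     /* stands in for GST_EVENT_IS_STICKY */
} Event;

/* Returns how many events would be preserved as pad sticky events
 * under _flush_events' rule: sticky, and neither EOS nor SEGMENT. */
static int
count_preserved (const Event * events, int n)
{
  int i, preserved = 0;

  for (i = 0; i < n; i++) {
    if (events[i].type != EV_EOS && events[i].type != EV_SEGMENT
        && events[i].sticky)
      preserved++;
  }
  return preserved;
}
```

EOS and SEGMENT are deliberately excluded because a flush invalidates exactly those two pieces of stream state.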
1225 |    | /* Must be called holding the GST_VIDEO_DECODER_STREAM_LOCK */
1226 |    | static gboolean
1227 |    | gst_video_decoder_negotiate_default_caps (GstVideoDecoder * decoder)
1228 |  0 | {
1229 |  0 |   GstCaps *caps, *templcaps;
1230 |  0 |   GstVideoCodecState *state;
1231 |  0 |   GstVideoInfo info;
1232 |  0 |   gint i;
1233 |  0 |   gint caps_size;
1234 |  0 |   GstStructure *structure;
1235 |    |
1236 |  0 |   templcaps = gst_pad_get_pad_template_caps (decoder->srcpad);
1237 |  0 |   caps = gst_pad_peer_query_caps (decoder->srcpad, templcaps);
1238 |  0 |   if (caps)
1239 |  0 |     gst_caps_unref (templcaps);
1240 |  0 |   else
1241 |  0 |     caps = templcaps;
1242 |  0 |   templcaps = NULL;
1243 |    |
1244 |  0 |   if (!caps || gst_caps_is_empty (caps) || gst_caps_is_any (caps))
1245 |  0 |     goto caps_error;
1246 |    |
1247 |  0 |   GST_LOG_OBJECT (decoder, "peer caps %" GST_PTR_FORMAT, caps);
1248 |    |
1249 |    |   /* before fixating, try to use whatever upstream provided */
1250 |  0 |   caps = gst_caps_make_writable (caps);
1251 |  0 |   caps_size = gst_caps_get_size (caps);
1252 |  0 |   if (decoder->priv->input_state && decoder->priv->input_state->caps) {
1253 |  0 |     GstCaps *sinkcaps = decoder->priv->input_state->caps;
1254 |  0 |     GstStructure *structure = gst_caps_get_structure (sinkcaps, 0);
1255 |  0 |     gint width, height;
1256 |    |
1257 |  0 |     if (gst_structure_get_int (structure, "width", &width)) {
1258 |  0 |       for (i = 0; i < caps_size; i++) {
1259 |  0 |         gst_structure_set (gst_caps_get_structure (caps, i), "width",
1260 |  0 |             G_TYPE_INT, width, NULL);
1261 |  0 |       }
1262 |  0 |     }
1263 |    |
1264 |  0 |     if (gst_structure_get_int (structure, "height", &height)) {
1265 |  0 |       for (i = 0; i < caps_size; i++) {
1266 |  0 |         gst_structure_set (gst_caps_get_structure (caps, i), "height",
1267 |  0 |             G_TYPE_INT, height, NULL);
1268 |  0 |       }
1269 |  0 |     }
1270 |  0 |   }
1271 |    |
1272 |  0 |   for (i = 0; i < caps_size; i++) {
1273 |  0 |     structure = gst_caps_get_structure (caps, i);
1274 |    |     /* Random I420 1280x720 for fixation */
1275 |  0 |     if (gst_structure_has_field (structure, "format"))
1276 |  0 |       gst_structure_fixate_field_string (structure, "format", "I420");
1277 |  0 |     else
1278 |  0 |       gst_structure_set (structure, "format", G_TYPE_STRING, "I420", NULL);
1279 |    |
1280 |  0 |     if (gst_structure_has_field (structure, "width"))
1281 |  0 |       gst_structure_fixate_field_nearest_int (structure, "width", 1280);
1282 |  0 |     else
1283 |  0 |       gst_structure_set (structure, "width", G_TYPE_INT, 1280, NULL);
1284 |    |
1285 |  0 |     if (gst_structure_has_field (structure, "height"))
1286 |  0 |       gst_structure_fixate_field_nearest_int (structure, "height", 720);
1287 |  0 |     else
1288 |  0 |       gst_structure_set (structure, "height", G_TYPE_INT, 720, NULL);
1289 |  0 |   }
1290 |  0 |   caps = gst_caps_fixate (caps);
1291 |    |
1292 |  0 |   if (!caps || !gst_video_info_from_caps (&info, caps))
1293 |  0 |     goto caps_error;
1294 |    |
1295 |  0 |   GST_INFO_OBJECT (decoder,
1296 |  0 |       "Chose default caps %" GST_PTR_FORMAT " for initial gap", caps);
1297 |  0 |   state =
1298 |  0 |       gst_video_decoder_set_output_state (decoder, info.finfo->format,
1299 |  0 |       info.width, info.height, decoder->priv->input_state);
1300 |  0 |   gst_video_codec_state_unref (state);
1301 |  0 |   gst_caps_unref (caps);
1302 |    |
1303 |  0 |   return TRUE;
1304 |    |
1305 |  0 | caps_error:
1306 |  0 |   {
1307 |  0 |     if (caps)
1308 |  0 |       gst_caps_unref (caps);
1309 |  0 |     return FALSE;
1310 |  0 |   }
1311 |  0 | }
1312 |    |
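The fixation strategy in `gst_video_decoder_negotiate_default_caps` is "prefer what upstream said, else pick a plausible default" (the upstream sink-caps `width`/`height` if present, otherwise 1280x720 I420). That preference rule reduces to a tiny self-contained helper; `fixate_dimension` is an illustrative name, not a GStreamer function:

```c
#include <assert.h>

/* Prefer the dimension upstream provided; fall back to the default
 * that gst_video_decoder_negotiate_default_caps uses for fixation.
 * 0 means "upstream did not provide this field". */
static int
fixate_dimension (int upstream_value, int fallback)
{
  return upstream_value > 0 ? upstream_value : fallback;
}
```

The real code applies the same preference per caps structure and per field, then lets `gst_caps_fixate` resolve anything still left open.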
1313 |    | /* Must be called holding the GST_VIDEO_DECODER_STREAM_LOCK */
1314 |    | static gboolean
1315 |    | gst_video_decoder_handle_missing_data_default (GstVideoDecoder * decoder,
1316 |    |     GstClockTime timestamp, GstClockTime duration)
1317 |  0 | {
1318 |  0 |   GstVideoDecoderPrivate *priv;
1319 |    |
1320 |  0 |   priv = decoder->priv;
1321 |    |
1322 |    |   /* Exit early in case the decoder has been reset and hasn't received a new segment event yet. */
1323 |  0 |   if (decoder->input_segment.format != GST_FORMAT_TIME)
1324 |  0 |     return FALSE;
1325 |    |
1326 |  0 |   if (priv->automatic_request_sync_points) {
1327 |  0 |     GstClockTime deadline =
1328 |  0 |         gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
1329 |  0 |         timestamp);
1330 |    |
1331 |  0 |     GST_DEBUG_OBJECT (decoder,
1332 |  0 |         "Requesting sync point for missing data at running time %"
1333 |  0 |         GST_TIME_FORMAT " timestamp %" GST_TIME_FORMAT " with duration %"
1334 |  0 |         GST_TIME_FORMAT, GST_TIME_ARGS (deadline), GST_TIME_ARGS (timestamp),
1335 |  0 |         GST_TIME_ARGS (duration));
1336 |    |
1337 |  0 |     gst_video_decoder_request_sync_point_internal (decoder, deadline,
1338 |  0 |         priv->automatic_request_sync_point_flags);
1339 |  0 |   }
1340 |    |
1341 |  0 |   return TRUE;
1342 |  0 | }
1343 |    |
1344 |    | static gboolean
1345 |    | gst_video_decoder_handle_gap (GstVideoDecoder * decoder, GstEvent * event)
1346 |  0 | {
1347 |  0 |   GstVideoDecoderClass *decoder_class;
1348 |  0 |   GstClockTime timestamp, duration;
1349 |  0 |   GstGapFlags gap_flags = 0;
1350 |  0 |   gboolean ret = FALSE;
1351 |    |
1352 |  0 |   decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
1353 |    |
1354 |  0 |   gst_event_parse_gap (event, &timestamp, &duration);
1355 |  0 |   gst_event_parse_gap_flags (event, &gap_flags);
1356 |    |
1357 |  0 |   GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1358 |    |   /* If this is not missing data, or the subclass does not handle it
1359 |    |    * specifically, then drain out the decoder and forward the event
1360 |    |    * directly. */
1361 |  0 |   if ((gap_flags & GST_GAP_FLAG_MISSING_DATA) == 0
1362 |  0 |       || !decoder_class->handle_missing_data
1363 |  0 |       || decoder_class->handle_missing_data (decoder, timestamp, duration)) {
1364 |  0 |     GstFlowReturn flow_ret = GST_FLOW_OK;
1365 |  0 |     gboolean needs_reconfigure = FALSE;
1366 |  0 |     GList *events;
1367 |  0 |     GList *frame_events;
1368 |    |
1369 |  0 |     if (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS) {
1370 |  0 |       flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1371 |  0 |     } else if (decoder->priv->frames.length > 0) {
1372 |    |       // Queue up gap events if we didn't actually drain and frames are pending,
1373 |    |       // and forward them later before the next frame
1374 |  0 |       decoder->priv->current_frame_events =
1375 |  0 |           g_list_prepend (decoder->priv->current_frame_events, event);
1376 |  0 |       GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1377 |  0 |       return TRUE;
1378 |  0 |     }
1379 |  0 |     ret = (flow_ret == GST_FLOW_OK);
1380 |    |
1381 |    |     /* Ensure we have caps before forwarding the event */
1382 |  0 |     if (!decoder->priv->output_state) {
1383 |  0 |       if (!gst_video_decoder_negotiate_default_caps (decoder)) {
1384 |  0 |         GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1385 |  0 |         GST_ELEMENT_ERROR (decoder, STREAM, FORMAT, (NULL),
1386 |  0 |             ("Decoder output not negotiated before GAP event."));
1387 |  0 |         return gst_video_decoder_push_event (decoder, event);
1388 |  0 |       }
1389 |  0 |       needs_reconfigure = TRUE;
1390 |  0 |     }
1391 |    |
1392 |  0 |     needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad)
1393 |  0 |         || needs_reconfigure;
1394 |  0 |     if (decoder->priv->output_state_changed || needs_reconfigure) {
1395 |  0 |       if (!gst_video_decoder_negotiate_unlocked (decoder)) {
1396 |  0 |         GST_WARNING_OBJECT (decoder, "Failed to negotiate with downstream");
1397 |  0 |         gst_pad_mark_reconfigure (decoder->srcpad);
1398 |  0 |       }
1399 |  0 |     }
1400 |    |
1401 |  0 |     GST_DEBUG_OBJECT (decoder, "Pushing all pending serialized events"
1402 |  0 |         " before the gap");
1403 |  0 |     events = decoder->priv->pending_events;
1404 |  0 |     frame_events = decoder->priv->current_frame_events;
1405 |  0 |     decoder->priv->pending_events = NULL;
1406 |  0 |     decoder->priv->current_frame_events = NULL;
1407 |    |
1408 |  0 |     GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1409 |    |
1410 |  0 |     gst_video_decoder_push_event_list (decoder, events);
1411 |  0 |     gst_video_decoder_push_event_list (decoder, frame_events);
1412 |    |
1413 |    |     /* Forward GAP immediately. Everything is drained after
1414 |    |      * the GAP event and we can forward this event immediately
1415 |    |      * now without having buffers out of order.
1416 |    |      */
1417 |  0 |     ret = gst_video_decoder_push_event (decoder, event);
1418 |  0 |   } else {
1419 |  0 |     GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1420 |  0 |     gst_clear_event (&event);
1421 |  0 |   }
1422 |    |
1423 |  0 |   return ret;
1424 |  0 | }
1425 |    |
1426
static gboolean
1427
gst_video_decoder_sink_event_default (GstVideoDecoder * decoder,
1428
    GstEvent * event)
1429
18
{
1430
18
  GstVideoDecoderPrivate *priv;
1431
18
  gboolean ret = FALSE;
1432
18
  gboolean forward_immediate = FALSE;
1433
1434
18
  priv = decoder->priv;
1435
1436
18
  switch (GST_EVENT_TYPE (event)) {
1437
6
    case GST_EVENT_STREAM_START:
1438
6
    {
1439
6
      GstFlowReturn flow_ret = GST_FLOW_OK;
1440
1441
6
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1442
6
      flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1443
6
      ret = (flow_ret == GST_FLOW_OK);
1444
1445
6
      GST_DEBUG_OBJECT (decoder, "received STREAM_START. Clearing taglist");
1446
      /* Flush upstream tags after a STREAM_START */
1447
6
      if (priv->upstream_tags) {
1448
0
        gst_tag_list_unref (priv->upstream_tags);
1449
0
        priv->upstream_tags = NULL;
1450
0
        priv->tags_changed = TRUE;
1451
0
      }
1452
6
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1453
1454
      /* Forward STREAM_START immediately. Everything is drained after
1455
       * the STREAM_START event and we can forward this event immediately
1456
       * now without having buffers out of order.
1457
       */
1458
6
      forward_immediate = TRUE;
1459
6
      break;
1460
0
    }
1461
6
    case GST_EVENT_CAPS:
1462
6
    {
1463
6
      GstCaps *caps;
1464
1465
6
      gst_event_parse_caps (event, &caps);
1466
6
      ret = gst_video_decoder_setcaps (decoder, caps);
1467
6
      gst_event_unref (event);
1468
6
      event = NULL;
1469
6
      break;
1470
0
    }
1471
0
    case GST_EVENT_SEGMENT_DONE:
1472
0
    {
1473
0
      GstFlowReturn flow_ret = GST_FLOW_OK;
1474
1475
0
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1476
0
      flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1477
0
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1478
0
      ret = (flow_ret == GST_FLOW_OK);
1479
1480
      /* Forward SEGMENT_DONE immediately. This is required
1481
       * because no buffer or serialized event might come
1482
       * after SEGMENT_DONE and nothing could trigger another
1483
       * _finish_frame() call.
1484
       *
1485
       * The subclass can override this behaviour by overriding
1486
       * the ::sink_event() vfunc and not chaining up to the
1487
       * parent class' ::sink_event() until a later time.
1488
       */
1489
0
      forward_immediate = TRUE;
1490
0
      break;
1491
0
    }
1492
0
    case GST_EVENT_EOS:
1493
0
    {
1494
0
      GstFlowReturn flow_ret = GST_FLOW_OK;
1495
1496
0
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1497
0
      flow_ret = gst_video_decoder_drain_out (decoder, TRUE);
1498
0
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1499
0
      ret = (flow_ret == GST_FLOW_OK);
1500
1501
      /* Error out even if EOS was ok when we had input, but no output */
1502
0
      if (ret && priv->had_input_data && !priv->had_output_data) {
1503
0
        GST_ELEMENT_ERROR (decoder, STREAM, DECODE,
1504
0
            ("No valid frames decoded before end of stream"),
1505
0
            ("no valid frames found"));
1506
0
      }
1507
1508
      /* Forward EOS immediately. This is required because no
1509
       * buffer or serialized event will come after EOS and
1510
       * nothing could trigger another _finish_frame() call.
1511
       *
1512
       * The subclass can override this behaviour by overriding
1513
       * the ::sink_event() vfunc and not chaining up to the
1514
       * parent class' ::sink_event() until a later time.
1515
       */
1516
0
      forward_immediate = TRUE;
1517
0
      break;
1518
0
    }
1519
0
    case GST_EVENT_GAP:
1520
0
    {
1521
0
      ret = gst_video_decoder_handle_gap (decoder, event);
1522
0
      event = NULL;
1523
0
      break;
1524
0
    }
1525
0
    case GST_EVENT_CUSTOM_DOWNSTREAM:
1526
0
    {
1527
0
      gboolean in_still;
1528
0
      GstFlowReturn flow_ret = GST_FLOW_OK;
1529
1530
0
      if (gst_video_event_parse_still_frame (event, &in_still)) {
1531
0
        if (in_still) {
1532
0
          GST_DEBUG_OBJECT (decoder, "draining current data for still-frame");
1533
0
          GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1534
0
          flow_ret = gst_video_decoder_drain_out (decoder, FALSE);
1535
0
          GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1536
0
          ret = (flow_ret == GST_FLOW_OK);
1537
0
        }
1538
        /* Forward STILL_FRAME immediately. Everything is drained after
1539
         * the STILL_FRAME event and we can forward this event immediately
1540
         * now without having buffers out of order.
1541
         */
1542
0
        forward_immediate = TRUE;
1543
0
      }
1544
0
      break;
1545
0
    }
1546
3
    case GST_EVENT_SEGMENT:
1547
3
    {
1548
3
      GstSegment segment;
1549
1550
3
      gst_event_copy_segment (event, &segment);
1551
1552
3
      if (segment.format == GST_FORMAT_TIME) {
1553
3
        GST_DEBUG_OBJECT (decoder,
1554
3
            "received TIME SEGMENT %" GST_SEGMENT_FORMAT, &segment);
1555
3
      } else {
1556
0
        gint64 start;
1557
1558
0
        GST_DEBUG_OBJECT (decoder,
1559
0
            "received SEGMENT %" GST_SEGMENT_FORMAT, &segment);
1560
1561
        /* handle newsegment as a result from our legacy simple seeking */
1562
        /* note that initial 0 should convert to 0 in any case */
1563
0
        if (priv->do_estimate_rate &&
1564
0
            gst_pad_query_convert (decoder->sinkpad, GST_FORMAT_BYTES,
1565
0
                segment.start, GST_FORMAT_TIME, &start)) {
1566
          /* best attempt convert */
1567
          /* as these are only estimates, stop is kept open-ended to avoid
1568
           * premature cutting */
1569
0
          GST_DEBUG_OBJECT (decoder,
1570
0
              "converted to TIME start %" GST_TIME_FORMAT,
1571
0
              GST_TIME_ARGS (start));
1572
0
          segment.start = start;
1573
0
          segment.stop = GST_CLOCK_TIME_NONE;
1574
0
          segment.time = start;
1575
          /* replace event */
1576
0
          gst_event_unref (event);
1577
0
          event = gst_event_new_segment (&segment);
1578
0
        } else {
1579
0
          goto newseg_wrong_format;
1580
0
        }
1581
0
      }
1582
1583
3
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1584
1585
      /* Update the decode flags in the segment if we have an instant-rate
1586
       * override active */
1587
3
      GST_OBJECT_LOCK (decoder);
1588
3
      if (!priv->decode_flags_override)
1589
3
        priv->decode_flags = segment.flags;
1590
0
      else {
1591
0
        segment.flags &= ~GST_SEGMENT_INSTANT_FLAGS;
1592
0
        segment.flags |= priv->decode_flags & GST_SEGMENT_INSTANT_FLAGS;
1593
0
      }
1594
1595
3
      decoder->input_segment = segment;
1596
3
      decoder->priv->in_out_segment_sync = FALSE;
1597
1598
3
      GST_OBJECT_UNLOCK (decoder);
1599
3
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1600
1601
3
      break;
1602
3
    }
1603
0
    case GST_EVENT_INSTANT_RATE_CHANGE:
1604
0
    {
1605
0
      GstSegmentFlags flags;
1606
0
      GstSegment *seg;
1607
1608
0
      gst_event_parse_instant_rate_change (event, NULL, &flags);
1609
1610
0
      GST_OBJECT_LOCK (decoder);
1611
0
      priv->decode_flags_override = TRUE;
1612
0
      priv->decode_flags = flags;
1613
1614
      /* Update the input segment flags */
1615
0
      seg = &decoder->input_segment;
1616
0
      seg->flags &= ~GST_SEGMENT_INSTANT_FLAGS;
1617
0
      seg->flags |= priv->decode_flags & GST_SEGMENT_INSTANT_FLAGS;
1618
0
      GST_OBJECT_UNLOCK (decoder);
1619
0
      break;
1620
3
    }
1621
0
    case GST_EVENT_FLUSH_STOP:
1622
0
    {
1623
0
      GList *l;
1624
1625
0
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1626
0
      for (l = priv->frames.head; l; l = l->next) {
1627
0
        GstVideoCodecFrame *frame = l->data;
1628
1629
0
        frame->events = _flush_events (decoder->srcpad, frame->events);
1630
0
      }
1631
0
      priv->current_frame_events = _flush_events (decoder->srcpad,
1632
0
          decoder->priv->current_frame_events);
1633
1634
      /* well, this is kind of worse than a DISCONT */
1635
0
      gst_video_decoder_flush (decoder, TRUE);
1636
0
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1637
      /* Forward FLUSH_STOP immediately: downstream expects to see it
1638
       * right away, and no buffers are queued at this
1639
       * point anyway.
1640
       */
1641
0
      forward_immediate = TRUE;
1642
0
      break;
1643
3
    }
1644
3
    case GST_EVENT_TAG:
1645
3
    {
1646
3
      GstTagList *tags;
1647
1648
3
      gst_event_parse_tag (event, &tags);
1649
1650
3
      if (gst_tag_list_get_scope (tags) == GST_TAG_SCOPE_STREAM) {
1651
0
        GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1652
0
        if (priv->upstream_tags != tags) {
1653
0
          if (priv->upstream_tags)
1654
0
            gst_tag_list_unref (priv->upstream_tags);
1655
0
          priv->upstream_tags = gst_tag_list_ref (tags);
1656
0
          GST_INFO_OBJECT (decoder, "upstream tags: %" GST_PTR_FORMAT, tags);
1657
0
        }
1658
0
        gst_event_unref (event);
1659
0
        event = gst_video_decoder_create_merged_tags_event (decoder);
1660
0
        GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1661
0
        if (!event)
1662
0
          ret = TRUE;
1663
0
      }
1664
3
      break;
1665
3
    }
1666
0
    default:
1667
0
      break;
1668
18
  }
1669
1670
  /* Forward non-serialized events immediately, and all other
1671
   * events which can be forwarded immediately without potentially
1672
   * causing the event to go out of order with other events and
1673
   * buffers as decided above.
1674
   */
1675
18
  if (event) {
1676
12
    if (!GST_EVENT_IS_SERIALIZED (event) || forward_immediate) {
1677
6
      ret = gst_video_decoder_push_event (decoder, event);
1678
6
    } else {
1679
6
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
1680
6
      decoder->priv->current_frame_events =
1681
6
          g_list_prepend (decoder->priv->current_frame_events, event);
1682
6
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
1683
6
      ret = TRUE;
1684
6
    }
1685
12
  }
1686
1687
18
  return ret;
1688
1689
0
newseg_wrong_format:
1690
0
  {
1691
0
    GST_DEBUG_OBJECT (decoder, "received non-TIME newsegment");
1692
0
    gst_event_unref (event);
1693
    /* SWALLOW EVENT */
1694
0
    return TRUE;
1695
18
  }
1696
18
}
1697
1698
static gboolean
1699
gst_video_decoder_sink_event (GstPad * pad, GstObject * parent,
1700
    GstEvent * event)
1701
18
{
1702
18
  GstVideoDecoder *decoder;
1703
18
  GstVideoDecoderClass *decoder_class;
1704
18
  gboolean ret = FALSE;
1705
1706
18
  decoder = GST_VIDEO_DECODER (parent);
1707
18
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
1708
1709
18
  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
1710
18
      GST_EVENT_TYPE_NAME (event));
1711
1712
18
  if (decoder_class->sink_event)
1713
18
    ret = decoder_class->sink_event (decoder, event);
1714
1715
18
  return ret;
1716
18
}
1717
1718
/* perform upstream byte <-> time conversion (duration, seeking)
1719
 * if subclass allows and if enough data for moderately decent conversion */
1720
static inline gboolean
1721
gst_video_decoder_do_byte (GstVideoDecoder * dec)
1722
0
{
1723
0
  gboolean ret;
1724
1725
0
  GST_OBJECT_LOCK (dec);
1726
0
  ret = dec->priv->do_estimate_rate && (dec->priv->bytes_out > 0)
1727
0
      && (dec->priv->time > GST_SECOND);
1728
0
  GST_OBJECT_UNLOCK (dec);
1729
1730
0
  return ret;
1731
0
}
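The heuristic above can be sketched standalone. This is a hedged illustration, not GStreamer API: `NSEC_PER_SEC`, `can_do_byte_conversion` and `estimate_byte_offset` are invented names standing in for `GST_SECOND`, the `do_estimate_rate`/`bytes_out`/`time` check, and the pad-query conversion that the real code performs.

```c
#include <stdint.h>

#define NSEC_PER_SEC INT64_C(1000000000)        /* stands in for GST_SECOND */

/* Mirrors the heuristic above: estimated byte <-> time conversion is
 * only attempted when rate estimation is enabled, at least one byte
 * has been produced, and more than one second of media was observed. */
static int
can_do_byte_conversion (int do_estimate_rate, int64_t bytes_out,
    int64_t time_ns)
{
  return do_estimate_rate && bytes_out > 0 && time_ns > NSEC_PER_SEC;
}

/* With enough data, a time position maps to an approximate byte offset
 * proportionally (the real code defers to a pad convert query; this
 * plain division ignores 64-bit overflow for brevity). */
static int64_t
estimate_byte_offset (int64_t target_ns, int64_t bytes_out, int64_t time_ns)
{
  return target_ns * bytes_out / time_ns;
}
```

One second of observed media is an arbitrary but cheap threshold for a "moderately decent" average bitrate estimate.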
1732
1733
static gboolean
1734
gst_video_decoder_do_seek (GstVideoDecoder * dec, GstEvent * event)
1735
0
{
1736
0
  GstFormat format;
1737
0
  GstSeekFlags flags;
1738
0
  GstSeekType start_type, end_type;
1739
0
  gdouble rate;
1740
0
  gint64 start, start_time, end_time;
1741
0
  GstSegment seek_segment;
1742
0
  guint32 seqnum;
1743
1744
0
  gst_event_parse_seek (event, &rate, &format, &flags, &start_type,
1745
0
      &start_time, &end_type, &end_time);
1746
1747
  /* we'll handle plain open-ended flushing seeks with the simple approach */
1748
0
  if (rate != 1.0) {
1749
0
    GST_DEBUG_OBJECT (dec, "unsupported seek: rate");
1750
0
    return FALSE;
1751
0
  }
1752
1753
0
  if (start_type != GST_SEEK_TYPE_SET) {
1754
0
    GST_DEBUG_OBJECT (dec, "unsupported seek: start time");
1755
0
    return FALSE;
1756
0
  }
1757
1758
0
  if ((end_type != GST_SEEK_TYPE_SET && end_type != GST_SEEK_TYPE_NONE) ||
1759
0
      (end_type == GST_SEEK_TYPE_SET && end_time != GST_CLOCK_TIME_NONE)) {
1760
0
    GST_DEBUG_OBJECT (dec, "unsupported seek: end time");
1761
0
    return FALSE;
1762
0
  }
1763
1764
0
  if (!(flags & GST_SEEK_FLAG_FLUSH)) {
1765
0
    GST_DEBUG_OBJECT (dec, "unsupported seek: not flushing");
1766
0
    return FALSE;
1767
0
  }
1768
1769
0
  memcpy (&seek_segment, &dec->output_segment, sizeof (seek_segment));
1770
0
  gst_segment_do_seek (&seek_segment, rate, format, flags, start_type,
1771
0
      start_time, end_type, end_time, NULL);
1772
0
  start_time = seek_segment.position;
1773
1774
0
  if (!gst_pad_query_convert (dec->sinkpad, GST_FORMAT_TIME, start_time,
1775
0
          GST_FORMAT_BYTES, &start)) {
1776
0
    GST_DEBUG_OBJECT (dec, "conversion failed");
1777
0
    return FALSE;
1778
0
  }
1779
1780
0
  seqnum = gst_event_get_seqnum (event);
1781
0
  event = gst_event_new_seek (1.0, GST_FORMAT_BYTES, flags,
1782
0
      GST_SEEK_TYPE_SET, start, GST_SEEK_TYPE_NONE, -1);
1783
0
  gst_event_set_seqnum (event, seqnum);
1784
1785
0
  GST_DEBUG_OBJECT (dec, "seeking to %" GST_TIME_FORMAT " at byte offset %"
1786
0
      G_GINT64_FORMAT, GST_TIME_ARGS (start_time), start);
1787
1788
0
  return gst_pad_push_event (dec->sinkpad, event);
1789
0
}
1790
1791
static gboolean
1792
gst_video_decoder_src_event_default (GstVideoDecoder * decoder,
1793
    GstEvent * event)
1794
0
{
1795
0
  GstVideoDecoderPrivate *priv;
1796
0
  gboolean res = FALSE;
1797
1798
0
  priv = decoder->priv;
1799
1800
0
  GST_DEBUG_OBJECT (decoder,
1801
0
      "received event %d, %s", GST_EVENT_TYPE (event),
1802
0
      GST_EVENT_TYPE_NAME (event));
1803
1804
0
  switch (GST_EVENT_TYPE (event)) {
1805
0
    case GST_EVENT_SEEK:
1806
0
    {
1807
0
      GstFormat format;
1808
0
      gdouble rate;
1809
0
      GstSeekFlags flags;
1810
0
      GstSeekType start_type, stop_type;
1811
0
      gint64 start, stop;
1812
0
      gint64 tstart, tstop;
1813
0
      guint32 seqnum;
1814
1815
0
      gst_event_parse_seek (event, &rate, &format, &flags, &start_type, &start,
1816
0
          &stop_type, &stop);
1817
0
      seqnum = gst_event_get_seqnum (event);
1818
1819
      /* upstream gets a chance first */
1820
0
      if ((res = gst_pad_push_event (decoder->sinkpad, event)))
1821
0
        break;
1822
1823
      /* if upstream fails for a time seek, maybe we can help if allowed */
1824
0
      if (format == GST_FORMAT_TIME) {
1825
0
        if (gst_video_decoder_do_byte (decoder))
1826
0
          res = gst_video_decoder_do_seek (decoder, event);
1827
0
        break;
1828
0
      }
1829
1830
      /* ... though a non-time seek can be aided as well */
1831
      /* First bring the requested format to time */
1832
0
      if (!(res =
1833
0
              gst_pad_query_convert (decoder->srcpad, format, start,
1834
0
                  GST_FORMAT_TIME, &tstart)))
1835
0
        goto convert_error;
1836
0
      if (!(res =
1837
0
              gst_pad_query_convert (decoder->srcpad, format, stop,
1838
0
                  GST_FORMAT_TIME, &tstop)))
1839
0
        goto convert_error;
1840
1841
      /* then seek with time on the peer */
1842
0
      event = gst_event_new_seek (rate, GST_FORMAT_TIME,
1843
0
          flags, start_type, tstart, stop_type, tstop);
1844
0
      gst_event_set_seqnum (event, seqnum);
1845
1846
0
      res = gst_pad_push_event (decoder->sinkpad, event);
1847
0
      break;
1848
0
    }
1849
0
    case GST_EVENT_QOS:
1850
0
    {
1851
0
      GstQOSType type;
1852
0
      gdouble proportion;
1853
0
      GstClockTimeDiff diff;
1854
0
      GstClockTime timestamp;
1855
1856
0
      gst_event_parse_qos (event, &type, &proportion, &diff, &timestamp);
1857
1858
0
      GST_OBJECT_LOCK (decoder);
1859
0
      priv->proportion = proportion;
1860
0
      if (G_LIKELY (GST_CLOCK_TIME_IS_VALID (timestamp))) {
1861
0
        if (G_UNLIKELY (diff > 0)) {
1862
0
          priv->earliest_time =
1863
0
              timestamp + MIN (2 * diff, GST_SECOND) + priv->qos_frame_duration;
1864
0
        } else {
1865
0
          priv->earliest_time = timestamp + diff;
1866
0
        }
1867
0
      } else {
1868
0
        priv->earliest_time = GST_CLOCK_TIME_NONE;
1869
0
      }
1870
0
      GST_OBJECT_UNLOCK (decoder);
1871
1872
0
      GST_DEBUG_OBJECT (decoder,
1873
0
          "got QoS %" GST_TIME_FORMAT ", %" GST_STIME_FORMAT ", %g",
1874
0
          GST_TIME_ARGS (timestamp), GST_STIME_ARGS (diff), proportion);
1875
1876
0
      res = gst_pad_push_event (decoder->sinkpad, event);
1877
0
      break;
1878
0
    }
1879
0
    default:
1880
0
      res = gst_pad_push_event (decoder->sinkpad, event);
1881
0
      break;
1882
0
  }
1883
0
done:
1884
0
  return res;
1885
1886
0
convert_error:
1887
0
  GST_DEBUG_OBJECT (decoder, "could not convert format");
1888
0
  goto done;
1889
0
}
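The QOS branch above computes the earliest acceptable timestamp from the reported jitter. Below is a self-contained sketch of just that arithmetic; `compute_earliest_time`, `NSEC_PER_SEC` and `CLOCK_TIME_NONE` are illustrative stand-ins for the real field updates under `GST_OBJECT_LOCK`, `GST_SECOND` and `GST_CLOCK_TIME_NONE`.

```c
#include <stdint.h>

#define NSEC_PER_SEC INT64_C(1000000000)        /* stands in for GST_SECOND */
#define CLOCK_TIME_NONE ((int64_t) -1)  /* stands in for GST_CLOCK_TIME_NONE */

/* When rendering is late (diff > 0) the earliest acceptable timestamp
 * is pushed ahead by at most one second plus one frame duration so the
 * decoder can catch up by dropping frames; when early, it simply
 * tracks the reported jitter. */
static int64_t
compute_earliest_time (int64_t timestamp, int64_t diff, int64_t frame_duration)
{
  if (timestamp == CLOCK_TIME_NONE)
    return CLOCK_TIME_NONE;
  if (diff > 0) {
    int64_t clamped = 2 * diff < NSEC_PER_SEC ? 2 * diff : NSEC_PER_SEC;
    return timestamp + clamped + frame_duration;
  }
  return timestamp + diff;
}
```

Doubling the jitter gives the decoder headroom to catch up, while the one-second clamp keeps a single badly late frame from skipping far ahead.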
1890
1891
static gboolean
1892
gst_video_decoder_src_event (GstPad * pad, GstObject * parent, GstEvent * event)
1893
0
{
1894
0
  GstVideoDecoder *decoder;
1895
0
  GstVideoDecoderClass *decoder_class;
1896
0
  gboolean ret = FALSE;
1897
1898
0
  decoder = GST_VIDEO_DECODER (parent);
1899
0
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
1900
1901
0
  GST_DEBUG_OBJECT (decoder, "received event %d, %s", GST_EVENT_TYPE (event),
1902
0
      GST_EVENT_TYPE_NAME (event));
1903
1904
0
  if (decoder_class->src_event)
1905
0
    ret = decoder_class->src_event (decoder, event);
1906
1907
0
  return ret;
1908
0
}
1909
1910
static gboolean
1911
gst_video_decoder_src_query_default (GstVideoDecoder * dec, GstQuery * query)
1912
8
{
1913
8
  GstPad *pad = GST_VIDEO_DECODER_SRC_PAD (dec);
1914
8
  gboolean res = TRUE;
1915
1916
8
  GST_LOG_OBJECT (dec, "handling query: %" GST_PTR_FORMAT, query);
1917
1918
8
  switch (GST_QUERY_TYPE (query)) {
1919
0
    case GST_QUERY_POSITION:
1920
0
    {
1921
0
      GstFormat format;
1922
0
      gint64 time, value;
1923
1924
      /* upstream gets a chance first */
1925
0
      if ((res = gst_pad_peer_query (dec->sinkpad, query))) {
1926
0
        GST_LOG_OBJECT (dec, "returning peer response");
1927
0
        break;
1928
0
      }
1929
1930
      /* Refuse BYTES format queries. If it made sense to
1931
       * answer them, upstream would have already */
1932
0
      gst_query_parse_position (query, &format, NULL);
1933
1934
0
      if (format == GST_FORMAT_BYTES) {
1935
0
        GST_LOG_OBJECT (dec, "Ignoring BYTES position query");
1936
0
        break;
1937
0
      }
1938
1939
      /* we start from the last seen time */
1940
0
      time = dec->priv->last_timestamp_out;
1941
      /* correct for the segment values */
1942
0
      time = gst_segment_to_stream_time (&dec->output_segment,
1943
0
          GST_FORMAT_TIME, time);
1944
1945
0
      GST_LOG_OBJECT (dec,
1946
0
          "query %p: our time: %" GST_TIME_FORMAT, query, GST_TIME_ARGS (time));
1947
1948
      /* and convert to the final format */
1949
0
      if (!(res = gst_pad_query_convert (pad, GST_FORMAT_TIME, time,
1950
0
                  format, &value)))
1951
0
        break;
1952
1953
0
      gst_query_set_position (query, format, value);
1954
1955
0
      GST_LOG_OBJECT (dec,
1956
0
          "query %p: we return %" G_GINT64_FORMAT " (format %u)", query, value,
1957
0
          format);
1958
0
      break;
1959
0
    }
1960
0
    case GST_QUERY_DURATION:
1961
0
    {
1962
0
      GstFormat format;
1963
1964
      /* upstream in any case */
1965
0
      if ((res = gst_pad_query_default (pad, GST_OBJECT (dec), query)))
1966
0
        break;
1967
1968
0
      gst_query_parse_duration (query, &format, NULL);
1969
      /* try answering TIME by converting from BYTE if subclass allows */
1970
0
      if (format == GST_FORMAT_TIME && gst_video_decoder_do_byte (dec)) {
1971
0
        gint64 value;
1972
1973
0
        if (gst_pad_peer_query_duration (dec->sinkpad, GST_FORMAT_BYTES,
1974
0
                &value)) {
1975
0
          GST_LOG_OBJECT (dec, "upstream size %" G_GINT64_FORMAT, value);
1976
0
          if (gst_pad_query_convert (dec->sinkpad,
1977
0
                  GST_FORMAT_BYTES, value, GST_FORMAT_TIME, &value)) {
1978
0
            gst_query_set_duration (query, GST_FORMAT_TIME, value);
1979
0
            res = TRUE;
1980
0
          }
1981
0
        }
1982
0
      }
1983
0
      break;
1984
0
    }
1985
0
    case GST_QUERY_CONVERT:
1986
0
    {
1987
0
      GstFormat src_fmt, dest_fmt;
1988
0
      gint64 src_val, dest_val;
1989
1990
0
      GST_DEBUG_OBJECT (dec, "convert query");
1991
1992
0
      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
1993
0
      GST_OBJECT_LOCK (dec);
1994
0
      if (dec->priv->output_state != NULL)
1995
0
        res = __gst_video_rawvideo_convert (dec->priv->output_state,
1996
0
            src_fmt, src_val, &dest_fmt, &dest_val);
1997
0
      else
1998
0
        res = FALSE;
1999
0
      GST_OBJECT_UNLOCK (dec);
2000
0
      if (!res)
2001
0
        goto error;
2002
0
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
2003
0
      break;
2004
0
    }
2005
0
    case GST_QUERY_LATENCY:
2006
0
    {
2007
0
      gboolean live;
2008
0
      GstClockTime min_latency, max_latency;
2009
2010
0
      res = gst_pad_peer_query (dec->sinkpad, query);
2011
0
      if (res) {
2012
0
        gst_query_parse_latency (query, &live, &min_latency, &max_latency);
2013
0
        GST_DEBUG_OBJECT (dec, "Peer latency: live %d, min %"
2014
0
            GST_TIME_FORMAT " max %" GST_TIME_FORMAT, live,
2015
0
            GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
2016
2017
0
        GST_OBJECT_LOCK (dec);
2018
0
        min_latency += dec->priv->min_latency;
2019
0
        if (max_latency == GST_CLOCK_TIME_NONE
2020
0
            || dec->priv->max_latency == GST_CLOCK_TIME_NONE)
2021
0
          max_latency = GST_CLOCK_TIME_NONE;
2022
0
        else
2023
0
          max_latency += dec->priv->max_latency;
2024
0
        GST_OBJECT_UNLOCK (dec);
2025
2026
0
        gst_query_set_latency (query, live, min_latency, max_latency);
2027
0
      }
2028
0
    }
2029
0
      break;
2030
8
    default:
2031
8
      res = gst_pad_query_default (pad, GST_OBJECT (dec), query);
2032
8
  }
2033
8
  return res;
2034
2035
0
error:
2036
0
  GST_ERROR_OBJECT (dec, "query failed");
2037
0
  return res;
2038
8
}
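The LATENCY case above aggregates the peer's answer with the decoder's own bounds. A minimal sketch of that arithmetic, with `accumulate_latency` and `CLOCK_TIME_NONE` as illustrative stand-ins for the locked field access and `GST_CLOCK_TIME_NONE`:

```c
#include <stdint.h>

#define CLOCK_TIME_NONE ((int64_t) -1)  /* stands in for GST_CLOCK_TIME_NONE */

/* The decoder adds its own latency to the upstream values; an
 * unbounded (NONE) maximum on either side keeps the combined
 * maximum unbounded. */
static void
accumulate_latency (int64_t * min_latency, int64_t * max_latency,
    int64_t own_min, int64_t own_max)
{
  *min_latency += own_min;
  if (*max_latency == CLOCK_TIME_NONE || own_max == CLOCK_TIME_NONE)
    *max_latency = CLOCK_TIME_NONE;
  else
    *max_latency += own_max;
}
```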
2039
2040
static gboolean
2041
gst_video_decoder_src_query (GstPad * pad, GstObject * parent, GstQuery * query)
2042
8
{
2043
8
  GstVideoDecoder *decoder;
2044
8
  GstVideoDecoderClass *decoder_class;
2045
8
  gboolean ret = FALSE;
2046
2047
8
  decoder = GST_VIDEO_DECODER (parent);
2048
8
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
2049
2050
8
  GST_DEBUG_OBJECT (decoder, "received query %d, %s", GST_QUERY_TYPE (query),
2051
8
      GST_QUERY_TYPE_NAME (query));
2052
2053
8
  if (decoder_class->src_query)
2054
8
    ret = decoder_class->src_query (decoder, query);
2055
2056
8
  return ret;
2057
8
}
2058
2059
/**
2060
 * gst_video_decoder_proxy_getcaps:
2061
 * @decoder: a #GstVideoDecoder
2062
 * @caps: (nullable): initial caps
2063
 * @filter: (nullable): filter caps
2064
 *
2065
 * Returns caps that express @caps (or sink template caps if @caps == NULL)
2066
 * restricted to resolution/format/... combinations supported by downstream
2067
 * elements.
2068
 *
2069
 * Returns: (transfer full): a #GstCaps owned by caller
2070
 *
2071
 * Since: 1.6
2072
 */
2073
GstCaps *
2074
gst_video_decoder_proxy_getcaps (GstVideoDecoder * decoder, GstCaps * caps,
2075
    GstCaps * filter)
2076
0
{
2077
0
  return __gst_video_element_proxy_getcaps (GST_ELEMENT_CAST (decoder),
2078
0
      GST_VIDEO_DECODER_SINK_PAD (decoder),
2079
0
      GST_VIDEO_DECODER_SRC_PAD (decoder), caps, filter);
2080
0
}
2081
2082
static GstCaps *
2083
gst_video_decoder_sink_getcaps (GstVideoDecoder * decoder, GstCaps * filter)
2084
0
{
2085
0
  GstVideoDecoderClass *klass;
2086
0
  GstCaps *caps;
2087
2088
0
  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
2089
2090
0
  if (klass->getcaps)
2091
0
    caps = klass->getcaps (decoder, filter);
2092
0
  else
2093
0
    caps = gst_video_decoder_proxy_getcaps (decoder, NULL, filter);
2094
2095
0
  GST_LOG_OBJECT (decoder, "Returning caps %" GST_PTR_FORMAT, caps);
2096
2097
0
  return caps;
2098
0
}
2099
2100
static gboolean
2101
gst_video_decoder_sink_query_default (GstVideoDecoder * decoder,
2102
    GstQuery * query)
2103
13
{
2104
13
  GstPad *pad = GST_VIDEO_DECODER_SINK_PAD (decoder);
2105
13
  GstVideoDecoderPrivate *priv;
2106
13
  gboolean res = FALSE;
2107
2108
13
  priv = decoder->priv;
2109
2110
13
  GST_LOG_OBJECT (decoder, "handling query: %" GST_PTR_FORMAT, query);
2111
2112
13
  switch (GST_QUERY_TYPE (query)) {
2113
0
    case GST_QUERY_CONVERT:
2114
0
    {
2115
0
      GstFormat src_fmt, dest_fmt;
2116
0
      gint64 src_val, dest_val;
2117
2118
0
      gst_query_parse_convert (query, &src_fmt, &src_val, &dest_fmt, &dest_val);
2119
0
      GST_OBJECT_LOCK (decoder);
2120
0
      res =
2121
0
          __gst_video_encoded_video_convert (priv->bytes_out, priv->time,
2122
0
          src_fmt, src_val, &dest_fmt, &dest_val);
2123
0
      GST_OBJECT_UNLOCK (decoder);
2124
0
      if (!res)
2125
0
        goto error;
2126
0
      gst_query_set_convert (query, src_fmt, src_val, dest_fmt, dest_val);
2127
0
      break;
2128
0
    }
2129
0
    case GST_QUERY_ALLOCATION:{
2130
0
      GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
2131
2132
0
      if (klass->propose_allocation)
2133
0
        res = klass->propose_allocation (decoder, query);
2134
0
      break;
2135
0
    }
2136
0
    case GST_QUERY_CAPS:{
2137
0
      GstCaps *filter, *caps;
2138
2139
0
      gst_query_parse_caps (query, &filter);
2140
0
      caps = gst_video_decoder_sink_getcaps (decoder, filter);
2141
0
      gst_query_set_caps_result (query, caps);
2142
0
      gst_caps_unref (caps);
2143
0
      res = TRUE;
2144
0
      break;
2145
0
    }
2146
13
    case GST_QUERY_ACCEPT_CAPS:{
2147
13
      if (decoder->priv->use_default_pad_acceptcaps) {
2148
13
        res =
2149
13
            gst_pad_query_default (GST_VIDEO_DECODER_SINK_PAD (decoder),
2150
13
            GST_OBJECT_CAST (decoder), query);
2151
13
      } else {
2152
0
        GstCaps *caps;
2153
0
        GstCaps *allowed_caps;
2154
0
        GstCaps *template_caps;
2155
0
        gboolean accept;
2156
2157
0
        gst_query_parse_accept_caps (query, &caps);
2158
2159
0
        template_caps = gst_pad_get_pad_template_caps (pad);
2160
0
        accept = gst_caps_is_subset (caps, template_caps);
2161
0
        gst_caps_unref (template_caps);
2162
2163
0
        if (accept) {
2164
0
          allowed_caps =
2165
0
              gst_pad_query_caps (GST_VIDEO_DECODER_SINK_PAD (decoder), caps);
2166
2167
0
          accept = gst_caps_can_intersect (caps, allowed_caps);
2168
2169
0
          gst_caps_unref (allowed_caps);
2170
0
        }
2171
2172
0
        gst_query_set_accept_caps_result (query, accept);
2173
0
        res = TRUE;
2174
0
      }
2175
13
      break;
2176
0
    }
2177
0
    default:
2178
0
      res = gst_pad_query_default (pad, GST_OBJECT (decoder), query);
2179
0
      break;
2180
13
  }
2181
13
done:
2182
2183
13
  return res;
2184
0
error:
2185
0
  GST_DEBUG_OBJECT (decoder, "query failed");
2186
0
  goto done;
2187
2188
13
}
2189
2190
static gboolean
2191
gst_video_decoder_sink_query (GstPad * pad, GstObject * parent,
2192
    GstQuery * query)
2193
13
{
2194
13
  GstVideoDecoder *decoder;
2195
13
  GstVideoDecoderClass *decoder_class;
2196
13
  gboolean ret = FALSE;
2197
2198
13
  decoder = GST_VIDEO_DECODER (parent);
2199
13
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
2200
2201
13
  GST_DEBUG_OBJECT (decoder, "received query %d, %s", GST_QUERY_TYPE (query),
2202
13
      GST_QUERY_TYPE_NAME (query));
2203
2204
13
  if (decoder_class->sink_query)
2205
13
    ret = decoder_class->sink_query (decoder, query);
2206
2207
13
  return ret;
2208
13
}
2209
2210
typedef struct _Timestamp Timestamp;
2211
struct _Timestamp
2212
{
2213
  guint64 offset;
2214
  GstClockTime pts;
2215
  GstClockTime dts;
2216
  GstClockTime duration;
2217
  guint flags;
2218
};
2219
2220
static void
2221
timestamp_free (Timestamp * ts)
2222
6
{
2223
6
  g_free (ts);
2224
6
}
2225
2226
static void
2227
gst_video_decoder_add_buffer_info (GstVideoDecoder * decoder,
2228
    GstBuffer * buffer)
2229
6
{
2230
6
  GstVideoDecoderPrivate *priv = decoder->priv;
2231
6
  Timestamp *ts;
2232
2233
6
  if (!GST_BUFFER_PTS_IS_VALID (buffer) &&
2234
5
      !GST_BUFFER_DTS_IS_VALID (buffer) &&
2235
5
      !GST_BUFFER_DURATION_IS_VALID (buffer) &&
2236
3
      GST_BUFFER_FLAGS (buffer) == 0) {
2237
    /* Save memory - don't bother storing info
2238
     * for buffers with no distinguishing info */
2239
0
    return;
2240
0
  }
2241
2242
6
  ts = g_new (Timestamp, 1);
2243
2244
6
  GST_LOG_OBJECT (decoder,
2245
6
      "adding PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT
2246
6
      " (offset:%" G_GUINT64_FORMAT ")",
2247
6
      GST_TIME_ARGS (GST_BUFFER_PTS (buffer)),
2248
6
      GST_TIME_ARGS (GST_BUFFER_DTS (buffer)), priv->input_offset);
2249
2250
6
  ts->offset = priv->input_offset;
2251
6
  ts->pts = GST_BUFFER_PTS (buffer);
2252
6
  ts->dts = GST_BUFFER_DTS (buffer);
2253
6
  ts->duration = GST_BUFFER_DURATION (buffer);
2254
6
  ts->flags = GST_BUFFER_FLAGS (buffer);
2255
2256
6
  g_queue_push_tail (&priv->timestamps, ts);
2257
2258
6
  if (g_queue_get_length (&priv->timestamps) > 40) {
2259
0
    GST_WARNING_OBJECT (decoder,
2260
0
        "decoder timestamp list getting long: %d timestamps, "
2261
0
        "possible internal leaking?", g_queue_get_length (&priv->timestamps));
2262
0
  }
2263
6
}
2264
2265
static void
2266
gst_video_decoder_get_buffer_info_at_offset (GstVideoDecoder *
2267
    decoder, guint64 offset, GstClockTime * pts, GstClockTime * dts,
2268
    GstClockTime * duration, guint * flags)
2269
6
{
2270
6
#ifndef GST_DISABLE_GST_DEBUG
2271
6
  guint64 got_offset = 0;
2272
6
#endif
2273
6
  Timestamp *ts;
2274
6
  GList *g;
2275
2276
6
  *pts = GST_CLOCK_TIME_NONE;
2277
6
  *dts = GST_CLOCK_TIME_NONE;
2278
6
  *duration = GST_CLOCK_TIME_NONE;
2279
6
  *flags = 0;
2280
2281
6
  g = decoder->priv->timestamps.head;
2282
10
  while (g) {
2283
6
    ts = g->data;
2284
6
    if (ts->offset <= offset) {
2285
4
      GList *next = g->next;
2286
4
#ifndef GST_DISABLE_GST_DEBUG
2287
4
      got_offset = ts->offset;
2288
4
#endif
2289
4
      *pts = ts->pts;
2290
4
      *dts = ts->dts;
2291
4
      *duration = ts->duration;
2292
4
      *flags = ts->flags;
2293
4
      g_queue_delete_link (&decoder->priv->timestamps, g);
2294
4
      g = next;
2295
4
      timestamp_free (ts);
2296
4
    } else {
2297
2
      break;
2298
2
    }
2299
6
  }
2300
2301
6
  GST_LOG_OBJECT (decoder,
2302
6
      "got PTS %" GST_TIME_FORMAT " DTS %" GST_TIME_FORMAT " flags %x @ offs %"
2303
6
      G_GUINT64_FORMAT " (wanted offset:%" G_GUINT64_FORMAT ")",
2304
6
      GST_TIME_ARGS (*pts), GST_TIME_ARGS (*dts), *flags, got_offset, offset);
2305
6
}
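The two functions above implement an offset-to-timestamp map: each input buffer's timestamps are recorded with the byte offset it started at, and when a parsed frame is produced, the newest record at or before the frame's offset wins while older records are consumed. An array-based sketch of that lookup (the real code uses a `GQueue` of `Timestamp` records; `TsRecord` and `take_pts_at_offset` are invented names):

```c
#include <stdint.h>
#include <stddef.h>

typedef struct
{
  uint64_t offset;
  int64_t pts;                  /* -1 stands in for GST_CLOCK_TIME_NONE */
} TsRecord;

/* Return the PTS of the newest record at or before @offset, removing
 * it and all older records; records beyond @offset are kept for
 * later frames. */
static int64_t
take_pts_at_offset (TsRecord * queue, size_t * len, uint64_t offset)
{
  int64_t pts = -1;
  size_t i, kept = 0;

  for (i = 0; i < *len; i++) {
    if (queue[i].offset <= offset)
      pts = queue[i].pts;       /* consume: later records override */
    else
      queue[kept++] = queue[i]; /* keep records beyond this frame */
  }
  *len = kept;
  return pts;
}
```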
2306
2307
static void
2308
gst_video_decoder_clear_queues (GstVideoDecoder * dec)
2309
12
{
2310
12
  GstVideoDecoderPrivate *priv = dec->priv;
2311
2312
12
  g_list_free_full (priv->output_queued,
2313
12
      (GDestroyNotify) gst_mini_object_unref);
2314
12
  priv->output_queued = NULL;
2315
2316
12
  g_list_free_full (priv->gather, (GDestroyNotify) gst_mini_object_unref);
2317
12
  priv->gather = NULL;
2318
12
  g_list_free_full (priv->decode, (GDestroyNotify) gst_video_codec_frame_unref);
2319
12
  priv->decode = NULL;
2320
12
  g_list_free_full (priv->parse, (GDestroyNotify) gst_mini_object_unref);
2321
12
  priv->parse = NULL;
2322
12
  g_list_free_full (priv->parse_gather,
2323
12
      (GDestroyNotify) gst_video_codec_frame_unref);
2324
12
  priv->parse_gather = NULL;
2325
12
  g_queue_clear_full (&priv->frames,
2326
12
      (GDestroyNotify) gst_video_codec_frame_unref);
2327
12
}
2328
2329
static void
2330
gst_video_decoder_reset (GstVideoDecoder * decoder, gboolean full,
2331
    gboolean flush_hard)
2332
12
{
2333
12
  GstVideoDecoderPrivate *priv = decoder->priv;
2334
2335
12
  GST_DEBUG_OBJECT (decoder, "reset full %d", full);
2336
2337
12
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
2338
2339
12
  if (full || flush_hard) {
2340
12
    gst_segment_init (&decoder->input_segment, GST_FORMAT_UNDEFINED);
2341
12
    gst_segment_init (&decoder->output_segment, GST_FORMAT_UNDEFINED);
2342
12
    gst_video_decoder_clear_queues (decoder);
2343
12
    decoder->priv->in_out_segment_sync = TRUE;
2344
2345
12
    if (priv->current_frame) {
2346
0
      gst_video_codec_frame_unref (priv->current_frame);
2347
0
      priv->current_frame = NULL;
2348
0
    }
2349
2350
12
    g_list_free_full (priv->current_frame_events,
2351
12
        (GDestroyNotify) gst_event_unref);
2352
12
    priv->current_frame_events = NULL;
2353
12
    g_list_free_full (priv->pending_events, (GDestroyNotify) gst_event_unref);
2354
12
    priv->pending_events = NULL;
2355
2356
12
    priv->error_count = 0;
2357
12
    priv->had_output_data = FALSE;
2358
12
    priv->had_input_data = FALSE;
2359
2360
12
    GST_OBJECT_LOCK (decoder);
2361
12
    priv->earliest_time = GST_CLOCK_TIME_NONE;
2362
12
    priv->proportion = 0.5;
2363
12
    priv->decode_flags_override = FALSE;
2364
2365
12
    priv->request_sync_point_flags = 0;
2366
12
    priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_UNSET;
2367
12
    priv->last_force_key_unit_time = GST_CLOCK_TIME_NONE;
2368
12
    GST_OBJECT_UNLOCK (decoder);
2369
12
    priv->distance_from_sync = -1;
2370
12
  }
2371
2372
12
  if (full) {
2373
12
    if (priv->input_state)
2374
3
      gst_video_codec_state_unref (priv->input_state);
2375
12
    priv->input_state = NULL;
2376
12
    GST_OBJECT_LOCK (decoder);
2377
12
    if (priv->output_state)
2378
0
      gst_video_codec_state_unref (priv->output_state);
2379
12
    priv->output_state = NULL;
2380
2381
12
    priv->qos_frame_duration = 0;
2382
12
    GST_OBJECT_UNLOCK (decoder);
2383
2384
12
    if (priv->tags)
2385
0
      gst_tag_list_unref (priv->tags);
2386
12
    priv->tags = NULL;
2387
12
    priv->tags_merge_mode = GST_TAG_MERGE_APPEND;
2388
12
    if (priv->upstream_tags) {
2389
0
      gst_tag_list_unref (priv->upstream_tags);
2390
0
      priv->upstream_tags = NULL;
2391
0
    }
2392
12
    priv->tags_changed = FALSE;
2393
12
    priv->reordered_output = FALSE;
2394
12
    priv->consecutive_orderered_output = 0;
2395
2396
12
    priv->dropped = 0;
2397
12
    priv->processed = 0;
2398
2399
12
    priv->posted_latency_msg = FALSE;
2400
2401
12
    priv->decode_frame_number = 0;
2402
2403
12
    if (priv->pool) {
2404
0
      GST_DEBUG_OBJECT (decoder, "deactivate pool %" GST_PTR_FORMAT,
2405
0
          priv->pool);
2406
0
      gst_buffer_pool_set_active (priv->pool, FALSE);
2407
0
      gst_object_unref (priv->pool);
2408
0
      priv->pool = NULL;
2409
0
    }
2410
2411
12
    if (priv->allocator) {
2412
0
      gst_object_unref (priv->allocator);
2413
0
      priv->allocator = NULL;
2414
0
    }
2415
12
  }
2416
2417
12
  priv->discont = TRUE;
2418
2419
12
  priv->last_timestamp_out = GST_CLOCK_TIME_NONE;
2420
12
  priv->pts_delta = GST_CLOCK_TIME_NONE;
2421
2422
12
  priv->input_offset = 0;
2423
12
  priv->frame_offset = 0;
2424
12
  gst_adapter_clear (priv->input_adapter);
2425
12
  gst_adapter_clear (priv->output_adapter);
2426
12
  g_queue_clear_full (&priv->timestamps, (GDestroyNotify) timestamp_free);
2427
2428
12
  GST_OBJECT_LOCK (decoder);
2429
12
  priv->bytes_out = 0;
2430
12
  priv->time = 0;
2431
12
  GST_OBJECT_UNLOCK (decoder);
2432
2433
12
#ifndef GST_DISABLE_DEBUG
2434
12
  priv->last_reset_time = gst_util_get_timestamp ();
2435
12
#endif
2436
2437
12
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
2438
12
}
2439
2440
static GstFlowReturn
2441
gst_video_decoder_chain_forward (GstVideoDecoder * decoder,
2442
    GstBuffer * buf, gboolean at_eos)
2443
6
{
2444
6
  GstVideoDecoderPrivate *priv;
2445
6
  GstVideoDecoderClass *klass G_GNUC_UNUSED;
2446
6
  GstFlowReturn ret = GST_FLOW_OK;
2447
2448
6
#ifndef G_DISABLE_CHECKS
2449
6
  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
2450
6
#endif
2451
6
  priv = decoder->priv;
2452
2453
6
  g_return_val_if_fail (priv->packetized || klass->parse, GST_FLOW_ERROR);
2454
2455
  /* Draining on DISCONT is handled in chain_reverse() for reverse playback,
2456
   * and this function would only be called to get everything collected GOP
2457
   * by GOP in the parse_gather list */
2458
6
  if (decoder->input_segment.rate > 0.0 && GST_BUFFER_IS_DISCONT (buf)
2459
6
      && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
2460
0
    ret = gst_video_decoder_drain_out (decoder, FALSE);
2461
2462
6
  if (priv->current_frame == NULL)
2463
6
    priv->current_frame = gst_video_decoder_new_frame (decoder);
2464
2465
6
  if (!priv->packetized)
2466
6
    gst_video_decoder_add_buffer_info (decoder, buf);
2467
2468
6
  priv->input_offset += gst_buffer_get_size (buf);
2469
2470
6
  if (GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DECODE_ONLY))
2471
0
    GST_VIDEO_CODEC_FRAME_SET_DECODE_ONLY (priv->current_frame);
2472
2473
6
  if (priv->packetized) {
2474
0
    GstVideoCodecFrame *frame;
2475
0
    gboolean was_keyframe = FALSE;
2476
2477
0
    frame = priv->current_frame;
2478
2479
0
    if (gst_video_decoder_get_subframe_mode (decoder)) {
2480
0
      frame->abidata.ABI.num_subframes++;
2481
      /* End the frame if the marker flag is set */
2482
0
      if (!GST_BUFFER_FLAG_IS_SET (buf, GST_VIDEO_BUFFER_FLAG_MARKER)
2483
0
          && (decoder->input_segment.rate > 0.0))
2484
0
        priv->current_frame = gst_video_codec_frame_ref (frame);
2485
0
      else
2486
0
        priv->current_frame = NULL;
2487
0
    } else {
2488
0
      priv->current_frame = frame;
2489
0
    }
2490
2491
0
    if (!GST_BUFFER_FLAG_IS_SET (buf, GST_BUFFER_FLAG_DELTA_UNIT)) {
2492
0
      was_keyframe = TRUE;
2493
0
      GST_DEBUG_OBJECT (decoder, "Marking current_frame as sync point");
2494
0
      GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (frame);
2495
0
    }
2496
2497
0
    gst_video_decoder_replace_input_buffer (decoder, frame, &buf);
2498
2499
0
    if (decoder->input_segment.rate < 0.0) {
2500
0
      priv->parse_gather = g_list_prepend (priv->parse_gather, frame);
2501
0
      priv->current_frame = NULL;
2502
0
    } else {
2503
0
      ret = gst_video_decoder_decode_frame (decoder, frame);
2504
0
      if (!gst_video_decoder_get_subframe_mode (decoder))
2505
0
        priv->current_frame = NULL;
2506
0
    }
2507
    /* If in trick mode and it was a keyframe, drain decoder to avoid extra
2508
     * latency. Only do this for forwards playback as reverse playback handles
2509
     * draining on keyframes in flush_parse(), and would otherwise call back
2510
     * from drain_out() to here causing an infinite loop.
2511
     * Also this function is only called for reverse playback to gather frames
2512
     * GOP by GOP, and does not do any actual decoding. That would be done by
2513
     * flush_decode() */
2514
0
    if (ret == GST_FLOW_OK && was_keyframe && decoder->input_segment.rate > 0.0
2515
0
        && (decoder->input_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS))
2516
0
      ret = gst_video_decoder_drain_out (decoder, FALSE);
2517
6
  } else {
2518
6
    gst_adapter_push (priv->input_adapter, buf);
2519
2520
6
    ret = gst_video_decoder_parse_available (decoder, at_eos, TRUE);
2521
6
  }
2522
2523
6
  if (ret == GST_VIDEO_DECODER_FLOW_NEED_DATA)
2524
0
    return GST_FLOW_OK;
2525
2526
6
  return ret;
2527
6
}
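In the packetized subframe path above, the current frame keeps accumulating input buffers until one carries the marker flag (or playback is not forward), at which point the frame is complete. That completion condition can be stated as a tiny predicate; `subframe_ends_frame` is an illustrative name, not GStreamer API:

```c
/* A subframe batch ends when the buffer carries the marker flag
 * (GST_VIDEO_BUFFER_FLAG_MARKER in the real code) or when the input
 * segment rate is not forward. */
static int
subframe_ends_frame (int has_marker_flag, double segment_rate)
{
  return has_marker_flag || segment_rate <= 0.0;
}
```

This is the inverse of the condition under which the code re-refs the frame as `priv->current_frame` to collect further subframes.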

static GstFlowReturn
gst_video_decoder_flush_decode (GstVideoDecoder * dec)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GList *walk;
  GstVideoCodecFrame *current_frame = NULL;
  gboolean last_subframe;
  GST_DEBUG_OBJECT (dec, "flushing buffers to decode");

  walk = priv->decode;
  while (walk) {
    GList *next;
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    last_subframe = TRUE;
    /* In subframe mode, we need to get rid of the intermediary frames
     * created during the buffer gather stage. That is why we keep a current
     * frame as the main frame and drop all subsequent frames until the end
     * of the subframes batch. */
    if (gst_video_decoder_get_subframe_mode (dec)) {
      if (current_frame == NULL) {
        current_frame = gst_video_codec_frame_ref (frame);
      } else {
        if (current_frame->input_buffer) {
          gst_video_decoder_copy_metas (dec, current_frame,
              current_frame->input_buffer, current_frame->output_buffer);
          gst_buffer_unref (current_frame->input_buffer);
        }
        current_frame->input_buffer = gst_buffer_ref (frame->input_buffer);
        gst_video_codec_frame_unref (frame);
      }
      last_subframe = GST_BUFFER_FLAG_IS_SET (current_frame->input_buffer,
          GST_VIDEO_BUFFER_FLAG_MARKER);
    } else {
      current_frame = frame;
    }

    GST_DEBUG_OBJECT (dec, "decoding frame %p buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT, current_frame, current_frame->input_buffer,
        GST_TIME_ARGS (GST_BUFFER_PTS (current_frame->input_buffer)),
        GST_TIME_ARGS (GST_BUFFER_DTS (current_frame->input_buffer)));

    next = walk->next;

    priv->decode = g_list_delete_link (priv->decode, walk);

    /* decode buffer, resulting data prepended to queue */
    res = gst_video_decoder_decode_frame (dec, current_frame);
    if (res != GST_FLOW_OK)
      break;
    if (!gst_video_decoder_get_subframe_mode (dec)
        || last_subframe)
      current_frame = NULL;
    walk = next;
  }

  return res;
}

/* gst_video_decoder_flush_parse() is called from chain_reverse() when a
 * buffer carrying a DISCONT arrives - indicating that reverse playback
 * looped back to the next data block, and therefore all available data
 * should be fed through the decoder and the resulting frames gathered
 * for reversed output.
 */
static GstFlowReturn
gst_video_decoder_flush_parse (GstVideoDecoder * dec, gboolean at_eos)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn res = GST_FLOW_OK;
  GList *walk;
  GstVideoDecoderClass *decoder_class;

  decoder_class = GST_VIDEO_DECODER_GET_CLASS (dec);

  GST_DEBUG_OBJECT (dec, "flushing buffers to parsing");

  /* Reverse the gather list, and prepend it to the parse list,
   * then flush to parse whatever we can */
  priv->gather = g_list_reverse (priv->gather);
  priv->parse = g_list_concat (priv->gather, priv->parse);
  priv->gather = NULL;

  /* clear buffer and decoder state */
  gst_video_decoder_flush (dec, FALSE);

  walk = priv->parse;
  while (walk) {
    GstBuffer *buf = GST_BUFFER_CAST (walk->data);
    GList *next = walk->next;

    GST_DEBUG_OBJECT (dec, "parsing buffer %p, PTS %" GST_TIME_FORMAT
        ", DTS %" GST_TIME_FORMAT " flags %x", buf,
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)), GST_BUFFER_FLAGS (buf));

    /* parse buffer, resulting frames prepended to parse_gather queue */
    gst_buffer_ref (buf);
    res = gst_video_decoder_chain_forward (dec, buf, at_eos);

    /* if we generated output, we can discard the buffer, else we
     * keep it in the queue */
    if (priv->parse_gather) {
      GST_DEBUG_OBJECT (dec, "parsed buffer to %p", priv->parse_gather->data);
      priv->parse = g_list_delete_link (priv->parse, walk);
      gst_buffer_unref (buf);
    } else {
      GST_DEBUG_OBJECT (dec, "buffer did not decode, keeping");
    }
    walk = next;
  }

  walk = priv->parse_gather;
  while (walk) {
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);
    GList *walk2;

    /* this is reverse playback, check if we need to apply some segment
     * to the output before decoding, as during decoding the segment.rate
     * must be used to determine if a buffer should be pushed or added to
     * the output list for reverse pushing.
     *
     * The new segment is not immediately pushed here because we must
     * wait for negotiation to happen before it can be pushed, to avoid
     * pushing a segment before the caps event. Negotiation only happens
     * when finish_frame is called.
     */
    for (walk2 = frame->events; walk2;) {
      GList *cur = walk2;
      GstEvent *event = walk2->data;

      walk2 = g_list_next (walk2);
      if (GST_EVENT_TYPE (event) <= GST_EVENT_SEGMENT) {

        if (GST_EVENT_TYPE (event) == GST_EVENT_SEGMENT) {
          GstSegment segment;

          GST_DEBUG_OBJECT (dec, "Segment at frame %p %" GST_TIME_FORMAT,
              frame, GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)));
          gst_event_copy_segment (event, &segment);
          if (segment.format == GST_FORMAT_TIME) {
            dec->output_segment = segment;
            dec->priv->in_out_segment_sync =
                gst_segment_is_equal (&dec->input_segment, &segment);
          }
        }
        dec->priv->pending_events =
            g_list_append (dec->priv->pending_events, event);
        frame->events = g_list_delete_link (frame->events, cur);
      }
    }

    walk = walk->next;
  }

  /* now we can process frames. Start by moving each frame from the
   * parse_gather list to the decode list, reversing the order as we go,
   * and stopping when/if we copy a keyframe. */
  GST_DEBUG_OBJECT (dec, "checking parsed frames for a keyframe to decode");
  walk = priv->parse_gather;
  while (walk) {
    GstVideoCodecFrame *frame = (GstVideoCodecFrame *) (walk->data);

    /* remove from the gather list */
    priv->parse_gather = g_list_remove_link (priv->parse_gather, walk);

    /* move it to the front of the decode queue */
    priv->decode = g_list_concat (walk, priv->decode);

    /* if we copied a keyframe, flush and decode the decode queue */
    if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
      GST_DEBUG_OBJECT (dec, "found keyframe %p with PTS %" GST_TIME_FORMAT
          ", DTS %" GST_TIME_FORMAT, frame,
          GST_TIME_ARGS (GST_BUFFER_PTS (frame->input_buffer)),
          GST_TIME_ARGS (GST_BUFFER_DTS (frame->input_buffer)));
      res = gst_video_decoder_flush_decode (dec);
      if (res != GST_FLOW_OK)
        goto done;

      /* We need to tell the subclass to drain now.
       * We prefer the drain vfunc, but for backward compatibility
       * we fall back to the finish() vfunc if drain isn't implemented. */
      if (decoder_class->drain) {
        GST_DEBUG_OBJECT (dec, "Draining");
        res = decoder_class->drain (dec);
      } else if (decoder_class->finish) {
        GST_FIXME_OBJECT (dec, "Sub-class should implement drain(). "
            "Calling finish() for backwards-compat");
        res = decoder_class->finish (dec);
      }

      if (res != GST_FLOW_OK)
        goto done;

      /* now send queued data downstream */
      walk = priv->output_queued;
      while (walk) {
        GstBuffer *buf = GST_BUFFER_CAST (walk->data);

        priv->output_queued =
            g_list_delete_link (priv->output_queued, priv->output_queued);

        if (G_LIKELY (res == GST_FLOW_OK)) {
          /* avoid stray DISCONT from forward processing,
           * which has no meaning in reverse pushing */
          GST_BUFFER_FLAG_UNSET (buf, GST_BUFFER_FLAG_DISCONT);

          /* Last chance to calculate a timestamp as we loop backwards
           * through the list */
          if (GST_BUFFER_TIMESTAMP (buf) != GST_CLOCK_TIME_NONE)
            priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
          else if (priv->last_timestamp_out != GST_CLOCK_TIME_NONE &&
              GST_BUFFER_DURATION (buf) != GST_CLOCK_TIME_NONE) {
            GST_BUFFER_TIMESTAMP (buf) =
                priv->last_timestamp_out - GST_BUFFER_DURATION (buf);
            priv->last_timestamp_out = GST_BUFFER_TIMESTAMP (buf);
            GST_LOG_OBJECT (dec,
                "Calculated TS %" GST_TIME_FORMAT " working backwards",
                GST_TIME_ARGS (priv->last_timestamp_out));
          }

          res = gst_video_decoder_clip_and_push_buf (dec, buf);
        } else {
          gst_buffer_unref (buf);
        }

        walk = priv->output_queued;
      }

      /* clear buffer and decoder state again
       * before moving to the previous keyframe */
      gst_video_decoder_flush (dec, FALSE);
    }

    walk = priv->parse_gather;
  }

done:
  return res;
}

static GstFlowReturn
gst_video_decoder_chain_reverse (GstVideoDecoder * dec, GstBuffer * buf)
{
  GstVideoDecoderPrivate *priv = dec->priv;
  GstFlowReturn result = GST_FLOW_OK;

  /* if we have a discont, move buffers to the decode list */
  if (!buf || GST_BUFFER_IS_DISCONT (buf)) {
    GST_DEBUG_OBJECT (dec, "received discont");

    /* parse and decode stuff in the gather and parse queues */
    result = gst_video_decoder_flush_parse (dec, FALSE);
  }

  if (G_LIKELY (buf)) {
    GST_DEBUG_OBJECT (dec, "gathering buffer %p of size %" G_GSIZE_FORMAT ", "
        "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
        GST_TIME_FORMAT, buf, gst_buffer_get_size (buf),
        GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
        GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));

    /* add buffer to gather queue */
    priv->gather = g_list_prepend (priv->gather, buf);
  }

  return result;
}

static GstFlowReturn
gst_video_decoder_chain (GstPad * pad, GstObject * parent, GstBuffer * buf)
{
  GstVideoDecoder *decoder;
  GstFlowReturn ret = GST_FLOW_OK;

  decoder = GST_VIDEO_DECODER (parent);

  if (G_UNLIKELY (!decoder->priv->input_state && decoder->priv->needs_format))
    goto not_negotiated;

  GST_LOG_OBJECT (decoder,
      "chain PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT " duration %"
      GST_TIME_FORMAT " size %" G_GSIZE_FORMAT " flags %x",
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DTS (buf)),
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)),
      gst_buffer_get_size (buf), GST_BUFFER_FLAGS (buf));

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  /* NOTE:
   * requiring the pad to be negotiated makes it impossible to use
   * oggdemux or filesrc ! decoder */

  if (decoder->input_segment.format == GST_FORMAT_UNDEFINED) {
    GstEvent *event;
    GstSegment *segment = &decoder->input_segment;

    GST_WARNING_OBJECT (decoder,
        "Received buffer without a new-segment. "
        "Assuming timestamps start from 0.");

    gst_segment_init (segment, GST_FORMAT_TIME);

    event = gst_event_new_segment (segment);

    decoder->priv->current_frame_events =
        g_list_prepend (decoder->priv->current_frame_events, event);
  }

  decoder->priv->had_input_data = TRUE;

  if (decoder->input_segment.rate > 0.0)
    ret = gst_video_decoder_chain_forward (decoder, buf, FALSE);
  else
    ret = gst_video_decoder_chain_reverse (decoder, buf);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  return ret;

  /* ERRORS */
not_negotiated:
  {
    GST_ELEMENT_ERROR (decoder, CORE, NEGOTIATION, (NULL),
        ("decoder not initialized"));
    gst_buffer_unref (buf);
    return GST_FLOW_NOT_NEGOTIATED;
  }
}

static GstStateChangeReturn
gst_video_decoder_change_state (GstElement * element, GstStateChange transition)
{
  GstVideoDecoder *decoder;
  GstVideoDecoderClass *decoder_class;
  GstStateChangeReturn ret;

  decoder = GST_VIDEO_DECODER (element);
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (element);

  switch (transition) {
    case GST_STATE_CHANGE_NULL_TO_READY:
      /* open device/library if needed */
      if (decoder_class->open && !decoder_class->open (decoder))
        goto open_failed;
      break;
    case GST_STATE_CHANGE_READY_TO_PAUSED:
      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

      /* Initialize device/library if needed */
      if (decoder_class->start && !decoder_class->start (decoder))
        goto start_failed;
      break;
    default:
      break;
  }

  ret = GST_ELEMENT_CLASS (parent_class)->change_state (element, transition);

  switch (transition) {
    case GST_STATE_CHANGE_PAUSED_TO_READY:{
      gboolean stopped = TRUE;

      if (decoder_class->stop)
        stopped = decoder_class->stop (decoder);

      GST_VIDEO_DECODER_STREAM_LOCK (decoder);
      gst_video_decoder_reset (decoder, TRUE, TRUE);
      GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

      if (!stopped)
        goto stop_failed;

      break;
    }
    case GST_STATE_CHANGE_READY_TO_NULL:
      /* close device/library if needed */
      if (decoder_class->close && !decoder_class->close (decoder))
        goto close_failed;
      break;
    default:
      break;
  }

  return ret;

  /* Errors */
open_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to open decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

start_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to start decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

stop_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to stop decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }

close_failed:
  {
    GST_ELEMENT_ERROR (decoder, LIBRARY, INIT, (NULL),
        ("Failed to close decoder"));
    return GST_STATE_CHANGE_FAILURE;
  }
}

static GstVideoCodecFrame *
gst_video_decoder_new_frame (GstVideoDecoder * decoder)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GstVideoCodecFrame *frame;

  frame = g_new0 (GstVideoCodecFrame, 1);

  frame->ref_count = 1;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  frame->system_frame_number = priv->system_frame_number;
  priv->system_frame_number++;
  frame->decode_frame_number = priv->decode_frame_number;
  priv->decode_frame_number++;

  frame->dts = GST_CLOCK_TIME_NONE;
  frame->pts = GST_CLOCK_TIME_NONE;
  frame->duration = GST_CLOCK_TIME_NONE;
  frame->events = priv->current_frame_events;
  priv->current_frame_events = NULL;

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  GST_LOG_OBJECT (decoder, "Created new frame %p (sfn:%d)",
      frame, frame->system_frame_number);

  return frame;
}

static void
gst_video_decoder_push_event_list (GstVideoDecoder * decoder, GList * events)
{
  GList *l;

  /* events are stored in reverse order */
  for (l = g_list_last (events); l; l = g_list_previous (l)) {
    GST_LOG_OBJECT (decoder, "pushing %s event", GST_EVENT_TYPE_NAME (l->data));
    gst_video_decoder_push_event (decoder, l->data);
  }
  g_list_free (events);
}

static void
gst_video_decoder_prepare_finish_frame (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, gboolean dropping)
{
  GstVideoDecoderPrivate *priv = decoder->priv;
  GList *l, *events = NULL;
  gboolean sync, frames_without_dts, frames_without_pts;
  GstClockTime min_dts, min_pts;
  GstVideoCodecFrame *earliest_dts_frame, *earliest_pts_frame;

#ifndef GST_DISABLE_GST_DEBUG
  GST_LOG_OBJECT (decoder, "n %d in %" G_GSIZE_FORMAT " out %" G_GSIZE_FORMAT,
      priv->frames.length,
      gst_adapter_available (priv->input_adapter),
      gst_adapter_available (priv->output_adapter));
#endif

  sync = GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame);

  GST_LOG_OBJECT (decoder,
      "finish frame %p (#%d)(sub=#%d) sync:%d PTS:%" GST_TIME_FORMAT " DTS:%"
      GST_TIME_FORMAT,
      frame, frame->system_frame_number, frame->abidata.ABI.num_subframes,
      sync, GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts));

  /* Push all pending events that arrived before this frame */
  for (l = priv->frames.head; l; l = l->next) {
    GstVideoCodecFrame *tmp = l->data;

    if (tmp->events) {
      events = g_list_concat (tmp->events, events);
      tmp->events = NULL;
    }

    if (tmp == frame)
      break;
  }

  if (dropping || !decoder->priv->output_state) {
    /* Push before the next frame that is not dropped */
    decoder->priv->pending_events =
        g_list_concat (events, decoder->priv->pending_events);
  } else {
    gst_video_decoder_push_event_list (decoder, decoder->priv->pending_events);
    decoder->priv->pending_events = NULL;

    gst_video_decoder_push_event_list (decoder, events);
  }

  /* Check if the data should not be displayed. For example an
   * altref/invisible frame in vp8. In that case we should not update the
   * timestamps. */
  if (GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame))
    return;

  /* If the frame is meant to be output but we don't have an output_buffer
   * we have a problem :) */
  if (G_UNLIKELY ((frame->output_buffer == NULL) && !dropping))
    goto no_output_buffer;

  /* If we were in reordered output, check if it was just temporary and we can
   * resume normal output */
  if (priv->reordered_output && GST_CLOCK_TIME_IS_VALID (frame->pts)) {
    if (frame->pts > priv->last_timestamp_out) {
      priv->consecutive_orderered_output += 1;
      /* FIXME : Make this tolerance a configurable property. */
      if (priv->consecutive_orderered_output >= 30) {
        GST_DEBUG_OBJECT (decoder,
            "Saw enough increasing timestamps from decoder, resuming normal timestamp handling");
        priv->reordered_output = FALSE;
        priv->consecutive_orderered_output = 0;
      }
    } else {
      priv->consecutive_orderered_output = 0;
    }
  }

  if (frame->duration == GST_CLOCK_TIME_NONE) {
    frame->duration = gst_video_decoder_get_frame_duration (decoder, frame);
    GST_LOG_OBJECT (decoder,
        "Guessing duration %" GST_TIME_FORMAT " for frame...",
        GST_TIME_ARGS (frame->duration));
  }

  /* The following code is meant to fix issues with PTS and DTS:
   * * Because the input PTS and/or DTS was mis-used (using DTS as PTS, or PTS
   *   as DTS)
   * * Because the input was missing PTS and/or DTS
   *
   * For that, we collect 3 important pieces of information from the frames
   * in flight:
   * * Whether all frames had a valid PTS or a valid DTS
   * * Which frame has the lowest PTS (and its value)
   * * Which frame has the lowest DTS (and its value)
   */
  frames_without_pts = frames_without_dts = FALSE;
  min_dts = min_pts = GST_CLOCK_TIME_NONE;
  earliest_pts_frame = earliest_dts_frame = NULL;

  /* Check what the earliest PTS and DTS in our pending frames are */
  for (l = priv->frames.head; l; l = l->next) {
    GstVideoCodecFrame *tmp = l->data;

    /* ABI.ts contains the DTS */
    if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts)) {
      frames_without_dts = TRUE;
    } else if (!GST_CLOCK_TIME_IS_VALID (min_dts)
        || tmp->abidata.ABI.ts < min_dts) {
      min_dts = tmp->abidata.ABI.ts;
      earliest_dts_frame = tmp;
    }

    /* ABI.ts2 contains the PTS */
    if (!GST_CLOCK_TIME_IS_VALID (tmp->abidata.ABI.ts2)) {
      frames_without_pts = TRUE;
    } else if (!GST_CLOCK_TIME_IS_VALID (min_pts)
        || tmp->abidata.ABI.ts2 < min_pts) {
      min_pts = tmp->abidata.ABI.ts2;
      earliest_pts_frame = tmp;
    }
  }
  /* save dts if needed */
  if (earliest_dts_frame && earliest_dts_frame != frame) {
    earliest_dts_frame->abidata.ABI.ts = frame->abidata.ABI.ts;
  }
  /* save pts if needed */
  if (earliest_pts_frame && earliest_pts_frame != frame) {
    earliest_pts_frame->abidata.ABI.ts2 = frame->abidata.ABI.ts2;
  }

  /* First attempt at recovering a missing PTS:
   * * If we figured out the PTS<->DTS delta (from a keyframe)
   * * AND all frames have a valid DTS (i.e. it is not sparsely timestamped
   *   input)
   * * AND we are not dealing with ordering issues
   *
   * We can figure out the PTS from the lowest DTS and the PTS<->DTS delta
   */
  if (!priv->reordered_output &&
      !GST_CLOCK_TIME_IS_VALID (frame->pts) && !frames_without_dts &&
      GST_CLOCK_TIME_IS_VALID (priv->pts_delta)) {
    frame->pts = min_dts + priv->pts_delta;
    GST_DEBUG_OBJECT (decoder,
        "no valid PTS, using oldest DTS %" GST_TIME_FORMAT,
        GST_TIME_ARGS (frame->pts));
  }

  /* if we detected reordered output, then the PTS values are void, however
   * they were obtained: bogus input, subclass, etc. */
  if (priv->reordered_output && !frames_without_pts) {
    GST_DEBUG_OBJECT (decoder, "invalidating PTS");
    frame->pts = GST_CLOCK_TIME_NONE;
  }

  /* If the frame doesn't have a PTS we can take the earliest PTS from our
   * pending frame list (only valid if all pending frames have a PTS) */
  if (!GST_CLOCK_TIME_IS_VALID (frame->pts) && !frames_without_pts) {
    frame->pts = min_pts;
    GST_DEBUG_OBJECT (decoder,
        "no valid PTS, using oldest PTS %" GST_TIME_FORMAT,
        GST_TIME_ARGS (frame->pts));
  }

  if (frame->pts == GST_CLOCK_TIME_NONE) {
    /* Last ditch timestamp guess: Just add the duration to the previous
     * frame. If it's the first frame, just use the segment start. */
    if (frame->duration != GST_CLOCK_TIME_NONE) {
      if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out))
        frame->pts = priv->last_timestamp_out + frame->duration;
      else if (frame->dts != GST_CLOCK_TIME_NONE) {
        frame->pts = frame->dts;
        GST_LOG_OBJECT (decoder,
            "Setting DTS as PTS %" GST_TIME_FORMAT " for frame...",
            GST_TIME_ARGS (frame->pts));
      } else if (decoder->output_segment.rate > 0.0)
        frame->pts = decoder->output_segment.start;
      GST_INFO_OBJECT (decoder,
          "Guessing PTS=%" GST_TIME_FORMAT " for frame... DTS=%"
          GST_TIME_FORMAT, GST_TIME_ARGS (frame->pts),
          GST_TIME_ARGS (frame->dts));
    } else if (sync && frame->dts != GST_CLOCK_TIME_NONE) {
      frame->pts = frame->dts;
      GST_LOG_OBJECT (decoder,
          "Setting DTS as PTS %" GST_TIME_FORMAT " for frame...",
          GST_TIME_ARGS (frame->pts));
    }
  }

  if (GST_CLOCK_TIME_IS_VALID (priv->last_timestamp_out) &&
      frame->pts < priv->last_timestamp_out) {
    GST_WARNING_OBJECT (decoder,
        "decreasing timestamp (%" GST_TIME_FORMAT " < %" GST_TIME_FORMAT ")",
        GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (priv->last_timestamp_out));
    priv->reordered_output = TRUE;
    /* make it a bit less weird downstream */
    frame->pts = priv->last_timestamp_out;
  }

  if (GST_CLOCK_TIME_IS_VALID (frame->pts))
    priv->last_timestamp_out = frame->pts;

  return;

  /* ERRORS */
no_output_buffer:
  {
    GST_ERROR_OBJECT (decoder, "No buffer to output !");
  }
}

/**
 * gst_video_decoder_release_frame:
 * @dec: a #GstVideoDecoder
 * @frame: (transfer full): the #GstVideoCodecFrame to release
 *
 * Similar to gst_video_decoder_drop_frame(), but simply releases @frame
 * without any processing other than removing it from the list of pending
 * frames, after which it is considered finished and released.
 *
 * Since: 1.2.2
 */
void
gst_video_decoder_release_frame (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame)
{
  GList *link;

  /* unref once from the list */
  GST_VIDEO_DECODER_STREAM_LOCK (dec);
  link = g_queue_find (&dec->priv->frames, frame);
  if (link) {
    gst_video_codec_frame_unref (frame);
    g_queue_delete_link (&dec->priv->frames, link);
  }
  if (frame->events) {
    dec->priv->pending_events =
        g_list_concat (frame->events, dec->priv->pending_events);
    frame->events = NULL;
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);

  /* unref because this function takes ownership */
  gst_video_codec_frame_unref (frame);
}
3236
3237
/* called with STREAM_LOCK */
3238
static void
3239
gst_video_decoder_post_qos_drop (GstVideoDecoder * dec, GstClockTime timestamp)
3240
3
{
3241
3
  GstClockTime stream_time, jitter, earliest_time, qostime;
3242
3
  GstSegment *segment;
3243
3
  GstMessage *qos_msg;
3244
3
  gdouble proportion;
3245
3
  dec->priv->dropped++;
3246
3247
  /* post QoS message */
3248
3
  GST_OBJECT_LOCK (dec);
3249
3
  proportion = dec->priv->proportion;
3250
3
  earliest_time = dec->priv->earliest_time;
3251
3
  GST_OBJECT_UNLOCK (dec);
3252
3253
3
  segment = &dec->output_segment;
3254
3
  if (G_UNLIKELY (segment->format == GST_FORMAT_UNDEFINED))
3255
3
    segment = &dec->input_segment;
3256
3
  stream_time =
3257
3
      gst_segment_to_stream_time (segment, GST_FORMAT_TIME, timestamp);
3258
3
  qostime = gst_segment_to_running_time (segment, GST_FORMAT_TIME, timestamp);
3259
3
  jitter = GST_CLOCK_DIFF (qostime, earliest_time);
3260
3
  qos_msg =
3261
3
      gst_message_new_qos (GST_OBJECT_CAST (dec), FALSE, qostime, stream_time,
3262
3
      timestamp, GST_CLOCK_TIME_NONE);
3263
3
  gst_message_set_qos_values (qos_msg, jitter, proportion, 1000000);
3264
3
  gst_message_set_qos_stats (qos_msg, GST_FORMAT_BUFFERS,
3265
3
      dec->priv->processed, dec->priv->dropped);
3266
3
  gst_element_post_message (GST_ELEMENT_CAST (dec), qos_msg);
3267
3
}

/**
 * gst_video_decoder_drop_frame:
 * @dec: a #GstVideoDecoder
 * @frame: (transfer full): the #GstVideoCodecFrame to drop
 *
 * Similar to gst_video_decoder_finish_frame(), but drops @frame in any
 * case and posts a QoS message with the frame's details on the bus.
 * In any case, the frame is considered finished and released.
 *
 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
 */
GstFlowReturn
gst_video_decoder_drop_frame (GstVideoDecoder * dec, GstVideoCodecFrame * frame)
{
  GST_LOG_OBJECT (dec, "drop frame %p", frame);

  if (gst_video_decoder_get_subframe_mode (dec))
    GST_DEBUG_OBJECT (dec, "Drop subframe %d. Must be the last one.",
        frame->abidata.ABI.num_subframes);

  GST_VIDEO_DECODER_STREAM_LOCK (dec);

  gst_video_decoder_prepare_finish_frame (dec, frame, TRUE);

  GST_DEBUG_OBJECT (dec, "dropping frame %" GST_TIME_FORMAT,
      GST_TIME_ARGS (frame->pts));

  gst_video_decoder_post_qos_drop (dec, frame->pts);

  /* now free the frame */
  gst_video_decoder_release_frame (dec, frame);

  /* store that we have valid decoded data */
  dec->priv->had_output_data = TRUE;

  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);

  return GST_FLOW_OK;
}
3308
3309
/**
3310
 * gst_video_decoder_drop_subframe:
3311
 * @dec: a #GstVideoDecoder
3312
 * @frame: (transfer full): the #GstVideoCodecFrame
3313
 *
3314
 * Drops input data.
3315
 * The frame is not considered finished until the whole frame
3316
 * is finished or dropped by the subclass.
3317
 *
3318
 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
3319
 *
3320
 * Since: 1.20
3321
 */
3322
GstFlowReturn
3323
gst_video_decoder_drop_subframe (GstVideoDecoder * dec,
3324
    GstVideoCodecFrame * frame)
3325
0
{
3326
0
  g_return_val_if_fail (gst_video_decoder_get_subframe_mode (dec),
3327
0
      GST_FLOW_NOT_SUPPORTED);
3328
3329
0
  GST_LOG_OBJECT (dec, "drop subframe %p num=%d", frame->input_buffer,
3330
0
      gst_video_decoder_get_input_subframe_index (dec, frame));
3331
3332
0
  GST_VIDEO_DECODER_STREAM_LOCK (dec);
3333
3334
0
  gst_video_codec_frame_unref (frame);
3335
3336
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (dec);
3337
3338
0
  return GST_FLOW_OK;
3339
0
}
3340
3341
static gboolean
3342
gst_video_decoder_transform_meta_default (GstVideoDecoder *
3343
    decoder, GstVideoCodecFrame * frame, GstMeta * meta)
3344
0
{
3345
0
  const GstMetaInfo *info = meta->info;
3346
0
  const gchar *const *tags;
3347
0
  const gchar *const supported_tags[] = {
3348
0
    GST_META_TAG_VIDEO_STR,
3349
0
    GST_META_TAG_VIDEO_ORIENTATION_STR,
3350
0
    GST_META_TAG_VIDEO_SIZE_STR,
3351
0
    NULL,
3352
0
  };
3353
3354
0
  tags = gst_meta_api_type_get_tags (info->api);
3355
3356
0
  if (!tags)
3357
0
    return TRUE;
3358
3359
0
  while (*tags) {
3360
0
    if (!g_strv_contains (supported_tags, *tags))
3361
0
      return FALSE;
3362
0
    tags++;
3363
0
  }
3364
3365
0
  return TRUE;
3366
0
}
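The default transform_meta policy above copies a metadata item only when every tag on its API type appears in a known-safe list, and always copies untagged metadata. A minimal standalone sketch of that check, assuming plain C string arrays in place of GStreamer's tag registry (the helper names `tag_supported` and `sketch_can_copy_meta` are hypothetical; the real code uses gst_meta_api_type_get_tags() and g_strv_contains()):

```c
#include <stdbool.h>
#include <string.h>

/* Returns true if tag is one of the NULL-terminated supported tags. */
static bool
tag_supported (const char *const *supported, const char *tag)
{
  for (; *supported; supported++)
    if (strcmp (*supported, tag) == 0)
      return true;
  return false;
}

/* Mirror of the default policy: copy only if every tag is supported;
 * metadata with no tags at all is considered safe to copy. */
static bool
sketch_can_copy_meta (const char *const *supported, const char *const *tags)
{
  if (!tags)
    return true;
  for (; *tags; tags++)
    if (!tag_supported (supported, *tags))
      return false;
  return true;
}
```

Subclasses that attach their own metadata can override transform_meta with the same shape of check against their own tag list.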
3367
3368
typedef struct
3369
{
3370
  GstVideoDecoder *decoder;
3371
  GstVideoCodecFrame *frame;
3372
  GstBuffer *buffer;
3373
} CopyMetaData;
3374
3375
static gboolean
3376
foreach_metadata (GstBuffer * inbuf, GstMeta ** meta, gpointer user_data)
3377
0
{
3378
0
  CopyMetaData *data = user_data;
3379
0
  GstVideoDecoder *decoder = data->decoder;
3380
0
  GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
3381
0
  GstVideoCodecFrame *frame = data->frame;
3382
0
  GstBuffer *buffer = data->buffer;
3383
0
  const GstMetaInfo *info = (*meta)->info;
3384
0
  gboolean do_copy = FALSE;
3385
3386
0
  if (gst_meta_api_type_has_tag (info->api, _gst_meta_tag_memory)
3387
0
      || gst_meta_api_type_has_tag (info->api, _gst_meta_tag_memory_reference)) {
3388
    /* never call the transform_meta with memory specific metadata */
3389
0
    GST_DEBUG_OBJECT (decoder, "not copying memory specific metadata %s",
3390
0
        g_type_name (info->api));
3391
0
    do_copy = FALSE;
3392
0
  } else if (klass->transform_meta) {
3393
0
    do_copy = klass->transform_meta (decoder, frame, *meta);
3394
0
    GST_DEBUG_OBJECT (decoder, "transformed metadata %s: copy: %d",
3395
0
        g_type_name (info->api), do_copy);
3396
0
  }
3397
3398
  /* we only copy metadata when the subclass implemented a transform_meta
3399
   * function and when it returns %TRUE */
3400
0
  if (do_copy && info->transform_func) {
3401
0
    GstMetaTransformCopy copy_data = { FALSE, 0, -1 };
3402
0
    GST_DEBUG_OBJECT (decoder, "copy metadata %s", g_type_name (info->api));
3403
    /* simply copy then */
3404
3405
0
    info->transform_func (buffer, *meta, inbuf, _gst_meta_transform_copy,
3406
0
        &copy_data);
3407
0
  }
3408
0
  return TRUE;
3409
0
}
3410
3411
static void
3412
gst_video_decoder_copy_metas (GstVideoDecoder * decoder,
3413
    GstVideoCodecFrame * frame, GstBuffer * src_buffer, GstBuffer * dest_buffer)
3414
0
{
3415
0
  GstVideoDecoderClass *decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
3416
3417
0
  if (decoder_class->transform_meta) {
3418
0
    if (G_LIKELY (frame)) {
3419
0
      CopyMetaData data;
3420
3421
0
      data.decoder = decoder;
3422
0
      data.frame = frame;
3423
0
      data.buffer = dest_buffer;
3424
0
      gst_buffer_foreach_meta (src_buffer, foreach_metadata, &data);
3425
0
    } else {
3426
0
      GST_WARNING_OBJECT (decoder,
3427
0
          "Can't copy metadata because input frame disappeared");
3428
0
    }
3429
0
  }
3430
0
}
3431
3432
static void
3433
gst_video_decoder_replace_input_buffer (GstVideoDecoder * decoder,
3434
    GstVideoCodecFrame * frame, GstBuffer ** dest_buffer)
3435
6
{
3436
6
  if (frame->input_buffer) {
3437
0
    *dest_buffer = gst_buffer_make_writable (*dest_buffer);
3438
0
    gst_video_decoder_copy_metas (decoder, frame, frame->input_buffer,
3439
0
        *dest_buffer);
3440
0
    gst_buffer_unref (frame->input_buffer);
3441
0
  }
3442
3443
6
  frame->input_buffer = *dest_buffer;
3444
6
}
3445
3446
/**
3447
 * gst_video_decoder_finish_frame:
3448
 * @decoder: a #GstVideoDecoder
3449
 * @frame: (transfer full): a decoded #GstVideoCodecFrame
3450
 *
3451
 * @frame should have a valid decoded data buffer, whose metadata fields
3452
 * are then set from the frame data before the buffer is pushed downstream.
3453
 * If no output data is provided, @frame is considered skipped.
3454
 * In any case, the frame is considered finished and released.
3455
 *
3456
 * After calling this function the output buffer of the frame is to be
3457
 * considered read-only. This function will also change the metadata
3458
 * of the buffer.
3459
 *
3460
 * Returns: a #GstFlowReturn resulting from sending data downstream
3461
 */
3462
GstFlowReturn
3463
gst_video_decoder_finish_frame (GstVideoDecoder * decoder,
3464
    GstVideoCodecFrame * frame)
3465
0
{
3466
0
  GstFlowReturn ret = GST_FLOW_OK;
3467
0
  GstVideoDecoderPrivate *priv = decoder->priv;
3468
0
  GstBuffer *output_buffer;
3469
0
  gboolean needs_reconfigure = FALSE;
3470
3471
0
  GST_LOG_OBJECT (decoder, "finish frame %p", frame);
3472
3473
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3474
3475
0
  needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
3476
0
  if (G_UNLIKELY (priv->output_state_changed || (priv->output_state
3477
0
              && needs_reconfigure))) {
3478
0
    if (!gst_video_decoder_negotiate_unlocked (decoder)) {
3479
0
      gst_pad_mark_reconfigure (decoder->srcpad);
3480
0
      if (GST_PAD_IS_FLUSHING (decoder->srcpad))
3481
0
        ret = GST_FLOW_FLUSHING;
3482
0
      else
3483
0
        ret = GST_FLOW_NOT_NEGOTIATED;
3484
0
      goto done;
3485
0
    }
3486
0
  }
3487
3488
0
  gst_video_decoder_prepare_finish_frame (decoder, frame, FALSE);
3489
0
  priv->processed++;
3490
3491
0
  if (priv->tags_changed) {
3492
0
    GstEvent *tags_event;
3493
3494
0
    tags_event = gst_video_decoder_create_merged_tags_event (decoder);
3495
3496
0
    if (tags_event != NULL)
3497
0
      gst_video_decoder_push_event (decoder, tags_event);
3498
3499
0
    priv->tags_changed = FALSE;
3500
0
  }
3501
3502
  /* no buffer data means this frame is skipped */
3503
0
  if (!frame->output_buffer || GST_VIDEO_CODEC_FRAME_IS_DECODE_ONLY (frame)) {
3504
0
    GST_DEBUG_OBJECT (decoder,
3505
0
        "skipping frame %" GST_TIME_FORMAT " because no output was produced",
3506
0
        GST_TIME_ARGS (frame->pts));
3507
0
    goto done;
3508
0
  }
3509
3510
  /* Mark output as corrupted if the subclass requested so and we're either
3511
   * still before the sync point after the request, or we don't even know the
3512
   * frame number of the sync point yet (it is 0) */
3513
0
  GST_OBJECT_LOCK (decoder);
3514
0
  if (frame->system_frame_number <= priv->request_sync_point_frame_number
3515
0
      && priv->request_sync_point_frame_number != REQUEST_SYNC_POINT_UNSET) {
3516
0
    if (priv->request_sync_point_flags &
3517
0
        GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT) {
3518
0
      GST_DEBUG_OBJECT (decoder,
3519
0
          "marking frame %" GST_TIME_FORMAT
3520
0
          " as corrupted because it is still before the sync point",
3521
0
          GST_TIME_ARGS (frame->pts));
3522
0
      GST_VIDEO_CODEC_FRAME_FLAG_SET (frame,
3523
0
          GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED);
3524
0
    }
3525
0
  } else {
3526
    /* Reset to -1 to mark it as unset now that we've reached the frame */
3527
0
    priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_UNSET;
3528
0
  }
3529
0
  GST_OBJECT_UNLOCK (decoder);
3530
3531
0
  if (priv->discard_corrupted_frames
3532
0
      && (GST_VIDEO_CODEC_FRAME_FLAG_IS_SET (frame,
3533
0
              GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED)
3534
0
          || GST_BUFFER_FLAG_IS_SET (frame->output_buffer,
3535
0
              GST_BUFFER_FLAG_CORRUPTED))) {
3536
0
    GST_DEBUG_OBJECT (decoder,
3537
0
        "skipping frame %" GST_TIME_FORMAT " because it is corrupted",
3538
0
        GST_TIME_ARGS (frame->pts));
3539
0
    goto done;
3540
0
  }
3541
3542
  /* We need a writable buffer for the metadata changes below */
3543
0
  output_buffer = frame->output_buffer =
3544
0
      gst_buffer_make_writable (frame->output_buffer);
3545
3546
0
  GST_BUFFER_FLAG_UNSET (output_buffer, GST_BUFFER_FLAG_DELTA_UNIT);
3547
3548
0
  GST_BUFFER_PTS (output_buffer) = frame->pts;
3549
0
  GST_BUFFER_DTS (output_buffer) = GST_CLOCK_TIME_NONE;
3550
0
  GST_BUFFER_DURATION (output_buffer) = frame->duration;
3551
3552
0
  GST_BUFFER_OFFSET (output_buffer) = GST_BUFFER_OFFSET_NONE;
3553
0
  GST_BUFFER_OFFSET_END (output_buffer) = GST_BUFFER_OFFSET_NONE;
3554
3555
0
  if (priv->discont) {
3556
0
    GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_DISCONT);
3557
0
  }
3558
3559
0
  if (GST_VIDEO_CODEC_FRAME_FLAG_IS_SET (frame,
3560
0
          GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED)) {
3561
0
    GST_DEBUG_OBJECT (decoder,
3562
0
        "marking frame %" GST_TIME_FORMAT " as corrupted",
3563
0
        GST_TIME_ARGS (frame->pts));
3564
0
    GST_BUFFER_FLAG_SET (output_buffer, GST_BUFFER_FLAG_CORRUPTED);
3565
0
  }
3566
3567
0
  gst_video_decoder_copy_metas (decoder, frame, frame->input_buffer,
3568
0
      frame->output_buffer);
3569
3570
  /* Get an additional ref to the buffer, which is going to be pushed
3571
   * downstream, the original ref is owned by the frame
3572
   */
3573
0
  output_buffer = gst_buffer_ref (output_buffer);
3574
3575
  /* Release frame so the buffer is writable when we push it downstream
3576
   * if possible, i.e. if the subclass does not hold additional references
3577
   * to the frame
3578
   */
3579
0
  gst_video_decoder_release_frame (decoder, frame);
3580
0
  frame = NULL;
3581
3582
0
  if (decoder->output_segment.rate < 0.0
3583
0
      && !(decoder->output_segment.flags & GST_SEEK_FLAG_TRICKMODE_KEY_UNITS)) {
3584
0
    GST_LOG_OBJECT (decoder, "queued frame");
3585
0
    priv->output_queued = g_list_prepend (priv->output_queued, output_buffer);
3586
0
  } else {
3587
0
    ret = gst_video_decoder_clip_and_push_buf (decoder, output_buffer);
3588
0
  }
3589
3590
0
done:
3591
0
  if (frame)
3592
0
    gst_video_decoder_release_frame (decoder, frame);
3593
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3594
0
  return ret;
3595
0
}
3596
3597
/**
3598
 * gst_video_decoder_finish_subframe:
3599
 * @decoder: a #GstVideoDecoder
3600
 * @frame: (transfer full): the #GstVideoCodecFrame
3601
 *
3602
 * Indicates that the subclass has finished decoding a subframe.
3603
 * This method should be called for all subframes except the last
3604
 * subframe, for which @gst_video_decoder_finish_frame should be
3605
 * called instead.
3606
 *
3607
 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
3608
 *
3609
 * Since: 1.20
3610
 */
3611
GstFlowReturn
3612
gst_video_decoder_finish_subframe (GstVideoDecoder * decoder,
3613
    GstVideoCodecFrame * frame)
3614
0
{
3615
0
  g_return_val_if_fail (gst_video_decoder_get_subframe_mode (decoder),
3616
0
      GST_FLOW_NOT_SUPPORTED);
3617
3618
0
  GST_LOG_OBJECT (decoder, "finish subframe %p num=%d", frame->input_buffer,
3619
0
      gst_video_decoder_get_input_subframe_index (decoder, frame));
3620
3621
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3622
0
  frame->abidata.ABI.subframes_processed++;
3623
0
  gst_video_codec_frame_unref (frame);
3624
3625
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3626
3627
0
  return GST_FLOW_OK;
3628
0
}
3629
3630
/* With stream lock, takes the frame reference */
3631
static GstFlowReturn
3632
gst_video_decoder_clip_and_push_buf (GstVideoDecoder * decoder, GstBuffer * buf)
3633
0
{
3634
0
  GstFlowReturn ret = GST_FLOW_OK;
3635
0
  GstVideoDecoderPrivate *priv = decoder->priv;
3636
0
  guint64 start, stop;
3637
0
  guint64 cstart, cstop;
3638
0
  GstSegment *segment;
3639
0
  GstClockTime duration;
3640
3641
  /* Check for clipping */
3642
0
  start = GST_BUFFER_PTS (buf);
3643
0
  duration = GST_BUFFER_DURATION (buf);
3644
3645
  /* store that we have valid decoded data */
3646
0
  priv->had_output_data = TRUE;
3647
3648
0
  stop = GST_CLOCK_TIME_NONE;
3649
3650
0
  if (GST_CLOCK_TIME_IS_VALID (start) && GST_CLOCK_TIME_IS_VALID (duration)) {
3651
0
    stop = start + duration;
3652
0
  } else if (GST_CLOCK_TIME_IS_VALID (start)
3653
0
      && !GST_CLOCK_TIME_IS_VALID (duration)) {
3654
    /* If we don't clip away buffers that far before the segment we
3655
     * can cause the pipeline to lockup. This can happen if audio is
3656
     * properly clipped, and thus the audio sink does not preroll yet
3657
     * but the video sink prerolls because we already outputted a
3658
     * buffer here... and then queues run full.
3659
     *
3660
     * In the worst case we will clip one buffer too many here now if no
3661
     * framerate is given, no buffer duration is given and the actual
3662
     * framerate is lower than 25fps */
3663
0
    stop = start + 40 * GST_MSECOND;
3664
0
  }
3665
3666
0
  segment = &decoder->output_segment;
3667
0
  if (gst_segment_clip (segment, GST_FORMAT_TIME, start, stop, &cstart, &cstop)) {
3668
0
    GST_BUFFER_PTS (buf) = cstart;
3669
3670
0
    if (stop != GST_CLOCK_TIME_NONE && GST_CLOCK_TIME_IS_VALID (duration))
3671
0
      GST_BUFFER_DURATION (buf) = cstop - cstart;
3672
3673
0
    GST_LOG_OBJECT (decoder,
3674
0
        "accepting buffer inside segment: %" GST_TIME_FORMAT " %"
3675
0
        GST_TIME_FORMAT " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
3676
0
        " time %" GST_TIME_FORMAT,
3677
0
        GST_TIME_ARGS (cstart),
3678
0
        GST_TIME_ARGS (cstop),
3679
0
        GST_TIME_ARGS (segment->start), GST_TIME_ARGS (segment->stop),
3680
0
        GST_TIME_ARGS (segment->time));
3681
0
  } else {
3682
0
    GST_LOG_OBJECT (decoder,
3683
0
        "dropping buffer outside segment: %" GST_TIME_FORMAT
3684
0
        " %" GST_TIME_FORMAT
3685
0
        " seg %" GST_TIME_FORMAT " to %" GST_TIME_FORMAT
3686
0
        " time %" GST_TIME_FORMAT,
3687
0
        GST_TIME_ARGS (start), GST_TIME_ARGS (stop),
3688
0
        GST_TIME_ARGS (segment->start),
3689
0
        GST_TIME_ARGS (segment->stop), GST_TIME_ARGS (segment->time));
3690
    /* only check and return EOS if upstream still
3691
     * in the same segment and interested as such */
3692
0
    if (decoder->priv->in_out_segment_sync) {
3693
0
      if (segment->rate >= 0) {
3694
0
        if (GST_BUFFER_PTS (buf) >= segment->stop)
3695
0
          ret = GST_FLOW_EOS;
3696
0
      } else if (GST_BUFFER_PTS (buf) < segment->start) {
3697
0
        ret = GST_FLOW_EOS;
3698
0
      }
3699
0
    }
3700
0
    gst_buffer_unref (buf);
3701
0
    goto done;
3702
0
  }
3703
3704
  /* Check if the buffer is too late (QoS). */
3705
0
  if (priv->do_qos && GST_CLOCK_TIME_IS_VALID (priv->earliest_time)) {
3706
0
    GstClockTime deadline = GST_CLOCK_TIME_NONE;
3707
    /* We prefer to use the frame stop position for checking for QoS since we
3708
     * don't want to drop a frame which is partially late */
3709
0
    if (GST_CLOCK_TIME_IS_VALID (cstop))
3710
0
      deadline = gst_segment_to_running_time (segment, GST_FORMAT_TIME, cstop);
3711
0
    else if (GST_CLOCK_TIME_IS_VALID (cstart))
3712
0
      deadline = gst_segment_to_running_time (segment, GST_FORMAT_TIME, cstart);
3713
0
    if (GST_CLOCK_TIME_IS_VALID (deadline) && deadline < priv->earliest_time) {
3714
0
      GST_WARNING_OBJECT (decoder,
3715
0
          "Dropping frame due to QoS. start:%" GST_TIME_FORMAT " deadline:%"
3716
0
          GST_TIME_FORMAT " earliest_time:%" GST_TIME_FORMAT,
3717
0
          GST_TIME_ARGS (start), GST_TIME_ARGS (deadline),
3718
0
          GST_TIME_ARGS (priv->earliest_time));
3719
0
      gst_video_decoder_post_qos_drop (decoder, cstart);
3720
0
      gst_buffer_unref (buf);
3721
0
      priv->discont = TRUE;
3722
0
      goto done;
3723
0
    }
3724
0
  }
3725
3726
  /* Set DISCONT flag here ! */
3727
3728
0
  if (priv->discont) {
3729
0
    GST_DEBUG_OBJECT (decoder, "Setting discont on output buffer");
3730
0
    GST_BUFFER_FLAG_SET (buf, GST_BUFFER_FLAG_DISCONT);
3731
0
    priv->discont = FALSE;
3732
0
  }
3733
3734
  /* update rate estimate */
3735
0
  GST_OBJECT_LOCK (decoder);
3736
0
  priv->bytes_out += gst_buffer_get_size (buf);
3737
0
  if (GST_CLOCK_TIME_IS_VALID (duration)) {
3738
0
    priv->time += duration;
3739
0
  } else {
3740
    /* FIXME : Use difference between current and previous outgoing
3741
     * timestamp, and relate to difference between current and previous
3742
     * bytes */
3743
    /* better none than nothing valid */
3744
0
    priv->time = GST_CLOCK_TIME_NONE;
3745
0
  }
3746
0
  GST_OBJECT_UNLOCK (decoder);
3747
3748
0
  GST_DEBUG_OBJECT (decoder, "pushing buffer %p of size %" G_GSIZE_FORMAT ", "
3749
0
      "PTS %" GST_TIME_FORMAT ", dur %" GST_TIME_FORMAT, buf,
3750
0
      gst_buffer_get_size (buf),
3751
0
      GST_TIME_ARGS (GST_BUFFER_PTS (buf)),
3752
0
      GST_TIME_ARGS (GST_BUFFER_DURATION (buf)));
3753
3754
  /* we got data, so note things are looking up again, reduce
3755
   * the error count, if there is one */
3756
0
  if (G_UNLIKELY (priv->error_count))
3757
0
    priv->error_count = 0;
3758
3759
0
#ifndef GST_DISABLE_DEBUG
3760
0
  if (G_UNLIKELY (priv->last_reset_time != GST_CLOCK_TIME_NONE)) {
3761
0
    GstClockTime elapsed = gst_util_get_timestamp () - priv->last_reset_time;
3762
3763
    /* First buffer since reset, report how long we took */
3764
0
    GST_INFO_OBJECT (decoder, "First buffer since flush took %" GST_TIME_FORMAT
3765
0
        " to produce", GST_TIME_ARGS (elapsed));
3766
0
    priv->last_reset_time = GST_CLOCK_TIME_NONE;
3767
0
  }
3768
0
#endif
3769
3770
  /* release STREAM_LOCK not to block upstream
3771
   * while pushing buffer downstream */
3772
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3773
0
  ret = gst_pad_push (decoder->srcpad, buf);
3774
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3775
3776
0
done:
3777
0
  return ret;
3778
0
}
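The clipping code above derives a stop time of start + duration, and when no duration is known it assumes 40 ms (25 fps) so that undecorated buffers far outside the segment can still be clipped, as the comment explains. A standalone sketch of that computation, assuming 64-bit nanosecond timestamps and a sentinel value standing in for GST_CLOCK_TIME_NONE (the helper name is hypothetical):

```c
#include <stdint.h>

#define SKETCH_NONE UINT64_MAX          /* stands in for GST_CLOCK_TIME_NONE */
#define SKETCH_MSECOND 1000000ULL       /* one millisecond in nanoseconds */

/* Stop time used for segment clipping: start + duration when both are
 * valid, otherwise fall back to a 40 ms (25 fps) assumed duration. */
static uint64_t
sketch_clip_stop (uint64_t start, uint64_t duration)
{
  if (start == SKETCH_NONE)
    return SKETCH_NONE;
  if (duration != SKETCH_NONE)
    return start + duration;
  return start + 40 * SKETCH_MSECOND;
}
```

As the FIXME-style comment in the code notes, the fallback clips at most one buffer too many when the actual framerate is below 25 fps.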
3779
3780
/**
3781
 * gst_video_decoder_add_to_frame:
3782
 * @decoder: a #GstVideoDecoder
3783
 * @n_bytes: the number of bytes to add
3784
 *
3785
 * Removes the next @n_bytes of input data and adds it to the currently parsed frame.
3786
 */
3787
void
3788
gst_video_decoder_add_to_frame (GstVideoDecoder * decoder, int n_bytes)
3789
6
{
3790
6
  GstVideoDecoderPrivate *priv = decoder->priv;
3791
6
  GstBuffer *buf;
3792
3793
6
  GST_LOG_OBJECT (decoder, "add %d bytes to frame", n_bytes);
3794
3795
6
  if (n_bytes == 0)
3796
2
    return;
3797
3798
4
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3799
4
  if (gst_adapter_available (priv->output_adapter) == 0) {
3800
4
    priv->frame_offset =
3801
4
        priv->input_offset - gst_adapter_available (priv->input_adapter);
3802
4
  }
3803
4
  buf = gst_adapter_take_buffer (priv->input_adapter, n_bytes);
3804
3805
4
  gst_adapter_push (priv->output_adapter, buf);
3806
4
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3807
4
}
3808
3809
/**
3810
 * gst_video_decoder_get_pending_frame_size:
3811
 * @decoder: a #GstVideoDecoder
3812
 *
3813
 * Returns the number of bytes previously added to the current frame
3814
 * by calling gst_video_decoder_add_to_frame().
3815
 *
3816
 * Returns: The number of bytes pending for the current frame
3817
 *
3818
 * Since: 1.4
3819
 */
3820
gsize
3821
gst_video_decoder_get_pending_frame_size (GstVideoDecoder * decoder)
3822
0
{
3823
0
  GstVideoDecoderPrivate *priv = decoder->priv;
3824
0
  gsize ret;
3825
3826
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3827
0
  ret = gst_adapter_available (priv->output_adapter);
3828
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3829
3830
0
  GST_LOG_OBJECT (decoder, "Current pending frame has %" G_GSIZE_FORMAT " bytes",
3831
0
      ret);
3832
3833
0
  return ret;
3834
0
}
3835
3836
static guint64
3837
gst_video_decoder_get_frame_duration (GstVideoDecoder * decoder,
3838
    GstVideoCodecFrame * frame)
3839
3
{
3840
3
  GstVideoCodecState *state = decoder->priv->output_state;
3841
3842
  /* it's possible that we don't have a state yet when we are dropping the
3843
   * initial buffers */
3844
3
  if (state == NULL)
3845
3
    return GST_CLOCK_TIME_NONE;
3846
3847
0
  if (state->info.fps_d == 0 || state->info.fps_n == 0) {
3848
0
    return GST_CLOCK_TIME_NONE;
3849
0
  }
3850
3851
  /* FIXME: For interlaced frames this needs to take into account
3852
   * the number of valid fields in the frame
3853
   */
3854
3855
0
  return gst_util_uint64_scale (GST_SECOND, state->info.fps_d,
3856
0
      state->info.fps_n);
3857
0
}
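The return value above is one frame interval: GST_SECOND scaled by fps_d/fps_n, i.e. the inverse of the framerate. A standalone sketch of the arithmetic, under the assumption that the operands fit comfortably in 64 bits (gst_util_uint64_scale() exists precisely to avoid overflow for arbitrary values; the helper name here is hypothetical):

```c
#include <stdint.h>

#define SKETCH_GST_SECOND 1000000000ULL /* one second in nanoseconds */

/* Duration of one frame in nanoseconds given a fps_n/fps_d framerate;
 * returns UINT64_MAX (standing in for GST_CLOCK_TIME_NONE) when the
 * framerate is unknown or invalid, matching the checks above. */
static uint64_t
sketch_frame_duration (unsigned fps_n, unsigned fps_d)
{
  if (fps_n == 0 || fps_d == 0)
    return UINT64_MAX;
  return SKETCH_GST_SECOND * fps_d / fps_n;
}
```

For 25 fps this yields 40 ms; for NTSC 30000/1001 it yields roughly 33.37 ms, truncated to whole nanoseconds as gst_util_uint64_scale() does.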
3858
3859
/**
3860
 * gst_video_decoder_have_frame:
3861
 * @decoder: a #GstVideoDecoder
3862
 *
3863
 * Gathers all data collected for the currently parsed frame, attaches the
3864
 * corresponding metadata, and passes it along for further processing, i.e. @handle_frame.
3865
 *
3866
 * Returns: a #GstFlowReturn
3867
 */
3868
GstFlowReturn
3869
gst_video_decoder_have_frame (GstVideoDecoder * decoder)
3870
6
{
3871
6
  GstVideoDecoderPrivate *priv = decoder->priv;
3872
6
  GstBuffer *buffer;
3873
6
  int n_available;
3874
6
  GstClockTime pts, dts, duration;
3875
6
  guint flags;
3876
6
  GstFlowReturn ret = GST_FLOW_OK;
3877
3878
6
  GST_LOG_OBJECT (decoder, "have_frame at offset %" G_GUINT64_FORMAT,
3879
6
      priv->frame_offset);
3880
3881
6
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
3882
3883
6
  n_available = gst_adapter_available (priv->output_adapter);
3884
6
  if (n_available) {
3885
4
    buffer = gst_adapter_take_buffer (priv->output_adapter, n_available);
3886
4
  } else {
3887
2
    buffer = gst_buffer_new_and_alloc (0);
3888
2
  }
3889
3890
6
  gst_video_decoder_replace_input_buffer (decoder, priv->current_frame,
3891
6
      &buffer);
3892
3893
6
  gst_video_decoder_get_buffer_info_at_offset (decoder,
3894
6
      priv->frame_offset, &pts, &dts, &duration, &flags);
3895
3896
6
  GST_BUFFER_PTS (buffer) = pts;
3897
6
  GST_BUFFER_DTS (buffer) = dts;
3898
6
  GST_BUFFER_DURATION (buffer) = duration;
3899
6
  GST_BUFFER_FLAGS (buffer) = flags;
3900
3901
6
  GST_LOG_OBJECT (decoder, "collected frame size %d, "
3902
6
      "PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dur %"
3903
6
      GST_TIME_FORMAT, n_available, GST_TIME_ARGS (pts), GST_TIME_ARGS (dts),
3904
6
      GST_TIME_ARGS (duration));
3905
3906
6
  if (!GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_DELTA_UNIT)) {
3907
2
    GST_DEBUG_OBJECT (decoder, "Marking as sync point");
3908
2
    GST_VIDEO_CODEC_FRAME_SET_SYNC_POINT (priv->current_frame);
3909
2
  }
3910
3911
6
  if (GST_BUFFER_FLAG_IS_SET (buffer, GST_BUFFER_FLAG_CORRUPTED)) {
3912
0
    GST_DEBUG_OBJECT (decoder, "Marking as corrupted");
3913
0
    GST_VIDEO_CODEC_FRAME_FLAG_SET (priv->current_frame,
3914
0
        GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED);
3915
0
  }
3916
3917
  /* In reverse playback, just capture and queue frames for later processing */
3918
6
  if (decoder->input_segment.rate < 0.0) {
3919
0
    priv->parse_gather =
3920
0
        g_list_prepend (priv->parse_gather, priv->current_frame);
3921
0
    priv->current_frame = NULL;
3922
6
  } else {
3923
6
    GstVideoCodecFrame *frame = priv->current_frame;
3924
3925
    /* In subframe mode, we keep a ref for ourselves
3926
     * as this frame will be kept during the data collection
3927
     * in parsed mode. The frame reference will be released by
3928
     * finish_(sub)frame or drop_(sub)frame.*/
3929
6
    if (gst_video_decoder_get_subframe_mode (decoder)) {
3930
0
      frame->abidata.ABI.num_subframes++;
3931
0
      gst_video_codec_frame_ref (priv->current_frame);
3932
6
    } else {
3933
6
      priv->current_frame = NULL;
3934
6
    }
3935
3936
    /* Decode the frame, which gives away our ref */
3937
6
    ret = gst_video_decoder_decode_frame (decoder, frame);
3938
6
  }
3939
3940
6
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
3941
3942
6
  return ret;
3943
6
}
3944
3945
/* Pass the frame in priv->current_frame through the
3946
 * handle_frame() callback for decoding and passing to gvd_finish_frame(),
3947
 * or dropping by passing to gvd_drop_frame() */
3948
static GstFlowReturn
3949
gst_video_decoder_decode_frame (GstVideoDecoder * decoder,
3950
    GstVideoCodecFrame * frame)
3951
6
{
3952
6
  GstVideoDecoderPrivate *priv = decoder->priv;
3953
6
  GstVideoDecoderClass *decoder_class;
3954
6
  GstFlowReturn ret = GST_FLOW_OK;
3955
3956
6
  decoder_class = GST_VIDEO_DECODER_GET_CLASS (decoder);
3957
3958
  /* FIXME : This should only have to be checked once (either the subclass has an
3959
   * implementation, or it doesn't) */
3960
6
  g_return_val_if_fail (decoder_class->handle_frame != NULL, GST_FLOW_ERROR);
3961
6
  g_return_val_if_fail (frame != NULL, GST_FLOW_ERROR);
3962
3963
6
  frame->pts = GST_BUFFER_PTS (frame->input_buffer);
3964
6
  frame->dts = GST_BUFFER_DTS (frame->input_buffer);
3965
6
  frame->duration = GST_BUFFER_DURATION (frame->input_buffer);
3966
6
  frame->deadline =
3967
6
      gst_segment_to_running_time (&decoder->input_segment, GST_FORMAT_TIME,
3968
6
      frame->pts);
3969
3970
  /* For keyframes, PTS = DTS + constant_offset, usually 0 to 3 frame
3971
   * durations. */
3972
  /* FIXME upstream can be quite wrong about the keyframe aspect,
3973
   * so we could be going off here as well,
3974
   * maybe let subclass decide if it really is/was a keyframe */
3975
6
  if (GST_VIDEO_CODEC_FRAME_IS_SYNC_POINT (frame)) {
3976
2
    priv->distance_from_sync = 0;
3977
3978
2
    GST_OBJECT_LOCK (decoder);
3979
2
    priv->request_sync_point_flags &=
3980
2
        ~GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT;
3981
2
    if (priv->request_sync_point_frame_number == REQUEST_SYNC_POINT_PENDING)
3982
0
      priv->request_sync_point_frame_number = frame->system_frame_number;
3983
2
    GST_OBJECT_UNLOCK (decoder);
3984
3985
2
    if (GST_CLOCK_TIME_IS_VALID (frame->pts)
3986
0
        && GST_CLOCK_TIME_IS_VALID (frame->dts)) {
3987
      /* just in case they are not equal as might ideally be,
3988
       * e.g. quicktime has a (positive) delta approach */
3989
0
      priv->pts_delta = frame->pts - frame->dts;
3990
0
      GST_DEBUG_OBJECT (decoder, "PTS delta %d ms",
3991
0
          (gint) (priv->pts_delta / GST_MSECOND));
3992
0
    }
3993
4
  } else {
3994
4
    if (priv->distance_from_sync == -1 && priv->automatic_request_sync_points) {
3995
0
      GST_DEBUG_OBJECT (decoder,
3996
0
          "Didn't receive a keyframe yet, requesting sync point");
3997
0
      gst_video_decoder_request_sync_point (decoder, frame,
3998
0
          priv->automatic_request_sync_point_flags);
3999
0
    }
4000
4001
4
    GST_OBJECT_LOCK (decoder);
4002
4
    if ((priv->needs_sync_point && priv->distance_from_sync == -1)
4003
4
        || (priv->request_sync_point_flags &
4004
4
            GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT)) {
4005
0
      GST_WARNING_OBJECT (decoder,
4006
0
          "Subclass requires a sync point but we didn't receive one yet, discarding input");
4007
0
      GST_OBJECT_UNLOCK (decoder);
4008
0
      if (priv->automatic_request_sync_points) {
4009
0
        gst_video_decoder_request_sync_point (decoder, frame,
4010
0
            priv->automatic_request_sync_point_flags);
4011
0
      }
4012
0
      gst_video_decoder_release_frame (decoder, frame);
4013
0
      return GST_FLOW_OK;
4014
0
    }
4015
4
    GST_OBJECT_UNLOCK (decoder);
4016
4017
4
    priv->distance_from_sync++;
4018
4
  }
4019
4020
6
  frame->distance_from_sync = priv->distance_from_sync;
4021
4022
6
  if (!gst_video_decoder_get_subframe_mode (decoder)
4023
6
      || frame->abidata.ABI.num_subframes == 1) {
4024
6
    frame->abidata.ABI.ts = frame->dts;
4025
6
    frame->abidata.ABI.ts2 = frame->pts;
4026
6
  }
4027
4028
6
  GST_LOG_OBJECT (decoder,
4029
6
      "frame %p PTS %" GST_TIME_FORMAT ", DTS %" GST_TIME_FORMAT ", dist %d",
4030
6
      frame, GST_TIME_ARGS (frame->pts), GST_TIME_ARGS (frame->dts),
4031
6
      frame->distance_from_sync);
4032
  /* FIXME: suboptimal way to add a unique frame to the list, in case of subframe mode. */
4033
6
  if (!g_queue_find (&priv->frames, frame)) {
4034
6
    g_queue_push_tail (&priv->frames, gst_video_codec_frame_ref (frame));
4035
6
  } else {
4036
0
    GST_LOG_OBJECT (decoder,
4037
0
        "Do not add an existing frame used to decode subframes");
4038
0
  }
4039
4040
6
  if (priv->frames.length > 10) {
4041
0
    GST_DEBUG_OBJECT (decoder, "decoder frame list getting long: %d frames, "
4042
0
        "possible internal leaking?", priv->frames.length);
4043
0
  }
4044
4045
  /* do something with frame */
4046
6
  ret = decoder_class->handle_frame (decoder, frame);
4047
6
  if (ret != GST_FLOW_OK)
4048
6
    GST_DEBUG_OBJECT (decoder, "flow error %s", gst_flow_get_name (ret));
4049
4050
  /* the frame has either been added to parse_gather or sent to
4051
     handle frame so there is no need to unref it */
4052
6
  return ret;
4053
6
}
4054
4055
4056
/**
4057
 * gst_video_decoder_get_output_state:
4058
 * @decoder: a #GstVideoDecoder
4059
 *
4060
 * Get the #GstVideoCodecState currently describing the output stream.
4061
 *
4062
 * Returns: (transfer full) (nullable): #GstVideoCodecState describing format of video data.
4063
 */
4064
GstVideoCodecState *
4065
gst_video_decoder_get_output_state (GstVideoDecoder * decoder)
4066
0
{
4067
0
  GstVideoCodecState *state = NULL;
4068
4069
0
  GST_OBJECT_LOCK (decoder);
4070
0
  if (decoder->priv->output_state)
4071
0
    state = gst_video_codec_state_ref (decoder->priv->output_state);
4072
0
  GST_OBJECT_UNLOCK (decoder);
4073
4074
0
  return state;
4075
0
}
4076
4077
static GstVideoCodecState *
4078
_set_interlaced_output_state (GstVideoDecoder * decoder,
4079
    GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode, guint width,
4080
    guint height, GstVideoCodecState * reference, gboolean copy_interlace_mode)
4081
0
{
4082
0
  GstVideoDecoderPrivate *priv = decoder->priv;
4083
0
  GstVideoCodecState *state;
4084
4085
0
  g_assert ((copy_interlace_mode
4086
0
          && interlace_mode == GST_VIDEO_INTERLACE_MODE_PROGRESSIVE)
4087
0
      || !copy_interlace_mode);
4088
4089
0
  GST_DEBUG_OBJECT (decoder,
4090
0
      "fmt:%d, width:%d, height:%d, interlace-mode: %s, reference:%p", fmt,
4091
0
      width, height, gst_video_interlace_mode_to_string (interlace_mode),
4092
0
      reference);
4093
4094
  /* Create the new output state */
4095
0
  state =
4096
0
      _new_output_state (fmt, interlace_mode, width, height, reference,
4097
0
      copy_interlace_mode);
4098
0
  if (!state)
4099
0
    return NULL;
4100
4101
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4102
4103
0
  GST_OBJECT_LOCK (decoder);
4104
  /* Replace existing output state by new one */
4105
0
  if (priv->output_state)
4106
0
    gst_video_codec_state_unref (priv->output_state);
4107
0
  priv->output_state = gst_video_codec_state_ref (state);
4108
4109
0
  if (priv->output_state != NULL && priv->output_state->info.fps_n > 0) {
4110
0
    priv->qos_frame_duration =
4111
0
        gst_util_uint64_scale (GST_SECOND, priv->output_state->info.fps_d,
4112
0
        priv->output_state->info.fps_n);
4113
0
  } else {
4114
0
    priv->qos_frame_duration = 0;
4115
0
  }
4116
0
  priv->output_state_changed = TRUE;
4117
0
  GST_OBJECT_UNLOCK (decoder);
4118
4119
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4120
4121
0
  return state;
4122
0
}
4123
4124
/**
4125
 * gst_video_decoder_set_output_state:
4126
 * @decoder: a #GstVideoDecoder
4127
 * @fmt: a #GstVideoFormat
4128
 * @width: The width in pixels
4129
 * @height: The height in pixels
4130
 * @reference: (nullable) (transfer none): An optional reference #GstVideoCodecState
4131
 *
4132
 * Creates a new #GstVideoCodecState with the specified @fmt, @width and @height
4133
 * as the output state for the decoder.
4134
 * Any previously set output state on @decoder will be replaced by the newly
4135
 * created one.
4136
 *
4137
 * If the subclass wishes to copy over existing fields (like pixel aspect ratio,
4138
 * or framerate) from an existing #GstVideoCodecState, it can be provided as a
4139
 * @reference.
4140
 *
4141
 * If the subclass wishes to override some fields from the output state (like
4142
 * pixel-aspect-ratio or framerate) it can do so on the returned #GstVideoCodecState.
4143
 *
4144
 * The new output state will only take effect (set on pads and buffers) starting
4145
 * from the next call to #gst_video_decoder_finish_frame().
4146
 *
4147
 * Returns: (transfer full) (nullable): the newly configured output state.
4148
 */
4149
GstVideoCodecState *
4150
gst_video_decoder_set_output_state (GstVideoDecoder * decoder,
4151
    GstVideoFormat fmt, guint width, guint height,
4152
    GstVideoCodecState * reference)
4153
0
{
4154
0
  return _set_interlaced_output_state (decoder, fmt,
4155
0
      GST_VIDEO_INTERLACE_MODE_PROGRESSIVE, width, height, reference, TRUE);
4156
0
}

/**
 * gst_video_decoder_set_interlaced_output_state:
 * @decoder: a #GstVideoDecoder
 * @fmt: a #GstVideoFormat
 * @width: The width in pixels
 * @height: The height in pixels
 * @interlace_mode: A #GstVideoInterlaceMode
 * @reference: (nullable) (transfer none): An optional reference #GstVideoCodecState
 *
 * Same as #gst_video_decoder_set_output_state() but also allows you to set
 * the interlacing mode.
 *
 * Returns: (transfer full) (nullable): the newly configured output state.
 *
 * Since: 1.16
 */
GstVideoCodecState *
gst_video_decoder_set_interlaced_output_state (GstVideoDecoder * decoder,
    GstVideoFormat fmt, GstVideoInterlaceMode interlace_mode, guint width,
    guint height, GstVideoCodecState * reference)
{
  return _set_interlaced_output_state (decoder, fmt, interlace_mode, width,
      height, reference, FALSE);
}


/**
 * gst_video_decoder_get_oldest_frame:
 * @decoder: a #GstVideoDecoder
 *
 * Get the oldest pending unfinished #GstVideoCodecFrame
 *
 * Returns: (transfer full) (nullable): oldest pending unfinished #GstVideoCodecFrame.
 */
GstVideoCodecFrame *
gst_video_decoder_get_oldest_frame (GstVideoDecoder * decoder)
{
  GstVideoCodecFrame *frame = NULL;

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (decoder->priv->frames.head)
    frame = gst_video_codec_frame_ref (decoder->priv->frames.head->data);
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return (GstVideoCodecFrame *) frame;
}

/**
 * gst_video_decoder_get_frame:
 * @decoder: a #GstVideoDecoder
 * @frame_number: system_frame_number of a frame
 *
 * Get a pending unfinished #GstVideoCodecFrame
 *
 * Returns: (transfer full) (nullable): pending unfinished #GstVideoCodecFrame identified by @frame_number.
 */
GstVideoCodecFrame *
gst_video_decoder_get_frame (GstVideoDecoder * decoder, int frame_number)
{
  GList *g;
  GstVideoCodecFrame *frame = NULL;

  GST_DEBUG_OBJECT (decoder, "frame_number : %d", frame_number);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  for (g = decoder->priv->frames.head; g; g = g->next) {
    GstVideoCodecFrame *tmp = g->data;

    if (tmp->system_frame_number == frame_number) {
      frame = gst_video_codec_frame_ref (tmp);
      break;
    }
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return frame;
}
4235
4236
/**
4237
 * gst_video_decoder_get_frames:
4238
 * @decoder: a #GstVideoDecoder
4239
 *
4240
 * Get all pending unfinished #GstVideoCodecFrame
4241
 *
4242
 * Returns: (transfer full) (element-type GstVideoCodecFrame): pending unfinished #GstVideoCodecFrame.
4243
 */
4244
GList *
4245
gst_video_decoder_get_frames (GstVideoDecoder * decoder)
4246
0
{
4247
0
  GList *frames;
4248
4249
0
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
4250
0
  frames =
4251
0
      g_list_copy_deep (decoder->priv->frames.head,
4252
0
      (GCopyFunc) gst_video_codec_frame_ref, NULL);
4253
0
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
4254
4255
0
  return frames;
4256
0
}

static gboolean
gst_video_decoder_decide_allocation_default (GstVideoDecoder * decoder,
    GstQuery * query)
{
  GstCaps *outcaps = NULL;
  GstBufferPool *pool = NULL;
  guint size, min, max;
  GstAllocator *allocator = NULL;
  GstAllocationParams params;
  GstStructure *config;
  gboolean update_pool, update_allocator;
  GstVideoInfo vinfo;

  gst_query_parse_allocation (query, &outcaps, NULL);
  gst_video_info_init (&vinfo);
  if (outcaps)
    gst_video_info_from_caps (&vinfo, outcaps);

  /* we got configuration from our peer or the decide_allocation method,
   * parse them */
  if (gst_query_get_n_allocation_params (query) > 0) {
    /* try the allocator */
    gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
    update_allocator = TRUE;
  } else {
    allocator = NULL;
    gst_allocation_params_init (&params);
    update_allocator = FALSE;
  }

  if (gst_query_get_n_allocation_pools (query) > 0) {
    gst_query_parse_nth_allocation_pool (query, 0, &pool, &size, &min, &max);
    size = MAX (size, vinfo.size);
    update_pool = TRUE;
  } else {
    pool = NULL;
    size = vinfo.size;
    min = max = 0;

    update_pool = FALSE;
  }

  if (pool == NULL) {
    /* no pool, we can make our own */
    GST_DEBUG_OBJECT (decoder, "no pool, making new pool");
    pool = gst_video_buffer_pool_new ();
    {
      gchar *name = g_strdup_printf ("%s-pool", GST_OBJECT_NAME (decoder));
      g_object_set (pool, "name", name, NULL);
      g_free (name);
    }
  }

  /* now configure */
  config = gst_buffer_pool_get_config (pool);
  gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
  gst_buffer_pool_config_set_allocator (config, allocator, &params);

  GST_DEBUG_OBJECT (decoder,
      "setting config %" GST_PTR_FORMAT " in pool %" GST_PTR_FORMAT, config,
      pool);
  if (!gst_buffer_pool_set_config (pool, config)) {
    config = gst_buffer_pool_get_config (pool);

    /* If the changes are not acceptable, fall back to a generic pool */
    if (!gst_buffer_pool_config_validate_params (config, outcaps, size, min,
            max)) {
      gst_structure_free (config);
      gst_clear_object (&pool);
    } else if (!gst_buffer_pool_set_config (pool, config)) {
      gst_clear_object (&pool);
    }

    if (!pool) {
      GST_DEBUG_OBJECT (decoder, "unsupported pool, making new pool");
      gchar *name =
          g_strdup_printf ("%s-fallback-pool", GST_OBJECT_NAME (decoder));
      pool = gst_video_buffer_pool_new ();
      g_object_set (pool, "name", name, NULL);
      g_free (name);

      config = gst_buffer_pool_get_config (pool);
      gst_buffer_pool_config_set_params (config, outcaps, size, min, max);
      gst_buffer_pool_config_set_allocator (config, allocator, &params);

      if (!gst_buffer_pool_set_config (pool, config))
        goto config_failed;
    }
  }

  if (update_allocator)
    gst_query_set_nth_allocation_param (query, 0, allocator, &params);
  else
    gst_query_add_allocation_param (query, allocator, &params);
  if (allocator)
    gst_object_unref (allocator);

  if (update_pool)
    gst_query_set_nth_allocation_pool (query, 0, pool, size, min, max);
  else
    gst_query_add_allocation_pool (query, pool, size, min, max);

  if (pool)
    gst_object_unref (pool);

  return TRUE;

config_failed:
  if (allocator)
    gst_object_unref (allocator);
  if (pool)
    gst_object_unref (pool);
  GST_ELEMENT_ERROR (decoder, RESOURCE, SETTINGS,
      ("Failed to configure the buffer pool"),
      ("Configuration is most likely invalid, please report this issue."));
  return FALSE;
}

static gboolean
gst_video_decoder_propose_allocation_default (GstVideoDecoder * decoder,
    GstQuery * query)
{
  return TRUE;
}

static gboolean
gst_video_decoder_negotiate_pool (GstVideoDecoder * decoder, GstCaps * caps)
{
  GstVideoDecoderClass *klass;
  GstQuery *query = NULL;
  GstBufferPool *pool = NULL;
  GstAllocator *allocator;
  GstAllocationParams params;
  gboolean ret = TRUE;

  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);

  query = gst_query_new_allocation (caps, TRUE);

  GST_DEBUG_OBJECT (decoder, "do query ALLOCATION");

  if (!gst_pad_peer_query (decoder->srcpad, query)) {
    GST_DEBUG_OBJECT (decoder, "didn't get downstream ALLOCATION hints");
  }

  g_assert (klass->decide_allocation != NULL);
  ret = klass->decide_allocation (decoder, query);

  GST_DEBUG_OBJECT (decoder, "ALLOCATION (%d) params: %" GST_PTR_FORMAT, ret,
      query);

  if (!ret)
    goto no_decide_allocation;

  /* we got configuration from our peer or the decide_allocation method,
   * parse them */
  if (gst_query_get_n_allocation_params (query) > 0) {
    gst_query_parse_nth_allocation_param (query, 0, &allocator, &params);
  } else {
    allocator = NULL;
    gst_allocation_params_init (&params);
  }

  if (gst_query_get_n_allocation_pools (query) > 0)
    gst_query_parse_nth_allocation_pool (query, 0, &pool, NULL, NULL, NULL);
  if (!pool) {
    if (allocator)
      gst_object_unref (allocator);
    ret = FALSE;
    goto no_decide_allocation;
  }

  if (decoder->priv->allocator)
    gst_object_unref (decoder->priv->allocator);
  decoder->priv->allocator = allocator;
  decoder->priv->params = params;

  if (decoder->priv->pool) {
    /* do not set the bufferpool to inactive here, it will be done
     * in its finalize function. As videodecoder does late renegotiation
     * it might happen that some element downstream is already using this
     * same bufferpool and deactivating it will make it fail.
     * Happens when a downstream element changes from passthrough to
     * non-passthrough and gets this same bufferpool to use */
    GST_DEBUG_OBJECT (decoder, "unref pool %" GST_PTR_FORMAT,
        decoder->priv->pool);
    gst_object_unref (decoder->priv->pool);
  }
  decoder->priv->pool = pool;

  if (pool) {
    /* and activate */
    GST_DEBUG_OBJECT (decoder, "activate pool %" GST_PTR_FORMAT, pool);
    gst_buffer_pool_set_active (pool, TRUE);
  }

done:
  if (query)
    gst_query_unref (query);

  return ret;

  /* Errors */
no_decide_allocation:
  {
    GST_WARNING_OBJECT (decoder, "Subclass failed to decide allocation");
    goto done;
  }
}

static gboolean
gst_video_decoder_negotiate_default (GstVideoDecoder * decoder)
{
  GstVideoCodecState *state = decoder->priv->output_state;
  gboolean ret = TRUE;
  GstVideoCodecFrame *frame;
  GstCaps *prevcaps;
  GstCaps *incaps;

  if (!state) {
    GST_DEBUG_OBJECT (decoder,
        "Trying to negotiate the pool without setting the output format");
    ret = gst_video_decoder_negotiate_pool (decoder, NULL);
    goto done;
  }

  g_return_val_if_fail (GST_VIDEO_INFO_WIDTH (&state->info) != 0, FALSE);
  g_return_val_if_fail (GST_VIDEO_INFO_HEIGHT (&state->info) != 0, FALSE);

  /* If the base class didn't set any multiview params, assume mono
   * now */
  if (GST_VIDEO_INFO_MULTIVIEW_MODE (&state->info) ==
      GST_VIDEO_MULTIVIEW_MODE_NONE) {
    GST_VIDEO_INFO_MULTIVIEW_MODE (&state->info) =
        GST_VIDEO_MULTIVIEW_MODE_MONO;
    GST_VIDEO_INFO_MULTIVIEW_FLAGS (&state->info) =
        GST_VIDEO_MULTIVIEW_FLAGS_NONE;
  }

  GST_DEBUG_OBJECT (decoder, "output_state par %d/%d fps %d/%d",
      state->info.par_n, state->info.par_d,
      state->info.fps_n, state->info.fps_d);

  if (state->caps == NULL)
    state->caps = gst_video_info_to_caps (&state->info);

  incaps = gst_pad_get_current_caps (GST_VIDEO_DECODER_SINK_PAD (decoder));
  if (incaps) {
    GstStructure *in_struct;

    in_struct = gst_caps_get_structure (incaps, 0);
    if (gst_structure_has_field (in_struct, "mastering-display-info") ||
        gst_structure_has_field (in_struct, "content-light-level")) {
      const gchar *s;

      /* prefer upstream information */
      state->caps = gst_caps_make_writable (state->caps);
      if ((s = gst_structure_get_string (in_struct, "mastering-display-info"))) {
        gst_caps_set_simple (state->caps,
            "mastering-display-info", G_TYPE_STRING, s, NULL);
      }

      if ((s = gst_structure_get_string (in_struct, "content-light-level"))) {
        gst_caps_set_simple (state->caps,
            "content-light-level", G_TYPE_STRING, s, NULL);
      }
    }

    gst_caps_unref (incaps);
  }

  if (state->allocation_caps == NULL)
    state->allocation_caps = gst_caps_ref (state->caps);

  GST_DEBUG_OBJECT (decoder, "setting caps %" GST_PTR_FORMAT, state->caps);

  /* Push all pending pre-caps events of the oldest frame before
   * setting caps */
  frame = decoder->priv->frames.head ? decoder->priv->frames.head->data : NULL;
  if (frame || decoder->priv->current_frame_events) {
    GList **events, *l;

    if (frame) {
      events = &frame->events;
    } else {
      events = &decoder->priv->current_frame_events;
    }

    for (l = g_list_last (*events); l;) {
      GstEvent *event = GST_EVENT (l->data);
      GList *tmp;

      if (GST_EVENT_TYPE (event) < GST_EVENT_CAPS) {
        gst_video_decoder_push_event (decoder, event);
        tmp = l;
        l = l->prev;
        *events = g_list_delete_link (*events, tmp);
      } else {
        l = l->prev;
      }
    }
  }

  prevcaps = gst_pad_get_current_caps (decoder->srcpad);
  if (!prevcaps || !gst_caps_is_equal (prevcaps, state->caps)) {
    if (!prevcaps) {
      GST_DEBUG_OBJECT (decoder, "decoder src pad has currently NULL caps");
    }
    ret = gst_pad_set_caps (decoder->srcpad, state->caps);
  } else {
    ret = TRUE;
    GST_DEBUG_OBJECT (decoder,
        "current src pad and output state caps are the same");
  }
  if (prevcaps)
    gst_caps_unref (prevcaps);

  if (!ret)
    goto done;
  decoder->priv->output_state_changed = FALSE;
  /* Negotiate pool */
  ret = gst_video_decoder_negotiate_pool (decoder, state->allocation_caps);

done:
  return ret;
}

static gboolean
gst_video_decoder_negotiate_unlocked (GstVideoDecoder * decoder)
{
  GstVideoDecoderClass *klass = GST_VIDEO_DECODER_GET_CLASS (decoder);
  gboolean ret = TRUE;

  if (G_LIKELY (klass->negotiate))
    ret = klass->negotiate (decoder);

  return ret;
}

/**
 * gst_video_decoder_negotiate:
 * @decoder: a #GstVideoDecoder
 *
 * Negotiate with downstream elements to the currently configured #GstVideoCodecState.
 * Unmarks GST_PAD_FLAG_NEED_RECONFIGURE in any case, but marks it again if
 * negotiation fails.
 *
 * Returns: %TRUE if the negotiation succeeded, else %FALSE.
 */
gboolean
gst_video_decoder_negotiate (GstVideoDecoder * decoder)
{
  GstVideoDecoderClass *klass;
  gboolean ret = TRUE;

  g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), FALSE);

  klass = GST_VIDEO_DECODER_GET_CLASS (decoder);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  gst_pad_check_reconfigure (decoder->srcpad);
  if (klass->negotiate) {
    ret = klass->negotiate (decoder);
    if (!ret)
      gst_pad_mark_reconfigure (decoder->srcpad);
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return ret;
}

/**
 * gst_video_decoder_allocate_output_buffer:
 * @decoder: a #GstVideoDecoder
 *
 * Helper function that allocates a buffer to hold a video frame for @decoder's
 * current #GstVideoCodecState.
 *
 * You should use gst_video_decoder_allocate_output_frame() instead of this
 * function, if possible at all.
 *
 * Returns: (transfer full) (nullable): allocated buffer, or NULL if no buffer could be
 *     allocated (e.g. when downstream is flushing or shutting down)
 */
GstBuffer *
gst_video_decoder_allocate_output_buffer (GstVideoDecoder * decoder)
{
  GstFlowReturn flow;
  GstBuffer *buffer = NULL;
  gboolean needs_reconfigure = FALSE;

  GST_DEBUG ("alloc src buffer");

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
  if (G_UNLIKELY (!decoder->priv->output_state
          || decoder->priv->output_state_changed || needs_reconfigure)) {
    if (!gst_video_decoder_negotiate_unlocked (decoder)) {
      if (decoder->priv->output_state) {
        GST_DEBUG_OBJECT (decoder, "Failed to negotiate, fallback allocation");
        gst_pad_mark_reconfigure (decoder->srcpad);
        goto fallback;
      } else {
        GST_DEBUG_OBJECT (decoder, "Failed to negotiate, output_buffer=NULL");
        goto failed_allocation;
      }
    }
  }

  if (!decoder->priv->pool)
    goto fallback;

  flow = gst_buffer_pool_acquire_buffer (decoder->priv->pool, &buffer, NULL);

  if (flow != GST_FLOW_OK) {
    GST_INFO_OBJECT (decoder, "couldn't allocate output buffer, flow %s",
        gst_flow_get_name (flow));
    if (decoder->priv->output_state && decoder->priv->output_state->info.size)
      goto fallback;
    else
      goto failed_allocation;
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return buffer;

fallback:
  GST_INFO_OBJECT (decoder,
      "Fallback allocation, creating new buffer which doesn't belong to any buffer pool");
  buffer =
      gst_buffer_new_allocate (NULL, decoder->priv->output_state->info.size,
      NULL);

failed_allocation:
  GST_ERROR_OBJECT (decoder, "Failed to allocate the buffer..");
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return buffer;
}

/**
 * gst_video_decoder_allocate_output_frame:
 * @decoder: a #GstVideoDecoder
 * @frame: a #GstVideoCodecFrame
 *
 * Helper function that allocates a buffer to hold a video frame for @decoder's
 * current #GstVideoCodecState.  Subclass should already have configured video
 * state and set src pad caps.
 *
 * The buffer allocated here is owned by the frame and you should only
 * keep references to the frame, not the buffer.
 *
 * Returns: %GST_FLOW_OK if an output buffer could be allocated
 */
GstFlowReturn
gst_video_decoder_allocate_output_frame (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame)
{
  return gst_video_decoder_allocate_output_frame_with_params (decoder, frame,
      NULL);
}

/**
 * gst_video_decoder_allocate_output_frame_with_params:
 * @decoder: a #GstVideoDecoder
 * @frame: a #GstVideoCodecFrame
 * @params: a #GstBufferPoolAcquireParams
 *
 * Same as #gst_video_decoder_allocate_output_frame except it allows passing
 * #GstBufferPoolAcquireParams to the sub call gst_buffer_pool_acquire_buffer.
 *
 * Returns: %GST_FLOW_OK if an output buffer could be allocated
 *
 * Since: 1.12
 */
GstFlowReturn
gst_video_decoder_allocate_output_frame_with_params (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame, GstBufferPoolAcquireParams * params)
{
  GstFlowReturn flow_ret;
  GstVideoCodecState *state;
  int num_bytes;
  gboolean needs_reconfigure = FALSE;

  g_return_val_if_fail (decoder->priv->output_state, GST_FLOW_NOT_NEGOTIATED);
  g_return_val_if_fail (frame->output_buffer == NULL, GST_FLOW_ERROR);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);

  state = decoder->priv->output_state;
  if (state == NULL) {
    g_warning ("Output state should be set before allocating frame");
    goto error;
  }
  num_bytes = GST_VIDEO_INFO_SIZE (&state->info);
  if (num_bytes == 0) {
    g_warning ("Frame size should not be 0");
    goto error;
  }

  needs_reconfigure = gst_pad_check_reconfigure (decoder->srcpad);
  if (G_UNLIKELY (decoder->priv->output_state_changed || needs_reconfigure)) {
    if (!gst_video_decoder_negotiate_unlocked (decoder)) {
      gst_pad_mark_reconfigure (decoder->srcpad);
      if (GST_PAD_IS_FLUSHING (decoder->srcpad)) {
        GST_DEBUG_OBJECT (decoder,
            "Failed to negotiate a pool: pad is flushing");
        goto flushing;
      } else if (!decoder->priv->pool || decoder->priv->output_state_changed) {
        GST_DEBUG_OBJECT (decoder,
            "Failed to negotiate a pool and no previous pool to reuse");
        goto error;
      } else {
        GST_DEBUG_OBJECT (decoder,
            "Failed to negotiate a pool, falling back to the previous pool");
      }
    }
  }

  GST_LOG_OBJECT (decoder, "alloc buffer size %d", num_bytes);

  flow_ret = gst_buffer_pool_acquire_buffer (decoder->priv->pool,
      &frame->output_buffer, params);

  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return flow_ret;

flushing:
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  return GST_FLOW_FLUSHING;

error:
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
  return GST_FLOW_ERROR;
}

/**
 * gst_video_decoder_get_max_decode_time:
 * @decoder: a #GstVideoDecoder
 * @frame: a #GstVideoCodecFrame
 *
 * Determines maximum possible decoding time for @frame that will
 * allow it to decode and arrive in time (as determined by QoS events).
 * In particular, a negative result means decoding in time is no longer possible
 * and the frame should therefore be decoded as quickly as possible, or dropped.
 *
 * Returns: max decoding time.
 */
GstClockTimeDiff
gst_video_decoder_get_max_decode_time (GstVideoDecoder *
    decoder, GstVideoCodecFrame * frame)
{
  GstClockTimeDiff deadline;
  GstClockTime earliest_time;

  GST_OBJECT_LOCK (decoder);
  earliest_time = decoder->priv->earliest_time;
  if (GST_CLOCK_TIME_IS_VALID (earliest_time)
      && GST_CLOCK_TIME_IS_VALID (frame->deadline))
    deadline = GST_CLOCK_DIFF (earliest_time, frame->deadline);
  else
    deadline = G_MAXINT64;

  GST_LOG_OBJECT (decoder, "earliest %" GST_TIME_FORMAT
      ", frame deadline %" GST_TIME_FORMAT ", deadline %" GST_STIME_FORMAT,
      GST_TIME_ARGS (earliest_time), GST_TIME_ARGS (frame->deadline),
      GST_STIME_ARGS (deadline));

  GST_OBJECT_UNLOCK (decoder);

  return deadline;
}

/**
 * gst_video_decoder_get_qos_proportion:
 * @decoder: a #GstVideoDecoder
 *
 * Returns: The current QoS proportion.
 *
 * Since: 1.0.3
 */
gdouble
gst_video_decoder_get_qos_proportion (GstVideoDecoder * decoder)
{
  gdouble proportion;

  g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), 1.0);

  GST_OBJECT_LOCK (decoder);
  proportion = decoder->priv->proportion;
  GST_OBJECT_UNLOCK (decoder);

  return proportion;
}

GstFlowReturn
_gst_video_decoder_error (GstVideoDecoder * dec, gint weight,
    GQuark domain, gint code, gchar * txt, gchar * dbg, const gchar * file,
    const gchar * function, gint line)
{
  if (txt)
    GST_WARNING_OBJECT (dec, "error: %s", txt);
  if (dbg)
    GST_WARNING_OBJECT (dec, "error: %s", dbg);
  dec->priv->error_count += weight;
  dec->priv->discont = TRUE;
  if (dec->priv->max_errors >= 0 &&
      dec->priv->error_count > dec->priv->max_errors) {
    gst_element_message_full (GST_ELEMENT (dec), GST_MESSAGE_ERROR,
        domain, code, txt, dbg, file, function, line);
    return GST_FLOW_ERROR;
  } else {
    g_free (txt);
    g_free (dbg);
    return GST_FLOW_OK;
  }
}

/**
 * gst_video_decoder_set_max_errors:
 * @dec: a #GstVideoDecoder
 * @num: max tolerated errors
 *
 * Sets the number of tolerated decoder errors, where a tolerated one is then
 * only warned about, but exceeding the count leads to a fatal error.  You can
 * set -1 to never return fatal errors. The default is
 * GST_VIDEO_DECODER_MAX_ERRORS.
 *
 * The '-1' option was added in 1.4
 */
void
gst_video_decoder_set_max_errors (GstVideoDecoder * dec, gint num)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));

  dec->priv->max_errors = num;
}

/**
 * gst_video_decoder_get_max_errors:
 * @dec: a #GstVideoDecoder
 *
 * Returns: currently configured decoder tolerated error count.
 */
gint
gst_video_decoder_get_max_errors (GstVideoDecoder * dec)
{
  g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);

  return dec->priv->max_errors;
}

/**
 * gst_video_decoder_set_needs_format:
 * @dec: a #GstVideoDecoder
 * @enabled: new state
 *
 * Configures decoder format needs.  If enabled, subclass needs to be
 * negotiated with format caps before it can process any data.  It will then
 * never be handed any data before it has been configured.
 * Otherwise, it might be handed data without having been configured and is
 * then expected to be able to cope, either by default or based on the
 * input data.
 *
 * Since: 1.4
 */
void
gst_video_decoder_set_needs_format (GstVideoDecoder * dec, gboolean enabled)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));

  dec->priv->needs_format = enabled;
}

/**
 * gst_video_decoder_get_needs_format:
 * @dec: a #GstVideoDecoder
 *
 * Queries decoder required format handling.
 *
 * Returns: %TRUE if required format handling is enabled.
 *
 * Since: 1.4
 */
gboolean
gst_video_decoder_get_needs_format (GstVideoDecoder * dec)
{
  gboolean result;

  g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), FALSE);

  result = dec->priv->needs_format;

  return result;
}

/**
 * gst_video_decoder_set_packetized:
 * @decoder: a #GstVideoDecoder
 * @packetized: whether the input data should be considered as packetized.
 *
 * Allows baseclass to consider input data as packetized or not. If the
 * input is packetized, then the @parse method will not be called.
 */
void
gst_video_decoder_set_packetized (GstVideoDecoder * decoder,
    gboolean packetized)
{
  decoder->priv->packetized = packetized;
}

/**
 * gst_video_decoder_get_packetized:
 * @decoder: a #GstVideoDecoder
 *
 * Queries whether input data is considered packetized or not by the
 * base class.
 *
 * Returns: TRUE if input data is considered packetized.
 */
gboolean
gst_video_decoder_get_packetized (GstVideoDecoder * decoder)
{
  return decoder->priv->packetized;
}

/**
 * gst_video_decoder_have_last_subframe:
 * @decoder: a #GstVideoDecoder
 * @frame: (transfer none): the #GstVideoCodecFrame to update
 *
 * Indicates that the last subframe has been processed by the decoder
 * in @frame. This will release the current frame in the video decoder,
 * allowing it to receive new frames from upstream elements. This method
 * must be called in the subclass @handle_frame callback.
 *
 * Returns: a #GstFlowReturn, usually GST_FLOW_OK.
 *
 * Since: 1.20
 */
GstFlowReturn
gst_video_decoder_have_last_subframe (GstVideoDecoder * decoder,
    GstVideoCodecFrame * frame)
{
  g_return_val_if_fail (gst_video_decoder_get_subframe_mode (decoder),
      GST_FLOW_OK);
  /* unref once from the list */
  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (decoder->priv->current_frame == frame) {
    gst_video_codec_frame_unref (decoder->priv->current_frame);
    decoder->priv->current_frame = NULL;
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);

  return GST_FLOW_OK;
}

/**
 * gst_video_decoder_set_subframe_mode:
 * @decoder: a #GstVideoDecoder
 * @subframe_mode: whether the input data should be considered as subframes.
 *
 * If this is set to TRUE, it informs the base class that the subclass
 * can receive the data at a granularity lower than one frame.
 *
 * Note that in this mode, the subclass has two options. It can either
 * require the presence of a GST_VIDEO_BUFFER_FLAG_MARKER to mark the
 * end of a frame. Or it can operate in such a way that it will decode
 * a single frame at a time. In this second case, every buffer that
 * arrives to the element is considered part of the same frame until
 * gst_video_decoder_finish_frame() is called.
 *
 * In either case, the same #GstVideoCodecFrame will be passed to the
 * GstVideoDecoderClass:handle_frame vmethod repeatedly with a
 * different GstVideoCodecFrame:input_buffer every time until the end of the
 * frame has been signaled using either method.
 * This method must be called during the decoder subclass @set_format call.
 *
 * Since: 1.20
 */
void
gst_video_decoder_set_subframe_mode (GstVideoDecoder * decoder,
    gboolean subframe_mode)
{
  decoder->priv->subframe_mode = subframe_mode;
}

/**
 * gst_video_decoder_get_subframe_mode:
 * @decoder: a #GstVideoDecoder
 *
 * Queries whether input data is considered as subframes or not by the
 * base class. If FALSE, each input buffer will be considered as a full
 * frame.
 *
 * Returns: TRUE if input data is considered as sub frames.
 *
 * Since: 1.20
 */
gboolean
gst_video_decoder_get_subframe_mode (GstVideoDecoder * decoder)
{
  return decoder->priv->subframe_mode;
}
5064
5065
/**
5066
 * gst_video_decoder_get_input_subframe_index:
5067
 * @decoder: a #GstVideoDecoder
5068
 * @frame: (transfer none): the #GstVideoCodecFrame to update
5069
 *
5070
 * Queries the number of the last subframe received by
5071
 * the decoder baseclass in the @frame.
5072
 *
5073
 * Returns: the current subframe index received in subframe mode, 1 otherwise.
5074
 *
5075
 * Since: 1.20
5076
 */
5077
guint
5078
gst_video_decoder_get_input_subframe_index (GstVideoDecoder * decoder,
5079
    GstVideoCodecFrame * frame)
5080
0
{
5081
0
  if (gst_video_decoder_get_subframe_mode (decoder))
5082
0
    return frame->abidata.ABI.num_subframes;
5083
0
  return 1;
5084
0
}
5085
5086
/**
5087
 * gst_video_decoder_get_processed_subframe_index:
5088
 * @decoder: a #GstVideoDecoder
5089
 * @frame: (transfer none): the #GstVideoCodecFrame to update
5090
 *
5091
 * Queries the number of subframes in the frame processed by
5092
 * the decoder baseclass.
5093
 *
5094
 * Returns: the current subframe processed received in subframe mode.
5095
 *
5096
 * Since: 1.20
5097
 */
5098
guint
5099
gst_video_decoder_get_processed_subframe_index (GstVideoDecoder * decoder,
5100
    GstVideoCodecFrame * frame)
5101
0
{
5102
0
  return frame->abidata.ABI.subframes_processed;
5103
0
}
5104
5105
/**
5106
 * gst_video_decoder_set_estimate_rate:
5107
 * @dec: a #GstVideoDecoder
5108
 * @enabled: whether to enable byte to time conversion
5109
 *
5110
 * Allows baseclass to perform byte to time estimated conversion.
5111
 */
5112
void
5113
gst_video_decoder_set_estimate_rate (GstVideoDecoder * dec, gboolean enabled)
5114
0
{
5115
0
  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
5116
5117
0
  dec->priv->do_estimate_rate = enabled;
5118
0
}
5119
5120
/**
5121
 * gst_video_decoder_get_estimate_rate:
5122
 * @dec: a #GstVideoDecoder
5123
 *
5124
 * Returns: currently configured byte to time conversion setting
5125
 */
5126
gboolean
5127
gst_video_decoder_get_estimate_rate (GstVideoDecoder * dec)
5128
0
{
5129
0
  g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), 0);
5130
5131
0
  return dec->priv->do_estimate_rate;
5132
0
}
5133
5134
/**
5135
 * gst_video_decoder_set_latency:
5136
 * @decoder: a #GstVideoDecoder
5137
 * @min_latency: minimum latency
5138
 * @max_latency: maximum latency
5139
 *
5140
 * Lets #GstVideoDecoder sub-classes tell the baseclass what the decoder latency
5141
 * is. If the provided values changed from previously provided ones, this will
5142
 * also post a LATENCY message on the bus so the pipeline can reconfigure its
5143
 * global latency.
5144
 */
5145
void
5146
gst_video_decoder_set_latency (GstVideoDecoder * decoder,
5147
    GstClockTime min_latency, GstClockTime max_latency)
5148
0
{
5149
0
  gboolean post_message = FALSE;
5150
0
  g_return_if_fail (GST_CLOCK_TIME_IS_VALID (min_latency));
5151
0
  g_return_if_fail (max_latency >= min_latency);
5152
5153
0
  GST_DEBUG_OBJECT (decoder,
5154
0
      "min_latency:%" GST_TIME_FORMAT " max_latency:%" GST_TIME_FORMAT,
5155
0
      GST_TIME_ARGS (min_latency), GST_TIME_ARGS (max_latency));
5156
5157
0
  GST_OBJECT_LOCK (decoder);
5158
0
  if (decoder->priv->min_latency != min_latency) {
5159
0
    decoder->priv->min_latency = min_latency;
5160
0
    post_message = TRUE;
5161
0
  }
5162
0
  if (decoder->priv->max_latency != max_latency) {
5163
0
    decoder->priv->max_latency = max_latency;
5164
0
    post_message = TRUE;
5165
0
  }
5166
0
  if (!decoder->priv->posted_latency_msg) {
5167
0
    decoder->priv->posted_latency_msg = TRUE;
5168
0
    post_message = TRUE;
5169
0
  }
5170
0
  GST_OBJECT_UNLOCK (decoder);
5171
5172
0
  if (post_message)
5173
0
    gst_element_post_message (GST_ELEMENT_CAST (decoder),
5174
0
        gst_message_new_latency (GST_OBJECT_CAST (decoder)));
5175
0
}
/**
 * gst_video_decoder_get_latency:
 * @decoder: a #GstVideoDecoder
 * @min_latency: (out) (optional): address of variable in which to store the
 *     configured minimum latency, or %NULL
 * @max_latency: (out) (optional): address of variable in which to store the
 *     configured maximum latency, or %NULL
 *
 * Query the configured decoder latency. Results will be returned via
 * @min_latency and @max_latency.
 */
void
gst_video_decoder_get_latency (GstVideoDecoder * decoder,
    GstClockTime * min_latency, GstClockTime * max_latency)
{
  GST_OBJECT_LOCK (decoder);
  if (min_latency)
    *min_latency = decoder->priv->min_latency;
  if (max_latency)
    *max_latency = decoder->priv->max_latency;
  GST_OBJECT_UNLOCK (decoder);
}

/**
 * gst_video_decoder_merge_tags:
 * @decoder: a #GstVideoDecoder
 * @tags: (nullable): a #GstTagList to merge, or NULL to unset
 *     previously-set tags
 * @mode: the #GstTagMergeMode to use, usually #GST_TAG_MERGE_REPLACE
 *
 * Sets the video decoder tags and how they should be merged with any
 * upstream stream tags. This will override any tags previously set
 * with gst_video_decoder_merge_tags().
 *
 * Note that this is provided for convenience, and the subclass is
 * not required to use this and can still do tag handling on its own.
 *
 * MT safe.
 */
void
gst_video_decoder_merge_tags (GstVideoDecoder * decoder,
    const GstTagList * tags, GstTagMergeMode mode)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));
  g_return_if_fail (tags == NULL || GST_IS_TAG_LIST (tags));
  g_return_if_fail (tags == NULL || mode != GST_TAG_MERGE_UNDEFINED);

  GST_VIDEO_DECODER_STREAM_LOCK (decoder);
  if (decoder->priv->tags != tags) {
    if (decoder->priv->tags) {
      gst_tag_list_unref (decoder->priv->tags);
      decoder->priv->tags = NULL;
      decoder->priv->tags_merge_mode = GST_TAG_MERGE_APPEND;
    }
    if (tags) {
      decoder->priv->tags = gst_tag_list_ref ((GstTagList *) tags);
      decoder->priv->tags_merge_mode = mode;
    }

    GST_DEBUG_OBJECT (decoder, "set decoder tags to %" GST_PTR_FORMAT, tags);
    decoder->priv->tags_changed = TRUE;
  }
  GST_VIDEO_DECODER_STREAM_UNLOCK (decoder);
}
/**
 * gst_video_decoder_get_buffer_pool:
 * @decoder: a #GstVideoDecoder
 *
 * Returns: (transfer full) (nullable): the instance of the #GstBufferPool used
 * by the decoder; unref it after use
 */
GstBufferPool *
gst_video_decoder_get_buffer_pool (GstVideoDecoder * decoder)
{
  g_return_val_if_fail (GST_IS_VIDEO_DECODER (decoder), NULL);

  if (decoder->priv->pool)
    return gst_object_ref (decoder->priv->pool);

  return NULL;
}

/**
 * gst_video_decoder_get_allocator:
 * @decoder: a #GstVideoDecoder
 * @allocator: (out) (optional) (nullable) (transfer full): the #GstAllocator
 * used
 * @params: (out) (optional) (transfer full): the
 * #GstAllocationParams of @allocator
 *
 * Lets #GstVideoDecoder sub-classes know the memory @allocator
 * used by the base class and its @params.
 *
 * Unref the @allocator after use.
 */
void
gst_video_decoder_get_allocator (GstVideoDecoder * decoder,
    GstAllocator ** allocator, GstAllocationParams * params)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (decoder));

  if (allocator)
    *allocator = decoder->priv->allocator ?
        gst_object_ref (decoder->priv->allocator) : NULL;

  if (params)
    *params = decoder->priv->params;
}

/**
 * gst_video_decoder_set_use_default_pad_acceptcaps:
 * @decoder: a #GstVideoDecoder
 * @use: if the default pad accept-caps query handling should be used
 *
 * Lets #GstVideoDecoder sub-classes decide if they want the sink pad
 * to use the default pad query handler to reply to accept-caps queries.
 *
 * By setting this to true it is possible to further customize the default
 * handler with %GST_PAD_SET_ACCEPT_INTERSECT and
 * %GST_PAD_SET_ACCEPT_TEMPLATE
 *
 * Since: 1.6
 */
void
gst_video_decoder_set_use_default_pad_acceptcaps (GstVideoDecoder * decoder,
    gboolean use)
{
  decoder->priv->use_default_pad_acceptcaps = use;
}

static void
gst_video_decoder_request_sync_point_internal (GstVideoDecoder * dec,
    GstClockTime deadline, GstVideoDecoderRequestSyncPointFlags flags)
{
  GstEvent *fku = NULL;
  GstVideoDecoderPrivate *priv;

  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));

  priv = dec->priv;

  GST_OBJECT_LOCK (dec);

  /* Check if we're allowed to send a new force-keyunit event.
   * frame->deadline is set to the running time of the PTS. */
  if (priv->min_force_key_unit_interval == 0 ||
      deadline == GST_CLOCK_TIME_NONE ||
      (priv->min_force_key_unit_interval != GST_CLOCK_TIME_NONE &&
          (priv->last_force_key_unit_time == GST_CLOCK_TIME_NONE
              || (priv->last_force_key_unit_time +
                  priv->min_force_key_unit_interval <= deadline)))) {
    GST_DEBUG_OBJECT (dec,
        "Requesting a new key-unit for frame with deadline %" GST_TIME_FORMAT,
        GST_TIME_ARGS (deadline));
    fku =
        gst_video_event_new_upstream_force_key_unit (GST_CLOCK_TIME_NONE, FALSE,
        0);
    priv->last_force_key_unit_time = deadline;
  } else {
    GST_DEBUG_OBJECT (dec,
        "Can't request a new key-unit for frame with deadline %"
        GST_TIME_FORMAT, GST_TIME_ARGS (deadline));
  }
  priv->request_sync_point_flags |= flags;
  /* We don't know yet the frame number of the sync point so set it to a
   * frame number higher than any allowed frame number */
  priv->request_sync_point_frame_number = REQUEST_SYNC_POINT_PENDING;
  GST_OBJECT_UNLOCK (dec);

  if (fku)
    gst_pad_push_event (dec->sinkpad, fku);
}
/**
 * gst_video_decoder_request_sync_point:
 * @dec: a #GstVideoDecoder
 * @frame: a #GstVideoCodecFrame
 * @flags: #GstVideoDecoderRequestSyncPointFlags
 *
 * Allows the #GstVideoDecoder subclass to request from the base class that
 * a new sync point should be requested from upstream, and that @frame was the
 * frame when the subclass noticed that a new sync point is required. A reason
 * for the subclass to do this could be missing reference frames, for example.
 *
 * The base class will then request a new sync point from upstream as long as
 * the time that passed since the last one exceeds
 * #GstVideoDecoder:min-force-key-unit-interval.
 *
 * The subclass can signal via @flags how the frames until the next sync point
 * should be handled:
 *
 *   * If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_DISCARD_INPUT is selected then
 *     all following input frames until the next sync point are discarded.
 *     This can be useful if the lack of a sync point will prevent all further
 *     decoding and the decoder implementation is not very robust in handling
 *     missing reference frames.
 *   * If %GST_VIDEO_DECODER_REQUEST_SYNC_POINT_CORRUPT_OUTPUT is selected
 *     then all output frames following @frame are marked as corrupted via
 *     %GST_BUFFER_FLAG_CORRUPTED. Corrupted frames can be automatically
 *     dropped by the base class, see #GstVideoDecoder:discard-corrupted-frames.
 *     Subclasses can manually mark frames as corrupted via %GST_VIDEO_CODEC_FRAME_FLAG_CORRUPTED
 *     before calling gst_video_decoder_finish_frame().
 *
 * Since: 1.20
 */
void
gst_video_decoder_request_sync_point (GstVideoDecoder * dec,
    GstVideoCodecFrame * frame, GstVideoDecoderRequestSyncPointFlags flags)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));
  g_return_if_fail (frame != NULL);

  gst_video_decoder_request_sync_point_internal (dec, frame->deadline, flags);
}

/**
 * gst_video_decoder_set_needs_sync_point:
 * @dec: a #GstVideoDecoder
 * @enabled: new state
 *
 * Configures whether the decoder requires a sync point before it starts
 * outputting data in the beginning. If enabled, the base class will discard
 * all non-sync point frames in the beginning and after a flush and will not
 * pass them to the subclass.
 *
 * If the first frame is not a sync point, the base class will request a sync
 * point via the force-key-unit event.
 *
 * Since: 1.20
 */
void
gst_video_decoder_set_needs_sync_point (GstVideoDecoder * dec, gboolean enabled)
{
  g_return_if_fail (GST_IS_VIDEO_DECODER (dec));

  dec->priv->needs_sync_point = enabled;
}
/**
 * gst_video_decoder_get_needs_sync_point:
 * @dec: a #GstVideoDecoder
 *
 * Queries if the decoder requires a sync point before it starts outputting
 * data in the beginning.
 *
 * Returns: %TRUE if a sync point is required in the beginning.
 *
 * Since: 1.20
 */
gboolean
gst_video_decoder_get_needs_sync_point (GstVideoDecoder * dec)
{
  gboolean result;

  g_return_val_if_fail (GST_IS_VIDEO_DECODER (dec), FALSE);

  result = dec->priv->needs_sync_point;

  return result;
}