Changed to use memset for initialization of non-trivial structures where
initialization with {0} caused warnings. Also removed {0} initializations
where appropriate initialization calls are already in place.
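For illustration, a minimal sketch of the pattern; the struct here is
hypothetical, not one touched by this commit:

  #include <string.h>

  struct frame_stats {
    double psnr[4];
    unsigned int count;
  };

  struct frame_stats stats;
  /* '= {0}' drew missing-field-initializer style warnings on some
   * compilers for structs like this; memset zeroes it without warning. */
  memset(&stats, 0, sizeof(stats));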
Change-Id: Ifd03e34aa80688e382124eb889c0fc1ec43c48e6
New name: vpx_temporal_svc_encoder.c
Also, update comment to note that example supports VP8 and VP9.
Change-Id: I6fffab81296f918ebca740192a5c609593852dff
1. Save stats for each spatial layer
2. Add frame buffer management for svc first pass rc
3. Set default spatial layer to 1
4. Flush encoder at the end of stream in test app
This only supports spatial svc.
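A minimal sketch of the end-of-stream flush from item 4, using the stock
libvpx encoder API (the codec and pts variables are assumed to come from
the surrounding test app):

  int got_data;
  do {
    vpx_codec_iter_t iter = NULL;
    const vpx_codec_cx_pkt_t *pkt;
    got_data = 0;
    /* A NULL image tells the encoder to drain its internal queue. */
    vpx_codec_encode(&codec, NULL, pts, 1, 0, VPX_DL_GOOD_QUALITY);
    while ((pkt = vpx_codec_get_cx_data(&codec, &iter)) != NULL) {
      got_data = 1;
      /* write pkt->data.frame.buf / pkt->data.twopass_stats here */
    }
  } while (got_data);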
Change-Id: Ia89cfa87bb6394e6c0405b921d86c426d0a0c9ae
The only difference between the two examples was the use of the
VPX_EFLAG_FORCE_KF flag for frame encoding. Moving this functionality into
simple_encoder with an additional command-line option.
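A sketch of how the new option might be applied in simple_encoder
(keyframe_interval is an assumed name for the parsed command-line value):

  vpx_enc_frame_flags_t flags = 0;
  if (keyframe_interval > 0 && frame_count % keyframe_interval == 0)
    flags |= VPX_EFLAG_FORCE_KF;   /* force this frame to be a keyframe */
  vpx_codec_encode(&codec, &raw, frame_count, 1, flags, VPX_DL_GOOD_QUALITY);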
Change-Id: Ia3c4209be073eeb541d4ac6b41bd0f12812f6676
2. Add read/write for RC stats file
The two-pass RC for svc does not work yet. This is just the first step;
further development is needed to make it work.
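For reference, the usual shape of stats read/write with the stock two-pass
API (stats_file, stats_buf and stats_size are assumed names; the
svc-specific per-layer bookkeeping is not shown):

  /* Pass 1: append every stats packet to the stats file. */
  if (pkt->kind == VPX_CODEC_STATS_PKT)
    fwrite(pkt->data.twopass_stats.buf, 1, pkt->data.twopass_stats.sz,
           stats_file);

  /* Pass 2: hand the collected stats back to the encoder. */
  cfg.g_pass = VPX_RC_LAST_PASS;
  cfg.rc_twopass_stats_in.buf = stats_buf;   /* whole stats file in memory */
  cfg.rc_twopass_stats_in.sz  = stats_size;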
Change-Id: I8ef0e177dff0b5ed3c97a916beea5123717cc6f2
The only difference between the two examples was the setting of the
g_error_resilient flag in the error-resilient example. Moving this
functionality into simple_encoder with an additional command-line option.
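The option boils down to one config field set before vpx_codec_enc_init();
a sketch (error_resilient_arg is an assumed name for the parsed
command-line value):

  /* 0 = off, nonzero = error-resilient mode (strtol needs <stdlib.h>). */
  cfg.g_error_resilient = (unsigned int)strtol(error_resilient_arg, NULL, 0);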
Change-Id: I0245793320125926e1bf208cc1e87aef39ca478d
The current setting was specific to the 1-layer case.
rc_target_bitrate is the total bitrate for the whole stream,
so set it to the ts_target_bitrate of the highest/top temporal layer.
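In terms of the encoder config, the change amounts to something like:

  /* Total stream bitrate = target of the highest temporal layer. */
  cfg.rc_target_bitrate = cfg.ts_target_bitrate[cfg.ts_number_layers - 1];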
Change-Id: I83de73364956fa21c0a7c971c9f390d4840457e6
Use unsigned int instead of uint64_t for duration and deadline
arguments to functions get_frame_stats() and encode_frame().
Change-Id: I1f26a7afc38ae89916b2c67415ced26fdc9d53e7
Only use the layered average size if number_temporal_layers > 1.
Also removed an unneeded commented-out line and changed some parameter
settings in vpx_temporal_scalable_patterns.c.
Change-Id: Ic86e43e7daf0313e8c5a4aba1497299158111955
This patch adds a buffer-based rate control for temporal layers,
under CBR mode.
Added the vpx_temporal_scalable_patterns.c encoder for testing temporal
layers, for both vp9 and vp8 (replaces the old vp8_scalable_patterns).
Updated datarate unittest with tests for temporal layer rate-targeting.
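For reference, a typical 2-layer CBR configuration of the kind exercised by
the new example and tests (the numbers are illustrative only):

  cfg.rc_end_usage         = VPX_CBR;
  cfg.ts_number_layers     = 2;
  cfg.ts_periodicity       = 2;     /* layer pattern repeats every 2 frames */
  cfg.ts_layer_id[0]       = 0;     /* 0, 1, 0, 1, ... */
  cfg.ts_layer_id[1]       = 1;
  cfg.ts_rate_decimator[0] = 2;     /* base layer codes every 2nd frame */
  cfg.ts_rate_decimator[1] = 1;
  cfg.ts_target_bitrate[0] = 400;   /* kbps, cumulative per layer */
  cfg.ts_target_bitrate[1] = 600;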
Change-Id: I8900a854288b9354d9c697cfeb0243a9fd6790b1
Right now only the IVF format is supported, which is enough for the example
code. Other formats such as y4m, webm, and raw yuv will be supported later.
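For context, IVF is simple enough to parse by hand: a 32-byte file header
followed by frames, each prefixed by a 12-byte header (4-byte little-endian
payload size, then an 8-byte timestamp). A sketch of reading one frame
(infile and frame_buf are assumed names):

  unsigned char hdr[12];
  if (fread(hdr, 1, 12, infile) != 12) return 0;            /* end of file */
  size_t frame_size = (size_t)hdr[0] | ((size_t)hdr[1] << 8) |
                      ((size_t)hdr[2] << 16) | ((size_t)hdr[3] << 24);
  if (fread(frame_buf, 1, frame_size, infile) != frame_size) return 0;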
Change-Id: I34c6f20731c1851947587ca5c589d7856b675164
The public typedef already includes a const; this quiets
'same type qualifier used more than once' warnings.
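The shape of the warning, with a hypothetical typedef for illustration:

  typedef const struct codec_iface codec_iface_t;  /* const is baked in */

  const codec_iface_t *p;  /* warning: same type qualifier used twice */
  codec_iface_t *q;        /* fine: already points to const data */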
Change-Id: Ib118b3b116fba59d4c6ead84d85b26e5d3ed363d
The new API is supposed to be used from example code. The current
implementation only supports IVF containers (it will be extended to Y4M).
Change-Id: Ib7da87237690b1a28297bdf03bc41c6836a84b7e
Various fixups to resolve issues when building vp9-preview under the more stringent
checks placed on the experimental branch.
Change-Id: I21749de83552e1e75c799003f849e6a0f1a35b07
Creates a merge between the master and experimental branches. Fixes a
number of conflicts in the build system to allow *either* VP8 or VP9
to be built. Specifically either:
  $ configure --disable-vp9
  $ configure --disable-vp8 --disable-unit-tests
VP9 still exports its symbols and files as VP8, so that will be
resolved in the next commit.
Unit tests are broken in VP9, but this isn't a new issue. They are
fixed upstream on origin/experimental as of this writing, but rebasing
this merge proved difficult, so will tackle that in a second merge
commit.
Change-Id: I2b7d852c18efd58d1ebc621b8041fe0260442c21
This is a code snapshot of experimental work currently ongoing for a
next-generation codec.
The codebase has been cut down considerably from the libvpx baseline.
For example, we are currently only supporting VBR 2-pass rate control
and have removed most of the code relating to coding speed, threading,
error resilience, partitions and various other features. This is in
part to make the codebase easier to work on and experiment with, but
also because we want to have an open discussion about how the bitstream
will be structured and partitioned and not have that conversation
constrained by past work.
Our basic working pattern has been to initially encapsulate experiments
using configure options linked to #if CONFIG_XXX statements in the
code. Once experiments have matured and we are reasonably happy that
they give benefit and can be merged without breaking other experiments,
we remove the conditional compile statements and merge them in.
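A sketch of that pattern (CONFIG_NEW_EXPERIMENT and the two functions are
placeholder names; the real CONFIG_ macros are defined to 0 or 1 by the
configure script):

  #if CONFIG_NEW_EXPERIMENT
    residual = experimental_transform(block);
  #else
    residual = baseline_transform(block);
  #endif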
Current changes include:
* Temporal coding experiment for segments (though still only 4 max, it
will likely be increased).
* Segment feature experiment - to allow various bits of information to
be coded at the segment level. Features tested so far include mode
and reference frame information, limiting end of block offset and
transform size, alongside Q and loop filter parameters, but this set
is very fluid.
* Support for 8x8 transform - 8x8 dct with 2nd order 2x2 haar is used
in MBs using 16x16 prediction modes within inter frames.
* Compound prediction (combination of signals from existing predictors
to create a new predictor).
* 8 tap interpolation filters and 1/8th pel motion vectors.
* Loop filter modifications.
* Various entropy modifications and changes to how entropy contexts and
updates are handled.
* Extended quantizer range matched to transform precision improvements.
There are also ongoing further experiments that we hope to merge in the
near future: For example, coding of motion and other aspects of the
prediction signal to better support larger image formats, use of larger
block sizes (e.g. 32x32 and up) and lossless non-transform based coding
options (especially for key frames). It is our hope that we will be
able to make regular updates and we will warmly welcome community
contributions.
Please be warned that, at this stage, the codebase is currently slower
than the VP8 stable branch, as most new code has not been optimized and
even the C code has been deliberately written to be simple and obvious
rather than fast.
The following graphs show the initial test results; the numbers in the
tables measure the compression improvement as a percentage. The
build has the following optional experiments configured:
--enable-experimental --enable-enhanced_interp --enable-uvintra
--enable-high_precision_mv --enable-sixteenth_subpel_uv
CIF Size clips:
http://getwebm.org/tmp/cif/
HD size clips:
http://getwebm.org/tmp/hd/
(stable_20120309 represents the encoding results of the WebM master branch
build as of commit #7a15907)
They were encoded using the following encode parameters:
--good --cpu-used=0 -t 0 --lag-in-frames=25 --min-q=0 --max-q=63
--end-usage=0 --auto-alt-ref=1 -p 2 --pass=2 --kf-max-dist=9999
--kf-min-dist=0 --drop-frame=0 --static-thresh=0 --bias-pct=50
--minsection-pct=0 --maxsection-pct=800 --sharpness=0
--arnr-maxframes=7 --arnr-strength=3 (for HD; 6 for CIF)
--arnr-type=3
Change-Id: I5c62ed09cfff5815a2bb34e7820d6a810c23183c
Adds a multiframe postprocessing module to enhance the quality of
certain frames that are coded at lower quality than preceding frames.
The module can be invoked from the command line with the --mfqe
option, and will be most beneficial for enhancing the quality of
frames decoded using scalable patterns.
Uses the vp8_variance_var16x16 and vp8_variance_sad16x16 function
pointers to compute the SAD and variance of blocks.
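A sketch of how an application might request this post-processing through
the vp8 postproc control (assuming the VP8_MFQE postproc flag exported
alongside this change, and a decoder context named decoder):

  vp8_postproc_cfg_t pp;
  pp.post_proc_flag   = VP8_DEBLOCK | VP8_MFQE;
  pp.deblocking_level = 4;
  pp.noise_level      = 0;
  vpx_codec_control(&decoder, VP8_SET_POSTPROC, &pp);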
Change-Id: Id73d2a6e3572d07f9f8e36bbce00a4fc5ffd8961
Added the ability to create rate-targeted, temporally
scalable, VP8 compatible bitstreams.
The application vp8_scalable_patterns.c demonstrates how
to use this capability. Users can create output bitstreams
containing up to 5 temporally separable streams encoded
as a single VP8 bitstream.
(previously abandoned as:
I92d1483e887adb274d07ce9e567e4d0314881b0a)
Change-Id: I156250a3fe930be57c069d508c41b6a7a4ea8d6a
Also includes a couple of error concealment bug fixes:
- the segment_id wasn't properly initialized when missing
- when interpolating and no neighbors are found, set the estimated motion
  vector to zero
- clear the qcoef buffer when concealing an MB
Change-Id: Id79c876b41d78b559a2241e9cd0fd2cae6198f49
The error-concealer is plugged in after any motion vectors have been
decoded. It tries to estimate any missing motion vectors from the
motion vectors of the previous frame. Intra blocks with missing
residual are replaced with inter blocks with estimated motion vectors.
This feature was developed in a separate sandbox
(sandbox/holmer/error-concealment).
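A sketch of enabling the concealer from application code, assuming a VP8
decoder context:

  vpx_codec_ctx_t decoder;
  vpx_codec_dec_cfg_t cfg;
  cfg.threads = 1;
  cfg.w = 0;    /* dimensions are taken from the stream */
  cfg.h = 0;
  vpx_codec_dec_init(&decoder, vpx_codec_vp8_dx(), &cfg,
                     VPX_CODEC_USE_ERROR_CONCEALMENT);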
Change-Id: I5c8917b031078d79dbafd90f6006680e84a23412