As we are no longer able to sort the candidate
mvrefs in both the encoder and the decoder, and given
that the cost of explicit signalling has proved
prohibitive, it no longer makes sense to find more
than 2 candidates.
This patch:
- Modifies and simplifies add_candidate_mv().
- Removes the forced addition of a 0 vector in the
  MAX_MV_REF_CANDIDATES-1 position (in preparation
  for reducing MAX_MV_REF_CANDIDATES to 2).
- Re-orders the addition of candidates slightly. This
  actually gives small gains (circa 0.2% on std-hd).
A subsequent patch will remove the NEW_MVREF experiment,
reduce MAX_MV_REF_CANDIDATES to 2 and remove the distance
weights, as these are now implicit in the candidate order.
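As an illustration, the simplified insertion could look roughly like the
sketch below (hypothetical signature; the real add_candidate_mv() takes
additional arguments that this omits). With only 2 candidates and no
sorting, insertion order itself encodes priority:

    #include <stdint.h>

    #define MAX_MV_REF_CANDIDATES 2

    typedef struct { int16_t row, col; } MV;

    /* Append mv to the candidate list unless it duplicates an existing
     * entry; insertion order now encodes priority, so no weights or
     * sorting are needed. */
    static void add_candidate_mv(MV *list, int *count, MV mv) {
      int i;
      for (i = 0; i < *count; ++i)
        if (list[i].row == mv.row && list[i].col == mv.col)
          return;  /* already present */
      if (*count < MAX_MV_REF_CANDIDATES)
        list[(*count)++] = mv;
    }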
Change-Id: I3dbe1a6f8a1a18b3c108257069c22a1141a207a4
Adjustments take heavier account of the frames near a kf
in deciding boost, and limit the total number of frames that can contribute.
Also adjusted the minq calculations such that in most cases we
generate a smaller key frame.
Modified the code that accounts for how static the sequence is and
added some adjustment based on image size. This is still very
crude, but smaller images tend to behave better with a larger
delta between KF Q and other frames than larger image formats do.
Changes give sizable gains in overall PSNR on all the test sets but the
biggest gains (~3%) were on the std-hd set.
The gains were smaller for SSIM but still significant.
Average PSNR results are mixed because this metric can very easily
be skewed by very good / lossless coding of one or two frames.
Some of the YT and YT-HD clips in particular have blank lead ins and
allowing lossless coding of these appears to make a big difference to
average PSNR, but in reality does not help much at all.
Change-Id: I6bfe485a1d330b47c783832f1717c95c535464ec
Consider the previous behavior for the MV 1 3/8 (11/8 pel). In the
existing code, the fractional part of the MV is considered separately,
and rounding is applied, giving a result of 6/8. Rounding is not required
in this case, as we're increasing the precision from a q3 to a q4, and
the correct value 11/16 can be represented exactly.
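A toy program walking through that arithmetic for a halved MV converted
from q3 to q4 (the split-and-round path is a reconstruction for
illustration; the exact legacy rounding may have differed):

    #include <stdio.h>

    int main(void) {
      const int mv_q3 = 11;            /* 1 3/8 pel == 11/8 pel */
      /* Reconstructed old behavior: halve the integer and fractional
       * parts separately, rounding the fraction back to q3 units. */
      const int int_pel = mv_q3 >> 3;  /* 1 pel */
      const int frac_q3 = mv_q3 & 7;   /* 3/8 pel */
      const int old_q4 =
          (((int_pel << 3) >> 1) + ((frac_q3 + 1) >> 1)) << 1;
      /* New behavior: halving a q3 value while moving to q4 is exact -
       * the same integer now simply denotes sixteenths. */
      const int new_q4 = mv_q3;
      printf("old: %d/16, exact: %d/16\n", old_q4, new_q4); /* 12 vs 11 */
      return 0;
    }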
Slight gain observed (+0.033 average on derf).
Change-Id: I320e160e8b12f1dd66aa0ce7966b5088870fe9f8
This commit converts the luma versions of vp9_build_inter_predictors_sb
to use a common function, updates the convolution functions to support
block sizes larger than 16x16, and adds a foreach_predicted_block walker.
Next step will be to calculate the UV motion vector and implement SBUV,
then fold in vp9_build_inter16x16_predictors_mb and SPLITMV.
At the 16x16, 32x32, and 64x64 levels implemented in this commit, each
plane is predicted with only a single call to vp9_build_inter_predictor.
This is not yet called for SPLITMV. If the notion of SPLITMV/I8X8/I4X4
goes away, then the prediction block walker can go away, since we'll
always predict the whole bsize in a single step. Implemented using a
block walker at this stage for SPLITMV, as a 4x4 "prediction block size"
within the BLOCK_SIZE_MB16X16 macroblock. It would also support other
rectangular sizes, if the blocks smaller than 16x16 remain
implemented as a SPLITMV-like thing. Just using 4x4 for now.
There's also a potential to combine with the foreach_transformed_block
walker if the logic for calculating the size of the subsampled
transform is made more straightforward, perhaps as a consequence of
supporting smaller macroblocks than 16x16. Will watch what happens there.
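For illustration, the walker's shape might be along these lines (names
and signature are illustrative, not the exact API added here):

    /* Visit every prediction block within a bw x bh block. Whole-bsize
     * prediction passes pred_w == bw and pred_h == bh, so the visitor
     * runs once; a SPLITMV-like mode would pass 4x4 steps. */
    typedef void (*predicted_block_visitor)(int row, int col,
                                            int pred_w, int pred_h,
                                            void *arg);

    static void foreach_predicted_block(int bw, int bh,
                                        int pred_w, int pred_h,
                                        predicted_block_visitor visit,
                                        void *arg) {
      int r, c;
      for (r = 0; r < bh; r += pred_h)
        for (c = 0; c < bw; c += pred_w)
          visit(r, c, pred_w, pred_h, arg);
    }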
Change-Id: Iddd9973398542216601b630c628b9b7fdee33fe2
Use in-place buffers (dst of MACROBLOCKD) for macroblock prediction.
This makes the macroblock buffer handling consistent with that of
superblocks, and removes the predictor buffer from MACROBLOCKD.
Change-Id: Id1bcd898961097b1e6230c10f0130753a59fc6df
Moving all the probability updates after frame context selection.
This makes it clean and simple to store all the probs in a single
struct that can be sent to a hardware codec.
Change-Id: I2ec3de81adbd468d8ef34a914caae80a18c3ef56
Adds RD integration for 32x16, 16x32, 64x32 and 32x64 rectangular blocks.
Derf almost +0.6%, HD a little over +1.0%, STDHD +1.3%.
Change-Id: Id651fdb6a655fdbb5c47009757e63317acfb88a5
Enable recursive partition information coding from SB64X64 down to
MB16X16. The bit-stream syntax now supports rectangular block
sizes. It starts from SB64X64 and recursively describes the partition
type of the current block. If the partition type is PARTITION_NONE,
the block is coded as a single unit; if it is PARTITION_HORZ or
PARTITION_VERT, the block is segmented into two independently coded
rectangular units, with no further partitioning; otherwise (the
PARTITION_SPLIT case), the block is segmented into 4 square blocks,
each of which can potentially be further partitioned.
Forward adaptive probability modeling is used for the partition
information coding, conditioned on the current block size.
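The recursion can be pictured roughly as follows (a decoder-side sketch
with hypothetical helpers, not the actual bitstream reader):

    typedef enum {
      PARTITION_NONE, PARTITION_HORZ, PARTITION_VERT, PARTITION_SPLIT
    } PARTITION_TYPE;

    extern PARTITION_TYPE read_partition_type(int size);      /* hypothetical */
    extern void decode_block(int row, int col, int w, int h); /* hypothetical */

    /* Decode the partition of a size x size block, recursing on
     * PARTITION_SPLIT; the syntax bottoms out at the 16x16 level. */
    static void decode_partition(int row, int col, int size) {
      PARTITION_TYPE p;
      if (size <= 16) {  /* MB16X16 floor: no further partition syntax */
        decode_block(row, col, size, size);
        return;
      }
      p = read_partition_type(size);  /* forward-adapted, conditioned on size */
      switch (p) {
        case PARTITION_NONE:
          decode_block(row, col, size, size);
          break;
        case PARTITION_HORZ:
          decode_block(row, col, size, size / 2);
          decode_block(row + size / 2, col, size, size / 2);
          break;
        case PARTITION_VERT:
          decode_block(row, col, size / 2, size);
          decode_block(row, col + size / 2, size / 2, size);
          break;
        case PARTITION_SPLIT: {
          const int half = size / 2;
          decode_partition(row, col, half);
          decode_partition(row, col + half, half);
          decode_partition(row + half, col, half);
          decode_partition(row + half, col + half, half);
          break;
        }
      }
    }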
Change-Id: I499365fb547839d555498e3bcc0387d8a3587d87
This function is now called from the code that configures
the ARNR filter, so it belongs with the other temporal
filter functions.
Change-Id: I64211875918364b5b8edfb97743e573c6def1663
Normalization of the frame boost value was being done
when it reached the value 1028. The intention was to
keep it within a 10-bit range, so it should have been
clipped above 1023.
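In other words (illustrative):

    /* 10 bits -> maximum representable value is (1 << 10) - 1 = 1023. */
    static int clip_frame_boost(int boost) {
      return boost > 1023 ? 1023 : boost;
    }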
Change-Id: I0afdddc1d2eb9e7822ec4578903cbe6ec0b33b91
This flag was added to VP8 to allow a mode where MB-level skipping
was not allowed, saving a bit per mb. It was never used in practice,
and hasn't been tested in VP9, so remove it.
Change-Id: Id450ec6904c6d06c1919508e7efc52d05cde5631
Static threshold results slightly up (+0.1% on derf), probably because
we now take the filter (sharp/lowpass) into account for the breakout
decision.
Change-Id: I9f597601da434205142afd05f32690e7ba8fd690
This is work in progress; it implements multiple ARF
encoding behind an experimental flag.
It adds the ability to insert multiple ARF frames into a
single ARF group. This patch implements the reordering
of the coded frames, and implements a fixed-length coding
pattern. It applies a fixed quantizer strategy based on
where the frame is in the coding sequence.
Further work to modify the rate control strategy is
ongoing and will be submitted via a set of future patches.
In this first step, each ARF group is recursively
bisected and an ARF frame added at that position in the
sequence. The recursion continues until ARF frames are
within MIN_GF_INTERVAL frames.
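A sketch of that bisection (hypothetical helper; the MIN_GF_INTERVAL
value is assumed for illustration):

    #define MIN_GF_INTERVAL 4  /* assumed value for illustration */

    /* Recursively mark ARF positions by bisecting the frame interval
     * [start, end) until the sub-intervals are within MIN_GF_INTERVAL
     * frames. */
    static void place_arfs(int start, int end, int *is_arf) {
      const int mid = (start + end) / 2;
      if (end - start <= MIN_GF_INTERVAL)
        return;
      is_arf[mid] = 1;
      place_arfs(start, mid, is_arf);
      place_arfs(mid, end, is_arf);
    }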
The code sits behind the "multiple-arf" experimental
flag ("CONFIG_MULTIPLE_ARF"). The experimental flag
"oneshotq" ("CONFIG_ONESHOTQ") also needs to be enabled
for this patch to work correctly.
Change-Id: Ie473b05ebb43ac473c0cfb659b2b8042823085e2
Combine superblock inter predictors into a unified function that
allows configurable block width and height. The inter predictions
of block sizes smaller than 16x16 are handled differently; merging
them is deferred to a later patch.
Change-Id: I14075959dd5e221f00c205c99ca35c1c31ef728e
The probabilities derived from these statistics are used in bitstream
writing; therefore, we should only do this when we actually decide to
use macroblock coding (over superblock coding). Derf gains +0.15%.
Change-Id: I196814c070a7c79889590658ce10a6eb07454389
The intra predictor supports configurable block sizes. It can handle
intra prediction down to 4x4 sizes, when enabled in BLOCK_SIZE_TYPE.
Change-Id: I7399ec2512393aa98aadda9813ca0c83e19af854
Rename pick_mb_modes to pick_mb_mode, since it now handles only a
single macroblock. This is consistent with pick_sb_mode handling a
single non-macroblock.
Change-Id: I896fdfa06436b2d8c24d6474718cc74420df6b3b
This patch changes the default with the modecoefprob expt
to use mode-based forward updates with one-node pegged
modeling.
The maximum difference from fully trained tables is now
less than 0.1%.
Change-Id: I06b44322e10c6703f93f3c1d48d973b1136a0618
This patch uses the dest buffer instead of the
predictor buffer. This will allow us in future commits
to remove the extra mem copy that occurs in the dequant
functions when eob == 0. We should also be able to remove
extra params that are passed into the dequant functions.
Change-Id: I7241bc1ab797a430418b1f3a95b5476db7455f6a
More specifically, remove vp9_quantize_mb*, vp9_optimize_mb*,
vp9_inverse_transform_mb* and vp9_transform_mb*. Instead, use the
generic _sb* functions that take a size argument, and call them with
BLOCK_SIZE_MB16X16.
Change-Id: I33024afea95d3a23ffbc1df7da426e4645110f29
With these fixed, the codec produces identical results regardless of
what literal values are used for the enum members in BLOCK_SIZE_*.
Change-Id: I26db8e08019b58ba432af1f0950ebe6b0eb4ad8c
Merge various super_block_yrd and super_block_uvrd versions into one
common function that works for all sizes. Make transform size selection
size-agnostic also. This fixes a slight bug in the intra UV superblock
code where it used the wrong transform size for txsz > 8x8, and stores
the txsz selection for superblocks properly (instead of forgetting it).
Lastly, it removes the trellis search that was done for 16x16 intra
predictors, since trellis is relatively expensive and should thus only
be done after RD mode selection.
Gives basically identical results on derf (+0.009%).
Change-Id: If4485c6f0a0fe4038b3172f7a238477c35a6f8d3
The strategy to run fast loop filter picking for encoder speed-up
should be revisited at a later stage.
Change-Id: I3b75e06d767cff41be952a42e63b3292f4eab996
Merge sb32x32 and sb64x64 functions; allow for rectangular sizes. Code
gives identical encoder results before and after. There are a few
macros for rectangular block sizes under the sbsegment experiment; this
experiment is not yet functional and should not yet be used.
Change-Id: I71f93b5d2a1596e99a6f01f29c3f0a456694d728
Clamp only the motion vectors inferred from neighboring reference
macroblocks. The motion vectors obtained through motion search in
NEWMV mode are constrained during the search process, which allows
a relatively larger referencing region than the inferred mvs.
Hence further clamping the best mv provided by the motion search may
affect the efficacy of NEWMV mode.
Synchronized the decoding process: the decoded mvs in NEWMV modes
should be guaranteed to fit in the effective range, so a mv range
clamping function is applied there for security purposes.
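Conceptually (an illustrative clamp, not the exact code paths):

    #include <stdint.h>

    typedef struct { int16_t row, col; } MV;

    /* Clamp only mvs inferred from neighboring blocks (e.g. NEARESTMV /
     * NEARMV). A NEWMV from motion search was already constrained during
     * the search, so re-clamping it can only shrink its effective range;
     * the decoder applies the matching range check to decoded NEWMV mvs
     * so both sides stay in sync. */
    static void clamp_mv(MV *mv, int min_col, int max_col,
                         int min_row, int max_row) {
      if (mv->col < min_col) mv->col = min_col;
      if (mv->col > max_col) mv->col = max_col;
      if (mv->row < min_row) mv->row = min_row;
      if (mv->row > max_row) mv->row = max_row;
    }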
This improves the coding performance of high motion sequences, e.g.,
derf set:
foreman 0.233%
husky 0.175%
icd 0.135%
mother_daughter 0.337%
pamphlet 0.561%
stdhd set:
blue_sky 0.408%
city 0.455%
Also saw sunflower go down by 0.469%.
Change-Id: I3fcbba669e56dab779857a8126a91b926e899cb5
Start grouping data per-plane, as part of refactoring to support
additional planes, and chroma planes with subsampling other than
4:2:0.
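For example, the per-plane grouping might look something like this
(field names are illustrative):

    #include <stdint.h>

    /* One entry per plane, so chroma with subsampling other than 4:2:0
     * only changes the per-plane parameters, not the surrounding code. */
    struct plane_data {
      uint8_t *dst;
      int stride;
      int subsampling_x;  /* log2 horizontal subsampling vs. luma */
      int subsampling_y;  /* log2 vertical subsampling vs. luma */
    };

    struct per_plane_macroblockd {
      struct plane_data plane[3];  /* Y, U, V */
    };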
Change-Id: Idb76a0e23ab239180c818025bae1f36f1608bb23
This function expects real Q values as inputs,
not index values.
Its usage here impacts the Q chosen for forced key
frames. Though this is a bug fix, I have not yet verified
whether the q multiplier value used following the fix is
correct.
Change-Id: I49f6da894d90baeb1e86c820c335f02dc80d3b66
Took vp9_setup_scale_factors_for_frame() out from
vp9_setup_interp_filters(), so that it is only called once per
frame instead of per macroblock. Decoder tests showed a 1.5%
performance gain.
Change-Id: I770cb09eb2140ab85132f82aed388ac0bdd3a0aa
Using clamp and MIN/MAX functions instead of plain C code. Lower case
variable names. Removing redundant parentheses.
Change-Id: Ibf7cc5fbe4fbdb5029049a599af71534176e6f42
We used to calculate SSIM only over the postproc buffer, whereas we
calculate PSNR for both. Compared to postproc-SSIM, this is about 0.3%
higher for derf, 1.4% lower for hd and 0.5% lower for stdhd, although
it is highly variable on a per-clip basis.
Change-Id: I8dd491f0f5b4201dedfb15d288c854d5d4caa10f
The patch adds the flexibility to use standard EOB based coding
on smaller block sizes and nzc based coding on larger block sizes.
The tx-sizes that use nzc based coding and those that use EOB based
coding are controlled by a function get_nzc_used().
By default, this function uses nzc based coding for 16x16 and 32x32
transform blocks, which seem to bridge the performance gap
substantially.
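The default policy is therefore a one-liner, roughly (sketch):

    typedef enum { TX_4X4, TX_8X8, TX_16X16, TX_32X32 } TX_SIZE;

    /* Default: nzc-based coding only for the larger transform sizes. */
    static int get_nzc_used(TX_SIZE tx_size) {
      return tx_size >= TX_16X16;
    }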
All sets are now lower by 0.5% to 0.7%, as opposed to ~1.8% before.
Change-Id: I06abed3df57b52d241ea1f51b0d571c71e38fd0b
This threshold effectively limits the amount of motion
from one end of a GF/ARF group to the other.
This patch makes the threshold depend on image size.
Change-Id: Id45d1d7bced815f86ddd037be53164894b00b82f
Almost all arguments for vp9_build_inter32x32_predictors_sb and
vp9_build_inter64x64_predictors_sb can be deduced from the first macroblock
argument.
Change-Id: I5d477a607586d05698d5b3b9b9bc03891dd3fe83
Adds an experiment to use a weighted prediction of two INTER
predictors, where the weight is one of (1/4, 3/4), (3/8, 5/8),
(1/2, 1/2), (5/8, 3/8) or (3/4, 1/4), and is chosen implicitly
based on the consistency of the predictors with the already
reconstructed pixels above and to the left of the current macroblock
or superblock.
Currently the weighting is not applied to SPLITMV modes, which
default to the usual (1/2, 1/2) weighting. However, the code for it
is in place, controlled by a macro. The same weighting is used for Y and
UV components, where the weight is derived from analyzing the Y
component only.
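A sketch of the implicit selection (illustrative; the border sampling
and error metric here are assumptions):

    #include <stdint.h>
    #include <limits.h>

    /* Score each candidate weight by how well the blended prediction
     * matches the already-reconstructed pixels along the top and left
     * borders, and keep the best. Weights for the first predictor are
     * in eighths: {2,3,4,5,6}/8, i.e. (1/4,3/4) ... (3/4,1/4). */
    static int choose_implicit_weight(const uint8_t *border,  /* recon */
                                      const uint8_t *pred0,
                                      const uint8_t *pred1, int n) {
      static const int w0[] = { 2, 3, 4, 5, 6 };
      int best_w = 4, best_err = INT_MAX, i, j;
      for (i = 0; i < 5; ++i) {
        int err = 0;
        for (j = 0; j < n; ++j) {
          const int p = (w0[i] * pred0[j] + (8 - w0[i]) * pred1[j] + 4) >> 3;
          const int d = p - border[j];
          err += d * d;
        }
        if (err < best_err) { best_err = err; best_w = w0[i]; }
      }
      return best_w;  /* weight/8 for predictor 0 */
    }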
Results (over the compound inter-intra experiment):
derf: +0.18%
yt: +0.34%
hd: +0.49%
stdhd: +0.23%
The experiment suggests bigger benefit for explicitly signaled weights.
Change-Id: I5438539ff4485c5752874cd1eb078ff14bf5235a
These are mostly just for experimental purposes. I saw small gains (in
the 0.1% range) when playing with this on derf.
Change-Id: Ib21eed477bbb46bddcd73b21c5c708a5b46abedc
Now that the first AC coefficients in both directions use the same DC
as their context, there is no longer any purpose in letting each have
its own band. Merging these two bands allows us to split bands for
some of the very high-frequency AC bands.
In addition, I'm redoing the banding for the 1D-ADST col/row scans. I
don't think the old banding made any sense at all (it merged the last
coefficient of the first row/col in the same band as the first two of
the second row/col), which was clearly an oversight from the band being
applied in scan-order (rather than in their actual position). Now,
coefficients at the same position will be in the same band, regardless
what scan order is used. I think this makes most sense for the purpose
of banding, which is basically "predict energy for this coefficient
depending on the energy of context coefficients" (i.e. pt).
After full re-training, together with previous patch, derf gains about
1.2-1.3%, and hd/stdhd gain about 0.9-1.0%.
Change-Id: I7a0cc12ba724e88b278034113cb4adaaebf87e0c
Pearson correlation for above or left is significantly higher than for
previous-in-scan-order (absolute values depend on position in scan, but
in general, we gain about 0.1-0.2 by using either above or left; using
both basically just makes this even better). For eob branch skipping,
we continue to use the previous token in scan order.
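Schematically (illustrative; the precise combination rule is a sketch):

    /* Derive the coefficient's entropy context from the tokens above
     * and to the left of its position, rather than from the previous
     * token in scan order. */
    static int get_coef_context(int above_token, int left_token) {
      return (above_token + left_token + 1) >> 1;  /* rounded average */
    }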
This helps about 0.9% on derf after re-training on a limited data set.
Full re-training and results on larger-resolution clips are pending.
Note that this commit breaks trellis, so we can probably get further
gains out of it by fixing trellis at some later point.
Change-Id: Iead68e296fc3a105cca746b5e3da9555d6010cfe
Lower case variable names, declaration and initialization on the same line,
removing redundant casts to double.
Change-Id: I7ea3905bed827aa6faac11a78401b85e448b57f9
Adds a per-frame, strength-adjustable, in-loop deringing filter. Uses
the existing vp9_post_proc_down_and_across 5 tap thresholded blur
code, with a brute force search for the threshold.
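The threshold search is plain brute force, along these lines
(illustrative; callback-based here to stay self-contained):

    #include <stdint.h>

    /* Try each dering level, keeping the one that minimizes distortion
     * against the source; level 0 means the filter is off. */
    static int pick_dering_level(
        const uint8_t *src, const uint8_t *recon, uint8_t *tmp, int n,
        int max_level,
        int64_t (*sse)(const uint8_t *a, const uint8_t *b, int n),
        void (*dering)(const uint8_t *in, uint8_t *out, int n, int level)) {
      int level, best_level = 0;
      int64_t best_err = sse(src, recon, n);
      for (level = 1; level <= max_level; ++level) {
        int64_t err;
        dering(recon, tmp, n, level);
        err = sse(src, tmp, n);
        if (err < best_err) {
          best_err = err;
          best_level = level;
        }
      }
      return best_level;
    }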
Results are almost strictly positive on the YT HD set, either having no
effect or helping PSNR in the range of 1-3% (overall average 0.8%).
Results are more mixed for the CIF set (-0.5% min, +1.4% max, +0.1% avg).
This has an almost strictly negative impact on SSIM, so examining a
different filter or a more balanced search heuristic is in order.
Other test set results pending.
Change-Id: I5ca6ee8fe292dfa3f2eab7f65332423fa1710b58
Replaces the default tables for single coefficient magnitudes with
those obtained from an appropriate distribution. The EOB node
is left unchanged. The model is represented as a 256-size codebook
where the index corresponds to the probability of the Zero or the
One node. Two variations are implemented corresponding to whether
the Zero node or the One-node is used as the peg. The main advantage
is that the default prob tables will become considerably smaller and
manageable. Besides, there is substantially less risk of over-fitting
to a training set.
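Schematically, the default tables shrink to one byte per context (a
sketch; the names and node count are assumptions):

    #include <stdint.h>

    typedef uint8_t vp9_prob;
    #define ENTROPY_NODES 11  /* assumed node count for illustration */

    /* Hypothetical model-generated codebook: one row of full node
     * probabilities per possible peg-node probability value. */
    extern const vp9_prob coef_model_codebook[256][ENTROPY_NODES];

    /* A stored one-byte index (the peg node's probability) expands to
     * the complete probability vector via the codebook. */
    static const vp9_prob *probs_from_model(vp9_prob peg_prob) {
      return coef_model_codebook[peg_prob];
    }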
Various distributions are tried and the one that gives the best
results is the family of Generalized Gaussian distributions with
shape parameter 0.75. The results are within about 0.2% of fully
trained tables for the Zero peg variant, and within 0.1% for the
One peg variant.
The forward updates are optionally (controlled by a macro)
model-based, i.e. restricted to only convey probabilities from the
codebook. Backward updates can also optionally (controlled by
another macro) be model-based, but this is turned off by default.
Currently, model-based forward updates work about the same as
unconstrained updates, but there is a drop in performance when
backward updates are model-based.
The model based approach also allows the probabilities for the key
frames to be adjusted from the defaults based on the base_qindex of
the frame. Currently the adjustment function is a placeholder that
adjusts the prob of EOB and Zero node from the nominal one at higher
quality (lower qindex) or lower quality (higher qindex) ends of the
range. The rest of the probabilities are then derived based on the
model from the adjusted prob of zero.
Change-Id: Iae050f3cbcc6d8b3f204e8dc395ae47b3b2192c9
As things stand, the zero bin mode boost is hurting somewhat.
In part this seems to be because the boost, as applied,
interferes with the rd mode selection loop.
Average gains (derf 0.072%, yt 0.243%, ythd 0.179%, std-hd 0.212%).
Change-Id: Icaecea3908d9a7352370e49b8fa822f2c2c49dc1
Renaming Width to width, Height to height and Version to version in
several structs and function signatures.
Change-Id: I084c3f7e747cb2ce3345aff27a3dff9b13a87543