Bug 1366201 - P3. Update ffvp9/ffvp8 to 3.4 branch. r=gerald

The structure of the code was slightly modified so that it should no longer be necessary to re-generate the config_*.h files, greatly simplifying the resync process.
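For illustration only, the general idea is that a single checked-in wrapper header can select a pre-generated per-platform configuration at compile time, so FFmpeg's configure never has to be re-run during a resync. The header and macro names below are assumptions for the sketch, not necessarily the files used in this patch:

/* Hypothetical config.h wrapper (illustrative names, not taken from this
 * patch): dispatch to a checked-in, pre-generated platform header instead
 * of regenerating configuration with FFmpeg's configure on every update. */
#if defined(_WIN64)
#  include "config_win64.h"
#elif defined(_WIN32)
#  include "config_win32.h"
#elif defined(__APPLE__)
#  include "config_darwin64.h"
#elif defined(__x86_64__)
#  include "config_unix64.h"
#else
#  include "config_unix32.h"
#endif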

MozReview-Commit-ID: Ap6HpJAANT6

--HG--
extra : rebase_source : 52e5e3b9b2401644dc536d746219e5f3864c600c
Jean-Yves Avenard 2017-10-24 21:44:23 +02:00
Parent 07d56f7ff5
Commit 9bbe2b99fa
178 changed files with 17112 additions and 28020 deletions


@@ -1,339 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 2, June 1991
Copyright (C) 1989, 1991 Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The licenses for most software are designed to take away your
freedom to share and change it. By contrast, the GNU General Public
License is intended to guarantee your freedom to share and change free
software--to make sure the software is free for all its users. This
General Public License applies to most of the Free Software
Foundation's software and to any other program whose authors commit to
using it. (Some other Free Software Foundation software is covered by
the GNU Lesser General Public License instead.) You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
this service if you wish), that you receive source code or can get it
if you want it, that you can change the software or use pieces of it
in new free programs; and that you know you can do these things.
To protect your rights, we need to make restrictions that forbid
anyone to deny you these rights or to ask you to surrender the rights.
These restrictions translate to certain responsibilities for you if you
distribute copies of the software, or if you modify it.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must give the recipients all the rights that
you have. You must make sure that they, too, receive or can get the
source code. And you must show them these terms so they know their
rights.
We protect your rights with two steps: (1) copyright the software, and
(2) offer you this license which gives you legal permission to copy,
distribute and/or modify the software.
Also, for each author's protection and ours, we want to make certain
that everyone understands that there is no warranty for this free
software. If the software is modified by someone else and passed on, we
want its recipients to know that what they have is not the original, so
that any problems introduced by others will not reflect on the original
authors' reputations.
Finally, any free program is threatened constantly by software
patents. We wish to avoid the danger that redistributors of a free
program will individually obtain patent licenses, in effect making the
program proprietary. To prevent this, we have made it clear that any
patent must be licensed for everyone's free use or not licensed at all.
The precise terms and conditions for copying, distribution and
modification follow.
GNU GENERAL PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. This License applies to any program or other work which contains
a notice placed by the copyright holder saying it may be distributed
under the terms of this General Public License. The "Program", below,
refers to any such program or work, and a "work based on the Program"
means either the Program or any derivative work under copyright law:
that is to say, a work containing the Program or a portion of it,
either verbatim or with modifications and/or translated into another
language. (Hereinafter, translation is included without limitation in
the term "modification".) Each licensee is addressed as "you".
Activities other than copying, distribution and modification are not
covered by this License; they are outside its scope. The act of
running the Program is not restricted, and the output from the Program
is covered only if its contents constitute a work based on the
Program (independent of having been made by running the Program).
Whether that is true depends on what the Program does.
1. You may copy and distribute verbatim copies of the Program's
source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate
copyright notice and disclaimer of warranty; keep intact all the
notices that refer to this License and to the absence of any warranty;
and give any other recipients of the Program a copy of this License
along with the Program.
You may charge a fee for the physical act of transferring a copy, and
you may at your option offer warranty protection in exchange for a fee.
2. You may modify your copy or copies of the Program or any portion
of it, thus forming a work based on the Program, and copy and
distribute such modifications or work under the terms of Section 1
above, provided that you also meet all of these conditions:
a) You must cause the modified files to carry prominent notices
stating that you changed the files and the date of any change.
b) You must cause any work that you distribute or publish, that in
whole or in part contains or is derived from the Program or any
part thereof, to be licensed as a whole at no charge to all third
parties under the terms of this License.
c) If the modified program normally reads commands interactively
when run, you must cause it, when started running for such
interactive use in the most ordinary way, to print or display an
announcement including an appropriate copyright notice and a
notice that there is no warranty (or else, saying that you provide
a warranty) and that users may redistribute the program under
these conditions, and telling the user how to view a copy of this
License. (Exception: if the Program itself is interactive but
does not normally print such an announcement, your work based on
the Program is not required to print an announcement.)
These requirements apply to the modified work as a whole. If
identifiable sections of that work are not derived from the Program,
and can be reasonably considered independent and separate works in
themselves, then this License, and its terms, do not apply to those
sections when you distribute them as separate works. But when you
distribute the same sections as part of a whole which is a work based
on the Program, the distribution of the whole must be on the terms of
this License, whose permissions for other licensees extend to the
entire whole, and thus to each and every part regardless of who wrote it.
Thus, it is not the intent of this section to claim rights or contest
your rights to work written entirely by you; rather, the intent is to
exercise the right to control the distribution of derivative or
collective works based on the Program.
In addition, mere aggregation of another work not based on the Program
with the Program (or with a work based on the Program) on a volume of
a storage or distribution medium does not bring the other work under
the scope of this License.
3. You may copy and distribute the Program (or a work based on it,
under Section 2) in object code or executable form under the terms of
Sections 1 and 2 above provided that you also do one of the following:
a) Accompany it with the complete corresponding machine-readable
source code, which must be distributed under the terms of Sections
1 and 2 above on a medium customarily used for software interchange; or,
b) Accompany it with a written offer, valid for at least three
years, to give any third party, for a charge no more than your
cost of physically performing source distribution, a complete
machine-readable copy of the corresponding source code, to be
distributed under the terms of Sections 1 and 2 above on a medium
customarily used for software interchange; or,
c) Accompany it with the information you received as to the offer
to distribute corresponding source code. (This alternative is
allowed only for noncommercial distribution and only if you
received the program in object code or executable form with such
an offer, in accord with Subsection b above.)
The source code for a work means the preferred form of the work for
making modifications to it. For an executable work, complete source
code means all the source code for all modules it contains, plus any
associated interface definition files, plus the scripts used to
control compilation and installation of the executable. However, as a
special exception, the source code distributed need not include
anything that is normally distributed (in either source or binary
form) with the major components (compiler, kernel, and so on) of the
operating system on which the executable runs, unless that component
itself accompanies the executable.
If distribution of executable or object code is made by offering
access to copy from a designated place, then offering equivalent
access to copy the source code from the same place counts as
distribution of the source code, even though third parties are not
compelled to copy the source along with the object code.
4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.
5. You are not required to accept this License, since you have not
signed it. However, nothing else grants you permission to modify or
distribute the Program or its derivative works. These actions are
prohibited by law if you do not accept this License. Therefore, by
modifying or distributing the Program (or any work based on the
Program), you indicate your acceptance of this License to do so, and
all its terms and conditions for copying, distributing or modifying
the Program or works based on it.
6. Each time you redistribute the Program (or any work based on the
Program), the recipient automatically receives a license from the
original licensor to copy, distribute or modify the Program subject to
these terms and conditions. You may not impose any further
restrictions on the recipients' exercise of the rights granted herein.
You are not responsible for enforcing compliance by third parties to
this License.
7. If, as a consequence of a court judgment or allegation of patent
infringement or for any other reason (not limited to patent issues),
conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot
distribute so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you
may not distribute the Program at all. For example, if a patent
license would not permit royalty-free redistribution of the Program by
all those who receive copies directly or indirectly through you, then
the only way you could satisfy both it and this License would be to
refrain entirely from distribution of the Program.
If any portion of this section is held invalid or unenforceable under
any particular circumstance, the balance of the section is intended to
apply and the section as a whole is intended to apply in other
circumstances.
It is not the purpose of this section to induce you to infringe any
patents or other property right claims or to contest validity of any
such claims; this section has the sole purpose of protecting the
integrity of the free software distribution system, which is
implemented by public license practices. Many people have made
generous contributions to the wide range of software distributed
through that system in reliance on consistent application of that
system; it is up to the author/donor to decide if he or she is willing
to distribute software through any other system and a licensee cannot
impose that choice.
This section is intended to make thoroughly clear what is believed to
be a consequence of the rest of this License.
8. If the distribution and/or use of the Program is restricted in
certain countries either by patents or by copyrighted interfaces, the
original copyright holder who places the Program under this License
may add an explicit geographical distribution limitation excluding
those countries, so that distribution is permitted only in or among
countries not thus excluded. In such case, this License incorporates
the limitation as if written in the body of this License.
9. The Free Software Foundation may publish revised and/or new versions
of the General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Program
specifies a version number of this License which applies to it and "any
later version", you have the option of following the terms and conditions
either of that version or of any later version published by the Free
Software Foundation. If the Program does not specify a version number of
this License, you may choose any version ever published by the Free Software
Foundation.
10. If you wish to incorporate parts of the Program into other free
programs whose distribution conditions are different, write to the author
to ask for permission. For software which is copyrighted by the Free
Software Foundation, write to the Free Software Foundation; we sometimes
make exceptions for this. Our decision will be guided by the two goals
of preserving the free status of all derivatives of our free software and
of promoting the sharing and reuse of software generally.
NO WARRANTY
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
REPAIR OR CORRECTION.
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
convey the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License along
with this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
Also add information on how to contact you by electronic and paper mail.
If the program is interactive, make it output a short notice like this
when it starts in an interactive mode:
Gnomovision version 69, Copyright (C) year name of author
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, the commands you use may
be called something other than `show w' and `show c'; they could even be
mouse-clicks or menu items--whatever suits your program.
You should also get your employer (if you work as a programmer) or your
school, if any, to sign a "copyright disclaimer" for the program, if
necessary. Here is a sample; alter the names:
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
`Gnomovision' (which makes passes at compilers) written by James Hacker.
<signature of Ty Coon>, 1 April 1989
Ty Coon, President of Vice
This General Public License does not permit incorporating your program into
proprietary programs. If your program is a subroutine library, you may
consider it more useful to permit linking proprietary applications with the
library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License.


@@ -1,674 +0,0 @@
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies
of this license document, but changing it is not allowed.
Preamble
The GNU General Public License is a free, copyleft license for
software and other kinds of works.
The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users. We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors. You can apply it to
your programs, too.
When we speak of free software, we are referring to freedom, not
price. Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights. Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received. You must make sure that they, too, receive
or can get the source code. And you must show them these terms so they
know their rights.
Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software. For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.
Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so. This is fundamentally incompatible with the aim of
protecting users' freedom to change the software. The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable. Therefore, we
have designed this version of the GPL to prohibit the practice for those
products. If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary. To prevent this, the GPL assures that
patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and
modification follow.
TERMS AND CONDITIONS
0. Definitions.
"This License" refers to version 3 of the GNU General Public License.
"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.
To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.
A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.
The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.
The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".
c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.
b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.
d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.
A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU Affero General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License,
section 13, concerning interaction through a network will apply to the
combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU General Public License from time to time. Such new versions will
be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If the program does terminal interaction, make it output a short
notice like this when it starts in an interactive mode:
<program> Copyright (C) <year> <name of author>
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
This is free software, and you are welcome to redistribute it
under certain conditions; type `show c' for details.
The hypothetical commands `show w' and `show c' should show the appropriate
parts of the General Public License. Of course, your program's commands
might be different; for a GUI interface, you would use an "about box".
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU GPL, see
<http://www.gnu.org/licenses/>.
The GNU General Public License does not permit incorporating your program
into proprietary programs. If your program is a subroutine library, you
may consider it more useful to permit linking proprietary applications with
the library. If this is what you want to do, use the GNU Lesser General
Public License instead of this License. But first, please read
<http://www.gnu.org/philosophy/why-not-lgpl.html>.


@@ -1,6 +0,0 @@
See the Git history of the project (git://source.ffmpeg.org/ffmpeg) to
get the names of people who have contributed to FFmpeg.
To check the log, you can type the command "git log" in the FFmpeg
source directory, or browse the online repository at
http://source.ffmpeg.org.

The diff for this file is not shown because it is too large.

media/ffvpx/FILES (new file, 293 lines added)

@@ -0,0 +1,293 @@
./COPYING.LGPLv2.1
./COPYING.LGPLv3
./compat/va_copy.h
./compat/w32pthreads.h
./libavcodec/audioconvert.c
./libavcodec/audioconvert.h
./libavcodec/avpicture.c
./libavcodec/bit_depth_template.c
./libavcodec/fdctdsp.h
./libavcodec/flacdata.c
./libavcodec/flacdata.h
./libavcodec/flacdsp_lpc_template.c
./libavcodec/frame_thread_encoder.h
./libavcodec/golomb.c
./libavcodec/h263dsp.h
./libavcodec/h264pred.c
./libavcodec/h264pred.h
./libavcodec/h264pred_template.c
./libavcodec/log2_tab.c
./libavcodec/mathtables.c
./libavcodec/motion_est.h
./libavcodec/mpeg12data.h
./libavcodec/mpegpicture.h
./libavcodec/mpegutils.h
./libavcodec/mpegvideodsp.h
./libavcodec/mpegvideoencdsp.h
./libavcodec/parser.h
./libavcodec/profiles.c
./libavcodec/profiles.h
./libavcodec/pthread.c
./libavcodec/pthread_internal.h
./libavcodec/qpeldsp.h
./libavcodec/qsv_api.c
./libavcodec/raw.h
./libavcodec/rectangle.h
./libavcodec/resample.c
./libavcodec/resample2.c
./libavcodec/reverse.c
./libavcodec/rl.h
./libavcodec/rnd_avg.h
./libavcodec/unary.h
./libavcodec/videodsp_template.c
./libavcodec/vorbis_parser.c
./libavcodec/vorbis_parser_internal.h
./libavcodec/vp8data.h
./libavcodec/vp9_parser.c
./libavcodec/vp9dsp_10bpp.c
./libavcodec/vp9dsp_12bpp.c
./libavcodec/vp9dsp_8bpp.c
./libavcodec/x86/constants.c
./libavcodec/x86/flacdsp.asm
./libavcodec/x86/mathops.h
./libavcodec/x86/vp56_arith.h
./libavcodec/x86/vp9dsp_init.h
./libavcodec/x86/vp9dsp_init_10bpp.c
./libavcodec/x86/vp9dsp_init_12bpp.c
./libavcodec/x86/vp9intrapred.asm
./libavcodec/x86/vp9itxfm_16bpp.asm
./libavcodec/x86/vp9itxfm_template.asm
./libavcodec/x86/vp9mc_16bpp.asm
./libavcodec/x86/h264_i386.h
./libavcodec/x86/vp9lpf.asm
./libavcodec/x86/vp9lpf_16bpp.asm
./libavcodec/x86/constants.h
./libavcodec/x86/flacdsp_init.c
./libavcodec/x86/h264_intrapred.asm
./libavcodec/x86/h264_intrapred_10bit.asm
./libavcodec/x86/h264_intrapred_init.c
./libavcodec/x86/videodsp.asm
./libavcodec/x86/videodsp_init.c
./libavcodec/x86/vp8dsp.asm
./libavcodec/x86/vp8dsp_init.c
./libavcodec/x86/vp8dsp_loopfilter.asm
./libavcodec/x86/vp9dsp_init.c
./libavcodec/x86/vp9dsp_init_16bpp.c
./libavcodec/x86/vp9dsp_init_16bpp_template.c
./libavcodec/x86/vp9intrapred_16bpp.asm
./libavcodec/x86/vp9itxfm.asm
./libavcodec/x86/vp9mc.asm
./libavcodec/xiph.c
./libavcodec/xiph.h
./libavcodec/bsf.h
./libavcodec/h264dsp.h
./libavcodec/imgconvert.c
./libavcodec/avcodec.symbols
./libavcodec/bsf.c
./libavcodec/decode.c
./libavcodec/decode.h
./libavcodec/hwaccel.h
./libavcodec/vp9block.c
./libavcodec/vp9data.c
./libavcodec/vp9dec.h
./libavcodec/vp9lpf.c
./libavcodec/vp9mvs.c
./libavcodec/vp9prob.c
./libavcodec/vp9recon.c
./libavcodec/vp9shared.h
./libavcodec/allcodecs.c
./libavcodec/avcodec.h
./libavcodec/avpacket.c
./libavcodec/bitstream.c
./libavcodec/blockdsp.h
./libavcodec/bytestream.h
./libavcodec/codec_desc.c
./libavcodec/dct.h
./libavcodec/dummy_funcs.c
./libavcodec/error_resilience.h
./libavcodec/flac.c
./libavcodec/flac.h
./libavcodec/flac_parser.c
./libavcodec/flacdec.c
./libavcodec/flacdsp.c
./libavcodec/flacdsp.h
./libavcodec/flacdsp_template.c
./libavcodec/get_bits.h
./libavcodec/golomb.h
./libavcodec/h264chroma.h
./libavcodec/hpeldsp.h
./libavcodec/idctdsp.h
./libavcodec/internal.h
./libavcodec/mathops.h
./libavcodec/me_cmp.h
./libavcodec/mpegvideo.h
./libavcodec/options.c
./libavcodec/options_table.h
./libavcodec/parser.c
./libavcodec/pixblockdsp.h
./libavcodec/pthread_frame.c
./libavcodec/pthread_slice.c
./libavcodec/put_bits.h
./libavcodec/ratecontrol.h
./libavcodec/raw.c
./libavcodec/thread.h
./libavcodec/utils.c
./libavcodec/version.h
./libavcodec/videodsp.c
./libavcodec/videodsp.h
./libavcodec/vlc.h
./libavcodec/vorbis_parser.h
./libavcodec/vp3dsp.h
./libavcodec/vp56.h
./libavcodec/vp56dsp.h
./libavcodec/vp56rac.c
./libavcodec/vp8.c
./libavcodec/vp8.h
./libavcodec/vp8_parser.c
./libavcodec/vp8dsp.c
./libavcodec/vp8dsp.h
./libavcodec/vp9.c
./libavcodec/vp9.h
./libavcodec/vp9_mc_template.c
./libavcodec/vp9data.h
./libavcodec/vp9dsp.c
./libavcodec/vp9dsp.h
./libavcodec/vp9dsp_template.c
./libavcodec/bitstream_filters.c
./libavcodec/bitstream_filter.c
./libavcodec/bsf_list.c
./libavcodec/null_bsf.c
./libavutil/adler32.c
./libavutil/atomic.c
./libavutil/atomic.h
./libavutil/atomic_win32.h
./libavutil/avutilres.rc
./libavutil/base64.c
./libavutil/base64.h
./libavutil/bprint.c
./libavutil/bprint.h
./libavutil/bswap.h
./libavutil/color_utils.c
./libavutil/color_utils.h
./libavutil/colorspace.h
./libavutil/common.h
./libavutil/crc.c
./libavutil/dict.h
./libavutil/display.c
./libavutil/error.c
./libavutil/error.h
./libavutil/eval.h
./libavutil/ffmath.h
./libavutil/fftime.h
./libavutil/ffversion.h
./libavutil/fifo.c
./libavutil/fifo.h
./libavutil/fixed_dsp.c
./libavutil/fixed_dsp.h
./libavutil/integer.c
./libavutil/integer.h
./libavutil/intfloat.h
./libavutil/intmath.c
./libavutil/intmath.h
./libavutil/libm.h
./libavutil/lls.c
./libavutil/lls.h
./libavutil/log2_tab.c
./libavutil/lzo.c
./libavutil/lzo.h
./libavutil/macros.h
./libavutil/mem_internal.h
./libavutil/motion_vector.h
./libavutil/parseutils.h
./libavutil/pixelutils.c
./libavutil/pixelutils.h
./libavutil/qsort.h
./libavutil/rational.c
./libavutil/replaygain.h
./libavutil/reverse.c
./libavutil/threadmessage.h
./libavutil/time_internal.h
./libavutil/wchar_filename.h
./libavutil/x86/bswap.h
./libavutil/x86/cpuid.asm
./libavutil/x86/fixed_dsp.asm
./libavutil/x86/fixed_dsp_init.c
./libavutil/x86/intmath.h
./libavutil/x86/intreadwrite.h
./libavutil/x86/lls.asm
./libavutil/x86/lls_init.c
./libavutil/x86/pixelutils.asm
./libavutil/x86/pixelutils.h
./libavutil/x86/pixelutils_init.c
./libavutil/x86/timer.h
./libavutil/x86/asm.h
./libavutil/x86/imgutils.asm
./libavutil/x86/imgutils_init.c
./libavutil/x86/cpu.c
./libavutil/x86/cpu.h
./libavutil/x86/emms.asm
./libavutil/x86/emms.h
./libavutil/x86/float_dsp.asm
./libavutil/x86/float_dsp_init.c
./libavutil/x86/x86inc.asm
./libavutil/x86/x86util.asm
./libavutil/adler32.h
./libavutil/avassert.h
./libavutil/avconfig.h
./libavutil/crc.h
./libavutil/dict.c
./libavutil/dynarray.h
./libavutil/log.h
./libavutil/mathematics.h
./libavutil/rational.h
./libavutil/samplefmt.c
./libavutil/samplefmt.h
./libavutil/timestamp.h
./libavutil/opt.c
./libavutil/dummy_funcs.c
./libavutil/imgutils_internal.h
./libavutil/reverse.h
./libavutil/slicethread.c
./libavutil/slicethread.h
./libavutil/atomic_gcc.h
./libavutil/attributes.h
./libavutil/avstring.c
./libavutil/avstring.h
./libavutil/avutil.h
./libavutil/avutil.symbols
./libavutil/buffer.c
./libavutil/buffer.h
./libavutil/buffer_internal.h
./libavutil/channel_layout.c
./libavutil/channel_layout.h
./libavutil/cpu.c
./libavutil/cpu.h
./libavutil/cpu_internal.h
./libavutil/display.h
./libavutil/eval.c
./libavutil/float_dsp.c
./libavutil/float_dsp.h
./libavutil/frame.c
./libavutil/frame.h
./libavutil/hwcontext.h
./libavutil/imgutils.c
./libavutil/imgutils.h
./libavutil/internal.h
./libavutil/intreadwrite.h
./libavutil/log.c
./libavutil/mathematics.c
./libavutil/mem.c
./libavutil/mem.h
./libavutil/opt.h
./libavutil/parseutils.c
./libavutil/pixdesc.c
./libavutil/pixdesc.h
./libavutil/pixfmt.h
./libavutil/thread.h
./libavutil/threadmessage.c
./libavutil/time.c
./libavutil/timecode.c
./libavutil/timecode.h
./libavutil/timer.h
./libavutil/utils.c
./libavutil/version.h


@ -1,17 +0,0 @@
#Installing FFmpeg:
1. Type `./configure` to create the configuration. A list of configure
options is printed by running `configure --help`.
`configure` can be launched from a directory different from the FFmpeg
sources to build the objects out of tree. To do this, use an absolute
path when launching `configure`, e.g. `/ffmpegdir/ffmpeg/configure`.
2. Then type `make` to build FFmpeg. GNU Make 3.81 or later is required.
3. Type `make install` to install all binaries and libraries you built.
NOTICE
------
- Non system dependencies (e.g. libx264, libvpx) are disabled by default.


@ -1,114 +0,0 @@
#FFmpeg:
Most files in FFmpeg are under the GNU Lesser General Public License version 2.1
or later (LGPL v2.1+). Read the file `COPYING.LGPLv2.1` for details. Some other
files have MIT/X11/BSD-style licenses. In combination the LGPL v2.1+ applies to
FFmpeg.
Some optional parts of FFmpeg are licensed under the GNU General Public License
version 2 or later (GPL v2+). See the file `COPYING.GPLv2` for details. None of
these parts are used by default, you have to explicitly pass `--enable-gpl` to
configure to activate them. In this case, FFmpeg's license changes to GPL v2+.
Specifically, the GPL parts of FFmpeg are:
- libpostproc
- optional x86 optimizations in the files
- `libavcodec/x86/flac_dsp_gpl.asm`
- `libavcodec/x86/idct_mmx.c`
- `libavfilter/x86/vf_removegrain.asm`
- libutvideo encoding/decoding wrappers in
`libavcodec/libutvideo*.cpp`
- the X11 grabber in `libavdevice/x11grab.c`
- the swresample test app in
`libswresample/swresample-test.c`
- the `texi2pod.pl` tool
- the following filters in libavfilter:
- `f_ebur128.c`
- `vf_blackframe.c`
- `vf_boxblur.c`
- `vf_colormatrix.c`
- `vf_cover_rect.c`
- `vf_cropdetect.c`
- `vf_delogo.c`
- `vf_eq.c`
- `vf_find_rect.c`
- `vf_fspp.c`
- `vf_geq.c`
- `vf_histeq.c`
- `vf_hqdn3d.c`
- `vf_interlace.c`
- `vf_kerndeint.c`
- `vf_mcdeint.c`
- `vf_mpdecimate.c`
- `vf_owdenoise.c`
- `vf_perspective.c`
- `vf_phase.c`
- `vf_pp.c`
- `vf_pp7.c`
- `vf_pullup.c`
- `vf_sab.c`
- `vf_smartblur.c`
- `vf_repeatfields.c`
- `vf_spp.c`
- `vf_stereo3d.c`
- `vf_super2xsai.c`
- `vf_tinterlace.c`
- `vf_uspp.c`
- `vsrc_mptestsrc.c`
Should you, for whatever reason, prefer to use version 3 of the (L)GPL, then
the configure parameter `--enable-version3` will activate this licensing option
for you. Read the file `COPYING.LGPLv3` or, if you have enabled GPL parts,
`COPYING.GPLv3` to learn the exact legal terms that apply in this case.
There are a handful of files under other licensing terms, namely:
* The files `libavcodec/jfdctfst.c`, `libavcodec/jfdctint_template.c` and
`libavcodec/jrevdct.c` are taken from libjpeg, see the top of the files for
licensing details. Specifically note that you must credit the IJG in the
documentation accompanying your program if you only distribute executables.
You must also indicate any changes including additions and deletions to
those three files in the documentation.
* `tests/reference.pnm` is under the expat license.
external libraries
==================
FFmpeg can be combined with a number of external libraries, which sometimes
affect the licensing of binaries resulting from the combination.
compatible libraries
--------------------
The following libraries are under GPL:
- frei0r
- libcdio
- librubberband
- libutvideo
- libvidstab
- libx264
- libx265
- libxavs
- libxvid
When combining them with FFmpeg, FFmpeg needs to be licensed as GPL as well by
passing `--enable-gpl` to configure.
The OpenCORE and VisualOn libraries are under the Apache License 2.0. That
license is incompatible with the LGPL v2.1 and the GPL v2, but not with
version 3 of those licenses. So to combine these libraries with FFmpeg, the
license version needs to be upgraded by passing `--enable-version3` to configure.
incompatible libraries
----------------------
The Fraunhofer AAC library and FAAC are under licenses which
are incompatible with the GPLv2 and v3. We do not know for certain if their
licenses are compatible with the LGPL.
If you wish to enable these libraries, pass `--enable-nonfree` to configure.
But note that if you enable any of these libraries the resulting binary will
be under a complex license mix that is more restrictive than the LGPL and that
may result in additional obligations. It is possible that these
restrictions cause the resulting binary to be unredistributable.


@ -1,619 +0,0 @@
FFmpeg maintainers
==================
Below is a list of the people maintaining different parts of the
FFmpeg code.
Please try to keep entries where you are the maintainer up to date!
Names in () mean that the maintainer currently has no time to maintain the code.
A (CC <address>) after the name means that the maintainer prefers to be CC-ed on
patches and related discussions.
Project Leader
==============
final design decisions
Applications
============
ffmpeg:
ffmpeg.c Michael Niedermayer
ffplay:
ffplay.c Marton Balint
ffprobe:
ffprobe.c Stefano Sabatini
ffserver:
ffserver.c Reynaldo H. Verdejo Pinochet
Commandline utility code:
cmdutils.c, cmdutils.h Michael Niedermayer
QuickTime faststart:
tools/qt-faststart.c Baptiste Coudurier
Miscellaneous Areas
===================
documentation Stefano Sabatini, Mike Melanson, Timothy Gu, Lou Logan
build system (configure, makefiles) Diego Biurrun, Mans Rullgard
project server Árpád Gereöffy, Michael Niedermayer, Reimar Doeffinger, Alexander Strasser, Lou Logan
presets Robert Swain
metadata subsystem Aurelien Jacobs
release management Michael Niedermayer
Communication
=============
website Deby Barbara Lepage
fate.ffmpeg.org Timothy Gu
Trac bug tracker Alexander Strasser, Michael Niedermayer, Carl Eugen Hoyos, Lou Logan
mailing lists Michael Niedermayer, Baptiste Coudurier, Lou Logan
Google+ Paul B Mahol, Michael Niedermayer, Alexander Strasser
Twitter Lou Logan, Reynaldo H. Verdejo Pinochet
Launchpad Timothy Gu
libavutil
=========
External Interfaces:
libavutil/avutil.h Michael Niedermayer
Internal Interfaces:
libavutil/common.h Michael Niedermayer
Other:
aes_ctr.c, aes_ctr.h Eran Kornblau
bprint Nicolas George
bswap.h
des Reimar Doeffinger
dynarray.h Nicolas George
eval.c, eval.h Michael Niedermayer
float_dsp Loren Merritt
hash Reimar Doeffinger
intfloat* Michael Niedermayer
integer.c, integer.h Michael Niedermayer
lzo Reimar Doeffinger
mathematics.c, mathematics.h Michael Niedermayer
mem.c, mem.h Michael Niedermayer
opencl.c, opencl.h Wei Gao
opt.c, opt.h Michael Niedermayer
rational.c, rational.h Michael Niedermayer
rc4 Reimar Doeffinger
ripemd.c, ripemd.h James Almer
timecode Clément Bœsch
libavcodec
==========
Generic Parts:
External Interfaces:
avcodec.h Michael Niedermayer
utility code:
utils.c Michael Niedermayer
audio and video frame extraction:
parser.c Michael Niedermayer
bitstream reading:
bitstream.c, bitstream.h Michael Niedermayer
CABAC:
cabac.h, cabac.c Michael Niedermayer
codec names:
codec_names.sh Nicolas George
DSP utilities:
dsputils.c, dsputils.h Michael Niedermayer
entropy coding:
rangecoder.c, rangecoder.h Michael Niedermayer
lzw.* Michael Niedermayer
floating point AAN DCT:
faandct.c, faandct.h Michael Niedermayer
Golomb coding:
golomb.c, golomb.h Michael Niedermayer
LPC:
lpc.c, lpc.h Justin Ruggles
motion estimation:
motion* Michael Niedermayer
rate control:
ratecontrol.c Michael Niedermayer
libxvid_rc.c Michael Niedermayer
simple IDCT:
simple_idct.c, simple_idct.h Michael Niedermayer
postprocessing:
libpostproc/* Michael Niedermayer
table generation:
tableprint.c, tableprint.h Reimar Doeffinger
fixed point FFT:
fft* Zeljko Lukac
Text Subtitles Clément Bœsch
Codecs:
4xm.c Michael Niedermayer
8bps.c Roberto Togni
8svx.c Jaikrishnan Menon
aacenc*, aaccoder.c Rostislav Pehlivanov
aasc.c Kostya Shishkov
ac3* Justin Ruggles
alacenc.c Jaikrishnan Menon
alsdec.c Thilo Borgmann
apedec.c Kostya Shishkov
ass* Aurelien Jacobs
asv* Michael Niedermayer
atrac3* Benjamin Larsson
atrac3plus* Maxim Poliakovski
bgmc.c, bgmc.h Thilo Borgmann
bink.c Kostya Shishkov
binkaudio.c Peter Ross
bmp.c Mans Rullgard, Kostya Shishkov
cavs* Stefan Gehrer
cdxl.c Paul B Mahol
celp_filters.* Vitor Sessak
cinepak.c Roberto Togni
cinepakenc.c Rl / Aetey G.T. AB
ccaption_dec.c Anshul Maheshwari
cljr Alex Beregszaszi
cllc.c Derek Buitenhuis
cook.c, cookdata.h Benjamin Larsson
cpia.c Stephan Hilb
crystalhd.c Philip Langdale
cscd.c Reimar Doeffinger
dca.c Kostya Shishkov, Benjamin Larsson
dirac* Rostislav Pehlivanov
dnxhd* Baptiste Coudurier
dpcm.c Mike Melanson
dss_sp.c Oleksij Rempel, Michael Niedermayer
dv.c Roman Shaposhnik
dvbsubdec.c Anshul Maheshwari
dxa.c Kostya Shishkov
eacmv*, eaidct*, eat* Peter Ross
evrc* Paul B Mahol
exif.c, exif.h Thilo Borgmann
ffv1* Michael Niedermayer
ffwavesynth.c Nicolas George
fic.c Derek Buitenhuis
flac* Justin Ruggles
flashsv* Benjamin Larsson
flicvideo.c Mike Melanson
g722.c Martin Storsjo
g726.c Roman Shaposhnik
gifdec.c Baptiste Coudurier
h261* Michael Niedermayer
h263* Michael Niedermayer
h264* Loren Merritt, Michael Niedermayer
hap* Tom Butterworth
huffyuv* Michael Niedermayer, Christophe Gisquet
idcinvideo.c Mike Melanson
imc* Benjamin Larsson
indeo2* Kostya Shishkov
indeo5* Kostya Shishkov
interplayvideo.c Mike Melanson
ivi* Kostya Shishkov
jacosub* Clément Bœsch
jpeg2000* Nicolas Bertrand
jpeg_ls.c Kostya Shishkov
jvdec.c Peter Ross
kmvc.c Kostya Shishkov
lcl*.c Roberto Togni, Reimar Doeffinger
libcelt_dec.c Nicolas George
libdirac* David Conrad
libgsm.c Michel Bardiaux
libkvazaar.c Arttu Ylä-Outinen
libopenjpeg.c Jaikrishnan Menon
libopenjpegenc.c Michael Bradshaw
libschroedinger* David Conrad
libspeexdec.c Justin Ruggles
libtheoraenc.c David Conrad
libutvideo* Carl Eugen Hoyos
libvorbis.c David Conrad
libvpx* James Zern
libx264.c Mans Rullgard, Jason Garrett-Glaser
libx265.c Derek Buitenhuis
libxavs.c Stefan Gehrer
libzvbi-teletextdec.c Marton Balint
loco.c Kostya Shishkov
lzo.h, lzo.c Reimar Doeffinger
mdec.c Michael Niedermayer
mimic.c Ramiro Polla
mjpeg*.c Michael Niedermayer
mlp* Ramiro Polla
mmvideo.c Peter Ross
mpc* Kostya Shishkov
mpeg12.c, mpeg12data.h Michael Niedermayer
mpegvideo.c, mpegvideo.h Michael Niedermayer
mqc* Nicolas Bertrand
msmpeg4.c, msmpeg4data.h Michael Niedermayer
msrle.c Mike Melanson
msvideo1.c Mike Melanson
nellymoserdec.c Benjamin Larsson
nuv.c Reimar Doeffinger
nvenc.c Timo Rothenpieler
paf.* Paul B Mahol
pcx.c Ivo van Poorten
pgssubdec.c Reimar Doeffinger
ptx.c Ivo van Poorten
qcelp* Reynaldo H. Verdejo Pinochet
qdm2.c, qdm2data.h Roberto Togni, Benjamin Larsson
qdrw.c Kostya Shishkov
qpeg.c Kostya Shishkov
qsv* Ivan Uskov
qtrle.c Mike Melanson
ra144.c, ra144.h, ra288.c, ra288.h Roberto Togni
resample2.c Michael Niedermayer
rl2.c Sascha Sommer
rpza.c Roberto Togni
rtjpeg.c, rtjpeg.h Reimar Doeffinger
rv10.c Michael Niedermayer
rv3* Kostya Shishkov
rv4* Kostya Shishkov, Christophe Gisquet
s3tc* Ivo van Poorten
smacker.c Kostya Shishkov
smc.c Mike Melanson
smvjpegdec.c Ash Hughes
snow* Michael Niedermayer, Loren Merritt
sonic.c Alex Beregszaszi
srt* Aurelien Jacobs
sunrast.c Ivo van Poorten
svq3.c Michael Niedermayer
tak* Paul B Mahol
targa.c Kostya Shishkov
tiff.c Kostya Shishkov
truemotion1* Mike Melanson
truemotion2* Kostya Shishkov
truespeech.c Kostya Shishkov
tscc.c Kostya Shishkov
tta.c Alex Beregszaszi, Jaikrishnan Menon
ttaenc.c Paul B Mahol
txd.c Ivo van Poorten
ulti* Kostya Shishkov
v410*.c Derek Buitenhuis
vb.c Kostya Shishkov
vble.c Derek Buitenhuis
vc1* Kostya Shishkov, Christophe Gisquet
vc2* Rostislav Pehlivanov
vcr1.c Michael Niedermayer
vda_h264_dec.c Xidorn Quan
vima.c Paul B Mahol
vmnc.c Kostya Shishkov
vorbisdec.c Denes Balatoni, David Conrad
vorbisenc.c Oded Shimon
vp3* Mike Melanson
vp5 Aurelien Jacobs
vp6 Aurelien Jacobs
vp8 David Conrad, Jason Garrett-Glaser, Ronald Bultje
vp9 Ronald Bultje, Clément Bœsch
vqavideo.c Mike Melanson
wavpack.c Kostya Shishkov
wmaprodec.c Sascha Sommer
wmavoice.c Ronald S. Bultje
wmv2.c Michael Niedermayer
wnv1.c Kostya Shishkov
xan.c Mike Melanson
xbm* Paul B Mahol
xface Stefano Sabatini
xl.c Kostya Shishkov
xvmc.c Ivan Kalvachev
xwd* Paul B Mahol
zerocodec.c Derek Buitenhuis
zmbv* Kostya Shishkov
Hardware acceleration:
crystalhd.c Philip Langdale
dxva2* Hendrik Leppkes, Laurent Aimar
vaapi* Gwenole Beauchesne
vda* Sebastien Zwickert
vdpau* Philip Langdale, Carl Eugen Hoyos
videotoolbox* Sebastien Zwickert
libavdevice
===========
External Interface:
libavdevice/avdevice.h
avfoundation.m Thilo Borgmann
decklink* Deti Fliegl
dshow.c Roger Pack (CC rogerdpack@gmail.com)
fbdev_enc.c Lukasz Marek
gdigrab.c Roger Pack (CC rogerdpack@gmail.com)
iec61883.c Georg Lippitsch
lavfi Stefano Sabatini
libdc1394.c Roman Shaposhnik
opengl_enc.c Lukasz Marek
pulse_audio_enc.c Lukasz Marek
qtkit.m Thilo Borgmann
sdl Stefano Sabatini
v4l2.c Giorgio Vazzana
vfwcap.c Ramiro Polla
xv.c Lukasz Marek
libavfilter
===========
Generic parts:
graphdump.c Nicolas George
Filters:
f_drawgraph.c Paul B Mahol
af_adelay.c Paul B Mahol
af_aecho.c Paul B Mahol
af_afade.c Paul B Mahol
af_amerge.c Nicolas George
af_aphaser.c Paul B Mahol
af_aresample.c Michael Niedermayer
af_astats.c Paul B Mahol
af_atempo.c Pavel Koshevoy
af_biquads.c Paul B Mahol
af_chorus.c Paul B Mahol
af_compand.c Paul B Mahol
af_ladspa.c Paul B Mahol
af_pan.c Nicolas George
af_sidechaincompress.c Paul B Mahol
af_silenceremove.c Paul B Mahol
avf_aphasemeter.c Paul B Mahol
avf_avectorscope.c Paul B Mahol
avf_showcqt.c Muhammad Faiz
vf_blend.c Paul B Mahol
vf_chromakey.c Timo Rothenpieler
vf_colorchannelmixer.c Paul B Mahol
vf_colorbalance.c Paul B Mahol
vf_colorkey.c Timo Rothenpieler
vf_colorlevels.c Paul B Mahol
vf_deband.c Paul B Mahol
vf_dejudder.c Nicholas Robbins
vf_delogo.c Jean Delvare (CC <jdelvare@suse.com>)
vf_drawbox.c/drawgrid Andrey Utkin
vf_extractplanes.c Paul B Mahol
vf_histogram.c Paul B Mahol
vf_hqx.c Clément Bœsch
vf_idet.c Pascal Massimino
vf_il.c Paul B Mahol
vf_lenscorrection.c Daniel Oberhoff
vf_mergeplanes.c Paul B Mahol
vf_neighbor.c Paul B Mahol
vf_psnr.c Paul B Mahol
vf_random.c Paul B Mahol
vf_scale.c Michael Niedermayer
vf_separatefields.c Paul B Mahol
vf_ssim.c Paul B Mahol
vf_stereo3d.c Paul B Mahol
vf_telecine.c Paul B Mahol
vf_yadif.c Michael Niedermayer
vf_zoompan.c Paul B Mahol
Sources:
vsrc_mandelbrot.c Michael Niedermayer
libavformat
===========
Generic parts:
External Interface:
libavformat/avformat.h Michael Niedermayer
Utility Code:
libavformat/utils.c Michael Niedermayer
Muxers/Demuxers:
4xm.c Mike Melanson
aadec.c Vesselin Bontchev (vesselin.bontchev at yandex dot com)
adtsenc.c Robert Swain
afc.c Paul B Mahol
aiffdec.c Baptiste Coudurier, Matthieu Bouron
aiffenc.c Baptiste Coudurier, Matthieu Bouron
ape.c Kostya Shishkov
apngdec.c Benoit Fouet
ass* Aurelien Jacobs
astdec.c Paul B Mahol
astenc.c James Almer
avi* Michael Niedermayer
avisynth.c AvxSynth Team (avxsynth.testing at gmail dot com)
avr.c Paul B Mahol
bink.c Peter Ross
brstm.c Paul B Mahol
caf* Peter Ross
cdxl.c Paul B Mahol
crc.c Michael Niedermayer
daud.c Reimar Doeffinger
dss.c Oleksij Rempel, Michael Niedermayer
dtshddec.c Paul B Mahol
dv.c Roman Shaposhnik
dxa.c Kostya Shishkov
electronicarts.c Peter Ross
epafdec.c Paul B Mahol
ffm* Baptiste Coudurier
flac* Justin Ruggles
flic.c Mike Melanson
flvdec.c, flvenc.c Michael Niedermayer
gxf.c Reimar Doeffinger
gxfenc.c Baptiste Coudurier
hls.c Anssi Hannula
hls encryption (hlsenc.c) Christian Suloway
idcin.c Mike Melanson
idroqdec.c Mike Melanson
iff.c Jaikrishnan Menon
img2*.c Michael Niedermayer
ipmovie.c Mike Melanson
ircam* Paul B Mahol
iss.c Stefan Gehrer
jacosub* Clément Bœsch
jvdec.c Peter Ross
libmodplug.c Clément Bœsch
libnut.c Oded Shimon
lmlm4.c Ivo van Poorten
lvfdec.c Paul B Mahol
lxfdec.c Tomas Härdin
matroska.c Aurelien Jacobs
matroskadec.c Aurelien Jacobs
matroskaenc.c David Conrad
matroska subtitles (matroskaenc.c) John Peebles
metadata* Aurelien Jacobs
mgsts.c Paul B Mahol
microdvd* Aurelien Jacobs
mm.c Peter Ross
mov.c Michael Niedermayer, Baptiste Coudurier
movenc.c Baptiste Coudurier, Matthieu Bouron
movenccenc.c Eran Kornblau
mpc.c Kostya Shishkov
mpeg.c Michael Niedermayer
mpegenc.c Michael Niedermayer
mpegts.c Marton Balint
mpegtsenc.c Baptiste Coudurier
msnwc_tcp.c Ramiro Polla
mtv.c Reynaldo H. Verdejo Pinochet
mxf* Baptiste Coudurier
mxfdec.c Tomas Härdin
nistspheredec.c Paul B Mahol
nsvdec.c Francois Revol
nut* Michael Niedermayer
nuv.c Reimar Doeffinger
oggdec.c, oggdec.h David Conrad
oggenc.c Baptiste Coudurier
oggparse*.c David Conrad
oggparsedaala* Rostislav Pehlivanov
oma.c Maxim Poliakovski
paf.c Paul B Mahol
psxstr.c Mike Melanson
pva.c Ivo van Poorten
pvfdec.c Paul B Mahol
r3d.c Baptiste Coudurier
raw.c Michael Niedermayer
rdt.c Ronald S. Bultje
rl2.c Sascha Sommer
rmdec.c, rmenc.c Ronald S. Bultje, Kostya Shishkov
rtmp* Kostya Shishkov
rtp.c, rtpenc.c Martin Storsjo
rtpdec_ac3.* Gilles Chanteperdrix
rtpdec_dv.* Thomas Volkert
rtpdec_h261.*, rtpenc_h261.* Thomas Volkert
rtpdec_hevc.*, rtpenc_hevc.* Thomas Volkert
rtpdec_mpa_robust.* Gilles Chanteperdrix
rtpdec_asf.* Ronald S. Bultje
rtpdec_vp9.c Thomas Volkert
rtpenc_mpv.*, rtpenc_aac.* Martin Storsjo
rtsp.c Luca Barbato
sbgdec.c Nicolas George
sdp.c Martin Storsjo
segafilm.c Mike Melanson
segment.c Stefano Sabatini
siff.c Kostya Shishkov
smacker.c Kostya Shishkov
smjpeg* Paul B Mahol
spdif* Anssi Hannula
srtdec.c Aurelien Jacobs
swf.c Baptiste Coudurier
takdec.c Paul B Mahol
tta.c Alex Beregszaszi
txd.c Ivo van Poorten
voc.c Aurelien Jacobs
wav.c Michael Niedermayer
wc3movie.c Mike Melanson
webm dash (matroskaenc.c) Vignesh Venkatasubramanian
webvtt* Matthew J Heaney
westwood.c Mike Melanson
wtv.c Peter Ross
wv.c Kostya Shishkov
wvenc.c Paul B Mahol
Protocols:
async.c Zhang Rui
bluray.c Petri Hintukainen
ftp.c Lukasz Marek
http.c Ronald S. Bultje
libssh.c Lukasz Marek
mms*.c Ronald S. Bultje
udp.c Luca Abeni
icecast.c Marvin Scholz
libswresample
=============
Generic parts:
audioconvert.c Michael Niedermayer
dither.c Michael Niedermayer
rematrix*.c Michael Niedermayer
swresample*.c Michael Niedermayer
Resamplers:
resample*.c Michael Niedermayer
soxr_resample.c Rob Sykes
Operating systems / CPU architectures
=====================================
Alpha Mans Rullgard, Falk Hueffner
ARM Mans Rullgard
AVR32 Mans Rullgard
MIPS Mans Rullgard, Nedeljko Babic
Mac OS X / PowerPC Romain Dolbeau, Guillaume Poirier
Amiga / PowerPC Colin Ward
Linux / PowerPC Luca Barbato
Windows MinGW Alex Beregszaszi, Ramiro Polla
Windows Cygwin Victor Paesa
Windows MSVC Matthew Oliver, Hendrik Leppkes
Windows ICL Matthew Oliver
ADI/Blackfin DSP Marc Hoffman
Sparc Roman Shaposhnik
x86 Michael Niedermayer
Releases
========
2.8 Michael Niedermayer
2.7 Michael Niedermayer
2.6 Michael Niedermayer
2.5 Michael Niedermayer
2.4 Michael Niedermayer
If you want to maintain an older release, please contact us
GnuPG Fingerprints of maintainers and contributors
==================================================
Alexander Strasser 1C96 78B7 83CB 8AA7 9AF5 D1EB A7D8 A57B A876 E58F
Anssi Hannula 1A92 FF42 2DD9 8D2E 8AF7 65A9 4278 C520 513D F3CB
Anton Khirnov 6D0C 6625 56F8 65D1 E5F5 814B B50A 1241 C067 07AB
Ash Hughes 694D 43D2 D180 C7C7 6421 ABD3 A641 D0B7 623D 6029
Attila Kinali 11F0 F9A6 A1D2 11F6 C745 D10C 6520 BCDD F2DF E765
Baptiste Coudurier 8D77 134D 20CC 9220 201F C5DB 0AC9 325C 5C1A BAAA
Ben Littler 3EE3 3723 E560 3214 A8CD 4DEB 2CDB FCE7 768C 8D2C
Benoit Fouet B22A 4F4F 43EF 636B BB66 FCDC 0023 AE1E 2985 49C8
Clément Bœsch 52D0 3A82 D445 F194 DB8B 2B16 87EE 2CB8 F4B8 FCF9
Daniel Verkamp 78A6 07ED 782C 653E C628 B8B9 F0EB 8DD8 2F0E 21C7
Diego Biurrun 8227 1E31 B6D9 4994 7427 E220 9CAE D6CC 4757 FCC5
FFmpeg release signing key FCF9 86EA 15E6 E293 A564 4F10 B432 2F04 D676 58D8
Ganesh Ajjanagadde C96A 848E 97C3 CEA2 AB72 5CE4 45F9 6A2D 3C36 FB1B
Gwenole Beauchesne 2E63 B3A6 3E44 37E2 017D 2704 53C7 6266 B153 99C4
Jaikrishnan Menon 61A1 F09F 01C9 2D45 78E1 C862 25DC 8831 AF70 D368
Jean Delvare 7CA6 9F44 60F1 BDC4 1FD2 C858 A552 6B9B B3CD 4E6A
Justin Ruggles 3136 ECC0 C10D 6C04 5F43 CA29 FCBE CD2A 3787 1EBF
Loren Merritt ABD9 08F4 C920 3F65 D8BE 35D7 1540 DAA7 060F 56DE
Lou Logan 7D68 DC73 CBEF EABB 671A B6CF 621C 2E28 82F8 DC3A
Luca Barbato 6677 4209 213C 8843 5B67 29E7 E84C 78C2 84E9 0E34
Michael Niedermayer 9FF2 128B 147E F673 0BAD F133 611E C787 040B 0FAB
Nicolas George 24CE 01CE 9ACC 5CEB 74D8 8D9D B063 D997 36E5 4C93
Panagiotis Issaris 6571 13A3 33D9 3726 F728 AA98 F643 B12E ECF3 E029
Peter Ross A907 E02F A6E5 0CD2 34CD 20D2 6760 79C5 AC40 DD6B
Philip Langdale 5DC5 8D66 5FBA 3A43 18EC 045E F8D6 B194 6A75 682E
Reimar Doeffinger C61D 16E5 9E2C D10C 8958 38A4 0899 A2B9 06D4 D9C7
Reinhard Tartler 9300 5DC2 7E87 6C37 ED7B CA9A 9808 3544 9453 48A4
Reynaldo H. Verdejo Pinochet 6E27 CD34 170C C78E 4D4F 5F40 C18E 077F 3114 452A
Robert Swain EE7A 56EA 4A81 A7B5 2001 A521 67FA 362D A2FC 3E71
Sascha Sommer 38A0 F88B 868E 9D3A 97D4 D6A0 E823 706F 1E07 0D3C
Stefano Sabatini 0D0B AD6B 5330 BBAD D3D6 6A0C 719C 2839 FC43 2D5F
Stephan Hilb 4F38 0B3A 5F39 B99B F505 E562 8D5C 5554 4E17 8863
Tiancheng "Timothy" Gu 9456 AFC0 814A 8139 E994 8351 7FE6 B095 B582 B0D4
Tim Nicholson 38CF DB09 3ED0 F607 8B67 6CED 0C0B FC44 8B0B FC83
Tomas Härdin A79D 4E3D F38F 763F 91F5 8B33 A01E 8AE0 41BB 2551
Wei Gao 4269 7741 857A 0E60 9EC5 08D2 4744 4EFA 62C1 87B9


@ -1,49 +0,0 @@
FFmpeg README
=============
FFmpeg is a collection of libraries and tools to process multimedia content
such as audio, video, subtitles and related metadata.
## Libraries
* `libavcodec` provides implementation of a wider range of codecs.
* `libavformat` implements streaming protocols, container formats and basic I/O access.
* `libavutil` includes hashers, decompressors and miscellaneous utility functions.
* `libavfilter` provides a means to alter decoded audio and video through a chain of filters.
* `libavdevice` provides an abstraction to access capture and playback devices.
* `libswresample` implements audio mixing and resampling routines.
* `libswscale` implements color conversion and scaling routines.
## Tools
* [ffmpeg](https://ffmpeg.org/ffmpeg.html) is a command line toolbox to
manipulate, convert and stream multimedia content.
* [ffplay](https://ffmpeg.org/ffplay.html) is a minimalistic multimedia player.
* [ffprobe](https://ffmpeg.org/ffprobe.html) is a simple analysis tool to inspect
multimedia content.
* [ffserver](https://ffmpeg.org/ffserver.html) is a multimedia streaming server
for live broadcasts.
* Additional small tools such as `aviocat`, `ismindex` and `qt-faststart`.
## Documentation
The offline documentation is available in the **doc/** directory.
The online documentation is available in the main [website](https://ffmpeg.org)
and in the [wiki](https://trac.ffmpeg.org).
### Examples
Coding examples are available in the **doc/examples** directory.
## License
FFmpeg codebase is mainly LGPL-licensed with optional components licensed under
GPL. Please refer to the LICENSE file for detailed information.
## Contributing
Patches should be submitted to the ffmpeg-devel mailing list using
`git format-patch` or `git send-email`. Github pull requests should be
avoided because they are not part of our review process. Few developers
follow pull requests so they will likely be ignored.


@ -1,6 +1,6 @@
This directory contains files used in gecko builds from FFmpeg
(http://ffmpeg.org). The current files are from FFmpeg as of
revision n3.2-65-gee56777
revision n3.4-1-g587fadaef1
All source files match their path from the library's source archive.
Currently, we only use the vp8 and vp9 portion of the library, and only on x86
@ -12,7 +12,7 @@ Once yasm is upgraded to 1.2 or later, AVX2 code could be re-enabled.
Add --disable-avx2 to configure on those platforms.
configuration files were generated as follows using the configure script:
./configure --disable-everything --disable-protocols --disable-demuxers --disable-muxers --disable-filters --disable-programs --disable-doc --disable-parsers --enable-parser=vp8 --enable-parser=vp9 --enable-decoder=vp8 --enable-decoder=vp9 --disable-static --enable-shared --disable-debug --disable-sdl --disable-libxcb --disable-securetransport --disable-iconv --disable-swresample --disable-swscale --disable-avdevice --disable-avfilter --disable-avformat --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --enable-asm --enable-yasm
./configure --disable-everything --disable-protocols --disable-demuxers --disable-muxers --disable-filters --disable-programs --disable-doc --disable-parsers --enable-parser=vp8 --enable-parser=vp9 --enable-decoder=vp8 --enable-decoder=vp9 --disable-static --enable-shared --disable-debug --disable-sdl2 --disable-libxcb --disable-securetransport --disable-iconv --disable-swresample --disable-swscale --disable-avdevice --disable-avfilter --disable-avformat --disable-d3d11va --disable-dxva2 --disable-vaapi --disable-vda --disable-vdpau --disable-videotoolbox --enable-decoder=flac --enable-parser=flac --enable-asm --enable-yasm
config*:
replace: /HAVE_(MALLOC_H|ARC4RANDOM|LOCALTIME_R|MEMALIGN|POSIX_MEMALIGN)/d
@ -30,3 +30,13 @@ replace: s/HAVE_SYSCTL 1/HAVE_SYSCTL 0
config_win32/64.h/asm:
add to configure command: --toolchain=msvc
Regenerate defaults_disabled.{h,asm} with:
$ grep -E ".*_(INDEV|OUTDEV|DECODER|ENCODER|DEMUXER|MUXER|PARSER|FILTER|HWACCEL|PROTOCOL|ENCODERS|DECODERS|HWACCELS|INDEVS|OUTDEVS|FILTERS|DEMUXERS|MUXERS|PROTOCOLS) 0" config.h > ~/Work/Mozilla/mozilla-central/media/ffvpx/defaults_disabled.h
$ grep -E ".*_(INDEV|OUTDEV|DECODER|ENCODER|DEMUXER|MUXER|PARSER|FILTER|HWACCEL|PROTOCOL|ENCODERS|DECODERS|HWACCELS|INDEVS|OUTDEVS|FILTERS|DEMUXERS|MUXERS|PROTOCOLS) 0" config.asm > ~/Work/Mozilla/mozilla-central/media/ffvpx/defaults_disabled.asm
All new decoders/muxers/encoders/... should be added to the list of dummy functions found in libavcodec/dummy_funcs.c,
otherwise linkage will fail on Windows. On other platforms they are optimised out and aren't necessary.
To update the source tree, the files listed in FILES can typically be copied as-is from the ffmpeg tree.
Compilation will reveal whether any files are missing.
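For illustration only, a minimal sketch of the kind of dummy entry this refers to. The codec name below (newcodec) is hypothetical and the exact layout of dummy_funcs.c may differ; the point is simply that allcodecs.c references each component as an extern symbol, so an empty definition is enough to satisfy the Windows linker when the real implementation is not built:

    #include "avcodec.h"

    /* Hypothetical entry: allcodecs.c references ff_newcodec_decoder via
     * REGISTER_DECODER(NEWCODEC, newcodec); providing this empty definition
     * lets linking succeed on Windows even though the decoder is disabled. */
    AVCodec ff_newcodec_decoder;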


@ -1 +0,0 @@
3.0


@ -1,15 +0,0 @@
┌─────────────────────────────────────────┐
│ RELEASE NOTES for FFmpeg 3.0 "Einstein" │
└─────────────────────────────────────────┘
The FFmpeg Project proudly presents FFmpeg 3.0 "Einstein", about 5
months after the release of FFmpeg 2.8.
A complete Changelog is available at the root of the project, and the
complete Git history on http://source.ffmpeg.org.
We hope you will like this release as much as we enjoyed working on it, and
as usual, if you have any questions about it, or any FFmpeg related topic,
feel free to join us on the #ffmpeg IRC channel (on irc.freenode.net) or ask
on the mailing-lists.


@ -0,0 +1,181 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define COMPAT_ATOMICS_WIN32_STDATOMIC_H
#define WIN32_LEAN_AND_MEAN
#include <stddef.h>
#include <stdint.h>
#include <windows.h>
#define ATOMIC_FLAG_INIT 0
#define ATOMIC_VAR_INIT(value) (value)
#define atomic_init(obj, value) \
do { \
*(obj) = (value); \
} while(0)
#define kill_dependency(y) ((void)0)
#define atomic_thread_fence(order) \
MemoryBarrier();
#define atomic_signal_fence(order) \
((void)0)
#define atomic_is_lock_free(obj) 0
typedef intptr_t atomic_flag;
typedef intptr_t atomic_bool;
typedef intptr_t atomic_char;
typedef intptr_t atomic_schar;
typedef intptr_t atomic_uchar;
typedef intptr_t atomic_short;
typedef intptr_t atomic_ushort;
typedef intptr_t atomic_int;
typedef intptr_t atomic_uint;
typedef intptr_t atomic_long;
typedef intptr_t atomic_ulong;
typedef intptr_t atomic_llong;
typedef intptr_t atomic_ullong;
typedef intptr_t atomic_wchar_t;
typedef intptr_t atomic_int_least8_t;
typedef intptr_t atomic_uint_least8_t;
typedef intptr_t atomic_int_least16_t;
typedef intptr_t atomic_uint_least16_t;
typedef intptr_t atomic_int_least32_t;
typedef intptr_t atomic_uint_least32_t;
typedef intptr_t atomic_int_least64_t;
typedef intptr_t atomic_uint_least64_t;
typedef intptr_t atomic_int_fast8_t;
typedef intptr_t atomic_uint_fast8_t;
typedef intptr_t atomic_int_fast16_t;
typedef intptr_t atomic_uint_fast16_t;
typedef intptr_t atomic_int_fast32_t;
typedef intptr_t atomic_uint_fast32_t;
typedef intptr_t atomic_int_fast64_t;
typedef intptr_t atomic_uint_fast64_t;
typedef intptr_t atomic_intptr_t;
typedef intptr_t atomic_uintptr_t;
typedef intptr_t atomic_size_t;
typedef intptr_t atomic_ptrdiff_t;
typedef intptr_t atomic_intmax_t;
typedef intptr_t atomic_uintmax_t;
#define atomic_store(object, desired) \
do { \
*(object) = (desired); \
MemoryBarrier(); \
} while (0)
#define atomic_store_explicit(object, desired, order) \
atomic_store(object, desired)
#define atomic_load(object) \
(MemoryBarrier(), *(object))
#define atomic_load_explicit(object, order) \
atomic_load(object)
#define atomic_exchange(object, desired) \
InterlockedExchangePointer(object, desired);
#define atomic_exchange_explicit(object, desired, order) \
atomic_exchange(object, desired)
static inline int atomic_compare_exchange_strong(intptr_t *object, intptr_t *expected,
intptr_t desired)
{
intptr_t old = *expected;
*expected = (intptr_t)InterlockedCompareExchangePointer(
(PVOID *)object, (PVOID)desired, (PVOID)old);
return *expected == old;
}
#define atomic_compare_exchange_strong_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak(object, expected, desired) \
atomic_compare_exchange_strong(object, expected, desired)
#define atomic_compare_exchange_weak_explicit(object, expected, desired, success, failure) \
atomic_compare_exchange_weak(object, expected, desired)
#ifdef _WIN64
#define atomic_fetch_add(object, operand) \
InterlockedExchangeAdd64(object, operand)
#define atomic_fetch_sub(object, operand) \
InterlockedExchangeAdd64(object, -(operand))
#define atomic_fetch_or(object, operand) \
InterlockedOr64(object, operand)
#define atomic_fetch_xor(object, operand) \
InterlockedXor64(object, operand)
#define atomic_fetch_and(object, operand) \
InterlockedAnd64(object, operand)
#else
#define atomic_fetch_add(object, operand) \
InterlockedExchangeAdd(object, operand)
#define atomic_fetch_sub(object, operand) \
InterlockedExchangeAdd(object, -(operand))
#define atomic_fetch_or(object, operand) \
InterlockedOr(object, operand)
#define atomic_fetch_xor(object, operand) \
InterlockedXor(object, operand)
#define atomic_fetch_and(object, operand) \
InterlockedAnd(object, operand)
#endif /* _WIN64 */
#define atomic_fetch_add_explicit(object, operand, order) \
atomic_fetch_add(object, operand)
#define atomic_fetch_sub_explicit(object, operand, order) \
atomic_fetch_sub(object, operand)
#define atomic_fetch_or_explicit(object, operand, order) \
atomic_fetch_or(object, operand)
#define atomic_fetch_xor_explicit(object, operand, order) \
atomic_fetch_xor(object, operand)
#define atomic_fetch_and_explicit(object, operand, order) \
atomic_fetch_and(object, operand)
#define atomic_flag_test_and_set(object) \
atomic_exchange(object, 1)
#define atomic_flag_test_and_set_explicit(object, order) \
atomic_flag_test_and_set(object)
#define atomic_flag_clear(object) \
atomic_store(object, 0)
#define atomic_flag_clear_explicit(object, order) \
atomic_flag_clear(object)
#endif /* COMPAT_ATOMICS_WIN32_STDATOMIC_H */
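The header above mirrors the C11 <stdatomic.h> interface on top of the Win32 Interlocked* primitives, so FFmpeg code written against the standard header compiles unchanged. A small usage sketch (not part of the patch; it assumes the compat directory is on the include path, as the moz.build change further below arranges, so that a plain #include <stdatomic.h> resolves to this shim on Windows):

    #include <stdatomic.h>   /* resolves to this compat header on Win32 */

    static atomic_int ref_count = ATOMIC_VAR_INIT(0);

    static void obj_ref(void)
    {
        /* maps to InterlockedExchangeAdd(64) under the hood */
        atomic_fetch_add(&ref_count, 1);
    }

    static int obj_unref(void)
    {
        /* atomic_fetch_sub() returns the value before the subtraction,
         * as C11 specifies, so subtract one to get the new count */
        return atomic_fetch_sub(&ref_count, 1) - 1;
    }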


@ -77,7 +77,7 @@ typedef struct pthread_cond_t {
static av_unused unsigned __stdcall attribute_align_arg win32thread_worker(void *arg)
{
pthread_t *h = arg;
pthread_t *h = (pthread_t*)arg;
h->ret = h->func(h->arg);
return 0;
}
@ -270,7 +270,7 @@ static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_
}
/* non native condition variables */
win32_cond = av_mallocz(sizeof(win32_cond_t));
win32_cond = (win32_cond_t*)av_mallocz(sizeof(win32_cond_t));
if (!win32_cond)
return ENOMEM;
cond->Ptr = win32_cond;
@ -288,7 +288,7 @@ static av_unused int pthread_cond_init(pthread_cond_t *cond, const void *unused_
static av_unused int pthread_cond_destroy(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->Ptr;
win32_cond_t *win32_cond = (win32_cond_t*)cond->Ptr;
/* native condition variables do not destroy */
if (cond_init)
return 0;
@ -305,7 +305,7 @@ static av_unused int pthread_cond_destroy(pthread_cond_t *cond)
static av_unused int pthread_cond_broadcast(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->Ptr;
win32_cond_t *win32_cond = (win32_cond_t*)cond->Ptr;
int have_waiter;
if (cond_broadcast) {
@ -337,7 +337,7 @@ static av_unused int pthread_cond_broadcast(pthread_cond_t *cond)
static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)
{
win32_cond_t *win32_cond = cond->Ptr;
win32_cond_t *win32_cond = (win32_cond_t*)cond->Ptr;
int last_waiter;
if (cond_wait) {
cond_wait(cond, mutex, INFINITE);
@ -369,7 +369,7 @@ static av_unused int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mu
static av_unused int pthread_cond_signal(pthread_cond_t *cond)
{
win32_cond_t *win32_cond = cond->Ptr;
win32_cond_t *win32_cond = (win32_cond_t*)cond->Ptr;
int have_waiter;
if (cond_signal) {
cond_signal(cond);
@ -397,20 +397,20 @@ static av_unused int pthread_cond_signal(pthread_cond_t *cond)
static av_unused void w32thread_init(void)
{
#if _WIN32_WINNT < 0x0600
HANDLE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
HMODULE kernel_dll = GetModuleHandle(TEXT("kernel32.dll"));
/* if one is available, then they should all be available */
cond_init =
(void*)GetProcAddress(kernel_dll, "InitializeConditionVariable");
cond_broadcast =
(void*)GetProcAddress(kernel_dll, "WakeAllConditionVariable");
cond_signal =
(void*)GetProcAddress(kernel_dll, "WakeConditionVariable");
cond_wait =
(void*)GetProcAddress(kernel_dll, "SleepConditionVariableCS");
initonce_begin =
(void*)GetProcAddress(kernel_dll, "InitOnceBeginInitialize");
initonce_complete =
(void*)GetProcAddress(kernel_dll, "InitOnceComplete");
cond_init = (void (WINAPI*)(pthread_cond_t *))
GetProcAddress(kernel_dll, "InitializeConditionVariable");
cond_broadcast = (void (WINAPI*)(pthread_cond_t *))
GetProcAddress(kernel_dll, "WakeAllConditionVariable");
cond_signal = (void (WINAPI*)(pthread_cond_t *))
GetProcAddress(kernel_dll, "WakeConditionVariable");
cond_wait = (BOOL (WINAPI*)(pthread_cond_t *, pthread_mutex_t *, DWORD))
GetProcAddress(kernel_dll, "SleepConditionVariableCS");
initonce_begin = (BOOL (WINAPI*)(pthread_once_t *, DWORD, BOOL *, void **))
GetProcAddress(kernel_dll, "InitOnceBeginInitialize");
initonce_complete = (BOOL (WINAPI*)(pthread_once_t *, DWORD, void *))
GetProcAddress(kernel_dll, "InitOnceComplete");
#endif
}
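The replacement casts above turn the untyped (void*) conversions into full function-pointer casts, which keeps the header valid when compiled as C++ and avoids the object-pointer-to-function-pointer conversion that stricter C compilers reject. A standalone sketch of the same pattern (IsWow64Process is only an example of an optional kernel32 entry point resolved at run time; it is not used by this patch):

    #include <windows.h>

    /* Cast the FARPROC returned by GetProcAddress() directly to a correctly
     * typed function pointer instead of laundering it through void*. */
    typedef BOOL (WINAPI *is_wow64_fn)(HANDLE, PBOOL);

    static is_wow64_fn load_is_wow64(void)
    {
        HMODULE kernel32 = GetModuleHandle(TEXT("kernel32.dll"));
        return kernel32
            ? (is_wow64_fn)GetProcAddress(kernel32, "IsWow64Process")
            : NULL;
    }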


@ -1,3 +1,4 @@
#ifndef MOZ_FFVPX_CONFIG_COMMON_H
#define MOZ_FFVPX_CONFIG_COMMON_H
#include "defaults_disabled.h"
#endif

(Twelve file diffs are not shown here because of their large size.)


@ -29,6 +29,8 @@ if CONFIG['FFVPX_ASFLAGS']:
else:
# Default to unix, similar to how ASFLAGS setup works in configure.in
ASFLAGS += ['-Pconfig_unix64.asm']
# default disabled components
ASFLAGS += ['-Pdefaults_disabled.asm']
LOCAL_INCLUDES += ['/media/ffvpx']
@ -88,7 +90,7 @@ elif CONFIG['_MSC_VER']:
'-wd4057', '-wd4204', '-wd4706', '-wd4305', '-wd4152', '-wd4324',
'-we4013', '-wd4100', '-wd4214', '-wd4307', '-wd4273', '-wd4554',
]
LOCAL_INCLUDES += ['/media/ffvpx/compat/atomics/win32']
DEFINES['HAVE_AV_CONFIG_H'] = True
if CONFIG['MOZ_DEBUG']:


@ -25,6 +25,7 @@
*/
#include "config.h"
#include "libavutil/thread.h"
#include "avcodec.h"
#include "version.h"
@ -58,20 +59,14 @@
av_register_codec_parser(&ff_##x##_parser); \
}
void avcodec_register_all(void)
static void register_all(void)
{
static int initialized;
if (initialized)
return;
initialized = 1;
/* hardware accelerators */
REGISTER_HWACCEL(H263_CUVID, h263_cuvid);
REGISTER_HWACCEL(H263_VAAPI, h263_vaapi);
REGISTER_HWACCEL(H263_VIDEOTOOLBOX, h263_videotoolbox);
REGISTER_HWACCEL(H264_CUVID, h264_cuvid);
REGISTER_HWACCEL(H264_D3D11VA, h264_d3d11va);
REGISTER_HWACCEL(H264_D3D11VA2, h264_d3d11va2);
REGISTER_HWACCEL(H264_DXVA2, h264_dxva2);
REGISTER_HWACCEL(H264_MEDIACODEC, h264_mediacodec);
REGISTER_HWACCEL(H264_MMAL, h264_mmal);
@ -83,11 +78,13 @@ void avcodec_register_all(void)
REGISTER_HWACCEL(H264_VIDEOTOOLBOX, h264_videotoolbox);
REGISTER_HWACCEL(HEVC_CUVID, hevc_cuvid);
REGISTER_HWACCEL(HEVC_D3D11VA, hevc_d3d11va);
REGISTER_HWACCEL(HEVC_D3D11VA2, hevc_d3d11va2);
REGISTER_HWACCEL(HEVC_DXVA2, hevc_dxva2);
REGISTER_HWACCEL(HEVC_MEDIACODEC, hevc_mediacodec);
REGISTER_HWACCEL(HEVC_QSV, hevc_qsv);
REGISTER_HWACCEL(HEVC_VAAPI, hevc_vaapi);
REGISTER_HWACCEL(HEVC_VDPAU, hevc_vdpau);
REGISTER_HWACCEL(HEVC_VIDEOTOOLBOX, hevc_videotoolbox);
REGISTER_HWACCEL(MJPEG_CUVID, mjpeg_cuvid);
REGISTER_HWACCEL(MPEG1_CUVID, mpeg1_cuvid);
REGISTER_HWACCEL(MPEG1_XVMC, mpeg1_xvmc);
@ -96,12 +93,14 @@ void avcodec_register_all(void)
REGISTER_HWACCEL(MPEG2_CUVID, mpeg2_cuvid);
REGISTER_HWACCEL(MPEG2_XVMC, mpeg2_xvmc);
REGISTER_HWACCEL(MPEG2_D3D11VA, mpeg2_d3d11va);
REGISTER_HWACCEL(MPEG2_D3D11VA2, mpeg2_d3d11va2);
REGISTER_HWACCEL(MPEG2_DXVA2, mpeg2_dxva2);
REGISTER_HWACCEL(MPEG2_MMAL, mpeg2_mmal);
REGISTER_HWACCEL(MPEG2_QSV, mpeg2_qsv);
REGISTER_HWACCEL(MPEG2_VAAPI, mpeg2_vaapi);
REGISTER_HWACCEL(MPEG2_VDPAU, mpeg2_vdpau);
REGISTER_HWACCEL(MPEG2_VIDEOTOOLBOX, mpeg2_videotoolbox);
REGISTER_HWACCEL(MPEG2_MEDIACODEC, mpeg2_mediacodec);
REGISTER_HWACCEL(MPEG4_CUVID, mpeg4_cuvid);
REGISTER_HWACCEL(MPEG4_MEDIACODEC, mpeg4_mediacodec);
REGISTER_HWACCEL(MPEG4_MMAL, mpeg4_mmal);
@ -110,6 +109,7 @@ void avcodec_register_all(void)
REGISTER_HWACCEL(MPEG4_VIDEOTOOLBOX, mpeg4_videotoolbox);
REGISTER_HWACCEL(VC1_CUVID, vc1_cuvid);
REGISTER_HWACCEL(VC1_D3D11VA, vc1_d3d11va);
REGISTER_HWACCEL(VC1_D3D11VA2, vc1_d3d11va2);
REGISTER_HWACCEL(VC1_DXVA2, vc1_dxva2);
REGISTER_HWACCEL(VC1_VAAPI, vc1_vaapi);
REGISTER_HWACCEL(VC1_VDPAU, vc1_vdpau);
@ -117,12 +117,15 @@ void avcodec_register_all(void)
REGISTER_HWACCEL(VC1_QSV, vc1_qsv);
REGISTER_HWACCEL(VP8_CUVID, vp8_cuvid);
REGISTER_HWACCEL(VP8_MEDIACODEC, vp8_mediacodec);
REGISTER_HWACCEL(VP8_QSV, vp8_qsv);
REGISTER_HWACCEL(VP9_CUVID, vp9_cuvid);
REGISTER_HWACCEL(VP9_D3D11VA, vp9_d3d11va);
REGISTER_HWACCEL(VP9_D3D11VA2, vp9_d3d11va2);
REGISTER_HWACCEL(VP9_DXVA2, vp9_dxva2);
REGISTER_HWACCEL(VP9_MEDIACODEC, vp9_mediacodec);
REGISTER_HWACCEL(VP9_VAAPI, vp9_vaapi);
REGISTER_HWACCEL(WMV3_D3D11VA, wmv3_d3d11va);
REGISTER_HWACCEL(WMV3_D3D11VA2, wmv3_d3d11va2);
REGISTER_HWACCEL(WMV3_DXVA2, wmv3_dxva2);
REGISTER_HWACCEL(WMV3_VAAPI, wmv3_vaapi);
REGISTER_HWACCEL(WMV3_VDPAU, wmv3_vdpau);
@ -158,6 +161,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(CDXL, cdxl);
REGISTER_DECODER(CFHD, cfhd);
REGISTER_ENCDEC (CINEPAK, cinepak);
REGISTER_DECODER(CLEARVIDEO, clearvideo);
REGISTER_ENCDEC (CLJR, cljr);
REGISTER_DECODER(CLLC, cllc);
REGISTER_ENCDEC (COMFORTNOISE, comfortnoise);
@ -189,24 +193,30 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (FFV1, ffv1);
REGISTER_ENCDEC (FFVHUFF, ffvhuff);
REGISTER_DECODER(FIC, fic);
REGISTER_ENCDEC (FITS, fits);
REGISTER_ENCDEC (FLASHSV, flashsv);
REGISTER_ENCDEC (FLASHSV2, flashsv2);
REGISTER_DECODER(FLIC, flic);
REGISTER_ENCDEC (FLV, flv);
REGISTER_DECODER(FMVC, fmvc);
REGISTER_DECODER(FOURXM, fourxm);
REGISTER_DECODER(FRAPS, fraps);
REGISTER_DECODER(FRWU, frwu);
REGISTER_DECODER(G2M, g2m);
REGISTER_DECODER(GDV, gdv);
REGISTER_ENCDEC (GIF, gif);
REGISTER_ENCDEC (H261, h261);
REGISTER_ENCDEC (H263, h263);
REGISTER_DECODER(H263I, h263i);
REGISTER_ENCDEC (H263P, h263p);
REGISTER_DECODER(H263_V4L2M2M, h263_v4l2m2m);
REGISTER_DECODER(H264, h264);
REGISTER_DECODER(H264_CRYSTALHD, h264_crystalhd);
REGISTER_DECODER(H264_V4L2M2M, h264_v4l2m2m);
REGISTER_DECODER(H264_MEDIACODEC, h264_mediacodec);
REGISTER_DECODER(H264_MMAL, h264_mmal);
REGISTER_DECODER(H264_QSV, h264_qsv);
REGISTER_DECODER(H264_RKMPP, h264_rkmpp);
REGISTER_DECODER(H264_VDA, h264_vda);
#if FF_API_VDPAU
REGISTER_DECODER(H264_VDPAU, h264_vdpau);
@ -214,6 +224,8 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (HAP, hap);
REGISTER_DECODER(HEVC, hevc);
REGISTER_DECODER(HEVC_QSV, hevc_qsv);
REGISTER_DECODER(HEVC_RKMPP, hevc_rkmpp);
REGISTER_DECODER(HEVC_V4L2M2M, hevc_v4l2m2m);
REGISTER_DECODER(HNM4_VIDEO, hnm4_video);
REGISTER_DECODER(HQ_HQA, hq_hqa);
REGISTER_DECODER(HQX, hqx);
@ -248,6 +260,7 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (MPEG2VIDEO, mpeg2video);
REGISTER_ENCDEC (MPEG4, mpeg4);
REGISTER_DECODER(MPEG4_CRYSTALHD, mpeg4_crystalhd);
REGISTER_DECODER(MPEG4_V4L2M2M, mpeg4_v4l2m2m);
REGISTER_DECODER(MPEG4_MMAL, mpeg4_mmal);
#if FF_API_VDPAU
REGISTER_DECODER(MPEG4_VDPAU, mpeg4_vdpau);
@ -257,14 +270,18 @@ void avcodec_register_all(void)
REGISTER_DECODER(MPEG_VDPAU, mpeg_vdpau);
REGISTER_DECODER(MPEG1_VDPAU, mpeg1_vdpau);
#endif
REGISTER_DECODER(MPEG1_V4L2M2M, mpeg1_v4l2m2m);
REGISTER_DECODER(MPEG2_MMAL, mpeg2_mmal);
REGISTER_DECODER(MPEG2_CRYSTALHD, mpeg2_crystalhd);
REGISTER_DECODER(MPEG2_V4L2M2M, mpeg2_v4l2m2m);
REGISTER_DECODER(MPEG2_QSV, mpeg2_qsv);
REGISTER_DECODER(MPEG2_MEDIACODEC, mpeg2_mediacodec);
REGISTER_DECODER(MSA1, msa1);
REGISTER_DECODER(MSMPEG4_CRYSTALHD, msmpeg4_crystalhd);
REGISTER_DECODER(MSCC, mscc);
REGISTER_DECODER(MSMPEG4V1, msmpeg4v1);
REGISTER_ENCDEC (MSMPEG4V2, msmpeg4v2);
REGISTER_ENCDEC (MSMPEG4V3, msmpeg4v3);
REGISTER_DECODER(MSMPEG4_CRYSTALHD, msmpeg4_crystalhd);
REGISTER_DECODER(MSRLE, msrle);
REGISTER_DECODER(MSS1, mss1);
REGISTER_DECODER(MSS2, mss2);
@ -282,12 +299,14 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (PGM, pgm);
REGISTER_ENCDEC (PGMYUV, pgmyuv);
REGISTER_DECODER(PICTOR, pictor);
REGISTER_DECODER(PIXLET, pixlet);
REGISTER_ENCDEC (PNG, png);
REGISTER_ENCDEC (PPM, ppm);
REGISTER_ENCDEC (PRORES, prores);
REGISTER_ENCODER(PRORES_AW, prores_aw);
REGISTER_ENCODER(PRORES_KS, prores_ks);
REGISTER_DECODER(PRORES_LGPL, prores_lgpl);
REGISTER_DECODER(PSD, psd);
REGISTER_DECODER(PTX, ptx);
REGISTER_DECODER(QDRAW, qdraw);
REGISTER_DECODER(QPEG, qpeg);
@ -305,6 +324,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(RV40, rv40);
REGISTER_ENCDEC (S302M, s302m);
REGISTER_DECODER(SANM, sanm);
REGISTER_DECODER(SCPR, scpr);
REGISTER_DECODER(SCREENPRESSO, screenpresso);
REGISTER_DECODER(SDX2_DPCM, sdx2_dpcm);
REGISTER_ENCDEC (SGI, sgi);
@ -315,6 +335,8 @@ void avcodec_register_all(void)
REGISTER_DECODER(SMVJPEG, smvjpeg);
REGISTER_ENCDEC (SNOW, snow);
REGISTER_DECODER(SP5X, sp5x);
REGISTER_DECODER(SPEEDHQ, speedhq);
REGISTER_DECODER(SRGC, srgc);
REGISTER_ENCDEC (SUNRAST, sunrast);
REGISTER_ENCDEC (SVQ1, svq1);
REGISTER_DECODER(SVQ3, svq3);
@ -349,6 +371,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(VC1IMAGE, vc1image);
REGISTER_DECODER(VC1_MMAL, vc1_mmal);
REGISTER_DECODER(VC1_QSV, vc1_qsv);
REGISTER_DECODER(VC1_V4L2M2M, vc1_v4l2m2m);
REGISTER_ENCODER(VC2, vc2);
REGISTER_DECODER(VCR1, vcr1);
REGISTER_DECODER(VMDVIDEO, vmdvideo);
@ -360,10 +383,15 @@ void avcodec_register_all(void)
REGISTER_DECODER(VP6F, vp6f);
REGISTER_DECODER(VP7, vp7);
REGISTER_DECODER(VP8, vp8);
REGISTER_DECODER(VP8_RKMPP, vp8_rkmpp);
REGISTER_DECODER(VP8_V4L2M2M, vp8_v4l2m2m);
REGISTER_DECODER(VP9, vp9);
REGISTER_DECODER(VP9_RKMPP, vp9_rkmpp);
REGISTER_DECODER(VP9_V4L2M2M, vp9_v4l2m2m);
REGISTER_DECODER(VQA, vqa);
REGISTER_DECODER(BITPACKED, bitpacked);
REGISTER_DECODER(WEBP, webp);
REGISTER_ENCODER(WRAPPED_AVFRAME, wrapped_avframe);
REGISTER_ENCDEC (WRAPPED_AVFRAME, wrapped_avframe);
REGISTER_ENCDEC (WMV1, wmv1);
REGISTER_ENCDEC (WMV2, wmv2);
REGISTER_DECODER(WMV3, wmv3);
@ -378,6 +406,7 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (XBM, xbm);
REGISTER_ENCDEC (XFACE, xface);
REGISTER_DECODER(XL, xl);
REGISTER_DECODER(XPM, xpm);
REGISTER_ENCDEC (XWD, xwd);
REGISTER_ENCDEC (Y41P, y41p);
REGISTER_DECODER(YLC, ylc);
@ -401,12 +430,15 @@ void avcodec_register_all(void)
REGISTER_DECODER(APE, ape);
REGISTER_DECODER(ATRAC1, atrac1);
REGISTER_DECODER(ATRAC3, atrac3);
REGISTER_DECODER(ATRAC3AL, atrac3al);
REGISTER_DECODER(ATRAC3P, atrac3p);
REGISTER_DECODER(ATRAC3PAL, atrac3pal);
REGISTER_DECODER(BINKAUDIO_DCT, binkaudio_dct);
REGISTER_DECODER(BINKAUDIO_RDFT, binkaudio_rdft);
REGISTER_DECODER(BMV_AUDIO, bmv_audio);
REGISTER_DECODER(COOK, cook);
REGISTER_ENCDEC (DCA, dca);
REGISTER_DECODER(DOLBY_E, dolby_e);
REGISTER_DECODER(DSD_LSBF, dsd_lsbf);
REGISTER_DECODER(DSD_MSBF, dsd_msbf);
REGISTER_DECODER(DSD_LSBF_PLANAR, dsd_lsbf_planar);
@ -444,10 +476,11 @@ void avcodec_register_all(void)
REGISTER_DECODER(MPC8, mpc8);
REGISTER_ENCDEC (NELLYMOSER, nellymoser);
REGISTER_DECODER(ON2AVC, on2avc);
REGISTER_DECODER(OPUS, opus);
REGISTER_ENCDEC (OPUS, opus);
REGISTER_DECODER(PAF_AUDIO, paf_audio);
REGISTER_DECODER(QCELP, qcelp);
REGISTER_DECODER(QDM2, qdm2);
REGISTER_DECODER(QDMC, qdmc);
REGISTER_ENCDEC (RA_144, ra_144);
REGISTER_DECODER(RA_288, ra_288);
REGISTER_DECODER(RALF, ralf);
@ -477,6 +510,8 @@ void avcodec_register_all(void)
REGISTER_ENCDEC (PCM_ALAW, pcm_alaw);
REGISTER_DECODER(PCM_BLURAY, pcm_bluray);
REGISTER_DECODER(PCM_DVD, pcm_dvd);
REGISTER_DECODER(PCM_F16LE, pcm_f16le);
REGISTER_DECODER(PCM_F24LE, pcm_f24le);
REGISTER_ENCDEC (PCM_F32BE, pcm_f32be);
REGISTER_ENCDEC (PCM_F32LE, pcm_f32le);
REGISTER_ENCDEC (PCM_F64BE, pcm_f64be);
@ -508,6 +543,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(PCM_ZORK, pcm_zork);
/* DPCM codecs */
REGISTER_DECODER(GREMLIN_DPCM, gremlin_dpcm);
REGISTER_DECODER(INTERPLAY_DPCM, interplay_dpcm);
REGISTER_ENCDEC (ROQ_DPCM, roq_dpcm);
REGISTER_DECODER(SOL_DPCM, sol_dpcm);
@ -528,7 +564,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(ADPCM_EA_XAS, adpcm_ea_xas);
REGISTER_ENCDEC (ADPCM_G722, adpcm_g722);
REGISTER_ENCDEC (ADPCM_G726, adpcm_g726);
REGISTER_DECODER(ADPCM_G726LE, adpcm_g726le);
REGISTER_ENCDEC (ADPCM_G726LE, adpcm_g726le);
REGISTER_DECODER(ADPCM_IMA_AMV, adpcm_ima_amv);
REGISTER_DECODER(ADPCM_IMA_APC, adpcm_ima_apc);
REGISTER_DECODER(ADPCM_IMA_DAT4, adpcm_ima_dat4);
@ -606,7 +642,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(LIBOPENCORE_AMRWB, libopencore_amrwb);
REGISTER_ENCDEC (LIBOPENJPEG, libopenjpeg);
REGISTER_ENCDEC (LIBOPUS, libopus);
REGISTER_ENCDEC (LIBSCHROEDINGER, libschroedinger);
REGISTER_DECODER(LIBRSVG, librsvg);
REGISTER_ENCODER(LIBSHINE, libshine);
REGISTER_ENCDEC (LIBSPEEX, libspeex);
REGISTER_ENCODER(LIBTHEORA, libtheora);
@ -633,12 +669,13 @@ void avcodec_register_all(void)
/* external libraries, that shouldn't be used by default if one of the
* above is available */
REGISTER_ENCODER(H263_V4L2M2M, h263_v4l2m2m);
REGISTER_ENCDEC (LIBOPENH264, libopenh264);
REGISTER_DECODER(H263_CUVID, h263_cuvid);
REGISTER_DECODER(H264_CUVID, h264_cuvid);
REGISTER_ENCODER(H264_NVENC, h264_nvenc);
REGISTER_ENCODER(H264_OMX, h264_omx);
REGISTER_ENCODER(H264_QSV, h264_qsv);
REGISTER_ENCODER(H264_V4L2M2M, h264_v4l2m2m);
REGISTER_ENCODER(H264_VAAPI, h264_vaapi);
REGISTER_ENCODER(H264_VIDEOTOOLBOX, h264_videotoolbox);
#if FF_API_NVENC_OLD_NAME
@ -650,6 +687,7 @@ void avcodec_register_all(void)
REGISTER_DECODER(HEVC_MEDIACODEC, hevc_mediacodec);
REGISTER_ENCODER(HEVC_NVENC, hevc_nvenc);
REGISTER_ENCODER(HEVC_QSV, hevc_qsv);
REGISTER_ENCODER(HEVC_V4L2M2M, hevc_v4l2m2m);
REGISTER_ENCODER(HEVC_VAAPI, hevc_vaapi);
REGISTER_ENCODER(LIBKVAZAAR, libkvazaar);
REGISTER_DECODER(MJPEG_CUVID, mjpeg_cuvid);
@ -657,13 +695,19 @@ void avcodec_register_all(void)
REGISTER_DECODER(MPEG1_CUVID, mpeg1_cuvid);
REGISTER_DECODER(MPEG2_CUVID, mpeg2_cuvid);
REGISTER_ENCODER(MPEG2_QSV, mpeg2_qsv);
REGISTER_ENCODER(MPEG2_VAAPI, mpeg2_vaapi);
REGISTER_DECODER(MPEG4_CUVID, mpeg4_cuvid);
REGISTER_DECODER(MPEG4_MEDIACODEC, mpeg4_mediacodec);
REGISTER_ENCODER(MPEG4_V4L2M2M, mpeg4_v4l2m2m);
REGISTER_DECODER(VC1_CUVID, vc1_cuvid);
REGISTER_DECODER(VP8_CUVID, vp8_cuvid);
REGISTER_DECODER(VP8_MEDIACODEC, vp8_mediacodec);
REGISTER_DECODER(VP8_QSV, vp8_qsv);
REGISTER_ENCODER(VP8_V4L2M2M, vp8_v4l2m2m);
REGISTER_ENCODER(VP8_VAAPI, vp8_vaapi);
REGISTER_DECODER(VP9_CUVID, vp9_cuvid);
REGISTER_DECODER(VP9_MEDIACODEC, vp9_mediacodec);
REGISTER_ENCODER(VP9_VAAPI, vp9_vaapi);
/* parsers */
REGISTER_PARSER(AAC, aac);
@ -698,10 +742,19 @@ void avcodec_register_all(void)
REGISTER_PARSER(PNM, pnm);
REGISTER_PARSER(RV30, rv30);
REGISTER_PARSER(RV40, rv40);
REGISTER_PARSER(SIPR, sipr);
REGISTER_PARSER(TAK, tak);
REGISTER_PARSER(VC1, vc1);
REGISTER_PARSER(VORBIS, vorbis);
REGISTER_PARSER(VP3, vp3);
REGISTER_PARSER(VP8, vp8);
REGISTER_PARSER(VP9, vp9);
REGISTER_PARSER(XMA, xma);
}
void avcodec_register_all(void)
{
static AVOnce control = AV_ONCE_INIT;
ff_thread_once(&control, register_all);
}
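The hunk above wraps the old registration body in ff_thread_once(), so avcodec_register_all() becomes safe to call concurrently and only ever runs once. The same AVOnce pattern can be reused for any one-time initialisation; a small sketch (init_my_tables() and my_component_init() are hypothetical names, not part of this diff):

    #include "libavutil/thread.h"

    static void init_my_tables(void)
    {
        /* one-time, thread-safe initialisation goes here */
    }

    void my_component_init(void)
    {
        static AVOnce control = AV_ONCE_INIT;
        ff_thread_once(&control, init_my_tables);
    }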


@ -89,7 +89,7 @@
* - Send valid input:
* - For decoding, call avcodec_send_packet() to give the decoder raw
* compressed data in an AVPacket.
* - For encoding, call avcodec_send_frame() to give the decoder an AVFrame
* - For encoding, call avcodec_send_frame() to give the encoder an AVFrame
* containing uncompressed audio or video.
* In both cases, it is recommended that AVPackets and AVFrames are
* refcounted, or libavcodec might have to copy the input data. (libavformat
@ -112,6 +112,12 @@
* are filled. This situation is handled transparently if you follow the steps
* outlined above.
*
* In theory, sending input can result in EAGAIN - this should happen only if
* not all output was received. You can use this to structure alternative decode
* or encode loops other than the one suggested above. For example, you could
* try sending new input on each iteration, and try to receive output if that
* returns EAGAIN.
*
* End of stream situations. These require "flushing" (aka draining) the codec,
* as the codec might buffer multiple frames or packets internally for
* performance or out of necessity (consider B-frames).
@ -136,8 +142,9 @@
*
* Not all codecs will follow a rigid and predictable dataflow; the only
* guarantee is that an AVERROR(EAGAIN) return value on a send/receive call on
* one end implies that a receive/send call on the other end will succeed. In
* general, no codec will permit unlimited buffering of input or output.
* one end implies that a receive/send call on the other end will succeed, or
* at least will not fail with AVERROR(EAGAIN). In general, no codec will
* permit unlimited buffering of input or output.
*
* This API replaces the following legacy functions:
* - avcodec_decode_video2() and avcodec_decode_audio4():
@ -146,7 +153,8 @@
* Unlike with the old video decoding API, multiple frames might result from
* a packet. For audio, splitting the input packet into frames by partially
* decoding packets becomes transparent to the API user. You never need to
* feed an AVPacket to the API twice.
* feed an AVPacket to the API twice (unless it is rejected with AVERROR(EAGAIN) - then
* no data was read from the packet).
* Additionally, sending a flush/draining packet is required only once.
* - avcodec_encode_video2()/avcodec_encode_audio2():
* Use avcodec_send_frame() to feed input to the encoder, then use
@ -159,7 +167,22 @@
* and will result in undefined behavior.
*
* Some codecs might require using the new API; using the old API will return
* an error when calling it.
* an error when calling it. All codecs support the new API.
*
* A codec is not allowed to return AVERROR(EAGAIN) for both sending and receiving. This
* would be an invalid state, which could put the codec user into an endless
* loop. The API has no concept of time either: it cannot happen that trying to
* do avcodec_send_packet() results in AVERROR(EAGAIN), but a repeated call 1 second
* later accepts the packet (with no other receive/flush API calls involved).
* The API is a strict state machine, and the passage of time is not supposed
* to influence it. Some timing-dependent behavior might still be deemed
* acceptable in certain cases. But it must never result in both send/receive
* returning EAGAIN at the same time at any point. It must also absolutely be
* avoided that the current state is "unstable" and can "flip-flop" between
* the send/receive APIs allowing progress. For example, it's not allowed that
* the codec randomly decides that it actually wants to consume a packet now
* instead of returning a frame, after it just returned AVERROR(EAGAIN) on an
* avcodec_send_packet() call.
* @}
*/
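The decoupled dataflow described above reduces to a short loop in practice. The following sketch is illustrative only: read_packet() and handle_frame() are hypothetical application helpers, while every avcodec_*/av_* call is the public API documented in this header.

#include <libavcodec/avcodec.h>

/* Hypothetical application helpers (not part of libavcodec). */
int read_packet(AVPacket *pkt);
void handle_frame(const AVFrame *frame);

/* Minimal decode-loop sketch. "dec" is an AVCodecContext already opened with
 * avcodec_open2(). Because every pending frame is drained after each send,
 * avcodec_send_packet() has no reason to return AVERROR(EAGAIN) here. */
static int decode_stream(AVCodecContext *dec, AVPacket *pkt, AVFrame *frame)
{
    int ret;

    while (read_packet(pkt) >= 0) {                   /* hypothetical input */
        ret = avcodec_send_packet(dec, pkt);
        av_packet_unref(pkt);
        if (ret < 0)
            return ret;

        while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
            handle_frame(frame);                      /* hypothetical output */
            av_frame_unref(frame);
        }
        if (ret != AVERROR(EAGAIN))                   /* EAGAIN: feed more input */
            return ret == AVERROR_EOF ? 0 : ret;
    }

    avcodec_send_packet(dec, NULL);                   /* enter draining mode */
    while ((ret = avcodec_receive_frame(dec, frame)) >= 0) {
        handle_frame(frame);
        av_frame_unref(frame);
    }
    return ret == AVERROR_EOF ? 0 : ret;
}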
@ -411,6 +434,20 @@ enum AVCodecID {
AV_CODEC_ID_MAGICYUV,
AV_CODEC_ID_SHEERVIDEO,
AV_CODEC_ID_YLC,
AV_CODEC_ID_PSD,
AV_CODEC_ID_PIXLET,
AV_CODEC_ID_SPEEDHQ,
AV_CODEC_ID_FMVC,
AV_CODEC_ID_SCPR,
AV_CODEC_ID_CLEARVIDEO,
AV_CODEC_ID_XPM,
AV_CODEC_ID_AV1,
AV_CODEC_ID_BITPACKED,
AV_CODEC_ID_MSCC,
AV_CODEC_ID_SRGC,
AV_CODEC_ID_SVG,
AV_CODEC_ID_GDV,
AV_CODEC_ID_FITS,
/* various PCM "codecs" */
AV_CODEC_ID_FIRST_AUDIO = 0x10000, ///< A dummy id pointing at the start of audio codecs
@ -448,6 +485,8 @@ enum AVCodecID {
AV_CODEC_ID_PCM_S64LE = 0x10800,
AV_CODEC_ID_PCM_S64BE,
AV_CODEC_ID_PCM_F16LE,
AV_CODEC_ID_PCM_F24LE,
/* various ADPCM codecs */
AV_CODEC_ID_ADPCM_IMA_QT = 0x11000,
@ -511,6 +550,7 @@ enum AVCodecID {
AV_CODEC_ID_SOL_DPCM,
AV_CODEC_ID_SDX2_DPCM = 0x14800,
AV_CODEC_ID_GREMLIN_DPCM,
/* audio codecs */
AV_CODEC_ID_MP2 = 0x15000,
@ -598,6 +638,9 @@ enum AVCodecID {
AV_CODEC_ID_XMA1,
AV_CODEC_ID_XMA2,
AV_CODEC_ID_DST,
AV_CODEC_ID_ATRAC3AL,
AV_CODEC_ID_ATRAC3PAL,
AV_CODEC_ID_DOLBY_E,
/* subtitle codecs */
AV_CODEC_ID_FIRST_SUBTITLE = 0x17000, ///< A dummy ID pointing at the start of subtitle codecs.
@ -689,7 +732,7 @@ typedef struct AVCodecDescriptor {
/**
* Codec uses only intra compression.
* Video codecs only.
* Video and audio codecs only.
*/
#define AV_CODEC_PROP_INTRA_ONLY (1 << 0)
/**
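A hedged illustration (not part of this change) of how these property bits are typically consulted through avcodec_descriptor_get():

#include <libavcodec/avcodec.h>

/* Sketch: report whether a codec stores only intra-coded frames/packets. */
static int is_intra_only(enum AVCodecID id)
{
    const AVCodecDescriptor *desc = avcodec_descriptor_get(id);
    return desc && (desc->props & AV_CODEC_PROP_INTRA_ONLY);
}

/* With this update the flag is also set for several audio codecs, so e.g.
 * is_intra_only(AV_CODEC_ID_FLAC) is now expected to return nonzero. */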
@ -1360,6 +1403,11 @@ typedef struct AVCPBProperties {
* @{
*/
enum AVPacketSideDataType {
/**
* An AV_PKT_DATA_PALETTE side data packet contains exactly AVPALETTE_SIZE
* bytes worth of palette. This side data signals that a new palette is
* present.
*/
AV_PKT_DATA_PALETTE,
/**
@ -1533,10 +1581,40 @@ enum AVPacketSideDataType {
/**
* Mastering display metadata (based on SMPTE-2086:2014). This metadata
* should be associated with a video stream and containts data in the form
* should be associated with a video stream and contains data in the form
* of the AVMasteringDisplayMetadata struct.
*/
AV_PKT_DATA_MASTERING_DISPLAY_METADATA
AV_PKT_DATA_MASTERING_DISPLAY_METADATA,
/**
* This side data should be associated with a video stream and corresponds
* to the AVSphericalMapping structure.
*/
AV_PKT_DATA_SPHERICAL,
/**
* Content light level (based on CTA-861.3). This metadata should be
* associated with a video stream and contains data in the form of the
* AVContentLightMetadata struct.
*/
AV_PKT_DATA_CONTENT_LIGHT_LEVEL,
/**
* ATSC A53 Part 4 Closed Captions. This metadata should be associated with
* a video stream. A53 CC bitstream is stored as uint8_t in AVPacketSideData.data.
* The number of bytes of CC data is AVPacketSideData.size.
*/
AV_PKT_DATA_A53_CC,
/**
* The number of side data elements (in fact a bit more than it).
* This is not part of the public API/ABI in the sense that it may
* change when new side data types are added.
* This must stay the last enum value.
* If its value becomes huge, some code using it
* needs to be updated as it assumes it to be smaller than other limits.
*/
AV_PKT_DATA_NB
};
#define AV_PKT_DATA_QUALITY_FACTOR AV_PKT_DATA_QUALITY_STATS //DEPRECATED
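The new entries are retrieved with the existing av_packet_get_side_data() call (made const-correct further down in this patch); a small hedged sketch:

#include <libavcodec/avcodec.h>

/* Sketch: probe a demuxed packet for ATSC A53 closed-caption side data. */
static void probe_a53_cc(const AVPacket *pkt)
{
    int size = 0;
    uint8_t *cc = av_packet_get_side_data(pkt, AV_PKT_DATA_A53_CC, &size);

    if (cc)
        av_log(NULL, AV_LOG_INFO, "%d bytes of A53 CC data attached\n", size);
}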
@ -1638,6 +1716,13 @@ typedef struct AVPacket {
* after decoding.
**/
#define AV_PKT_FLAG_DISCARD 0x0004
/**
* The packet comes from a trusted source.
*
* Otherwise-unsafe constructs such as arbitrary pointers to data
* outside the packet may be followed.
*/
#define AV_PKT_FLAG_TRUSTED 0x0008
enum AVSideDataParamChangeFlags {
AV_SIDE_DATA_PARAM_CHANGE_CHANNEL_COUNT = 0x0001,
@ -1665,7 +1750,7 @@ enum AVFieldOrder {
* New fields can be added to the end with minor version bumps.
* Removal, reordering and changes to existing fields require a major
* version bump.
* Please use AVOptions (av_opt* / av_set/get*()) to access these fields from user
* You can use AVOptions (av_opt* / av_set/get*()) to access these fields from user
* applications.
* The name string for AVOptions options matches the associated command line
* parameter name and can be found in libavcodec/options_table.h
@ -2605,6 +2690,7 @@ typedef struct AVCodecContext {
* - encoding: unused
* - decoding: set by the caller before avcodec_open2().
*/
attribute_deprecated
int refcounted_frames;
/* - encoding parameters */
@ -2936,8 +3022,8 @@ typedef struct AVCodecContext {
#define FF_DEBUG_MMCO 0x00000800
#define FF_DEBUG_BUGS 0x00001000
#if FF_API_DEBUG_MV
#define FF_DEBUG_VIS_QP 0x00002000 ///< only access through AVOptions from outside libavcodec
#define FF_DEBUG_VIS_MB_TYPE 0x00004000 ///< only access through AVOptions from outside libavcodec
#define FF_DEBUG_VIS_QP 0x00002000
#define FF_DEBUG_VIS_MB_TYPE 0x00004000
#endif
#define FF_DEBUG_BUFFERS 0x00008000
#define FF_DEBUG_THREADS 0x00010000
@ -2947,7 +3033,6 @@ typedef struct AVCodecContext {
#if FF_API_DEBUG_MV
/**
* debug
* Code outside libavcodec should access this field using AVOptions
* - encoding: Set by user.
* - decoding: Set by user.
*/
@ -3061,6 +3146,7 @@ typedef struct AVCodecContext {
#if FF_API_ARCH_ALPHA
#define FF_IDCT_SIMPLEALPHA 23
#endif
#define FF_IDCT_NONE 24 /* Used by XvMC to extract IDCT coefficients with FF_IDCT_PERM_NONE */
#define FF_IDCT_SIMPLEAUTO 128
/**
@ -3082,8 +3168,6 @@ typedef struct AVCodecContext {
* low resolution decoding, 1-> 1/2 size, 2->1/4 size
* - encoding: unused
* - decoding: Set by user.
* Code outside libavcodec should access this field using:
* av_codec_{get,set}_lowres(avctx)
*/
int lowres;
#endif
@ -3384,8 +3468,6 @@ typedef struct AVCodecContext {
/**
* Timebase in which pkt_dts/pts and AVPacket.dts/pts are.
* Code outside libavcodec should access this field using:
* av_codec_{get,set}_pkt_timebase(avctx)
* - encoding unused.
* - decoding set by user.
*/
@ -3393,8 +3475,6 @@ typedef struct AVCodecContext {
/**
* AVCodecDescriptor
* Code outside libavcodec should access this field using:
* av_codec_{get,set}_codec_descriptor(avctx)
* - encoding: unused.
* - decoding: set by libavcodec.
*/
@ -3405,8 +3485,6 @@ typedef struct AVCodecContext {
* low resolution decoding, 1-> 1/2 size, 2->1/4 size
* - encoding: unused
* - decoding: Set by user.
* Code outside libavcodec should access this field using:
* av_codec_{get,set}_lowres(avctx)
*/
int lowres;
#endif
@ -3447,7 +3525,6 @@ typedef struct AVCodecContext {
* However for formats that do not use pre-multiplied alpha
* there might be serious artefacts (though e.g. libswscale currently
* assumes pre-multiplied alpha anyway).
* Code outside libavcodec should access this field using AVOptions
*
* - decoding: set by user
* - encoding: unused
@ -3464,7 +3541,6 @@ typedef struct AVCodecContext {
#if !FF_API_DEBUG_MV
/**
* debug motion vectors
* Code outside libavcodec should access this field using AVOptions
* - encoding: Set by user.
* - decoding: Set by user.
*/
@ -3476,7 +3552,6 @@ typedef struct AVCodecContext {
/**
* custom intra quantization matrix
* Code outside libavcodec should access this field using av_codec_g/set_chroma_intra_matrix()
* - encoding: Set by user, can be NULL.
* - decoding: unused.
*/
@ -3485,8 +3560,6 @@ typedef struct AVCodecContext {
/**
* dump format separator.
* can be ", " or "\n " or anything else
* Code outside libavcodec should access this field using AVOptions
* (NO direct access).
* - encoding: Set by user.
* - decoding: Set by user.
*/
@ -3496,13 +3569,12 @@ typedef struct AVCodecContext {
* ',' separated list of allowed decoders.
* If NULL then all are allowed
* - encoding: unused
* - decoding: set by user through AVOPtions (NO direct access)
* - decoding: set by user
*/
char *codec_whitelist;
/*
/**
* Properties of the stream that gets decoded
* To be accessed through av_codec_get_properties() (NO direct access)
* - encoding: unused
* - decoding: set by libavcodec
*/
@ -3522,7 +3594,8 @@ typedef struct AVCodecContext {
/**
* A reference to the AVHWFramesContext describing the input (for encoding)
* or output (decoding) frames. The reference is set by the caller and
* afterwards owned (and freed) by libavcodec.
* afterwards owned (and freed) by libavcodec - it should never be read by
* the caller after being set.
*
* - decoding: This field should be set by the caller from the get_format()
* callback. The previous reference (if any) will always be
@ -3564,6 +3637,71 @@ typedef struct AVCodecContext {
*/
int trailing_padding;
/**
* The number of pixels per image to maximally accept.
*
* - decoding: set by user
* - encoding: set by user
*/
int64_t max_pixels;
/**
* A reference to the AVHWDeviceContext describing the device which will
* be used by a hardware encoder/decoder. The reference is set by the
* caller and afterwards owned (and freed) by libavcodec.
*
* This should be used if either the codec device does not require
* hardware frames or any that are used are to be allocated internally by
* libavcodec. If the user wishes to supply any of the frames used as
* encoder input or decoder output then hw_frames_ctx should be used
* instead. When hw_frames_ctx is set in get_format() for a decoder, this
* field will be ignored while decoding the associated stream segment, but
* may again be used on a following one after another get_format() call.
*
* For both encoders and decoders this field should be set before
* avcodec_open2() is called and must not be written to thereafter.
*
* Note that some decoders may require this field to be set initially in
* order to support hw_frames_ctx at all - in that case, all frames
* contexts used must be created on the same device.
*/
AVBufferRef *hw_device_ctx;
/**
* Bit set of AV_HWACCEL_FLAG_* flags, which affect hardware accelerated
* decoding (if active).
* - encoding: unused
* - decoding: Set by user (either before avcodec_open2(), or in the
* AVCodecContext.get_format callback)
*/
int hwaccel_flags;
/**
* Video decoding only. Certain video codecs support cropping, meaning that
* only a sub-rectangle of the decoded frame is intended for display. This
* option controls how cropping is handled by libavcodec.
*
* When set to 1 (the default), libavcodec will apply cropping internally.
* I.e. it will modify the output frame width/height fields and offset the
* data pointers (only by as much as possible while preserving alignment, or
* by the full amount if the AV_CODEC_FLAG_UNALIGNED flag is set) so that
* the frames output by the decoder refer only to the cropped area. The
* crop_* fields of the output frames will be zero.
*
* When set to 0, the width/height fields of the output frames will be set
* to the coded dimensions and the crop_* fields will describe the cropping
* rectangle. Applying the cropping is left to the caller.
*
* @warning When hardware acceleration with opaque output frames is used,
* libavcodec is unable to apply cropping from the top/left border.
*
* @note when this option is set to zero, the width/height fields of the
* AVCodecContext and output AVFrames have different meanings. The codec
* context fields store display dimensions (with the coded dimensions in
* coded_width/height), while the frame fields store the coded dimensions
* (with the display dimensions being determined by the crop_* fields).
*/
int apply_cropping;
} AVCodecContext;
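The two new fields above, hw_device_ctx and apply_cropping, are normally configured before avcodec_open2(). The sketch below is illustrative only; the VAAPI device type is an arbitrary example and error handling is minimal.

#include <libavcodec/avcodec.h>
#include <libavutil/hwcontext.h>

/* Sketch: attach a hardware device to the decoder and keep cropping manual.
 * Ownership of the created device reference passes to libavcodec. */
static int setup_hw_decoder(AVCodecContext *avctx, const AVCodec *codec)
{
    int ret = av_hwdevice_ctx_create(&avctx->hw_device_ctx,
                                     AV_HWDEVICE_TYPE_VAAPI, /* arbitrary choice */
                                     NULL, NULL, 0);
    if (ret < 0)
        return ret;

    /* Output frames keep the coded dimensions; crop_* describe the rectangle. */
    avctx->apply_cropping = 0;

    return avcodec_open2(avctx, codec, NULL);
}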
AVRational av_codec_get_pkt_timebase (const AVCodecContext *avctx);
@ -3623,7 +3761,7 @@ typedef struct AVCodec {
const int *supported_samplerates; ///< array of supported audio samplerates, or NULL if unknown, array is terminated by 0
const enum AVSampleFormat *sample_fmts; ///< array of supported sample formats, or NULL if unknown, array is terminated by -1
const uint64_t *channel_layouts; ///< array of support channel layouts, or NULL if unknown. array is terminated by 0
uint8_t max_lowres; ///< maximum value for lowres supported by the decoder, no direct access, use av_codec_get_max_lowres()
uint8_t max_lowres; ///< maximum value for lowres supported by the decoder
const AVClass *priv_class; ///< AVClass for the private context
const AVProfile *profiles; ///< array of recognized profiles, or NULL if unknown, array is terminated by {FF_PROFILE_UNKNOWN}
@ -3684,20 +3822,22 @@ typedef struct AVCodec {
int (*decode)(AVCodecContext *, void *outdata, int *outdata_size, AVPacket *avpkt);
int (*close)(AVCodecContext *);
/**
* Decode/encode API with decoupled packet/frame dataflow. The API is the
* Encode API with decoupled packet/frame dataflow. The API is the
* same as the avcodec_ prefixed APIs (avcodec_send_frame() etc.), except
* that:
* - never called if the codec is closed or the wrong type,
* - AVPacket parameter change side data is applied right before calling
* AVCodec->send_packet,
* - if AV_CODEC_CAP_DELAY is not set, drain packets or frames are never sent,
* - only one drain packet is ever passed down (until the next flush()),
* - a drain AVPacket is always NULL (no need to check for avpkt->size).
* - if AV_CODEC_CAP_DELAY is not set, drain frames are never sent,
* - only one drain frame is ever passed down,
*/
int (*send_frame)(AVCodecContext *avctx, const AVFrame *frame);
int (*send_packet)(AVCodecContext *avctx, const AVPacket *avpkt);
int (*receive_frame)(AVCodecContext *avctx, AVFrame *frame);
int (*receive_packet)(AVCodecContext *avctx, AVPacket *avpkt);
/**
* Decode API with decoupled packet/frame dataflow. This function is called
* to get one output frame. It should call ff_decode_get_packet() to obtain
* input data.
*/
int (*receive_frame)(AVCodecContext *avctx, AVFrame *frame);
/**
* Flush buffers.
* Will be called when seeking
@ -3708,6 +3848,12 @@ typedef struct AVCodec {
* See FF_CODEC_CAP_* in internal.h
*/
int caps_internal;
/**
* Decoding only, a comma-separated list of bitstream filters to apply to
* packets before decoding.
*/
const char *bsfs;
} AVCodec;
int av_codec_get_max_lowres(const AVCodec *codec);
@ -3749,7 +3895,7 @@ typedef struct AVHWAccel {
/**
* Hardware accelerated codec capabilities.
* see HWACCEL_CODEC_CAP_*
* see AV_HWACCEL_CODEC_CAP_*
*/
int capabilities;
@ -3820,7 +3966,7 @@ typedef struct AVHWAccel {
/**
* Called for every Macroblock in a slice.
*
* XvMC uses it to replace the ff_mpv_decode_mb().
* XvMC uses it to replace the ff_mpv_reconstruct_mb().
* Instead of decoding to raw picture, MB parameters are
* stored in an array provided by the video driver.
*
@ -3850,8 +3996,19 @@ typedef struct AVHWAccel {
* AVCodecInternal.hwaccel_priv_data.
*/
int priv_data_size;
/**
* Internal hwaccel capabilities.
*/
int caps_internal;
} AVHWAccel;
/**
* HWAccel is experimental and is thus avoided in favor of non experimental
* codecs
*/
#define AV_HWACCEL_CODEC_CAP_EXPERIMENTAL 0x0200
/**
* Hardware acceleration should be used for decoding even if the codec level
* used is unknown or higher than the maximum supported level reported by the
@ -3868,6 +4025,20 @@ typedef struct AVHWAccel {
*/
#define AV_HWACCEL_FLAG_ALLOW_HIGH_DEPTH (1 << 1)
/**
* Hardware acceleration should still be attempted for decoding when the
* codec profile does not match the reported capabilities of the hardware.
*
* For example, this can be used to try to decode baseline profile H.264
* streams in hardware - it will often succeed, because many streams marked
* as baseline profile actually conform to constrained baseline profile.
*
* @warning If the stream is actually not supported then the behaviour is
* undefined, and may include returning entirely incorrect output
* while indicating success.
*/
#define AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH (1 << 2)
/**
* @}
*/
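A minimal, hedged sketch of opting into this flag; avctx and codec are assumed to come from the usual avcodec_alloc_context3()/avcodec_find_decoder() calls.

#include <libavcodec/avcodec.h>

/* Sketch: open a decoder while tolerating a hwaccel profile mismatch.
 * See the warning above - genuinely unsupported streams may decode
 * incorrectly while still reporting success. */
static int open_tolerant(AVCodecContext *avctx, const AVCodec *codec)
{
    avctx->hwaccel_flags |= AV_HWACCEL_FLAG_ALLOW_PROFILE_MISMATCH;
    return avcodec_open2(avctx, codec, NULL);
}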
@ -4377,13 +4548,13 @@ AVPacket *av_packet_alloc(void);
* @see av_packet_alloc
* @see av_packet_ref
*/
AVPacket *av_packet_clone(AVPacket *src);
AVPacket *av_packet_clone(const AVPacket *src);
/**
* Free the packet, if the packet is reference counted, it will be
* unreferenced first.
*
* @param packet packet to be freed. The pointer will be set to NULL.
* @param pkt packet to be freed. The pointer will be set to NULL.
* @note passing NULL is a no-op.
*/
void av_packet_free(AVPacket **pkt);
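Together with av_packet_alloc() these give the usual ownership pattern; a hedged sketch:

#include <libavcodec/avcodec.h>

/* Sketch: duplicate a read-only packet with the now const-correct
 * av_packet_clone(), then release the copy. */
static int duplicate_packet(const AVPacket *src)
{
    AVPacket *copy = av_packet_clone(src);
    if (!copy)
        return AVERROR(ENOMEM);
    /* ... use copy ... */
    av_packet_free(&copy);   /* unreferences the packet and NULLs the pointer */
    return 0;
}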
@ -4452,14 +4623,20 @@ int av_dup_packet(AVPacket *pkt);
* Copy packet, including contents
*
* @return 0 on success, negative AVERROR on fail
*
* @deprecated Use av_packet_ref
*/
attribute_deprecated
int av_copy_packet(AVPacket *dst, const AVPacket *src);
/**
* Copy packet side data
*
* @return 0 on success, negative AVERROR on fail
*
* @deprecated Use av_packet_copy_props
*/
attribute_deprecated
int av_copy_packet_side_data(AVPacket *dst, const AVPacket *src);
/**
@ -4518,12 +4695,16 @@ int av_packet_shrink_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
* @param size pointer for side information size to store (optional)
* @return pointer to data if present or NULL otherwise
*/
uint8_t* av_packet_get_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
uint8_t* av_packet_get_side_data(const AVPacket *pkt, enum AVPacketSideDataType type,
int *size);
#if FF_API_MERGE_SD_API
attribute_deprecated
int av_packet_merge_side_data(AVPacket *pkt);
attribute_deprecated
int av_packet_split_side_data(AVPacket *pkt);
#endif
const char *av_packet_side_data_name(enum AVPacketSideDataType type);
@ -4823,13 +5004,13 @@ int avcodec_decode_video2(AVCodecContext *avctx, AVFrame *picture,
* and reusing a get_buffer written for video codecs would probably perform badly
* due to a potentially very different allocation pattern.
*
* Some decoders (those marked with CODEC_CAP_DELAY) have a delay between input
* Some decoders (those marked with AV_CODEC_CAP_DELAY) have a delay between input
* and output. This means that for some packets they will not immediately
* produce decoded output and need to be flushed at the end of decoding to get
* all the decoded data. Flushing is done by calling this function with packets
* with avpkt->data set to NULL and avpkt->size set to 0 until it stops
* returning subtitles. It is safe to flush even those decoders that are not
* marked with CODEC_CAP_DELAY, then no subtitles will be returned.
* marked with AV_CODEC_CAP_DELAY, then no subtitles will be returned.
*
* @note The AVCodecContext MUST have been opened with @ref avcodec_open2()
* before packets may be fed to the decoder.
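A hedged sketch of the flush procedure described above; handle_sub() is a hypothetical consumer and error handling is minimal.

#include <libavcodec/avcodec.h>

/* Hypothetical application helper (not part of libavcodec). */
void handle_sub(AVSubtitle *sub);

/* Sketch: drain a subtitle decoder by feeding empty packets until no more
 * subtitles are returned. */
static void flush_subtitles(AVCodecContext *avctx)
{
    AVPacket pkt;
    AVSubtitle sub;
    int got_sub = 1;

    av_init_packet(&pkt);
    pkt.data = NULL;   /* a packet with NULL data and size 0 flushes */
    pkt.size = 0;

    while (got_sub) {
        if (avcodec_decode_subtitle2(avctx, &sub, &got_sub, &pkt) < 0)
            break;
        if (got_sub) {
            handle_sub(&sub);
            avsubtitle_free(&sub);
        }
    }
}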
@ -4883,8 +5064,10 @@ int avcodec_decode_subtitle2(AVCodecContext *avctx, AVSubtitle *sub,
* a flush packet.
*
* @return 0 on success, otherwise negative error code:
* AVERROR(EAGAIN): input is not accepted right now - the packet must be
* resent after trying to read output
* AVERROR(EAGAIN): input is not accepted in the current state - user
* must read output with avcodec_receive_frame() (once
* all output is read, the packet should be resent, and
* the call will not fail with EAGAIN).
* AVERROR_EOF: the decoder has been flushed, and no new packets can
* be sent to it (also returned if more than 1 flush
* packet is sent)
@ -4905,7 +5088,7 @@ int avcodec_send_packet(AVCodecContext *avctx, const AVPacket *avpkt);
*
* @return
* 0: success, a frame was returned
* AVERROR(EAGAIN): output is not available right now - user must try
* AVERROR(EAGAIN): output is not available in this state - user must try
* to send new input
* AVERROR_EOF: the decoder has been fully flushed, and there will be
* no more output frames
@ -4938,8 +5121,10 @@ int avcodec_receive_frame(AVCodecContext *avctx, AVFrame *frame);
* avctx->frame_size for all frames except the last.
* The final frame may be smaller than avctx->frame_size.
* @return 0 on success, otherwise negative error code:
* AVERROR(EAGAIN): input is not accepted right now - the frame must be
* resent after trying to read output packets
* AVERROR(EAGAIN): input is not accepted in the current state - user
* must read output with avcodec_receive_packet() (once
* all output is read, the packet should be resent, and
* the call will not fail with EAGAIN).
* AVERROR_EOF: the encoder has been flushed, and no new frames can
* be sent to it
* AVERROR(EINVAL): codec not opened, refcounted_frames not set, it is a
@ -4957,8 +5142,8 @@ int avcodec_send_frame(AVCodecContext *avctx, const AVFrame *frame);
* encoder. Note that the function will always call
* av_frame_unref(frame) before doing anything else.
* @return 0 on success, otherwise negative error code:
* AVERROR(EAGAIN): output is not available right now - user must try
* to send input
* AVERROR(EAGAIN): output is not available in the current state - user
* must try to send input
* AVERROR_EOF: the encoder has been fully flushed, and there will be
* no more output packets
* AVERROR(EINVAL): codec not opened, or it is an encoder
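For symmetry with the decode side, a hedged sketch of the matching encode loop; write_packet() is a hypothetical muxing helper.

#include <libavcodec/avcodec.h>

/* Hypothetical application helper (not part of libavcodec). */
void write_packet(AVPacket *pkt);

/* Sketch: submit one frame (or NULL to start draining) and collect every
 * packet the encoder can currently produce. */
static int encode_one(AVCodecContext *enc, const AVFrame *frame, AVPacket *pkt)
{
    int ret = avcodec_send_frame(enc, frame);
    if (ret < 0)
        return ret;

    while ((ret = avcodec_receive_packet(enc, pkt)) >= 0) {
        write_packet(pkt);
        av_packet_unref(pkt);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}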
@ -5509,22 +5694,14 @@ int av_picture_pad(AVPicture *dst, const AVPicture *src, int height, int width,
* @{
*/
#if FF_API_GETCHROMA
/**
* Utility function to access log2_chroma_w log2_chroma_h from
* the pixel format AVPixFmtDescriptor.
*
* This function asserts that pix_fmt is valid. See av_pix_fmt_get_chroma_sub_sample
* for one that returns a failure code and continues in case of invalid
* pix_fmts.
*
* @param[in] pix_fmt the pixel format
* @param[out] h_shift store log2_chroma_w
* @param[out] v_shift store log2_chroma_h
*
* @see av_pix_fmt_get_chroma_sub_sample
* @deprecated Use av_pix_fmt_get_chroma_sub_sample
*/
attribute_deprecated
void avcodec_get_chroma_sub_sample(enum AVPixelFormat pix_fmt, int *h_shift, int *v_shift);
#endif
/**
* Return a value representing the fourCC code associated to the
@ -5584,6 +5761,7 @@ attribute_deprecated
void avcodec_set_dimensions(AVCodecContext *s, int width, int height);
#endif
#if FF_API_TAG_STRING
/**
* Put a string representing the codec tag codec_tag in buf.
*
@ -5592,8 +5770,12 @@ void avcodec_set_dimensions(AVCodecContext *s, int width, int height);
* @param codec_tag codec tag to assign
* @return the length of the string that would have been generated if
* enough space had been available, excluding the trailing null
*
* @deprecated see av_fourcc_make_string() and av_fourcc2str().
*/
attribute_deprecated
size_t av_get_codec_tag_string(char *buf, size_t buf_size, unsigned int codec_tag);
#endif
void avcodec_string(char *buf, int buf_size, AVCodecContext *enc, int encode);
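The replacement mentioned in the deprecation note lives in libavutil; a hedged one-liner:

#include <libavutil/avutil.h>
#include <libavutil/log.h>

/* Sketch: log a codec tag using av_fourcc2str(), the suggested replacement
 * for av_get_codec_tag_string(). */
static void log_codec_tag(void *log_ctx, uint32_t codec_tag)
{
    av_log(log_ctx, AV_LOG_INFO, "codec tag: %s\n", av_fourcc2str(codec_tag));
}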
@ -5706,7 +5888,7 @@ int av_get_audio_frame_duration2(AVCodecParameters *par, int frame_bytes);
#if FF_API_OLD_BSF
typedef struct AVBitStreamFilterContext {
void *priv_data;
struct AVBitStreamFilter *filter;
const struct AVBitStreamFilter *filter;
AVCodecParserContext *parser;
struct AVBitStreamFilterContext *next;
/**
@ -5753,12 +5935,15 @@ typedef struct AVBSFContext {
void *priv_data;
/**
* Parameters of the input stream. Set by the caller before av_bsf_init().
* Parameters of the input stream. This field is allocated in
* av_bsf_alloc(), it needs to be filled by the caller before
* av_bsf_init().
*/
AVCodecParameters *par_in;
/**
* Parameters of the output stream. Set by the filter in av_bsf_init().
* Parameters of the output stream. This field is allocated in
* av_bsf_alloc(), it is set by the filter in av_bsf_init().
*/
AVCodecParameters *par_out;
@ -5936,8 +6121,7 @@ int av_bsf_init(AVBSFContext *ctx);
* av_bsf_receive_packet() repeatedly until it returns AVERROR(EAGAIN) or
* AVERROR_EOF.
*
* @param pkt the packet to filter. pkt must contain some payload (i.e data or
* side data must be present in pkt). The bitstream filter will take ownership of
* @param pkt the packet to filter. The bitstream filter will take ownership of
* the packet and reset the contents of pkt. pkt is not touched if an error occurs.
* This parameter may be NULL, which signals the end of the stream (i.e. no more
* packets will be sent). That will cause the filter to output any packets it
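Putting the pieces of the new bitstream-filter API together; the sketch below is illustrative only (the "h264_mp4toannexb" filter is just an example name) and compresses error handling.

#include <libavcodec/avcodec.h>

/* Sketch: create and initialize a bitstream filter for a stream whose
 * codec parameters are in "par" (e.g. AVStream.codecpar). */
static int open_bsf(const AVCodecParameters *par, const char *name,
                    AVBSFContext **out)
{
    const AVBitStreamFilter *f = av_bsf_get_by_name(name);
    AVBSFContext *bsf;
    int ret;

    if (!f)
        return AVERROR_BSF_NOT_FOUND;
    if ((ret = av_bsf_alloc(f, &bsf)) < 0)
        return ret;
    /* par_in is allocated by av_bsf_alloc(); it must be filled before init. */
    if ((ret = avcodec_parameters_copy(bsf->par_in, par)) < 0 ||
        (ret = av_bsf_init(bsf)) < 0) {
        av_bsf_free(&bsf);
        return ret;
    }
    *out = bsf;
    return 0;
}

/* Sketch: push one packet through the filter and drain the results. */
static int filter_packet(AVBSFContext *bsf, AVPacket *pkt)
{
    int ret = av_bsf_send_packet(bsf, pkt);   /* the filter takes ownership */
    if (ret < 0)
        return ret;

    while ((ret = av_bsf_receive_packet(bsf, pkt)) >= 0) {
        /* ... consume the filtered packet ... */
        av_packet_unref(pkt);
    }
    return (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF) ? 0 : ret;
}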


@ -1,84 +0,0 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_AVDCT_H
#define AVCODEC_AVDCT_H
#include "libavutil/opt.h"
/**
* AVDCT context.
* @note function pointers can be NULL if the specific features have been
* disabled at build time.
*/
typedef struct AVDCT {
const AVClass *av_class;
void (*idct)(int16_t *block /* align 16 */);
/**
* IDCT input permutation.
* Several optimized IDCTs need a permutated input (relative to the
* normal order of the reference IDCT).
* This permutation must be performed before the idct_put/add.
* Note, normally this can be merged with the zigzag/alternate scan<br>
* An example to avoid confusion:
* - (->decode coeffs -> zigzag reorder -> dequant -> reference IDCT -> ...)
* - (x -> reference DCT -> reference IDCT -> x)
* - (x -> reference DCT -> simple_mmx_perm = idct_permutation
* -> simple_idct_mmx -> x)
* - (-> decode coeffs -> zigzag reorder -> simple_mmx_perm -> dequant
* -> simple_idct_mmx -> ...)
*/
uint8_t idct_permutation[64];
void (*fdct)(int16_t *block /* align 16 */);
/**
* DCT algorithm.
* must use AVOptions to set this field.
*/
int dct_algo;
/**
* IDCT algorithm.
* must use AVOptions to set this field.
*/
int idct_algo;
void (*get_pixels)(int16_t *block /* align 16 */,
const uint8_t *pixels /* align 8 */,
ptrdiff_t line_size);
int bits_per_sample;
} AVDCT;
/**
* Allocates a AVDCT context.
* This needs to be initialized with avcodec_dct_init() after optionally
* configuring it with AVOptions.
*
* To free it use av_free()
*/
AVDCT *avcodec_dct_alloc(void);
int avcodec_dct_init(AVDCT *);
const AVClass *avcodec_dct_get_class(void);
#endif /* AVCODEC_AVDCT_H */


@ -247,8 +247,6 @@ failed_alloc:
av_packet_unref(pkt);
return AVERROR(ENOMEM);
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
int av_dup_packet(AVPacket *pkt)
{
@ -266,6 +264,8 @@ int av_copy_packet(AVPacket *dst, const AVPacket *src)
*dst = *src;
return copy_packet_data(dst, src, 0);
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif
void av_packet_free_side_data(AVPacket *pkt)
{
@ -296,9 +296,20 @@ int av_packet_add_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
uint8_t *data, size_t size)
{
AVPacketSideData *tmp;
int elems = pkt->side_data_elems;
int i, elems = pkt->side_data_elems;
if ((unsigned)elems + 1 > INT_MAX / sizeof(*pkt->side_data))
for (i = 0; i < elems; i++) {
AVPacketSideData *sd = &pkt->side_data[i];
if (sd->type == type) {
av_free(sd->data);
sd->data = data;
sd->size = size;
return 0;
}
}
if ((unsigned)elems + 1 > AV_PKT_DATA_NB)
return AVERROR(ERANGE);
tmp = av_realloc(pkt->side_data, (elems + 1) * sizeof(*tmp));
@ -336,7 +347,7 @@ uint8_t *av_packet_new_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
return data;
}
uint8_t *av_packet_get_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
uint8_t *av_packet_get_side_data(const AVPacket *pkt, enum AVPacketSideDataType type,
int *size)
{
int i;
@ -348,6 +359,8 @@ uint8_t *av_packet_get_side_data(AVPacket *pkt, enum AVPacketSideDataType type,
return pkt->side_data[i].data;
}
}
if (size)
*size = 0;
return NULL;
}
@ -372,10 +385,15 @@ const char *av_packet_side_data_name(enum AVPacketSideDataType type)
case AV_PKT_DATA_METADATA_UPDATE: return "Metadata Update";
case AV_PKT_DATA_MPEGTS_STREAM_ID: return "MPEGTS Stream ID";
case AV_PKT_DATA_MASTERING_DISPLAY_METADATA: return "Mastering display metadata";
case AV_PKT_DATA_CONTENT_LIGHT_LEVEL: return "Content light level metadata";
case AV_PKT_DATA_SPHERICAL: return "Spherical Mapping";
case AV_PKT_DATA_A53_CC: return "A53 Closed Captions";
}
return NULL;
}
#if FF_API_MERGE_SD_API
#define FF_MERGE_MARKER 0x8c4d9d108e25e9feULL
int av_packet_merge_side_data(AVPacket *pkt){
@ -431,6 +449,9 @@ int av_packet_split_side_data(AVPacket *pkt){
p-= size+5;
}
if (i > AV_PKT_DATA_NB)
return AVERROR(ERANGE);
pkt->side_data = av_malloc_array(i, sizeof(*pkt->side_data));
if (!pkt->side_data)
return AVERROR(ENOMEM);
@ -456,6 +477,35 @@ int av_packet_split_side_data(AVPacket *pkt){
}
return 0;
}
#endif
#if FF_API_MERGE_SD
int ff_packet_split_and_drop_side_data(AVPacket *pkt){
if (!pkt->side_data_elems && pkt->size >12 && AV_RB64(pkt->data + pkt->size - 8) == FF_MERGE_MARKER){
int i;
unsigned int size;
uint8_t *p;
p = pkt->data + pkt->size - 8 - 5;
for (i=1; ; i++){
size = AV_RB32(p);
if (size>INT_MAX - 5 || p - pkt->data < size)
return 0;
if (p[4]&128)
break;
if (p - pkt->data < size + 5)
return 0;
p-= size+5;
if (i > AV_PKT_DATA_NB)
return 0;
}
pkt->size = p - pkt->data - size;
av_assert0(pkt->size >= 0);
return 1;
}
return 0;
}
#endif
uint8_t *av_packet_pack_dictionary(AVDictionary *dict, int *size)
{
@ -505,7 +555,7 @@ int av_packet_unpack_dictionary(const uint8_t *data, int size, AVDictionary **di
const uint8_t *key = data;
const uint8_t *val = data + strlen(key) + 1;
if (val >= end)
if (val >= end || !*key)
return AVERROR_INVALIDDATA;
ret = av_dict_set(dict, key, val, 0);
@ -607,7 +657,7 @@ fail:
return ret;
}
AVPacket *av_packet_clone(AVPacket *src)
AVPacket *av_packet_clone(const AVPacket *src)
{
AVPacket *ret = av_packet_alloc();


@ -28,7 +28,6 @@
* bitstream api.
*/
#include "libavutil/atomic.h"
#include "libavutil/avassert.h"
#include "libavutil/qsort.h"
#include "avcodec.h"
@ -99,9 +98,11 @@ void avpriv_copy_bits(PutBitContext *pb, const uint8_t *src, int length)
case 2: \
v = *(const uint16_t *)ptr; \
break; \
default: \
case 4: \
v = *(const uint32_t *)ptr; \
break; \
default: \
av_assert1(0); \
} \
}
@ -126,14 +127,6 @@ static int alloc_table(VLC *vlc, int size, int use_static)
return index;
}
static av_always_inline uint32_t bitswap_32(uint32_t x)
{
return (uint32_t)ff_reverse[ x & 0xFF] << 24 |
(uint32_t)ff_reverse[(x >> 8) & 0xFF] << 16 |
(uint32_t)ff_reverse[(x >> 16) & 0xFF] << 8 |
(uint32_t)ff_reverse[ x >> 24];
}
typedef struct VLCcode {
uint8_t bits;
uint16_t symbol;
@ -183,7 +176,7 @@ static int build_table(VLC *vlc, int table_nb_bits, int nb_codes,
n = codes[i].bits;
code = codes[i].code;
symbol = codes[i].symbol;
ff_dlog(NULL, "i=%d n=%d code=0x%x\n", i, n, code);
ff_dlog(NULL, "i=%d n=%d code=0x%"PRIx32"\n", i, n, code);
if (n <= table_nb_bits) {
/* no need to add another table */
j = code >> (32 - table_nb_bits);
@ -264,7 +257,7 @@ static int build_table(VLC *vlc, int table_nb_bits, int nb_codes,
'bits' or 'codes' tables.
'xxx_size' : gives the number of bytes of each entry of the 'bits'
or 'codes' tables.
or 'codes' tables. Currently 1,2 and 4 are supported.
'wrap' and 'size' make it possible to use any memory configuration and types
(byte/word/long) to store the 'bits', 'codes', and 'symbols' tables.
@ -317,7 +310,8 @@ int ff_init_vlc_sparse(VLC *vlc_arg, int nb_bits, int nb_codes,
} \
GET_DATA(buf[j].code, codes, i, codes_wrap, codes_size); \
if (buf[j].code >= (1LL<<buf[j].bits)) { \
av_log(NULL, AV_LOG_ERROR, "Invalid code %x for %d in init_vlc\n", buf[j].code, i);\
av_log(NULL, AV_LOG_ERROR, "Invalid code %"PRIx32" for %d in " \
"init_vlc\n", buf[j].code, i); \
if (!(flags & INIT_VLC_USE_NEW_STATIC)) \
av_free(buf); \
return -1; \


@ -0,0 +1,185 @@
/*
* copyright (c) 2006 Michael Niedermayer <michaelni@gmx.at>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <string.h>
#include "avcodec.h"
#include "libavutil/internal.h"
#include "libavutil/mem.h"
#include "libavutil/opt.h"
#if FF_API_OLD_BSF
FF_DISABLE_DEPRECATION_WARNINGS
AVBitStreamFilter *av_bitstream_filter_next(const AVBitStreamFilter *f)
{
const AVBitStreamFilter *filter = NULL;
void *opaque = NULL;
while (filter != f)
filter = av_bsf_next(&opaque);
return av_bsf_next(&opaque);
}
void av_register_bitstream_filter(AVBitStreamFilter *bsf)
{
}
typedef struct BSFCompatContext {
AVBSFContext *ctx;
int extradata_updated;
} BSFCompatContext;
AVBitStreamFilterContext *av_bitstream_filter_init(const char *name)
{
AVBitStreamFilterContext *ctx = NULL;
BSFCompatContext *priv = NULL;
const AVBitStreamFilter *bsf;
bsf = av_bsf_get_by_name(name);
if (!bsf)
return NULL;
ctx = av_mallocz(sizeof(*ctx));
if (!ctx)
return NULL;
priv = av_mallocz(sizeof(*priv));
if (!priv)
goto fail;
ctx->filter = bsf;
ctx->priv_data = priv;
return ctx;
fail:
if (priv)
av_bsf_free(&priv->ctx);
av_freep(&priv);
av_freep(&ctx);
return NULL;
}
void av_bitstream_filter_close(AVBitStreamFilterContext *bsfc)
{
BSFCompatContext *priv;
if (!bsfc)
return;
priv = bsfc->priv_data;
av_bsf_free(&priv->ctx);
av_freep(&bsfc->priv_data);
av_free(bsfc);
}
int av_bitstream_filter_filter(AVBitStreamFilterContext *bsfc,
AVCodecContext *avctx, const char *args,
uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size, int keyframe)
{
BSFCompatContext *priv = bsfc->priv_data;
AVPacket pkt = { 0 };
int ret;
if (!priv->ctx) {
ret = av_bsf_alloc(bsfc->filter, &priv->ctx);
if (ret < 0)
return ret;
ret = avcodec_parameters_from_context(priv->ctx->par_in, avctx);
if (ret < 0)
return ret;
priv->ctx->time_base_in = avctx->time_base;
if (bsfc->args && bsfc->filter->priv_class) {
const AVOption *opt = av_opt_next(priv->ctx->priv_data, NULL);
const char * shorthand[2] = {NULL};
if (opt)
shorthand[0] = opt->name;
ret = av_opt_set_from_string(priv->ctx->priv_data, bsfc->args, shorthand, "=", ":");
if (ret < 0)
return ret;
}
ret = av_bsf_init(priv->ctx);
if (ret < 0)
return ret;
}
pkt.data = buf;
pkt.size = buf_size;
ret = av_bsf_send_packet(priv->ctx, &pkt);
if (ret < 0)
return ret;
*poutbuf = NULL;
*poutbuf_size = 0;
ret = av_bsf_receive_packet(priv->ctx, &pkt);
if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
return 0;
else if (ret < 0)
return ret;
*poutbuf = av_malloc(pkt.size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!*poutbuf) {
av_packet_unref(&pkt);
return AVERROR(ENOMEM);
}
*poutbuf_size = pkt.size;
memcpy(*poutbuf, pkt.data, pkt.size);
av_packet_unref(&pkt);
/* drain all the remaining packets we cannot return */
while (ret >= 0) {
ret = av_bsf_receive_packet(priv->ctx, &pkt);
av_packet_unref(&pkt);
}
if (!priv->extradata_updated) {
/* update extradata in avctx from the output codec parameters */
if (priv->ctx->par_out->extradata_size && (!args || !strstr(args, "private_spspps_buf"))) {
av_freep(&avctx->extradata);
avctx->extradata_size = 0;
avctx->extradata = av_mallocz(priv->ctx->par_out->extradata_size + AV_INPUT_BUFFER_PADDING_SIZE);
if (!avctx->extradata)
return AVERROR(ENOMEM);
memcpy(avctx->extradata, priv->ctx->par_out->extradata, priv->ctx->par_out->extradata_size);
avctx->extradata_size = priv->ctx->par_out->extradata_size;
}
priv->extradata_updated = 1;
}
return 1;
}
FF_ENABLE_DEPRECATION_WARNINGS
#endif


@ -0,0 +1,91 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "config.h"
#include "libavutil/common.h"
#include "libavutil/log.h"
#include "avcodec.h"
#include "bsf.h"
extern const AVBitStreamFilter ff_aac_adtstoasc_bsf;
extern const AVBitStreamFilter ff_chomp_bsf;
extern const AVBitStreamFilter ff_dump_extradata_bsf;
extern const AVBitStreamFilter ff_dca_core_bsf;
extern const AVBitStreamFilter ff_extract_extradata_bsf;
extern const AVBitStreamFilter ff_h264_mp4toannexb_bsf;
extern const AVBitStreamFilter ff_hevc_mp4toannexb_bsf;
extern const AVBitStreamFilter ff_imx_dump_header_bsf;
extern const AVBitStreamFilter ff_mjpeg2jpeg_bsf;
extern const AVBitStreamFilter ff_mjpega_dump_header_bsf;
extern const AVBitStreamFilter ff_mp3_header_decompress_bsf;
extern const AVBitStreamFilter ff_mpeg4_unpack_bframes_bsf;
extern const AVBitStreamFilter ff_mov2textsub_bsf;
extern const AVBitStreamFilter ff_noise_bsf;
extern const AVBitStreamFilter ff_null_bsf;
extern const AVBitStreamFilter ff_remove_extradata_bsf;
extern const AVBitStreamFilter ff_text2movsub_bsf;
extern const AVBitStreamFilter ff_vp9_raw_reorder_bsf;
extern const AVBitStreamFilter ff_vp9_superframe_bsf;
extern const AVBitStreamFilter ff_vp9_superframe_split_bsf;
#include "libavcodec/bsf_list.c"
const AVBitStreamFilter *av_bsf_next(void **opaque)
{
uintptr_t i = (uintptr_t)*opaque;
const AVBitStreamFilter *f = bitstream_filters[i];
if (f)
*opaque = (void*)(i + 1);
return f;
}
const AVBitStreamFilter *av_bsf_get_by_name(const char *name)
{
int i;
for (i = 0; bitstream_filters[i]; i++) {
const AVBitStreamFilter *f = bitstream_filters[i];
if (!strcmp(f->name, name))
return f;
}
return NULL;
}
const AVClass *ff_bsf_child_class_next(const AVClass *prev)
{
int i;
/* find the filter that corresponds to prev */
for (i = 0; prev && bitstream_filters[i]; i++) {
if (bitstream_filters[i]->priv_class == prev) {
i++;
break;
}
}
/* find next filter with priv options */
for (; bitstream_filters[i]; i++)
if (bitstream_filters[i]->priv_class)
return bitstream_filters[i]->priv_class;
return NULL;
}


@ -19,6 +19,7 @@
#ifndef AVCODEC_BLOCKDSP_H
#define AVCODEC_BLOCKDSP_H
#include <stddef.h>
#include <stdint.h>
#include "avcodec.h"
@ -29,7 +30,7 @@
* h for op_pixels_func is limited to { width / 2, width },
* but never larger than 16 and never smaller than 4. */
typedef void (*op_fill_func)(uint8_t *block /* align width (8 or 16) */,
uint8_t value, int line_size, int h);
uint8_t value, ptrdiff_t line_size, int h);
typedef struct BlockDSPContext {
void (*clear_block)(int16_t *block /* align 16 */);


@ -0,0 +1,3 @@
static const AVBitStreamFilter * const bitstream_filters[] = {
&ff_null_bsf,
NULL };


@ -94,7 +94,7 @@ DEF(unsigned int, be24, 3, AV_RB24, AV_WB24)
DEF(unsigned int, be16, 2, AV_RB16, AV_WB16)
DEF(unsigned int, byte, 1, AV_RB8 , AV_WB8)
#if HAVE_BIGENDIAN
#if AV_HAVE_BIGENDIAN
# define bytestream2_get_ne16 bytestream2_get_be16
# define bytestream2_get_ne24 bytestream2_get_be24
# define bytestream2_get_ne32 bytestream2_get_be32


@ -169,6 +169,14 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("FLV / Sorenson Spark / Sorenson H.263 (Flash Video)"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_SVG,
.type = AVMEDIA_TYPE_VIDEO,
.name = "svg",
.long_name = NULL_IF_CONFIG_SMALL("Scalable Vector Graphics"),
.props = AV_CODEC_PROP_LOSSLESS,
.mime_types= MT("image/svg+xml"),
},
{
.id = AV_CODEC_ID_SVQ1,
.type = AVMEDIA_TYPE_VIDEO,
@ -520,7 +528,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_VIDEO,
.name = "fraps",
.long_name = NULL_IF_CONFIG_SMALL("Fraps"),
.props = AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_TRUEMOTION2,
@ -1106,7 +1114,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_VIDEO,
.name = "y41p",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed YUV 4:1:1 12-bit"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_ESCAPE130,
@ -1120,56 +1128,56 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_VIDEO,
.name = "avrp",
.long_name = NULL_IF_CONFIG_SMALL("Avid 1:1 10-bit RGB Packer"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_012V,
.type = AVMEDIA_TYPE_VIDEO,
.name = "012v",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed 4:2:2 10-bit"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_AVUI,
.type = AVMEDIA_TYPE_VIDEO,
.name = "avui",
.long_name = NULL_IF_CONFIG_SMALL("Avid Meridien Uncompressed"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_AYUV,
.type = AVMEDIA_TYPE_VIDEO,
.name = "ayuv",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed packed MS 4:4:4:4"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_TARGA_Y216,
.type = AVMEDIA_TYPE_VIDEO,
.name = "targa_y216",
.long_name = NULL_IF_CONFIG_SMALL("Pinnacle TARGA CineWave YUV16"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_V308,
.type = AVMEDIA_TYPE_VIDEO,
.name = "v308",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed packed 4:4:4"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_V408,
.type = AVMEDIA_TYPE_VIDEO,
.name = "v408",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed packed QT 4:4:4:4"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_YUV4,
.type = AVMEDIA_TYPE_VIDEO,
.name = "yuv4",
.long_name = NULL_IF_CONFIG_SMALL("Uncompressed packed 4:2:0"),
.props = AV_CODEC_PROP_INTRA_ONLY,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_AVRN,
@ -1272,7 +1280,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.id = AV_CODEC_ID_HAP,
.type = AVMEDIA_TYPE_VIDEO,
.name = "hap",
.long_name = NULL_IF_CONFIG_SMALL("Vidvox Hap decoder"),
.long_name = NULL_IF_CONFIG_SMALL("Vidvox Hap"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY,
},
{
@ -1289,6 +1297,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("Screenpresso"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_SPEEDHQ,
.type = AVMEDIA_TYPE_VIDEO,
.name = "speedhq",
.long_name = NULL_IF_CONFIG_SMALL("NewTek SpeedHQ"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_WRAPPED_AVFRAME,
.type = AVMEDIA_TYPE_VIDEO,
@ -1303,6 +1318,69 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("innoHeim/Rsupport Screen Capture Codec"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PIXLET,
.type = AVMEDIA_TYPE_VIDEO,
.name = "pixlet",
.long_name = NULL_IF_CONFIG_SMALL("Apple Pixlet"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_FMVC,
.type = AVMEDIA_TYPE_VIDEO,
.name = "fmvc",
.long_name = NULL_IF_CONFIG_SMALL("FM Screen Capture Codec"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_SCPR,
.type = AVMEDIA_TYPE_VIDEO,
.name = "scpr",
.long_name = NULL_IF_CONFIG_SMALL("ScreenPressor"),
.props = AV_CODEC_PROP_LOSSLESS | AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_CLEARVIDEO,
.type = AVMEDIA_TYPE_VIDEO,
.name = "clearvideo",
.long_name = NULL_IF_CONFIG_SMALL("Iterated Systems ClearVideo"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_AV1,
.type = AVMEDIA_TYPE_VIDEO,
.name = "av1",
.long_name = NULL_IF_CONFIG_SMALL("Alliance for Open Media AV1"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_BITPACKED,
.type = AVMEDIA_TYPE_VIDEO,
.name = "bitpacked",
.long_name = NULL_IF_CONFIG_SMALL("Bitpacked"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_MSCC,
.type = AVMEDIA_TYPE_VIDEO,
.name = "mscc",
.long_name = NULL_IF_CONFIG_SMALL("Mandsoft Screen Capture Codec"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_SRGC,
.type = AVMEDIA_TYPE_VIDEO,
.name = "srgc",
.long_name = NULL_IF_CONFIG_SMALL("Screen Recorder Gold Codec"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_GDV,
.type = AVMEDIA_TYPE_VIDEO,
.name = "gdv",
.long_name = NULL_IF_CONFIG_SMALL("Gremlin Digital Video"),
.props = AV_CODEC_PROP_LOSSY,
},
/* image codecs */
{
@ -1349,6 +1427,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSY |
AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_FITS,
.type = AVMEDIA_TYPE_VIDEO,
.name = "fits",
.long_name = NULL_IF_CONFIG_SMALL("FITS (Flexible Image Transport System)"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_GIF,
.type = AVMEDIA_TYPE_VIDEO,
@ -1424,6 +1509,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("PPM (Portable PixelMap) image"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PSD,
.type = AVMEDIA_TYPE_VIDEO,
.name = "psd",
.long_name = NULL_IF_CONFIG_SMALL("Photoshop PSD file"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PTX,
.type = AVMEDIA_TYPE_VIDEO,
@ -1511,6 +1603,15 @@ static const AVCodecDescriptor codec_descriptors[] = {
.name = "xbm",
.long_name = NULL_IF_CONFIG_SMALL("XBM (X BitMap) image"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
.mime_types= MT("image/x-xbitmap"),
},
{
.id = AV_CODEC_ID_XPM,
.type = AVMEDIA_TYPE_VIDEO,
.name = "xpm",
.long_name = NULL_IF_CONFIG_SMALL("XPM (X PixMap) image"),
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
.mime_types= MT("image/x-xpixmap"),
},
{
.id = AV_CODEC_ID_XWD,
@ -1726,6 +1827,20 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("PCM signed 20|24-bit big-endian"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PCM_F16LE,
.type = AVMEDIA_TYPE_AUDIO,
.name = "pcm_f16le",
.long_name = NULL_IF_CONFIG_SMALL("PCM 16.8 floating point little-endian"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PCM_F24LE,
.type = AVMEDIA_TYPE_AUDIO,
.name = "pcm_f24le",
.long_name = NULL_IF_CONFIG_SMALL("PCM 24.0 floating point little-endian"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_PCM_F32BE,
.type = AVMEDIA_TYPE_AUDIO,
@ -2133,6 +2248,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("DPCM Squareroot-Delta-Exact"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_GREMLIN_DPCM,
.type = AVMEDIA_TYPE_AUDIO,
.name = "gremlin_dpcm",
.long_name = NULL_IF_CONFIG_SMALL("DPCM Gremlin"),
.props = AV_CODEC_PROP_LOSSY,
},
/* audio codecs */
{
@ -2226,7 +2348,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_AUDIO,
.name = "flac",
.long_name = NULL_IF_CONFIG_SMALL("FLAC (Free Lossless Audio Codec)"),
.props = AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_MP3ADU,
@ -2254,7 +2376,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_AUDIO,
.name = "alac",
.long_name = NULL_IF_CONFIG_SMALL("ALAC (Apple Lossless Audio Codec)"),
.props = AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_WESTWOOD_SND1,
@ -2296,7 +2418,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_AUDIO,
.name = "tta",
.long_name = NULL_IF_CONFIG_SMALL("TTA (True Audio)"),
.props = AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_SMACKAUDIO,
@ -2317,7 +2439,8 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_AUDIO,
.name = "wavpack",
.long_name = NULL_IF_CONFIG_SMALL("WavPack"),
.props = AV_CODEC_PROP_LOSSY | AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY |
AV_CODEC_PROP_LOSSY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_DSICINAUDIO,
@ -2426,6 +2549,20 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("ATRAC3+ (Adaptive TRansform Acoustic Coding 3+)"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_ATRAC3PAL,
.type = AVMEDIA_TYPE_AUDIO,
.name = "atrac3pal",
.long_name = NULL_IF_CONFIG_SMALL("ATRAC3+ AL (Adaptive TRansform Acoustic Coding 3+ Advanced Lossless)"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_ATRAC3AL,
.type = AVMEDIA_TYPE_AUDIO,
.name = "atrac3al",
.long_name = NULL_IF_CONFIG_SMALL("ATRAC3 AL (Adaptive TRansform Acoustic Coding 3 Advanced Lossless)"),
.props = AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_EAC3,
.type = AVMEDIA_TYPE_AUDIO,
@ -2525,6 +2662,13 @@ static const AVCodecDescriptor codec_descriptors[] = {
.long_name = NULL_IF_CONFIG_SMALL("Digital Speech Standard - Standard Play mode (DSS SP)"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_DOLBY_E,
.type = AVMEDIA_TYPE_AUDIO,
.name = "dolby_e",
.long_name = NULL_IF_CONFIG_SMALL("Dolby E"),
.props = AV_CODEC_PROP_LOSSY,
},
{
.id = AV_CODEC_ID_G729,
.type = AVMEDIA_TYPE_AUDIO,
@ -2611,7 +2755,7 @@ static const AVCodecDescriptor codec_descriptors[] = {
.type = AVMEDIA_TYPE_AUDIO,
.name = "tak",
.long_name = NULL_IF_CONFIG_SMALL("TAK (Tom's lossless Audio Kompressor)"),
.props = AV_CODEC_PROP_LOSSLESS,
.props = AV_CODEC_PROP_INTRA_ONLY | AV_CODEC_PROP_LOSSLESS,
},
{
.id = AV_CODEC_ID_METASOUND,


@ -24,6 +24,7 @@
#if !defined(AVCODEC_DCT_H) && (!defined(FFT_FLOAT) || FFT_FLOAT)
#define AVCODEC_DCT_H
#include <stddef.h>
#include <stdint.h>
#include "rdft.h"
@ -62,7 +63,7 @@ void ff_j_rev_dct(int16_t *data);
void ff_j_rev_dct4(int16_t *data);
void ff_j_rev_dct2(int16_t *data);
void ff_j_rev_dct1(int16_t *data);
void ff_jref_idct_put(uint8_t *dest, int line_size, int16_t *block);
void ff_jref_idct_add(uint8_t *dest, int line_size, int16_t *block);
void ff_jref_idct_put(uint8_t *dest, ptrdiff_t line_size, int16_t *block);
void ff_jref_idct_add(uint8_t *dest, ptrdiff_t line_size, int16_t *block);
#endif /* AVCODEC_DCT_H */

File diff not shown because it is too large.


@ -0,0 +1,39 @@
/*
* generic decoding-related code
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_DECODE_H
#define AVCODEC_DECODE_H
#include "avcodec.h"
/**
* Called by decoders to get the next packet for decoding.
*
* @param pkt An empty packet to be filled with data.
* @return 0 if a new reference has been successfully written to pkt
* AVERROR(EAGAIN) if no data is currently available
* AVERROR_EOF if the end of the stream has been reached, so no more data
* will be available
*/
int ff_decode_get_packet(AVCodecContext *avctx, AVPacket *pkt);
void ff_decode_bsfs_uninit(AVCodecContext *avctx);
#endif /* AVCODEC_DECODE_H */
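As a hedged illustration only (internal API, not taken from this patch), a decoder built on the decoupled dataflow might shape its receive_frame callback as below; decode_one() is hypothetical.

#include "avcodec.h"
#include "decode.h"

/* Hypothetical: turn one compressed packet into one output frame. */
int decode_one(AVCodecContext *avctx, AVFrame *frame, AVPacket *pkt);

/* Sketch: an AVCodec.receive_frame() implementation that pulls its own
 * input via ff_decode_get_packet(), as documented above. */
static int example_receive_frame(AVCodecContext *avctx, AVFrame *frame)
{
    AVPacket pkt = { 0 };
    int ret = ff_decode_get_packet(avctx, &pkt);
    if (ret < 0)
        return ret;   /* AVERROR(EAGAIN), AVERROR_EOF or a real error */

    ret = decode_one(avctx, frame, &pkt);
    av_packet_unref(&pkt);
    return ret;
}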


@ -73,6 +73,16 @@ AVHWAccel ff_hevc_mediacodec_hwaccel;
AVHWAccel ff_mpeg4_mediacodec_hwaccel;
AVHWAccel ff_vp8_mediacodec_hwaccel;
AVHWAccel ff_vp9_mediacodec_hwaccel;
/* Added by FFmpeg 3.4 */
AVHWAccel ff_h264_d3d11va2_hwaccel;
AVHWAccel ff_hevc_d3d11va2_hwaccel;
AVHWAccel ff_hevc_videotoolbox_hwaccel;
AVHWAccel ff_mpeg2_d3d11va2_hwaccel;
AVHWAccel ff_mpeg2_mediacodec_hwaccel;
AVHWAccel ff_vc1_d3d11va2_hwaccel;
AVHWAccel ff_vp8_qsv_hwaccel;
AVHWAccel ff_vp9_d3d11va2_hwaccel;
AVHWAccel ff_wmv3_d3d11va2_hwaccel;
AVCodec ff_a64multi_encoder;
AVCodec ff_a64multi5_encoder;
@ -741,6 +751,55 @@ AVCodec ff_pcm_s64be_decoder;
AVCodec ff_pcm_s64be_encoder;
AVCodec ff_truehd_encoder;
AVCodec ff_mlp_encoder;
/* Added by FFmpeg 3.4 */
AVCodec ff_clearvideo_decoder;
AVCodec ff_fits_encoder;
AVCodec ff_fits_decoder;
AVCodec ff_fmvc_decoder;
AVCodec ff_gdv_decoder;
AVCodec ff_h263_v4l2m2m_decoder;
AVCodec ff_h264_v4l2m2m_decoder;
AVCodec ff_h264_rkmpp_decoder;
AVCodec ff_hevc_rkmpp_decoder;
AVCodec ff_hevc_v4l2m2m_decoder;
AVCodec ff_mpeg4_v4l2m2m_decoder;
AVCodec ff_mpeg1_v4l2m2m_decoder;
AVCodec ff_mpeg2_v4l2m2m_decoder;
AVCodec ff_mpeg2_mediacodec_decoder;
AVCodec ff_mscc_decoder;
AVCodec ff_pixlet_decoder;
AVCodec ff_psd_decoder;
AVCodec ff_scpr_decoder;
AVCodec ff_speedhq_decoder;
AVCodec ff_srgc_decoder;
AVCodec ff_vc1_v4l2m2m_decoder;
AVCodec ff_vp8_rkmpp_decoder;
AVCodec ff_vp8_v4l2m2m_decoder;
AVCodec ff_vp9_rkmpp_decoder;
AVCodec ff_vp9_v4l2m2m_decoder;
AVCodec ff_bitpacked_decoder;
AVCodec ff_wrapped_avframe_decoder;
AVCodec ff_xpm_decoder;
AVCodec ff_atrac3al_decoder;
AVCodec ff_atrac3pal_decoder;
AVCodec ff_dolby_e_decoder;
AVCodec ff_opus_encoder;
AVCodec ff_qdmc_decoder;
AVCodec ff_pcm_f16le_decoder;
AVCodec ff_pcm_f24le_decoder;
AVCodec ff_gremlin_dpcm_decoder;
AVCodec ff_adpcm_g726le_encoder;
AVCodec ff_librsvg_decoder;
AVCodec ff_h263_v4l2m2m_encoder;
AVCodec ff_h264_v4l2m2m_encoder;
AVCodec ff_hevc_v4l2m2m_encoder;
AVCodec ff_mpeg2_vaapi_encoder;
AVCodec ff_mpeg4_v4l2m2m_encoder;
AVCodec ff_vp8_qsv_decoder;
AVCodec ff_vp8_v4l2m2m_encoder;
AVCodec ff_vp8_vaapi_encoder;
AVCodec ff_vp9_vaapi_encoder;
AVCodecParser ff_aac_parser;
AVCodecParser ff_aac_latm_parser;
@ -777,6 +836,10 @@ AVCodecParser ff_tak_parser;
AVCodecParser ff_vc1_parser;
AVCodecParser ff_vorbis_parser;
AVCodecParser ff_vp3_parser;
/* Added by FFmpeg 3.4 */
AVCodecParser ff_sipr_parser;
AVCodecParser ff_xma_parser;
AVBitStreamFilter ff_aac_adtstoasc_bsf;
AVBitStreamFilter ff_chomp_bsf;
AVBitStreamFilter ff_dump_extradata_bsf;
@ -806,12 +869,15 @@ int ff_thread_video_encode_frame(AVCodecContext *avctx, AVPacket *pkt, const AVF
void ff_videodsp_init_aarch64(VideoDSPContext *ctx, int bpc) {}
void ff_videodsp_init_arm(VideoDSPContext *ctx, int bpc) {}
void ff_videodsp_init_ppc(VideoDSPContext *ctx, int bpc) {}
void ff_videodsp_init_mips(VideoDSPContext *ctx, int bpc) {}
void ff_vp7dsp_init(VP8DSPContext *c) {}
void ff_vp78dsp_init_arm(VP8DSPContext *c) {}
void ff_vp78dsp_init_ppc(VP8DSPContext *c) {}
void ff_vp8dsp_init_arm(VP8DSPContext *c) {}
void ff_vp8dsp_init_mips(VP8DSPContext *c) {}
void ff_vp9dsp_init_mips(VP9DSPContext *dsp, int bpp) {}
void ff_vp9dsp_init_aarch64(VP9DSPContext *dsp, int bpp) {}
void ff_vp9dsp_init_arm(VP9DSPContext *dsp, int bpp) {}
void ff_flacdsp_init_arm(FLACDSPContext *c, enum AVSampleFormat fmt, int channels, int bps) {}
#if !defined(HAVE_64BIT_BUILD)
void ff_flac_decorrelate_indep8_16_sse2(uint8_t **out, int32_t **in, int channels, int len, int shift) {}
@ -819,11 +885,3 @@ void ff_flac_decorrelate_indep8_32_avx(uint8_t **out, int32_t **in, int channels
void ff_flac_decorrelate_indep8_16_avx(uint8_t **out, int32_t **in, int channels, int len, int shift) {}
void ff_flac_decorrelate_indep8_32_sse2(uint8_t **out, int32_t **in, int channels, int len, int shift) {}
#endif
void av_bitstream_filter_close(AVBitStreamFilterContext *bsf) {}
int av_bitstream_filter_filter(AVBitStreamFilterContext *bsfc,
AVCodecContext *avctx, const char *args,
uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size, int keyframe) { return 0; }
AVBitStreamFilterContext *av_bitstream_filter_init(const char *name) { return NULL;}
AVBitStreamFilter *av_bitstream_filter_next(const AVBitStreamFilter *f) { return NULL; }
void av_register_bitstream_filter(AVBitStreamFilter *bsf) {}


@ -57,8 +57,8 @@ typedef struct ERContext {
int *mb_index2xy;
int mb_num;
int mb_width, mb_height;
int mb_stride;
int b8_stride;
ptrdiff_t mb_stride;
ptrdiff_t b8_stride;
volatile int error_count;
int error_occurred;


@ -201,7 +201,7 @@ void ff_flac_set_channel_layout(AVCodecContext *avctx)
avctx->channel_layout = 0;
}
void ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
int ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
const uint8_t *buffer)
{
GetBitContext gb;
@ -213,6 +213,7 @@ void ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
av_log(avctx, AV_LOG_WARNING, "invalid max blocksize: %d\n",
s->max_blocksize);
s->max_blocksize = 16;
return AVERROR_INVALIDDATA;
}
skip_bits(&gb, 24); /* skip min frame size */
@ -222,6 +223,12 @@ void ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
s->channels = get_bits(&gb, 3) + 1;
s->bps = get_bits(&gb, 5) + 1;
if (s->bps < 4) {
av_log(avctx, AV_LOG_ERROR, "invalid bps: %d\n", s->bps);
s->bps = 16;
return AVERROR_INVALIDDATA;
}
avctx->channels = s->channels;
avctx->sample_rate = s->samplerate;
avctx->bits_per_raw_sample = s->bps;
@ -234,4 +241,6 @@ void ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
skip_bits_long(&gb, 64); /* md5 sum */
skip_bits_long(&gb, 64); /* md5 sum */
return 0;
}

View file

@ -95,8 +95,10 @@ typedef struct FLACFrameInfo {
* @param[out] avctx codec context to set basic stream parameters
* @param[out] s where parsed information is stored
* @param[in] buffer pointer to start of 34-byte streaminfo data
*
* @return negative error code on failure or >= 0 on success
*/
void ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
int ff_flac_parse_streaminfo(AVCodecContext *avctx, struct FLACStreaminfo *s,
const uint8_t *buffer);
/**

View file

@ -586,10 +586,12 @@ static int flac_parse(AVCodecParserContext *s, AVCodecContext *avctx,
temp = curr->next;
av_freep(&curr->link_penalty);
av_free(curr);
fpc->nb_headers_buffered--;
}
fpc->headers = fpc->best_header->next;
av_freep(&fpc->best_header->link_penalty);
av_freep(&fpc->best_header);
fpc->nb_headers_buffered--;
}
/* Find and score new headers. */
@ -638,7 +640,7 @@ static int flac_parse(AVCodecParserContext *s, AVCodecContext *avctx,
read_end - read_start, NULL);
} else {
int8_t pad[MAX_FRAME_HEADER_SIZE] = { 0 };
av_fifo_generic_write(fpc->fifo_buf, (void*) pad, sizeof(pad), NULL);
av_fifo_generic_write(fpc->fifo_buf, pad, sizeof(pad), NULL);
}
/* Tag headers and update sequences. */

View file

@ -109,7 +109,9 @@ static av_cold int flac_decode_init(AVCodecContext *avctx)
return AVERROR_INVALIDDATA;
/* initialize based on the demuxer-supplied streamdata header */
ff_flac_parse_streaminfo(avctx, &s->flac_stream_info, streaminfo);
ret = ff_flac_parse_streaminfo(avctx, &s->flac_stream_info, streaminfo);
if (ret < 0)
return ret;
ret = allocate_buffers(s);
if (ret < 0)
return ret;
@ -175,7 +177,9 @@ static int parse_streaminfo(FLACContext *s, const uint8_t *buf, int buf_size)
metadata_size != FLAC_STREAMINFO_SIZE) {
return AVERROR_INVALIDDATA;
}
ff_flac_parse_streaminfo(s->avctx, &s->flac_stream_info, &buf[8]);
ret = ff_flac_parse_streaminfo(s->avctx, &s->flac_stream_info, &buf[8]);
if (ret < 0)
return ret;
ret = allocate_buffers(s);
if (ret < 0)
return ret;
@ -201,12 +205,12 @@ static int get_metadata_size(const uint8_t *buf, int buf_size)
buf += 4;
do {
if (buf_end - buf < 4)
return 0;
return AVERROR_INVALIDDATA;
flac_parse_block_header(buf, &metadata_last, NULL, &metadata_size);
buf += 4;
if (buf_end - buf < metadata_size) {
/* need more data in order to read the complete header */
return 0;
return AVERROR_INVALIDDATA;
}
buf += metadata_size;
} while (!metadata_last);
@ -254,8 +258,15 @@ static int decode_residuals(FLACContext *s, int32_t *decoded, int pred_order)
for (; i < samples; i++)
*decoded++ = get_sbits_long(&s->gb, tmp);
} else {
int real_limit = tmp ? (INT_MAX >> tmp) + 2 : INT_MAX;
for (; i < samples; i++) {
*decoded++ = get_sr_golomb_flac(&s->gb, tmp, INT_MAX, 0);
int v = get_sr_golomb_flac(&s->gb, tmp, real_limit, 0);
if (v == 0x80000000){
av_log(s->avctx, AV_LOG_ERROR, "invalid residual\n");
return AVERROR_INVALIDDATA;
}
*decoded++ = v;
}
}
i= 0;
@ -268,7 +279,8 @@ static int decode_subframe_fixed(FLACContext *s, int32_t *decoded,
int pred_order, int bps)
{
const int blocksize = s->blocksize;
int av_uninit(a), av_uninit(b), av_uninit(c), av_uninit(d), i;
unsigned av_uninit(a), av_uninit(b), av_uninit(c), av_uninit(d);
int i;
int ret;
/* warm up samples */
@ -315,7 +327,7 @@ static int decode_subframe_fixed(FLACContext *s, int32_t *decoded,
return 0;
}
static void lpc_analyze_remodulate(int32_t *decoded, const int coeffs[32],
static void lpc_analyze_remodulate(SUINT32 *decoded, const int coeffs[32],
int order, int qlevel, int len, int bps)
{
int i, j;
@ -331,7 +343,7 @@ static void lpc_analyze_remodulate(int32_t *decoded, const int coeffs[32],
for (i = len - 1; i >= order; i--) {
int64_t p = 0;
for (j = 0; j < order; j++)
p += coeffs[j] * (int64_t)decoded[i-order+j];
p += coeffs[j] * (int64_t)(int32_t)decoded[i-order+j];
decoded[i] -= p >> qlevel;
}
for (i = order; i < len; i++, decoded++) {
@ -447,7 +459,7 @@ static inline int decode_subframe(FLACContext *s, int channel)
if (wasted) {
int i;
for (i = 0; i < s->blocksize; i++)
decoded[i] <<= wasted;
decoded[i] = (unsigned)decoded[i] << wasted;
}
return 0;

View file

@ -49,8 +49,8 @@ static void flac_lpc_16_c(int32_t *decoded, const int coeffs[32],
int i, j;
for (i = pred_order; i < len - 1; i += 2, decoded += 2) {
int c = coeffs[0];
int d = decoded[0];
SUINT c = coeffs[0];
SUINT d = decoded[0];
int s0 = 0, s1 = 0;
for (j = 1; j < pred_order; j++) {
s0 += c*d;
@ -59,15 +59,15 @@ static void flac_lpc_16_c(int32_t *decoded, const int coeffs[32],
c = coeffs[j];
}
s0 += c*d;
d = decoded[j] += s0 >> qlevel;
d = decoded[j] += (SUINT)(s0 >> qlevel);
s1 += c*d;
decoded[j + 1] += s1 >> qlevel;
decoded[j + 1] += (SUINT)(s1 >> qlevel);
}
if (i < len) {
int sum = 0;
for (j = 0; j < pred_order; j++)
sum += coeffs[j] * decoded[j];
decoded[j] += sum >> qlevel;
sum += coeffs[j] * (SUINT)decoded[j];
decoded[j] = decoded[j] + (unsigned)(sum >> qlevel);
}
}
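The SUINT casts above move the LPC multiply-accumulate into unsigned arithmetic, where overflow wraps modulo 2^32 instead of being undefined behaviour as it is for signed int. A minimal standalone sketch of the idea, not part of the patch, with SUINT assumed here to be a plain typedef for unsigned:

#include <stdint.h>
#include <stdio.h>

/* Assumption for illustration: SUINT is FFmpeg's "semantically signed,
 * stored as unsigned" helper type. */
typedef unsigned SUINT;

/* Multiply-accumulate that wraps instead of invoking signed-overflow UB.
 * A corrupt or hostile FLAC stream can then only produce a wrong sample,
 * never undefined behaviour. */
static int32_t mac_wrap(int32_t coeff, int32_t sample, int32_t acc)
{
    return (int32_t)((SUINT)acc + (SUINT)coeff * (SUINT)sample);
}

int main(void)
{
    /* 2^20 * 2^12 = 2^32 overflows a signed 32-bit multiply; here it wraps to 0. */
    printf("%d\n", mac_wrap(1 << 20, 1 << 12, 0));
    return 0;
}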

View file

@ -20,6 +20,7 @@
#define AVCODEC_FLACDSP_H
#include <stdint.h>
#include "libavutil/internal.h"
#include "libavutil/samplefmt.h"
typedef struct FLACDSPContext {

View file

@ -56,7 +56,7 @@ static void FUNC(flac_decorrelate_indep_c)(uint8_t **out, int32_t **in,
for (j = 0; j < len; j++)
for (i = 0; i < channels; i++)
S(samples, i, j) = in[i][j] << shift;
S(samples, i, j) = (int)((unsigned)in[i][j] << shift);
}
static void FUNC(flac_decorrelate_ls_c)(uint8_t **out, int32_t **in,

View file

@ -229,6 +229,20 @@ static inline int get_xbits(GetBitContext *s, int n)
return (NEG_USR32(sign ^ cache, n) ^ sign) - sign;
}
static inline int get_xbits_le(GetBitContext *s, int n)
{
register int sign;
register int32_t cache;
OPEN_READER(re, s);
av_assert2(n>0 && n<=25);
UPDATE_CACHE_LE(re, s);
cache = GET_CACHE(re, s);
sign = sign_extend(~cache, n) >> 31;
LAST_SKIP_BITS(re, s, n);
CLOSE_READER(re, s);
return (zero_extend(sign ^ cache, n) ^ sign) - sign;
}
static inline int get_sbits(GetBitContext *s, int n)
{
register int tmp;
@ -331,6 +345,7 @@ static inline void skip_bits1(GetBitContext *s)
*/
static inline unsigned int get_bits_long(GetBitContext *s, int n)
{
av_assert2(n>=0 && n<=32);
if (!n) {
return 0;
} else if (n <= MIN_CACHE_BITS) {
@ -369,6 +384,10 @@ static inline uint64_t get_bits64(GetBitContext *s, int n)
*/
static inline int get_sbits_long(GetBitContext *s, int n)
{
// sign_extend(x, 0) is undefined
if (!n)
return 0;
return sign_extend(get_bits_long(s, n), n);
}
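get_sbits_long() now guards the n == 0 case because sign_extend(x, 0) is undefined, and the new get_xbits_le() uses the same xor/subtract style of sign handling. A small self-contained sketch (the helper name is made up, not from the tree) of how an n-bit two's-complement field is sign-extended:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper, for illustration only: interpret the low n bits of
 * raw as an n-bit two's-complement value, which is what get_sbits_long()
 * does on top of get_bits_long(). */
static int sign_extend_n(uint32_t raw, int n)
{
    if (!n)                              /* mirror the new n == 0 guard */
        return 0;
    uint32_t m = 1u << (n - 1);          /* sign bit of the n-bit field */
    return (int)((int64_t)(raw ^ m) - m);/* xor/subtract sign-extension trick */
}

int main(void)
{
    /* 0b11011 is 27 as a 5-bit unsigned value, but -5 as a 5-bit signed one. */
    printf("%d\n", sign_extend_n(0x1B, 5));   /* prints -5 */
    return 0;
}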

View file

@ -314,6 +314,8 @@ static inline int get_ur_golomb_jpegls(GetBitContext *gb, int k, int limit,
log = av_log2(buf);
av_assert2(k <= 31);
if (log - k >= 32 - MIN_CACHE_BITS + (MIN_CACHE_BITS == 32) &&
32 - log < limit) {
buf >>= log - k;
@ -325,8 +327,10 @@ static inline int get_ur_golomb_jpegls(GetBitContext *gb, int k, int limit,
} else {
int i;
for (i = 0; i < limit && SHOW_UBITS(re, gb, 1) == 0; i++) {
if (gb->size_in_bits <= re_index)
if (gb->size_in_bits <= re_index) {
CLOSE_READER(re, gb);
return -1;
}
LAST_SKIP_BITS(re, gb, 1);
UPDATE_CACHE(re, gb);
}
@ -348,16 +352,17 @@ static inline int get_ur_golomb_jpegls(GetBitContext *gb, int k, int limit,
buf = 0;
}
CLOSE_READER(re, gb);
return buf + (i << k);
buf += ((SUINT)i << k);
} else if (i == limit - 1) {
buf = SHOW_UBITS(re, gb, esc_len);
LAST_SKIP_BITS(re, gb, esc_len);
CLOSE_READER(re, gb);
return buf + 1;
} else
return -1;
buf ++;
} else {
buf = -1;
}
CLOSE_READER(re, gb);
return buf;
}
}
@ -445,19 +450,20 @@ static inline int get_te(GetBitContext *s, int r, char *file, const char *func,
return i;
}
#define get_ue_golomb(a) get_ue(a, __FILE__, __PRETTY_FUNCTION__, __LINE__)
#define get_se_golomb(a) get_se(a, __FILE__, __PRETTY_FUNCTION__, __LINE__)
#define get_te_golomb(a, r) get_te(a, r, __FILE__, __PRETTY_FUNCTION__, __LINE__)
#define get_te0_golomb(a, r) get_te(a, r, __FILE__, __PRETTY_FUNCTION__, __LINE__)
#define get_ue_golomb(a) get_ue(a, __FILE__, __func__, __LINE__)
#define get_se_golomb(a) get_se(a, __FILE__, __func__, __LINE__)
#define get_te_golomb(a, r) get_te(a, r, __FILE__, __func__, __LINE__)
#define get_te0_golomb(a, r) get_te(a, r, __FILE__, __func__, __LINE__)
#endif /* TRACE */
/**
* write unsigned exp golomb code.
* write unsigned exp golomb code. 2^16 - 2 at most
*/
static inline void set_ue_golomb(PutBitContext *pb, int i)
{
av_assert2(i >= 0);
av_assert2(i <= 0xFFFE);
if (i < 256)
put_bits(pb, ff_ue_golomb_len[i], i + 1);
@ -467,6 +473,21 @@ static inline void set_ue_golomb(PutBitContext *pb, int i)
}
}
/**
* write unsigned exp golomb code. 2^32-2 at most.
*/
static inline void set_ue_golomb_long(PutBitContext *pb, uint32_t i)
{
av_assert2(i <= (UINT32_MAX - 1));
if (i < 256)
put_bits(pb, ff_ue_golomb_len[i], i + 1);
else {
int e = av_log2(i + 1);
put_bits64(pb, 2 * e + 1, i + 1);
}
}
/**
* write truncated unsigned exp golomb code.
*/
@ -486,19 +507,9 @@ static inline void set_te_golomb(PutBitContext *pb, int i, int range)
*/
static inline void set_se_golomb(PutBitContext *pb, int i)
{
#if 0
if (i <= 0)
i = -2 * i;
else
i = 2 * i - 1;
#elif 1
i = 2 * i - 1;
if (i < 0)
i ^= -1; //FIXME check if gcc does the right thing
#else
i = 2 * i - 1;
i ^= (i >> 31);
#endif
set_ue_golomb(pb, i);
}

View file

@ -19,9 +19,10 @@
#ifndef AVCODEC_H264CHROMA_H
#define AVCODEC_H264CHROMA_H
#include <stddef.h>
#include <stdint.h>
typedef void (*h264_chroma_mc_func)(uint8_t *dst/*align 8*/, uint8_t *src/*align 1*/, int srcStride, int h, int x, int y);
typedef void (*h264_chroma_mc_func)(uint8_t *dst /*align 8*/, uint8_t *src /*align 1*/, ptrdiff_t srcStride, int h, int x, int y);
typedef struct H264ChromaContext {
h264_chroma_mc_func put_h264_chroma_pixels_tab[4];

View file

@ -76,6 +76,8 @@ typedef struct HpelDSPContext {
* @param pixels source
* @param line_size number of bytes in a horizontal line of block
* @param h height
* @note The size is kept at [4][4] to match the above pixel_tabs and avoid
* out of bounds reads in the motion estimation code.
*/
op_pixels_func put_no_rnd_pixels_tab[4][4];

View file

@ -0,0 +1,24 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_HWACCEL_H
#define AVCODEC_HWACCEL_H
#define HWACCEL_CAP_ASYNC_SAFE (1 << 0)
#endif /* AVCODEC_HWACCEL_H */

View file

@ -21,6 +21,8 @@
#include <stdint.h>
#include "config.h"
#include "avcodec.h"
/**
@ -51,13 +53,13 @@ int ff_init_scantable_permutation_x86(uint8_t *idct_permutation,
typedef struct IDCTDSPContext {
/* pixel ops : interface with DCT */
void (*put_pixels_clamped)(const int16_t *block /* align 16 */,
uint8_t *pixels /* align 8 */,
uint8_t *av_restrict pixels /* align 8 */,
ptrdiff_t line_size);
void (*put_signed_pixels_clamped)(const int16_t *block /* align 16 */,
uint8_t *pixels /* align 8 */,
uint8_t *av_restrict pixels /* align 8 */,
ptrdiff_t line_size);
void (*add_pixels_clamped)(const int16_t *block /* align 16 */,
uint8_t *pixels /* align 8 */,
uint8_t *av_restrict pixels /* align 8 */,
ptrdiff_t line_size);
void (*idct)(int16_t *block /* align 16 */);
@ -68,14 +70,14 @@ typedef struct IDCTDSPContext {
* @param line_size size in bytes of a horizontal line of dest
*/
void (*idct_put)(uint8_t *dest /* align 8 */,
int line_size, int16_t *block /* align 16 */);
ptrdiff_t line_size, int16_t *block /* align 16 */);
/**
* block -> idct -> add dest -> clip to unsigned 8 bit -> dest.
* @param line_size size in bytes of a horizontal line of dest
*/
void (*idct_add)(uint8_t *dest /* align 8 */,
int line_size, int16_t *block /* align 16 */);
ptrdiff_t line_size, int16_t *block /* align 16 */);
/**
* IDCT input permutation.
@ -95,11 +97,15 @@ typedef struct IDCTDSPContext {
enum idct_permutation_type perm_type;
} IDCTDSPContext;
extern void (*ff_put_pixels_clamped)(const int16_t *block, uint8_t *pixels, ptrdiff_t line_size);
extern void (*ff_add_pixels_clamped)(const int16_t *block, uint8_t *pixels, ptrdiff_t line_size);
void ff_put_pixels_clamped_c(const int16_t *block, uint8_t *av_restrict pixels,
ptrdiff_t line_size);
void ff_add_pixels_clamped_c(const int16_t *block, uint8_t *av_restrict pixels,
ptrdiff_t line_size);
void ff_idctdsp_init(IDCTDSPContext *c, AVCodecContext *avctx);
void ff_idctdsp_init_aarch64(IDCTDSPContext *c, AVCodecContext *avctx,
unsigned high_bit_depth);
void ff_idctdsp_init_alpha(IDCTDSPContext *c, AVCodecContext *avctx,
unsigned high_bit_depth);
void ff_idctdsp_init_arm(IDCTDSPContext *c, AVCodecContext *avctx,

View file

@ -48,8 +48,8 @@
#define FF_CODEC_CAP_INIT_CLEANUP (1 << 1)
/**
* Decoders marked with FF_CODEC_CAP_SETS_PKT_DTS want to set
* AVFrame.pkt_dts manually. If the flag is set, utils.c won't overwrite
* this field. If it's unset, utils.c tries to guess the pkt_dts field
* AVFrame.pkt_dts manually. If the flag is set, decode.c won't overwrite
* this field. If it's unset, decode.c tries to guess the pkt_dts field
* from the input AVPacket.
*/
#define FF_CODEC_CAP_SETS_PKT_DTS (1 << 2)
@ -58,6 +58,16 @@
* skipped due to the skip_frame setting.
*/
#define FF_CODEC_CAP_SKIP_FRAME_FILL_PARAM (1 << 3)
/**
* The decoder sets the cropping fields in the output frames manually.
* If this cap is set, the generic code will initialize output frame
* dimensions to coded rather than display values.
*/
#define FF_CODEC_CAP_EXPORTS_CROPPING (1 << 4)
/**
* Codec initializes slice-based threading with a main function
*/
#define FF_CODEC_CAP_SLICE_THREAD_HAS_MF (1 << 5)
#ifdef TRACE
# define ff_tlog(ctx, ...) av_log(ctx, AV_LOG_TRACE, __VA_ARGS__)
@ -70,11 +80,18 @@
#define FF_DEFAULT_QUANT_BIAS 999999
#endif
#if !FF_API_QSCALE_TYPE
#define FF_QSCALE_TYPE_MPEG1 0
#define FF_QSCALE_TYPE_MPEG2 1
#define FF_QSCALE_TYPE_H264 2
#define FF_QSCALE_TYPE_VP56 3
#endif
#define FF_SANE_NB_CHANNELS 64U
#define FF_SIGNBIT(x) ((x) >> CHAR_BIT * sizeof(x) - 1)
#if HAVE_AVX
#if HAVE_SIMD_ALIGN_32
# define STRIDE_ALIGN 32
#elif HAVE_SIMD_ALIGN_16
# define STRIDE_ALIGN 16
@ -101,6 +118,16 @@ typedef struct FramePool {
int samples;
} FramePool;
typedef struct DecodeSimpleContext {
AVPacket *in_pkt;
AVFrame *out_frame;
} DecodeSimpleContext;
typedef struct DecodeFilterContext {
AVBSFContext **bsfs;
int nb_bsfs;
} DecodeFilterContext;
typedef struct AVCodecInternal {
/**
* Whether the parent AVCodecContext is a copy of the context which had
@ -137,11 +164,14 @@ typedef struct AVCodecInternal {
void *thread_ctx;
DecodeSimpleContext ds;
DecodeFilterContext filter;
/**
* Current packet as passed into the decoder, to avoid having to pass the
* packet into every function.
* Properties (timestamps+side data) extracted from the last packet passed
* for decoding.
*/
AVPacket *pkt;
AVPacket *last_pkt_props;
/**
* temporary buffer used for encoders to store their bitstream
@ -173,7 +203,23 @@ typedef struct AVCodecInternal {
int buffer_pkt_valid; // encoding: packet without data can be valid
AVFrame *buffer_frame;
int draining_done;
/* set to 1 when the caller is using the old decoding API */
int compat_decode;
int compat_decode_warned;
/* this variable is set by the decoder internals to signal to the old
* API compat wrappers the amount of data consumed from the last packet */
size_t compat_decode_consumed;
/* when a partial packet has been consumed, this stores the remaining size
* of the packet (that should be submitted in the next decode call) */
size_t compat_decode_partial_size;
AVFrame *compat_decode_frame;
int showed_multi_packet_warning;
int skip_samples_multiplier;
/* to prevent infinite loop on errors when draining */
int nb_draining_errors;
} AVCodecInternal;
struct AVCodecDefault {
@ -262,7 +308,7 @@ static av_always_inline int64_t ff_samples_to_time_base(AVCodecContext *avctx,
static av_always_inline float ff_exp2fi(int x) {
/* Normal range */
if (-126 <= x && x <= 128)
return av_int2float(x+127 << 23);
return av_int2float((x+127) << 23);
/* Too large */
else if (x > 128)
return INFINITY;
@ -327,6 +373,10 @@ int ff_set_sar(AVCodecContext *avctx, AVRational sar);
int ff_side_data_update_matrix_encoding(AVFrame *frame,
enum AVMatrixEncoding matrix_encoding);
#if FF_API_MERGE_SD
int ff_packet_split_and_drop_side_data(AVPacket *pkt);
#endif
/**
* Select the (possibly hardware accelerated) pixel format.
* This is a wrapper around AVCodecContext.get_format() and should be used
@ -361,4 +411,10 @@ int ff_side_data_set_encoder_stats(AVPacket *pkt, int quality, int64_t *error, i
int ff_alloc_a53_sei(const AVFrame *frame, size_t prefix_len,
void **data, size_t *sei_size);
/**
* Get an estimated video bitrate based on frame size, frame rate and coded
* bits per pixel.
*/
int64_t ff_guess_coded_bitrate(AVCodecContext *avctx);
#endif /* AVCODEC_INTERNAL_H */
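The ff_exp2fi() change above only adds parentheses ('+' already binds tighter than '<<'), but the trick it relies on is worth spelling out: for integer x in the normal range, (x + 127) << 23 is exactly the IEEE-754 single-precision bit pattern of 2^x. A hedged standalone check, with av_int2float() replaced by a memcpy-based stand-in:

#include <math.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for av_int2float(): reinterpret 32 bits as an IEEE-754 float. */
static float int2float(uint32_t v)
{
    float f;
    memcpy(&f, &v, sizeof(f));
    return f;
}

int main(void)
{
    /* (x + 127) is the biased exponent of 2^x with a zero mantissa, so the
     * bit pattern ((x + 127) << 23) is exactly 2^x for normal exponents. */
    for (int x = -126; x <= 127; x++) {
        float a = int2float((uint32_t)(x + 127) << 23);
        float b = ldexpf(1.0f, x);
        if (a != b) {
            printf("mismatch at %d\n", x);
            return 1;
        }
    }
    printf("2^x bit trick matches ldexpf for all normal exponents\n");
    return 0;
}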

View file

@ -25,6 +25,7 @@
#include <stdint.h>
#include "libavutil/common.h"
#include "libavutil/reverse.h"
#include "config.h"
#define MAX_NEG_CROP 1024
@ -96,15 +97,6 @@ static av_always_inline unsigned UMULH(unsigned a, unsigned b){
#define mid_pred mid_pred
static inline av_const int mid_pred(int a, int b, int c)
{
#if 0
int t= (a-b)&((a-b)>>31);
a-=t;
b+=t;
b-= (b-c)&((b-c)>>31);
b+= (a-b)&((a-b)>>31);
return b;
#else
if(a>b){
if(c>b){
if(c>a) b=a;
@ -117,7 +109,6 @@ static inline av_const int mid_pred(int a, int b, int c)
}
}
return b;
#endif
}
#endif
@ -249,4 +240,12 @@ static inline int8_t ff_u8_to_s8(uint8_t a)
return b.s8;
}
static av_always_inline uint32_t bitswap_32(uint32_t x)
{
return (uint32_t)ff_reverse[ x & 0xFF] << 24 |
(uint32_t)ff_reverse[(x >> 8) & 0xFF] << 16 |
(uint32_t)ff_reverse[(x >> 16) & 0xFF] << 8 |
(uint32_t)ff_reverse[ x >> 24];
}
#endif /* AVCODEC_MATHOPS_H */
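The new bitswap_32() mirrors all 32 bits of its argument using the byte-wise ff_reverse lookup table. A naive reference loop, shown only for illustration and without the table, computes the same mapping:

#include <stdint.h>
#include <stdio.h>

/* Reference (naive) implementation of what bitswap_32() computes with the
 * ff_reverse lookup table: mirror all 32 bits of x. */
static uint32_t bitswap_32_ref(uint32_t x)
{
    uint32_t r = 0;
    for (int i = 0; i < 32; i++) {
        r = (r << 1) | (x & 1);
        x >>= 1;
    }
    return r;
}

int main(void)
{
    printf("%08X\n", bitswap_32_ref(0x00000001)); /* 80000000 */
    printf("%08X\n", bitswap_32_ref(0x12345678)); /* 1E6A2C48 */
    return 0;
}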

View file

@ -76,7 +76,7 @@ typedef struct MECmpContext {
me_cmp_func frame_skip_cmp[6]; // only width 8 used
me_cmp_func pix_abs[2][4];
me_cmp_func median_sad[2];
me_cmp_func median_sad[6];
} MECmpContext;
void ff_me_cmp_init_static(void);

View file

@ -16,7 +16,11 @@ SOURCES += [
'avpacket.c',
'avpicture.c',
'bitstream.c',
'bitstream_filter.c',
'bitstream_filters.c',
'bsf.c',
'codec_desc.c',
'decode.c',
'dummy_funcs.c',
'flac.c',
'flac_parser.c',
@ -28,6 +32,7 @@ SOURCES += [
'imgconvert.c',
'log2_tab.c',
'mathtables.c',
'null_bsf.c',
'options.c',
'parser.c',
'profiles.c',
@ -48,10 +53,16 @@ SOURCES += [
'vp8dsp.c',
'vp9.c',
'vp9_parser.c',
'vp9block.c',
'vp9data.c',
'vp9dsp.c',
'vp9dsp_10bpp.c',
'vp9dsp_12bpp.c',
'vp9dsp_8bpp.c',
'vp9lpf.c',
'vp9mvs.c',
'vp9prob.c',
'vp9recon.c',
'xiph.c'
]

View file

@ -422,6 +422,7 @@ typedef struct MpegEncContext {
struct MJpegContext *mjpeg_ctx;
int esc_pos;
int pred;
int huffman;
/* MSMPEG4 specific */
int mv_table_index;
@ -679,7 +680,7 @@ void ff_mpv_common_end(MpegEncContext *s);
void ff_mpv_decode_defaults(MpegEncContext *s);
void ff_mpv_decode_init(MpegEncContext *s, AVCodecContext *avctx);
void ff_mpv_decode_mb(MpegEncContext *s, int16_t block[12][64]);
void ff_mpv_reconstruct_mb(MpegEncContext *s, int16_t block[12][64]);
void ff_mpv_report_decode_progress(MpegEncContext *s);
int ff_mpv_frame_start(MpegEncContext *s, AVCodecContext *avctx);

View file

@ -0,0 +1,43 @@
/*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
/**
* @file
* Null bitstream filter -- pass the input through unchanged.
*/
#include "avcodec.h"
#include "bsf.h"
static int null_filter(AVBSFContext *ctx, AVPacket *out)
{
AVPacket *in;
int ret;
ret = ff_bsf_get_packet(ctx, &in);
if (ret < 0)
return ret;
av_packet_move_ref(out, in);
av_packet_free(&in);
return 0;
}
const AVBitStreamFilter ff_null_bsf = {
.name = "null",
.filter = null_filter,
};
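The legacy av_bitstream_filter_*() entry points stubbed out earlier are superseded by the AVBSFContext API that bsf.c and this null filter implement. A minimal usage sketch, with error handling compressed and assuming the stream's AVCodecParameters are already at hand:

#include "libavcodec/avcodec.h"

/* Drive a bitstream filter through the AVBSFContext API that replaces
 * av_bitstream_filter_*(). Sketch only; a real caller must also handle
 * AVERROR(EAGAIN) / AVERROR_EOF from av_bsf_receive_packet(). */
static int run_null_bsf(const AVCodecParameters *par, AVPacket *in, AVPacket *out)
{
    const AVBitStreamFilter *f = av_bsf_get_by_name("null");
    AVBSFContext *bsf = NULL;
    int ret;

    if (!f)
        return AVERROR_BSF_NOT_FOUND;
    if ((ret = av_bsf_alloc(f, &bsf)) < 0)
        return ret;

    if ((ret = avcodec_parameters_copy(bsf->par_in, par)) < 0) /* describe input */
        goto end;
    if ((ret = av_bsf_init(bsf)) < 0)
        goto end;

    if ((ret = av_bsf_send_packet(bsf, in)) < 0)  /* feed one packet ...      */
        goto end;
    ret = av_bsf_receive_packet(bsf, out);        /* ... and fetch the result */

end:
    av_bsf_free(&bsf);
    return ret;
}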

View file

@ -119,6 +119,7 @@ static int init_context_defaults(AVCodecContext *s, const AVCodec *codec)
s->execute2 = avcodec_default_execute2;
s->sample_aspect_ratio = (AVRational){0,1};
s->pix_fmt = AV_PIX_FMT_NONE;
s->sw_pix_fmt = AV_PIX_FMT_NONE;
s->sample_fmt = AV_SAMPLE_FMT_NONE;
s->reordered_opaque = AV_NOPTS_VALUE;
@ -187,6 +188,31 @@ void avcodec_free_context(AVCodecContext **pavctx)
}
#if FF_API_COPY_CONTEXT
static void copy_context_reset(AVCodecContext *avctx)
{
int i;
av_opt_free(avctx);
#if FF_API_CODED_FRAME
FF_DISABLE_DEPRECATION_WARNINGS
av_frame_free(&avctx->coded_frame);
FF_ENABLE_DEPRECATION_WARNINGS
#endif
av_freep(&avctx->rc_override);
av_freep(&avctx->intra_matrix);
av_freep(&avctx->inter_matrix);
av_freep(&avctx->extradata);
av_freep(&avctx->subtitle_header);
av_buffer_unref(&avctx->hw_frames_ctx);
av_buffer_unref(&avctx->hw_device_ctx);
for (i = 0; i < avctx->nb_coded_side_data; i++)
av_freep(&avctx->coded_side_data[i].data);
av_freep(&avctx->coded_side_data);
avctx->subtitle_header_size = 0;
avctx->nb_coded_side_data = 0;
avctx->extradata_size = 0;
}
int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src)
{
const AVCodec *orig_codec = dest->codec;
@ -199,12 +225,7 @@ int avcodec_copy_context(AVCodecContext *dest, const AVCodecContext *src)
return AVERROR(EINVAL);
}
av_opt_free(dest);
av_freep(&dest->rc_override);
av_freep(&dest->intra_matrix);
av_freep(&dest->inter_matrix);
av_freep(&dest->extradata);
av_freep(&dest->subtitle_header);
copy_context_reset(dest);
memcpy(dest, src, sizeof(*dest));
av_opt_copy(dest, src);
@ -229,11 +250,14 @@ FF_ENABLE_DEPRECATION_WARNINGS
/* reallocate values that should be allocated separately */
dest->extradata = NULL;
dest->coded_side_data = NULL;
dest->intra_matrix = NULL;
dest->inter_matrix = NULL;
dest->rc_override = NULL;
dest->subtitle_header = NULL;
dest->hw_frames_ctx = NULL;
dest->hw_device_ctx = NULL;
dest->nb_coded_side_data = 0;
#define alloc_and_copy_or_fail(obj, size, pad) \
if (src->obj && size > 0) { \
@ -263,15 +287,7 @@ FF_ENABLE_DEPRECATION_WARNINGS
return 0;
fail:
av_freep(&dest->subtitle_header);
av_freep(&dest->rc_override);
av_freep(&dest->intra_matrix);
av_freep(&dest->inter_matrix);
av_freep(&dest->extradata);
av_buffer_unref(&dest->hw_frames_ctx);
dest->subtitle_header_size = 0;
dest->extradata_size = 0;
av_opt_free(dest);
copy_context_reset(dest);
return AVERROR(ENOMEM);
}
#endif

View file

@ -106,12 +106,12 @@ static const AVOption avcodec_options[] = {
{"umh", "umh motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_UMH }, INT_MIN, INT_MAX, V|E, "me_method" },
{"iter", "iter motion estimation", 0, AV_OPT_TYPE_CONST, {.i64 = ME_ITER }, INT_MIN, INT_MAX, V|E, "me_method" },
#endif
{"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, INT_MIN, INT_MAX},
{"time_base", NULL, OFFSET(time_base), AV_OPT_TYPE_RATIONAL, {.dbl = 0}, 0, INT_MAX},
{"g", "set the group of picture (GOP) size", OFFSET(gop_size), AV_OPT_TYPE_INT, {.i64 = 12 }, INT_MIN, INT_MAX, V|E},
{"ar", "set audio sampling rate (in Hz)", OFFSET(sample_rate), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E},
{"ac", "set number of audio channels", OFFSET(channels), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|D|E},
{"cutoff", "set cutoff bandwidth", OFFSET(cutoff), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, A|E},
{"frame_size", NULL, OFFSET(frame_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, A|E},
{"frame_size", NULL, OFFSET(frame_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX, A|E},
{"frame_number", NULL, OFFSET(frame_number), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"delay", NULL, OFFSET(delay), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"qcomp", "video quantizer scale compression (VBR). Constant of ratecontrol equation. "
@ -163,6 +163,7 @@ static const AVOption avcodec_options[] = {
{"dc_clip", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_BUG_DC_CLIP }, INT_MIN, INT_MAX, V|D, "bug"},
{"ms", "work around various bugs in Microsoft's broken decoders", 0, AV_OPT_TYPE_CONST, {.i64 = FF_BUG_MS }, INT_MIN, INT_MAX, V|D, "bug"},
{"trunc", "truncated frames", 0, AV_OPT_TYPE_CONST, {.i64 = FF_BUG_TRUNCATED}, INT_MIN, INT_MAX, V|D, "bug"},
{"iedge", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_BUG_IEDGE }, INT_MIN, INT_MAX, V|D, "bug"},
{"strict", "how strictly to follow the standards", OFFSET(strict_std_compliance), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, A|V|D|E, "strict"},
{"very", "strictly conform to a older more strict version of the spec or reference software", 0, AV_OPT_TYPE_CONST, {.i64 = FF_COMPLIANCE_VERY_STRICT }, INT_MIN, INT_MAX, A|V|D|E, "strict"},
{"strict", "strictly conform to all the things in the spec no matter what the consequences", 0, AV_OPT_TYPE_CONST, {.i64 = FF_COMPLIANCE_STRICT }, INT_MIN, INT_MAX, A|V|D|E, "strict"},
@ -179,8 +180,8 @@ static const AVOption avcodec_options[] = {
{"careful", "consider things that violate the spec, are fast to check and have not been seen in the wild as errors", 0, AV_OPT_TYPE_CONST, {.i64 = AV_EF_CAREFUL }, INT_MIN, INT_MAX, A|V|D, "err_detect"},
{"compliant", "consider all spec non compliancies as errors", 0, AV_OPT_TYPE_CONST, {.i64 = AV_EF_COMPLIANT }, INT_MIN, INT_MAX, A|V|D, "err_detect"},
{"aggressive", "consider things that a sane encoder should not do as an error", 0, AV_OPT_TYPE_CONST, {.i64 = AV_EF_AGGRESSIVE }, INT_MIN, INT_MAX, A|V|D, "err_detect"},
{"has_b_frames", NULL, OFFSET(has_b_frames), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"block_align", NULL, OFFSET(block_align), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"has_b_frames", NULL, OFFSET(has_b_frames), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX},
{"block_align", NULL, OFFSET(block_align), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX},
#if FF_API_PRIVATE_OPT
{"mpeg_quant", "use MPEG quantizers instead of H.263", OFFSET(mpeg_quant), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, V|E},
#endif
@ -246,7 +247,7 @@ static const AVOption avcodec_options[] = {
{"guess_mvs", "iterative motion vector (MV) search (slow)", 0, AV_OPT_TYPE_CONST, {.i64 = FF_EC_GUESS_MVS }, INT_MIN, INT_MAX, V|D, "ec"},
{"deblock", "use strong deblock filter for damaged MBs", 0, AV_OPT_TYPE_CONST, {.i64 = FF_EC_DEBLOCK }, INT_MIN, INT_MAX, V|D, "ec"},
{"favor_inter", "favor predicting from the previous frame", 0, AV_OPT_TYPE_CONST, {.i64 = FF_EC_FAVOR_INTER }, INT_MIN, INT_MAX, V|D, "ec"},
{"bits_per_coded_sample", NULL, OFFSET(bits_per_coded_sample), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"bits_per_coded_sample", NULL, OFFSET(bits_per_coded_sample), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX},
#if FF_API_PRIVATE_OPT
{"pred", "prediction method", OFFSET(prediction_method), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, V|E, "pred"},
{"left", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_PRED_LEFT }, INT_MIN, INT_MAX, V|E, "pred"},
@ -312,6 +313,7 @@ static const AVOption avcodec_options[] = {
#endif
{"dctmax", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_DCTMAX }, INT_MIN, INT_MAX, V|E, "cmp_func"},
{"chroma", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_CHROMA }, INT_MIN, INT_MAX, V|E, "cmp_func"},
{"msad", "sum of absolute differences, median predicted", 0, AV_OPT_TYPE_CONST, {.i64 = FF_CMP_MEDIAN_SAD }, INT_MIN, INT_MAX, V|E, "cmp_func"},
{"pre_dia_size", "diamond type & size for motion estimation pre-pass", OFFSET(pre_dia_size), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX, V|E},
{"subq", "sub-pel motion estimation quality", OFFSET(me_subpel_quality), AV_OPT_TYPE_INT, {.i64 = 8 }, INT_MIN, INT_MAX, V|E},
#if FF_API_AFD
@ -393,6 +395,7 @@ static const AVOption avcodec_options[] = {
{"mpeg4_core", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_PROFILE_MPEG4_CORE }, INT_MIN, INT_MAX, V|E, "profile"},
{"mpeg4_main", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_PROFILE_MPEG4_MAIN }, INT_MIN, INT_MAX, V|E, "profile"},
{"mpeg4_asp", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_PROFILE_MPEG4_ADVANCED_SIMPLE }, INT_MIN, INT_MAX, V|E, "profile"},
{"main10", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_PROFILE_HEVC_MAIN_10 }, INT_MIN, INT_MAX, V|E, "profile"},
{"level", NULL, OFFSET(level), AV_OPT_TYPE_INT, {.i64 = FF_LEVEL_UNKNOWN }, INT_MIN, INT_MAX, V|A|E, "level"},
{"unknown", NULL, 0, AV_OPT_TYPE_CONST, {.i64 = FF_LEVEL_UNKNOWN }, INT_MIN, INT_MAX, V|A|E, "level"},
{"lowres", "decode at 1= 1/2, 2=1/4, 3=1/8 resolutions", OFFSET(lowres), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, INT_MAX, V|A|D},
@ -443,30 +446,46 @@ static const AVOption avcodec_options[] = {
{"max_prediction_order", NULL, OFFSET(max_prediction_order), AV_OPT_TYPE_INT, {.i64 = -1 }, INT_MIN, INT_MAX, A|E},
{"timecode_frame_start", "GOP timecode frame start number, in non-drop-frame format", OFFSET(timecode_frame_start), AV_OPT_TYPE_INT64, {.i64 = -1 }, -1, INT64_MAX, V|E},
#endif
{"bits_per_raw_sample", NULL, OFFSET(bits_per_raw_sample), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, INT_MIN, INT_MAX},
{"channel_layout", NULL, OFFSET(channel_layout), AV_OPT_TYPE_INT64, {.i64 = DEFAULT }, 0, INT64_MAX, A|E|D, "channel_layout"},
{"request_channel_layout", NULL, OFFSET(request_channel_layout), AV_OPT_TYPE_INT64, {.i64 = DEFAULT }, 0, INT64_MAX, A|D, "request_channel_layout"},
{"bits_per_raw_sample", NULL, OFFSET(bits_per_raw_sample), AV_OPT_TYPE_INT, {.i64 = DEFAULT }, 0, INT_MAX},
{"channel_layout", NULL, OFFSET(channel_layout), AV_OPT_TYPE_UINT64, {.i64 = DEFAULT }, 0, UINT64_MAX, A|E|D, "channel_layout"},
{"request_channel_layout", NULL, OFFSET(request_channel_layout), AV_OPT_TYPE_UINT64, {.i64 = DEFAULT }, 0, UINT64_MAX, A|D, "request_channel_layout"},
{"rc_max_vbv_use", NULL, OFFSET(rc_max_available_vbv_use), AV_OPT_TYPE_FLOAT, {.dbl = 0 }, 0.0, FLT_MAX, V|E},
{"rc_min_vbv_use", NULL, OFFSET(rc_min_vbv_overflow_use), AV_OPT_TYPE_FLOAT, {.dbl = 3 }, 0.0, FLT_MAX, V|E},
{"ticks_per_frame", NULL, OFFSET(ticks_per_frame), AV_OPT_TYPE_INT, {.i64 = 1 }, 1, INT_MAX, A|V|E|D},
{"color_primaries", "color primaries", OFFSET(color_primaries), AV_OPT_TYPE_INT, {.i64 = AVCOL_PRI_UNSPECIFIED }, 1, AVCOL_PRI_NB-1, V|E|D, "color_primaries_type"},
{"bt709", "BT.709", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_BT709 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"unknown", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"bt470m", "BT.470 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_BT470M }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"bt470bg", "BT.470 BG", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_BT470BG }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte170m", "SMPTE 170 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE170M }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte240m", "SMPTE 240 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE240M }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"film", "Film", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_FILM }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"bt2020", "BT.2020", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_BT2020 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte428_1", "SMPTE ST 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTEST428_1 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte428", "SMPTE 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE428 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte428_1", "SMPTE 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE428 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte431", "SMPTE 431-2", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE431 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"smpte432", "SMPTE 422-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_SMPTE432 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"jedec-p22", "JEDEC P22", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_JEDEC_P22 }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_PRI_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_primaries_type"},
{"color_trc", "color transfer characteristics", OFFSET(color_trc), AV_OPT_TYPE_INT, {.i64 = AVCOL_TRC_UNSPECIFIED }, 1, AVCOL_TRC_NB-1, V|E|D, "color_trc_type"},
{"bt709", "BT.709", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT709 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"unknown", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"gamma22", "BT.470 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_GAMMA22 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"gamma28", "BT.470 BG", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_GAMMA28 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte170m", "SMPTE 170 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTE170M }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte240m", "SMPTE 240 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTE240M }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"linear", "Linear", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_LINEAR }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"log100", "Log", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_LOG }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"log316", "Log square root", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_LOG_SQRT }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"iec61966-2-4", "IEC 61966-2-4", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_IEC61966_2_4 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"bt1361e", "BT.1361", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT1361_ECG }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"iec61966-2-1", "IEC 61966-2-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_IEC61966_2_1 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"bt2020-10", "BT.2020 - 10 bit", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT2020_10 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"bt2020-12", "BT.2020 - 12 bit", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT2020_12 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte2084", "SMPTE 2084", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTE2084 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte428", "SMPTE 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTE428 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"arib-std-b67", "ARIB STD-B67", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_ARIB_STD_B67 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"log", "Log", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_LOG }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"log_sqrt", "Log square root", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_LOG_SQRT }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"iec61966_2_4", "IEC 61966-2-4", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_IEC61966_2_4 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
@ -474,31 +493,39 @@ static const AVOption avcodec_options[] = {
{"iec61966_2_1", "IEC 61966-2-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_IEC61966_2_1 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"bt2020_10bit", "BT.2020 - 10 bit", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT2020_10 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"bt2020_12bit", "BT.2020 - 12 bit", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_BT2020_12 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte2084", "SMPTE ST 2084", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTEST2084 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte428_1", "SMPTE ST 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTEST428_1 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"smpte428_1", "SMPTE 428-1", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_TRC_SMPTE428 }, INT_MIN, INT_MAX, V|E|D, "color_trc_type"},
{"colorspace", "color space", OFFSET(colorspace), AV_OPT_TYPE_INT, {.i64 = AVCOL_SPC_UNSPECIFIED }, 0, AVCOL_SPC_NB-1, V|E|D, "colorspace_type"},
{"rgb", "RGB", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_RGB }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt709", "BT.709", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT709 }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"unknown", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"fcc", "FCC", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_FCC }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt470bg", "BT.470 BG", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT470BG }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"smpte170m", "SMPTE 170 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_SMPTE170M }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"smpte240m", "SMPTE 240 M", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_SMPTE240M }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"ycocg", "YCOCG", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_YCOCG }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"ycgco", "YCGCO", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_YCGCO }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt2020nc", "BT.2020 NCL", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT2020_NCL }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt2020c", "BT.2020 CL", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT2020_CL }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"smpte2085", "SMPTE 2085", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_SMPTE2085 }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"ycocg", "YCGCO", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_YCGCO }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt2020_ncl", "BT.2020 NCL", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT2020_NCL }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"bt2020_cl", "BT.2020 CL", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_SPC_BT2020_CL }, INT_MIN, INT_MAX, V|E|D, "colorspace_type"},
{"color_range", "color range", OFFSET(color_range), AV_OPT_TYPE_INT, {.i64 = AVCOL_RANGE_UNSPECIFIED }, 0, AVCOL_RANGE_NB-1, V|E|D, "color_range_type"},
{"unknown", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"tv", "MPEG (219*2^(n-8))", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_MPEG }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"pc", "JPEG (2^n-1)", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_JPEG }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"mpeg", "MPEG (219*2^(n-8))", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_MPEG }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"jpeg", "JPEG (2^n-1)", 0, AV_OPT_TYPE_CONST, {.i64 = AVCOL_RANGE_JPEG }, INT_MIN, INT_MAX, V|E|D, "color_range_type"},
{"chroma_sample_location", "chroma sample location", OFFSET(chroma_sample_location), AV_OPT_TYPE_INT, {.i64 = AVCHROMA_LOC_UNSPECIFIED }, 0, AVCHROMA_LOC_NB-1, V|E|D, "chroma_sample_location_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"unknown", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"left", "Left", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_LEFT }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"center", "Center", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_CENTER }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"topleft", "Top-left", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_TOPLEFT }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"top", "Top", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_TOP }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"bottomleft", "Bottom-left", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_BOTTOMLEFT }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"bottom", "Bottom", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_BOTTOM }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"unspecified", "Unspecified", 0, AV_OPT_TYPE_CONST, {.i64 = AVCHROMA_LOC_UNSPECIFIED }, INT_MIN, INT_MAX, V|E|D, "chroma_sample_location_type"},
{"log_level_offset", "set the log level offset", OFFSET(log_level_offset), AV_OPT_TYPE_INT, {.i64 = 0 }, INT_MIN, INT_MAX },
{"slices", "set the number of slices, used in parallelized encoding", OFFSET(slices), AV_OPT_TYPE_INT, {.i64 = 0 }, 0, INT_MAX, V|E},
{"thread_type", "select multithreading type", OFFSET(thread_type), AV_OPT_TYPE_FLAGS, {.i64 = FF_THREAD_SLICE|FF_THREAD_FRAME }, 0, INT_MAX, V|A|E|D, "thread_type"},
@ -545,6 +572,7 @@ static const AVOption avcodec_options[] = {
{"codec_whitelist", "List of decoders that are allowed to be used", OFFSET(codec_whitelist), AV_OPT_TYPE_STRING, { .str = NULL }, CHAR_MIN, CHAR_MAX, A|V|S|D },
{"pixel_format", "set pixel format", OFFSET(pix_fmt), AV_OPT_TYPE_PIXEL_FMT, {.i64=AV_PIX_FMT_NONE}, -1, INT_MAX, 0 },
{"video_size", "set video size", OFFSET(width), AV_OPT_TYPE_IMAGE_SIZE, {.str=NULL}, 0, INT_MAX, 0 },
{"max_pixels", "Maximum number of pixels", OFFSET(max_pixels), AV_OPT_TYPE_INT64, {.i64 = INT_MAX }, 0, INT_MAX, A|V|S|D|E },
{NULL},
};

View file

@ -20,6 +20,7 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include <inttypes.h>
#include <stdint.h>
#include <string.h>
@ -251,7 +252,7 @@ int ff_combine_frame(ParseContext *pc, int next,
const uint8_t **buf, int *buf_size)
{
if (pc->overread) {
ff_dlog(NULL, "overread %d, state:%X next:%d index:%d o_index:%d\n",
ff_dlog(NULL, "overread %d, state:%"PRIX32" next:%d index:%d o_index:%d\n",
pc->overread, pc->state, next, pc->index, pc->overread_index);
ff_dlog(NULL, "%X %X %X %X\n",
(*buf)[0], (*buf)[1], (*buf)[2], (*buf)[3]);
@ -284,6 +285,8 @@ int ff_combine_frame(ParseContext *pc, int next,
return -1;
}
av_assert0(next >= 0 || pc->buffer);
*buf_size =
pc->overread_index = pc->index + next;
@ -314,7 +317,7 @@ int ff_combine_frame(ParseContext *pc, int next,
}
if (pc->overread) {
ff_dlog(NULL, "overread %d, state:%X next:%d index:%d o_index:%d\n",
ff_dlog(NULL, "overread %d, state:%"PRIX32" next:%d index:%d o_index:%d\n",
pc->overread, pc->state, next, pc->index, pc->overread_index);
ff_dlog(NULL, "%X %X %X %X\n",
(*buf)[0], (*buf)[1], (*buf)[2], (*buf)[3]);

View file

@ -21,16 +21,23 @@
#include <stdint.h>
#include "config.h"
#include "avcodec.h"
typedef struct PixblockDSPContext {
void (*get_pixels)(int16_t *block /* align 16 */,
void (*get_pixels)(int16_t *av_restrict block /* align 16 */,
const uint8_t *pixels /* align 8 */,
ptrdiff_t line_size);
void (*diff_pixels)(int16_t *block /* align 16 */,
ptrdiff_t stride);
void (*diff_pixels)(int16_t *av_restrict block /* align 16 */,
const uint8_t *s1 /* align 8 */,
const uint8_t *s2 /* align 8 */,
int stride);
ptrdiff_t stride);
void (*diff_pixels_unaligned)(int16_t *av_restrict block /* align 16 */,
const uint8_t *s1,
const uint8_t *s2,
ptrdiff_t stride);
} PixblockDSPContext;
void ff_pixblockdsp_init(PixblockDSPContext *c, AVCodecContext *avctx);

View file

@ -24,9 +24,11 @@
#include "config.h"
#include <stdatomic.h>
#include <stdint.h>
#include "avcodec.h"
#include "hwaccel.h"
#include "internal.h"
#include "pthread_internal.h"
#include "thread.h"
@ -43,6 +45,25 @@
#include "libavutil/opt.h"
#include "libavutil/thread.h"
enum {
///< Set when the thread is awaiting a packet.
STATE_INPUT_READY,
///< Set before the codec has called ff_thread_finish_setup().
STATE_SETTING_UP,
/**
* Set when the codec calls get_buffer().
* State is returned to STATE_SETTING_UP afterwards.
*/
STATE_GET_BUFFER,
/**
* Set when the codec calls get_format().
* State is returned to STATE_SETTING_UP afterwards.
*/
STATE_GET_FORMAT,
///< Set after the codec has called ff_thread_finish_setup().
STATE_SETUP_FINISHED,
};
/**
* Context used by codec threads and stored in their AVCodecInternal thread_ctx.
*/
@ -66,19 +87,7 @@ typedef struct PerThreadContext {
int got_frame; ///< The output of got_picture_ptr from the last avcodec_decode_video() call.
int result; ///< The result of the last codec decode/encode() call.
enum {
STATE_INPUT_READY, ///< Set when the thread is awaiting a packet.
STATE_SETTING_UP, ///< Set before the codec has called ff_thread_finish_setup().
STATE_GET_BUFFER, /**<
* Set when the codec calls get_buffer().
* State is returned to STATE_SETTING_UP afterwards.
*/
STATE_GET_FORMAT, /**<
* Set when the codec calls get_format().
* State is returned to STATE_SETTING_UP afterwards.
*/
STATE_SETUP_FINISHED ///< Set after the codec has called ff_thread_finish_setup().
} state;
atomic_int state;
/**
* Array of frames passed to ff_thread_release_buffer().
@ -95,6 +104,11 @@ typedef struct PerThreadContext {
enum AVPixelFormat result_format; ///< get_format() result
int die; ///< Set when the thread should exit.
int hwaccel_serializing;
int async_serializing;
atomic_int debug_threads; ///< Set if the FF_DEBUG_THREADS option is set.
} PerThreadContext;
/**
@ -105,6 +119,14 @@ typedef struct FrameThreadContext {
PerThreadContext *prev_thread; ///< The last thread submit_packet() was called on.
pthread_mutex_t buffer_mutex; ///< Mutex used to protect get/release_buffer().
/**
* This lock is used for ensuring threads run in serial when hwaccel
* is used.
*/
pthread_mutex_t hwaccel_mutex;
pthread_mutex_t async_mutex;
pthread_cond_t async_cond;
int async_lock;
int next_decoding; ///< The next context to submit a packet to.
int next_finished; ///< The next context to return output from.
@ -118,6 +140,24 @@ typedef struct FrameThreadContext {
#define THREAD_SAFE_CALLBACKS(avctx) \
((avctx)->thread_safe_callbacks || (avctx)->get_buffer2 == avcodec_default_get_buffer2)
static void async_lock(FrameThreadContext *fctx)
{
pthread_mutex_lock(&fctx->async_mutex);
while (fctx->async_lock)
pthread_cond_wait(&fctx->async_cond, &fctx->async_mutex);
fctx->async_lock = 1;
pthread_mutex_unlock(&fctx->async_mutex);
}
static void async_unlock(FrameThreadContext *fctx)
{
pthread_mutex_lock(&fctx->async_mutex);
av_assert0(fctx->async_lock);
fctx->async_lock = 0;
pthread_cond_broadcast(&fctx->async_cond);
pthread_mutex_unlock(&fctx->async_mutex);
}
/**
* Codec worker thread.
*
@ -133,14 +173,29 @@ static attribute_align_arg void *frame_worker_thread(void *arg)
pthread_mutex_lock(&p->mutex);
while (1) {
while (p->state == STATE_INPUT_READY && !p->die)
pthread_cond_wait(&p->input_cond, &p->mutex);
while (atomic_load(&p->state) == STATE_INPUT_READY && !p->die)
pthread_cond_wait(&p->input_cond, &p->mutex);
if (p->die) break;
if (!codec->update_thread_context && THREAD_SAFE_CALLBACKS(avctx))
ff_thread_finish_setup(avctx);
/* If a decoder supports hwaccel, then it must call ff_get_format().
* Since that call must happen before ff_thread_finish_setup(), the
* decoder is required to implement update_thread_context() and call
* ff_thread_finish_setup() manually. Therefore the above
* ff_thread_finish_setup() call did not happen and hwaccel_serializing
* cannot be true here. */
av_assert0(!p->hwaccel_serializing);
/* if the previous thread uses hwaccel then we take the lock to ensure
* the threads don't run concurrently */
if (avctx->hwaccel) {
pthread_mutex_lock(&p->parent->hwaccel_mutex);
p->hwaccel_serializing = 1;
}
av_frame_unref(p->frame);
p->got_frame = 0;
p->result = codec->decode(avctx, p->frame, &p->got_frame, &p->avpkt);
@ -152,17 +207,23 @@ static attribute_align_arg void *frame_worker_thread(void *arg)
av_frame_unref(p->frame);
}
if (p->state == STATE_SETTING_UP) ff_thread_finish_setup(avctx);
if (atomic_load(&p->state) == STATE_SETTING_UP)
ff_thread_finish_setup(avctx);
if (p->hwaccel_serializing) {
p->hwaccel_serializing = 0;
pthread_mutex_unlock(&p->parent->hwaccel_mutex);
}
if (p->async_serializing) {
p->async_serializing = 0;
async_unlock(p->parent);
}
pthread_mutex_lock(&p->progress_mutex);
#if 0 //BUFREF-FIXME
for (i = 0; i < MAX_BUFFERS; i++)
if (p->progress_used[i] && (p->got_frame || p->result<0 || avctx->codec_id != AV_CODEC_ID_H264)) {
p->progress[i][0] = INT_MAX;
p->progress[i][1] = INT_MAX;
}
#endif
p->state = STATE_INPUT_READY;
atomic_store(&p->state, STATE_INPUT_READY);
pthread_cond_broadcast(&p->progress_cond);
pthread_cond_signal(&p->output_cond);
@ -185,12 +246,13 @@ static int update_context_from_thread(AVCodecContext *dst, AVCodecContext *src,
{
int err = 0;
if (dst != src) {
if (dst != src && (for_user || !(av_codec_get_codec_descriptor(src)->props & AV_CODEC_PROP_INTRA_ONLY))) {
dst->time_base = src->time_base;
dst->framerate = src->framerate;
dst->width = src->width;
dst->height = src->height;
dst->pix_fmt = src->pix_fmt;
dst->sw_pix_fmt = src->sw_pix_fmt;
dst->coded_width = src->coded_width;
dst->coded_height = src->coded_height;
@ -226,6 +288,19 @@ FF_ENABLE_DEPRECATION_WARNINGS
dst->sample_fmt = src->sample_fmt;
dst->channel_layout = src->channel_layout;
dst->internal->hwaccel_priv_data = src->internal->hwaccel_priv_data;
if (!!dst->hw_frames_ctx != !!src->hw_frames_ctx ||
(dst->hw_frames_ctx && dst->hw_frames_ctx->data != src->hw_frames_ctx->data)) {
av_buffer_unref(&dst->hw_frames_ctx);
if (src->hw_frames_ctx) {
dst->hw_frames_ctx = av_buffer_ref(src->hw_frames_ctx);
if (!dst->hw_frames_ctx)
return AVERROR(ENOMEM);
}
}
dst->hwaccel_flags = src->hwaccel_flags;
}
if (for_user) {
@ -307,24 +382,35 @@ static void release_delayed_buffers(PerThreadContext *p)
}
}
static int submit_packet(PerThreadContext *p, AVPacket *avpkt)
static int submit_packet(PerThreadContext *p, AVCodecContext *user_avctx,
AVPacket *avpkt)
{
FrameThreadContext *fctx = p->parent;
PerThreadContext *prev_thread = fctx->prev_thread;
const AVCodec *codec = p->avctx->codec;
int ret;
if (!avpkt->size && !(codec->capabilities & AV_CODEC_CAP_DELAY))
return 0;
pthread_mutex_lock(&p->mutex);
ret = update_context_from_user(p->avctx, user_avctx);
if (ret) {
pthread_mutex_unlock(&p->mutex);
return ret;
}
atomic_store_explicit(&p->debug_threads,
(p->avctx->debug & FF_DEBUG_THREADS) != 0,
memory_order_relaxed);
release_delayed_buffers(p);
if (prev_thread) {
int err;
if (prev_thread->state == STATE_SETTING_UP) {
if (atomic_load(&prev_thread->state) == STATE_SETTING_UP) {
pthread_mutex_lock(&prev_thread->progress_mutex);
while (prev_thread->state == STATE_SETTING_UP)
while (atomic_load(&prev_thread->state) == STATE_SETTING_UP)
pthread_cond_wait(&prev_thread->progress_cond, &prev_thread->progress_mutex);
pthread_mutex_unlock(&prev_thread->progress_mutex);
}
@ -337,9 +423,14 @@ static int submit_packet(PerThreadContext *p, AVPacket *avpkt)
}
av_packet_unref(&p->avpkt);
av_packet_ref(&p->avpkt, avpkt);
ret = av_packet_ref(&p->avpkt, avpkt);
if (ret < 0) {
pthread_mutex_unlock(&p->mutex);
av_log(p->avctx, AV_LOG_ERROR, "av_packet_ref() failed in submit_packet()\n");
return ret;
}
p->state = STATE_SETTING_UP;
atomic_store(&p->state, STATE_SETTING_UP);
pthread_cond_signal(&p->input_cond);
pthread_mutex_unlock(&p->mutex);
@ -352,13 +443,13 @@ static int submit_packet(PerThreadContext *p, AVPacket *avpkt)
if (!p->avctx->thread_safe_callbacks && (
p->avctx->get_format != avcodec_default_get_format ||
p->avctx->get_buffer2 != avcodec_default_get_buffer2)) {
while (p->state != STATE_SETUP_FINISHED && p->state != STATE_INPUT_READY) {
while (atomic_load(&p->state) != STATE_SETUP_FINISHED && atomic_load(&p->state) != STATE_INPUT_READY) {
int call_done = 1;
pthread_mutex_lock(&p->progress_mutex);
while (p->state == STATE_SETTING_UP)
while (atomic_load(&p->state) == STATE_SETTING_UP)
pthread_cond_wait(&p->progress_cond, &p->progress_mutex);
switch (p->state) {
switch (atomic_load_explicit(&p->state, memory_order_acquire)) {
case STATE_GET_BUFFER:
p->result = ff_get_buffer(p->avctx, p->requested_frame, p->requested_flags);
break;
@ -370,7 +461,7 @@ static int submit_packet(PerThreadContext *p, AVPacket *avpkt)
break;
}
if (call_done) {
p->state = STATE_SETTING_UP;
atomic_store(&p->state, STATE_SETTING_UP);
pthread_cond_signal(&p->progress_cond);
}
pthread_mutex_unlock(&p->progress_mutex);
@ -392,15 +483,18 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
PerThreadContext *p;
int err;
/* release the async lock, permitting blocked hwaccel threads to
* go forward while we are in this function */
async_unlock(fctx);
/*
* Submit a packet to the next decoding thread.
*/
p = &fctx->threads[fctx->next_decoding];
err = update_context_from_user(p->avctx, avctx);
if (err) return err;
err = submit_packet(p, avpkt);
if (err) return err;
err = submit_packet(p, avctx, avpkt);
if (err)
goto finish;
/*
* If we're still receiving the initial packets, don't return a frame.
@ -411,23 +505,25 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
if (fctx->delaying) {
*got_picture_ptr=0;
if (avpkt->size)
return avpkt->size;
if (avpkt->size) {
err = avpkt->size;
goto finish;
}
}
/*
* Return the next available frame from the oldest thread.
* If we're at the end of the stream, then we have to skip threads that
* didn't output a frame, because we don't want to accidentally signal
* EOF (avpkt->size == 0 && *got_picture_ptr == 0).
* didn't output a frame/error, because we don't want to accidentally signal
* EOF (avpkt->size == 0 && *got_picture_ptr == 0 && err >= 0).
*/
do {
p = &fctx->threads[finished++];
if (p->state != STATE_INPUT_READY) {
if (atomic_load(&p->state) != STATE_INPUT_READY) {
pthread_mutex_lock(&p->progress_mutex);
while (p->state != STATE_INPUT_READY)
while (atomic_load_explicit(&p->state, memory_order_relaxed) != STATE_INPUT_READY)
pthread_cond_wait(&p->output_cond, &p->progress_mutex);
pthread_mutex_unlock(&p->progress_mutex);
}
@ -435,20 +531,19 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
av_frame_move_ref(picture, p->frame);
*got_picture_ptr = p->got_frame;
picture->pkt_dts = p->avpkt.dts;
if (p->result < 0)
err = p->result;
err = p->result;
/*
* A later call with avkpt->size == 0 may loop over all threads,
* including this one, searching for a frame to return before being
* including this one, searching for a frame/error to return before being
* stopped by the "finished != fctx->next_finished" condition.
* Make sure we don't mistakenly return the same frame again.
* Make sure we don't mistakenly return the same frame/error again.
*/
p->got_frame = 0;
p->result = 0;
if (finished >= avctx->thread_count) finished = 0;
} while (!avpkt->size && !*got_picture_ptr && finished != fctx->next_finished);
} while (!avpkt->size && !*got_picture_ptr && err >= 0 && finished != fctx->next_finished);
update_context_from_thread(avctx, p->avctx, 1);
@ -456,32 +551,33 @@ int ff_thread_decode_frame(AVCodecContext *avctx,
fctx->next_finished = finished;
/*
* When no frame was found while flushing, but an error occurred in
* any thread, return it instead of 0.
* Otherwise the error can get lost.
*/
if (!avpkt->size && !*got_picture_ptr)
return err;
/* return the size of the consumed packet if no error occurred */
return (p->result >= 0) ? avpkt->size : p->result;
if (err >= 0)
err = avpkt->size;
finish:
async_lock(fctx);
return err;
}
void ff_thread_report_progress(ThreadFrame *f, int n, int field)
{
PerThreadContext *p;
volatile int *progress = f->progress ? (int*)f->progress->data : NULL;
atomic_int *progress = f->progress ? (atomic_int*)f->progress->data : NULL;
if (!progress || progress[field] >= n) return;
if (!progress ||
atomic_load_explicit(&progress[field], memory_order_relaxed) >= n)
return;
p = f->owner->internal->thread_ctx;
p = f->owner[field]->internal->thread_ctx;
if (f->owner->debug&FF_DEBUG_THREADS)
av_log(f->owner, AV_LOG_DEBUG, "%p finished %d field %d\n", progress, n, field);
if (atomic_load_explicit(&p->debug_threads, memory_order_relaxed))
av_log(f->owner[field], AV_LOG_DEBUG,
"%p finished %d field %d\n", progress, n, field);
pthread_mutex_lock(&p->progress_mutex);
progress[field] = n;
atomic_store_explicit(&progress[field], n, memory_order_release);
pthread_cond_broadcast(&p->progress_cond);
pthread_mutex_unlock(&p->progress_mutex);
}
@ -489,17 +585,20 @@ void ff_thread_report_progress(ThreadFrame *f, int n, int field)
void ff_thread_await_progress(ThreadFrame *f, int n, int field)
{
PerThreadContext *p;
volatile int *progress = f->progress ? (int*)f->progress->data : NULL;
atomic_int *progress = f->progress ? (atomic_int*)f->progress->data : NULL;
if (!progress || progress[field] >= n) return;
if (!progress ||
atomic_load_explicit(&progress[field], memory_order_acquire) >= n)
return;
p = f->owner->internal->thread_ctx;
p = f->owner[field]->internal->thread_ctx;
if (f->owner->debug&FF_DEBUG_THREADS)
av_log(f->owner, AV_LOG_DEBUG, "thread awaiting %d field %d from %p\n", n, field, progress);
if (atomic_load_explicit(&p->debug_threads, memory_order_relaxed))
av_log(f->owner[field], AV_LOG_DEBUG,
"thread awaiting %d field %d from %p\n", n, field, progress);
pthread_mutex_lock(&p->progress_mutex);
while (progress[field] < n)
while (atomic_load_explicit(&progress[field], memory_order_relaxed) < n)
pthread_cond_wait(&p->progress_cond, &p->progress_mutex);
pthread_mutex_unlock(&p->progress_mutex);
}
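
For context, these two helpers are always used as a pair in frame-threaded decoders: the thread that owns a frame reports how many rows are complete, and any thread reading from that frame as a reference blocks until the rows it needs have been reported. A minimal usage sketch (the ThreadFrame pointers and row values are hypothetical; only the prototypes above come from the patch):

    /* producer thread: rows 0..mb_row of *f are now fully decoded */
    ff_thread_report_progress(f, mb_row, 0);

    /* consumer thread: wait until the owner of *ref has reported at least ref_row */
    ff_thread_await_progress(ref, ref_row, 0);

With the change above, the fast path of both calls is a lock-free atomic load; the mutex and condition variable are only touched when a thread actually has to wait or wake a waiter.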
@ -509,12 +608,26 @@ void ff_thread_finish_setup(AVCodecContext *avctx) {
if (!(avctx->active_thread_type&FF_THREAD_FRAME)) return;
if(p->state == STATE_SETUP_FINISHED){
av_log(avctx, AV_LOG_WARNING, "Multiple ff_thread_finish_setup() calls\n");
if (avctx->hwaccel && !p->hwaccel_serializing) {
pthread_mutex_lock(&p->parent->hwaccel_mutex);
p->hwaccel_serializing = 1;
}
/* this assumes that no hwaccel calls happen before ff_thread_finish_setup() */
if (avctx->hwaccel &&
!(avctx->hwaccel->caps_internal & HWACCEL_CAP_ASYNC_SAFE)) {
p->async_serializing = 1;
async_lock(p->parent);
}
pthread_mutex_lock(&p->progress_mutex);
p->state = STATE_SETUP_FINISHED;
if(atomic_load(&p->state) == STATE_SETUP_FINISHED){
av_log(avctx, AV_LOG_WARNING, "Multiple ff_thread_finish_setup() calls\n");
}
atomic_store(&p->state, STATE_SETUP_FINISHED);
pthread_cond_broadcast(&p->progress_cond);
pthread_mutex_unlock(&p->progress_mutex);
}
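
As a reminder of the contract enforced here (illustrative fragment, not from this patch): a frame-threaded decoder calls ff_thread_finish_setup() once per frame, after its output buffer and all state shared with later frames have been set up, so the main thread may submit the next packet to another worker. A hypothetical decoder body might look like:

    static int hypothetical_decode_frame(AVCodecContext *avctx, void *data,
                                         int *got_frame, AVPacket *avpkt)
    {
        /* ...parse the frame header, call ff_thread_get_buffer(), update
         * any state that later frames will read... */

        /* hand-off point: only per-frame state may be touched after this */
        ff_thread_finish_setup(avctx);

        /* ...decode macroblocks into the buffer obtained above... */
        return avpkt->size;
    }

The warning path above still fires if a decoder calls it twice, and the hwaccel/async locking is taken before the state flips to STATE_SETUP_FINISHED.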
@ -524,17 +637,21 @@ static void park_frame_worker_threads(FrameThreadContext *fctx, int thread_count
{
int i;
async_unlock(fctx);
for (i = 0; i < thread_count; i++) {
PerThreadContext *p = &fctx->threads[i];
if (p->state != STATE_INPUT_READY) {
if (atomic_load(&p->state) != STATE_INPUT_READY) {
pthread_mutex_lock(&p->progress_mutex);
while (p->state != STATE_INPUT_READY)
while (atomic_load(&p->state) != STATE_INPUT_READY)
pthread_cond_wait(&p->output_cond, &p->progress_mutex);
pthread_mutex_unlock(&p->progress_mutex);
}
p->got_frame = 0;
}
async_lock(fctx);
}
void ff_frame_thread_free(AVCodecContext *avctx, int thread_count)
@ -587,13 +704,20 @@ void ff_frame_thread_free(AVCodecContext *avctx, int thread_count)
av_freep(&p->avctx->slice_offset);
}
if (p->avctx)
if (p->avctx) {
av_freep(&p->avctx->internal);
av_buffer_unref(&p->avctx->hw_frames_ctx);
}
av_freep(&p->avctx);
}
av_freep(&fctx->threads);
pthread_mutex_destroy(&fctx->buffer_mutex);
pthread_mutex_destroy(&fctx->hwaccel_mutex);
pthread_mutex_destroy(&fctx->async_mutex);
pthread_cond_destroy(&fctx->async_cond);
av_freep(&avctx->internal->thread_ctx);
if (avctx->priv_data && avctx->codec && avctx->codec->priv_class)
@ -615,8 +739,10 @@ int ff_frame_thread_init(AVCodecContext *avctx)
if (!thread_count) {
int nb_cpus = av_cpu_count();
#if FF_API_DEBUG_MV
if ((avctx->debug & (FF_DEBUG_VIS_QP | FF_DEBUG_VIS_MB_TYPE)) || avctx->debug_mv)
nb_cpus = 1;
#endif
// use number of cores + 1 as thread count if there is more than one
if (nb_cpus > 1)
thread_count = avctx->thread_count = FFMIN(nb_cpus + 1, MAX_AUTO_THREADS);
@ -640,6 +766,11 @@ int ff_frame_thread_init(AVCodecContext *avctx)
}
pthread_mutex_init(&fctx->buffer_mutex, NULL);
pthread_mutex_init(&fctx->hwaccel_mutex, NULL);
pthread_mutex_init(&fctx->async_mutex, NULL);
pthread_cond_init(&fctx->async_cond, NULL);
fctx->async_lock = 1;
fctx->delaying = 1;
for (i = 0; i < thread_count; i++) {
@ -677,7 +808,7 @@ int ff_frame_thread_init(AVCodecContext *avctx)
}
*copy->internal = *src->internal;
copy->internal->thread_ctx = p;
copy->internal->pkt = &p->avpkt;
copy->internal->last_pkt_props = &p->avpkt;
if (!i) {
src = copy;
@ -701,6 +832,8 @@ int ff_frame_thread_init(AVCodecContext *avctx)
if (err) goto error;
atomic_init(&p->debug_threads, (copy->debug & FF_DEBUG_THREADS) != 0);
err = AVERROR(pthread_create(&p->thread, NULL, frame_worker_thread, p));
p->thread_init= !err;
if(!p->thread_init)
@ -736,6 +869,7 @@ void ff_thread_flush(AVCodecContext *avctx)
// Make sure decode flush calls with size=0 won't return old frames
p->got_frame = 0;
av_frame_unref(p->frame);
p->result = 0;
release_delayed_buffers(p);
@ -747,7 +881,7 @@ void ff_thread_flush(AVCodecContext *avctx)
int ff_thread_can_start_frame(AVCodecContext *avctx)
{
PerThreadContext *p = avctx->internal->thread_ctx;
if ((avctx->active_thread_type&FF_THREAD_FRAME) && p->state != STATE_SETTING_UP &&
if ((avctx->active_thread_type&FF_THREAD_FRAME) && atomic_load(&p->state) != STATE_SETTING_UP &&
(avctx->codec->update_thread_context || !THREAD_SAFE_CALLBACKS(avctx))) {
return 0;
}
@ -759,28 +893,29 @@ static int thread_get_buffer_internal(AVCodecContext *avctx, ThreadFrame *f, int
PerThreadContext *p = avctx->internal->thread_ctx;
int err;
f->owner = avctx;
f->owner[0] = f->owner[1] = avctx;
ff_init_buffer_info(avctx, f->f);
if (!(avctx->active_thread_type & FF_THREAD_FRAME))
return ff_get_buffer(avctx, f->f, flags);
if (p->state != STATE_SETTING_UP &&
if (atomic_load(&p->state) != STATE_SETTING_UP &&
(avctx->codec->update_thread_context || !THREAD_SAFE_CALLBACKS(avctx))) {
av_log(avctx, AV_LOG_ERROR, "get_buffer() cannot be called after ff_thread_finish_setup()\n");
return -1;
}
if (avctx->internal->allocate_progress) {
int *progress;
f->progress = av_buffer_alloc(2 * sizeof(int));
atomic_int *progress;
f->progress = av_buffer_alloc(2 * sizeof(*progress));
if (!f->progress) {
return AVERROR(ENOMEM);
}
progress = (int*)f->progress->data;
progress = (atomic_int*)f->progress->data;
progress[0] = progress[1] = -1;
atomic_init(&progress[0], -1);
atomic_init(&progress[1], -1);
}
pthread_mutex_lock(&p->parent->buffer_mutex);
@ -791,10 +926,10 @@ static int thread_get_buffer_internal(AVCodecContext *avctx, ThreadFrame *f, int
pthread_mutex_lock(&p->progress_mutex);
p->requested_frame = f->f;
p->requested_flags = flags;
p->state = STATE_GET_BUFFER;
atomic_store_explicit(&p->state, STATE_GET_BUFFER, memory_order_release);
pthread_cond_broadcast(&p->progress_cond);
while (p->state != STATE_SETTING_UP)
while (atomic_load(&p->state) != STATE_SETTING_UP)
pthread_cond_wait(&p->progress_cond, &p->progress_mutex);
err = p->result;
@ -819,16 +954,16 @@ enum AVPixelFormat ff_thread_get_format(AVCodecContext *avctx, const enum AVPixe
if (!(avctx->active_thread_type & FF_THREAD_FRAME) || avctx->thread_safe_callbacks ||
avctx->get_format == avcodec_default_get_format)
return ff_get_format(avctx, fmt);
if (p->state != STATE_SETTING_UP) {
if (atomic_load(&p->state) != STATE_SETTING_UP) {
av_log(avctx, AV_LOG_ERROR, "get_format() cannot be called after ff_thread_finish_setup()\n");
return -1;
}
pthread_mutex_lock(&p->progress_mutex);
p->available_formats = fmt;
p->state = STATE_GET_FORMAT;
atomic_store(&p->state, STATE_GET_FORMAT);
pthread_cond_broadcast(&p->progress_cond);
while (p->state != STATE_SETTING_UP)
while (atomic_load(&p->state) != STATE_SETTING_UP)
pthread_cond_wait(&p->progress_cond, &p->progress_mutex);
res = p->result_format;
@ -862,7 +997,7 @@ void ff_thread_release_buffer(AVCodecContext *avctx, ThreadFrame *f)
av_log(avctx, AV_LOG_DEBUG, "thread_release_buffer called on pic %p\n", f);
av_buffer_unref(&f->progress);
f->owner = NULL;
f->owner[0] = f->owner[1] = NULL;
if (can_direct_free) {
av_frame_unref(f->f);

View file

@ -34,26 +34,21 @@
#include "libavutil/cpu.h"
#include "libavutil/mem.h"
#include "libavutil/thread.h"
#include "libavutil/slicethread.h"
typedef int (action_func)(AVCodecContext *c, void *arg);
typedef int (action_func2)(AVCodecContext *c, void *arg, int jobnr, int threadnr);
typedef int (main_func)(AVCodecContext *c);
typedef struct SliceThreadContext {
pthread_t *workers;
AVSliceThread *thread;
action_func *func;
action_func2 *func2;
main_func *mainfunc;
void *args;
int *rets;
int job_count;
int job_size;
pthread_cond_t last_job_cond;
pthread_cond_t current_job_cond;
pthread_mutex_t current_job_lock;
unsigned current_execute;
int current_job;
int done;
int *entries;
int entries_count;
int thread_count;
@ -61,43 +56,22 @@ typedef struct SliceThreadContext {
pthread_mutex_t *progress_mutex;
} SliceThreadContext;
static void* attribute_align_arg worker(void *v)
{
AVCodecContext *avctx = v;
static void main_function(void *priv) {
AVCodecContext *avctx = priv;
SliceThreadContext *c = avctx->internal->thread_ctx;
unsigned last_execute = 0;
int our_job = c->job_count;
int thread_count = avctx->thread_count;
int self_id;
c->mainfunc(avctx);
}
pthread_mutex_lock(&c->current_job_lock);
self_id = c->current_job++;
for (;;){
int ret;
while (our_job >= c->job_count) {
if (c->current_job == thread_count + c->job_count)
pthread_cond_signal(&c->last_job_cond);
static void worker_func(void *priv, int jobnr, int threadnr, int nb_jobs, int nb_threads)
{
AVCodecContext *avctx = priv;
SliceThreadContext *c = avctx->internal->thread_ctx;
int ret;
while (last_execute == c->current_execute && !c->done)
pthread_cond_wait(&c->current_job_cond, &c->current_job_lock);
last_execute = c->current_execute;
our_job = self_id;
if (c->done) {
pthread_mutex_unlock(&c->current_job_lock);
return NULL;
}
}
pthread_mutex_unlock(&c->current_job_lock);
ret = c->func ? c->func(avctx, (char*)c->args + our_job*c->job_size):
c->func2(avctx, c->args, our_job, self_id);
if (c->rets)
c->rets[our_job%c->job_count] = ret;
pthread_mutex_lock(&c->current_job_lock);
our_job = c->current_job++;
}
ret = c->func ? c->func(avctx, (char *)c->args + c->job_size * jobnr)
: c->func2(avctx, c->args, jobnr, threadnr);
if (c->rets)
c->rets[jobnr] = ret;
}
void ff_slice_thread_free(AVCodecContext *avctx)
@ -105,40 +79,19 @@ void ff_slice_thread_free(AVCodecContext *avctx)
SliceThreadContext *c = avctx->internal->thread_ctx;
int i;
pthread_mutex_lock(&c->current_job_lock);
c->done = 1;
pthread_cond_broadcast(&c->current_job_cond);
for (i = 0; i < c->thread_count; i++)
pthread_cond_broadcast(&c->progress_cond[i]);
pthread_mutex_unlock(&c->current_job_lock);
for (i=0; i<avctx->thread_count; i++)
pthread_join(c->workers[i], NULL);
avpriv_slicethread_free(&c->thread);
for (i = 0; i < c->thread_count; i++) {
pthread_mutex_destroy(&c->progress_mutex[i]);
pthread_cond_destroy(&c->progress_cond[i]);
}
pthread_mutex_destroy(&c->current_job_lock);
pthread_cond_destroy(&c->current_job_cond);
pthread_cond_destroy(&c->last_job_cond);
av_freep(&c->entries);
av_freep(&c->progress_mutex);
av_freep(&c->progress_cond);
av_freep(&c->workers);
av_freep(&avctx->internal->thread_ctx);
}
static av_always_inline void thread_park_workers(SliceThreadContext *c, int thread_count)
{
while (c->current_job != thread_count + c->job_count)
pthread_cond_wait(&c->last_job_cond, &c->current_job_lock);
pthread_mutex_unlock(&c->current_job_lock);
}
static int thread_execute(AVCodecContext *avctx, action_func* func, void *arg, int *ret, int job_count, int job_size)
{
SliceThreadContext *c = avctx->internal->thread_ctx;
@ -149,23 +102,12 @@ static int thread_execute(AVCodecContext *avctx, action_func* func, void *arg, i
if (job_count <= 0)
return 0;
pthread_mutex_lock(&c->current_job_lock);
c->current_job = avctx->thread_count;
c->job_count = job_count;
c->job_size = job_size;
c->args = arg;
c->func = func;
if (ret) {
c->rets = ret;
} else {
c->rets = NULL;
}
c->current_execute++;
pthread_cond_broadcast(&c->current_job_cond);
thread_park_workers(c, avctx->thread_count);
c->rets = ret;
avpriv_slicethread_execute(c->thread, job_count, !!c->mainfunc );
return 0;
}
@ -176,11 +118,19 @@ static int thread_execute2(AVCodecContext *avctx, action_func2* func2, void *arg
return thread_execute(avctx, NULL, arg, ret, job_count, 0);
}
int ff_slice_thread_execute_with_mainfunc(AVCodecContext *avctx, action_func2* func2, main_func *mainfunc, void *arg, int *ret, int job_count)
{
SliceThreadContext *c = avctx->internal->thread_ctx;
c->func2 = func2;
c->mainfunc = mainfunc;
return thread_execute(avctx, NULL, arg, ret, job_count, 0);
}
int ff_slice_thread_init(AVCodecContext *avctx)
{
int i;
SliceThreadContext *c;
int thread_count = avctx->thread_count;
static void (*mainfunc)(void *);
#if HAVE_W32THREADS
w32thread_init();
@ -208,35 +158,17 @@ int ff_slice_thread_init(AVCodecContext *avctx)
return 0;
}
c = av_mallocz(sizeof(SliceThreadContext));
if (!c)
return -1;
c->workers = av_mallocz_array(thread_count, sizeof(pthread_t));
if (!c->workers) {
av_free(c);
return -1;
avctx->internal->thread_ctx = c = av_mallocz(sizeof(*c));
mainfunc = avctx->codec->caps_internal & FF_CODEC_CAP_SLICE_THREAD_HAS_MF ? &main_function : NULL;
if (!c || (thread_count = avpriv_slicethread_create(&c->thread, avctx, worker_func, mainfunc, thread_count)) <= 1) {
if (c)
avpriv_slicethread_free(&c->thread);
av_freep(&avctx->internal->thread_ctx);
avctx->thread_count = 1;
avctx->active_thread_type = 0;
return 0;
}
avctx->internal->thread_ctx = c;
c->current_job = 0;
c->job_count = 0;
c->job_size = 0;
c->done = 0;
pthread_cond_init(&c->current_job_cond, NULL);
pthread_cond_init(&c->last_job_cond, NULL);
pthread_mutex_init(&c->current_job_lock, NULL);
pthread_mutex_lock(&c->current_job_lock);
for (i=0; i<thread_count; i++) {
if(pthread_create(&c->workers[i], NULL, worker, avctx)) {
avctx->thread_count = i;
pthread_mutex_unlock(&c->current_job_lock);
ff_thread_free(avctx);
return -1;
}
}
thread_park_workers(c, thread_count);
avctx->thread_count = thread_count;
avctx->execute = thread_execute;
avctx->execute2 = thread_execute2;
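
All of the hand-rolled job accounting that used to live in this file is now delegated to the avpriv_slicethread API; this context merely forwards callbacks. The resulting control flow, roughly (a simplified restatement of the calls used above, not additional code in the patch):

    /* creation: returns how many threads were actually spawned */
    thread_count = avpriv_slicethread_create(&c->thread, avctx,
                                             worker_func, mainfunc, thread_count);

    /* execution: worker_func(avctx, jobnr, threadnr, nb_jobs, nb_threads) is
     * called for jobnr = 0..job_count-1; when the last argument is non-zero,
     * main_function(avctx) runs alongside the jobs */
    avpriv_slicethread_execute(c->thread, job_count, !!c->mainfunc);

    /* teardown */
    avpriv_slicethread_free(&c->thread);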

View file

@ -119,6 +119,18 @@ static inline void flush_put_bits(PutBitContext *s)
s->bit_buf = 0;
}
static inline void flush_put_bits_le(PutBitContext *s)
{
while (s->bit_left < 32) {
av_assert0(s->buf_ptr < s->buf_end);
*s->buf_ptr++ = s->bit_buf;
s->bit_buf >>= 8;
s->bit_left += 8;
}
s->bit_left = 32;
s->bit_buf = 0;
}
#ifdef BITSTREAM_WRITER_LE
#define avpriv_align_put_bits align_put_bits_unsupported_here
#define avpriv_put_string ff_put_string_unsupported_here
@ -197,6 +209,34 @@ static inline void put_bits(PutBitContext *s, int n, unsigned int value)
s->bit_left = bit_left;
}
static inline void put_bits_le(PutBitContext *s, int n, unsigned int value)
{
unsigned int bit_buf;
int bit_left;
av_assert2(n <= 31 && value < (1U << n));
bit_buf = s->bit_buf;
bit_left = s->bit_left;
bit_buf |= value << (32 - bit_left);
if (n >= bit_left) {
if (3 < s->buf_end - s->buf_ptr) {
AV_WL32(s->buf_ptr, bit_buf);
s->buf_ptr += 4;
} else {
av_log(NULL, AV_LOG_ERROR, "Internal error, put_bits buffer too small\n");
av_assert2(0);
}
bit_buf = value >> bit_left;
bit_left += 32;
}
bit_left -= n;
s->bit_buf = bit_buf;
s->bit_left = bit_left;
}
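
Usage of the little-endian writer mirrors the big-endian one; the only rule is to pair the _le calls with flush_put_bits_le(). A small sketch with hypothetical buffer and values:

    PutBitContext pb;
    uint8_t buf[8];

    init_put_bits(&pb, buf, sizeof(buf));
    put_bits_le(&pb, 3, 0x5);      /* lands in the low bits of the first byte */
    put_bits_le(&pb, 11, 0x2AB);   /* continues upward from bit 3 */
    flush_put_bits_le(&pb);        /* writes out the partial 32-bit word */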
static inline void put_sbits(PutBitContext *pb, int n, int32_t value)
{
av_assert2(n >= 0 && n <= 31);
@ -209,15 +249,72 @@ static inline void put_sbits(PutBitContext *pb, int n, int32_t value)
*/
static void av_unused put_bits32(PutBitContext *s, uint32_t value)
{
int lo = value & 0xffff;
int hi = value >> 16;
unsigned int bit_buf;
int bit_left;
bit_buf = s->bit_buf;
bit_left = s->bit_left;
#ifdef BITSTREAM_WRITER_LE
put_bits(s, 16, lo);
put_bits(s, 16, hi);
bit_buf |= value << (32 - bit_left);
if (3 < s->buf_end - s->buf_ptr) {
AV_WL32(s->buf_ptr, bit_buf);
s->buf_ptr += 4;
} else {
av_log(NULL, AV_LOG_ERROR, "Internal error, put_bits buffer too small\n");
av_assert2(0);
}
bit_buf = (uint64_t)value >> bit_left;
#else
put_bits(s, 16, hi);
put_bits(s, 16, lo);
bit_buf = (uint64_t)bit_buf << bit_left;
bit_buf |= value >> (32 - bit_left);
if (3 < s->buf_end - s->buf_ptr) {
AV_WB32(s->buf_ptr, bit_buf);
s->buf_ptr += 4;
} else {
av_log(NULL, AV_LOG_ERROR, "Internal error, put_bits buffer too small\n");
av_assert2(0);
}
bit_buf = value;
#endif
s->bit_buf = bit_buf;
s->bit_left = bit_left;
}
/**
* Write up to 64 bits into a bitstream.
*/
static inline void put_bits64(PutBitContext *s, int n, uint64_t value)
{
av_assert2((n == 64) || (n < 64 && value < (UINT64_C(1) << n)));
if (n < 32)
put_bits(s, n, value);
else if (n == 32)
put_bits32(s, value);
else if (n < 64) {
uint32_t lo = value & 0xffffffff;
uint32_t hi = value >> 32;
#ifdef BITSTREAM_WRITER_LE
put_bits32(s, lo);
put_bits(s, n - 32, hi);
#else
put_bits(s, n - 32, hi);
put_bits32(s, lo);
#endif
} else {
uint32_t lo = value & 0xffffffff;
uint32_t hi = value >> 32;
#ifdef BITSTREAM_WRITER_LE
put_bits32(s, lo);
put_bits32(s, hi);
#else
put_bits32(s, hi);
put_bits32(s, lo);
#endif
}
}
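
A short usage sketch of the new helper (values are hypothetical): widths above 32 bits are transparently split into put_bits()/put_bits32() calls in the right order for the configured endianness.

    PutBitContext pb;
    uint8_t buf[16];

    init_put_bits(&pb, buf, sizeof(buf));
    put_bits64(&pb, 48, 0xDEADBEEFCAFEULL);  /* split internally as 16 + 32 bits */
    put_bits64(&pb, 64, UINT64_MAX);         /* split internally as 32 + 32 bits */
    flush_put_bits(&pb);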
/**

View file

@ -96,8 +96,4 @@ void ff_rate_control_uninit(struct MpegEncContext *s);
int ff_vbv_update(struct MpegEncContext *s, int frame_size);
void ff_get_2pass_fcode(struct MpegEncContext *s);
int ff_xvid_rate_control_init(struct MpegEncContext *s);
void ff_xvid_rate_control_uninit(struct MpegEncContext *s);
float ff_xvid_rate_estimate_qscale(struct MpegEncContext *s, int dry_run);
#endif /* AVCODEC_RATECONTROL_H */

View file

@ -119,6 +119,12 @@ const PixelFormatTag ff_raw_pix_fmt_tags[] = {
{ AV_PIX_FMT_RGB48BE, MKTAG( 48, 'R', 'G', 'B') },
{ AV_PIX_FMT_BGR48LE, MKTAG('B', 'G', 'R', 48 ) },
{ AV_PIX_FMT_BGR48BE, MKTAG( 48, 'B', 'G', 'R') },
{ AV_PIX_FMT_GRAY9LE, MKTAG('Y', '1', 0 , 9 ) },
{ AV_PIX_FMT_GRAY9BE, MKTAG( 9 , 0 , '1', 'Y') },
{ AV_PIX_FMT_GRAY10LE, MKTAG('Y', '1', 0 , 10 ) },
{ AV_PIX_FMT_GRAY10BE, MKTAG(10 , 0 , '1', 'Y') },
{ AV_PIX_FMT_GRAY12LE, MKTAG('Y', '1', 0 , 12 ) },
{ AV_PIX_FMT_GRAY12BE, MKTAG(12 , 0 , '1', 'Y') },
{ AV_PIX_FMT_GRAY16LE, MKTAG('Y', '1', 0 , 16 ) },
{ AV_PIX_FMT_GRAY16BE, MKTAG(16 , 0 , '1', 'Y') },
{ AV_PIX_FMT_YUV420P9LE, MKTAG('Y', '3', 11 , 9 ) },
@ -266,6 +272,14 @@ const PixelFormatTag ff_raw_pix_fmt_tags[] = {
{ AV_PIX_FMT_YUV422P10BE, MKTAG('I', '2', 'A', 'B') },
{ AV_PIX_FMT_YUV444P10LE, MKTAG('I', '4', 'A', 'L') },
{ AV_PIX_FMT_YUV444P10BE, MKTAG('I', '4', 'A', 'B') },
{ AV_PIX_FMT_YUV420P12LE, MKTAG('I', '0', 'C', 'L') },
{ AV_PIX_FMT_YUV420P12BE, MKTAG('I', '0', 'C', 'B') },
{ AV_PIX_FMT_YUV422P12LE, MKTAG('I', '2', 'C', 'L') },
{ AV_PIX_FMT_YUV422P12BE, MKTAG('I', '2', 'C', 'B') },
{ AV_PIX_FMT_YUV444P12LE, MKTAG('I', '4', 'C', 'L') },
{ AV_PIX_FMT_YUV444P12BE, MKTAG('I', '4', 'C', 'B') },
{ AV_PIX_FMT_YUV420P16LE, MKTAG('I', '0', 'F', 'L') },
{ AV_PIX_FMT_YUV420P16BE, MKTAG('I', '0', 'F', 'B') },
{ AV_PIX_FMT_YUV444P16LE, MKTAG('I', '4', 'F', 'L') },
{ AV_PIX_FMT_YUV444P16BE, MKTAG('I', '4', 'F', 'B') },
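
These entries are plain FourCCs: MKTAG packs its four arguments into a little-endian 32-bit code, which is why the BE variant of each format is simply the LE tag with the bytes reversed. A self-contained restatement of the idea (TAG below is a stand-in for libavutil's MKTAG, not the exact macro text):

    #include <stdint.h>
    #include <stdio.h>

    #define TAG(a, b, c, d) \
        ((uint32_t)(uint8_t)(a) | ((uint32_t)(uint8_t)(b) << 8) | \
         ((uint32_t)(uint8_t)(c) << 16) | ((uint32_t)(uint8_t)(d) << 24))

    int main(void)
    {
        printf("GRAY10LE tag: 0x%08x\n", TAG('Y', '1', 0, 10));  /* 0x0a003159 */
        printf("GRAY10BE tag: 0x%08x\n", TAG(10, 0, '1', 'Y'));  /* 0x5931000a */
        return 0;
    }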

View file


@ -34,7 +34,7 @@
typedef struct ThreadFrame {
AVFrame *f;
AVCodecContext *owner;
AVCodecContext *owner[2];
// progress->data is an array of 2 ints holding progress for top/bottom
// fields
AVBufferRef *progress;
@ -133,8 +133,10 @@ void ff_thread_release_buffer(AVCodecContext *avctx, ThreadFrame *f);
int ff_thread_ref_frame(ThreadFrame *dst, ThreadFrame *src);
int ff_thread_init(AVCodecContext *s);
int ff_slice_thread_execute_with_mainfunc(AVCodecContext *avctx,
int (*action_func2)(AVCodecContext *c, void *arg, int jobnr, int threadnr),
int (*main_func)(AVCodecContext *c), void *arg, int *ret, int job_count);
void ff_thread_free(AVCodecContext *s);
int ff_alloc_entries(AVCodecContext *avctx, int count);
void ff_reset_entries(AVCodecContext *avctx);
void ff_thread_report_progress2(AVCodecContext *avctx, int field, int thread, int n);
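
A hedged sketch of how a decoder might drive the new entry point (every name below except the prototype is hypothetical): each slice/tile job runs on a worker while the supplied main function runs once in parallel, which is what makes it suitable for a combined decode/loop-filter pass.

    static int decode_tile(AVCodecContext *avctx, void *arg, int jobnr, int threadnr)
    {
        /* decode one tile; 'arg' points at shared per-frame state */
        return 0;
    }

    static int filter_frame_rows(AVCodecContext *avctx)
    {
        /* runs once, concurrently with the tile jobs */
        return 0;
    }

    static int hypothetical_decode_body(AVCodecContext *avctx, void *frame_ctx,
                                        int nb_tiles, int *ret_codes)
    {
        return ff_slice_thread_execute_with_mainfunc(avctx, decode_tile,
                                                     filter_frame_rows,
                                                     frame_ctx, ret_codes, nb_tiles);
    }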

Diff not shown because the file is too large. Load diff

View file

@ -28,8 +28,8 @@
#include "libavutil/version.h"
#define LIBAVCODEC_VERSION_MAJOR 57
#define LIBAVCODEC_VERSION_MINOR 64
#define LIBAVCODEC_VERSION_MICRO 101
#define LIBAVCODEC_VERSION_MINOR 107
#define LIBAVCODEC_VERSION_MICRO 100
#define LIBAVCODEC_VERSION_INT AV_VERSION_INT(LIBAVCODEC_VERSION_MAJOR, \
LIBAVCODEC_VERSION_MINOR, \
@ -60,9 +60,6 @@
#ifndef FF_API_AVCODEC_RESAMPLE
#define FF_API_AVCODEC_RESAMPLE FF_API_AUDIO_CONVERT
#endif
#ifndef FF_API_GETCHROMA
#define FF_API_GETCHROMA (LIBAVCODEC_VERSION_MAJOR < 58)
#endif
#ifndef FF_API_MISSING_SAMPLE
#define FF_API_MISSING_SAMPLE (LIBAVCODEC_VERSION_MAJOR < 58)
#endif
@ -157,6 +154,9 @@
#ifndef FF_API_VAAPI_CONTEXT
#define FF_API_VAAPI_CONTEXT (LIBAVCODEC_VERSION_MAJOR < 58)
#endif
#ifndef FF_API_MERGE_SD
#define FF_API_MERGE_SD (LIBAVCODEC_VERSION_MAJOR < 58)
#endif
#ifndef FF_API_AVCTX_TIMEBASE
#define FF_API_AVCTX_TIMEBASE (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
@ -226,5 +226,18 @@
#ifndef FF_API_NVENC_OLD_NAME
#define FF_API_NVENC_OLD_NAME (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
#ifndef FF_API_STRUCT_VAAPI_CONTEXT
#define FF_API_STRUCT_VAAPI_CONTEXT (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
#ifndef FF_API_MERGE_SD_API
#define FF_API_MERGE_SD_API (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
#ifndef FF_API_TAG_STRING
#define FF_API_TAG_STRING (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
#ifndef FF_API_GETCHROMA
#define FF_API_GETCHROMA (LIBAVCODEC_VERSION_MAJOR < 59)
#endif
#endif /* AVCODEC_VERSION_H */
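
For readers not used to this header: each FF_API_* switch keeps a deprecated code path alive until the named major bump, after which the guarded block compiles away. The consuming pattern looks like this (entirely hypothetical symbol, shown only to illustrate the mechanism):

    #if FF_API_SOMETHING_OLD                              /* hypothetical switch */
    attribute_deprecated
    int avcodec_something_old(AVCodecContext *avctx);     /* hypothetical old API */
    #endif

so bumping LIBAVCODEC_VERSION_MAJOR to 58 or 59 is what actually removes the code guarded by the entries added above.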

View file

@ -52,4 +52,6 @@ av_cold void ff_videodsp_init(VideoDSPContext *ctx, int bpc)
ff_videodsp_init_ppc(ctx, bpc);
if (ARCH_X86)
ff_videodsp_init_x86(ctx, bpc);
if (ARCH_MIPS)
ff_videodsp_init_mips(ctx, bpc);
}

View file

@ -83,5 +83,6 @@ void ff_videodsp_init_aarch64(VideoDSPContext *ctx, int bpc);
void ff_videodsp_init_arm(VideoDSPContext *ctx, int bpc);
void ff_videodsp_init_ppc(VideoDSPContext *ctx, int bpc);
void ff_videodsp_init_x86(VideoDSPContext *ctx, int bpc);
void ff_videodsp_init_mips(VideoDSPContext *ctx, int bpc);
#endif /* AVCODEC_VIDEODSP_H */

View file

@ -54,12 +54,28 @@ void ff_free_vlc(VLC *vlc);
#define INIT_VLC_LE 2
#define INIT_VLC_USE_NEW_STATIC 4
#define INIT_VLC_STATIC(vlc, bits, a, b, c, d, e, f, g, static_size) \
#define INIT_VLC_SPARSE_STATIC(vlc, bits, a, b, c, d, e, f, g, h, i, j, static_size) \
do { \
static VLC_TYPE table[static_size][2]; \
(vlc)->table = table; \
(vlc)->table_allocated = static_size; \
init_vlc(vlc, bits, a, b, c, d, e, f, g, INIT_VLC_USE_NEW_STATIC); \
ff_init_vlc_sparse(vlc, bits, a, b, c, d, e, f, g, h, i, j, \
INIT_VLC_USE_NEW_STATIC); \
} while (0)
#define INIT_LE_VLC_SPARSE_STATIC(vlc, bits, a, b, c, d, e, f, g, h, i, j, static_size) \
do { \
static VLC_TYPE table[static_size][2]; \
(vlc)->table = table; \
(vlc)->table_allocated = static_size; \
ff_init_vlc_sparse(vlc, bits, a, b, c, d, e, f, g, h, i, j, \
INIT_VLC_USE_NEW_STATIC | INIT_VLC_LE); \
} while (0)
#define INIT_VLC_STATIC(vlc, bits, a, b, c, d, e, f, g, static_size) \
INIT_VLC_SPARSE_STATIC(vlc, bits, a, b, c, d, e, f, g, NULL, 0, 0, static_size)
#define INIT_LE_VLC_STATIC(vlc, bits, a, b, c, d, e, f, g, static_size) \
INIT_LE_VLC_SPARSE_STATIC(vlc, bits, a, b, c, d, e, f, g, NULL, 0, 0, static_size)
#endif /* AVCODEC_VLC_H */
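
Usage sketch for the reshuffled macros (tables, sizes and names below are hypothetical; the four-symbol code is just a tiny valid prefix code so the example is self-contained): the classic big-endian static init is unchanged in behaviour, and INIT_LE_VLC_STATIC does the same for little-endian bitstreams by adding INIT_VLC_LE.

    static VLC be_vlc, le_vlc;
    static const uint8_t lens[4]  = { 1, 2, 3, 3 };          /* code lengths    */
    static const uint8_t codes[4] = { 0x0, 0x2, 0x6, 0x7 };  /* 0, 10, 110, 111 */

    static void hypothetical_init_vlcs(void)
    {
        /* big-endian reader: 3-bit primary table, 8 entries of static storage */
        INIT_VLC_STATIC(&be_vlc, 3, 4, lens, 1, 1, codes, 1, 1, 8);
        /* same tables, little-endian reader */
        INIT_LE_VLC_STATIC(&le_vlc, 3, 4, lens, 1, 1, codes, 1, 1, 8);
    }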

View file

@ -32,9 +32,6 @@ typedef struct AVVorbisParseContext AVVorbisParseContext;
/**
* Allocate and initialize the Vorbis parser using headers in the extradata.
*
* @param avctx codec context
* @param s Vorbis parser context
*/
AVVorbisParseContext *av_vorbis_parse_init(const uint8_t *extradata,
int extradata_size);

View file

@ -38,11 +38,11 @@ typedef struct VP3DSPContext {
const uint8_t *b,
ptrdiff_t stride, int h);
void (*idct_put)(uint8_t *dest, int line_size, int16_t *block);
void (*idct_add)(uint8_t *dest, int line_size, int16_t *block);
void (*idct_dc_add)(uint8_t *dest, int line_size, int16_t *block);
void (*v_loop_filter)(uint8_t *src, int stride, int *bounding_values);
void (*h_loop_filter)(uint8_t *src, int stride, int *bounding_values);
void (*idct_put)(uint8_t *dest, ptrdiff_t stride, int16_t *block);
void (*idct_add)(uint8_t *dest, ptrdiff_t stride, int16_t *block);
void (*idct_dc_add)(uint8_t *dest, ptrdiff_t stride, int16_t *block);
void (*v_loop_filter)(uint8_t *src, ptrdiff_t stride, int *bounding_values);
void (*h_loop_filter)(uint8_t *src, ptrdiff_t stride, int *bounding_values);
} VP3DSPContext;
void ff_vp3dsp_init(VP3DSPContext *c, int flags);

View file

@ -26,6 +26,7 @@
#ifndef AVCODEC_VP56_H
#define AVCODEC_VP56_H
#include "avcodec.h"
#include "get_bits.h"
#include "hpeldsp.h"
#include "bytestream.h"
@ -72,9 +73,9 @@ typedef struct VP56mv {
typedef void (*VP56ParseVectorAdjustment)(VP56Context *s,
VP56mv *vect);
typedef void (*VP56Filter)(VP56Context *s, uint8_t *dst, uint8_t *src,
int offset1, int offset2, int stride,
int offset1, int offset2, ptrdiff_t stride,
VP56mv mv, int mask, int select, int luma);
typedef void (*VP56ParseCoeff)(VP56Context *s);
typedef int (*VP56ParseCoeff)(VP56Context *s);
typedef void (*VP56DefaultModelsInit)(VP56Context *s);
typedef void (*VP56ParseVectorModels)(VP56Context *s);
typedef int (*VP56ParseCoeffModels)(VP56Context *s);
@ -179,7 +180,7 @@ struct vp56_context {
int flip; /* are we flipping ? */
int frbi; /* first row block index in MB */
int srbi; /* second row block index in MB */
int stride[4]; /* stride for each plan */
ptrdiff_t stride[4]; /* stride for each plan */
const uint8_t *vp56_coord_div;
VP56ParseVectorAdjustment parse_vector_adjustment;
@ -203,6 +204,9 @@ struct vp56_context {
VLC runv_vlc[2];
VLC ract_vlc[2][3][6];
unsigned int nb_null[2][2]; /* number of consecutive NULL DC/AC */
int have_undamaged_frame;
int discard_frame;
};
@ -221,7 +225,7 @@ int ff_vp56_decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
*/
extern const uint8_t ff_vp56_norm_shift[256];
void ff_vp56_init_range_decoder(VP56RangeCoder *c, const uint8_t *buf, int buf_size);
int ff_vp56_init_range_decoder(VP56RangeCoder *c, const uint8_t *buf, int buf_size);
static av_always_inline unsigned int vp56_rac_renorm(VP56RangeCoder *c)
{

View file

@ -21,22 +21,24 @@
#ifndef AVCODEC_VP56DSP_H
#define AVCODEC_VP56DSP_H
#include <stddef.h>
#include <stdint.h>
#include "avcodec.h"
typedef struct VP56DSPContext {
void (*edge_filter_hor)(uint8_t *yuv, int stride, int t);
void (*edge_filter_ver)(uint8_t *yuv, int stride, int t);
void (*edge_filter_hor)(uint8_t *yuv, ptrdiff_t stride, int t);
void (*edge_filter_ver)(uint8_t *yuv, ptrdiff_t stride, int t);
void (*vp6_filter_diag4)(uint8_t *dst, uint8_t *src, int stride,
void (*vp6_filter_diag4)(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
const int16_t *h_weights,const int16_t *v_weights);
} VP56DSPContext;
void ff_vp6_filter_diag4_c(uint8_t *dst, uint8_t *src, int stride,
void ff_vp6_filter_diag4_c(uint8_t *dst, uint8_t *src, ptrdiff_t stride,
const int16_t *h_weights, const int16_t *v_weights);
void ff_vp56dsp_init(VP56DSPContext *s, enum AVCodecID codec);
void ff_vp6dsp_init_arm(VP56DSPContext *s, enum AVCodecID codec);
void ff_vp6dsp_init_x86(VP56DSPContext* c, enum AVCodecID codec);
void ff_vp5dsp_init(VP56DSPContext *s);
void ff_vp6dsp_init(VP56DSPContext *s);
void ff_vp6dsp_init_arm(VP56DSPContext *s);
void ff_vp6dsp_init_x86(VP56DSPContext *s);
#endif /* AVCODEC_VP56DSP_H */

View file

@ -37,11 +37,14 @@ const uint8_t ff_vp56_norm_shift[256]= {
0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,
};
void ff_vp56_init_range_decoder(VP56RangeCoder *c, const uint8_t *buf, int buf_size)
int ff_vp56_init_range_decoder(VP56RangeCoder *c, const uint8_t *buf, int buf_size)
{
c->high = 255;
c->bits = -16;
c->buffer = buf;
c->end = buf + buf_size;
if (buf_size < 1)
return AVERROR_INVALIDDATA;
c->code_word = bytestream_get_be24(&c->buffer);
return 0;
}

View file

@ -261,6 +261,7 @@ static int setup_partitions(VP8Context *s, const uint8_t *buf, int buf_size)
{
const uint8_t *sizes = buf;
int i;
int ret;
s->num_coeff_partitions = 1 << vp8_rac_get_uint(&s->c, 2);
@ -274,13 +275,13 @@ static int setup_partitions(VP8Context *s, const uint8_t *buf, int buf_size)
if (buf_size - size < 0)
return -1;
ff_vp56_init_range_decoder(&s->coeff_partition[i], buf, size);
ret = ff_vp56_init_range_decoder(&s->coeff_partition[i], buf, size);
if (ret < 0)
return ret;
buf += size;
buf_size -= size;
}
ff_vp56_init_range_decoder(&s->coeff_partition[i], buf, buf_size);
return 0;
return ff_vp56_init_range_decoder(&s->coeff_partition[i], buf, buf_size);
}
static void vp7_get_quants(VP8Context *s)
@ -434,8 +435,8 @@ static void copy_chroma(AVFrame *dst, AVFrame *src, int width, int height)
}
}
static void fade(uint8_t *dst, int dst_linesize,
const uint8_t *src, int src_linesize,
static void fade(uint8_t *dst, ptrdiff_t dst_linesize,
const uint8_t *src, ptrdiff_t src_linesize,
int width, int height,
int alpha, int beta)
{
@ -518,7 +519,9 @@ static int vp7_decode_frame_header(VP8Context *s, const uint8_t *buf, int buf_si
memcpy(s->put_pixels_tab, s->vp8dsp.put_vp8_epel_pixels_tab, sizeof(s->put_pixels_tab));
ff_vp56_init_range_decoder(c, buf, part1_size);
ret = ff_vp56_init_range_decoder(c, buf, part1_size);
if (ret < 0)
return ret;
buf += part1_size;
buf_size -= part1_size;
@ -570,7 +573,9 @@ static int vp7_decode_frame_header(VP8Context *s, const uint8_t *buf, int buf_si
s->lf_delta.enabled = 0;
s->num_coeff_partitions = 1;
ff_vp56_init_range_decoder(&s->coeff_partition[0], buf, buf_size);
ret = ff_vp56_init_range_decoder(&s->coeff_partition[0], buf, buf_size);
if (ret < 0)
return ret;
if (!s->macroblocks_base || /* first frame */
width != s->avctx->width || height != s->avctx->height ||
@ -699,7 +704,9 @@ static int vp8_decode_frame_header(VP8Context *s, const uint8_t *buf, int buf_si
memset(&s->lf_delta, 0, sizeof(s->lf_delta));
}
ff_vp56_init_range_decoder(c, buf, header_size);
ret = ff_vp56_init_range_decoder(c, buf, header_size);
if (ret < 0)
return ret;
buf += header_size;
buf_size -= header_size;
@ -765,7 +772,7 @@ static int vp8_decode_frame_header(VP8Context *s, const uint8_t *buf, int buf_si
}
static av_always_inline
void clamp_mv(VP8Context *s, VP56mv *dst, const VP56mv *src)
void clamp_mv(VP8mvbounds *s, VP56mv *dst, const VP56mv *src)
{
dst->x = av_clip(src->x, av_clip(s->mv_min.x, INT16_MIN, INT16_MAX),
av_clip(s->mv_max.x, INT16_MIN, INT16_MAX));
@ -1024,7 +1031,7 @@ void vp7_decode_mvs(VP8Context *s, VP8Macroblock *mb,
}
static av_always_inline
void vp8_decode_mvs(VP8Context *s, VP8Macroblock *mb,
void vp8_decode_mvs(VP8Context *s, VP8mvbounds *mv_bounds, VP8Macroblock *mb,
int mb_x, int mb_y, int layout)
{
VP8Macroblock *mb_edge[3] = { 0 /* top */,
@ -1095,7 +1102,7 @@ void vp8_decode_mvs(VP8Context *s, VP8Macroblock *mb,
if (vp56_rac_get_prob_branchy(c, vp8_mode_contexts[cnt[CNT_NEAREST]][1])) {
if (vp56_rac_get_prob_branchy(c, vp8_mode_contexts[cnt[CNT_NEAR]][2])) {
/* Choose the best mv out of 0,0 and the nearest mv */
clamp_mv(s, &mb->mv, &near_mv[CNT_ZERO + (cnt[CNT_NEAREST] >= cnt[CNT_ZERO])]);
clamp_mv(mv_bounds, &mb->mv, &near_mv[CNT_ZERO + (cnt[CNT_NEAREST] >= cnt[CNT_ZERO])]);
cnt[CNT_SPLITMV] = ((mb_edge[VP8_EDGE_LEFT]->mode == VP8_MVMODE_SPLIT) +
(mb_edge[VP8_EDGE_TOP]->mode == VP8_MVMODE_SPLIT)) * 2 +
(mb_edge[VP8_EDGE_TOPLEFT]->mode == VP8_MVMODE_SPLIT);
@ -1109,11 +1116,11 @@ void vp8_decode_mvs(VP8Context *s, VP8Macroblock *mb,
mb->bmv[0] = mb->mv;
}
} else {
clamp_mv(s, &mb->mv, &near_mv[CNT_NEAR]);
clamp_mv(mv_bounds, &mb->mv, &near_mv[CNT_NEAR]);
mb->bmv[0] = mb->mv;
}
} else {
clamp_mv(s, &mb->mv, &near_mv[CNT_NEAREST]);
clamp_mv(mv_bounds, &mb->mv, &near_mv[CNT_NEAREST]);
mb->bmv[0] = mb->mv;
}
} else {
@ -1159,14 +1166,15 @@ void decode_intra4x4_modes(VP8Context *s, VP56RangeCoder *c, VP8Macroblock *mb,
}
static av_always_inline
void decode_mb_mode(VP8Context *s, VP8Macroblock *mb, int mb_x, int mb_y,
void decode_mb_mode(VP8Context *s, VP8mvbounds *mv_bounds,
VP8Macroblock *mb, int mb_x, int mb_y,
uint8_t *segment, uint8_t *ref, int layout, int is_vp7)
{
VP56RangeCoder *c = &s->c;
static const char *vp7_feature_name[] = { "q-index",
"lf-delta",
"partial-golden-update",
"blit-pitch" };
static const char * const vp7_feature_name[] = { "q-index",
"lf-delta",
"partial-golden-update",
"blit-pitch" };
if (is_vp7) {
int i;
*segment = 0;
@ -1223,7 +1231,7 @@ void decode_mb_mode(VP8Context *s, VP8Macroblock *mb, int mb_x, int mb_y,
if (is_vp7)
vp7_decode_mvs(s, mb, mb_x, mb_y, layout);
else
vp8_decode_mvs(s, mb, mb_x, mb_y, layout);
vp8_decode_mvs(s, mv_bounds, mb, mb_x, mb_y, layout);
} else {
// intra MB, 16.1
mb->mode = vp8_rac_get_tree(c, vp8_pred16x16_tree_inter, s->prob->pred16x16);
@ -1461,7 +1469,7 @@ void decode_mb_coeffs(VP8Context *s, VP8ThreadData *td, VP56RangeCoder *c,
static av_always_inline
void backup_mb_border(uint8_t *top_border, uint8_t *src_y,
uint8_t *src_cb, uint8_t *src_cr,
int linesize, int uvlinesize, int simple)
ptrdiff_t linesize, ptrdiff_t uvlinesize, int simple)
{
AV_COPY128(top_border, src_y + 15 * linesize);
if (!simple) {
@ -1472,7 +1480,7 @@ void backup_mb_border(uint8_t *top_border, uint8_t *src_y,
static av_always_inline
void xchg_mb_border(uint8_t *top_border, uint8_t *src_y, uint8_t *src_cb,
uint8_t *src_cr, int linesize, int uvlinesize, int mb_x,
uint8_t *src_cr, ptrdiff_t linesize, ptrdiff_t uvlinesize, int mb_x,
int mb_y, int mb_width, int simple, int xchg)
{
uint8_t *top_border_m1 = top_border - 32; // for TL prediction
@ -1625,7 +1633,8 @@ void intra_predict(VP8Context *s, VP8ThreadData *td, uint8_t *dst[3],
for (y = 0; y < 4; y++) {
uint8_t *topright = ptr + 4 - s->linesize;
for (x = 0; x < 4; x++) {
int copy = 0, linesize = s->linesize;
int copy = 0;
ptrdiff_t linesize = s->linesize;
uint8_t *dst = ptr + 4 * x;
LOCAL_ALIGNED(4, uint8_t, copy_dst, [5 * 8]);
@ -1731,7 +1740,7 @@ void vp8_mc_luma(VP8Context *s, VP8ThreadData *td, uint8_t *dst,
uint8_t *src = ref->f->data[0];
if (AV_RN32A(mv)) {
int src_linesize = linesize;
ptrdiff_t src_linesize = linesize;
int mx = (mv->x * 2) & 7, mx_idx = subpel_idx[0][mx];
int my = (mv->y * 2) & 7, my_idx = subpel_idx[0][my];
@ -2077,8 +2086,8 @@ void filter_mb(VP8Context *s, uint8_t *dst[3], VP8FilterStrength *f,
int filter_level = f->filter_level;
int inner_limit = f->inner_limit;
int inner_filter = f->inner_filter;
int linesize = s->linesize;
int uvlinesize = s->uvlinesize;
ptrdiff_t linesize = s->linesize;
ptrdiff_t uvlinesize = s->uvlinesize;
static const uint8_t hev_thresh_lut[2][64] = {
{ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2,
@ -2164,7 +2173,7 @@ void filter_mb_simple(VP8Context *s, uint8_t *dst, VP8FilterStrength *f,
int filter_level = f->filter_level;
int inner_limit = f->inner_limit;
int inner_filter = f->inner_filter;
int linesize = s->linesize;
ptrdiff_t linesize = s->linesize;
if (!filter_level)
return;
@ -2197,8 +2206,8 @@ void vp78_decode_mv_mb_modes(AVCodecContext *avctx, VP8Frame *curframe,
VP8Context *s = avctx->priv_data;
int mb_x, mb_y;
s->mv_min.y = -MARGIN;
s->mv_max.y = ((s->mb_height - 1) << 6) + MARGIN;
s->mv_bounds.mv_min.y = -MARGIN;
s->mv_bounds.mv_max.y = ((s->mb_height - 1) << 6) + MARGIN;
for (mb_y = 0; mb_y < s->mb_height; mb_y++) {
VP8Macroblock *mb = s->macroblocks_base +
((s->mb_width + 1) * (mb_y + 1) + 1);
@ -2206,20 +2215,20 @@ void vp78_decode_mv_mb_modes(AVCodecContext *avctx, VP8Frame *curframe,
AV_WN32A(s->intra4x4_pred_mode_left, DC_PRED * 0x01010101);
s->mv_min.x = -MARGIN;
s->mv_max.x = ((s->mb_width - 1) << 6) + MARGIN;
s->mv_bounds.mv_min.x = -MARGIN;
s->mv_bounds.mv_max.x = ((s->mb_width - 1) << 6) + MARGIN;
for (mb_x = 0; mb_x < s->mb_width; mb_x++, mb_xy++, mb++) {
if (mb_y == 0)
AV_WN32A((mb - s->mb_width - 1)->intra4x4_pred_mode_top,
DC_PRED * 0x01010101);
decode_mb_mode(s, mb, mb_x, mb_y, curframe->seg_map->data + mb_xy,
decode_mb_mode(s, &s->mv_bounds, mb, mb_x, mb_y, curframe->seg_map->data + mb_xy,
prev_frame && prev_frame->seg_map ?
prev_frame->seg_map->data + mb_xy : NULL, 1, is_vp7);
s->mv_min.x -= 64;
s->mv_max.x -= 64;
s->mv_bounds.mv_min.x -= 64;
s->mv_bounds.mv_max.x -= 64;
}
s->mv_min.y -= 64;
s->mv_max.y -= 64;
s->mv_bounds.mv_min.y -= 64;
s->mv_bounds.mv_max.y -= 64;
}
}
@ -2239,15 +2248,15 @@ static void vp8_decode_mv_mb_modes(AVCodecContext *avctx, VP8Frame *cur_frame,
#define check_thread_pos(td, otd, mb_x_check, mb_y_check) \
do { \
int tmp = (mb_y_check << 16) | (mb_x_check & 0xFFFF); \
if (otd->thread_mb_pos < tmp) { \
if (atomic_load(&otd->thread_mb_pos) < tmp) { \
pthread_mutex_lock(&otd->lock); \
td->wait_mb_pos = tmp; \
atomic_store(&td->wait_mb_pos, tmp); \
do { \
if (otd->thread_mb_pos >= tmp) \
if (atomic_load(&otd->thread_mb_pos) >= tmp) \
break; \
pthread_cond_wait(&otd->cond, &otd->lock); \
} while (1); \
td->wait_mb_pos = INT_MAX; \
atomic_store(&td->wait_mb_pos, INT_MAX); \
pthread_mutex_unlock(&otd->lock); \
} \
} while (0)
@ -2258,12 +2267,10 @@ static void vp8_decode_mv_mb_modes(AVCodecContext *avctx, VP8Frame *cur_frame,
int sliced_threading = (avctx->active_thread_type == FF_THREAD_SLICE) && \
(num_jobs > 1); \
int is_null = !next_td || !prev_td; \
int pos_check = (is_null) ? 1 \
: (next_td != td && \
pos >= next_td->wait_mb_pos) || \
(prev_td != td && \
pos >= prev_td->wait_mb_pos); \
td->thread_mb_pos = pos; \
int pos_check = (is_null) ? 1 : \
(next_td != td && pos >= atomic_load(&next_td->wait_mb_pos)) || \
(prev_td != td && pos >= atomic_load(&prev_td->wait_mb_pos)); \
atomic_store(&td->thread_mb_pos, pos); \
if (sliced_threading && pos_check) { \
pthread_mutex_lock(&td->lock); \
pthread_cond_broadcast(&td->cond); \
@ -2275,12 +2282,12 @@ static void vp8_decode_mv_mb_modes(AVCodecContext *avctx, VP8Frame *cur_frame,
#define update_pos(td, mb_y, mb_x) while(0)
#endif
static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
static av_always_inline int decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
int jobnr, int threadnr, int is_vp7)
{
VP8Context *s = avctx->priv_data;
VP8ThreadData *prev_td, *next_td, *td = &s->thread_data[threadnr];
int mb_y = td->thread_mb_pos >> 16;
int mb_y = atomic_load(&td->thread_mb_pos) >> 16;
int mb_x, mb_xy = mb_y * s->mb_width;
int num_jobs = s->num_jobs;
VP8Frame *curframe = s->curframe, *prev_frame = s->prev_frame;
@ -2291,6 +2298,10 @@ static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void
curframe->tf.f->data[1] + 8 * mb_y * s->uvlinesize,
curframe->tf.f->data[2] + 8 * mb_y * s->uvlinesize
};
if (c->end <= c->buffer && c->bits >= 0)
return AVERROR_INVALIDDATA;
if (mb_y == 0)
prev_td = td;
else
@ -2315,10 +2326,12 @@ static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void
if (!is_vp7 || mb_y == 0)
memset(td->left_nnz, 0, sizeof(td->left_nnz));
s->mv_min.x = -MARGIN;
s->mv_max.x = ((s->mb_width - 1) << 6) + MARGIN;
td->mv_bounds.mv_min.x = -MARGIN;
td->mv_bounds.mv_max.x = ((s->mb_width - 1) << 6) + MARGIN;
for (mb_x = 0; mb_x < s->mb_width; mb_x++, mb_xy++, mb++) {
if (c->end <= c->buffer && c->bits >= 0)
return AVERROR_INVALIDDATA;
// Wait for previous thread to read mb_x+2, and reach mb_y-1.
if (prev_td != td) {
if (threadnr != 0) {
@ -2338,7 +2351,7 @@ static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void
dst[2] - dst[1], 2);
if (!s->mb_layout)
decode_mb_mode(s, mb, mb_x, mb_y, curframe->seg_map->data + mb_xy,
decode_mb_mode(s, &td->mv_bounds, mb, mb_x, mb_y, curframe->seg_map->data + mb_xy,
prev_frame && prev_frame->seg_map ?
prev_frame->seg_map->data + mb_xy : NULL, 0, is_vp7);
@ -2385,8 +2398,8 @@ static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void
dst[0] += 16;
dst[1] += 8;
dst[2] += 8;
s->mv_min.x -= 64;
s->mv_max.x -= 64;
td->mv_bounds.mv_min.x -= 64;
td->mv_bounds.mv_max.x -= 64;
if (mb_x == s->mb_width + 1) {
update_pos(td, mb_y, s->mb_width + 3);
@ -2394,18 +2407,19 @@ static av_always_inline void decode_mb_row_no_filter(AVCodecContext *avctx, void
update_pos(td, mb_y, mb_x);
}
}
return 0;
}
static void vp7_decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
static int vp7_decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
int jobnr, int threadnr)
{
decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr, 1);
return decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr, 1);
}
static void vp8_decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
static int vp8_decode_mb_row_no_filter(AVCodecContext *avctx, void *tdata,
int jobnr, int threadnr)
{
decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr, 0);
return decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr, 0);
}
static av_always_inline void filter_mb_row(AVCodecContext *avctx, void *tdata,
@ -2413,7 +2427,7 @@ static av_always_inline void filter_mb_row(AVCodecContext *avctx, void *tdata,
{
VP8Context *s = avctx->priv_data;
VP8ThreadData *td = &s->thread_data[threadnr];
int mb_x, mb_y = td->thread_mb_pos >> 16, num_jobs = s->num_jobs;
int mb_x, mb_y = atomic_load(&td->thread_mb_pos) >> 16, num_jobs = s->num_jobs;
AVFrame *curframe = s->curframe->tf.f;
VP8Macroblock *mb;
VP8ThreadData *prev_td, *next_td;
@ -2488,19 +2502,24 @@ int vp78_decode_mb_row_sliced(AVCodecContext *avctx, void *tdata, int jobnr,
VP8ThreadData *next_td = NULL, *prev_td = NULL;
VP8Frame *curframe = s->curframe;
int mb_y, num_jobs = s->num_jobs;
int ret;
td->thread_nr = threadnr;
td->mv_bounds.mv_min.y = -MARGIN - 64 * threadnr;
td->mv_bounds.mv_max.y = ((s->mb_height - 1) << 6) + MARGIN - 64 * threadnr;
for (mb_y = jobnr; mb_y < s->mb_height; mb_y += num_jobs) {
if (mb_y >= s->mb_height)
break;
td->thread_mb_pos = mb_y << 16;
s->decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr);
atomic_store(&td->thread_mb_pos, mb_y << 16);
ret = s->decode_mb_row_no_filter(avctx, tdata, jobnr, threadnr);
if (ret < 0) {
update_pos(td, s->mb_height, INT_MAX & 0xFFFF);
return ret;
}
if (s->deblock_filter)
s->filter_mb_row(avctx, tdata, jobnr, threadnr);
update_pos(td, mb_y, INT_MAX & 0xFFFF);
s->mv_min.y -= 64;
s->mv_max.y -= 64;
td->mv_bounds.mv_min.y -= 64 * num_jobs;
td->mv_bounds.mv_max.y -= 64 * num_jobs;
if (avctx->active_thread_type == FF_THREAD_FRAME)
ff_thread_report_progress(&curframe->tf, mb_y, 0);
@ -2531,6 +2550,8 @@ int vp78_decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
enum AVDiscard skip_thresh;
VP8Frame *av_uninit(curframe), *prev_frame;
av_assert0(avctx->pix_fmt == AV_PIX_FMT_YUVA420P || avctx->pix_fmt == AV_PIX_FMT_YUV420P);
if (is_vp7)
ret = vp7_decode_frame_header(s, avpkt->data, avpkt->size);
else
@ -2646,11 +2667,12 @@ int vp78_decode_frame(AVCodecContext *avctx, void *data, int *got_frame,
s->num_jobs = num_jobs;
s->curframe = curframe;
s->prev_frame = prev_frame;
s->mv_min.y = -MARGIN;
s->mv_max.y = ((s->mb_height - 1) << 6) + MARGIN;
s->mv_bounds.mv_min.y = -MARGIN;
s->mv_bounds.mv_max.y = ((s->mb_height - 1) << 6) + MARGIN;
for (i = 0; i < MAX_THREADS; i++) {
s->thread_data[i].thread_mb_pos = 0;
s->thread_data[i].wait_mb_pos = INT_MAX;
VP8ThreadData *td = &s->thread_data[i];
atomic_init(&td->thread_mb_pos, 0);
atomic_init(&td->wait_mb_pos, INT_MAX);
}
if (is_vp7)
avctx->execute2(avctx, vp7_decode_mb_row_sliced, s->thread_data, NULL,

View file

@ -26,6 +26,8 @@
#ifndef AVCODEC_VP8_H
#define AVCODEC_VP8_H
#include <stdatomic.h>
#include "libavutil/buffer.h"
#include "libavutil/thread.h"
@ -91,6 +93,16 @@ typedef struct VP8Macroblock {
VP56mv bmv[16];
} VP8Macroblock;
typedef struct VP8intmv {
int x;
int y;
} VP8intmv;
typedef struct VP8mvbounds {
VP8intmv mv_min;
VP8intmv mv_max;
} VP8mvbounds;
typedef struct VP8ThreadData {
DECLARE_ALIGNED(16, int16_t, block)[6][4][16];
DECLARE_ALIGNED(16, int16_t, block_dc)[16];
@ -114,12 +126,13 @@ typedef struct VP8ThreadData {
pthread_mutex_t lock;
pthread_cond_t cond;
#endif
int thread_mb_pos; // (mb_y << 16) | (mb_x & 0xFFFF)
int wait_mb_pos; // What the current thread is waiting on.
atomic_int thread_mb_pos; // (mb_y << 16) | (mb_x & 0xFFFF)
atomic_int wait_mb_pos; // What the current thread is waiting on.
#define EDGE_EMU_LINESIZE 32
DECLARE_ALIGNED(16, uint8_t, edge_emu_buffer)[21 * EDGE_EMU_LINESIZE];
VP8FilterStrength *filter_strength;
VP8mvbounds mv_bounds;
} VP8ThreadData;
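
The two atomics pack a macroblock coordinate into a single int so the wait loops in vp8.c can compare positions with one load; row-major order is preserved because mb_y sits in the high bits. Worked example of the packing (plain arithmetic, self-contained):

    #include <stdio.h>

    int main(void)
    {
        int mb_x = 17, mb_y = 3;
        int pos  = (mb_y << 16) | (mb_x & 0xFFFF);   /* same packing as thread_mb_pos */

        printf("pos = 0x%06x -> mb_y %d, mb_x %d\n",
               pos, pos >> 16, pos & 0xFFFF);        /* pos = 0x030011 -> 3, 17 */
        return 0;
    }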
typedef struct VP8Frame {
@ -127,11 +140,6 @@ typedef struct VP8Frame {
AVBufferRef *seg_map;
} VP8Frame;
typedef struct VP8intmv {
int x;
int y;
} VP8intmv;
#define MAX_THREADS 8
typedef struct VP8Context {
VP8ThreadData *thread_data;
@ -143,15 +151,14 @@ typedef struct VP8Context {
uint16_t mb_width; /* number of horizontal MB */
uint16_t mb_height; /* number of vertical MB */
int linesize;
int uvlinesize;
ptrdiff_t linesize;
ptrdiff_t uvlinesize;
uint8_t keyframe;
uint8_t deblock_filter;
uint8_t mbskip_enabled;
uint8_t profile;
VP8intmv mv_min;
VP8intmv mv_max;
VP8mvbounds mv_bounds;
int8_t sign_bias[4]; ///< one state [0, 1] per ref frame type
int ref_count[3];
@ -275,7 +282,7 @@ typedef struct VP8Context {
*/
int mb_layout;
void (*decode_mb_row_no_filter)(AVCodecContext *avctx, void *tdata, int jobnr, int threadnr);
int (*decode_mb_row_no_filter)(AVCodecContext *avctx, void *tdata, int jobnr, int threadnr);
void (*filter_mb_row)(AVCodecContext *avctx, void *tdata, int jobnr, int threadnr);
int vp7;

View file

@ -1,6 +1,4 @@
/*
* Copyright (C) 2008 Michael Niedermayer
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
@ -18,15 +16,56 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "parser.h"
#include "libavutil/intreadwrite.h"
#include "avcodec.h"
static int parse(AVCodecParserContext *s,
AVCodecContext *avctx,
const uint8_t **poutbuf, int *poutbuf_size,
const uint8_t *buf, int buf_size)
{
s->pict_type = (buf[0] & 0x01) ? AV_PICTURE_TYPE_P
: AV_PICTURE_TYPE_I;
unsigned int frame_type;
unsigned int profile;
if (buf_size < 3)
return buf_size;
frame_type = buf[0] & 1;
profile = (buf[0] >> 1) & 7;
if (profile > 3) {
av_log(avctx, AV_LOG_ERROR, "Invalid profile %u.\n", profile);
return buf_size;
}
avctx->profile = profile;
s->key_frame = frame_type == 0;
s->pict_type = frame_type ? AV_PICTURE_TYPE_P : AV_PICTURE_TYPE_I;
s->format = AV_PIX_FMT_YUV420P;
s->field_order = AV_FIELD_PROGRESSIVE;
s->picture_structure = AV_PICTURE_STRUCTURE_FRAME;
if (frame_type == 0) {
unsigned int sync_code;
unsigned int width, height;
if (buf_size < 10)
return buf_size;
sync_code = AV_RL24(buf + 3);
if (sync_code != 0x2a019d) {
av_log(avctx, AV_LOG_ERROR, "Invalid sync code %06x.\n", sync_code);
return buf_size;
}
width = AV_RL16(buf + 6) & 0x3fff;
height = AV_RL16(buf + 8) & 0x3fff;
s->width = width;
s->height = height;
s->coded_width = FFALIGN(width, 16);
s->coded_height = FFALIGN(height, 16);
}
*poutbuf = buf;
*poutbuf_size = buf_size;

View file

@ -53,7 +53,8 @@ static void name ## _idct_dc_add4y_c(uint8_t *dst, int16_t block[4][16], \
#if CONFIG_VP7_DECODER
static void vp7_luma_dc_wht_c(int16_t block[4][4][16], int16_t dc[16])
{
int i, a1, b1, c1, d1;
int i;
unsigned a1, b1, c1, d1;
int16_t tmp[16];
for (i = 0; i < 4; i++) {
@ -61,10 +62,10 @@ static void vp7_luma_dc_wht_c(int16_t block[4][4][16], int16_t dc[16])
b1 = (dc[i * 4 + 0] - dc[i * 4 + 2]) * 23170;
c1 = dc[i * 4 + 1] * 12540 - dc[i * 4 + 3] * 30274;
d1 = dc[i * 4 + 1] * 30274 + dc[i * 4 + 3] * 12540;
tmp[i * 4 + 0] = (a1 + d1) >> 14;
tmp[i * 4 + 3] = (a1 - d1) >> 14;
tmp[i * 4 + 1] = (b1 + c1) >> 14;
tmp[i * 4 + 2] = (b1 - c1) >> 14;
tmp[i * 4 + 0] = (int)(a1 + d1) >> 14;
tmp[i * 4 + 3] = (int)(a1 - d1) >> 14;
tmp[i * 4 + 1] = (int)(b1 + c1) >> 14;
tmp[i * 4 + 2] = (int)(b1 - c1) >> 14;
}
for (i = 0; i < 4; i++) {
@ -73,10 +74,10 @@ static void vp7_luma_dc_wht_c(int16_t block[4][4][16], int16_t dc[16])
c1 = tmp[i + 4] * 12540 - tmp[i + 12] * 30274;
d1 = tmp[i + 4] * 30274 + tmp[i + 12] * 12540;
AV_ZERO64(dc + i * 4);
block[0][i][0] = (a1 + d1 + 0x20000) >> 18;
block[3][i][0] = (a1 - d1 + 0x20000) >> 18;
block[1][i][0] = (b1 + c1 + 0x20000) >> 18;
block[2][i][0] = (b1 - c1 + 0x20000) >> 18;
block[0][i][0] = (int)(a1 + d1 + 0x20000) >> 18;
block[3][i][0] = (int)(a1 - d1 + 0x20000) >> 18;
block[1][i][0] = (int)(b1 + c1 + 0x20000) >> 18;
block[2][i][0] = (int)(b1 - c1 + 0x20000) >> 18;
}
}
@ -95,7 +96,8 @@ static void vp7_luma_dc_wht_dc_c(int16_t block[4][4][16], int16_t dc[16])
static void vp7_idct_add_c(uint8_t *dst, int16_t block[16], ptrdiff_t stride)
{
int i, a1, b1, c1, d1;
int i;
unsigned a1, b1, c1, d1;
int16_t tmp[16];
for (i = 0; i < 4; i++) {
@ -104,10 +106,10 @@ static void vp7_idct_add_c(uint8_t *dst, int16_t block[16], ptrdiff_t stride)
c1 = block[i * 4 + 1] * 12540 - block[i * 4 + 3] * 30274;
d1 = block[i * 4 + 1] * 30274 + block[i * 4 + 3] * 12540;
AV_ZERO64(block + i * 4);
tmp[i * 4 + 0] = (a1 + d1) >> 14;
tmp[i * 4 + 3] = (a1 - d1) >> 14;
tmp[i * 4 + 1] = (b1 + c1) >> 14;
tmp[i * 4 + 2] = (b1 - c1) >> 14;
tmp[i * 4 + 0] = (int)(a1 + d1) >> 14;
tmp[i * 4 + 3] = (int)(a1 - d1) >> 14;
tmp[i * 4 + 1] = (int)(b1 + c1) >> 14;
tmp[i * 4 + 2] = (int)(b1 - c1) >> 14;
}
for (i = 0; i < 4; i++) {
@ -116,13 +118,13 @@ static void vp7_idct_add_c(uint8_t *dst, int16_t block[16], ptrdiff_t stride)
c1 = tmp[i + 4] * 12540 - tmp[i + 12] * 30274;
d1 = tmp[i + 4] * 30274 + tmp[i + 12] * 12540;
dst[0 * stride + i] = av_clip_uint8(dst[0 * stride + i] +
((a1 + d1 + 0x20000) >> 18));
((int)(a1 + d1 + 0x20000) >> 18));
dst[3 * stride + i] = av_clip_uint8(dst[3 * stride + i] +
((a1 - d1 + 0x20000) >> 18));
((int)(a1 - d1 + 0x20000) >> 18));
dst[1 * stride + i] = av_clip_uint8(dst[1 * stride + i] +
((b1 + c1 + 0x20000) >> 18));
((int)(b1 + c1 + 0x20000) >> 18));
dst[2 * stride + i] = av_clip_uint8(dst[2 * stride + i] +
((b1 - c1 + 0x20000) >> 18));
((int)(b1 - c1 + 0x20000) >> 18));
}
}

View file

@ -70,12 +70,12 @@ typedef struct VP8DSPContext {
void (*vp8_h_loop_filter_simple)(uint8_t *dst, ptrdiff_t stride, int flim);
/**
* first dimension: width>>3, height is assumed equal to width
* first dimension: 4-log2(width)
* second dimension: 0 if no vertical interpolation is needed;
* 1 4-tap vertical interpolation filter (my & 1)
* 2 6-tap vertical interpolation filter (!(my & 1))
* third dimension: same as second dimension, for horizontal interpolation
* so something like put_vp8_epel_pixels_tab[width>>3][2*!!my-(my&1)][2*!!mx-(mx&1)](..., mx, my)
* so something like put_vp8_epel_pixels_tab[4-log2(width)][2*!!my-(my&1)][2*!!mx-(mx&1)](..., mx, my)
*/
vp8_mc_func put_vp8_epel_pixels_tab[3][3][3];
vp8_mc_func put_vp8_bilinear_pixels_tab[3][3][3];
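
With the corrected comment, the index for, say, an 8-wide block with a 6-tap vertical and 4-tap horizontal filter works out as below (self-contained arithmetic; the subpel offsets are hypothetical):

    #include <stdio.h>

    static int ilog2(int v) { int n = 0; while (v > 1) { v >>= 1; n++; } return n; }

    int main(void)
    {
        int width = 8, mx = 3, my = 4;       /* hypothetical subpel offsets */

        int size_idx = 4 - ilog2(width);     /* 16 -> 0, 8 -> 1, 4 -> 2 */
        int v_idx    = 2 * !!my - (my & 1);  /* 0: copy, 1: 4-tap, 2: 6-tap */
        int h_idx    = 2 * !!mx - (mx & 1);

        printf("put_vp8_epel_pixels_tab[%d][%d][%d]\n", size_idx, v_idx, h_idx);
        /* -> [1][2][1]: 8-wide, 6-tap vertical, 4-tap horizontal */
        return 0;
    }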

Diff not shown because the file is too large. Load diff

View file

@ -24,42 +24,6 @@
#ifndef AVCODEC_VP9_H
#define AVCODEC_VP9_H
#include <stdint.h>
#include "thread.h"
#include "vp56.h"
enum BlockLevel {
BL_64X64,
BL_32X32,
BL_16X16,
BL_8X8,
};
enum BlockPartition {
PARTITION_NONE, // [ ] <-.
PARTITION_H, // [-] |
PARTITION_V, // [|] |
PARTITION_SPLIT, // [+] --'
};
enum BlockSize {
BS_64x64,
BS_64x32,
BS_32x64,
BS_32x32,
BS_32x16,
BS_16x32,
BS_16x16,
BS_16x8,
BS_8x16,
BS_8x8,
BS_8x4,
BS_4x8,
BS_4x4,
N_BS_SIZES,
};
enum TxfmMode {
TX_4X4,
TX_8X8,
@ -97,115 +61,13 @@ enum IntraPredMode {
N_INTRA_PRED_MODES
};
enum InterPredMode {
NEARESTMV = 10,
NEARMV = 11,
ZEROMV = 12,
NEWMV = 13,
};
enum FilterMode {
FILTER_8TAP_SMOOTH,
FILTER_8TAP_REGULAR,
FILTER_8TAP_SHARP,
FILTER_BILINEAR,
FILTER_SWITCHABLE,
N_FILTERS,
FILTER_SWITCHABLE = N_FILTERS,
};
enum CompPredMode {
PRED_SINGLEREF,
PRED_COMPREF,
PRED_SWITCHABLE,
};
struct VP9mvrefPair {
VP56mv mv[2];
int8_t ref[2];
};
typedef struct VP9Frame {
ThreadFrame tf;
AVBufferRef *extradata;
uint8_t *segmentation_map;
struct VP9mvrefPair *mv;
int uses_2pass;
AVBufferRef *hwaccel_priv_buf;
void *hwaccel_picture_private;
} VP9Frame;
typedef struct VP9BitstreamHeader {
// bitstream header
uint8_t profile;
uint8_t keyframe;
uint8_t invisible;
uint8_t errorres;
uint8_t intraonly;
uint8_t resetctx;
uint8_t refreshrefmask;
uint8_t highprecisionmvs;
enum FilterMode filtermode;
uint8_t allowcompinter;
uint8_t refreshctx;
uint8_t parallelmode;
uint8_t framectxid;
uint8_t use_last_frame_mvs;
uint8_t refidx[3];
uint8_t signbias[3];
uint8_t fixcompref;
uint8_t varcompref[2];
struct {
uint8_t level;
int8_t sharpness;
} filter;
struct {
uint8_t enabled;
uint8_t updated;
int8_t mode[2];
int8_t ref[4];
} lf_delta;
uint8_t yac_qi;
int8_t ydc_qdelta, uvdc_qdelta, uvac_qdelta;
uint8_t lossless;
#define MAX_SEGMENT 8
struct {
uint8_t enabled;
uint8_t temporal;
uint8_t absolute_vals;
uint8_t update_map;
uint8_t prob[7];
uint8_t pred_prob[3];
struct {
uint8_t q_enabled;
uint8_t lf_enabled;
uint8_t ref_enabled;
uint8_t skip_enabled;
uint8_t ref_val;
int16_t q_val;
int8_t lf_val;
int16_t qmul[2][2];
uint8_t lflvl[4][2];
} feat[MAX_SEGMENT];
} segmentation;
enum TxfmMode txfmmode;
enum CompPredMode comppredmode;
struct {
unsigned log2_tile_cols, log2_tile_rows;
unsigned tile_cols, tile_rows;
} tiling;
int uncompressed_header_size;
int compressed_header_size;
} VP9BitstreamHeader;
typedef struct VP9SharedContext {
VP9BitstreamHeader h;
ThreadFrame refs[8];
#define CUR_FRAME 0
#define REF_FRAME_MVPAIR 1
#define REF_FRAME_SEGMAP 2
VP9Frame frames[3];
} VP9SharedContext;
#endif /* AVCODEC_VP9_H */

View file

@ -27,19 +27,19 @@
(VP56mv) { .x = ROUNDED_DIV(a.x + b.x + c.x + d.x, 4), \
.y = ROUNDED_DIV(a.y + b.y + c.y + d.y, 4) }
static void FN(inter_pred)(AVCodecContext *ctx)
static void FN(inter_pred)(VP9TileData *td)
{
static const uint8_t bwlog_tab[2][N_BS_SIZES] = {
{ 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4 },
{ 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4 },
};
VP9Context *s = ctx->priv_data;
VP9Block *b = s->b;
int row = s->row, col = s->col;
VP9Context *s = td->s;
VP9Block *b = td->b;
int row = td->row, col = td->col;
ThreadFrame *tref1 = &s->s.refs[s->s.h.refidx[b->ref[0]]], *tref2;
AVFrame *ref1 = tref1->f, *ref2;
int w1 = ref1->width, h1 = ref1->height, w2, h2;
ptrdiff_t ls_y = s->y_stride, ls_uv = s->uv_stride;
ptrdiff_t ls_y = td->y_stride, ls_uv = td->uv_stride;
int bytesperpixel = BYTES_PER_PIXEL;
if (b->comp) {
@ -55,26 +55,26 @@ static void FN(inter_pred)(AVCodecContext *ctx)
#if SCALED == 0
if (b->bs == BS_8x4) {
mc_luma_dir(s, mc[3][b->filter][0], s->dst[0], ls_y,
mc_luma_dir(td, mc[3][b->filter][0], td->dst[0], ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, col << 3, &b->mv[0][0],,,,, 8, 4, w1, h1, 0);
mc_luma_dir(s, mc[3][b->filter][0],
s->dst[0] + 4 * ls_y, ls_y,
mc_luma_dir(td, mc[3][b->filter][0],
td->dst[0] + 4 * ls_y, ls_y,
ref1->data[0], ref1->linesize[0], tref1,
(row << 3) + 4, col << 3, &b->mv[2][0],,,,, 8, 4, w1, h1, 0);
w1 = (w1 + s->ss_h) >> s->ss_h;
if (s->ss_v) {
h1 = (h1 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][0], b->mv[2][0]);
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 2, col << (3 - s->ss_h),
&uvmv,,,,, 8 >> s->ss_h, 4, w1, h1, 0);
} else {
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 3, col << (3 - s->ss_h),
@ -87,8 +87,8 @@ static void FN(inter_pred)(AVCodecContext *ctx)
} else {
uvmv = ROUNDED_DIV_MVx2(b->mv[0][0], b->mv[2][0]);
}
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][0],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][0],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
(row << 3) + 4, col << (3 - s->ss_h),
@ -96,26 +96,26 @@ static void FN(inter_pred)(AVCodecContext *ctx)
}
if (b->comp) {
mc_luma_dir(s, mc[3][b->filter][1], s->dst[0], ls_y,
mc_luma_dir(td, mc[3][b->filter][1], td->dst[0], ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, col << 3, &b->mv[0][1],,,,, 8, 4, w2, h2, 1);
mc_luma_dir(s, mc[3][b->filter][1],
s->dst[0] + 4 * ls_y, ls_y,
mc_luma_dir(td, mc[3][b->filter][1],
td->dst[0] + 4 * ls_y, ls_y,
ref2->data[0], ref2->linesize[0], tref2,
(row << 3) + 4, col << 3, &b->mv[2][1],,,,, 8, 4, w2, h2, 1);
w2 = (w2 + s->ss_h) >> s->ss_h;
if (s->ss_v) {
h2 = (h2 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][1], b->mv[2][1]);
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 2, col << (3 - s->ss_h),
&uvmv,,,,, 8 >> s->ss_h, 4, w2, h2, 1);
} else {
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 3, col << (3 - s->ss_h),
@ -128,8 +128,8 @@ static void FN(inter_pred)(AVCodecContext *ctx)
} else {
uvmv = ROUNDED_DIV_MVx2(b->mv[0][1], b->mv[2][1]);
}
mc_chroma_dir(s, mc[3 + s->ss_h][b->filter][1],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[3 + s->ss_h][b->filter][1],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
(row << 3) + 4, col << (3 - s->ss_h),
@ -137,32 +137,32 @@ static void FN(inter_pred)(AVCodecContext *ctx)
}
}
} else if (b->bs == BS_4x8) {
mc_luma_dir(s, mc[4][b->filter][0], s->dst[0], ls_y,
mc_luma_dir(td, mc[4][b->filter][0], td->dst[0], ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, col << 3, &b->mv[0][0],,,,, 4, 8, w1, h1, 0);
mc_luma_dir(s, mc[4][b->filter][0], s->dst[0] + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][0], td->dst[0] + 4 * bytesperpixel, ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, (col << 3) + 4, &b->mv[1][0],,,,, 4, 8, w1, h1, 0);
h1 = (h1 + s->ss_v) >> s->ss_v;
if (s->ss_h) {
w1 = (w1 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][0], b->mv[1][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << (3 - s->ss_v), col << 2,
&uvmv,,,,, 4, 8 >> s->ss_v, w1, h1, 0);
} else {
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << (3 - s->ss_v), col << 3,
&b->mv[0][0],,,,, 4, 8 >> s->ss_v, w1, h1, 0);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << (3 - s->ss_v), (col << 3) + 4,
@ -170,32 +170,32 @@ static void FN(inter_pred)(AVCodecContext *ctx)
}
if (b->comp) {
mc_luma_dir(s, mc[4][b->filter][1], s->dst[0], ls_y,
mc_luma_dir(td, mc[4][b->filter][1], td->dst[0], ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, col << 3, &b->mv[0][1],,,,, 4, 8, w2, h2, 1);
mc_luma_dir(s, mc[4][b->filter][1], s->dst[0] + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][1], td->dst[0] + 4 * bytesperpixel, ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, (col << 3) + 4, &b->mv[1][1],,,,, 4, 8, w2, h2, 1);
h2 = (h2 + s->ss_v) >> s->ss_v;
if (s->ss_h) {
w2 = (w2 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][1], b->mv[1][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << (3 - s->ss_v), col << 2,
&uvmv,,,,, 4, 8 >> s->ss_v, w2, h2, 1);
} else {
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << (3 - s->ss_v), col << 3,
&b->mv[0][1],,,,, 4, 8 >> s->ss_v, w2, h2, 1);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << (3 - s->ss_v), (col << 3) + 4,
@ -205,25 +205,27 @@ static void FN(inter_pred)(AVCodecContext *ctx)
} else
#endif
{
#if SCALED == 0
av_assert2(b->bs == BS_4x4);
#endif
// FIXME if two horizontally adjacent blocks have the same MV,
// do a w8 instead of a w4 call
mc_luma_dir(s, mc[4][b->filter][0], s->dst[0], ls_y,
mc_luma_dir(td, mc[4][b->filter][0], td->dst[0], ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, col << 3, &b->mv[0][0],
0, 0, 8, 8, 4, 4, w1, h1, 0);
mc_luma_dir(s, mc[4][b->filter][0], s->dst[0] + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][0], td->dst[0] + 4 * bytesperpixel, ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, (col << 3) + 4, &b->mv[1][0],
4, 0, 8, 8, 4, 4, w1, h1, 0);
mc_luma_dir(s, mc[4][b->filter][0],
s->dst[0] + 4 * ls_y, ls_y,
mc_luma_dir(td, mc[4][b->filter][0],
td->dst[0] + 4 * ls_y, ls_y,
ref1->data[0], ref1->linesize[0], tref1,
(row << 3) + 4, col << 3, &b->mv[2][0],
0, 4, 8, 8, 4, 4, w1, h1, 0);
mc_luma_dir(s, mc[4][b->filter][0],
s->dst[0] + 4 * ls_y + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][0],
td->dst[0] + 4 * ls_y + 4 * bytesperpixel, ls_y,
ref1->data[0], ref1->linesize[0], tref1,
(row << 3) + 4, (col << 3) + 4, &b->mv[3][0],
4, 4, 8, 8, 4, 4, w1, h1, 0);
@ -233,24 +235,24 @@ static void FN(inter_pred)(AVCodecContext *ctx)
w1 = (w1 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx4(b->mv[0][0], b->mv[1][0],
b->mv[2][0], b->mv[3][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 2, col << 2,
&uvmv, 0, 0, 4, 4, 4, 4, w1, h1, 0);
} else {
uvmv = ROUNDED_DIV_MVx2(b->mv[0][0], b->mv[2][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 2, col << 3,
&uvmv, 0, 0, 8, 4, 4, 4, w1, h1, 0);
uvmv = ROUNDED_DIV_MVx2(b->mv[1][0], b->mv[3][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 2, (col << 3) + 4,
@ -260,8 +262,8 @@ static void FN(inter_pred)(AVCodecContext *ctx)
if (s->ss_h) {
w1 = (w1 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][0], b->mv[1][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 3, col << 2,
@ -270,35 +272,35 @@ static void FN(inter_pred)(AVCodecContext *ctx)
// bottom block
// https://code.google.com/p/webm/issues/detail?id=993
uvmv = ROUNDED_DIV_MVx2(b->mv[1][0], b->mv[2][0]);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
(row << 3) + 4, col << 2,
&uvmv, 0, 4, 4, 8, 4, 4, w1, h1, 0);
} else {
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 3, col << 3,
&b->mv[0][0], 0, 0, 8, 8, 4, 4, w1, h1, 0);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << 3, (col << 3) + 4,
&b->mv[1][0], 4, 0, 8, 8, 4, 4, w1, h1, 0);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
(row << 3) + 4, col << 3,
&b->mv[2][0], 0, 4, 8, 8, 4, 4, w1, h1, 0);
mc_chroma_dir(s, mc[4][b->filter][0],
s->dst[1] + 4 * ls_uv + 4 * bytesperpixel,
s->dst[2] + 4 * ls_uv + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][0],
td->dst[1] + 4 * ls_uv + 4 * bytesperpixel,
td->dst[2] + 4 * ls_uv + 4 * bytesperpixel, ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
(row << 3) + 4, (col << 3) + 4,
@ -307,18 +309,18 @@ static void FN(inter_pred)(AVCodecContext *ctx)
}
if (b->comp) {
mc_luma_dir(s, mc[4][b->filter][1], s->dst[0], ls_y,
mc_luma_dir(td, mc[4][b->filter][1], td->dst[0], ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, col << 3, &b->mv[0][1], 0, 0, 8, 8, 4, 4, w2, h2, 1);
mc_luma_dir(s, mc[4][b->filter][1], s->dst[0] + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][1], td->dst[0] + 4 * bytesperpixel, ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, (col << 3) + 4, &b->mv[1][1], 4, 0, 8, 8, 4, 4, w2, h2, 1);
mc_luma_dir(s, mc[4][b->filter][1],
s->dst[0] + 4 * ls_y, ls_y,
mc_luma_dir(td, mc[4][b->filter][1],
td->dst[0] + 4 * ls_y, ls_y,
ref2->data[0], ref2->linesize[0], tref2,
(row << 3) + 4, col << 3, &b->mv[2][1], 0, 4, 8, 8, 4, 4, w2, h2, 1);
mc_luma_dir(s, mc[4][b->filter][1],
s->dst[0] + 4 * ls_y + 4 * bytesperpixel, ls_y,
mc_luma_dir(td, mc[4][b->filter][1],
td->dst[0] + 4 * ls_y + 4 * bytesperpixel, ls_y,
ref2->data[0], ref2->linesize[0], tref2,
(row << 3) + 4, (col << 3) + 4, &b->mv[3][1], 4, 4, 8, 8, 4, 4, w2, h2, 1);
if (s->ss_v) {
@ -327,24 +329,24 @@ static void FN(inter_pred)(AVCodecContext *ctx)
w2 = (w2 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx4(b->mv[0][1], b->mv[1][1],
b->mv[2][1], b->mv[3][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 2, col << 2,
&uvmv, 0, 0, 4, 4, 4, 4, w2, h2, 1);
} else {
uvmv = ROUNDED_DIV_MVx2(b->mv[0][1], b->mv[2][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 2, col << 3,
&uvmv, 0, 0, 8, 4, 4, 4, w2, h2, 1);
uvmv = ROUNDED_DIV_MVx2(b->mv[1][1], b->mv[3][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 2, (col << 3) + 4,
@ -354,8 +356,8 @@ static void FN(inter_pred)(AVCodecContext *ctx)
if (s->ss_h) {
w2 = (w2 + 1) >> 1;
uvmv = ROUNDED_DIV_MVx2(b->mv[0][1], b->mv[1][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 3, col << 2,
@ -364,35 +366,35 @@ static void FN(inter_pred)(AVCodecContext *ctx)
// bottom block
// https://code.google.com/p/webm/issues/detail?id=993
uvmv = ROUNDED_DIV_MVx2(b->mv[1][1], b->mv[2][1]);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
(row << 3) + 4, col << 2,
&uvmv, 0, 4, 4, 8, 4, 4, w2, h2, 1);
} else {
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 3, col << 3,
&b->mv[0][1], 0, 0, 8, 8, 4, 4, w2, h2, 1);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * bytesperpixel,
s->dst[2] + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * bytesperpixel,
td->dst[2] + 4 * bytesperpixel, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << 3, (col << 3) + 4,
&b->mv[1][1], 4, 0, 8, 8, 4, 4, w2, h2, 1);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * ls_uv, s->dst[2] + 4 * ls_uv, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * ls_uv, td->dst[2] + 4 * ls_uv, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
(row << 3) + 4, col << 3,
&b->mv[2][1], 0, 4, 8, 8, 4, 4, w2, h2, 1);
mc_chroma_dir(s, mc[4][b->filter][1],
s->dst[1] + 4 * ls_uv + 4 * bytesperpixel,
s->dst[2] + 4 * ls_uv + 4 * bytesperpixel, ls_uv,
mc_chroma_dir(td, mc[4][b->filter][1],
td->dst[1] + 4 * ls_uv + 4 * bytesperpixel,
td->dst[2] + 4 * ls_uv + 4 * bytesperpixel, ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
(row << 3) + 4, (col << 3) + 4,
@ -403,29 +405,31 @@ static void FN(inter_pred)(AVCodecContext *ctx)
}
} else {
int bwl = bwlog_tab[0][b->bs];
int bw = bwh_tab[0][b->bs][0] * 4, bh = bwh_tab[0][b->bs][1] * 4;
int uvbw = bwh_tab[s->ss_h][b->bs][0] * 4, uvbh = bwh_tab[s->ss_v][b->bs][1] * 4;
int bw = ff_vp9_bwh_tab[0][b->bs][0] * 4;
int bh = ff_vp9_bwh_tab[0][b->bs][1] * 4;
int uvbw = ff_vp9_bwh_tab[s->ss_h][b->bs][0] * 4;
int uvbh = ff_vp9_bwh_tab[s->ss_v][b->bs][1] * 4;
mc_luma_dir(s, mc[bwl][b->filter][0], s->dst[0], ls_y,
mc_luma_dir(td, mc[bwl][b->filter][0], td->dst[0], ls_y,
ref1->data[0], ref1->linesize[0], tref1,
row << 3, col << 3, &b->mv[0][0], 0, 0, bw, bh, bw, bh, w1, h1, 0);
w1 = (w1 + s->ss_h) >> s->ss_h;
h1 = (h1 + s->ss_v) >> s->ss_v;
mc_chroma_dir(s, mc[bwl + s->ss_h][b->filter][0],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[bwl + s->ss_h][b->filter][0],
td->dst[1], td->dst[2], ls_uv,
ref1->data[1], ref1->linesize[1],
ref1->data[2], ref1->linesize[2], tref1,
row << (3 - s->ss_v), col << (3 - s->ss_h),
&b->mv[0][0], 0, 0, uvbw, uvbh, uvbw, uvbh, w1, h1, 0);
if (b->comp) {
mc_luma_dir(s, mc[bwl][b->filter][1], s->dst[0], ls_y,
mc_luma_dir(td, mc[bwl][b->filter][1], td->dst[0], ls_y,
ref2->data[0], ref2->linesize[0], tref2,
row << 3, col << 3, &b->mv[0][1], 0, 0, bw, bh, bw, bh, w2, h2, 1);
w2 = (w2 + s->ss_h) >> s->ss_h;
h2 = (h2 + s->ss_v) >> s->ss_v;
mc_chroma_dir(s, mc[bwl + s->ss_h][b->filter][1],
s->dst[1], s->dst[2], ls_uv,
mc_chroma_dir(td, mc[bwl + s->ss_h][b->filter][1],
td->dst[1], td->dst[2], ls_uv,
ref2->data[1], ref2->linesize[1],
ref2->data[2], ref2->linesize[2], tref2,
row << (3 - s->ss_v), col << (3 - s->ss_h),
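Every hunk in this file follows one pattern: FN(inter_pred) now takes the per-tile VP9TileData instead of the whole AVCodecContext, block-local state (row/col, dst pointers, strides) is read from td, and shared decoder state stays reachable through td->s. A minimal standalone sketch of that shape, using simplified stand-in types rather than the real FFmpeg structs:

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* simplified stand-ins; these are NOT the real VP9Context/VP9TileData */
typedef struct SharedCtx {
    int cols, rows;                 /* frame-wide, read-mostly state */
} SharedCtx;

typedef struct TileCtx {
    SharedCtx *s;                   /* back-pointer to the shared context */
    int row, col;                   /* block position owned by this tile */
    uint8_t *dst[3];                /* per-tile destination pointers */
    ptrdiff_t y_stride, uv_stride;  /* per-tile strides */
} TileCtx;

/* before: inter_pred(AVCodecContext *ctx) read everything through one shared
 * struct; after: each tile thread hands in its own TileCtx, so the function
 * can run on several tiles at once while shared state stays behind td->s */
static void inter_pred(TileCtx *td)
{
    SharedCtx *s = td->s;
    printf("predicting block row=%d col=%d in a %dx%d-block frame, stride %ld\n",
           td->row, td->col, s->cols, s->rows, (long)td->y_stride);
}

int main(void)
{
    SharedCtx s = { .cols = 80, .rows = 45 };
    uint8_t luma[64 * 8] = { 0 };
    TileCtx td = { .s = &s, .row = 2, .col = 3,
                   .dst = { luma, NULL, NULL }, .y_stride = 64 };
    inter_pred(&td);
    return 0;
}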

Diff not shown because the file is too large.

Diff not shown because the file is too large.

Diff not shown because the file is too large.

View file

@ -0,0 +1,240 @@
/*
* VP9 compatible video decoder
*
* Copyright (C) 2013 Ronald S. Bultje <rsbultje gmail com>
* Copyright (C) 2013 Clément Bœsch <u pkh me>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#ifndef AVCODEC_VP9DEC_H
#define AVCODEC_VP9DEC_H
#include <stddef.h>
#include <stdint.h>
#include <stdatomic.h>
#include "libavutil/buffer.h"
#include "libavutil/thread.h"
#include "libavutil/internal.h"
#include "vp9.h"
#include "vp9dsp.h"
#include "vp9shared.h"
enum MVJoint {
MV_JOINT_ZERO,
MV_JOINT_H,
MV_JOINT_V,
MV_JOINT_HV,
};
typedef struct ProbContext {
uint8_t y_mode[4][9];
uint8_t uv_mode[10][9];
uint8_t filter[4][2];
uint8_t mv_mode[7][3];
uint8_t intra[4];
uint8_t comp[5];
uint8_t single_ref[5][2];
uint8_t comp_ref[5];
uint8_t tx32p[2][3];
uint8_t tx16p[2][2];
uint8_t tx8p[2];
uint8_t skip[3];
uint8_t mv_joint[3];
struct {
uint8_t sign;
uint8_t classes[10];
uint8_t class0;
uint8_t bits[10];
uint8_t class0_fp[2][3];
uint8_t fp[3];
uint8_t class0_hp;
uint8_t hp;
} mv_comp[2];
uint8_t partition[4][4][3];
} ProbContext;
typedef struct VP9Filter {
uint8_t level[8 * 8];
uint8_t /* bit=col */ mask[2 /* 0=y, 1=uv */][2 /* 0=col, 1=row */]
[8 /* rows */][4 /* 0=16, 1=8, 2=4, 3=inner4 */];
} VP9Filter;
typedef struct VP9Block {
uint8_t seg_id, intra, comp, ref[2], mode[4], uvmode, skip;
enum FilterMode filter;
VP56mv mv[4 /* b_idx */][2 /* ref */];
enum BlockSize bs;
enum TxfmMode tx, uvtx;
enum BlockLevel bl;
enum BlockPartition bp;
} VP9Block;
typedef struct VP9TileData VP9TileData;
typedef struct VP9Context {
VP9SharedContext s;
VP9TileData *td;
VP9DSPContext dsp;
VideoDSPContext vdsp;
GetBitContext gb;
VP56RangeCoder c;
int pass, active_tile_cols;
#if HAVE_THREADS
pthread_mutex_t progress_mutex;
pthread_cond_t progress_cond;
atomic_int *entries;
#endif
uint8_t ss_h, ss_v;
uint8_t last_bpp, bpp_index, bytesperpixel;
uint8_t last_keyframe;
// sb_cols/rows, rows/cols and last_fmt are used for allocating all internal
// arrays, and are thus per-thread. w/h and gf_fmt are synced between threads
// and are therefore per-stream. pix_fmt represents the value in the header
// of the currently processed frame.
int w, h;
enum AVPixelFormat pix_fmt, last_fmt, gf_fmt;
unsigned sb_cols, sb_rows, rows, cols;
ThreadFrame next_refs[8];
struct {
uint8_t lim_lut[64];
uint8_t mblim_lut[64];
} filter_lut;
struct {
ProbContext p;
uint8_t coef[4][2][2][6][6][3];
} prob_ctx[4];
struct {
ProbContext p;
uint8_t coef[4][2][2][6][6][11];
} prob;
// contextual (above) cache
uint8_t *above_partition_ctx;
uint8_t *above_mode_ctx;
// FIXME maybe merge some of the below in a flags field?
uint8_t *above_y_nnz_ctx;
uint8_t *above_uv_nnz_ctx[2];
uint8_t *above_skip_ctx; // 1bit
uint8_t *above_txfm_ctx; // 2bit
uint8_t *above_segpred_ctx; // 1bit
uint8_t *above_intra_ctx; // 1bit
uint8_t *above_comp_ctx; // 1bit
uint8_t *above_ref_ctx; // 2bit
uint8_t *above_filter_ctx;
VP56mv (*above_mv_ctx)[2];
// whole-frame cache
uint8_t *intra_pred_data[3];
VP9Filter *lflvl;
// block reconstruction intermediates
int block_alloc_using_2pass;
uint16_t mvscale[3][2];
uint8_t mvstep[3][2];
} VP9Context;
struct VP9TileData {
// VP9Context should be const, but because of the threading API
// (it generates a lot of warnings) it's not.
VP9Context *s;
VP56RangeCoder *c_b;
VP56RangeCoder *c;
int row, row7, col, col7;
uint8_t *dst[3];
ptrdiff_t y_stride, uv_stride;
VP9Block *b_base, *b;
unsigned tile_col_start;
struct {
unsigned y_mode[4][10];
unsigned uv_mode[10][10];
unsigned filter[4][3];
unsigned mv_mode[7][4];
unsigned intra[4][2];
unsigned comp[5][2];
unsigned single_ref[5][2][2];
unsigned comp_ref[5][2];
unsigned tx32p[2][4];
unsigned tx16p[2][3];
unsigned tx8p[2][2];
unsigned skip[3][2];
unsigned mv_joint[4];
struct {
unsigned sign[2];
unsigned classes[11];
unsigned class0[2];
unsigned bits[10][2];
unsigned class0_fp[2][4];
unsigned fp[4];
unsigned class0_hp[2];
unsigned hp[2];
} mv_comp[2];
unsigned partition[4][4][4];
unsigned coef[4][2][2][6][6][3];
unsigned eob[4][2][2][6][6][2];
} counts;
// whole-frame cache
DECLARE_ALIGNED(32, uint8_t, edge_emu_buffer)[135 * 144 * 2];
// contextual (left) cache
DECLARE_ALIGNED(16, uint8_t, left_y_nnz_ctx)[16];
DECLARE_ALIGNED(16, uint8_t, left_mode_ctx)[16];
DECLARE_ALIGNED(16, VP56mv, left_mv_ctx)[16][2];
DECLARE_ALIGNED(16, uint8_t, left_uv_nnz_ctx)[2][16];
DECLARE_ALIGNED(8, uint8_t, left_partition_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_skip_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_txfm_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_segpred_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_intra_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_comp_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_ref_ctx)[8];
DECLARE_ALIGNED(8, uint8_t, left_filter_ctx)[8];
// block reconstruction intermediates
DECLARE_ALIGNED(32, uint8_t, tmp_y)[64 * 64 * 2];
DECLARE_ALIGNED(32, uint8_t, tmp_uv)[2][64 * 64 * 2];
struct { int x, y; } min_mv, max_mv;
int16_t *block_base, *block, *uvblock_base[2], *uvblock[2];
uint8_t *eob_base, *uveob_base[2], *eob, *uveob[2];
};
void ff_vp9_fill_mv(VP9TileData *td, VP56mv *mv, int mode, int sb);
void ff_vp9_adapt_probs(VP9Context *s);
void ff_vp9_decode_block(VP9TileData *td, int row, int col,
VP9Filter *lflvl, ptrdiff_t yoff, ptrdiff_t uvoff,
enum BlockLevel bl, enum BlockPartition bp);
void ff_vp9_loopfilter_sb(AVCodecContext *avctx, VP9Filter *lflvl,
int row, int col, ptrdiff_t yoff, ptrdiff_t uvoff);
void ff_vp9_intra_recon_8bpp(VP9TileData *td,
ptrdiff_t y_off, ptrdiff_t uv_off);
void ff_vp9_intra_recon_16bpp(VP9TileData *td,
ptrdiff_t y_off, ptrdiff_t uv_off);
void ff_vp9_inter_recon_8bpp(VP9TileData *td);
void ff_vp9_inter_recon_16bpp(VP9TileData *td);
#endif /* AVCODEC_VP9DEC_H */
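VP9TileData keeps per-tile counts histograms that mirror the ProbContext layout, and ff_vp9_adapt_probs(VP9Context *s) later folds the observed symbol counts back into the coding probabilities. The sketch below shows only the general idea of a count-driven probability update; it is a generic rule for illustration, not the actual adaptation performed by ff_vp9_adapt_probs.

#include <stdint.h>
#include <stdio.h>

/* Generic count-based update: pull the probability of "0" toward the observed
 * frequency, keeping it in [1, 255] so the arithmetic coder never sees 0/256. */
static void adapt_prob(uint8_t *p, const unsigned ct[2])
{
    unsigned total = ct[0] + ct[1];
    if (!total)
        return;                              /* nothing observed, keep old prob */
    unsigned empirical = (255 * ct[0] + total / 2) / total;
    unsigned updated = (3 * *p + empirical + 2) / 4;   /* blend old and new */
    if (updated < 1)   updated = 1;
    if (updated > 255) updated = 255;
    *p = (uint8_t) updated;
}

int main(void)
{
    uint8_t skip_prob = 192;              /* e.g. one entry of ProbContext.skip */
    unsigned skip_counts[2] = { 30, 70 }; /* e.g. one row of counts.skip        */
    adapt_prob(&skip_prob, skip_counts);
    printf("adapted skip prob: %u\n", skip_prob);
    return 0;
}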

View file

@ -25,6 +25,62 @@
#include "libavutil/common.h"
#include "vp9dsp.h"
const DECLARE_ALIGNED(16, int16_t, ff_vp9_subpel_filters)[3][16][8] = {
[FILTER_8TAP_REGULAR] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ 0, 1, -5, 126, 8, -3, 1, 0 },
{ -1, 3, -10, 122, 18, -6, 2, 0 },
{ -1, 4, -13, 118, 27, -9, 3, -1 },
{ -1, 4, -16, 112, 37, -11, 4, -1 },
{ -1, 5, -18, 105, 48, -14, 4, -1 },
{ -1, 5, -19, 97, 58, -16, 5, -1 },
{ -1, 6, -19, 88, 68, -18, 5, -1 },
{ -1, 6, -19, 78, 78, -19, 6, -1 },
{ -1, 5, -18, 68, 88, -19, 6, -1 },
{ -1, 5, -16, 58, 97, -19, 5, -1 },
{ -1, 4, -14, 48, 105, -18, 5, -1 },
{ -1, 4, -11, 37, 112, -16, 4, -1 },
{ -1, 3, -9, 27, 118, -13, 4, -1 },
{ 0, 2, -6, 18, 122, -10, 3, -1 },
{ 0, 1, -3, 8, 126, -5, 1, 0 },
}, [FILTER_8TAP_SHARP] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ -1, 3, -7, 127, 8, -3, 1, 0 },
{ -2, 5, -13, 125, 17, -6, 3, -1 },
{ -3, 7, -17, 121, 27, -10, 5, -2 },
{ -4, 9, -20, 115, 37, -13, 6, -2 },
{ -4, 10, -23, 108, 48, -16, 8, -3 },
{ -4, 10, -24, 100, 59, -19, 9, -3 },
{ -4, 11, -24, 90, 70, -21, 10, -4 },
{ -4, 11, -23, 80, 80, -23, 11, -4 },
{ -4, 10, -21, 70, 90, -24, 11, -4 },
{ -3, 9, -19, 59, 100, -24, 10, -4 },
{ -3, 8, -16, 48, 108, -23, 10, -4 },
{ -2, 6, -13, 37, 115, -20, 9, -4 },
{ -2, 5, -10, 27, 121, -17, 7, -3 },
{ -1, 3, -6, 17, 125, -13, 5, -2 },
{ 0, 1, -3, 8, 127, -7, 3, -1 },
}, [FILTER_8TAP_SMOOTH] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ -3, -1, 32, 64, 38, 1, -3, 0 },
{ -2, -2, 29, 63, 41, 2, -3, 0 },
{ -2, -2, 26, 63, 43, 4, -4, 0 },
{ -2, -3, 24, 62, 46, 5, -4, 0 },
{ -2, -3, 21, 60, 49, 7, -4, 0 },
{ -1, -4, 18, 59, 51, 9, -4, 0 },
{ -1, -4, 16, 57, 53, 12, -4, -1 },
{ -1, -4, 14, 55, 55, 14, -4, -1 },
{ -1, -4, 12, 53, 57, 16, -4, -1 },
{ 0, -4, 9, 51, 59, 18, -4, -1 },
{ 0, -4, 7, 49, 60, 21, -3, -2 },
{ 0, -4, 5, 46, 62, 24, -3, -2 },
{ 0, -4, 4, 43, 63, 26, -2, -2 },
{ 0, -3, 2, 41, 63, 29, -2, -2 },
{ 0, -3, 1, 38, 64, 32, -1, -3 },
}
};
av_cold void ff_vp9dsp_init(VP9DSPContext *dsp, int bpp, int bitexact)
{
if (bpp == 8) {
@ -36,6 +92,8 @@ av_cold void ff_vp9dsp_init(VP9DSPContext *dsp, int bpp, int bitexact)
ff_vp9dsp_init_12(dsp);
}
if (ARCH_AARCH64) ff_vp9dsp_init_aarch64(dsp, bpp);
if (ARCH_ARM) ff_vp9dsp_init_arm(dsp, bpp);
if (ARCH_X86) ff_vp9dsp_init_x86(dsp, bpp, bitexact);
if (ARCH_MIPS) ff_vp9dsp_init_mips(dsp, bpp);
}
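Each row of ff_vp9_subpel_filters above is one 8-tap interpolation phase, and every phase sums to 128, which is what allows the filtered value to be renormalized with a single shift. A quick standalone check using two phases copied from the FILTER_8TAP_REGULAR block above:

#include <stdio.h>

int main(void)
{
    /* rows 1 and 8 of FILTER_8TAP_REGULAR, copied from the table above */
    static const int phase_1[8] = {  0, 1,  -5, 126,  8,  -3, 1,  0 };
    static const int phase_8[8] = { -1, 6, -19,  78, 78, -19, 6, -1 };
    const int *phases[2] = { phase_1, phase_8 };

    for (int p = 0; p < 2; p++) {
        int sum = 0;
        for (int i = 0; i < 8; i++)
            sum += phases[p][i];
        printf("phase %d: coefficient sum = %d\n", p, sum);  /* prints 128 */
    }
    return 0;
}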

View file

@ -27,7 +27,7 @@
#include <stddef.h>
#include <stdint.h>
#include "vp9.h"
#include "libavcodec/vp9.h"
typedef void (*vp9_mc_func)(uint8_t *dst, ptrdiff_t dst_stride,
const uint8_t *ref, ptrdiff_t ref_stride,
@ -111,21 +111,25 @@ typedef struct VP9DSPContext {
*
* dst/stride are aligned by hsize
*/
vp9_mc_func mc[5][4][2][2][2];
vp9_mc_func mc[5][N_FILTERS][2][2][2];
/*
* for scalable MC, first 3 dimensions identical to above, the other two
* don't exist since it changes per stepsize.
*/
vp9_scaled_mc_func smc[5][4][2];
vp9_scaled_mc_func smc[5][N_FILTERS][2];
} VP9DSPContext;
extern const int16_t ff_vp9_subpel_filters[3][16][8];
void ff_vp9dsp_init(VP9DSPContext *dsp, int bpp, int bitexact);
void ff_vp9dsp_init_8(VP9DSPContext *dsp);
void ff_vp9dsp_init_10(VP9DSPContext *dsp);
void ff_vp9dsp_init_12(VP9DSPContext *dsp);
void ff_vp9dsp_init_aarch64(VP9DSPContext *dsp, int bpp);
void ff_vp9dsp_init_arm(VP9DSPContext *dsp, int bpp);
void ff_vp9dsp_init_x86(VP9DSPContext *dsp, int bpp, int bitexact);
void ff_vp9dsp_init_mips(VP9DSPContext *dsp, int bpp);
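The header change above replaces the literal 4 in mc[5][4][2][2][2] and smc[5][4][2] with N_FILTERS, so the dispatch tables stay sized by the filter enum. A small sketch of that idiom with a placeholder enum; the member list and order here are illustrative, not the exact FilterMode definition.

#include <stdio.h>

typedef void (*mc_func)(void);

/* keep a trailing sentinel in the enum so every table sized by it stays in
 * sync if a new filter mode is ever added */
enum FilterMode {
    FILTER_8TAP_SMOOTH,
    FILTER_8TAP_REGULAR,
    FILTER_8TAP_SHARP,
    FILTER_BILINEAR,
    N_FILTERS
};

static void dummy_mc(void) { }

int main(void)
{
    mc_func mc[5][N_FILTERS];                /* sized by the enum, not by 4 */
    for (int bwl = 0; bwl < 5; bwl++)
        for (int f = 0; f < N_FILTERS; f++)
            mc[bwl][f] = dummy_mc;
    mc[3][FILTER_8TAP_REGULAR]();            /* index with enum values */
    printf("table holds %d filters per block size\n", N_FILTERS);
    return 0;
}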

View file

@ -1991,61 +1991,6 @@ copy_avg_fn(4)
#endif /* BIT_DEPTH != 12 */
static const int16_t vp9_subpel_filters[3][16][8] = {
[FILTER_8TAP_REGULAR] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ 0, 1, -5, 126, 8, -3, 1, 0 },
{ -1, 3, -10, 122, 18, -6, 2, 0 },
{ -1, 4, -13, 118, 27, -9, 3, -1 },
{ -1, 4, -16, 112, 37, -11, 4, -1 },
{ -1, 5, -18, 105, 48, -14, 4, -1 },
{ -1, 5, -19, 97, 58, -16, 5, -1 },
{ -1, 6, -19, 88, 68, -18, 5, -1 },
{ -1, 6, -19, 78, 78, -19, 6, -1 },
{ -1, 5, -18, 68, 88, -19, 6, -1 },
{ -1, 5, -16, 58, 97, -19, 5, -1 },
{ -1, 4, -14, 48, 105, -18, 5, -1 },
{ -1, 4, -11, 37, 112, -16, 4, -1 },
{ -1, 3, -9, 27, 118, -13, 4, -1 },
{ 0, 2, -6, 18, 122, -10, 3, -1 },
{ 0, 1, -3, 8, 126, -5, 1, 0 },
}, [FILTER_8TAP_SHARP] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ -1, 3, -7, 127, 8, -3, 1, 0 },
{ -2, 5, -13, 125, 17, -6, 3, -1 },
{ -3, 7, -17, 121, 27, -10, 5, -2 },
{ -4, 9, -20, 115, 37, -13, 6, -2 },
{ -4, 10, -23, 108, 48, -16, 8, -3 },
{ -4, 10, -24, 100, 59, -19, 9, -3 },
{ -4, 11, -24, 90, 70, -21, 10, -4 },
{ -4, 11, -23, 80, 80, -23, 11, -4 },
{ -4, 10, -21, 70, 90, -24, 11, -4 },
{ -3, 9, -19, 59, 100, -24, 10, -4 },
{ -3, 8, -16, 48, 108, -23, 10, -4 },
{ -2, 6, -13, 37, 115, -20, 9, -4 },
{ -2, 5, -10, 27, 121, -17, 7, -3 },
{ -1, 3, -6, 17, 125, -13, 5, -2 },
{ 0, 1, -3, 8, 127, -7, 3, -1 },
}, [FILTER_8TAP_SMOOTH] = {
{ 0, 0, 0, 128, 0, 0, 0, 0 },
{ -3, -1, 32, 64, 38, 1, -3, 0 },
{ -2, -2, 29, 63, 41, 2, -3, 0 },
{ -2, -2, 26, 63, 43, 4, -4, 0 },
{ -2, -3, 24, 62, 46, 5, -4, 0 },
{ -2, -3, 21, 60, 49, 7, -4, 0 },
{ -1, -4, 18, 59, 51, 9, -4, 0 },
{ -1, -4, 16, 57, 53, 12, -4, -1 },
{ -1, -4, 14, 55, 55, 14, -4, -1 },
{ -1, -4, 12, 53, 57, 16, -4, -1 },
{ 0, -4, 9, 51, 59, 18, -4, -1 },
{ 0, -4, 7, 49, 60, 21, -3, -2 },
{ 0, -4, 5, 46, 62, 24, -3, -2 },
{ 0, -4, 4, 43, 63, 26, -2, -2 },
{ 0, -3, 2, 41, 63, 29, -2, -2 },
{ 0, -3, 1, 38, 64, 32, -1, -3 },
}
};
#define FILTER_8TAP(src, x, F, stride) \
av_clip_pixel((F[0] * src[x + -3 * stride] + \
F[1] * src[x + -2 * stride] + \
@ -2155,7 +2100,7 @@ static void avg##_8tap_##type##_##sz##dir##_c(uint8_t *dst, ptrdiff_t dst_stride
int h, int mx, int my) \
{ \
avg##_8tap_1d_##dir##_c(dst, dst_stride, src, src_stride, sz, h, \
vp9_subpel_filters[type_idx][dir_m]); \
ff_vp9_subpel_filters[type_idx][dir_m]); \
}
#define filter_fn_2d(sz, type, type_idx, avg) \
@ -2164,8 +2109,8 @@ static void avg##_8tap_##type##_##sz##hv_c(uint8_t *dst, ptrdiff_t dst_stride, \
int h, int mx, int my) \
{ \
avg##_8tap_2d_hv_c(dst, dst_stride, src, src_stride, sz, h, \
vp9_subpel_filters[type_idx][mx], \
vp9_subpel_filters[type_idx][my]); \
ff_vp9_subpel_filters[type_idx][mx], \
ff_vp9_subpel_filters[type_idx][my]); \
}
#if BIT_DEPTH != 12
@ -2454,7 +2399,7 @@ static void avg##_scaled_##type##_##sz##_c(uint8_t *dst, ptrdiff_t dst_stride, \
int h, int mx, int my, int dx, int dy) \
{ \
avg##_scaled_8tap_c(dst, dst_stride, src, src_stride, sz, h, mx, my, dx, dy, \
vp9_subpel_filters[type_idx]); \
ff_vp9_subpel_filters[type_idx]); \
}
#if BIT_DEPTH != 12
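FILTER_8TAP above is a plain 8-tap FIR: the first visible terms weight src[x - 3*stride] and src[x - 2*stride], and the remaining taps continue through src[x + 4*stride]. The standalone horizontal version below applies one phase copied from ff_vp9_subpel_filters; the (sum + 64) >> 7 normalization is assumed here as the natural rounding for taps that sum to 128, since the end of the macro falls outside the hunk above.

#include <stdint.h>
#include <stdio.h>

/* half-pel phase (row 8) copied from FILTER_8TAP_REGULAR in the table above */
static const int16_t phase[8] = { -1, 6, -19, 78, 78, -19, 6, -1 };

static uint8_t clip_pixel(int v)
{
    return v < 0 ? 0 : v > 255 ? 255 : (uint8_t) v;
}

/* horizontal 8-tap filter at position x; taps cover src[x-3] .. src[x+4],
 * normalized by (sum + 64) >> 7 because the coefficients sum to 128 */
static uint8_t filter_8tap_h(const uint8_t *src, int x, const int16_t *F)
{
    int sum = 0;
    for (int i = 0; i < 8; i++)
        sum += F[i] * src[x + i - 3];
    return clip_pixel((sum + 64) >> 7);
}

int main(void)
{
    /* a step edge: the half-pel sample lands halfway between 0 and 255 */
    const uint8_t row[16] = { 0, 0, 0, 0, 0, 0, 0, 0,
                              255, 255, 255, 255, 255, 255, 255, 255 };
    printf("interpolated: %u\n", filter_8tap_h(row, 7, phase));  /* 128 */
    return 0;
}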

View file

@ -0,0 +1,202 @@
/*
* VP9 compatible video decoder
*
* Copyright (C) 2013 Ronald S. Bultje <rsbultje gmail com>
* Copyright (C) 2013 Clément Bœsch <u pkh me>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "vp9dec.h"
static av_always_inline void filter_plane_cols(VP9Context *s, int col, int ss_h, int ss_v,
uint8_t *lvl, uint8_t (*mask)[4],
uint8_t *dst, ptrdiff_t ls)
{
int y, x, bytesperpixel = s->bytesperpixel;
// filter edges between columns (e.g. block1 | block2)
for (y = 0; y < 8; y += 2 << ss_v, dst += 16 * ls, lvl += 16 << ss_v) {
uint8_t *ptr = dst, *l = lvl, *hmask1 = mask[y], *hmask2 = mask[y + 1 + ss_v];
unsigned hm1 = hmask1[0] | hmask1[1] | hmask1[2], hm13 = hmask1[3];
unsigned hm2 = hmask2[1] | hmask2[2], hm23 = hmask2[3];
unsigned hm = hm1 | hm2 | hm13 | hm23;
for (x = 1; hm & ~(x - 1); x <<= 1, ptr += 8 * bytesperpixel >> ss_h) {
if (col || x > 1) {
if (hm1 & x) {
int L = *l, H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
if (hmask1[0] & x) {
if (hmask2[0] & x) {
av_assert2(l[8 << ss_v] == L);
s->dsp.loop_filter_16[0](ptr, ls, E, I, H);
} else {
s->dsp.loop_filter_8[2][0](ptr, ls, E, I, H);
}
} else if (hm2 & x) {
L = l[8 << ss_v];
H |= (L >> 4) << 8;
E |= s->filter_lut.mblim_lut[L] << 8;
I |= s->filter_lut.lim_lut[L] << 8;
s->dsp.loop_filter_mix2[!!(hmask1[1] & x)]
[!!(hmask2[1] & x)]
[0](ptr, ls, E, I, H);
} else {
s->dsp.loop_filter_8[!!(hmask1[1] & x)]
[0](ptr, ls, E, I, H);
}
} else if (hm2 & x) {
int L = l[8 << ss_v], H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
s->dsp.loop_filter_8[!!(hmask2[1] & x)]
[0](ptr + 8 * ls, ls, E, I, H);
}
}
if (ss_h) {
if (x & 0xAA)
l += 2;
} else {
if (hm13 & x) {
int L = *l, H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
if (hm23 & x) {
L = l[8 << ss_v];
H |= (L >> 4) << 8;
E |= s->filter_lut.mblim_lut[L] << 8;
I |= s->filter_lut.lim_lut[L] << 8;
s->dsp.loop_filter_mix2[0][0][0](ptr + 4 * bytesperpixel, ls, E, I, H);
} else {
s->dsp.loop_filter_8[0][0](ptr + 4 * bytesperpixel, ls, E, I, H);
}
} else if (hm23 & x) {
int L = l[8 << ss_v], H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
s->dsp.loop_filter_8[0][0](ptr + 8 * ls + 4 * bytesperpixel, ls, E, I, H);
}
l++;
}
}
}
}
static av_always_inline void filter_plane_rows(VP9Context *s, int row, int ss_h, int ss_v,
uint8_t *lvl, uint8_t (*mask)[4],
uint8_t *dst, ptrdiff_t ls)
{
int y, x, bytesperpixel = s->bytesperpixel;
// block1
// filter edges between rows (e.g. ------)
// block2
for (y = 0; y < 8; y++, dst += 8 * ls >> ss_v) {
uint8_t *ptr = dst, *l = lvl, *vmask = mask[y];
unsigned vm = vmask[0] | vmask[1] | vmask[2], vm3 = vmask[3];
for (x = 1; vm & ~(x - 1); x <<= (2 << ss_h), ptr += 16 * bytesperpixel, l += 2 << ss_h) {
if (row || y) {
if (vm & x) {
int L = *l, H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
if (vmask[0] & x) {
if (vmask[0] & (x << (1 + ss_h))) {
av_assert2(l[1 + ss_h] == L);
s->dsp.loop_filter_16[1](ptr, ls, E, I, H);
} else {
s->dsp.loop_filter_8[2][1](ptr, ls, E, I, H);
}
} else if (vm & (x << (1 + ss_h))) {
L = l[1 + ss_h];
H |= (L >> 4) << 8;
E |= s->filter_lut.mblim_lut[L] << 8;
I |= s->filter_lut.lim_lut[L] << 8;
s->dsp.loop_filter_mix2[!!(vmask[1] & x)]
[!!(vmask[1] & (x << (1 + ss_h)))]
[1](ptr, ls, E, I, H);
} else {
s->dsp.loop_filter_8[!!(vmask[1] & x)]
[1](ptr, ls, E, I, H);
}
} else if (vm & (x << (1 + ss_h))) {
int L = l[1 + ss_h], H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
s->dsp.loop_filter_8[!!(vmask[1] & (x << (1 + ss_h)))]
[1](ptr + 8 * bytesperpixel, ls, E, I, H);
}
}
if (!ss_v) {
if (vm3 & x) {
int L = *l, H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
if (vm3 & (x << (1 + ss_h))) {
L = l[1 + ss_h];
H |= (L >> 4) << 8;
E |= s->filter_lut.mblim_lut[L] << 8;
I |= s->filter_lut.lim_lut[L] << 8;
s->dsp.loop_filter_mix2[0][0][1](ptr + ls * 4, ls, E, I, H);
} else {
s->dsp.loop_filter_8[0][1](ptr + ls * 4, ls, E, I, H);
}
} else if (vm3 & (x << (1 + ss_h))) {
int L = l[1 + ss_h], H = L >> 4;
int E = s->filter_lut.mblim_lut[L], I = s->filter_lut.lim_lut[L];
s->dsp.loop_filter_8[0][1](ptr + ls * 4 + 8 * bytesperpixel, ls, E, I, H);
}
}
}
if (ss_v) {
if (y & 1)
lvl += 16;
} else {
lvl += 8;
}
}
}
void ff_vp9_loopfilter_sb(AVCodecContext *avctx, VP9Filter *lflvl,
int row, int col, ptrdiff_t yoff, ptrdiff_t uvoff)
{
VP9Context *s = avctx->priv_data;
AVFrame *f = s->s.frames[CUR_FRAME].tf.f;
uint8_t *dst = f->data[0] + yoff;
ptrdiff_t ls_y = f->linesize[0], ls_uv = f->linesize[1];
uint8_t (*uv_masks)[8][4] = lflvl->mask[s->ss_h | s->ss_v];
int p;
/* FIXME: To what extent can we interleave the v/h loopfilter calls? E.g.
* if you think of them as acting on a 8x8 block max, we can interleave
* each v/h within the single x loop, but that only works if we work on
* 8 pixel blocks, and we won't always do that (we want at least 16px
* to use SSE2 optimizations, perhaps 32 for AVX2) */
filter_plane_cols(s, col, 0, 0, lflvl->level, lflvl->mask[0][0], dst, ls_y);
filter_plane_rows(s, row, 0, 0, lflvl->level, lflvl->mask[0][1], dst, ls_y);
for (p = 0; p < 2; p++) {
dst = f->data[1 + p] + uvoff;
filter_plane_cols(s, col, s->ss_h, s->ss_v, lflvl->level, uv_masks[0], dst, ls_uv);
filter_plane_rows(s, row, s->ss_h, s->ss_v, lflvl->level, uv_masks[1], dst, ls_uv);
}
}
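The mix2 paths in filter_plane_cols()/filter_plane_rows() appear to fold two adjacent 8-pixel edge segments into one call by packing the second edge's parameters into the upper byte (H |= (L >> 4) << 8, and likewise for E and I). A tiny standalone illustration of that packing and unpacking; the limit values here are made up, in the decoder they come from filter_lut.

#include <stdio.h>

int main(void)
{
    /* per-edge filter levels; the real decoder reads these from lvl[] */
    int L1 = 37, L2 = 52;

    /* arbitrary stand-in limits; really filter_lut.mblim_lut[L] / lim_lut[L] */
    int E1 = 87, I1 = 9;
    int E2 = 117, I2 = 9;

    /* pack both edges the way the mix2 calls above do */
    int H = (L1 >> 4) | ((L2 >> 4) << 8);
    int E = E1 | (E2 << 8);
    int I = I1 | (I2 << 8);

    /* a mix2 implementation can then peel the two edges back apart */
    printf("edge 1: E=%d I=%d H=%d\n", E & 0xff, I & 0xff, H & 0xff);
    printf("edge 2: E=%d I=%d H=%d\n", E >> 8, I >> 8, H >> 8);
    return 0;
}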

View file

@ -0,0 +1,364 @@
/*
* VP9 compatible video decoder
*
* Copyright (C) 2013 Ronald S. Bultje <rsbultje gmail com>
* Copyright (C) 2013 Clément Bœsch <u pkh me>
*
* This file is part of FFmpeg.
*
* FFmpeg is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
* License as published by the Free Software Foundation; either
* version 2.1 of the License, or (at your option) any later version.
*
* FFmpeg is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
* Lesser General Public License for more details.
*
* You should have received a copy of the GNU Lesser General Public
* License along with FFmpeg; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
*/
#include "internal.h"
#include "vp56.h"
#include "vp9.h"
#include "vp9data.h"
#include "vp9dec.h"
static av_always_inline void clamp_mv(VP56mv *dst, const VP56mv *src,
VP9TileData *td)
{
dst->x = av_clip(src->x, td->min_mv.x, td->max_mv.x);
dst->y = av_clip(src->y, td->min_mv.y, td->max_mv.y);
}
static void find_ref_mvs(VP9TileData *td,
VP56mv *pmv, int ref, int z, int idx, int sb)
{
static const int8_t mv_ref_blk_off[N_BS_SIZES][8][2] = {
[BS_64x64] = { { 3, -1 }, { -1, 3 }, { 4, -1 }, { -1, 4 },
{ -1, -1 }, { 0, -1 }, { -1, 0 }, { 6, -1 } },
[BS_64x32] = { { 0, -1 }, { -1, 0 }, { 4, -1 }, { -1, 2 },
{ -1, -1 }, { 0, -3 }, { -3, 0 }, { 2, -1 } },
[BS_32x64] = { { -1, 0 }, { 0, -1 }, { -1, 4 }, { 2, -1 },
{ -1, -1 }, { -3, 0 }, { 0, -3 }, { -1, 2 } },
[BS_32x32] = { { 1, -1 }, { -1, 1 }, { 2, -1 }, { -1, 2 },
{ -1, -1 }, { 0, -3 }, { -3, 0 }, { -3, -3 } },
[BS_32x16] = { { 0, -1 }, { -1, 0 }, { 2, -1 }, { -1, -1 },
{ -1, 1 }, { 0, -3 }, { -3, 0 }, { -3, -3 } },
[BS_16x32] = { { -1, 0 }, { 0, -1 }, { -1, 2 }, { -1, -1 },
{ 1, -1 }, { -3, 0 }, { 0, -3 }, { -3, -3 } },
[BS_16x16] = { { 0, -1 }, { -1, 0 }, { 1, -1 }, { -1, 1 },
{ -1, -1 }, { 0, -3 }, { -3, 0 }, { -3, -3 } },
[BS_16x8] = { { 0, -1 }, { -1, 0 }, { 1, -1 }, { -1, -1 },
{ 0, -2 }, { -2, 0 }, { -2, -1 }, { -1, -2 } },
[BS_8x16] = { { -1, 0 }, { 0, -1 }, { -1, 1 }, { -1, -1 },
{ -2, 0 }, { 0, -2 }, { -1, -2 }, { -2, -1 } },
[BS_8x8] = { { 0, -1 }, { -1, 0 }, { -1, -1 }, { 0, -2 },
{ -2, 0 }, { -1, -2 }, { -2, -1 }, { -2, -2 } },
[BS_8x4] = { { 0, -1 }, { -1, 0 }, { -1, -1 }, { 0, -2 },
{ -2, 0 }, { -1, -2 }, { -2, -1 }, { -2, -2 } },
[BS_4x8] = { { 0, -1 }, { -1, 0 }, { -1, -1 }, { 0, -2 },
{ -2, 0 }, { -1, -2 }, { -2, -1 }, { -2, -2 } },
[BS_4x4] = { { 0, -1 }, { -1, 0 }, { -1, -1 }, { 0, -2 },
{ -2, 0 }, { -1, -2 }, { -2, -1 }, { -2, -2 } },
};
VP9Context *s = td->s;
VP9Block *b = td->b;
int row = td->row, col = td->col, row7 = td->row7;
const int8_t (*p)[2] = mv_ref_blk_off[b->bs];
#define INVALID_MV 0x80008000U
uint32_t mem = INVALID_MV, mem_sub8x8 = INVALID_MV;
int i;
#define RETURN_DIRECT_MV(mv) \
do { \
uint32_t m = AV_RN32A(&mv); \
if (!idx) { \
AV_WN32A(pmv, m); \
return; \
} else if (mem == INVALID_MV) { \
mem = m; \
} else if (m != mem) { \
AV_WN32A(pmv, m); \
return; \
} \
} while (0)
if (sb >= 0) {
if (sb == 2 || sb == 1) {
RETURN_DIRECT_MV(b->mv[0][z]);
} else if (sb == 3) {
RETURN_DIRECT_MV(b->mv[2][z]);
RETURN_DIRECT_MV(b->mv[1][z]);
RETURN_DIRECT_MV(b->mv[0][z]);
}
#define RETURN_MV(mv) \
do { \
if (sb > 0) { \
VP56mv tmp; \
uint32_t m; \
av_assert2(idx == 1); \
av_assert2(mem != INVALID_MV); \
if (mem_sub8x8 == INVALID_MV) { \
clamp_mv(&tmp, &mv, td); \
m = AV_RN32A(&tmp); \
if (m != mem) { \
AV_WN32A(pmv, m); \
return; \
} \
mem_sub8x8 = AV_RN32A(&mv); \
} else if (mem_sub8x8 != AV_RN32A(&mv)) { \
clamp_mv(&tmp, &mv, td); \
m = AV_RN32A(&tmp); \
if (m != mem) { \
AV_WN32A(pmv, m); \
} else { \
/* BUG I'm pretty sure this isn't the intention */ \
AV_WN32A(pmv, 0); \
} \
return; \
} \
} else { \
uint32_t m = AV_RN32A(&mv); \
if (!idx) { \
clamp_mv(pmv, &mv, td); \
return; \
} else if (mem == INVALID_MV) { \
mem = m; \
} else if (m != mem) { \
clamp_mv(pmv, &mv, td); \
return; \
} \
} \
} while (0)
if (row > 0) {
VP9mvrefPair *mv = &s->s.frames[CUR_FRAME].mv[(row - 1) * s->sb_cols * 8 + col];
if (mv->ref[0] == ref)
RETURN_MV(s->above_mv_ctx[2 * col + (sb & 1)][0]);
else if (mv->ref[1] == ref)
RETURN_MV(s->above_mv_ctx[2 * col + (sb & 1)][1]);
}
if (col > td->tile_col_start) {
VP9mvrefPair *mv = &s->s.frames[CUR_FRAME].mv[row * s->sb_cols * 8 + col - 1];
if (mv->ref[0] == ref)
RETURN_MV(td->left_mv_ctx[2 * row7 + (sb >> 1)][0]);
else if (mv->ref[1] == ref)
RETURN_MV(td->left_mv_ctx[2 * row7 + (sb >> 1)][1]);
}
i = 2;
} else {
i = 0;
}
// previously coded MVs in this neighborhood, using same reference frame
for (; i < 8; i++) {
int c = p[i][0] + col, r = p[i][1] + row;
if (c >= td->tile_col_start && c < s->cols &&
r >= 0 && r < s->rows) {
VP9mvrefPair *mv = &s->s.frames[CUR_FRAME].mv[r * s->sb_cols * 8 + c];
if (mv->ref[0] == ref)
RETURN_MV(mv->mv[0]);
else if (mv->ref[1] == ref)
RETURN_MV(mv->mv[1]);
}
}
// MV at this position in previous frame, using same reference frame
if (s->s.h.use_last_frame_mvs) {
VP9mvrefPair *mv = &s->s.frames[REF_FRAME_MVPAIR].mv[row * s->sb_cols * 8 + col];
if (!s->s.frames[REF_FRAME_MVPAIR].uses_2pass)
ff_thread_await_progress(&s->s.frames[REF_FRAME_MVPAIR].tf, row >> 3, 0);
if (mv->ref[0] == ref)
RETURN_MV(mv->mv[0]);
else if (mv->ref[1] == ref)
RETURN_MV(mv->mv[1]);
}
#define RETURN_SCALE_MV(mv, scale) \
do { \
if (scale) { \
VP56mv mv_temp = { -mv.x, -mv.y }; \
RETURN_MV(mv_temp); \
} else { \
RETURN_MV(mv); \
} \
} while (0)
// previously coded MVs in this neighborhood, using different reference frame
for (i = 0; i < 8; i++) {
int c = p[i][0] + col, r = p[i][1] + row;
if (c >= td->tile_col_start && c < s->cols && r >= 0 && r < s->rows) {
VP9mvrefPair *mv = &s->s.frames[CUR_FRAME].mv[r * s->sb_cols * 8 + c];
if (mv->ref[0] != ref && mv->ref[0] >= 0)
RETURN_SCALE_MV(mv->mv[0],
s->s.h.signbias[mv->ref[0]] != s->s.h.signbias[ref]);
if (mv->ref[1] != ref && mv->ref[1] >= 0 &&
// BUG - libvpx has this condition regardless of whether
// we used the first ref MV and pre-scaling
AV_RN32A(&mv->mv[0]) != AV_RN32A(&mv->mv[1])) {
RETURN_SCALE_MV(mv->mv[1], s->s.h.signbias[mv->ref[1]] != s->s.h.signbias[ref]);
}
}
}
// MV at this position in previous frame, using different reference frame
if (s->s.h.use_last_frame_mvs) {
VP9mvrefPair *mv = &s->s.frames[REF_FRAME_MVPAIR].mv[row * s->sb_cols * 8 + col];
// no need to await_progress, because we already did that above
if (mv->ref[0] != ref && mv->ref[0] >= 0)
RETURN_SCALE_MV(mv->mv[0], s->s.h.signbias[mv->ref[0]] != s->s.h.signbias[ref]);
if (mv->ref[1] != ref && mv->ref[1] >= 0 &&
// BUG - libvpx has this condition regardless of whether
// we used the first ref MV and pre-scaling
AV_RN32A(&mv->mv[0]) != AV_RN32A(&mv->mv[1])) {
RETURN_SCALE_MV(mv->mv[1], s->s.h.signbias[mv->ref[1]] != s->s.h.signbias[ref]);
}
}
AV_ZERO32(pmv);
clamp_mv(pmv, pmv, td);
#undef INVALID_MV
#undef RETURN_MV
#undef RETURN_SCALE_MV
}
static av_always_inline int read_mv_component(VP9TileData *td, int idx, int hp)
{
VP9Context *s = td->s;
int bit, sign = vp56_rac_get_prob(td->c, s->prob.p.mv_comp[idx].sign);
int n, c = vp8_rac_get_tree(td->c, ff_vp9_mv_class_tree,
s->prob.p.mv_comp[idx].classes);
td->counts.mv_comp[idx].sign[sign]++;
td->counts.mv_comp[idx].classes[c]++;
if (c) {
int m;
for (n = 0, m = 0; m < c; m++) {
bit = vp56_rac_get_prob(td->c, s->prob.p.mv_comp[idx].bits[m]);
n |= bit << m;
td->counts.mv_comp[idx].bits[m][bit]++;
}
n <<= 3;
bit = vp8_rac_get_tree(td->c, ff_vp9_mv_fp_tree,
s->prob.p.mv_comp[idx].fp);
n |= bit << 1;
td->counts.mv_comp[idx].fp[bit]++;
if (hp) {
bit = vp56_rac_get_prob(td->c, s->prob.p.mv_comp[idx].hp);
td->counts.mv_comp[idx].hp[bit]++;
n |= bit;
} else {
n |= 1;
// bug in libvpx - we count for bw entropy purposes even if the
// bit wasn't coded
td->counts.mv_comp[idx].hp[1]++;
}
n += 8 << c;
} else {
n = vp56_rac_get_prob(td->c, s->prob.p.mv_comp[idx].class0);
td->counts.mv_comp[idx].class0[n]++;
bit = vp8_rac_get_tree(td->c, ff_vp9_mv_fp_tree,
s->prob.p.mv_comp[idx].class0_fp[n]);
td->counts.mv_comp[idx].class0_fp[n][bit]++;
n = (n << 3) | (bit << 1);
if (hp) {
bit = vp56_rac_get_prob(td->c, s->prob.p.mv_comp[idx].class0_hp);
td->counts.mv_comp[idx].class0_hp[bit]++;
n |= bit;
} else {
n |= 1;
// bug in libvpx - we count for bw entropy purposes even if the
// bit wasn't coded
td->counts.mv_comp[idx].class0_hp[1]++;
}
}
return sign ? -(n + 1) : (n + 1);
}
void ff_vp9_fill_mv(VP9TileData *td, VP56mv *mv, int mode, int sb)
{
VP9Context *s = td->s;
VP9Block *b = td->b;
if (mode == ZEROMV) {
AV_ZERO64(mv);
} else {
int hp;
// FIXME cache this value and reuse for other subblocks
find_ref_mvs(td, &mv[0], b->ref[0], 0, mode == NEARMV,
mode == NEWMV ? -1 : sb);
// FIXME maybe move this code into find_ref_mvs()
if ((mode == NEWMV || sb == -1) &&
!(hp = s->s.h.highprecisionmvs &&
abs(mv[0].x) < 64 && abs(mv[0].y) < 64)) {
if (mv[0].y & 1) {
if (mv[0].y < 0)
mv[0].y++;
else
mv[0].y--;
}
if (mv[0].x & 1) {
if (mv[0].x < 0)
mv[0].x++;
else
mv[0].x--;
}
}
if (mode == NEWMV) {
enum MVJoint j = vp8_rac_get_tree(td->c, ff_vp9_mv_joint_tree,
s->prob.p.mv_joint);
td->counts.mv_joint[j]++;
if (j >= MV_JOINT_V)
mv[0].y += read_mv_component(td, 0, hp);
if (j & 1)
mv[0].x += read_mv_component(td, 1, hp);
}
if (b->comp) {
// FIXME cache this value and reuse for other subblocks
find_ref_mvs(td, &mv[1], b->ref[1], 1, mode == NEARMV,
mode == NEWMV ? -1 : sb);
if ((mode == NEWMV || sb == -1) &&
!(hp = s->s.h.highprecisionmvs &&
abs(mv[1].x) < 64 && abs(mv[1].y) < 64)) {
if (mv[1].y & 1) {
if (mv[1].y < 0)
mv[1].y++;
else
mv[1].y--;
}
if (mv[1].x & 1) {
if (mv[1].x < 0)
mv[1].x++;
else
mv[1].x--;
}
}
if (mode == NEWMV) {
enum MVJoint j = vp8_rac_get_tree(td->c, ff_vp9_mv_joint_tree,
s->prob.p.mv_joint);
td->counts.mv_joint[j]++;
if (j >= MV_JOINT_V)
mv[1].y += read_mv_component(td, 0, hp);
if (j & 1)
mv[1].x += read_mv_component(td, 1, hp);
}
}
}
}
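read_mv_component() above assembles a motion-vector component from a sign bit, a magnitude class c, c integer bits, a 2-bit fractional part and a high-precision bit that is forced to 1 when high precision is off; for c > 0 the magnitude is ((bits << 3) | (fp << 1) | hp) + (8 << c), and the result is sign ? -(n + 1) : n + 1. The standalone function below redoes that assembly from fixed bit values in place of the range coder; the inputs are arbitrary, only the arithmetic mirrors the c > 0 path of the code above.

#include <stdio.h>

/* rebuild the magnitude exactly as read_mv_component() does for class > 0,
 * but from fixed bit values instead of range-coded ones */
static int mv_component(int sign, int c, unsigned bits, int fp, int hp_bit, int hp_allowed)
{
    int n = 0;
    for (int m = 0; m < c; m++)             /* c integer bits, LSB first */
        n |= ((bits >> m) & 1) << m;
    n <<= 3;
    n |= fp << 1;                           /* 2-bit fractional part */
    n |= hp_allowed ? hp_bit : 1;           /* hp bit forced to 1 when disabled */
    n += 8 << c;                            /* class offset */
    return sign ? -(n + 1) : n + 1;
}

int main(void)
{
    /* e.g. class 2, integer bits 0b10, fp = 3, hp coded as 0 */
    printf("mv component = %d\n", mv_component(0, 2, 0x2, 3, 0, 1));  /*  55 */
    printf("negated      = %d\n", mv_component(1, 2, 0x2, 3, 0, 1));  /* -55 */
    return 0;
}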

Some files were not shown because too many files changed in this diff.