Merge mulgrave-w:git/linux-2.6

Conflicts:

	include/linux/blkdev.h

Trivial merge to incorporate tag prototypes.
James Bottomley 2006-09-23 21:03:52 -05:00
Parent dfdc58ba35 1ab9dd0902
Commit 1aedf2ccc6
628 changed files: 33204 additions and 15641 deletions

View File

@ -2384,6 +2384,13 @@ N: Thomas Molina
E: tmolina@cablespeed.com
D: bug fixes, documentation, minor hackery
N: Paul Moore
E: paul.moore@hp.com
D: NetLabel author
S: Hewlett-Packard
S: 110 Spit Brook Road
S: Nashua, NH 03062
N: James Morris
E: jmorris@namei.org
W: http://namei.org/

View File

@ -184,6 +184,8 @@ mtrr.txt
- how to use PPro Memory Type Range Registers to increase performance.
nbd.txt
- info on a TCP implementation of a network block device.
netlabel/
- directory with information on the NetLabel subsystem.
networking/
- directory with info on various aspects of networking with Linux.
nfsroot.txt

View File

@ -0,0 +1,10 @@
00-INDEX
- this file.
cipso_ipv4.txt
- documentation on the IPv4 CIPSO protocol engine.
draft-ietf-cipso-ipsecurity-01.txt
- IETF draft of the CIPSO protocol, dated 16 July 1992.
introduction.txt
- NetLabel introduction, READ THIS FIRST.
lsm_interface.txt
- documentation on the NetLabel kernel security module API.

View File

@ -0,0 +1,48 @@
NetLabel CIPSO/IPv4 Protocol Engine
==============================================================================
Paul Moore, paul.moore@hp.com
May 17, 2006
* Overview
The NetLabel CIPSO/IPv4 protocol engine is based on the IETF Commercial IP
Security Option (CIPSO) draft from July 16, 1992. A copy of this draft can be
found in this directory; consult '00-INDEX' for the filename. While the IETF
draft never made it to an RFC standard, it has become a de facto standard for
labeled networking and is used in many trusted operating systems.
* Outbound Packet Processing
The CIPSO/IPv4 protocol engine applies the CIPSO IP option to packets by
adding the CIPSO label to the socket. This causes all packets leaving the
system through the socket to have the CIPSO IP option applied. The socket's
CIPSO label can be changed at any point in time; however, it is recommended
that it be set upon the socket's creation. The LSM can set the socket's CIPSO
label by using the NetLabel security module API; if the NetLabel "domain" is
configured to use CIPSO for packet labeling then a CIPSO IP option will be
generated and attached to the socket.
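As a rough sketch of this flow (names and prototypes such as
netlbl_secattr_init(), netlbl_sock_setattr() and the 'domain' member are
assumptions based on later kernels, not a definition of this tree's API),
an LSM might label a socket like this:

    #include <linux/slab.h>
    #include <net/sock.h>
    #include <net/netlabel.h>

    /* Hypothetical helper: attach the configured label to a new socket.
     * If the NetLabel "domain" is configured for CIPSO, NetLabel builds
     * the CIPSO IP option and attaches it to the socket. */
    static int example_label_socket(struct sock *sk)
    {
            struct netlbl_lsm_secattr secattr;
            int ret;

            netlbl_secattr_init(&secattr);
            secattr.domain = kstrdup("example_domain", GFP_KERNEL);
            if (secattr.domain == NULL)
                    return -ENOMEM;

            ret = netlbl_sock_setattr(sk, &secattr); /* prototype may differ */

            netlbl_secattr_destroy(&secattr); /* expected to free the domain */
            return ret;
    }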
* Inbound Packet Processing
The CIPSO/IPv4 protocol engine validates every CIPSO IP option it finds at the
IP layer without any special handling required by the LSM. However, in order
to decode and translate the CIPSO label on the packet, the LSM must use the
NetLabel security module API to extract the security attributes of the packet.
This is typically done at the socket layer using the 'socket_sock_rcv_skb()'
LSM hook.
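A minimal sketch of that inbound path, assuming a helper along the lines of
netlbl_skbuff_getattr() (the exact name and argument list may differ in this
tree):

    #include <net/netlabel.h>

    /* Hypothetical socket_sock_rcv_skb() helper: decode the packet's label
     * into generic NetLabel security attributes. */
    static int example_sock_rcv_skb(struct sock *sk, struct sk_buff *skb)
    {
            struct netlbl_lsm_secattr secattr;
            int ret;

            netlbl_secattr_init(&secattr);
            ret = netlbl_skbuff_getattr(skb, &secattr); /* prototype may differ */
            if (ret == 0) {
                    /* translate secattr into the LSM's own identifier and
                     * make the access decision here */
            }
            netlbl_secattr_destroy(&secattr);
            return ret;
    }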
* Label Translation
The CIPSO/IPv4 protocol engine contains a mechanism to translate CIPSO security
attributes such as sensitivity level and category to values which are
appropriate for the host. These mappings are defined as part of a CIPSO
Domain Of Interpretation (DOI) definition and are configured through the
NetLabel user space communication layer. Each DOI definition can have a
different security attribute mapping table.
* Label Translation Cache
The NetLabel system provides a framework for caching security attribute
mappings from the network labels to the corresponding LSM identifiers. The
CIPSO/IPv4 protocol engine supports this caching mechanism.

View File

@ -0,0 +1,791 @@
IETF CIPSO Working Group
16 July, 1992
COMMERCIAL IP SECURITY OPTION (CIPSO 2.2)
1. Status
This Internet Draft provides the high level specification for a Commercial
IP Security Option (CIPSO). This draft reflects the version as approved by
the CIPSO IETF Working Group. Distribution of this memo is unlimited.
This document is an Internet Draft. Internet Drafts are working documents
of the Internet Engineering Task Force (IETF), its Areas, and its Working
Groups. Note that other groups may also distribute working documents as
Internet Drafts.
Internet Drafts are draft documents valid for a maximum of six months.
Internet Drafts may be updated, replaced, or obsoleted by other documents
at any time. It is not appropriate to use Internet Drafts as reference
material or to cite them other than as a "working draft" or "work in
progress."
Please check the I-D abstract listing contained in each Internet Draft
directory to learn the current status of this or any other Internet Draft.
2. Background
Currently the Internet Protocol includes two security options. One of
these options is the DoD Basic Security Option (BSO) (Type 130) which allows
IP datagrams to be labeled with security classifications. This option
provides sixteen security classifications and a variable number of handling
restrictions. To handle additional security information, such as security
categories or compartments, another security option (Type 133) exists and
is referred to as the DoD Extended Security Option (ESO). The values for
the fixed fields within these two options are administered by the Defense
Information Systems Agency (DISA).
Computer vendors are now building commercial operating systems with
mandatory access controls and multi-level security. These systems are
no longer built specifically for a particular group in the defense or
intelligence communities. They are generally available commercial systems
for use in a variety of government and civil sector environments.
The small number of ESO format codes can not support all the possible
applications of a commercial security option. The BSO and ESO were
designed to only support the United States DoD. CIPSO has been designed
to support multiple security policies. This Internet Draft provides the
format and procedures required to support a Mandatory Access Control
security policy. Support for additional security policies shall be
defined in future RFCs.
Internet Draft, Expires 15 Jan 93 [PAGE 1]
CIPSO INTERNET DRAFT 16 July, 1992
3. CIPSO Format
Option type: 134 (Class 0, Number 6, Copy on Fragmentation)
Option length: Variable
This option permits security related information to be passed between
systems within a single Domain of Interpretation (DOI). A DOI is a
collection of systems which agree on the meaning of particular values
in the security option. An authority that has been assigned a DOI
identifier will define a mapping between appropriate CIPSO field values
and their human readable equivalent. This authority will distribute that
mapping to hosts within the authority's domain. These mappings may be
sensitive, therefore a DOI authority is not required to make these
mappings available to anyone other than the systems that are included in
the DOI.
This option MUST be copied on fragmentation. This option appears at most
once in a datagram. All multi-octet fields in the option are defined to be
transmitted in network byte order. The format of this option is as follows:
+----------+----------+------//------+-----------//---------+
| 10000110 | LLLLLLLL | DDDDDDDDDDDD | TTTTTTTTTTTTTTTTTTTT |
+----------+----------+------//------+-----------//---------+
TYPE=134 OPTION DOMAIN OF TAGS
LENGTH INTERPRETATION
Figure 1. CIPSO Format
3.1 Type
This field is 1 octet in length. Its value is 134.
3.2 Length
This field is 1 octet in length. It is the total length of the option
including the type and length fields. With the current IP header length
restriction of 40 octets the value of this field MUST not exceed 40.
3.3 Domain of Interpretation Identifier
This field is an unsigned 32 bit integer. The value 0 is reserved and MUST
not appear as the DOI identifier in any CIPSO option. Implementations
should assume that the DOI identifier field is not aligned on any particular
byte boundary.
To conserve space in the protocol, security levels and categories are
represented by numbers rather than their ASCII equivalent. This requires
a mapping table within CIPSO hosts to map these numbers to their
corresponding ASCII representations. Non-related groups of systems may
Internet Draft, Expires 15 Jan 93 [PAGE 2]
CIPSO INTERNET DRAFT 16 July, 1992
have their own unique mappings. For example, one group of systems may
use the number 5 to represent Unclassified while another group may use the
number 1 to represent that same security level. The DOI identifier is used
to identify which mapping was used for the values within the option.
3.4 Tag Types
A common format for passing security related information is necessary
for interoperability. CIPSO uses sets of "tags" to contain the security
information relevant to the data in the IP packet. Each tag begins with
a tag type identifier followed by the length of the tag and ends with the
actual security information to be passed. All multi-octet fields in a tag
are defined to be transmitted in network byte order. Like the DOI
identifier field in the CIPSO header, implementations should assume that
all tags, as well as fields within a tag, are not aligned on any particular
octet boundary. The tag types defined in this document contain alignment
bytes to assist alignment of some information, however alignment can not
be guaranteed if CIPSO is not the first IP option.
CIPSO tag types 0 through 127 are reserved for defining standard tag
formats. Their definitions will be published in RFCs. Tag types whose
identifiers are greater than 127 are defined by the DOI authority and may
only be meaningful in certain Domains of Interpretation. For these tag
types, implementations will require the DOI identifier as well as the tag
number to determine the security policy and the format associated with the
tag. Use of tag types above 127 are restricted to closed networks where
interoperability with other networks will not be an issue. Implementations
that support a tag type greater than 127 MUST support at least one DOI that
requires only tag types 1 to 127.
Tag type 0 is reserved. Tag types 1, 2, and 5 are defined in this
Internet Draft. Types 3 and 4 are reserved for work in progress.
The standard format for all current and future CIPSO tags is shown below:
+----------+----------+--------//--------+
| TTTTTTTT | LLLLLLLL | IIIIIIIIIIIIIIII |
+----------+----------+--------//--------+
TAG TAG TAG
TYPE LENGTH INFORMATION
Figure 2: Standard Tag Format
In the three tag types described in this document, the length and count
restrictions are based on the current IP limitation of 40 octets for all
IP options. If the IP header is later expanded, then the length and count
restrictions specified in this document may increase to use the full area
provided for IP options.
3.4.1 Tag Type Classes
Tag classes consist of tag types that have common processing requirements
and support the same security policy. The three tags defined in this
Internet Draft belong to the Mandatory Access Control (MAC) Sensitivity
Internet Draft, Expires 15 Jan 93 [PAGE 3]
CIPSO INTERNET DRAFT 16 July, 1992
class and support the MAC Sensitivity security policy.
3.4.2 Tag Type 1
This is referred to as the "bit-mapped" tag type. Tag type 1 is included
in the MAC Sensitivity tag type class. The format of this tag type is as
follows:
+----------+----------+----------+----------+--------//---------+
| 00000001 | LLLLLLLL | 00000000 | LLLLLLLL | CCCCCCCCCCCCCCCCC |
+----------+----------+----------+----------+--------//---------+
TAG TAG ALIGNMENT SENSITIVITY BIT MAP OF
TYPE LENGTH OCTET LEVEL CATEGORIES
Figure 3. Tag Type 1 Format
3.4.2.1 Tag Type
This field is 1 octet in length and has a value of 1.
3.4.2.2 Tag Length
This field is 1 octet in length. It is the total length of the tag type
including the type and length fields. With the current IP header length
restriction of 40 bytes the value within this field is between 4 and 34.
3.4.2.3 Alignment Octet
This field is 1 octet in length and always has the value of 0. Its purpose
is to align the category bitmap field on an even octet boundary. This will
speed many implementations including router implementations.
3.4.2.4 Sensitivity Level
This field is 1 octet in length. Its value is from 0 to 255. The values
are ordered with 0 being the minimum value and 255 representing the maximum
value.
3.4.2.5 Bit Map of Categories
The length of this field is variable and ranges from 0 to 30 octets. This
provides representation of categories 0 to 239. The ordering of the bits
is left to right or MSB to LSB. For example category 0 is represented by
the most significant bit of the first byte and category 15 is represented
by the least significant bit of the second byte. Figure 4 graphically
shows this ordering. Bit N is binary 1 if category N is part of the label
for the datagram, and bit N is binary 0 if category N is not part of the
label. Except for the optimized tag 1 format described in the next section,
Internet Draft, Expires 15 Jan 93 [PAGE 4]
CIPSO INTERNET DRAFT 16 July, 1992
minimal encoding SHOULD be used resulting in no trailing zero octets in the
category bitmap.
octet 0 octet 1 octet 2 octet 3 octet 4 octet 5
XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX XXXXXXXX . . .
bit 01234567 89111111 11112222 22222233 33333333 44444444
number 012345 67890123 45678901 23456789 01234567
Figure 4. Ordering of Bits in Tag 1 Bit Map
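(Editorial sketch, not text from the draft: in C, this MSB-first ordering
means category N lives in octet N / 8 of the bitmap at mask 0x80 >> (N % 8).)

    /* Set category N in a tag type 1 bitmap: category 0 is the top bit of
     * octet 0, category 15 is the bottom bit of octet 1. */
    static void cipso_bitmap_set(unsigned char *bitmap, unsigned int category)
    {
            bitmap[category / 8] |= 0x80 >> (category % 8);
    }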
3.4.2.6 Optimized Tag 1 Format
Routers work most efficiently when processing fixed length fields. To
support these routers there is an optimized form of tag type 1. The format
does not change. The only change is to the category bitmap which is set to
a constant length of 10 octets. Trailing octets required to fill out the 10
octets are zero filled. Ten octets, allowing for 80 categories, was chosen
because it makes the total length of the CIPSO option 20 octets. If CIPSO
is the only option then the option will be full word aligned and additional
filler octets will not be required.
3.4.3 Tag Type 2
This is referred to as the "enumerated" tag type. It is used to describe
large but sparsely populated sets of categories. Tag type 2 is in the MAC
Sensitivity tag type class. The format of this tag type is as follows:
+----------+----------+----------+----------+-------------//-------------+
| 00000010 | LLLLLLLL | 00000000 | LLLLLLLL | CCCCCCCCCCCCCCCCCCCCCCCCCC |
+----------+----------+----------+----------+-------------//-------------+
TAG TAG ALIGNMENT SENSITIVITY ENUMERATED
TYPE LENGTH OCTET LEVEL CATEGORIES
Figure 5. Tag Type 2 Format
3.4.3.1 Tag Type
This field is one octet in length and has a value of 2.
3.4.3.2 Tag Length
This field is 1 octet in length. It is the total length of the tag type
including the type and length fields. With the current IP header length
restriction of 40 bytes the value within this field is between 4 and 34.
3.4.3.3 Alignment Octet
This field is 1 octet in length and always has the value of 0. Its purpose
is to align the category field on an even octet boundary. This will
Internet Draft, Expires 15 Jan 93 [PAGE 5]
CIPSO INTERNET DRAFT 16 July, 1992
speed many implementations including router implementations.
3.4.3.4 Sensitivity Level
This field is 1 octet in length. Its value is from 0 to 255. The values
are ordered with 0 being the minimum value and 255 representing the
maximum value.
3.4.3.5 Enumerated Categories
In this tag, categories are represented by their actual value rather than
by their position within a bit field. The length of each category is 2
octets. Up to 15 categories may be represented by this tag. Valid values
for categories are 0 to 65534. Category 65535 is not a valid category
value. The categories MUST be listed in ascending order within the tag.
3.4.4 Tag Type 5
This is referred to as the "range" tag type. It is used to represent
labels where all categories in a range, or set of ranges, are included
in the sensitivity label. Tag type 5 is in the MAC Sensitivity tag type
class. The format of this tag type is as follows:
+----------+----------+----------+----------+------------//-------------+
| 00000101 | LLLLLLLL | 00000000 | LLLLLLLL | Top/Bottom | Top/Bottom |
+----------+----------+----------+----------+------------//-------------+
TAG TAG ALIGNMENT SENSITIVITY CATEGORY RANGES
TYPE LENGTH OCTET LEVEL
Figure 6. Tag Type 5 Format
3.4.4.1 Tag Type
This field is one octet in length and has a value of 5.
3.4.4.2 Tag Length
This field is 1 octet in length. It is the total length of the tag type
including the type and length fields. With the current IP header length
restriction of 40 bytes the value within this field is between 4 and 34.
3.4.4.3 Alignment Octet
This field is 1 octet in length and always has the value of 0. Its purpose
is to align the category range field on an even octet boundary. This will
speed many implementations including router implementations.
Internet Draft, Expires 15 Jan 93 [PAGE 6]
CIPSO INTERNET DRAFT 16 July, 1992
3.4.4.4 Sensitivity Level
This field is 1 octet in length. Its value is from 0 to 255. The values
are ordered with 0 being the minimum value and 255 representing the maximum
value.
3.4.4.5 Category Ranges
A category range is a 4 octet field comprised of the 2 octet index of the
highest numbered category followed by the 2 octet index of the lowest
numbered category. These range endpoints are inclusive within the range of
categories. All categories within a range are included in the sensitivity
label. This tag may contain a maximum of 7 category pairs. The bottom
category endpoint for the last pair in the tag MAY be omitted and SHOULD be
assumed to be 0. The ranges MUST be non-overlapping and be listed in
descending order. Valid values for categories are 0 to 65534. Category
65535 is not a valid category value.
3.4.5 Minimum Requirements
A CIPSO implementation MUST be capable of generating at least tag type 1 in
the non-optimized form. In addition, a CIPSO implementation MUST be able
to receive any valid tag type 1 even those using the optimized tag type 1
format.
4. Configuration Parameters
The configuration parameters defined below are required for all CIPSO hosts,
gateways, and routers that support multiple sensitivity labels. A CIPSO
host is defined to be the origination or destination system for an IP
datagram. A CIPSO gateway provides IP routing services between two or more
IP networks and may be required to perform label translations between
networks. A CIPSO gateway may be an enhanced CIPSO host or it may just
provide gateway services with no end system CIPSO capabilities. A CIPSO
router is a dedicated IP router that routes IP datagrams between two or more
IP networks.
An implementation of CIPSO on a host MUST have the capability to reject a
datagram for reasons that the information contained can not be adequately
protected by the receiving host or if acceptance may result in violation of
the host or network security policy. In addition, a CIPSO gateway or router
MUST be able to reject datagrams going to networks that can not provide
adequate protection or may violate the network's security policy. To
provide this capability the following minimal set of configuration
parameters are required for CIPSO implementations:
HOST_LABEL_MAX - This parameter contains the maximum sensitivity label that
a CIPSO host is authorized to handle. All datagrams that have a label
greater than this maximum MUST be rejected by the CIPSO host. This
parameter does not apply to CIPSO gateways or routers. This parameter need
not be defined explicitly as it can be implicitly derived from the
PORT_LABEL_MAX parameters for the associated interfaces.
Internet Draft, Expires 15 Jan 93 [PAGE 7]
CIPSO INTERNET DRAFT 16 July, 1992
HOST_LABEL_MIN - This parameter contains the minimum sensitivity label that
a CIPSO host is authorized to handle. All datagrams that have a label less
than this minimum MUST be rejected by the CIPSO host. This parameter does
not apply to CIPSO gateways or routers. This parameter need not be defined
explicitly as it can be implicitly derived from the PORT_LABEL_MIN
parameters for the associated interfaces.
PORT_LABEL_MAX - This parameter contains the maximum sensitivity label for
all datagrams that may exit a particular network interface port. All
outgoing datagrams that have a label greater than this maximum MUST be
rejected by the CIPSO system. The label within this parameter MUST be
less than or equal to the label within the HOST_LABEL_MAX parameter. This
parameter does not apply to CIPSO hosts that support only one network port.
PORT_LABEL_MIN - This parameter contains the minimum sensitivity label for
all datagrams that may exit a particular network interface port. All
outgoing datagrams that have a label less than this minimum MUST be
rejected by the CIPSO system. The label within this parameter MUST be
greater than or equal to the label within the HOST_LABEL_MIN parameter.
This parameter does not apply to CIPSO hosts that support only one network
port.
PORT_DOI - This parameter is used to assign a DOI identifier value to a
particular network interface port. All CIPSO labels within datagrams
going out this port MUST use the specified DOI identifier. All CIPSO
hosts and gateways MUST support either this parameter, the NET_DOI
parameter, or the HOST_DOI parameter.
NET_DOI - This parameter is used to assign a DOI identifier value to a
particular IP network address. All CIPSO labels within datagrams destined
for the particular IP network MUST use the specified DOI identifier. All
CIPSO hosts and gateways MUST support either this parameter, the PORT_DOI
parameter, or the HOST_DOI parameter.
HOST_DOI - This parameter is used to assign a DOI identifier value to a
particular IP host address. All CIPSO labels within datagrams destined for
the particular IP host will use the specified DOI identifier. All CIPSO
hosts and gateways MUST support either this parameter, the PORT_DOI
parameter, or the NET_DOI parameter.
This list represents the minimal set of configuration parameters required
to be compliant. Implementors are encouraged to add to this list to
provide enhanced functionality and control. For example, many security
policies may require both incoming and outgoing datagrams be checked against
the port and host label ranges.
4.1 Port Range Parameters
The labels represented by the PORT_LABEL_MAX and PORT_LABEL_MIN parameters
MAY be in CIPSO or local format. Some CIPSO systems, such as routers, may
want to have the range parameters expressed in CIPSO format so that incoming
labels do not have to be converted to a local format before being compared
against the range. If multiple DOIs are supported by one of these CIPSO
Internet Draft, Expires 15 Jan 93 [PAGE 8]
CIPSO INTERNET DRAFT 16 July, 1992
systems then multiple port range parameters would be needed, one set for
each DOI supported on a particular port.
The port range will usually represent the total set of labels that may
exist on the logical network accessed through the corresponding network
interface. It may, however, represent a subset of these labels that are
allowed to enter the CIPSO system.
4.2 Single Label CIPSO Hosts
CIPSO implementations that support only one label are not required to
support the parameters described above. These limited implementations are
only required to support a NET_LABEL parameter. This parameter contains
the CIPSO label that may be inserted in datagrams that exit the host. In
addition, the host MUST reject any incoming datagram that has a label which
is not equivalent to the NET_LABEL parameter.
5. Handling Procedures
This section describes the processing requirements for incoming and
outgoing IP datagrams. Just providing the correct CIPSO label format
is not enough. Assumptions will be made by one system on how a
receiving system will handle the CIPSO label. Wrong assumptions may
lead to non-interoperability or even a security incident. The
requirements described below represent the minimal set needed for
interoperability and that provide users some level of confidence.
Many other requirements could be added to increase user confidence,
however at the risk of restricting creativity and limiting vendor
participation.
5.1 Input Procedures
All datagrams received through a network port MUST have a security label
associated with them, either contained in the datagram or assigned to the
receiving port. Without this label the host, gateway, or router will not
have the information it needs to make security decisions. This security
label will be obtained from the CIPSO if the option is present in the
datagram. See section 4.1.2 for handling procedures for unlabeled
datagrams. This label will be compared against the PORT (if appropriate)
and HOST configuration parameters defined in section 3.
If any field within the CIPSO option, such as the DOI identifier, is not
recognized the IP datagram is discarded and an ICMP "parameter problem"
(type 12) is generated and returned. The ICMP code field is set to "bad
parameter" (code 0) and the pointer is set to the start of the CIPSO field
that is unrecognized.
If the contents of the CIPSO are valid but the security label is
outside of the configured host or port label range, the datagram is
discarded and an ICMP "destination unreachable" (type 3) is generated
and returned. The code field of the ICMP is set to "communication with
destination network administratively prohibited" (code 9) or to
Internet Draft, Expires 15 Jan 93 [PAGE 9]
CIPSO INTERNET DRAFT 16 July, 1992
"communication with destination host administratively prohibited"
(code 10). The value of the code field used is dependent upon whether
the originator of the ICMP message is acting as a CIPSO host or a CIPSO
gateway. The recipient of the ICMP message MUST be able to handle either
value. The same procedure is performed if a CIPSO can not be added to an
IP packet because it is too large to fit in the IP options area.
If the error is triggered by receipt of an ICMP message, the message
is discarded and no response is permitted (consistent with general ICMP
processing rules).
5.1.1 Unrecognized tag types
The default condition for any CIPSO implementation is that an
unrecognized tag type MUST be treated as a "parameter problem" and
handled as described in section 4.1. A CIPSO implementation MAY allow
the system administrator to identify tag types that may safely be
ignored. This capability is an allowable enhancement, not a
requirement.
5.1.2 Unlabeled Packets
A network port may be configured to not require a CIPSO label for all
incoming datagrams. For this configuration a CIPSO label must be
assigned to that network port and associated with all unlabeled IP
datagrams. This capability might be used for single level networks or
networks that have CIPSO and non-CIPSO hosts and the non-CIPSO hosts
all operate at the same label.
If a CIPSO option is required and none is found, the datagram is
discarded and an ICMP "parameter problem" (type 12) is generated and
returned to the originator of the datagram. The code field of the ICMP
is set to "option missing" (code 1) and the ICMP pointer is set to 134
(the value of the option type for the missing CIPSO option).
5.2 Output Procedures
A CIPSO option MUST appear only once in a datagram. Only one tag type
from the MAC Sensitivity class MAY be included in a CIPSO option. Given
the current set of defined tag types, this means that CIPSO labels at
first will contain only one tag.
All datagrams leaving a CIPSO system MUST meet the following condition:
PORT_LABEL_MIN <= CIPSO label <= PORT_LABEL_MAX
If this condition is not satisfied the datagram MUST be discarded.
If the CIPSO system only supports one port, the HOST_LABEL_MIN and the
HOST_LABEL_MAX parameters MAY be substituted for the PORT parameters in
the above condition.
The DOI identifier to be used for all outgoing datagrams is configured by
Internet Draft, Expires 15 Jan 93 [PAGE 10]
CIPSO INTERNET DRAFT 16 July, 1992
the administrator. If port level DOI identifier assignment is used, then
the PORT_DOI configuration parameter MUST contain the DOI identifier to
use. If network level DOI assignment is used, then the NET_DOI parameter
MUST contain the DOI identifier to use. And if host level DOI assignment
is employed, then the HOST_DOI parameter MUST contain the DOI identifier
to use. A CIPSO implementation need only support one level of DOI
assignment.
5.3 DOI Processing Requirements
A CIPSO implementation MUST support at least one DOI and SHOULD support
multiple DOIs. System and network administrators are cautioned to
ensure that at least one DOI is common within an IP network to allow for
broadcasting of IP datagrams.
CIPSO gateways MUST be capable of translating a CIPSO option from one
DOI to another when forwarding datagrams between networks. For
efficiency purposes this capability is only a desired feature for CIPSO
routers.
5.4 Label of ICMP Messages
The CIPSO label to be used on all outgoing ICMP messages MUST be equivalent
to the label of the datagram that caused the ICMP message. If the ICMP was
generated due to a problem associated with the original CIPSO label then the
following responses are allowed:
a. Use the CIPSO label of the original IP datagram
b. Drop the original datagram with no return message generated
In most cases these options will have the same effect. If you can not
interpret the label or if it is outside the label range of your host or
interface then an ICMP message with the same label will probably not be
able to exit the system.
6. Assignment of DOI Identifier Numbers
Requests for assignment of a DOI identifier number should be addressed to
the Internet Assigned Numbers Authority (IANA).
7. Acknowledgements
Much of the material in this RFC is based on (and copied from) work
done by Gary Winiger of Sun Microsystems and published as Commercial
IP Security Option at the INTEROP 89, Commercial IPSO Workshop.
8. Author's Address
To submit mail for distribution to members of the IETF CIPSO Working
Group, send mail to: cipso@wdl1.wdl.loral.com.
Internet Draft, Expires 15 Jan 93 [PAGE 11]
CIPSO INTERNET DRAFT 16 July, 1992
To be added to or deleted from this distribution, send mail to:
cipso-request@wdl1.wdl.loral.com.
9. References
RFC 1038, "Draft Revised IP Security Option", M. St. Johns, IETF, January
1988.
RFC 1108, "U.S. Department of Defense Security Options
for the Internet Protocol", Stephen Kent, IAB, 1 March, 1991.
Internet Draft, Expires 15 Jan 93 [PAGE 12]

View File

@ -0,0 +1,46 @@
NetLabel Introduction
==============================================================================
Paul Moore, paul.moore@hp.com
August 2, 2006
* Overview
NetLabel is a mechanism which can be used by kernel security modules to attach
security attributes to outgoing network packets generated from user space
applications and read security attributes from incoming network packets. It
is composed of three main components: the protocol engines, the communication
layer, and the kernel security module API.
* Protocol Engines
The protocol engines are responsible for both applying and retrieving the
network packet's security attributes. If any translation between the network
security attributes and those on the host is required, then the protocol
engine will handle those tasks as well. Other kernel subsystems should
refrain from calling the protocol engines directly; instead, they should use
the NetLabel kernel security module API described below.
Detailed information about each NetLabel protocol engine can be found in this
directory; consult '00-INDEX' for filenames.
* Communication Layer
The communication layer exists to allow NetLabel configuration and monitoring
from user space. The NetLabel communication layer uses a message based
protocol built on top of the Generic NETLINK transport mechanism. The exact
formatting of these NetLabel messages as well as the Generic NETLINK family
names can be found in the 'net/netlabel/' directory as comments in the
header files as well as in 'include/net/netlabel.h'.
* Security Module API
The purpose of the NetLabel security module API is to provide a protocol
independent interface to the underlying NetLabel protocol engines. In addition
to protocol independence, the security module API is designed to be completely
LSM independent, which should allow multiple LSMs to leverage the same code
base.
Detailed information about the NetLabel security module API can be found in the
'include/net/netlabel.h' header file as well as the 'lsm_interface.txt' file
found in this directory.

View File

@ -0,0 +1,47 @@
NetLabel Linux Security Module Interface
==============================================================================
Paul Moore, paul.moore@hp.com
May 17, 2006
* Overview
NetLabel is a mechanism which can set and retrieve security attributes from
network packets. It is intended to be used by LSM developers who want to make
use of a common code base for several different packet labeling protocols.
The NetLabel security module API is defined in 'include/net/netlabel.h' but a
brief overview is given below.
* NetLabel Security Attributes
Since NetLabel supports multiple different packet labeling protocols and LSMs,
it uses the concept of security attributes to refer to the packet's security
labels. The NetLabel security attributes are defined by the
'netlbl_lsm_secattr' structure in the NetLabel header file. Internally the
NetLabel subsystem converts the security attributes to and from the correct
low-level packet label depending on the NetLabel build time and run time
configuration. It is up to the LSM developer to translate the NetLabel
security attributes into whatever security identifiers are in use for their
particular LSM.
* NetLabel LSM Protocol Operations
These are the functions which allow the LSM developer to manipulate the labels
on outgoing packets as well as read the labels on incoming packets. Functions
exist to operate both on sockets and on sk_buffs directly. These high
level functions are translated into low level protocol operations based on how
the administrator has configured the NetLabel subsystem.
* NetLabel Label Mapping Cache Operations
Depending on the exact configuration, translation between the network packet
label and the internal LSM security identifier can be time consuming. The
NetLabel label mapping cache is a caching mechanism which can be used to
sidestep much of this overhead once a mapping has been established. Once the
LSM has received a packet, used NetLabel to decode its security attributes,
and translated the security attributes into an LSM internal identifier, the LSM
can use the NetLabel caching functions to associate the LSM internal
identifier with the network packet's label. This means that in the future
when an incoming packet matches a cached value, not only are the internal
NetLabel translation mechanisms bypassed, but the LSM translation mechanisms are
bypassed as well, which should result in a significant reduction in overhead.
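A hedged sketch of that hand-off, assuming a cache-insertion helper along the
lines of netlbl_cache_add() (name and prototype are assumptions; see
'include/net/netlabel.h' for the real interface):

    /* Hypothetical: called after the LSM has decoded the packet and stored
     * its internal identifier in the secattr's cache entry; a failure only
     * means the slow path runs again for the next matching packet. */
    static void example_cache_label(struct sk_buff *skb,
                                    struct netlbl_lsm_secattr *secattr)
    {
            if (netlbl_cache_add(skb, secattr) != 0)
                    pr_debug("NetLabel cache insertion failed\n");
    }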

View File

@ -375,6 +375,41 @@ tcp_slow_start_after_idle - BOOLEAN
be timed out after an idle period.
Default: 1
CIPSOv4 Variables:
cipso_cache_enable - BOOLEAN
If set, enable additions to and lookups from the CIPSO label mapping
cache. If unset, additions are ignored and lookups always result in a
miss. However, regardless of the setting the cache is still
invalidated when required, which means you can safely toggle this on and
off and the cache will always be "safe".
Default: 1
cipso_cache_bucket_size - INTEGER
The CIPSO label cache consists of a fixed size hash table with each
hash bucket containing a number of cache entries. This variable limits
the number of entries in each hash bucket; the larger the value, the
more CIPSO label mappings can be cached. When the number of
entries in a given hash bucket reaches this limit, adding new entries
causes the oldest entry in the bucket to be removed to make room.
Default: 10
cipso_rbm_optfmt - BOOLEAN
Enable the "Optimized Tag 1 Format" as defined in section 3.4.2.6 of
the CIPSO draft specification (see Documentation/netlabel for details).
This means that when set, the CIPSO tag will be padded with empty
categories in order to make the packet data 32-bit aligned.
Default: 0
cipso_rbm_structvalid - BOOLEAN
If set, do a very strict check of the CIPSO option when
ip_options_compile() is called. If unset, relax the checks done during
ip_options_compile(). Either way is "safe" as errors are caught elsewhere
in the CIPSO processing code, but setting this to 0 (False) should
result in less work (i.e. it should be faster) but could cause problems
with other implementations that require strict checking.
Default: 0
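As an example, the cache toggle above can be flipped at run time from user
space; a minimal sketch, assuming these variables appear under
/proc/sys/net/ipv4/:

    #include <stdio.h>

    int main(void)
    {
            FILE *f = fopen("/proc/sys/net/ipv4/cipso_cache_enable", "w");

            if (f == NULL) {
                    perror("cipso_cache_enable");
                    return 1;
            }
            fputs("0\n", f);        /* disable cache additions and lookups */
            fclose(f);
            return 0;
    }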
IP Variables:
ip_local_port_range - 2 INTEGERS
@ -730,6 +765,9 @@ conf/all/forwarding - BOOLEAN
This is referred to as global forwarding.
proxy_ndp - BOOLEAN
Do proxy ndp.
conf/interface/*:
Change special settings per interface.

View File

@ -0,0 +1,14 @@
flowi structure:
The secid member in the flow structure is used in LSMs (e.g. SELinux) to indicate
the label of the flow. This label is currently used in selecting
matching labeled xfrm(s).
If this is an outbound flow, the label is derived from the socket, if any, or
the incoming packet this flow is being generated as a response to (e.g. tcp
resets, timewait ack, etc.). It is also conceivable that the label could be
derived from other sources such as process context, device, etc., in special
cases, as may be appropriate.
If this is an inbound flow, the label is derived from the IPSec security
associations, if any, used by the packet.

View File

@ -758,6 +758,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
position_fix - Fix DMA pointer (0 = auto, 1 = none, 2 = POSBUF, 3 = FIFO size)
single_cmd - Use single immediate commands to communicate with
codecs (for debugging only)
disable_msi - Disable Message Signaled Interrupt (MSI)
This module supports one card and autoprobe.
@ -778,11 +779,16 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
6stack-digout 6-jack with a SPDIF out
w810 3-jack
z71v 3-jack (HP shared SPDIF)
asus 3-jack
asus 3-jack (ASUS Mobo)
asus-w1v ASUS W1V
asus-dig ASUS with SPDIF out
asus-dig2 ASUS with SPDIF out (using GPIO2)
uniwill 3-jack
F1734 2-jack
lg LG laptop (m1 express dual)
lg-lw LG LW20 laptop
lg-lw LG LW20/LW25 laptop
tcl TCL S700
clevo Clevo laptops (m520G, m665n)
test for testing/debugging purpose, almost all controls can be
adjusted. Appearing only when compiled with
$CONFIG_SND_DEBUG=y
@ -790,6 +796,7 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
ALC260
hp HP machines
hp-3013 HP machines (3013-variant)
fujitsu Fujitsu S7020
acer Acer TravelMate
basic fixed pin assignment (old default model)
@ -797,24 +804,32 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
ALC262
fujitsu Fujitsu Laptop
hp-bpc HP xw4400/6400/8400/9400 laptops
benq Benq ED8
basic fixed pin assignment w/o SPDIF
auto auto-config reading BIOS (default)
ALC882/885
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
arima Arima W820Di1
auto auto-config reading BIOS (default)
ALC883/888
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack digital with SPDIF I/O
6stack-dig-demo 6-stack digital for Intel demo board
3stack-6ch 3-jack 6-channel
3stack-6ch-dig 3-jack 6-channel with SPDIF I/O
6stack-dig-demo 6-jack digital for Intel demo board
acer Acer laptops (Travelmate 3012WTMi, Aspire 5600, etc)
auto auto-config reading BIOS (default)
ALC861/660
3stack 3-jack
3stack-dig 3-jack with SPDIF I/O
6stack-dig 6-jack with SPDIF I/O
3stack-660 3-jack (for ALC660)
uniwill-m31 Uniwill M31 laptop
auto auto-config reading BIOS (default)
CMI9880
@ -843,10 +858,21 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
3stack-dig ditto with SPDIF
laptop 3-jack with hp-jack automute
laptop-dig ditto with SPDIF
auto auto-confgi reading BIOS (default)
auto auto-config reading BIOS (default)
STAC7661(?)
STAC9200/9205/9220/9221/9254
ref Reference board
3stack D945 3stack
5stack D945 5stack + SPDIF
STAC9227/9228/9229/927x
ref Reference board
3stack D965 3stack
5stack D965 5stack + SPDIF
STAC9872
vaio Setup for VAIO FE550G/SZ110
vaio-ar Setup for VAIO AR
If the default configuration doesn't work and one of the above
matches with your device, report it together with the PCI
@ -1213,6 +1239,14 @@ Prior to version 0.9.0rc4 options had a 'snd_' prefix. This was removed.
Module supports only 1 card. This module has no enable option.
Module snd-mts64
----------------
Module for Ego Systems (ESI) Miditerminal 4140
This module supports multiple devices.
Requires parport (CONFIG_PARPORT).
Module snd-nm256
----------------

View File

@ -1054,9 +1054,8 @@
<para>
For a device which allows hotplugging, you can use
<function>snd_card_free_in_thread</function>. This one will
postpone the destruction and wait in a kernel-thread until all
devices are closed.
<function>snd_card_free_when_closed</function>. This one will
postpone the destruction until all devices are closed.
</para>
</section>
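A minimal sketch of that hotplug teardown, assuming the usual pairing with
snd_card_disconnect() (the pairing is an assumption drawn from typical
drivers, not something this section mandates):

    /* Disconnect handler: cut off further access immediately, then let the
     * card be freed once the last device file is closed. */
    static void example_disconnect(struct snd_card *card)
    {
            snd_card_disconnect(card);
            snd_card_free_when_closed(card);
    }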

View File

@ -1058,8 +1058,8 @@ core99_reset_cpu(struct device_node *node, long param, long value)
if (np == NULL)
return -ENODEV;
for (np = np->child; np != NULL; np = np->sibling) {
u32 *num = get_property(np, "reg", NULL);
u32 *rst = get_property(np, "soft-reset", NULL);
const u32 *num = get_property(np, "reg", NULL);
const u32 *rst = get_property(np, "soft-reset", NULL);
if (num == NULL || rst == NULL)
continue;
if (param == *num) {

View File

@ -702,7 +702,7 @@ static void __init smp_core99_setup(int ncpus)
/* GPIO based HW sync on ppc32 Core99 */
if (pmac_tb_freeze == NULL && !machine_is_compatible("MacRISC4")) {
struct device_node *cpu;
u32 *tbprop = NULL;
const u32 *tbprop = NULL;
core99_tb_gpio = KL_GPIO_TB_ENABLE; /* default value */
cpu = of_find_node_by_type(NULL, "cpu");

View File

@ -2801,6 +2801,18 @@ long blk_congestion_wait(int rw, long timeout)
EXPORT_SYMBOL(blk_congestion_wait);
/**
* blk_congestion_end - wake up sleepers on a congestion queue
* @rw: READ or WRITE
*/
void blk_congestion_end(int rw)
{
wait_queue_head_t *wqh = &congestion_wqh[rw];
if (waitqueue_active(wqh))
wake_up(wqh);
}
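/*
 * Hedged usage sketch (illustrative, not part of this change): a writer
 * throttled with the existing blk_congestion_wait() is woken early by
 * blk_congestion_end() when write congestion clears.
 */
static void example_throttle_writer(void)
{
	/* nap on the WRITE congestion queue for up to ~50ms */
	blk_congestion_wait(WRITE, HZ / 20);
}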
/*
* Has to be called with the request spinlock acquired
*/

View File

@ -92,13 +92,17 @@ static int hmac_init(struct hash_desc *pdesc)
struct hmac_ctx *ctx = align_ptr(ipad + bs * 2 + ds, sizeof(void *));
struct hash_desc desc;
struct scatterlist tmp;
int err;
desc.tfm = ctx->child;
desc.flags = pdesc->flags & CRYPTO_TFM_REQ_MAY_SLEEP;
sg_set_buf(&tmp, ipad, bs);
return unlikely(crypto_hash_init(&desc)) ?:
crypto_hash_update(&desc, &tmp, 1);
err = crypto_hash_init(&desc);
if (unlikely(err))
return err;
return crypto_hash_update(&desc, &tmp, bs);
}
static int hmac_update(struct hash_desc *pdesc,
@ -123,13 +127,17 @@ static int hmac_final(struct hash_desc *pdesc, u8 *out)
struct hmac_ctx *ctx = align_ptr(digest + ds, sizeof(void *));
struct hash_desc desc;
struct scatterlist tmp;
int err;
desc.tfm = ctx->child;
desc.flags = pdesc->flags & CRYPTO_TFM_REQ_MAY_SLEEP;
sg_set_buf(&tmp, opad, bs + ds);
return unlikely(crypto_hash_final(&desc, digest)) ?:
crypto_hash_digest(&desc, &tmp, bs + ds, out);
err = crypto_hash_final(&desc, digest);
if (unlikely(err))
return err;
return crypto_hash_digest(&desc, &tmp, bs + ds, out);
}
static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg,
@ -145,6 +153,7 @@ static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg,
struct hash_desc desc;
struct scatterlist sg1[2];
struct scatterlist sg2[1];
int err;
desc.tfm = ctx->child;
desc.flags = pdesc->flags & CRYPTO_TFM_REQ_MAY_SLEEP;
@ -154,8 +163,11 @@ static int hmac_digest(struct hash_desc *pdesc, struct scatterlist *sg,
sg1[1].length = 0;
sg_set_buf(sg2, opad, bs + ds);
return unlikely(crypto_hash_digest(&desc, sg1, nbytes + bs, digest)) ?:
crypto_hash_digest(&desc, sg2, bs + ds, out);
err = crypto_hash_digest(&desc, sg1, nbytes + bs, digest);
if (unlikely(err))
return err;
return crypto_hash_digest(&desc, sg2, bs + ds, out);
}
static int hmac_init_tfm(struct crypto_tfm *tfm)

View File

@ -1912,7 +1912,7 @@ he_service_rbrq(struct he_dev *he_dev, int group)
skb->tail = skb->data + skb->len;
#ifdef USE_CHECKSUM_HW
if (vcc->vpi == 0 && vcc->vci >= ATM_NOT_RSV_VCI) {
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = TCP_CKSUM(skb->data,
he_vcc->pdu_len);
}

View File

@ -87,7 +87,7 @@ static int briq_panel_release(struct inode *ino, struct file *filep)
return 0;
}
static ssize_t briq_panel_read(struct file *file, char *buf, size_t count,
static ssize_t briq_panel_read(struct file *file, char __user *buf, size_t count,
loff_t *ppos)
{
unsigned short c;
@ -135,7 +135,7 @@ static void scroll_vfd( void )
vfd_cursor = 20;
}
static ssize_t briq_panel_write(struct file *file, const char *buf, size_t len,
static ssize_t briq_panel_write(struct file *file, const char __user *buf, size_t len,
loff_t *ppos)
{
size_t indx = len;
@ -150,19 +150,22 @@ static ssize_t briq_panel_write(struct file *file, const char *buf, size_t len,
return -EBUSY;
for (;;) {
char c;
if (!indx)
break;
if (get_user(c, buf))
return -EFAULT;
if (esc) {
set_led(*buf);
set_led(c);
esc = 0;
} else if (*buf == 27) {
} else if (c == 27) {
esc = 1;
} else if (*buf == 12) {
} else if (c == 12) {
/* do a form feed */
for (i=0; i<40; i++)
vfd[i] = ' ';
vfd_cursor = 0;
} else if (*buf == 10) {
} else if (c == 10) {
if (vfd_cursor < 20)
vfd_cursor = 20;
else if (vfd_cursor < 40)
@ -175,7 +178,7 @@ static ssize_t briq_panel_write(struct file *file, const char *buf, size_t len,
/* just a character */
if (vfd_cursor > 39)
scroll_vfd();
vfd[vfd_cursor++] = *buf;
vfd[vfd_cursor++] = c;
}
indx--;
buf++;
@ -202,7 +205,7 @@ static struct miscdevice briq_panel_miscdev = {
static int __init briq_panel_init(void)
{
struct device_node *root = find_path_device("/");
char *machine;
const char *machine;
int i;
machine = get_property(root, "model", NULL);

View File

@ -38,6 +38,7 @@
#define __IB_MAD_PRIV_H__
#include <linux/completion.h>
#include <linux/err.h>
#include <linux/pci.h>
#include <linux/workqueue.h>
#include <rdma/ib_mad.h>

View File

@ -49,6 +49,7 @@
#include <linux/init.h>
#include <linux/dma-mapping.h>
#include <linux/if_arp.h>
#include <linux/vmalloc.h>
#include <asm/io.h>
#include <asm/irq.h>

View File

@ -50,6 +50,7 @@
#include <linux/dma-mapping.h>
#include <linux/mm.h>
#include <linux/inet.h>
#include <linux/vmalloc.h>
#include <linux/route.h>

View File

@ -43,6 +43,7 @@
#include <linux/io.h>
#include <linux/pci.h>
#include <linux/vmalloc.h>
#include <asm/uaccess.h>
#include "ipath_kernel.h"

View File

@ -101,7 +101,7 @@ config MTD_REDBOOT_PARTS_READONLY
config MTD_CMDLINE_PARTS
bool "Command line partition table parsing"
depends on MTD_PARTITIONS = "y"
depends on MTD_PARTITIONS = "y" && MTD = "y"
---help---
Allow generic configuration of the MTD partition tables via the kernel
command line. Multiple flash resources are supported for hardware where
@ -264,7 +264,7 @@ config RFD_FTL
http://www.gensw.com/pages/prod/bios/rfd.htm
config SSFDC
bool "NAND SSFDC (SmartMedia) read only translation layer"
tristate "NAND SSFDC (SmartMedia) read only translation layer"
depends on MTD
default n
help

View File

@ -10,7 +10,6 @@
* published by the Free Software Foundation.
*/
#include <linux/config.h>
#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/init.h>
@ -29,7 +28,7 @@ struct ssfdcr_record {
int cis_block; /* block n. containing CIS/IDI */
int erase_size; /* phys_block_size */
unsigned short *logic_block_map; /* all zones (max 8192 phys blocks on
the 128MB) */
the 128MiB) */
int map_len; /* n. phys_blocks on the card */
};
@ -43,11 +42,11 @@ struct ssfdcr_record {
#define MAX_LOGIC_BLK_PER_ZONE 1000
#define MAX_PHYS_BLK_PER_ZONE 1024
#define KB(x) ( (x) * 1024L )
#define MB(x) ( KB(x) * 1024L )
#define KiB(x) ( (x) * 1024L )
#define MiB(x) ( KiB(x) * 1024L )
/** CHS Table
1MB 2MB 4MB 8MB 16MB 32MB 64MB 128MB
1MiB 2MiB 4MiB 8MiB 16MiB 32MiB 64MiB 128MiB
NCylinder 125 125 250 250 500 500 500 500
NHead 4 4 4 4 4 8 8 16
NSector 4 8 8 16 16 16 32 32
@ -64,14 +63,14 @@ typedef struct {
/* Must be ordered by size */
static const chs_entry_t chs_table[] = {
{ MB( 1), 125, 4, 4 },
{ MB( 2), 125, 4, 8 },
{ MB( 4), 250, 4, 8 },
{ MB( 8), 250, 4, 16 },
{ MB( 16), 500, 4, 16 },
{ MB( 32), 500, 8, 16 },
{ MB( 64), 500, 8, 32 },
{ MB(128), 500, 16, 32 },
{ MiB( 1), 125, 4, 4 },
{ MiB( 2), 125, 4, 8 },
{ MiB( 4), 250, 4, 8 },
{ MiB( 8), 250, 4, 16 },
{ MiB( 16), 500, 4, 16 },
{ MiB( 32), 500, 8, 16 },
{ MiB( 64), 500, 8, 32 },
{ MiB(128), 500, 16, 32 },
{ 0 },
};
@ -109,25 +108,30 @@ static int get_valid_cis_sector(struct mtd_info *mtd)
int ret, k, cis_sector;
size_t retlen;
loff_t offset;
uint8_t sect_buf[SECTOR_SIZE];
uint8_t *sect_buf;
cis_sector = -1;
sect_buf = kmalloc(SECTOR_SIZE, GFP_KERNEL);
if (!sect_buf)
goto out;
/*
* Look for CIS/IDI sector on the first GOOD block (give up after 4 bad
* blocks). If the first good block doesn't contain CIS number the flash
* is not SSFDC formatted
*/
cis_sector = -1;
for (k = 0, offset = 0; k < 4; k++, offset += mtd->erasesize) {
if (!mtd->block_isbad(mtd, offset)) {
ret = mtd->read(mtd, offset, SECTOR_SIZE, &retlen,
sect_buf);
/* CIS pattern match on the sector buffer */
if ( ret < 0 || retlen != SECTOR_SIZE ) {
if (ret < 0 || retlen != SECTOR_SIZE) {
printk(KERN_WARNING
"SSFDC_RO:can't read CIS/IDI sector\n");
} else if ( !memcmp(sect_buf, cis_numbers,
sizeof(cis_numbers)) ) {
} else if (!memcmp(sect_buf, cis_numbers,
sizeof(cis_numbers))) {
/* Found */
cis_sector = (int)(offset >> SECTOR_SHIFT);
} else {
@ -140,6 +144,8 @@ static int get_valid_cis_sector(struct mtd_info *mtd)
}
}
kfree(sect_buf);
out:
return cis_sector;
}
@ -227,7 +233,7 @@ static int get_logical_address(uint8_t *oob_buf)
}
}
if ( !ok )
if (!ok)
block_address = -2;
DEBUG(MTD_DEBUG_LEVEL3, "SSFDC_RO: get_logical_address() %d\n",
@ -245,8 +251,8 @@ static int build_logical_block_map(struct ssfdcr_record *ssfdc)
struct mtd_info *mtd = ssfdc->mbd.mtd;
DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: build_block_map() nblks=%d (%luK)\n",
ssfdc->map_len, (unsigned long)ssfdc->map_len *
ssfdc->erase_size / 1024 );
ssfdc->map_len,
(unsigned long)ssfdc->map_len * ssfdc->erase_size / 1024);
/* Scan every physical block, skip CIS block */
for (phys_block = ssfdc->cis_block + 1; phys_block < ssfdc->map_len;
@ -323,21 +329,21 @@ static void ssfdcr_add_mtd(struct mtd_blktrans_ops *tr, struct mtd_info *mtd)
/* Set geometry */
ssfdc->heads = 16;
ssfdc->sectors = 32;
get_chs( mtd->size, NULL, &ssfdc->heads, &ssfdc->sectors);
get_chs(mtd->size, NULL, &ssfdc->heads, &ssfdc->sectors);
ssfdc->cylinders = (unsigned short)((mtd->size >> SECTOR_SHIFT) /
((long)ssfdc->sectors * (long)ssfdc->heads));
DEBUG(MTD_DEBUG_LEVEL1, "SSFDC_RO: using C:%d H:%d S:%d == %ld sects\n",
ssfdc->cylinders, ssfdc->heads , ssfdc->sectors,
(long)ssfdc->cylinders * (long)ssfdc->heads *
(long)ssfdc->sectors );
(long)ssfdc->sectors);
ssfdc->mbd.size = (long)ssfdc->heads * (long)ssfdc->cylinders *
(long)ssfdc->sectors;
/* Allocate logical block map */
ssfdc->logic_block_map = kmalloc( sizeof(ssfdc->logic_block_map[0]) *
ssfdc->map_len, GFP_KERNEL);
ssfdc->logic_block_map = kmalloc(sizeof(ssfdc->logic_block_map[0]) *
ssfdc->map_len, GFP_KERNEL);
if (!ssfdc->logic_block_map) {
printk(KERN_WARNING
"SSFDC_RO: out of memory for data structures\n");
@ -408,7 +414,7 @@ static int ssfdcr_readsect(struct mtd_blktrans_dev *dev,
"SSFDC_RO: ssfdcr_readsect() phys_sect_no=%lu\n",
sect_no);
if (read_physical_sector( ssfdc->mbd.mtd, buf, sect_no ) < 0)
if (read_physical_sector(ssfdc->mbd.mtd, buf, sect_no) < 0)
return -EIO;
} else {
memset(buf, 0xff, SECTOR_SIZE);

View File

@ -2077,7 +2077,7 @@ boomerang_start_xmit(struct sk_buff *skb, struct net_device *dev)
vp->tx_ring[entry].next = 0;
#if DO_ZEROCOPY
if (skb->ip_summed != CHECKSUM_HW)
if (skb->ip_summed != CHECKSUM_PARTIAL)
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded);
else
vp->tx_ring[entry].status = cpu_to_le32(skb->len | TxIntrUploaded | AddTCPChksum | AddUDPChksum);

View File

@ -813,7 +813,7 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
if (mss)
flags |= LargeSend | ((mss & MSSMask) << MSSShift);
else if (skb->ip_summed == CHECKSUM_HW) {
else if (skb->ip_summed == CHECKSUM_PARTIAL) {
const struct iphdr *ip = skb->nh.iph;
if (ip->protocol == IPPROTO_TCP)
flags |= IPCS | TCPCS;
@ -867,7 +867,7 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
if (mss)
ctrl |= LargeSend |
((mss & MSSMask) << MSSShift);
else if (skb->ip_summed == CHECKSUM_HW) {
else if (skb->ip_summed == CHECKSUM_PARTIAL) {
if (ip->protocol == IPPROTO_TCP)
ctrl |= IPCS | TCPCS;
else if (ip->protocol == IPPROTO_UDP)
@ -898,7 +898,7 @@ static int cp_start_xmit (struct sk_buff *skb, struct net_device *dev)
txd->addr = cpu_to_le64(first_mapping);
wmb();
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
if (ip->protocol == IPPROTO_TCP)
txd->opts1 = cpu_to_le32(first_eor | first_len |
FirstFrag | DescOwn |

View File

@ -2040,7 +2040,7 @@ static void ace_rx_int(struct net_device *dev, u32 rxretprd, u32 rxretcsm)
*/
if (bd_flags & BD_FLG_TCP_UDP_SUM) {
skb->csum = htons(csum);
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
} else {
skb->ip_summed = CHECKSUM_NONE;
}
@ -2511,7 +2511,7 @@ restart:
mapping = ace_map_tx_skb(ap, skb, skb, idx);
flagsize = (skb->len << 16) | (BD_FLG_END);
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
#if ACENIC_DO_VLAN
if (vlan_tx_tag_present(skb)) {
@ -2534,7 +2534,7 @@ restart:
mapping = ace_map_tx_skb(ap, skb, NULL, idx);
flagsize = (skb_headlen(skb) << 16);
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
#if ACENIC_DO_VLAN
if (vlan_tx_tag_present(skb)) {
@ -2560,7 +2560,7 @@ restart:
PCI_DMA_TODEVICE);
flagsize = (frag->size << 16);
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
flagsize |= BD_FLG_TCP_UDP_SUM;
idx = (idx + 1) % ACE_TX_RING_ENTRIES(ap);

View File

@ -161,6 +161,7 @@ static struct pci_device_id com20020pci_id_table[] = {
{ 0x1571, 0xa204, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa205, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x1571, 0xa206, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9030, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{ 0x10B5, 0x9050, PCI_ANY_ID, PCI_ANY_ID, 0, 0, ARC_CAN_10MBIT },
{0,}
};

View File

@ -4423,7 +4423,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
ring_prod = TX_RING_IDX(prod);
vlan_tag_flags = 0;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
vlan_tag_flags |= TX_BD_FLAGS_TCP_UDP_CKSUM;
}

View File

@ -2167,7 +2167,7 @@ end_copy_pkt:
cas_page_unmap(addr);
}
skb->csum = ntohs(i ^ 0xffff);
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->protocol = eth_type_trans(skb, cp->dev);
return len;
}
@ -2821,7 +2821,7 @@ static inline int cas_xmit_tx_ringN(struct cas *cp, int ring,
}
ctrl = 0;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
u64 csum_start_off, csum_stuff_off;
csum_start_off = (u64) (skb->h.raw - skb->data);

View File

@ -1470,9 +1470,9 @@ int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
}
if (!(adapter->flags & UDP_CSUM_CAPABLE) &&
skb->ip_summed == CHECKSUM_HW &&
skb->ip_summed == CHECKSUM_PARTIAL &&
skb->nh.iph->protocol == IPPROTO_UDP)
if (unlikely(skb_checksum_help(skb, 0))) {
if (unlikely(skb_checksum_help(skb))) {
dev_kfree_skb_any(skb);
return NETDEV_TX_OK;
}
@ -1495,11 +1495,11 @@ int t1_start_xmit(struct sk_buff *skb, struct net_device *dev)
cpl = (struct cpl_tx_pkt *)__skb_push(skb, sizeof(*cpl));
cpl->opcode = CPL_TX_PKT;
cpl->ip_csum_dis = 1; /* SW calculates IP csum */
cpl->l4_csum_dis = skb->ip_summed == CHECKSUM_HW ? 0 : 1;
cpl->l4_csum_dis = skb->ip_summed == CHECKSUM_PARTIAL ? 0 : 1;
/* the length field isn't used so don't bother setting it */
st->tx_cso += (skb->ip_summed == CHECKSUM_HW);
sge->stats.tx_do_cksum += (skb->ip_summed == CHECKSUM_HW);
st->tx_cso += (skb->ip_summed == CHECKSUM_PARTIAL);
sge->stats.tx_do_cksum += (skb->ip_summed == CHECKSUM_PARTIAL);
sge->stats.tx_reg_pkts++;
}
cpl->iff = dev->if_port;

View File

@ -611,7 +611,7 @@ start_xmit (struct sk_buff *skb, struct net_device *dev)
txdesc = &np->tx_ring[entry];
#if 0
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
txdesc->status |=
cpu_to_le64 (TCPChecksumEnable | UDPChecksumEnable |
IPChecksumEnable);


@ -2600,7 +2600,7 @@ e1000_tx_csum(struct e1000_adapter *adapter, struct e1000_tx_ring *tx_ring,
unsigned int i;
uint8_t css;
if (likely(skb->ip_summed == CHECKSUM_HW)) {
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
css = skb->h.raw - skb->data;
i = tx_ring->next_to_use;
@ -2927,11 +2927,11 @@ e1000_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
}
/* reserve a descriptor for the offload context */
if ((mss) || (skb->ip_summed == CHECKSUM_HW))
if ((mss) || (skb->ip_summed == CHECKSUM_PARTIAL))
count++;
count++;
#else
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
count++;
#endif
@ -3608,7 +3608,7 @@ e1000_rx_checksum(struct e1000_adapter *adapter,
*/
csum = ntohl(csum ^ 0xFFFF);
skb->csum = csum;
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
}
adapter->hw_csum_good++;
}


@ -1503,7 +1503,8 @@ static int nv_start_xmit(struct sk_buff *skb, struct net_device *dev)
tx_flags_extra = NV_TX2_TSO | (skb_shinfo(skb)->gso_size << NV_TX2_TSO_SHIFT);
else
#endif
tx_flags_extra = (skb->ip_summed == CHECKSUM_HW ? (NV_TX2_CHECKSUM_L3|NV_TX2_CHECKSUM_L4) : 0);
tx_flags_extra = skb->ip_summed == CHECKSUM_PARTIAL ?
NV_TX2_CHECKSUM_L3 | NV_TX2_CHECKSUM_L4 : 0;
/* vlan tag */
if (np->vlangrp && vlan_tx_tag_present(skb)) {


@ -947,7 +947,7 @@ static int gfar_start_xmit(struct sk_buff *skb, struct net_device *dev)
/* Set up checksumming */
if (likely((dev->features & NETIF_F_IP_CSUM)
&& (CHECKSUM_HW == skb->ip_summed))) {
&& (CHECKSUM_PARTIAL == skb->ip_summed))) {
fcb = gfar_add_fcb(skb, txbdp);
status |= TXBD_TOE;
gfar_tx_checksum(skb, fcb);


@ -1648,7 +1648,7 @@ static int hamachi_rx(struct net_device *dev)
* could do the pseudo myself and return
* CHECKSUM_UNNECESSARY
*/
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
}
}
}


@ -1036,7 +1036,7 @@ static inline u16 emac_tx_csum(struct ocp_enet_private *dev,
struct sk_buff *skb)
{
#if defined(CONFIG_IBM_EMAC_TAH)
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
++dev->stats.tx_packets_csum;
return EMAC_TX_CTRL_TAH_CSUM;
}


@ -1387,7 +1387,7 @@ static int ioc3_start_xmit(struct sk_buff *skb, struct net_device *dev)
* MAC header which should not be summed and the TCP/UDP pseudo headers
* manually.
*/
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
int proto = ntohs(skb->nh.iph->protocol);
unsigned int csoff;
struct iphdr *ih = skb->nh.iph;


@ -249,7 +249,7 @@ static void __exit ali_ircc_cleanup(void)
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__);
for (i=0; i < 4; i++) {
for (i=0; i < ARRAY_SIZE(dev_self); i++) {
if (dev_self[i])
ali_ircc_close(dev_self[i]);
}
@ -273,6 +273,12 @@ static int ali_ircc_open(int i, chipio_t *info)
int err;
IRDA_DEBUG(2, "%s(), ---------------- Start ----------------\n", __FUNCTION__);
if (i >= ARRAY_SIZE(dev_self)) {
IRDA_ERROR("%s(), maximum number of supported chips reached!\n",
__FUNCTION__);
return -ENOMEM;
}
/* Set FIR FIFO and DMA Threshold */
if ((ali_ircc_setup(info)) == -1)


@ -1090,7 +1090,7 @@ static int __init irport_init(void)
{
int i;
for (i=0; (io[i] < 2000) && (i < 4); i++) {
for (i=0; (io[i] < 2000) && (i < ARRAY_SIZE(dev_self)); i++) {
if (irport_open(i, io[i], irq[i]) != NULL)
return 0;
}
@ -1112,7 +1112,7 @@ static void __exit irport_cleanup(void)
IRDA_DEBUG( 4, "%s()\n", __FUNCTION__);
for (i=0; i < 4; i++) {
for (i=0; i < ARRAY_SIZE(dev_self); i++) {
if (dev_self[i])
irport_close(dev_self[i]);
}


@ -279,7 +279,7 @@ static void via_ircc_clean(void)
IRDA_DEBUG(3, "%s()\n", __FUNCTION__);
for (i=0; i < 4; i++) {
for (i=0; i < ARRAY_SIZE(dev_self); i++) {
if (dev_self[i])
via_ircc_close(dev_self[i]);
}
@ -327,6 +327,9 @@ static __devinit int via_ircc_open(int i, chipio_t * info, unsigned int id)
IRDA_DEBUG(3, "%s()\n", __FUNCTION__);
if (i >= ARRAY_SIZE(dev_self))
return -ENOMEM;
/* Allocate new instance of the driver */
dev = alloc_irdadev(sizeof(struct via_ircc_cb));
if (dev == NULL)


@ -117,7 +117,7 @@ static int __init w83977af_init(void)
IRDA_DEBUG(0, "%s()\n", __FUNCTION__ );
for (i=0; (io[i] < 2000) && (i < 4); i++) {
for (i=0; (io[i] < 2000) && (i < ARRAY_SIZE(dev_self)); i++) {
if (w83977af_open(i, io[i], irq[i], dma[i]) == 0)
return 0;
}
@ -136,7 +136,7 @@ static void __exit w83977af_cleanup(void)
IRDA_DEBUG(4, "%s()\n", __FUNCTION__ );
for (i=0; i < 4; i++) {
for (i=0; i < ARRAY_SIZE(dev_self); i++) {
if (dev_self[i])
w83977af_close(dev_self[i]);
}
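
The IrDA hunks above (ali_ircc, irport, via_ircc, w83977af) replace the hard-coded loop bound of 4 with ARRAY_SIZE(dev_self), so the iteration limit stays in sync with the size of the dev_self[] table. ARRAY_SIZE is the kernel's element-count macro from <linux/kernel.h> (the kernel's version additionally adds a compile-time array-type check); a small stand-alone sketch of the idiom, with hypothetical names and not part of this commit:

#include <stddef.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))	/* classic form */

static void *dev_self[4];		/* stand-in for a driver instance table */

static void example_cleanup(void)
{
	size_t i;

	/* Bounded by the array definition itself, not a magic constant. */
	for (i = 0; i < ARRAY_SIZE(dev_self); i++)
		if (dev_self[i])
			dev_self[i] = NULL;	/* placeholder for the real close routine */
}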


@ -1232,7 +1232,7 @@ ixgb_tx_csum(struct ixgb_adapter *adapter, struct sk_buff *skb)
unsigned int i;
uint8_t css, cso;
if(likely(skb->ip_summed == CHECKSUM_HW)) {
if(likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
css = skb->h.raw - skb->data;
cso = (skb->h.raw + skb->csum) - skb->data;


@ -1147,7 +1147,7 @@ static void eth_tx_submit_descs_for_skb(struct mv643xx_private *mp,
desc->byte_cnt = length;
desc->buf_ptr = dma_map_single(NULL, skb->data, length, DMA_TO_DEVICE);
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
BUG_ON(skb->protocol != ETH_P_IP);
cmd_sts |= ETH_GEN_TCP_UDP_CHECKSUM |


@ -930,7 +930,7 @@ static inline void myri10ge_vlan_ip_csum(struct sk_buff *skb, u16 hw_csum)
(vh->h_vlan_encapsulated_proto == htons(ETH_P_IP) ||
vh->h_vlan_encapsulated_proto == htons(ETH_P_IPV6))) {
skb->csum = hw_csum;
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
}
}
@ -973,7 +973,7 @@ myri10ge_rx_done(struct myri10ge_priv *mgp, struct myri10ge_rx_buf *rx,
if ((skb->protocol == ntohs(ETH_P_IP)) ||
(skb->protocol == ntohs(ETH_P_IPV6))) {
skb->csum = ntohs((u16) csum);
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
} else
myri10ge_vlan_ip_csum(skb, ntohs((u16) csum));
}
@ -1897,13 +1897,13 @@ again:
pseudo_hdr_offset = 0;
odd_flag = 0;
flags = (MXGEFW_FLAGS_NO_TSO | MXGEFW_FLAGS_FIRST);
if (likely(skb->ip_summed == CHECKSUM_HW)) {
if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
cksum_offset = (skb->h.raw - skb->data);
pseudo_hdr_offset = (skb->h.raw + skb->csum) - skb->data;
/* If the headers are excessively large, then we must
* fall back to a software checksum */
if (unlikely(cksum_offset > 255 || pseudo_hdr_offset > 127)) {
if (skb_checksum_help(skb, 0))
if (skb_checksum_help(skb))
goto drop;
cksum_offset = 0;
pseudo_hdr_offset = 0;


@ -1153,7 +1153,7 @@ again:
if (!nr_frags)
frag = NULL;
extsts = 0;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
extsts |= EXTSTS_IPPKT;
if (IPPROTO_TCP == skb->nh.iph->protocol)
extsts |= EXTSTS_TCPPKT;


@ -2169,7 +2169,7 @@ static inline u32 rtl8169_tso_csum(struct sk_buff *skb, struct net_device *dev)
if (mss)
return LargeSend | ((mss & MSSMask) << MSSShift);
}
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
const struct iphdr *ip = skb->nh.iph;
if (ip->protocol == IPPROTO_TCP)


@ -3893,7 +3893,7 @@ static int s2io_xmit(struct sk_buff *skb, struct net_device *dev)
txdp->Control_1 |= TXD_TCP_LSO_MSS(s2io_tcp_mss(skb));
}
#endif
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
txdp->Control_2 |=
(TXD_TX_CKO_IPV4_EN | TXD_TX_CKO_TCP_EN |
TXD_TX_CKO_UDP_EN);


@ -1559,7 +1559,7 @@ struct sk_buff *pMessage) /* pointer to send-message */
pTxd->VDataHigh = (SK_U32) (PhysAddr >> 32);
pTxd->pMBuf = pMessage;
if (pMessage->ip_summed == CHECKSUM_HW) {
if (pMessage->ip_summed == CHECKSUM_PARTIAL) {
u16 hdrlen = pMessage->h.raw - pMessage->data;
u16 offset = hdrlen + pMessage->csum;
@ -1678,7 +1678,7 @@ struct sk_buff *pMessage) /* pointer to send-message */
/*
** Does the HW need to evaluate checksum for TCP or UDP packets?
*/
if (pMessage->ip_summed == CHECKSUM_HW) {
if (pMessage->ip_summed == CHECKSUM_PARTIAL) {
u16 hdrlen = pMessage->h.raw - pMessage->data;
u16 offset = hdrlen + pMessage->csum;
@ -2158,7 +2158,7 @@ rx_start:
#ifdef USE_SK_RX_CHECKSUM
pMsg->csum = pRxd->TcpSums & 0xffff;
pMsg->ip_summed = CHECKSUM_HW;
pMsg->ip_summed = CHECKSUM_COMPLETE;
#else
pMsg->ip_summed = CHECKSUM_NONE;
#endif


@ -2338,7 +2338,7 @@ static int skge_xmit_frame(struct sk_buff *skb, struct net_device *dev)
td->dma_lo = map;
td->dma_hi = map >> 32;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
int offset = skb->h.raw - skb->data;
/* This seems backwards, but it is what the sk98lin
@ -2642,7 +2642,7 @@ static inline struct sk_buff *skge_rx_get(struct skge_port *skge,
skb->dev = skge->netdev;
if (skge->rx_csum) {
skb->csum = csum;
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
}
skb->protocol = eth_type_trans(skb, skge->netdev);


@ -1163,7 +1163,7 @@ static unsigned tx_le_req(const struct sk_buff *skb)
if (skb_is_gso(skb))
++count;
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
++count;
return count;
@ -1272,7 +1272,7 @@ static int sky2_xmit_frame(struct sk_buff *skb, struct net_device *dev)
#endif
/* Handle TCP checksum offload */
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
u16 hdr = skb->h.raw - skb->data;
u16 offset = hdr + skb->csum;
@ -2000,7 +2000,7 @@ static int sky2_status_intr(struct sky2_hw *hw, int to_do)
#endif
case OP_RXCHKS:
skb = sky2->rx_ring[sky2->rx_next].skb;
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = le16_to_cpu(status);
break;


@ -1230,7 +1230,7 @@ static int start_tx(struct sk_buff *skb, struct net_device *dev)
}
#if defined(ZEROCOPY) && defined(HAS_BROKEN_FIRMWARE)
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
if (skb_padto(skb, (skb->len + PADDING_MASK) & ~PADDING_MASK))
return NETDEV_TX_OK;
}
@ -1252,7 +1252,7 @@ static int start_tx(struct sk_buff *skb, struct net_device *dev)
status |= TxDescIntr;
np->reap_tx = 0;
}
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
status |= TxCalTCP;
np->stats.tx_compressed++;
}
@ -1499,7 +1499,7 @@ static int __netdev_rx(struct net_device *dev, int *quota)
* Until then, the printk stays. :-) -Ion
*/
else if (le16_to_cpu(desc->status2) & 0x0040) {
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->csum = le16_to_cpu(desc->csum);
printk(KERN_DEBUG "%s: checksum_hw, status2 = %#x\n", dev->name, le16_to_cpu(desc->status2));
}


@ -855,7 +855,7 @@ static int gem_rx(struct gem *gp, int work_to_do)
}
skb->csum = ntohs((status & RXDCTRL_TCPCSUM) ^ 0xffff);
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
skb->protocol = eth_type_trans(skb, gp->dev);
netif_receive_skb(skb);
@ -1026,7 +1026,7 @@ static int gem_start_xmit(struct sk_buff *skb, struct net_device *dev)
unsigned long flags;
ctrl = 0;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
u64 csum_start_off, csum_stuff_off;
csum_start_off = (u64) (skb->h.raw - skb->data);


@ -1207,7 +1207,7 @@ static void happy_meal_transceiver_check(struct happy_meal *hp, void __iomem *tr
* flags, thus:
*
* skb->csum = rxd->rx_flags & 0xffff;
* skb->ip_summed = CHECKSUM_HW;
* skb->ip_summed = CHECKSUM_COMPLETE;
*
* before sending off the skb to the protocols, and we are good as gold.
*/
@ -2074,7 +2074,7 @@ static void happy_meal_rx(struct happy_meal *hp, struct net_device *dev)
/* This card is _fucking_ hot... */
skb->csum = ntohs(csum ^ 0xffff);
skb->ip_summed = CHECKSUM_HW;
skb->ip_summed = CHECKSUM_COMPLETE;
RXD(("len=%d csum=%4x]", len, csum));
skb->protocol = eth_type_trans(skb, dev);
@ -2268,7 +2268,7 @@ static int happy_meal_start_xmit(struct sk_buff *skb, struct net_device *dev)
u32 tx_flags;
tx_flags = TXFLAG_OWN;
if (skb->ip_summed == CHECKSUM_HW) {
if (skb->ip_summed == CHECKSUM_PARTIAL) {
u32 csum_start_off, csum_stuff_off;
csum_start_off = (u32) (skb->h.raw - skb->data);


@ -149,122 +149,67 @@ module_param(tg3_debug, int, 0);
MODULE_PARM_DESC(tg3_debug, "Tigon3 bitmapped debugging message enable value");
static struct pci_device_id tg3_pci_tbl[] = {
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5700,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5701,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702FE,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705M_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702X,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703X,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704S,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702A3,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703A3,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5782,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5788,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5789,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5901,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5901_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704S_2,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705F,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5720,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5721,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5750,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5750M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751F,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5752,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5752M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753F,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5754,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5754M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5755,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5755M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5786,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787M,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5714,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5714S,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5715,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5715S,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5780,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5780S,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5781,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9DXX,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9MXX,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1000,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1001,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1003,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC9100,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ PCI_VENDOR_ID_APPLE, PCI_DEVICE_ID_APPLE_TIGON3,
PCI_ANY_ID, PCI_ANY_ID, 0, 0, 0UL },
{ 0, }
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5700)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5701)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702FE)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705_2)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705M_2)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702X)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703X)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704S)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5702A3)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5703A3)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5782)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5788)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5789)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5901)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5901_2)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5704S_2)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5705F)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5720)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5721)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5750)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5750M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5751F)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5752)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5752M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5753F)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5754)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5754M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5755)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5755M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5786)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5787M)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5714)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5714S)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5715)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5715S)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5780)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5780S)},
{PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, PCI_DEVICE_ID_TIGON3_5781)},
{PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9DXX)},
{PCI_DEVICE(PCI_VENDOR_ID_SYSKONNECT, PCI_DEVICE_ID_SYSKONNECT_9MXX)},
{PCI_DEVICE(PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1000)},
{PCI_DEVICE(PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1001)},
{PCI_DEVICE(PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC1003)},
{PCI_DEVICE(PCI_VENDOR_ID_ALTIMA, PCI_DEVICE_ID_ALTIMA_AC9100)},
{PCI_DEVICE(PCI_VENDOR_ID_APPLE, PCI_DEVICE_ID_APPLE_TIGON3)},
{}
};
MODULE_DEVICE_TABLE(pci, tg3_pci_tbl);
static struct {
static const struct {
const char string[ETH_GSTRING_LEN];
} ethtool_stats_keys[TG3_NUM_STATS] = {
{ "rx_octets" },
@ -345,7 +290,7 @@ static struct {
{ "nic_tx_threshold_hit" }
};
static struct {
static const struct {
const char string[ETH_GSTRING_LEN];
} ethtool_test_keys[TG3_NUM_TEST] = {
{ "nvram test (online) " },
@ -3851,11 +3796,11 @@ static int tg3_start_xmit(struct sk_buff *skb, struct net_device *dev)
skb->h.th->check = 0;
}
else if (skb->ip_summed == CHECKSUM_HW)
else if (skb->ip_summed == CHECKSUM_PARTIAL)
base_flags |= TXD_FLAG_TCPUDP_CSUM;
#else
mss = 0;
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
base_flags |= TXD_FLAG_TCPUDP_CSUM;
#endif
#if TG3_VLAN_TAG_USED
@ -3981,7 +3926,7 @@ static int tg3_start_xmit_dma_bug(struct sk_buff *skb, struct net_device *dev)
entry = tp->tx_prod;
base_flags = 0;
if (skb->ip_summed == CHECKSUM_HW)
if (skb->ip_summed == CHECKSUM_PARTIAL)
base_flags |= TXD_FLAG_TCPUDP_CSUM;
#if TG3_TSO_SUPPORT != 0
mss = 0;
@ -4969,7 +4914,7 @@ static int tg3_halt(struct tg3 *tp, int kind, int silent)
#define TG3_FW_BSS_ADDR 0x08000a70
#define TG3_FW_BSS_LEN 0x10
static u32 tg3FwText[(TG3_FW_TEXT_LEN / sizeof(u32)) + 1] = {
static const u32 tg3FwText[(TG3_FW_TEXT_LEN / sizeof(u32)) + 1] = {
0x00000000, 0x10000003, 0x00000000, 0x0000000d, 0x0000000d, 0x3c1d0800,
0x37bd3ffc, 0x03a0f021, 0x3c100800, 0x26100000, 0x0e000018, 0x00000000,
0x0000000d, 0x3c1d0800, 0x37bd3ffc, 0x03a0f021, 0x3c100800, 0x26100034,
@ -5063,7 +5008,7 @@ static u32 tg3FwText[(TG3_FW_TEXT_LEN / sizeof(u32)) + 1] = {
0x27bd0008, 0x03e00008, 0x00000000, 0x00000000, 0x00000000
};
static u32 tg3FwRodata[(TG3_FW_RODATA_LEN / sizeof(u32)) + 1] = {
static const u32 tg3FwRodata[(TG3_FW_RODATA_LEN / sizeof(u32)) + 1] = {
0x35373031, 0x726c7341, 0x00000000, 0x00000000, 0x53774576, 0x656e7430,
0x00000000, 0x726c7045, 0x76656e74, 0x31000000, 0x556e6b6e, 0x45766e74,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x66617461, 0x6c457272,
@ -5128,13 +5073,13 @@ static int tg3_halt_cpu(struct tg3 *tp, u32 offset)
struct fw_info {
unsigned int text_base;
unsigned int text_len;
u32 *text_data;
const u32 *text_data;
unsigned int rodata_base;
unsigned int rodata_len;
u32 *rodata_data;
const u32 *rodata_data;
unsigned int data_base;
unsigned int data_len;
u32 *data_data;
const u32 *data_data;
};
/* tp->lock is held. */
@ -5266,7 +5211,7 @@ static int tg3_load_5701_a0_firmware_fix(struct tg3 *tp)
#define TG3_TSO_FW_BSS_ADDR 0x08001b80
#define TG3_TSO_FW_BSS_LEN 0x894
static u32 tg3TsoFwText[(TG3_TSO_FW_TEXT_LEN / 4) + 1] = {
static const u32 tg3TsoFwText[(TG3_TSO_FW_TEXT_LEN / 4) + 1] = {
0x0e000003, 0x00000000, 0x08001b24, 0x00000000, 0x10000003, 0x00000000,
0x0000000d, 0x0000000d, 0x3c1d0800, 0x37bd4000, 0x03a0f021, 0x3c100800,
0x26100000, 0x0e000010, 0x00000000, 0x0000000d, 0x27bdffe0, 0x3c04fefe,
@ -5553,7 +5498,7 @@ static u32 tg3TsoFwText[(TG3_TSO_FW_TEXT_LEN / 4) + 1] = {
0xac470014, 0xac4a0018, 0x03e00008, 0xac4b001c, 0x00000000, 0x00000000,
};
static u32 tg3TsoFwRodata[] = {
static const u32 tg3TsoFwRodata[] = {
0x4d61696e, 0x43707542, 0x00000000, 0x4d61696e, 0x43707541, 0x00000000,
0x00000000, 0x00000000, 0x73746b6f, 0x66666c64, 0x496e0000, 0x73746b6f,
0x66662a2a, 0x00000000, 0x53774576, 0x656e7430, 0x00000000, 0x00000000,
@ -5561,7 +5506,7 @@ static u32 tg3TsoFwRodata[] = {
0x00000000,
};
static u32 tg3TsoFwData[] = {
static const u32 tg3TsoFwData[] = {
0x00000000, 0x73746b6f, 0x66666c64, 0x5f76312e, 0x362e3000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000,
@ -5583,7 +5528,7 @@ static u32 tg3TsoFwData[] = {
#define TG3_TSO5_FW_BSS_ADDR 0x00010f50
#define TG3_TSO5_FW_BSS_LEN 0x88
static u32 tg3Tso5FwText[(TG3_TSO5_FW_TEXT_LEN / 4) + 1] = {
static const u32 tg3Tso5FwText[(TG3_TSO5_FW_TEXT_LEN / 4) + 1] = {
0x0c004003, 0x00000000, 0x00010f04, 0x00000000, 0x10000003, 0x00000000,
0x0000000d, 0x0000000d, 0x3c1d0001, 0x37bde000, 0x03a0f021, 0x3c100001,
0x26100000, 0x0c004010, 0x00000000, 0x0000000d, 0x27bdffe0, 0x3c04fefe,
@ -5742,14 +5687,14 @@ static u32 tg3Tso5FwText[(TG3_TSO5_FW_TEXT_LEN / 4) + 1] = {
0x00000000, 0x00000000, 0x00000000,
};
static u32 tg3Tso5FwRodata[(TG3_TSO5_FW_RODATA_LEN / 4) + 1] = {
static const u32 tg3Tso5FwRodata[(TG3_TSO5_FW_RODATA_LEN / 4) + 1] = {
0x4d61696e, 0x43707542, 0x00000000, 0x4d61696e, 0x43707541, 0x00000000,
0x00000000, 0x00000000, 0x73746b6f, 0x66666c64, 0x00000000, 0x00000000,
0x73746b6f, 0x66666c64, 0x00000000, 0x00000000, 0x66617461, 0x6c457272,
0x00000000, 0x00000000, 0x00000000,
};
static u32 tg3Tso5FwData[(TG3_TSO5_FW_DATA_LEN / 4) + 1] = {
static const u32 tg3Tso5FwData[(TG3_TSO5_FW_DATA_LEN / 4) + 1] = {
0x00000000, 0x73746b6f, 0x66666c64, 0x5f76312e, 0x322e3000, 0x00000000,
0x00000000, 0x00000000, 0x00000000,
};


@ -830,7 +830,7 @@ typhoon_start_tx(struct sk_buff *skb, struct net_device *dev)
first_txd->addrHi = (u64)((unsigned long) skb) >> 32;
first_txd->processFlags = 0;
if(skb->ip_summed == CHECKSUM_HW) {
if(skb->ip_summed == CHECKSUM_PARTIAL) {
/* The 3XP will figure out if this is UDP/TCP */
first_txd->processFlags |= TYPHOON_TX_PF_TCP_CHKSUM;
first_txd->processFlags |= TYPHOON_TX_PF_UDP_CHKSUM;


@ -1230,7 +1230,7 @@ static int rhine_start_tx(struct sk_buff *skb, struct net_device *dev)
rp->tx_skbuff[entry] = skb;
if ((rp->quirks & rqRhineI) &&
(((unsigned long)skb->data & 3) || skb_shinfo(skb)->nr_frags != 0 || skb->ip_summed == CHECKSUM_HW)) {
(((unsigned long)skb->data & 3) || skb_shinfo(skb)->nr_frags != 0 || skb->ip_summed == CHECKSUM_PARTIAL)) {
/* Must use alignment buffer. */
if (skb->len > PKT_BUF_SZ) {
/* packet too long, drop it */


@ -2002,7 +2002,7 @@ static int velocity_xmit(struct sk_buff *skb, struct net_device *dev)
* Handle hardware checksum
*/
if ((vptr->flags & VELOCITY_FLAGS_TX_CSUM)
&& (skb->ip_summed == CHECKSUM_HW)) {
&& (skb->ip_summed == CHECKSUM_PARTIAL)) {
struct iphdr *ip = skb->nh.iph;
if (ip->protocol == IPPROTO_TCP)
td_ptr->tdesc1.TCR |= TCR0_TCPCK;


@ -1826,8 +1826,8 @@ static int __devinit riva_get_EDID_OF(struct fb_info *info, struct pci_dev *pd)
{
struct riva_par *par = info->par;
struct device_node *dp;
unsigned char *pedid = NULL;
unsigned char *disptype = NULL;
const unsigned char *pedid = NULL;
const unsigned char *disptype = NULL;
static char *propnames[] = {
"DFP,EDID", "LCD,EDID", "EDID", "EDID1", "EDID,B", "EDID,A", NULL };
int i;


@ -1471,8 +1471,8 @@ config NFS_V4
If unsure, say N.
config NFS_DIRECTIO
bool "Allow direct I/O on NFS files (EXPERIMENTAL)"
depends on NFS_FS && EXPERIMENTAL
bool "Allow direct I/O on NFS files"
depends on NFS_FS
help
This option enables applications to perform uncached I/O on files
in NFS file systems using the O_DIRECT open() flag. When O_DIRECT


@ -828,17 +828,19 @@ void d_instantiate(struct dentry *entry, struct inode * inode)
* (or otherwise set) by the caller to indicate that it is now
* in use by the dcache.
*/
struct dentry *d_instantiate_unique(struct dentry *entry, struct inode *inode)
static struct dentry *__d_instantiate_unique(struct dentry *entry,
struct inode *inode)
{
struct dentry *alias;
int len = entry->d_name.len;
const char *name = entry->d_name.name;
unsigned int hash = entry->d_name.hash;
BUG_ON(!list_empty(&entry->d_alias));
spin_lock(&dcache_lock);
if (!inode)
goto do_negative;
if (!inode) {
entry->d_inode = NULL;
return NULL;
}
list_for_each_entry(alias, &inode->i_dentry, d_alias) {
struct qstr *qstr = &alias->d_name;
@ -851,19 +853,35 @@ struct dentry *d_instantiate_unique(struct dentry *entry, struct inode *inode)
if (memcmp(qstr->name, name, len))
continue;
dget_locked(alias);
spin_unlock(&dcache_lock);
BUG_ON(!d_unhashed(alias));
iput(inode);
return alias;
}
list_add(&entry->d_alias, &inode->i_dentry);
do_negative:
entry->d_inode = inode;
fsnotify_d_instantiate(entry, inode);
spin_unlock(&dcache_lock);
security_d_instantiate(entry, inode);
return NULL;
}
struct dentry *d_instantiate_unique(struct dentry *entry, struct inode *inode)
{
struct dentry *result;
BUG_ON(!list_empty(&entry->d_alias));
spin_lock(&dcache_lock);
result = __d_instantiate_unique(entry, inode);
spin_unlock(&dcache_lock);
if (!result) {
security_d_instantiate(entry, inode);
return NULL;
}
BUG_ON(!d_unhashed(result));
iput(inode);
return result;
}
EXPORT_SYMBOL(d_instantiate_unique);
/**
@ -1235,6 +1253,11 @@ static void __d_rehash(struct dentry * entry, struct hlist_head *list)
hlist_add_head_rcu(&entry->d_hash, list);
}
static void _d_rehash(struct dentry * entry)
{
__d_rehash(entry, d_hash(entry->d_parent, entry->d_name.hash));
}
/**
* d_rehash - add an entry back to the hash
* @entry: dentry to add to the hash
@ -1244,11 +1267,9 @@ static void __d_rehash(struct dentry * entry, struct hlist_head *list)
void d_rehash(struct dentry * entry)
{
struct hlist_head *list = d_hash(entry->d_parent, entry->d_name.hash);
spin_lock(&dcache_lock);
spin_lock(&entry->d_lock);
__d_rehash(entry, list);
_d_rehash(entry);
spin_unlock(&entry->d_lock);
spin_unlock(&dcache_lock);
}
@ -1386,6 +1407,120 @@ already_unhashed:
spin_unlock(&dcache_lock);
}
/*
* Prepare an anonymous dentry for life in the superblock's dentry tree as a
* named dentry in place of the dentry to be replaced.
*/
static void __d_materialise_dentry(struct dentry *dentry, struct dentry *anon)
{
struct dentry *dparent, *aparent;
switch_names(dentry, anon);
do_switch(dentry->d_name.len, anon->d_name.len);
do_switch(dentry->d_name.hash, anon->d_name.hash);
dparent = dentry->d_parent;
aparent = anon->d_parent;
dentry->d_parent = (aparent == anon) ? dentry : aparent;
list_del(&dentry->d_u.d_child);
if (!IS_ROOT(dentry))
list_add(&dentry->d_u.d_child, &dentry->d_parent->d_subdirs);
else
INIT_LIST_HEAD(&dentry->d_u.d_child);
anon->d_parent = (dparent == dentry) ? anon : dparent;
list_del(&anon->d_u.d_child);
if (!IS_ROOT(anon))
list_add(&anon->d_u.d_child, &anon->d_parent->d_subdirs);
else
INIT_LIST_HEAD(&anon->d_u.d_child);
anon->d_flags &= ~DCACHE_DISCONNECTED;
}
/**
* d_materialise_unique - introduce an inode into the tree
* @dentry: candidate dentry
* @inode: inode to bind to the dentry, to which aliases may be attached
*
* Introduces a dentry into the tree, substituting an extant disconnected
* root directory alias in its place if there is one
*/
struct dentry *d_materialise_unique(struct dentry *dentry, struct inode *inode)
{
struct dentry *alias, *actual;
BUG_ON(!d_unhashed(dentry));
spin_lock(&dcache_lock);
if (!inode) {
actual = dentry;
dentry->d_inode = NULL;
goto found_lock;
}
/* See if a disconnected directory already exists as an anonymous root
* that we should splice into the tree instead */
if (S_ISDIR(inode->i_mode) && (alias = __d_find_alias(inode, 1))) {
spin_lock(&alias->d_lock);
/* Is this a mountpoint that we could splice into our tree? */
if (IS_ROOT(alias))
goto connect_mountpoint;
if (alias->d_name.len == dentry->d_name.len &&
alias->d_parent == dentry->d_parent &&
memcmp(alias->d_name.name,
dentry->d_name.name,
dentry->d_name.len) == 0)
goto replace_with_alias;
spin_unlock(&alias->d_lock);
/* Doh! Seem to be aliasing directories for some reason... */
dput(alias);
}
/* Add a unique reference */
actual = __d_instantiate_unique(dentry, inode);
if (!actual)
actual = dentry;
else if (unlikely(!d_unhashed(actual)))
goto shouldnt_be_hashed;
found_lock:
spin_lock(&actual->d_lock);
found:
_d_rehash(actual);
spin_unlock(&actual->d_lock);
spin_unlock(&dcache_lock);
if (actual == dentry) {
security_d_instantiate(dentry, inode);
return NULL;
}
iput(inode);
return actual;
/* Convert the anonymous/root alias into an ordinary dentry */
connect_mountpoint:
__d_materialise_dentry(dentry, alias);
/* Replace the candidate dentry with the alias in the tree */
replace_with_alias:
__d_drop(alias);
actual = alias;
goto found;
shouldnt_be_hashed:
spin_unlock(&dcache_lock);
BUG();
goto shouldnt_be_hashed;
}
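
d_materialise_unique() either instantiates the candidate dentry with the inode or returns an existing alias (for a directory, possibly a formerly anonymous root) that it has spliced into place, and the caller is then expected to continue with the returned dentry. The nfs_readdir_lookup() hunk later in this diff follows exactly that pattern; a stripped-down sketch of such a caller (function name hypothetical, not part of the commit):

#include <linux/fs.h>
#include <linux/dcache.h>

/* Mirrors the calling convention used by the NFS hunks below. */
static struct dentry *example_use_materialise(struct dentry *dentry,
					      struct inode *inode)
{
	struct dentry *alias;

	alias = d_materialise_unique(dentry, inode);
	if (alias != NULL) {
		/* An existing alias won; drop our candidate and use it. */
		dput(dentry);
		dentry = alias;
	}
	return dentry;
}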
/**
* d_path - return the path of a dentry
* @dentry: dentry to report
@ -1784,6 +1919,7 @@ EXPORT_SYMBOL(d_instantiate);
EXPORT_SYMBOL(d_invalidate);
EXPORT_SYMBOL(d_lookup);
EXPORT_SYMBOL(d_move);
EXPORT_SYMBOL_GPL(d_materialise_unique);
EXPORT_SYMBOL(d_path);
EXPORT_SYMBOL(d_prune_aliases);
EXPORT_SYMBOL(d_rehash);


@ -151,11 +151,13 @@ static void nlmclnt_release_lockargs(struct nlm_rqst *req)
int
nlmclnt_proc(struct inode *inode, int cmd, struct file_lock *fl)
{
struct rpc_clnt *client = NFS_CLIENT(inode);
struct sockaddr_in addr;
struct nlm_host *host;
struct nlm_rqst *call;
sigset_t oldset;
unsigned long flags;
int status, proto, vers;
int status, vers;
vers = (NFS_PROTO(inode)->version == 3) ? 4 : 1;
if (NFS_PROTO(inode)->version > 3) {
@ -163,10 +165,8 @@ nlmclnt_proc(struct inode *inode, int cmd, struct file_lock *fl)
return -ENOLCK;
}
/* Retrieve transport protocol from NFS client */
proto = NFS_CLIENT(inode)->cl_xprt->prot;
host = nlmclnt_lookup_host(NFS_ADDR(inode), proto, vers);
rpc_peeraddr(client, (struct sockaddr *) &addr, sizeof(addr));
host = nlmclnt_lookup_host(&addr, client->cl_xprt->prot, vers);
if (host == NULL)
return -ENOLCK;


@ -26,7 +26,6 @@
#define NLM_HOST_REBIND (60 * HZ)
#define NLM_HOST_EXPIRE ((nrhosts > NLM_HOST_MAX)? 300 * HZ : 120 * HZ)
#define NLM_HOST_COLLECT ((nrhosts > NLM_HOST_MAX)? 120 * HZ : 60 * HZ)
#define NLM_HOST_ADDR(sv) (&(sv)->s_nlmclnt->cl_xprt->addr)
static struct nlm_host * nlm_hosts[NLM_HOST_NRHASH];
static unsigned long next_gc;
@ -167,7 +166,6 @@ struct rpc_clnt *
nlm_bind_host(struct nlm_host *host)
{
struct rpc_clnt *clnt;
struct rpc_xprt *xprt;
dprintk("lockd: nlm_bind_host(%08x)\n",
(unsigned)ntohl(host->h_addr.sin_addr.s_addr));
@ -179,7 +177,6 @@ nlm_bind_host(struct nlm_host *host)
* RPC rebind is required
*/
if ((clnt = host->h_rpcclnt) != NULL) {
xprt = clnt->cl_xprt;
if (time_after_eq(jiffies, host->h_nextrebind)) {
rpc_force_rebind(clnt);
host->h_nextrebind = jiffies + NLM_HOST_REBIND;
@ -187,31 +184,37 @@ nlm_bind_host(struct nlm_host *host)
host->h_nextrebind - jiffies);
}
} else {
xprt = xprt_create_proto(host->h_proto, &host->h_addr, NULL);
if (IS_ERR(xprt))
goto forgetit;
unsigned long increment = nlmsvc_timeout * HZ;
struct rpc_timeout timeparms = {
.to_initval = increment,
.to_increment = increment,
.to_maxval = increment * 6UL,
.to_retries = 5U,
};
struct rpc_create_args args = {
.protocol = host->h_proto,
.address = (struct sockaddr *)&host->h_addr,
.addrsize = sizeof(host->h_addr),
.timeout = &timeparms,
.servername = host->h_name,
.program = &nlm_program,
.version = host->h_version,
.authflavor = RPC_AUTH_UNIX,
.flags = (RPC_CLNT_CREATE_HARDRTRY |
RPC_CLNT_CREATE_AUTOBIND),
};
xprt_set_timeout(&xprt->timeout, 5, nlmsvc_timeout);
xprt->resvport = 1; /* NLM requires a reserved port */
/* Existing NLM servers accept AUTH_UNIX only */
clnt = rpc_new_client(xprt, host->h_name, &nlm_program,
host->h_version, RPC_AUTH_UNIX);
if (IS_ERR(clnt))
goto forgetit;
clnt->cl_autobind = 1; /* turn on pmap queries */
clnt->cl_softrtry = 1; /* All queries are soft */
host->h_rpcclnt = clnt;
clnt = rpc_create(&args);
if (!IS_ERR(clnt))
host->h_rpcclnt = clnt;
else {
printk("lockd: couldn't create RPC handle for %s\n", host->h_name);
clnt = NULL;
}
}
mutex_unlock(&host->h_mutex);
return clnt;
forgetit:
printk("lockd: couldn't create RPC handle for %s\n", host->h_name);
mutex_unlock(&host->h_mutex);
return NULL;
}
/*


@ -109,30 +109,23 @@ nsm_unmonitor(struct nlm_host *host)
static struct rpc_clnt *
nsm_create(void)
{
struct rpc_xprt *xprt;
struct rpc_clnt *clnt;
struct sockaddr_in sin;
struct sockaddr_in sin = {
.sin_family = AF_INET,
.sin_addr.s_addr = htonl(INADDR_LOOPBACK),
.sin_port = 0,
};
struct rpc_create_args args = {
.protocol = IPPROTO_UDP,
.address = (struct sockaddr *)&sin,
.addrsize = sizeof(sin),
.servername = "localhost",
.program = &nsm_program,
.version = SM_VERSION,
.authflavor = RPC_AUTH_NULL,
.flags = (RPC_CLNT_CREATE_ONESHOT),
};
sin.sin_family = AF_INET;
sin.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
sin.sin_port = 0;
xprt = xprt_create_proto(IPPROTO_UDP, &sin, NULL);
if (IS_ERR(xprt))
return (struct rpc_clnt *)xprt;
xprt->resvport = 1; /* NSM requires a reserved port */
clnt = rpc_create_client(xprt, "localhost",
&nsm_program, SM_VERSION,
RPC_AUTH_NULL);
if (IS_ERR(clnt))
goto out_err;
clnt->cl_softrtry = 1;
clnt->cl_oneshot = 1;
return clnt;
out_err:
return clnt;
return rpc_create(&args);
}
/*


@ -4,9 +4,9 @@
obj-$(CONFIG_NFS_FS) += nfs.o
nfs-y := dir.o file.o inode.o super.o nfs2xdr.o pagelist.o \
proc.o read.o symlink.o unlink.o write.o \
namespace.o
nfs-y := client.o dir.o file.o getroot.o inode.o super.o nfs2xdr.o \
pagelist.o proc.o read.o symlink.o unlink.o \
write.o namespace.o
nfs-$(CONFIG_ROOT_NFS) += nfsroot.o mount_clnt.o
nfs-$(CONFIG_NFS_V3) += nfs3proc.o nfs3xdr.o
nfs-$(CONFIG_NFS_V3_ACL) += nfs3acl.o


@ -19,6 +19,7 @@
#include "nfs4_fs.h"
#include "callback.h"
#include "internal.h"
#define NFSDBG_FACILITY NFSDBG_CALLBACK
@ -36,6 +37,21 @@ static struct svc_program nfs4_callback_program;
unsigned int nfs_callback_set_tcpport;
unsigned short nfs_callback_tcpport;
static const int nfs_set_port_min = 0;
static const int nfs_set_port_max = 65535;
static int param_set_port(const char *val, struct kernel_param *kp)
{
char *endp;
int num = simple_strtol(val, &endp, 0);
if (endp == val || *endp || num < nfs_set_port_min || num > nfs_set_port_max)
return -EINVAL;
*((int *)kp->arg) = num;
return 0;
}
module_param_call(callback_tcpport, param_set_port, param_get_int,
&nfs_callback_set_tcpport, 0644);
/*
* This is the callback kernel thread.
@ -134,10 +150,8 @@ out_err:
/*
* Kill the server process if it is not already up.
*/
int nfs_callback_down(void)
void nfs_callback_down(void)
{
int ret = 0;
lock_kernel();
mutex_lock(&nfs_callback_mutex);
nfs_callback_info.users--;
@ -149,20 +163,19 @@ int nfs_callback_down(void)
} while (wait_for_completion_timeout(&nfs_callback_info.stopped, 5*HZ) == 0);
mutex_unlock(&nfs_callback_mutex);
unlock_kernel();
return ret;
}
static int nfs_callback_authenticate(struct svc_rqst *rqstp)
{
struct in_addr *addr = &rqstp->rq_addr.sin_addr;
struct nfs4_client *clp;
struct sockaddr_in *addr = &rqstp->rq_addr;
struct nfs_client *clp;
/* Don't talk to strangers */
clp = nfs4_find_client(addr);
clp = nfs_find_client(addr, 4);
if (clp == NULL)
return SVC_DROP;
dprintk("%s: %u.%u.%u.%u NFSv4 callback!\n", __FUNCTION__, NIPQUAD(addr));
nfs4_put_client(clp);
dprintk("%s: %u.%u.%u.%u NFSv4 callback!\n", __FUNCTION__, NIPQUAD(addr->sin_addr));
nfs_put_client(clp);
switch (rqstp->rq_authop->flavour) {
case RPC_AUTH_NULL:
if (rqstp->rq_proc != CB_NULL)


@ -62,8 +62,13 @@ struct cb_recallargs {
extern unsigned nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res);
extern unsigned nfs4_callback_recall(struct cb_recallargs *args, void *dummy);
#ifdef CONFIG_NFS_V4
extern int nfs_callback_up(void);
extern int nfs_callback_down(void);
extern void nfs_callback_down(void);
#else
#define nfs_callback_up() (0)
#define nfs_callback_down() do {} while(0)
#endif
extern unsigned int nfs_callback_set_tcpport;
extern unsigned short nfs_callback_tcpport;


@ -10,19 +10,20 @@
#include "nfs4_fs.h"
#include "callback.h"
#include "delegation.h"
#include "internal.h"
#define NFSDBG_FACILITY NFSDBG_CALLBACK
unsigned nfs4_callback_getattr(struct cb_getattrargs *args, struct cb_getattrres *res)
{
struct nfs4_client *clp;
struct nfs_client *clp;
struct nfs_delegation *delegation;
struct nfs_inode *nfsi;
struct inode *inode;
res->bitmap[0] = res->bitmap[1] = 0;
res->status = htonl(NFS4ERR_BADHANDLE);
clp = nfs4_find_client(&args->addr->sin_addr);
clp = nfs_find_client(args->addr, 4);
if (clp == NULL)
goto out;
inode = nfs_delegation_find_inode(clp, &args->fh);
@ -48,7 +49,7 @@ out_iput:
up_read(&nfsi->rwsem);
iput(inode);
out_putclient:
nfs4_put_client(clp);
nfs_put_client(clp);
out:
dprintk("%s: exit with status = %d\n", __FUNCTION__, ntohl(res->status));
return res->status;
@ -56,12 +57,12 @@ out:
unsigned nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
{
struct nfs4_client *clp;
struct nfs_client *clp;
struct inode *inode;
unsigned res;
res = htonl(NFS4ERR_BADHANDLE);
clp = nfs4_find_client(&args->addr->sin_addr);
clp = nfs_find_client(args->addr, 4);
if (clp == NULL)
goto out;
inode = nfs_delegation_find_inode(clp, &args->fh);
@ -80,7 +81,7 @@ unsigned nfs4_callback_recall(struct cb_recallargs *args, void *dummy)
}
iput(inode);
out_putclient:
nfs4_put_client(clp);
nfs_put_client(clp);
out:
dprintk("%s: exit with status = %d\n", __FUNCTION__, ntohl(res));
return res;

fs/nfs/client.c — new file (1448 lines); diff not shown here due to its size.


@ -18,6 +18,7 @@
#include "nfs4_fs.h"
#include "delegation.h"
#include "internal.h"
static struct nfs_delegation *nfs_alloc_delegation(void)
{
@ -52,7 +53,7 @@ static int nfs_delegation_claim_locks(struct nfs_open_context *ctx, struct nfs4_
case -NFS4ERR_EXPIRED:
/* kill_proc(fl->fl_pid, SIGLOST, 1); */
case -NFS4ERR_STALE_CLIENTID:
nfs4_schedule_state_recovery(NFS_SERVER(inode)->nfs4_state);
nfs4_schedule_state_recovery(NFS_SERVER(inode)->nfs_client);
goto out_err;
}
}
@ -114,7 +115,7 @@ void nfs_inode_reclaim_delegation(struct inode *inode, struct rpc_cred *cred, st
*/
int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct nfs_openres *res)
{
struct nfs4_client *clp = NFS_SERVER(inode)->nfs4_state;
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_delegation *delegation;
int status = 0;
@ -145,7 +146,7 @@ int nfs_inode_set_delegation(struct inode *inode, struct rpc_cred *cred, struct
sizeof(delegation->stateid)) != 0 ||
delegation->type != nfsi->delegation->type) {
printk("%s: server %u.%u.%u.%u, handed out a duplicate delegation!\n",
__FUNCTION__, NIPQUAD(clp->cl_addr));
__FUNCTION__, NIPQUAD(clp->cl_addr.sin_addr));
status = -EIO;
}
}
@ -176,7 +177,7 @@ static void nfs_msync_inode(struct inode *inode)
*/
int __nfs_inode_return_delegation(struct inode *inode)
{
struct nfs4_client *clp = NFS_SERVER(inode)->nfs4_state;
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_delegation *delegation;
int res = 0;
@ -208,7 +209,7 @@ int __nfs_inode_return_delegation(struct inode *inode)
*/
void nfs_return_all_delegations(struct super_block *sb)
{
struct nfs4_client *clp = NFS_SB(sb)->nfs4_state;
struct nfs_client *clp = NFS_SB(sb)->nfs_client;
struct nfs_delegation *delegation;
struct inode *inode;
@ -232,7 +233,7 @@ restart:
int nfs_do_expire_all_delegations(void *ptr)
{
struct nfs4_client *clp = ptr;
struct nfs_client *clp = ptr;
struct nfs_delegation *delegation;
struct inode *inode;
@ -254,11 +255,11 @@ restart:
}
out:
spin_unlock(&clp->cl_lock);
nfs4_put_client(clp);
nfs_put_client(clp);
module_put_and_exit(0);
}
void nfs_expire_all_delegations(struct nfs4_client *clp)
void nfs_expire_all_delegations(struct nfs_client *clp)
{
struct task_struct *task;
@ -266,17 +267,17 @@ void nfs_expire_all_delegations(struct nfs4_client *clp)
atomic_inc(&clp->cl_count);
task = kthread_run(nfs_do_expire_all_delegations, clp,
"%u.%u.%u.%u-delegreturn",
NIPQUAD(clp->cl_addr));
NIPQUAD(clp->cl_addr.sin_addr));
if (!IS_ERR(task))
return;
nfs4_put_client(clp);
nfs_put_client(clp);
module_put(THIS_MODULE);
}
/*
* Return all delegations following an NFS4ERR_CB_PATH_DOWN error.
*/
void nfs_handle_cb_pathdown(struct nfs4_client *clp)
void nfs_handle_cb_pathdown(struct nfs_client *clp)
{
struct nfs_delegation *delegation;
struct inode *inode;
@ -299,7 +300,7 @@ restart:
struct recall_threadargs {
struct inode *inode;
struct nfs4_client *clp;
struct nfs_client *clp;
const nfs4_stateid *stateid;
struct completion started;
@ -310,7 +311,7 @@ static int recall_thread(void *data)
{
struct recall_threadargs *args = (struct recall_threadargs *)data;
struct inode *inode = igrab(args->inode);
struct nfs4_client *clp = NFS_SERVER(inode)->nfs4_state;
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_delegation *delegation;
@ -371,7 +372,7 @@ out_module_put:
/*
* Retrieve the inode associated with a delegation
*/
struct inode *nfs_delegation_find_inode(struct nfs4_client *clp, const struct nfs_fh *fhandle)
struct inode *nfs_delegation_find_inode(struct nfs_client *clp, const struct nfs_fh *fhandle)
{
struct nfs_delegation *delegation;
struct inode *res = NULL;
@ -389,7 +390,7 @@ struct inode *nfs_delegation_find_inode(struct nfs4_client *clp, const struct nf
/*
* Mark all delegations as needing to be reclaimed
*/
void nfs_delegation_mark_reclaim(struct nfs4_client *clp)
void nfs_delegation_mark_reclaim(struct nfs_client *clp)
{
struct nfs_delegation *delegation;
spin_lock(&clp->cl_lock);
@ -401,7 +402,7 @@ void nfs_delegation_mark_reclaim(struct nfs4_client *clp)
/*
* Reap all unclaimed delegations after reboot recovery is done
*/
void nfs_delegation_reap_unclaimed(struct nfs4_client *clp)
void nfs_delegation_reap_unclaimed(struct nfs_client *clp)
{
struct nfs_delegation *delegation, *n;
LIST_HEAD(head);
@ -423,7 +424,7 @@ void nfs_delegation_reap_unclaimed(struct nfs4_client *clp)
int nfs4_copy_delegation_stateid(nfs4_stateid *dst, struct inode *inode)
{
struct nfs4_client *clp = NFS_SERVER(inode)->nfs4_state;
struct nfs_client *clp = NFS_SERVER(inode)->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_delegation *delegation;
int res = 0;


@ -29,13 +29,13 @@ void nfs_inode_reclaim_delegation(struct inode *inode, struct rpc_cred *cred, st
int __nfs_inode_return_delegation(struct inode *inode);
int nfs_async_inode_return_delegation(struct inode *inode, const nfs4_stateid *stateid);
struct inode *nfs_delegation_find_inode(struct nfs4_client *clp, const struct nfs_fh *fhandle);
struct inode *nfs_delegation_find_inode(struct nfs_client *clp, const struct nfs_fh *fhandle);
void nfs_return_all_delegations(struct super_block *sb);
void nfs_expire_all_delegations(struct nfs4_client *clp);
void nfs_handle_cb_pathdown(struct nfs4_client *clp);
void nfs_expire_all_delegations(struct nfs_client *clp);
void nfs_handle_cb_pathdown(struct nfs_client *clp);
void nfs_delegation_mark_reclaim(struct nfs4_client *clp);
void nfs_delegation_reap_unclaimed(struct nfs4_client *clp);
void nfs_delegation_mark_reclaim(struct nfs_client *clp);
void nfs_delegation_reap_unclaimed(struct nfs_client *clp);
/* NFSv4 delegation-related procedures */
int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4_stateid *stateid);


@ -30,7 +30,9 @@
#include <linux/nfs_mount.h>
#include <linux/pagemap.h>
#include <linux/smp_lock.h>
#include <linux/pagevec.h>
#include <linux/namei.h>
#include <linux/mount.h>
#include "nfs4_fs.h"
#include "delegation.h"
@ -870,14 +872,14 @@ int nfs_is_exclusive_create(struct inode *dir, struct nameidata *nd)
return (nd->intent.open.flags & O_EXCL) != 0;
}
static inline int nfs_reval_fsid(struct inode *dir,
struct nfs_fh *fh, struct nfs_fattr *fattr)
static inline int nfs_reval_fsid(struct vfsmount *mnt, struct inode *dir,
struct nfs_fh *fh, struct nfs_fattr *fattr)
{
struct nfs_server *server = NFS_SERVER(dir);
if (!nfs_fsid_equal(&server->fsid, &fattr->fsid))
/* Revalidate fsid on root dir */
return __nfs_revalidate_inode(server, dir->i_sb->s_root->d_inode);
return __nfs_revalidate_inode(server, mnt->mnt_root->d_inode);
return 0;
}
@ -902,9 +904,15 @@ static struct dentry *nfs_lookup(struct inode *dir, struct dentry * dentry, stru
lock_kernel();
/* If we're doing an exclusive create, optimize away the lookup */
if (nfs_is_exclusive_create(dir, nd))
goto no_entry;
/*
* If we're doing an exclusive create, optimize away the lookup
* but don't hash the dentry.
*/
if (nfs_is_exclusive_create(dir, nd)) {
d_instantiate(dentry, NULL);
res = NULL;
goto out_unlock;
}
error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, &fhandle, &fattr);
if (error == -ENOENT)
@ -913,7 +921,7 @@ static struct dentry *nfs_lookup(struct inode *dir, struct dentry * dentry, stru
res = ERR_PTR(error);
goto out_unlock;
}
error = nfs_reval_fsid(dir, &fhandle, &fattr);
error = nfs_reval_fsid(nd->mnt, dir, &fhandle, &fattr);
if (error < 0) {
res = ERR_PTR(error);
goto out_unlock;
@ -922,8 +930,9 @@ static struct dentry *nfs_lookup(struct inode *dir, struct dentry * dentry, stru
res = (struct dentry *)inode;
if (IS_ERR(res))
goto out_unlock;
no_entry:
res = d_add_unique(dentry, inode);
res = d_materialise_unique(dentry, inode);
if (res != NULL)
dentry = res;
nfs_renew_times(dentry);
@ -1117,11 +1126,13 @@ static struct dentry *nfs_readdir_lookup(nfs_readdir_descriptor_t *desc)
dput(dentry);
return NULL;
}
alias = d_add_unique(dentry, inode);
alias = d_materialise_unique(dentry, inode);
if (alias != NULL) {
dput(dentry);
dentry = alias;
}
nfs_renew_times(dentry);
nfs_set_verifier(dentry, nfs_save_change_attribute(dir));
return dentry;
@ -1143,23 +1154,22 @@ int nfs_instantiate(struct dentry *dentry, struct nfs_fh *fhandle,
struct inode *dir = dentry->d_parent->d_inode;
error = NFS_PROTO(dir)->lookup(dir, &dentry->d_name, fhandle, fattr);
if (error)
goto out_err;
return error;
}
if (!(fattr->valid & NFS_ATTR_FATTR)) {
struct nfs_server *server = NFS_SB(dentry->d_sb);
error = server->rpc_ops->getattr(server, fhandle, fattr);
error = server->nfs_client->rpc_ops->getattr(server, fhandle, fattr);
if (error < 0)
goto out_err;
return error;
}
inode = nfs_fhget(dentry->d_sb, fhandle, fattr);
error = PTR_ERR(inode);
if (IS_ERR(inode))
goto out_err;
return error;
d_instantiate(dentry, inode);
if (d_unhashed(dentry))
d_rehash(dentry);
return 0;
out_err:
d_drop(dentry);
return error;
}
/*
@ -1440,48 +1450,82 @@ static int nfs_unlink(struct inode *dir, struct dentry *dentry)
return error;
}
static int
nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
/*
* To create a symbolic link, most file systems instantiate a new inode,
* add a page to it containing the path, then write it out to the disk
* using prepare_write/commit_write.
*
* Unfortunately the NFS client can't create the in-core inode first
* because it needs a file handle to create an in-core inode (see
* fs/nfs/inode.c:nfs_fhget). We only have a file handle *after* the
* symlink request has completed on the server.
*
* So instead we allocate a raw page, copy the symname into it, then do
* the SYMLINK request with the page as the buffer. If it succeeds, we
* now have a new file handle and can instantiate an in-core NFS inode
* and move the raw page into its mapping.
*/
static int nfs_symlink(struct inode *dir, struct dentry *dentry, const char *symname)
{
struct pagevec lru_pvec;
struct page *page;
char *kaddr;
struct iattr attr;
struct nfs_fattr sym_attr;
struct nfs_fh sym_fh;
struct qstr qsymname;
unsigned int pathlen = strlen(symname);
int error;
dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s)\n", dir->i_sb->s_id,
dir->i_ino, dentry->d_name.name, symname);
#ifdef NFS_PARANOIA
if (dentry->d_inode)
printk("nfs_proc_symlink: %s/%s not negative!\n",
dentry->d_parent->d_name.name, dentry->d_name.name);
#endif
/*
* Fill in the sattr for the call.
* Note: SunOS 4.1.2 crashes if the mode isn't initialized!
*/
attr.ia_valid = ATTR_MODE;
attr.ia_mode = S_IFLNK | S_IRWXUGO;
if (pathlen > PAGE_SIZE)
return -ENAMETOOLONG;
qsymname.name = symname;
qsymname.len = strlen(symname);
attr.ia_mode = S_IFLNK | S_IRWXUGO;
attr.ia_valid = ATTR_MODE;
lock_kernel();
nfs_begin_data_update(dir);
error = NFS_PROTO(dir)->symlink(dir, &dentry->d_name, &qsymname,
&attr, &sym_fh, &sym_attr);
nfs_end_data_update(dir);
if (!error) {
error = nfs_instantiate(dentry, &sym_fh, &sym_attr);
} else {
if (error == -EEXIST)
printk("nfs_proc_symlink: %s/%s already exists??\n",
dentry->d_parent->d_name.name, dentry->d_name.name);
d_drop(dentry);
page = alloc_page(GFP_KERNEL);
if (!page) {
unlock_kernel();
return -ENOMEM;
}
kaddr = kmap_atomic(page, KM_USER0);
memcpy(kaddr, symname, pathlen);
if (pathlen < PAGE_SIZE)
memset(kaddr + pathlen, 0, PAGE_SIZE - pathlen);
kunmap_atomic(kaddr, KM_USER0);
nfs_begin_data_update(dir);
error = NFS_PROTO(dir)->symlink(dir, dentry, page, pathlen, &attr);
nfs_end_data_update(dir);
if (error != 0) {
dfprintk(VFS, "NFS: symlink(%s/%ld, %s, %s) error %d\n",
dir->i_sb->s_id, dir->i_ino,
dentry->d_name.name, symname, error);
d_drop(dentry);
__free_page(page);
unlock_kernel();
return error;
}
/*
* No big deal if we can't add this page to the page cache here.
* READLINK will get the missing page from the server if needed.
*/
pagevec_init(&lru_pvec, 0);
if (!add_to_page_cache(page, dentry->d_inode->i_mapping, 0,
GFP_KERNEL)) {
if (!pagevec_add(&lru_pvec, page))
__pagevec_lru_add(&lru_pvec);
SetPageUptodate(page);
unlock_page(page);
} else
__free_page(page);
unlock_kernel();
return error;
return 0;
}
static int
@ -1638,35 +1682,211 @@ out:
return error;
}
static DEFINE_SPINLOCK(nfs_access_lru_lock);
static LIST_HEAD(nfs_access_lru_list);
static atomic_long_t nfs_access_nr_entries;
static void nfs_access_free_entry(struct nfs_access_entry *entry)
{
put_rpccred(entry->cred);
kfree(entry);
smp_mb__before_atomic_dec();
atomic_long_dec(&nfs_access_nr_entries);
smp_mb__after_atomic_dec();
}
int nfs_access_cache_shrinker(int nr_to_scan, gfp_t gfp_mask)
{
LIST_HEAD(head);
struct nfs_inode *nfsi;
struct nfs_access_entry *cache;
spin_lock(&nfs_access_lru_lock);
restart:
list_for_each_entry(nfsi, &nfs_access_lru_list, access_cache_inode_lru) {
struct inode *inode;
if (nr_to_scan-- == 0)
break;
inode = igrab(&nfsi->vfs_inode);
if (inode == NULL)
continue;
spin_lock(&inode->i_lock);
if (list_empty(&nfsi->access_cache_entry_lru))
goto remove_lru_entry;
cache = list_entry(nfsi->access_cache_entry_lru.next,
struct nfs_access_entry, lru);
list_move(&cache->lru, &head);
rb_erase(&cache->rb_node, &nfsi->access_cache);
if (!list_empty(&nfsi->access_cache_entry_lru))
list_move_tail(&nfsi->access_cache_inode_lru,
&nfs_access_lru_list);
else {
remove_lru_entry:
list_del_init(&nfsi->access_cache_inode_lru);
clear_bit(NFS_INO_ACL_LRU_SET, &nfsi->flags);
}
spin_unlock(&inode->i_lock);
iput(inode);
goto restart;
}
spin_unlock(&nfs_access_lru_lock);
while (!list_empty(&head)) {
cache = list_entry(head.next, struct nfs_access_entry, lru);
list_del(&cache->lru);
nfs_access_free_entry(cache);
}
return (atomic_long_read(&nfs_access_nr_entries) / 100) * sysctl_vfs_cache_pressure;
}
static void __nfs_access_zap_cache(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct rb_root *root_node = &nfsi->access_cache;
struct rb_node *n, *dispose = NULL;
struct nfs_access_entry *entry;
/* Unhook entries from the cache */
while ((n = rb_first(root_node)) != NULL) {
entry = rb_entry(n, struct nfs_access_entry, rb_node);
rb_erase(n, root_node);
list_del(&entry->lru);
n->rb_left = dispose;
dispose = n;
}
nfsi->cache_validity &= ~NFS_INO_INVALID_ACCESS;
spin_unlock(&inode->i_lock);
/* Now kill them all! */
while (dispose != NULL) {
n = dispose;
dispose = n->rb_left;
nfs_access_free_entry(rb_entry(n, struct nfs_access_entry, rb_node));
}
}
void nfs_access_zap_cache(struct inode *inode)
{
/* Remove from global LRU init */
if (test_and_clear_bit(NFS_INO_ACL_LRU_SET, &NFS_FLAGS(inode))) {
spin_lock(&nfs_access_lru_lock);
list_del_init(&NFS_I(inode)->access_cache_inode_lru);
spin_unlock(&nfs_access_lru_lock);
}
spin_lock(&inode->i_lock);
/* This will release the spinlock */
__nfs_access_zap_cache(inode);
}
static struct nfs_access_entry *nfs_access_search_rbtree(struct inode *inode, struct rpc_cred *cred)
{
struct rb_node *n = NFS_I(inode)->access_cache.rb_node;
struct nfs_access_entry *entry;
while (n != NULL) {
entry = rb_entry(n, struct nfs_access_entry, rb_node);
if (cred < entry->cred)
n = n->rb_left;
else if (cred > entry->cred)
n = n->rb_right;
else
return entry;
}
return NULL;
}
int nfs_access_get_cached(struct inode *inode, struct rpc_cred *cred, struct nfs_access_entry *res)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_access_entry *cache = &nfsi->cache_access;
struct nfs_access_entry *cache;
int err = -ENOENT;
if (cache->cred != cred
|| time_after(jiffies, cache->jiffies + NFS_ATTRTIMEO(inode))
|| (nfsi->cache_validity & NFS_INO_INVALID_ACCESS))
return -ENOENT;
memcpy(res, cache, sizeof(*res));
return 0;
spin_lock(&inode->i_lock);
if (nfsi->cache_validity & NFS_INO_INVALID_ACCESS)
goto out_zap;
cache = nfs_access_search_rbtree(inode, cred);
if (cache == NULL)
goto out;
if (time_after(jiffies, cache->jiffies + NFS_ATTRTIMEO(inode)))
goto out_stale;
res->jiffies = cache->jiffies;
res->cred = cache->cred;
res->mask = cache->mask;
list_move_tail(&cache->lru, &nfsi->access_cache_entry_lru);
err = 0;
out:
spin_unlock(&inode->i_lock);
return err;
out_stale:
rb_erase(&cache->rb_node, &nfsi->access_cache);
list_del(&cache->lru);
spin_unlock(&inode->i_lock);
nfs_access_free_entry(cache);
return -ENOENT;
out_zap:
/* This will release the spinlock */
__nfs_access_zap_cache(inode);
return -ENOENT;
}
static void nfs_access_add_rbtree(struct inode *inode, struct nfs_access_entry *set)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct rb_root *root_node = &nfsi->access_cache;
struct rb_node **p = &root_node->rb_node;
struct rb_node *parent = NULL;
struct nfs_access_entry *entry;
spin_lock(&inode->i_lock);
while (*p != NULL) {
parent = *p;
entry = rb_entry(parent, struct nfs_access_entry, rb_node);
if (set->cred < entry->cred)
p = &parent->rb_left;
else if (set->cred > entry->cred)
p = &parent->rb_right;
else
goto found;
}
rb_link_node(&set->rb_node, parent, p);
rb_insert_color(&set->rb_node, root_node);
list_add_tail(&set->lru, &nfsi->access_cache_entry_lru);
spin_unlock(&inode->i_lock);
return;
found:
rb_replace_node(parent, &set->rb_node, root_node);
list_add_tail(&set->lru, &nfsi->access_cache_entry_lru);
list_del(&entry->lru);
spin_unlock(&inode->i_lock);
nfs_access_free_entry(entry);
}
void nfs_access_add_cache(struct inode *inode, struct nfs_access_entry *set)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs_access_entry *cache = &nfsi->cache_access;
if (cache->cred != set->cred) {
if (cache->cred)
put_rpccred(cache->cred);
cache->cred = get_rpccred(set->cred);
}
/* FIXME: replace current access_cache BKL reliance with inode->i_lock */
spin_lock(&inode->i_lock);
nfsi->cache_validity &= ~NFS_INO_INVALID_ACCESS;
spin_unlock(&inode->i_lock);
struct nfs_access_entry *cache = kmalloc(sizeof(*cache), GFP_KERNEL);
if (cache == NULL)
return;
RB_CLEAR_NODE(&cache->rb_node);
cache->jiffies = set->jiffies;
cache->cred = get_rpccred(set->cred);
cache->mask = set->mask;
nfs_access_add_rbtree(inode, cache);
/* Update accounting */
smp_mb__before_atomic_inc();
atomic_long_inc(&nfs_access_nr_entries);
smp_mb__after_atomic_inc();
/* Add inode to global LRU list */
if (!test_and_set_bit(NFS_INO_ACL_LRU_SET, &NFS_FLAGS(inode))) {
spin_lock(&nfs_access_lru_lock);
list_add_tail(&NFS_I(inode)->access_cache_inode_lru, &nfs_access_lru_list);
spin_unlock(&nfs_access_lru_lock);
}
}
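
The access cache code above keeps two least-recently-used lists: a per-inode list of cache entries and a global list of inodes, so that the shrinker can reclaim the coldest entries first. The following user-space sketch is only an illustration of that move-to-tail-on-hit / reclaim-from-head discipline; struct lru_node and cache_entry are invented names, not kernel types.

/* Minimal sketch of the LRU discipline used by the access cache:
 * a hit moves the entry to the tail, reclaim takes entries from the head.
 */
#include <stddef.h>
#include <stdio.h>

struct lru_node { struct lru_node *prev, *next; };

static void lru_init(struct lru_node *head) { head->prev = head->next = head; }
static void lru_del(struct lru_node *n) { n->prev->next = n->next; n->next->prev = n->prev; }
static void lru_add_tail(struct lru_node *head, struct lru_node *n)
{
    n->prev = head->prev;
    n->next = head;
    head->prev->next = n;
    head->prev = n;
}

struct cache_entry { const char *name; struct lru_node lru; };

int main(void)
{
    struct lru_node head;
    struct cache_entry a = { "a" }, b = { "b" }, c = { "c" };
    struct lru_node *pos;

    lru_init(&head);
    lru_add_tail(&head, &a.lru);
    lru_add_tail(&head, &b.lru);
    lru_add_tail(&head, &c.lru);

    /* "a" is referenced again: move it to the tail (most recently used) */
    lru_del(&a.lru);
    lru_add_tail(&head, &a.lru);

    /* reclaim walks from the head, i.e. coldest entries first: b, c, a */
    for (pos = head.next; pos != &head; pos = pos->next) {
        struct cache_entry *e = (struct cache_entry *)
            ((char *)pos - offsetof(struct cache_entry, lru));
        printf("%s\n", e->name);
    }
    return 0;
}
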
static int nfs_do_access(struct inode *inode, struct rpc_cred *cred, int mask)


@ -111,7 +111,7 @@ nfs_file_open(struct inode *inode, struct file *filp)
nfs_inc_stats(inode, NFSIOS_VFSOPEN);
lock_kernel();
res = NFS_SERVER(inode)->rpc_ops->file_open(inode, filp);
res = NFS_PROTO(inode)->file_open(inode, filp);
unlock_kernel();
return res;
}
@ -157,7 +157,7 @@ force_reval:
static loff_t nfs_file_llseek(struct file *filp, loff_t offset, int origin)
{
/* origin == SEEK_END => we must revalidate the cached file length */
if (origin == 2) {
if (origin == SEEK_END) {
struct inode *inode = filp->f_mapping->host;
int retval = nfs_revalidate_file_size(inode, filp);
if (retval < 0)

fs/nfs/getroot.c (new file, 311 lines)

@ -0,0 +1,311 @@
/* getroot.c: get the root dentry for an NFS mount
*
* Copyright (C) 2006 Red Hat, Inc. All Rights Reserved.
* Written by David Howells (dhowells@redhat.com)
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version
* 2 of the License, or (at your option) any later version.
*/
#include <linux/config.h>
#include <linux/module.h>
#include <linux/init.h>
#include <linux/time.h>
#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/string.h>
#include <linux/stat.h>
#include <linux/errno.h>
#include <linux/unistd.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/stats.h>
#include <linux/nfs_fs.h>
#include <linux/nfs_mount.h>
#include <linux/nfs4_mount.h>
#include <linux/lockd/bind.h>
#include <linux/smp_lock.h>
#include <linux/seq_file.h>
#include <linux/mount.h>
#include <linux/nfs_idmap.h>
#include <linux/vfs.h>
#include <linux/namei.h>
#include <linux/namespace.h>
#include <linux/security.h>
#include <asm/system.h>
#include <asm/uaccess.h>
#include "nfs4_fs.h"
#include "delegation.h"
#include "internal.h"
#define NFSDBG_FACILITY NFSDBG_CLIENT
#define NFS_PARANOIA 1
/*
* get an NFS2/NFS3 root dentry from the root filehandle
*/
struct dentry *nfs_get_root(struct super_block *sb, struct nfs_fh *mntfh)
{
struct nfs_server *server = NFS_SB(sb);
struct nfs_fsinfo fsinfo;
struct nfs_fattr fattr;
struct dentry *mntroot;
struct inode *inode;
int error;
/* create a dummy root dentry with dummy inode for this superblock */
if (!sb->s_root) {
struct nfs_fh dummyfh;
struct dentry *root;
struct inode *iroot;
memset(&dummyfh, 0, sizeof(dummyfh));
memset(&fattr, 0, sizeof(fattr));
nfs_fattr_init(&fattr);
fattr.valid = NFS_ATTR_FATTR;
fattr.type = NFDIR;
fattr.mode = S_IFDIR | S_IRUSR | S_IWUSR;
fattr.nlink = 2;
iroot = nfs_fhget(sb, &dummyfh, &fattr);
if (IS_ERR(iroot))
return ERR_PTR(PTR_ERR(iroot));
root = d_alloc_root(iroot);
if (!root) {
iput(iroot);
return ERR_PTR(-ENOMEM);
}
sb->s_root = root;
}
/* get the actual root for this mount */
fsinfo.fattr = &fattr;
error = server->nfs_client->rpc_ops->getroot(server, mntfh, &fsinfo);
if (error < 0) {
dprintk("nfs_get_root: getattr error = %d\n", -error);
return ERR_PTR(error);
}
inode = nfs_fhget(sb, mntfh, fsinfo.fattr);
if (IS_ERR(inode)) {
dprintk("nfs_get_root: get root inode failed\n");
return ERR_PTR(PTR_ERR(inode));
}
/* root dentries normally start off anonymous and get spliced in later
* if the dentry tree reaches them; however, if the dentry already
* exists, we'll pick it up at this point and use it as the root
*/
mntroot = d_alloc_anon(inode);
if (!mntroot) {
iput(inode);
dprintk("nfs_get_root: get root dentry failed\n");
return ERR_PTR(-ENOMEM);
}
security_d_instantiate(mntroot, inode);
if (!mntroot->d_op)
mntroot->d_op = server->nfs_client->rpc_ops->dentry_ops;
return mntroot;
}
#ifdef CONFIG_NFS_V4
/*
* Do a simple pathwalk from the root FH of the server to the nominated target
* of the mountpoint
* - give error on symlinks
* - give error on ".." occurring in the path
* - follow traversals
*/
int nfs4_path_walk(struct nfs_server *server,
struct nfs_fh *mntfh,
const char *path)
{
struct nfs_fsinfo fsinfo;
struct nfs_fattr fattr;
struct nfs_fh lastfh;
struct qstr name;
int ret;
//int referral_count = 0;
dprintk("--> nfs4_path_walk(,,%s)\n", path);
fsinfo.fattr = &fattr;
nfs_fattr_init(&fattr);
if (*path++ != '/') {
dprintk("nfs4_get_root: Path does not begin with a slash\n");
return -EINVAL;
}
/* Start by getting the root filehandle from the server */
ret = server->nfs_client->rpc_ops->getroot(server, mntfh, &fsinfo);
if (ret < 0) {
dprintk("nfs4_get_root: getroot error = %d\n", -ret);
return ret;
}
if (fattr.type != NFDIR) {
printk(KERN_ERR "nfs4_get_root:"
" getroot encountered non-directory\n");
return -ENOTDIR;
}
if (fattr.valid & NFS_ATTR_FATTR_V4_REFERRAL) {
printk(KERN_ERR "nfs4_get_root:"
" getroot obtained referral\n");
return -EREMOTE;
}
next_component:
dprintk("Next: %s\n", path);
/* extract the next bit of the path */
if (!*path)
goto path_walk_complete;
name.name = path;
while (*path && *path != '/')
path++;
name.len = path - (const char *) name.name;
eat_dot_dir:
while (*path == '/')
path++;
if (path[0] == '.' && (path[1] == '/' || !path[1])) {
path += 2;
goto eat_dot_dir;
}
if (path[0] == '.' && path[1] == '.' && (path[2] == '/' || !path[2])) {
printk(KERN_ERR "nfs4_get_root:"
" Mount path contains reference to \"..\"\n");
return -EINVAL;
}
/* lookup the next FH in the sequence */
memcpy(&lastfh, mntfh, sizeof(lastfh));
dprintk("LookupFH: %*.*s [%s]\n", name.len, name.len, name.name, path);
ret = server->nfs_client->rpc_ops->lookupfh(server, &lastfh, &name,
mntfh, &fattr);
if (ret < 0) {
dprintk("nfs4_get_root: getroot error = %d\n", -ret);
return ret;
}
if (fattr.type != NFDIR) {
printk(KERN_ERR "nfs4_get_root:"
" lookupfh encountered non-directory\n");
return -ENOTDIR;
}
if (fattr.valid & NFS_ATTR_FATTR_V4_REFERRAL) {
printk(KERN_ERR "nfs4_get_root:"
" lookupfh obtained referral\n");
return -EREMOTE;
}
goto next_component;
path_walk_complete:
memcpy(&server->fsid, &fattr.fsid, sizeof(server->fsid));
dprintk("<-- nfs4_path_walk() = 0\n");
return 0;
}
/*
* get an NFS4 root dentry from the root filehandle
*/
struct dentry *nfs4_get_root(struct super_block *sb, struct nfs_fh *mntfh)
{
struct nfs_server *server = NFS_SB(sb);
struct nfs_fattr fattr;
struct dentry *mntroot;
struct inode *inode;
int error;
dprintk("--> nfs4_get_root()\n");
/* create a dummy root dentry with dummy inode for this superblock */
if (!sb->s_root) {
struct nfs_fh dummyfh;
struct dentry *root;
struct inode *iroot;
memset(&dummyfh, 0, sizeof(dummyfh));
memset(&fattr, 0, sizeof(fattr));
nfs_fattr_init(&fattr);
fattr.valid = NFS_ATTR_FATTR;
fattr.type = NFDIR;
fattr.mode = S_IFDIR | S_IRUSR | S_IWUSR;
fattr.nlink = 2;
iroot = nfs_fhget(sb, &dummyfh, &fattr);
if (IS_ERR(iroot))
return ERR_PTR(PTR_ERR(iroot));
root = d_alloc_root(iroot);
if (!root) {
iput(iroot);
return ERR_PTR(-ENOMEM);
}
sb->s_root = root;
}
/* get the info about the server and filesystem */
error = nfs4_server_capabilities(server, mntfh);
if (error < 0) {
dprintk("nfs_get_root: getcaps error = %d\n",
-error);
return ERR_PTR(error);
}
/* get the actual root for this mount */
error = server->nfs_client->rpc_ops->getattr(server, mntfh, &fattr);
if (error < 0) {
dprintk("nfs_get_root: getattr error = %d\n", -error);
return ERR_PTR(error);
}
inode = nfs_fhget(sb, mntfh, &fattr);
if (IS_ERR(inode)) {
dprintk("nfs_get_root: get root inode failed\n");
return ERR_PTR(PTR_ERR(inode));
}
/* root dentries normally start off anonymous and get spliced in later
* if the dentry tree reaches them; however, if the dentry already
* exists, we'll pick it up at this point and use it as the root
*/
mntroot = d_alloc_anon(inode);
if (!mntroot) {
iput(inode);
dprintk("nfs_get_root: get root dentry failed\n");
return ERR_PTR(-ENOMEM);
}
security_d_instantiate(mntroot, inode);
if (!mntroot->d_op)
mntroot->d_op = server->nfs_client->rpc_ops->dentry_ops;
dprintk("<-- nfs4_get_root()\n");
return mntroot;
}
#endif /* CONFIG_NFS_V4 */
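
nfs4_path_walk() above looks up one path component at a time so that "." can be skipped and ".." rejected before any filehandle lookup is issued. Below is a rough user-space sketch of that tokenising loop; walk_path() and its printf stand in for the per-component LOOKUP, and the exact "." handling is simplified compared with the kernel function.

/* Sketch of component-at-a-time path walking with "." skipped and ".." refused. */
#include <stdio.h>
#include <string.h>

static int walk_path(const char *path)
{
    if (*path++ != '/')
        return -1;                      /* must be an absolute path */

    while (*path) {
        const char *start;
        size_t len;

        while (*path == '/')            /* collapse duplicate slashes */
            path++;
        if (!*path)
            break;

        start = path;
        while (*path && *path != '/')
            path++;
        len = (size_t)(path - start);

        if (len == 1 && start[0] == '.')
            continue;                   /* skip "." components */
        if (len == 2 && start[0] == '.' && start[1] == '.')
            return -1;                  /* refuse "..", like nfs4_path_walk() */

        printf("lookup component: %.*s\n", (int)len, start);
    }
    return 0;
}

int main(void)
{
    printf("good path -> %d\n", walk_path("/export/./home/user"));
    printf("bad path  -> %d\n", walk_path("/export/../etc"));
    return 0;
}
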


@ -57,6 +57,20 @@
/* Default cache timeout is 10 minutes */
unsigned int nfs_idmap_cache_timeout = 600 * HZ;
static int param_set_idmap_timeout(const char *val, struct kernel_param *kp)
{
char *endp;
int num = simple_strtol(val, &endp, 0);
int jif = num * HZ;
if (endp == val || *endp || num < 0 || jif < num)
return -EINVAL;
*((int *)kp->arg) = jif;
return 0;
}
module_param_call(idmap_cache_timeout, param_set_idmap_timeout, param_get_int,
&nfs_idmap_cache_timeout, 0644);
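
param_set_idmap_timeout() above converts a user-supplied number of seconds to jiffies, rejecting trailing junk, negative values and multiplications that overflow (the kernel checks jif < num). The stand-alone sketch below does the same validation but bounds the input against INT_MAX/HZ instead, since signed overflow is undefined in portable C; HZ is assumed to be 100 purely for illustration.

/* Stand-alone sketch of the seconds-to-jiffies parameter validation. */
#include <errno.h>
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>

#define HZ 100

static int parse_timeout(const char *val, int *jiffies_out)
{
    char *endp;
    long num = strtol(val, &endp, 0);

    if (endp == val || *endp != '\0')       /* not a number / trailing junk */
        return -EINVAL;
    if (num < 0 || num > INT_MAX / HZ)      /* negative or would overflow */
        return -EINVAL;

    *jiffies_out = (int)(num * HZ);
    return 0;
}

int main(void)
{
    int jif;

    if (parse_timeout("600", &jif) == 0)
        printf("600s -> %d jiffies\n", jif);
    if (parse_timeout("999999999999", &jif) != 0)
        printf("overflowing value rejected\n");
    return 0;
}
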
struct idmap_hashent {
unsigned long ih_expires;
__u32 ih_id;
@ -70,7 +84,6 @@ struct idmap_hashtable {
};
struct idmap {
char idmap_path[48];
struct dentry *idmap_dentry;
wait_queue_head_t idmap_wq;
struct idmap_msg idmap_im;
@ -94,24 +107,23 @@ static struct rpc_pipe_ops idmap_upcall_ops = {
.destroy_msg = idmap_pipe_destroy_msg,
};
void
nfs_idmap_new(struct nfs4_client *clp)
int
nfs_idmap_new(struct nfs_client *clp)
{
struct idmap *idmap;
int error;
BUG_ON(clp->cl_idmap != NULL);
if (clp->cl_idmap != NULL)
return;
if ((idmap = kzalloc(sizeof(*idmap), GFP_KERNEL)) == NULL)
return;
return -ENOMEM;
snprintf(idmap->idmap_path, sizeof(idmap->idmap_path),
"%s/idmap", clp->cl_rpcclient->cl_pathname);
idmap->idmap_dentry = rpc_mkpipe(idmap->idmap_path,
idmap->idmap_dentry = rpc_mkpipe(clp->cl_rpcclient->cl_dentry, "idmap",
idmap, &idmap_upcall_ops, 0);
if (IS_ERR(idmap->idmap_dentry)) {
error = PTR_ERR(idmap->idmap_dentry);
kfree(idmap);
return;
return error;
}
mutex_init(&idmap->idmap_lock);
@ -121,10 +133,11 @@ nfs_idmap_new(struct nfs4_client *clp)
idmap->idmap_group_hash.h_type = IDMAP_TYPE_GROUP;
clp->cl_idmap = idmap;
return 0;
}
void
nfs_idmap_delete(struct nfs4_client *clp)
nfs_idmap_delete(struct nfs_client *clp)
{
struct idmap *idmap = clp->cl_idmap;
@ -477,27 +490,27 @@ static unsigned int fnvhash32(const void *buf, size_t buflen)
return (hash);
}
int nfs_map_name_to_uid(struct nfs4_client *clp, const char *name, size_t namelen, __u32 *uid)
int nfs_map_name_to_uid(struct nfs_client *clp, const char *name, size_t namelen, __u32 *uid)
{
struct idmap *idmap = clp->cl_idmap;
return nfs_idmap_id(idmap, &idmap->idmap_user_hash, name, namelen, uid);
}
int nfs_map_group_to_gid(struct nfs4_client *clp, const char *name, size_t namelen, __u32 *uid)
int nfs_map_group_to_gid(struct nfs_client *clp, const char *name, size_t namelen, __u32 *uid)
{
struct idmap *idmap = clp->cl_idmap;
return nfs_idmap_id(idmap, &idmap->idmap_group_hash, name, namelen, uid);
}
int nfs_map_uid_to_name(struct nfs4_client *clp, __u32 uid, char *buf)
int nfs_map_uid_to_name(struct nfs_client *clp, __u32 uid, char *buf)
{
struct idmap *idmap = clp->cl_idmap;
return nfs_idmap_name(idmap, &idmap->idmap_user_hash, uid, buf);
}
int nfs_map_gid_to_group(struct nfs4_client *clp, __u32 uid, char *buf)
int nfs_map_gid_to_group(struct nfs_client *clp, __u32 uid, char *buf)
{
struct idmap *idmap = clp->cl_idmap;


@ -76,19 +76,14 @@ int nfs_write_inode(struct inode *inode, int sync)
void nfs_clear_inode(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
struct rpc_cred *cred;
/*
* The following should never happen...
*/
BUG_ON(nfs_have_writebacks(inode));
BUG_ON (!list_empty(&nfsi->open_files));
BUG_ON(!list_empty(&NFS_I(inode)->open_files));
BUG_ON(atomic_read(&NFS_I(inode)->data_updates) != 0);
nfs_zap_acl_cache(inode);
cred = nfsi->cache_access.cred;
if (cred)
put_rpccred(cred);
BUG_ON(atomic_read(&nfsi->data_updates) != 0);
nfs_access_zap_cache(inode);
}
/**
@ -242,13 +237,13 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
/* Why so? Because we want revalidate for devices/FIFOs, and
* that's precisely what we have in nfs_file_inode_operations.
*/
inode->i_op = NFS_SB(sb)->rpc_ops->file_inode_ops;
inode->i_op = NFS_SB(sb)->nfs_client->rpc_ops->file_inode_ops;
if (S_ISREG(inode->i_mode)) {
inode->i_fop = &nfs_file_operations;
inode->i_data.a_ops = &nfs_file_aops;
inode->i_data.backing_dev_info = &NFS_SB(sb)->backing_dev_info;
} else if (S_ISDIR(inode->i_mode)) {
inode->i_op = NFS_SB(sb)->rpc_ops->dir_inode_ops;
inode->i_op = NFS_SB(sb)->nfs_client->rpc_ops->dir_inode_ops;
inode->i_fop = &nfs_dir_operations;
if (nfs_server_capable(inode, NFS_CAP_READDIRPLUS)
&& fattr->size <= NFS_LIMIT_READDIRPLUS)
@ -290,7 +285,7 @@ nfs_fhget(struct super_block *sb, struct nfs_fh *fh, struct nfs_fattr *fattr)
nfsi->attrtimeo = NFS_MINATTRTIMEO(inode);
nfsi->attrtimeo_timestamp = jiffies;
memset(nfsi->cookieverf, 0, sizeof(nfsi->cookieverf));
nfsi->cache_access.cred = NULL;
nfsi->access_cache = RB_ROOT;
unlock_new_inode(inode);
} else
@ -722,13 +717,11 @@ void nfs_end_data_update(struct inode *inode)
{
struct nfs_inode *nfsi = NFS_I(inode);
if (!nfs_have_delegation(inode, FMODE_READ)) {
/* Directories and symlinks: invalidate page cache */
if (S_ISDIR(inode->i_mode) || S_ISLNK(inode->i_mode)) {
spin_lock(&inode->i_lock);
nfsi->cache_validity |= NFS_INO_INVALID_DATA;
spin_unlock(&inode->i_lock);
}
/* Directories: invalidate page cache */
if (S_ISDIR(inode->i_mode)) {
spin_lock(&inode->i_lock);
nfsi->cache_validity |= NFS_INO_INVALID_DATA;
spin_unlock(&inode->i_lock);
}
nfsi->cache_change_attribute = jiffies;
atomic_dec(&nfsi->data_updates);
@ -847,6 +840,12 @@ int nfs_refresh_inode(struct inode *inode, struct nfs_fattr *fattr)
*
* After an operation that has changed the inode metadata, mark the
* attribute cache as being invalid, then try to update it.
*
* NB: if the server didn't return any post op attributes, this
* function will force the retrieval of attributes before the next
* NFS request. Thus it should be used only for operations that
* are expected to change one or more attributes, to avoid
* unnecessary NFS requests and trips through nfs_update_inode().
*/
int nfs_post_op_update_inode(struct inode *inode, struct nfs_fattr *fattr)
{
@ -1025,7 +1024,7 @@ static int nfs_update_inode(struct inode *inode, struct nfs_fattr *fattr)
out_fileid:
printk(KERN_ERR "NFS: server %s error: fileid changed\n"
"fsid %s: expected fileid 0x%Lx, got 0x%Lx\n",
NFS_SERVER(inode)->hostname, inode->i_sb->s_id,
NFS_SERVER(inode)->nfs_client->cl_hostname, inode->i_sb->s_id,
(long long)nfsi->fileid, (long long)fattr->fileid);
goto out_err;
}
@ -1109,6 +1108,8 @@ static void init_once(void * foo, kmem_cache_t * cachep, unsigned long flags)
INIT_LIST_HEAD(&nfsi->dirty);
INIT_LIST_HEAD(&nfsi->commit);
INIT_LIST_HEAD(&nfsi->open_files);
INIT_LIST_HEAD(&nfsi->access_cache_entry_lru);
INIT_LIST_HEAD(&nfsi->access_cache_inode_lru);
INIT_RADIX_TREE(&nfsi->nfs_page_tree, GFP_ATOMIC);
atomic_set(&nfsi->data_updates, 0);
nfsi->ndirty = 0;
@ -1144,6 +1145,10 @@ static int __init init_nfs_fs(void)
{
int err;
err = nfs_fs_proc_init();
if (err)
goto out5;
err = nfs_init_nfspagecache();
if (err)
goto out4;
@ -1184,6 +1189,8 @@ out2:
out3:
nfs_destroy_nfspagecache();
out4:
nfs_fs_proc_exit();
out5:
return err;
}
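
init_nfs_fs() above now registers the /proc entries first and threads every failure through a chain of labels (out5, out4, ...) so that each completed step is undone in reverse order. The compact sketch below shows that unwind idiom on its own; setup_a/setup_b/setup_c and the teardown helpers are invented stand-ins.

/* Sketch of goto-based error unwinding: each successful step gets a label
 * that undoes it if a later step fails.
 */
#include <stdio.h>

static int  setup_a(void)    { puts("setup a");          return 0;  }
static void teardown_a(void) { puts("teardown a");                  }
static int  setup_b(void)    { puts("setup b");          return 0;  }
static void teardown_b(void) { puts("teardown b");                  }
static int  setup_c(void)    { puts("setup c (fails)");  return -1; }

static int init_module_sketch(void)
{
    int err;

    err = setup_a();
    if (err)
        goto out_none;
    err = setup_b();
    if (err)
        goto out_undo_a;
    err = setup_c();
    if (err)
        goto out_undo_b;
    return 0;

out_undo_b:
    teardown_b();
out_undo_a:
    teardown_a();
out_none:
    return err;
}

int main(void)
{
    printf("init -> %d\n", init_module_sketch());
    return 0;
}
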
@ -1198,6 +1205,7 @@ static void __exit exit_nfs_fs(void)
rpc_proc_unregister("nfs");
#endif
unregister_nfs_fs();
nfs_fs_proc_exit();
}
/* Not quite true; I just maintain it */


@ -4,6 +4,18 @@
#include <linux/mount.h>
struct nfs_string;
struct nfs_mount_data;
struct nfs4_mount_data;
/* Maximum number of readahead requests
* FIXME: this should really be a sysctl so that users may tune it to suit
* their needs. People who use NFS over a slow network might, for
* instance, want to reduce it to something closer to 1 for improved
* interactive response.
*/
#define NFS_MAX_READAHEAD (RPC_DEF_SLOT_TABLE - 1)
struct nfs_clone_mount {
const struct super_block *sb;
const struct dentry *dentry;
@ -15,7 +27,40 @@ struct nfs_clone_mount {
rpc_authflavor_t authflavor;
};
/* namespace-nfs4.c */
/* client.c */
extern struct rpc_program nfs_program;
extern void nfs_put_client(struct nfs_client *);
extern struct nfs_client *nfs_find_client(const struct sockaddr_in *, int);
extern struct nfs_server *nfs_create_server(const struct nfs_mount_data *,
struct nfs_fh *);
extern struct nfs_server *nfs4_create_server(const struct nfs4_mount_data *,
const char *,
const struct sockaddr_in *,
const char *,
const char *,
rpc_authflavor_t,
struct nfs_fh *);
extern struct nfs_server *nfs4_create_referral_server(struct nfs_clone_mount *,
struct nfs_fh *);
extern void nfs_free_server(struct nfs_server *server);
extern struct nfs_server *nfs_clone_server(struct nfs_server *,
struct nfs_fh *,
struct nfs_fattr *);
#ifdef CONFIG_PROC_FS
extern int __init nfs_fs_proc_init(void);
extern void nfs_fs_proc_exit(void);
#else
static inline int nfs_fs_proc_init(void)
{
return 0;
}
static inline void nfs_fs_proc_exit(void)
{
}
#endif
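
When CONFIG_PROC_FS is disabled, the header above substitutes static inline no-ops for nfs_fs_proc_init()/nfs_fs_proc_exit(), so callers never need #ifdefs of their own. The same pattern in a free-standing form is sketched below; HAVE_STATS, stats_init() and stats_exit() are invented names.

/* The conditional-stub pattern: callers always call stats_init()/stats_exit(),
 * and the header decides whether those calls do anything.
 */
#include <stdio.h>

/* #define HAVE_STATS 1 */              /* flip this to pull in the real code */

#ifdef HAVE_STATS
extern int stats_init(void);
extern void stats_exit(void);
#else
static inline int stats_init(void) { return 0; }
static inline void stats_exit(void) { }
#endif

int main(void)
{
    if (stats_init() != 0)
        return 1;
    puts("running with or without stats support");
    stats_exit();
    return 0;
}
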
/* nfs4namespace.c */
#ifdef CONFIG_NFS_V4
extern struct vfsmount *nfs_do_refmount(const struct vfsmount *mnt_parent, struct dentry *dentry);
#else
@ -46,6 +91,7 @@ extern void nfs_destroy_directcache(void);
#endif
/* nfs2xdr.c */
extern int nfs_stat_to_errno(int);
extern struct rpc_procinfo nfs_procedures[];
extern u32 * nfs_decode_dirent(u32 *, struct nfs_entry *, int);
@ -54,8 +100,9 @@ extern struct rpc_procinfo nfs3_procedures[];
extern u32 *nfs3_decode_dirent(u32 *, struct nfs_entry *, int);
/* nfs4xdr.c */
extern int nfs_stat_to_errno(int);
#ifdef CONFIG_NFS_V4
extern u32 *nfs4_decode_dirent(u32 *p, struct nfs_entry *entry, int plus);
#endif
/* nfs4proc.c */
#ifdef CONFIG_NFS_V4
@ -66,6 +113,9 @@ extern int nfs4_proc_fs_locations(struct inode *dir, struct dentry *dentry,
struct page *page);
#endif
/* dir.c */
extern int nfs_access_cache_shrinker(int nr_to_scan, gfp_t gfp_mask);
/* inode.c */
extern struct inode *nfs_alloc_inode(struct super_block *sb);
extern void nfs_destroy_inode(struct inode *);
@ -76,10 +126,10 @@ extern void nfs4_clear_inode(struct inode *);
#endif
/* super.c */
extern struct file_system_type nfs_referral_nfs4_fs_type;
extern struct file_system_type clone_nfs_fs_type;
extern struct file_system_type nfs_xdev_fs_type;
#ifdef CONFIG_NFS_V4
extern struct file_system_type clone_nfs4_fs_type;
extern struct file_system_type nfs4_xdev_fs_type;
extern struct file_system_type nfs4_referral_fs_type;
#endif
extern struct rpc_stat nfs_rpcstat;
@ -88,30 +138,30 @@ extern int __init register_nfs_fs(void);
extern void __exit unregister_nfs_fs(void);
/* namespace.c */
extern char *nfs_path(const char *base, const struct dentry *dentry,
extern char *nfs_path(const char *base,
const struct dentry *droot,
const struct dentry *dentry,
char *buffer, ssize_t buflen);
/*
* Determine the mount path as a string
*/
static inline char *
nfs4_path(const struct dentry *dentry, char *buffer, ssize_t buflen)
{
/* getroot.c */
extern struct dentry *nfs_get_root(struct super_block *, struct nfs_fh *);
#ifdef CONFIG_NFS_V4
return nfs_path(NFS_SB(dentry->d_sb)->mnt_path, dentry, buffer, buflen);
#else
return NULL;
extern struct dentry *nfs4_get_root(struct super_block *, struct nfs_fh *);
extern int nfs4_path_walk(struct nfs_server *server,
struct nfs_fh *mntfh,
const char *path);
#endif
}
/*
* Determine the device name as a string
*/
static inline char *nfs_devname(const struct vfsmount *mnt_parent,
const struct dentry *dentry,
char *buffer, ssize_t buflen)
const struct dentry *dentry,
char *buffer, ssize_t buflen)
{
return nfs_path(mnt_parent->mnt_devname, dentry, buffer, buflen);
return nfs_path(mnt_parent->mnt_devname, mnt_parent->mnt_root,
dentry, buffer, buflen);
}
/*
@ -167,20 +217,3 @@ void nfs_super_set_maxbytes(struct super_block *sb, __u64 maxfilesize)
if (sb->s_maxbytes > MAX_LFS_FILESIZE || sb->s_maxbytes <= 0)
sb->s_maxbytes = MAX_LFS_FILESIZE;
}
/*
* Check if the string represents a "valid" IPv4 address
*/
static inline int valid_ipaddr4(const char *buf)
{
int rc, count, in[4];
rc = sscanf(buf, "%d.%d.%d.%d", &in[0], &in[1], &in[2], &in[3]);
if (rc != 4)
return -EINVAL;
for (count = 0; count < 4; count++) {
if (in[count] > 255)
return -EINVAL;
}
return 0;
}


@ -14,7 +14,6 @@
#include <linux/net.h>
#include <linux/in.h>
#include <linux/sunrpc/clnt.h>
#include <linux/sunrpc/xprt.h>
#include <linux/sunrpc/sched.h>
#include <linux/nfs_fs.h>
@ -77,22 +76,19 @@ static struct rpc_clnt *
mnt_create(char *hostname, struct sockaddr_in *srvaddr, int version,
int protocol)
{
struct rpc_xprt *xprt;
struct rpc_clnt *clnt;
struct rpc_create_args args = {
.protocol = protocol,
.address = (struct sockaddr *)srvaddr,
.addrsize = sizeof(*srvaddr),
.servername = hostname,
.program = &mnt_program,
.version = version,
.authflavor = RPC_AUTH_UNIX,
.flags = (RPC_CLNT_CREATE_ONESHOT |
RPC_CLNT_CREATE_INTR),
};
xprt = xprt_create_proto(protocol, srvaddr, NULL);
if (IS_ERR(xprt))
return (struct rpc_clnt *)xprt;
clnt = rpc_create_client(xprt, hostname,
&mnt_program, version,
RPC_AUTH_UNIX);
if (!IS_ERR(clnt)) {
clnt->cl_softrtry = 1;
clnt->cl_oneshot = 1;
clnt->cl_intr = 1;
}
return clnt;
return rpc_create(&args);
}
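
mnt_create() now fills in a single rpc_create_args structure with designated initializers and hands it to rpc_create(), instead of building the transport and the client in two steps. The shape of that calling convention, reduced to plain user-space C, is sketched below; connection_args, connection_create() and their fields are invented for illustration.

/* Sketch of the "argument structure + designated initializers" convention. */
#include <stdio.h>

struct connection_args {
    const char  *server;
    int          port;
    int          protocol;              /* 0 = udp, 1 = tcp */
    unsigned int flags;
};

#define CONN_ONESHOT 0x1

static int connection_create(const struct connection_args *args)
{
    printf("connect to %s:%d proto=%d flags=%#x\n",
           args->server, args->port, args->protocol, args->flags);
    return 0;
}

int main(void)
{
    struct connection_args args = {
        .server   = "example.org",
        .port     = 2049,
        .protocol = 1,
        .flags    = CONN_ONESHOT,
    };

    return connection_create(&args);
}
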
/*


@ -2,6 +2,7 @@
* linux/fs/nfs/namespace.c
*
* Copyright (C) 2005 Trond Myklebust <Trond.Myklebust@netapp.com>
* - Modified by David Howells <dhowells@redhat.com>
*
* NFS namespace
*/
@ -28,6 +29,7 @@ int nfs_mountpoint_expiry_timeout = 500 * HZ;
/*
* nfs_path - reconstruct the path given an arbitrary dentry
* @base - arbitrary string to prepend to the path
* @droot - pointer to root dentry for mountpoint
* @dentry - pointer to dentry
* @buffer - result buffer
* @buflen - length of buffer
@ -38,7 +40,9 @@ int nfs_mountpoint_expiry_timeout = 500 * HZ;
* This is mainly for use in figuring out the path on the
* server side when automounting on top of an existing partition.
*/
char *nfs_path(const char *base, const struct dentry *dentry,
char *nfs_path(const char *base,
const struct dentry *droot,
const struct dentry *dentry,
char *buffer, ssize_t buflen)
{
char *end = buffer+buflen;
@ -47,7 +51,7 @@ char *nfs_path(const char *base, const struct dentry *dentry,
*--end = '\0';
buflen--;
spin_lock(&dcache_lock);
while (!IS_ROOT(dentry)) {
while (!IS_ROOT(dentry) && dentry != droot) {
namelen = dentry->d_name.len;
buflen -= namelen + 1;
if (buflen < 0)
@ -96,15 +100,18 @@ static void * nfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd)
struct nfs_fattr fattr;
int err;
dprintk("--> nfs_follow_mountpoint()\n");
BUG_ON(IS_ROOT(dentry));
dprintk("%s: enter\n", __FUNCTION__);
dput(nd->dentry);
nd->dentry = dget(dentry);
if (d_mountpoint(nd->dentry))
goto out_follow;
/* Look it up again */
parent = dget_parent(nd->dentry);
err = server->rpc_ops->lookup(parent->d_inode, &nd->dentry->d_name, &fh, &fattr);
err = server->nfs_client->rpc_ops->lookup(parent->d_inode,
&nd->dentry->d_name,
&fh, &fattr);
dput(parent);
if (err != 0)
goto out_err;
@ -132,6 +139,8 @@ static void * nfs_follow_mountpoint(struct dentry *dentry, struct nameidata *nd)
schedule_delayed_work(&nfs_automount_task, nfs_mountpoint_expiry_timeout);
out:
dprintk("%s: done, returned %d\n", __FUNCTION__, err);
dprintk("<-- nfs_follow_mountpoint() = %d\n", err);
return ERR_PTR(err);
out_err:
path_release(nd);
@ -172,22 +181,23 @@ void nfs_release_automount_timer(void)
/*
* Clone a mountpoint of the appropriate type
*/
static struct vfsmount *nfs_do_clone_mount(struct nfs_server *server, char *devname,
static struct vfsmount *nfs_do_clone_mount(struct nfs_server *server,
const char *devname,
struct nfs_clone_mount *mountdata)
{
#ifdef CONFIG_NFS_V4
struct vfsmount *mnt = NULL;
switch (server->rpc_ops->version) {
switch (server->nfs_client->cl_nfsversion) {
case 2:
case 3:
mnt = vfs_kern_mount(&clone_nfs_fs_type, 0, devname, mountdata);
mnt = vfs_kern_mount(&nfs_xdev_fs_type, 0, devname, mountdata);
break;
case 4:
mnt = vfs_kern_mount(&clone_nfs4_fs_type, 0, devname, mountdata);
mnt = vfs_kern_mount(&nfs4_xdev_fs_type, 0, devname, mountdata);
}
return mnt;
#else
return vfs_kern_mount(&clone_nfs_fs_type, 0, devname, mountdata);
return vfs_kern_mount(&nfs_xdev_fs_type, 0, devname, mountdata);
#endif
}
@ -213,6 +223,8 @@ struct vfsmount *nfs_do_submount(const struct vfsmount *mnt_parent,
char *page = (char *) __get_free_page(GFP_USER);
char *devname;
dprintk("--> nfs_do_submount()\n");
dprintk("%s: submounting on %s/%s\n", __FUNCTION__,
dentry->d_parent->d_name.name,
dentry->d_name.name);
@ -227,5 +239,7 @@ free_page:
free_page((unsigned long)page);
out:
dprintk("%s: done\n", __FUNCTION__);
dprintk("<-- nfs_do_submount() = %p\n", mnt);
return mnt;
}
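
nfs_path() earlier in this file rebuilds a pathname by walking from the dentry towards the (sub)mount root and prepending each name, which is why it writes into the buffer from the end. The user-space sketch below shows the same right-to-left assembly over a plain array of components; build_path() and its arguments are invented for the example.

/* Sketch of right-to-left path assembly: names are prepended into the tail
 * of the buffer while walking towards the root.
 */
#include <stdio.h>
#include <string.h>

static char *build_path(const char *base, const char *names[], int depth,
                        char *buffer, size_t buflen)
{
    char *end = buffer + buflen;
    int i;

    *--end = '\0';
    /* walk "up" from the leaf: the last array element is the leaf name */
    for (i = depth - 1; i >= 0; i--) {
        size_t len = strlen(names[i]);

        if ((size_t)(end - buffer) < len + 1)
            return NULL;                /* buffer too small */
        end -= len;
        memcpy(end, names[i], len);
        *--end = '/';
    }
    if ((size_t)(end - buffer) < strlen(base))
        return NULL;
    end -= strlen(base);
    memcpy(end, base, strlen(base));
    return end;
}

int main(void)
{
    const char *names[] = { "export", "home", "user" };
    char buf[64];
    char *p = build_path("server:", names, 3, buf, sizeof(buf));

    if (p)
        printf("%s\n", p);              /* server:/export/home/user */
    return 0;
}
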


@ -51,7 +51,7 @@
#define NFS_createargs_sz (NFS_diropargs_sz+NFS_sattr_sz)
#define NFS_renameargs_sz (NFS_diropargs_sz+NFS_diropargs_sz)
#define NFS_linkargs_sz (NFS_fhandle_sz+NFS_diropargs_sz)
#define NFS_symlinkargs_sz (NFS_diropargs_sz+NFS_path_sz+NFS_sattr_sz)
#define NFS_symlinkargs_sz (NFS_diropargs_sz+1+NFS_sattr_sz)
#define NFS_readdirargs_sz (NFS_fhandle_sz+2)
#define NFS_attrstat_sz (1+NFS_fattr_sz)
@ -351,11 +351,26 @@ nfs_xdr_linkargs(struct rpc_rqst *req, u32 *p, struct nfs_linkargs *args)
static int
nfs_xdr_symlinkargs(struct rpc_rqst *req, u32 *p, struct nfs_symlinkargs *args)
{
struct xdr_buf *sndbuf = &req->rq_snd_buf;
size_t pad;
p = xdr_encode_fhandle(p, args->fromfh);
p = xdr_encode_array(p, args->fromname, args->fromlen);
p = xdr_encode_array(p, args->topath, args->tolen);
*p++ = htonl(args->pathlen);
sndbuf->len = xdr_adjust_iovec(sndbuf->head, p);
xdr_encode_pages(sndbuf, args->pages, 0, args->pathlen);
/*
* xdr_encode_pages may have added a few bytes to ensure the
* pathname ends on a 4-byte boundary. Start encoding the
* attributes after the pad bytes.
*/
pad = sndbuf->tail->iov_len;
if (pad > 0)
p++;
p = xdr_encode_sattr(p, args->sattr);
req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
sndbuf->len += xdr_adjust_iovec(sndbuf->tail, p) - pad;
return 0;
}
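
The rewritten nfs_xdr_symlinkargs() sends the symlink target from a page and lets xdr_encode_pages() pad it out to the 4-byte boundary that XDR requires, which is why the sattr that follows has to be encoded after those pad bytes. The padding arithmetic itself is just round-up-to-4, as this small stand-alone sketch shows.

/* XDR opaque data is padded to a multiple of 4 bytes; print the pad size
 * and on-the-wire length for a few pathname lengths.
 */
#include <stdio.h>

static unsigned int xdr_pad(unsigned int len)
{
    return (4 - (len & 3)) & 3;         /* 0..3 pad bytes */
}

int main(void)
{
    unsigned int lens[] = { 1, 3, 4, 5, 8, 13 };
    unsigned int i;

    for (i = 0; i < sizeof(lens) / sizeof(lens[0]); i++)
        printf("len %2u -> %u pad byte(s), %u on the wire\n",
               lens[i], xdr_pad(lens[i]), lens[i] + xdr_pad(lens[i]));
    return 0;
}
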


@ -81,7 +81,7 @@ do_proc_get_root(struct rpc_clnt *client, struct nfs_fh *fhandle,
}
/*
* Bare-bones access to getattr: this is for nfs_read_super.
* Bare-bones access to getattr: this is for nfs_get_root/nfs_get_sb
*/
static int
nfs3_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
@ -90,8 +90,8 @@ nfs3_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
int status;
status = do_proc_get_root(server->client, fhandle, info);
if (status && server->client_sys != server->client)
status = do_proc_get_root(server->client_sys, fhandle, info);
if (status && server->nfs_client->cl_rpcclient != server->client)
status = do_proc_get_root(server->nfs_client->cl_rpcclient, fhandle, info);
return status;
}
@ -544,23 +544,23 @@ nfs3_proc_link(struct inode *inode, struct inode *dir, struct qstr *name)
}
static int
nfs3_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
struct iattr *sattr, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
nfs3_proc_symlink(struct inode *dir, struct dentry *dentry, struct page *page,
unsigned int len, struct iattr *sattr)
{
struct nfs_fattr dir_attr;
struct nfs_fh fhandle;
struct nfs_fattr fattr, dir_attr;
struct nfs3_symlinkargs arg = {
.fromfh = NFS_FH(dir),
.fromname = name->name,
.fromlen = name->len,
.topath = path->name,
.tolen = path->len,
.fromname = dentry->d_name.name,
.fromlen = dentry->d_name.len,
.pages = &page,
.pathlen = len,
.sattr = sattr
};
struct nfs3_diropres res = {
.dir_attr = &dir_attr,
.fh = fhandle,
.fattr = fattr
.fh = &fhandle,
.fattr = &fattr
};
struct rpc_message msg = {
.rpc_proc = &nfs3_procedures[NFS3PROC_SYMLINK],
@ -569,13 +569,19 @@ nfs3_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
};
int status;
if (path->len > NFS3_MAXPATHLEN)
if (len > NFS3_MAXPATHLEN)
return -ENAMETOOLONG;
dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
dprintk("NFS call symlink %s\n", dentry->d_name.name);
nfs_fattr_init(&dir_attr);
nfs_fattr_init(fattr);
nfs_fattr_init(&fattr);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
nfs_post_op_update_inode(dir, &dir_attr);
if (status != 0)
goto out;
status = nfs_instantiate(dentry, &fhandle, &fattr);
out:
dprintk("NFS reply symlink: %d\n", status);
return status;
}
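
At the VFS level the change above just means the symlink target now arrives as page data of a given length rather than as a qstr; the target itself remains an uninterpreted, length-bounded byte string. That behaviour is easy to observe from user space with symlink(2)/readlink(2), as in the illustration below (the /tmp path and the target string are arbitrary; this is not NFS client code).

/* User-space view of the operation the NFS symlink path implements. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    const char *link = "/tmp/example-symlink";
    char target[256];
    ssize_t n;

    unlink(link);                       /* ignore errors; it may not exist */
    if (symlink("some/relative/target", link) != 0) {
        perror("symlink");
        return 1;
    }
    n = readlink(link, target, sizeof(target) - 1);
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    target[n] = '\0';                   /* readlink() does not NUL-terminate */
    printf("%s -> %s\n", link, target);
    return 0;
}
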
@ -785,7 +791,7 @@ nfs3_proc_fsinfo(struct nfs_server *server, struct nfs_fh *fhandle,
dprintk("NFS call fsinfo\n");
nfs_fattr_init(info->fattr);
status = rpc_call_sync(server->client_sys, &msg, 0);
status = rpc_call_sync(server->nfs_client->cl_rpcclient, &msg, 0);
dprintk("NFS reply fsinfo: %d\n", status);
return status;
}
@ -886,7 +892,7 @@ nfs3_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
return nlmclnt_proc(filp->f_dentry->d_inode, cmd, fl);
}
struct nfs_rpc_ops nfs_v3_clientops = {
const struct nfs_rpc_ops nfs_v3_clientops = {
.version = 3, /* protocol version */
.dentry_ops = &nfs_dentry_operations,
.dir_inode_ops = &nfs3_dir_inode_operations,


@ -56,7 +56,7 @@
#define NFS3_writeargs_sz (NFS3_fh_sz+5)
#define NFS3_createargs_sz (NFS3_diropargs_sz+NFS3_sattr_sz)
#define NFS3_mkdirargs_sz (NFS3_diropargs_sz+NFS3_sattr_sz)
#define NFS3_symlinkargs_sz (NFS3_diropargs_sz+NFS3_path_sz+NFS3_sattr_sz)
#define NFS3_symlinkargs_sz (NFS3_diropargs_sz+1+NFS3_sattr_sz)
#define NFS3_mknodargs_sz (NFS3_diropargs_sz+2+NFS3_sattr_sz)
#define NFS3_renameargs_sz (NFS3_diropargs_sz+NFS3_diropargs_sz)
#define NFS3_linkargs_sz (NFS3_fh_sz+NFS3_diropargs_sz)
@ -398,8 +398,11 @@ nfs3_xdr_symlinkargs(struct rpc_rqst *req, u32 *p, struct nfs3_symlinkargs *args
p = xdr_encode_fhandle(p, args->fromfh);
p = xdr_encode_array(p, args->fromname, args->fromlen);
p = xdr_encode_sattr(p, args->sattr);
p = xdr_encode_array(p, args->topath, args->tolen);
*p++ = htonl(args->pathlen);
req->rq_slen = xdr_adjust_iovec(req->rq_svec, p);
/* Copy the page */
xdr_encode_pages(&req->rq_snd_buf, args->pages, 0, args->pathlen);
return 0;
}


@ -42,55 +42,6 @@ enum nfs4_client_state {
NFS4CLNT_LEASE_EXPIRED,
};
/*
* The nfs4_client identifies our client state to the server.
*/
struct nfs4_client {
struct list_head cl_servers; /* Global list of servers */
struct in_addr cl_addr; /* Server identifier */
u64 cl_clientid; /* constant */
nfs4_verifier cl_confirm;
unsigned long cl_state;
u32 cl_lockowner_id;
/*
* The following rwsem ensures exclusive access to the server
* while we recover the state following a lease expiration.
*/
struct rw_semaphore cl_sem;
struct list_head cl_delegations;
struct list_head cl_state_owners;
struct list_head cl_unused;
int cl_nunused;
spinlock_t cl_lock;
atomic_t cl_count;
struct rpc_clnt * cl_rpcclient;
struct list_head cl_superblocks; /* List of nfs_server structs */
unsigned long cl_lease_time;
unsigned long cl_last_renewal;
struct work_struct cl_renewd;
struct work_struct cl_recoverd;
struct rpc_wait_queue cl_rpcwaitq;
/* used for the setclientid verifier */
struct timespec cl_boot_time;
/* idmapper */
struct idmap * cl_idmap;
/* Our own IP address, as a null-terminated string.
* This is used to generate the clientid, and the callback address.
*/
char cl_ipaddr[16];
unsigned char cl_id_uniquifier;
};
/*
* struct rpc_sequence ensures that RPC calls are sent in the exact
* order that they appear on the list.
@ -127,7 +78,7 @@ static inline void nfs_confirm_seqid(struct nfs_seqid_counter *seqid, int status
struct nfs4_state_owner {
spinlock_t so_lock;
struct list_head so_list; /* per-clientid list of state_owners */
struct nfs4_client *so_client;
struct nfs_client *so_client;
u32 so_id; /* 32-bit identifier, unique */
atomic_t so_count;
@ -210,10 +161,10 @@ extern ssize_t nfs4_listxattr(struct dentry *, char *, size_t);
/* nfs4proc.c */
extern int nfs4_map_errors(int err);
extern int nfs4_proc_setclientid(struct nfs4_client *, u32, unsigned short, struct rpc_cred *);
extern int nfs4_proc_setclientid_confirm(struct nfs4_client *, struct rpc_cred *);
extern int nfs4_proc_async_renew(struct nfs4_client *, struct rpc_cred *);
extern int nfs4_proc_renew(struct nfs4_client *, struct rpc_cred *);
extern int nfs4_proc_setclientid(struct nfs_client *, u32, unsigned short, struct rpc_cred *);
extern int nfs4_proc_setclientid_confirm(struct nfs_client *, struct rpc_cred *);
extern int nfs4_proc_async_renew(struct nfs_client *, struct rpc_cred *);
extern int nfs4_proc_renew(struct nfs_client *, struct rpc_cred *);
extern int nfs4_do_close(struct inode *inode, struct nfs4_state *state);
extern struct dentry *nfs4_atomic_open(struct inode *, struct dentry *, struct nameidata *);
extern int nfs4_open_revalidate(struct inode *, struct dentry *, int, struct nameidata *);
@ -231,19 +182,14 @@ extern const u32 nfs4_fsinfo_bitmap[2];
extern const u32 nfs4_fs_locations_bitmap[2];
/* nfs4renewd.c */
extern void nfs4_schedule_state_renewal(struct nfs4_client *);
extern void nfs4_schedule_state_renewal(struct nfs_client *);
extern void nfs4_renewd_prepare_shutdown(struct nfs_server *);
extern void nfs4_kill_renewd(struct nfs4_client *);
extern void nfs4_kill_renewd(struct nfs_client *);
extern void nfs4_renew_state(void *);
/* nfs4state.c */
extern void init_nfsv4_state(struct nfs_server *);
extern void destroy_nfsv4_state(struct nfs_server *);
extern struct nfs4_client *nfs4_get_client(struct in_addr *);
extern void nfs4_put_client(struct nfs4_client *clp);
extern struct nfs4_client *nfs4_find_client(struct in_addr *);
struct rpc_cred *nfs4_get_renew_cred(struct nfs4_client *clp);
extern u32 nfs4_alloc_lockowner_id(struct nfs4_client *);
struct rpc_cred *nfs4_get_renew_cred(struct nfs_client *clp);
extern u32 nfs4_alloc_lockowner_id(struct nfs_client *);
extern struct nfs4_state_owner * nfs4_get_state_owner(struct nfs_server *, struct rpc_cred *);
extern void nfs4_put_state_owner(struct nfs4_state_owner *);
@ -252,7 +198,7 @@ extern struct nfs4_state * nfs4_get_open_state(struct inode *, struct nfs4_state
extern void nfs4_put_open_state(struct nfs4_state *);
extern void nfs4_close_state(struct nfs4_state *, mode_t);
extern void nfs4_state_set_mode_locked(struct nfs4_state *, mode_t);
extern void nfs4_schedule_state_recovery(struct nfs4_client *);
extern void nfs4_schedule_state_recovery(struct nfs_client *);
extern void nfs4_put_lock_state(struct nfs4_lock_state *lsp);
extern int nfs4_set_lock_state(struct nfs4_state *state, struct file_lock *fl);
extern void nfs4_copy_stateid(nfs4_stateid *, struct nfs4_state *, fl_owner_t);
@ -276,10 +222,6 @@ extern struct svc_version nfs4_callback_version1;
#else
#define init_nfsv4_state(server) do { } while (0)
#define destroy_nfsv4_state(server) do { } while (0)
#define nfs4_put_state_owner(inode, owner) do { } while (0)
#define nfs4_put_open_state(state) do { } while (0)
#define nfs4_close_state(a, b) do { } while (0)
#endif /* CONFIG_NFS_V4 */


@ -2,6 +2,7 @@
* linux/fs/nfs/nfs4namespace.c
*
* Copyright (C) 2005 Trond Myklebust <Trond.Myklebust@netapp.com>
* - Modified by David Howells <dhowells@redhat.com>
*
* NFSv4 namespace
*/
@ -23,7 +24,7 @@
/*
* Check if fs_root is valid
*/
static inline char *nfs4_pathname_string(struct nfs4_pathname *pathname,
static inline char *nfs4_pathname_string(const struct nfs4_pathname *pathname,
char *buffer, ssize_t buflen)
{
char *end = buffer + buflen;
@ -34,7 +35,7 @@ static inline char *nfs4_pathname_string(struct nfs4_pathname *pathname,
n = pathname->ncomponents;
while (--n >= 0) {
struct nfs4_string *component = &pathname->components[n];
const struct nfs4_string *component = &pathname->components[n];
buflen -= component->len + 1;
if (buflen < 0)
goto Elong;
@ -47,6 +48,68 @@ Elong:
return ERR_PTR(-ENAMETOOLONG);
}
/*
* Determine the mount path as a string
*/
static char *nfs4_path(const struct vfsmount *mnt_parent,
const struct dentry *dentry,
char *buffer, ssize_t buflen)
{
const char *srvpath;
srvpath = strchr(mnt_parent->mnt_devname, ':');
if (srvpath)
srvpath++;
else
srvpath = mnt_parent->mnt_devname;
return nfs_path(srvpath, mnt_parent->mnt_root, dentry, buffer, buflen);
}
/*
* Check that fs_locations::fs_root [RFC3530 6.3] is a prefix for what we
* believe to be the server path to this dentry
*/
static int nfs4_validate_fspath(const struct vfsmount *mnt_parent,
const struct dentry *dentry,
const struct nfs4_fs_locations *locations,
char *page, char *page2)
{
const char *path, *fs_path;
path = nfs4_path(mnt_parent, dentry, page, PAGE_SIZE);
if (IS_ERR(path))
return PTR_ERR(path);
fs_path = nfs4_pathname_string(&locations->fs_path, page2, PAGE_SIZE);
if (IS_ERR(fs_path))
return PTR_ERR(fs_path);
if (strncmp(path, fs_path, strlen(fs_path)) != 0) {
dprintk("%s: path %s does not begin with fsroot %s\n",
__FUNCTION__, path, fs_path);
return -ENOENT;
}
return 0;
}
/*
* Check if the string represents a "valid" IPv4 address
*/
static inline int valid_ipaddr4(const char *buf)
{
int rc, count, in[4];
rc = sscanf(buf, "%d.%d.%d.%d", &in[0], &in[1], &in[2], &in[3]);
if (rc != 4)
return -EINVAL;
for (count = 0; count < 4; count++) {
if (in[count] > 255)
return -EINVAL;
}
return 0;
}
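
valid_ipaddr4() above treats a string as an IPv4 address when sscanf() matches exactly four integers, each no larger than 255. The same check in a stand-alone form is shown below; the explicit lower-bound test is an extra guard added here for the example, the structure otherwise follows the helper.

/* Stand-alone version of the dotted-quad check. */
#include <stdio.h>

static int valid_ipaddr4(const char *buf)
{
    int rc, count, in[4];

    rc = sscanf(buf, "%d.%d.%d.%d", &in[0], &in[1], &in[2], &in[3]);
    if (rc != 4)
        return -1;
    for (count = 0; count < 4; count++) {
        if (in[count] < 0 || in[count] > 255)
            return -1;
    }
    return 0;
}

int main(void)
{
    const char *tests[] = { "192.168.1.10", "256.1.1.1", "10.0.0", "nfs.example.org" };
    unsigned int i;

    for (i = 0; i < sizeof(tests) / sizeof(tests[0]); i++)
        printf("%-16s -> %s\n", tests[i],
               valid_ipaddr4(tests[i]) == 0 ? "address" : "not an address");
    return 0;
}
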
/**
* nfs_follow_referral - set up mountpoint when hitting a referral on moved error
@ -60,7 +123,7 @@ Elong:
*/
static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
const struct dentry *dentry,
struct nfs4_fs_locations *locations)
const struct nfs4_fs_locations *locations)
{
struct vfsmount *mnt = ERR_PTR(-ENOENT);
struct nfs_clone_mount mountdata = {
@ -68,10 +131,9 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
.dentry = dentry,
.authflavor = NFS_SB(mnt_parent->mnt_sb)->client->cl_auth->au_flavor,
};
char *page, *page2;
char *path, *fs_path;
char *page = NULL, *page2 = NULL;
char *devname;
int loc, s;
int loc, s, error;
if (locations == NULL || locations->nlocations <= 0)
goto out;
@ -79,36 +141,30 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
dprintk("%s: referral at %s/%s\n", __FUNCTION__,
dentry->d_parent->d_name.name, dentry->d_name.name);
/* Ensure fs path is a prefix of current dentry path */
page = (char *) __get_free_page(GFP_USER);
if (page == NULL)
if (!page)
goto out;
page2 = (char *) __get_free_page(GFP_USER);
if (page2 == NULL)
if (!page2)
goto out;
path = nfs4_path(dentry, page, PAGE_SIZE);
if (IS_ERR(path))
goto out_free;
fs_path = nfs4_pathname_string(&locations->fs_path, page2, PAGE_SIZE);
if (IS_ERR(fs_path))
goto out_free;
if (strncmp(path, fs_path, strlen(fs_path)) != 0) {
dprintk("%s: path %s does not begin with fsroot %s\n", __FUNCTION__, path, fs_path);
goto out_free;
/* Ensure fs path is a prefix of current dentry path */
error = nfs4_validate_fspath(mnt_parent, dentry, locations, page, page2);
if (error < 0) {
mnt = ERR_PTR(error);
goto out;
}
devname = nfs_devname(mnt_parent, dentry, page, PAGE_SIZE);
if (IS_ERR(devname)) {
mnt = (struct vfsmount *)devname;
goto out_free;
goto out;
}
loc = 0;
while (loc < locations->nlocations && IS_ERR(mnt)) {
struct nfs4_fs_location *location = &locations->locations[loc];
const struct nfs4_fs_location *location = &locations->locations[loc];
char *mnt_path;
if (location == NULL || location->nservers <= 0 ||
@ -140,7 +196,7 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
addr.sin_port = htons(NFS_PORT);
mountdata.addr = &addr;
mnt = vfs_kern_mount(&nfs_referral_nfs4_fs_type, 0, devname, &mountdata);
mnt = vfs_kern_mount(&nfs4_referral_fs_type, 0, devname, &mountdata);
if (!IS_ERR(mnt)) {
break;
}
@ -149,10 +205,9 @@ static struct vfsmount *nfs_follow_referral(const struct vfsmount *mnt_parent,
loc++;
}
out_free:
free_page((unsigned long)page);
free_page((unsigned long)page2);
out:
free_page((unsigned long) page);
free_page((unsigned long) page2);
dprintk("%s: done\n", __FUNCTION__);
return mnt;
}
@ -165,7 +220,7 @@ out:
*/
struct vfsmount *nfs_do_refmount(const struct vfsmount *mnt_parent, struct dentry *dentry)
{
struct vfsmount *mnt = ERR_PTR(-ENOENT);
struct vfsmount *mnt = ERR_PTR(-ENOMEM);
struct dentry *parent;
struct nfs4_fs_locations *fs_locations = NULL;
struct page *page;
@ -183,11 +238,16 @@ struct vfsmount *nfs_do_refmount(const struct vfsmount *mnt_parent, struct dentr
goto out_free;
/* Get locations */
mnt = ERR_PTR(-ENOENT);
parent = dget_parent(dentry);
dprintk("%s: getting locations for %s/%s\n", __FUNCTION__, parent->d_name.name, dentry->d_name.name);
dprintk("%s: getting locations for %s/%s\n",
__FUNCTION__, parent->d_name.name, dentry->d_name.name);
err = nfs4_proc_fs_locations(parent->d_inode, dentry, fs_locations, page);
dput(parent);
if (err != 0 || fs_locations->nlocations <= 0 ||
if (err != 0 ||
fs_locations->nlocations <= 0 ||
fs_locations->fs_path.ncomponents <= 0)
goto out_free;


@ -55,7 +55,7 @@
#define NFSDBG_FACILITY NFSDBG_PROC
#define NFS4_POLL_RETRY_MIN (1*HZ)
#define NFS4_POLL_RETRY_MIN (HZ/10)
#define NFS4_POLL_RETRY_MAX (15*HZ)
struct nfs4_opendata;
@ -64,7 +64,7 @@ static int nfs4_do_fsinfo(struct nfs_server *, struct nfs_fh *, struct nfs_fsinf
static int nfs4_async_handle_error(struct rpc_task *, const struct nfs_server *);
static int _nfs4_proc_access(struct inode *inode, struct nfs_access_entry *entry);
static int nfs4_handle_exception(const struct nfs_server *server, int errorcode, struct nfs4_exception *exception);
static int nfs4_wait_clnt_recover(struct rpc_clnt *clnt, struct nfs4_client *clp);
static int nfs4_wait_clnt_recover(struct rpc_clnt *clnt, struct nfs_client *clp);
/* Prevent leaks of NFSv4 errors into userland */
int nfs4_map_errors(int err)
@ -195,7 +195,7 @@ static void nfs4_setup_readdir(u64 cookie, u32 *verifier, struct dentry *dentry,
static void renew_lease(const struct nfs_server *server, unsigned long timestamp)
{
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
spin_lock(&clp->cl_lock);
if (time_before(clp->cl_last_renewal,timestamp))
clp->cl_last_renewal = timestamp;
@ -252,7 +252,7 @@ static struct nfs4_opendata *nfs4_opendata_alloc(struct dentry *dentry,
atomic_inc(&sp->so_count);
p->o_arg.fh = NFS_FH(dir);
p->o_arg.open_flags = flags,
p->o_arg.clientid = server->nfs4_state->cl_clientid;
p->o_arg.clientid = server->nfs_client->cl_clientid;
p->o_arg.id = sp->so_id;
p->o_arg.name = &dentry->d_name;
p->o_arg.server = server;
@ -550,7 +550,7 @@ int nfs4_open_delegation_recall(struct dentry *dentry, struct nfs4_state *state)
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_EXPIRED:
/* Don't recall a delegation if it was lost */
nfs4_schedule_state_recovery(server->nfs4_state);
nfs4_schedule_state_recovery(server->nfs_client);
return err;
}
err = nfs4_handle_exception(server, err, &exception);
@ -758,7 +758,7 @@ static int _nfs4_proc_open(struct nfs4_opendata *data)
}
nfs_confirm_seqid(&data->owner->so_seqid, 0);
if (!(o_res->f_attr->valid & NFS_ATTR_FATTR))
return server->rpc_ops->getattr(server, &o_res->fh, o_res->f_attr);
return server->nfs_client->rpc_ops->getattr(server, &o_res->fh, o_res->f_attr);
return 0;
}
@ -792,11 +792,18 @@ out:
int nfs4_recover_expired_lease(struct nfs_server *server)
{
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
int ret;
if (test_and_clear_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state))
for (;;) {
ret = nfs4_wait_clnt_recover(server->client, clp);
if (ret != 0)
return ret;
if (!test_and_clear_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state))
break;
nfs4_schedule_state_recovery(clp);
return nfs4_wait_clnt_recover(server->client, clp);
}
return 0;
}
/*
@ -867,7 +874,7 @@ static int _nfs4_open_delegated(struct inode *inode, int flags, struct rpc_cred
{
struct nfs_delegation *delegation;
struct nfs_server *server = NFS_SERVER(inode);
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
struct nfs_inode *nfsi = NFS_I(inode);
struct nfs4_state_owner *sp = NULL;
struct nfs4_state *state = NULL;
@ -953,7 +960,7 @@ static int _nfs4_do_open(struct inode *dir, struct dentry *dentry, int flags, st
struct nfs4_state_owner *sp;
struct nfs4_state *state = NULL;
struct nfs_server *server = NFS_SERVER(dir);
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
struct nfs4_opendata *opendata;
int status;
@ -1133,7 +1140,7 @@ static void nfs4_close_done(struct rpc_task *task, void *data)
break;
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_EXPIRED:
nfs4_schedule_state_recovery(server->nfs4_state);
nfs4_schedule_state_recovery(server->nfs_client);
break;
default:
if (nfs4_async_handle_error(task, server) == -EAGAIN) {
@ -1268,7 +1275,7 @@ nfs4_atomic_open(struct inode *dir, struct dentry *dentry, struct nameidata *nd)
BUG_ON(nd->intent.open.flags & O_CREAT);
}
cred = rpcauth_lookupcred(NFS_SERVER(dir)->client->cl_auth, 0);
cred = rpcauth_lookupcred(NFS_CLIENT(dir)->cl_auth, 0);
if (IS_ERR(cred))
return (struct dentry *)cred;
state = nfs4_do_open(dir, dentry, nd->intent.open.flags, &attr, cred);
@ -1291,7 +1298,7 @@ nfs4_open_revalidate(struct inode *dir, struct dentry *dentry, int openflags, st
struct rpc_cred *cred;
struct nfs4_state *state;
cred = rpcauth_lookupcred(NFS_SERVER(dir)->client->cl_auth, 0);
cred = rpcauth_lookupcred(NFS_CLIENT(dir)->cl_auth, 0);
if (IS_ERR(cred))
return PTR_ERR(cred);
state = nfs4_open_delegated(dentry->d_inode, openflags, cred);
@ -1393,70 +1400,19 @@ static int nfs4_lookup_root(struct nfs_server *server, struct nfs_fh *fhandle,
return err;
}
/*
* get the file handle for the "/" directory on the server
*/
static int nfs4_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
struct nfs_fsinfo *info)
struct nfs_fsinfo *info)
{
struct nfs_fattr * fattr = info->fattr;
unsigned char * p;
struct qstr q;
struct nfs4_lookup_arg args = {
.dir_fh = fhandle,
.name = &q,
.bitmask = nfs4_fattr_bitmap,
};
struct nfs4_lookup_res res = {
.server = server,
.fattr = fattr,
.fh = fhandle,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOOKUP],
.rpc_argp = &args,
.rpc_resp = &res,
};
int status;
/*
* Now we do a separate LOOKUP for each component of the mount path.
* The LOOKUPs are done separately so that we can conveniently
* catch an ERR_WRONGSEC if it occurs along the way...
*/
status = nfs4_lookup_root(server, fhandle, info);
if (status)
goto out;
p = server->mnt_path;
for (;;) {
struct nfs4_exception exception = { };
while (*p == '/')
p++;
if (!*p)
break;
q.name = p;
while (*p && (*p != '/'))
p++;
q.len = p - q.name;
do {
nfs_fattr_init(fattr);
status = nfs4_handle_exception(server,
rpc_call_sync(server->client, &msg, 0),
&exception);
} while (exception.retry);
if (status == 0)
continue;
if (status == -ENOENT) {
printk(KERN_NOTICE "NFS: mount path %s does not exist!\n", server->mnt_path);
printk(KERN_NOTICE "NFS: suggestion: try mounting '/' instead.\n");
}
break;
}
if (status == 0)
status = nfs4_server_capabilities(server, fhandle);
if (status == 0)
status = nfs4_do_fsinfo(server, fhandle, info);
out:
return nfs4_map_errors(status);
}
@ -1565,7 +1521,7 @@ nfs4_proc_setattr(struct dentry *dentry, struct nfs_fattr *fattr,
nfs_fattr_init(fattr);
cred = rpcauth_lookupcred(NFS_SERVER(inode)->client->cl_auth, 0);
cred = rpcauth_lookupcred(NFS_CLIENT(inode)->cl_auth, 0);
if (IS_ERR(cred))
return PTR_ERR(cred);
@ -1583,6 +1539,52 @@ nfs4_proc_setattr(struct dentry *dentry, struct nfs_fattr *fattr,
return status;
}
static int _nfs4_proc_lookupfh(struct nfs_server *server, struct nfs_fh *dirfh,
struct qstr *name, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
{
int status;
struct nfs4_lookup_arg args = {
.bitmask = server->attr_bitmask,
.dir_fh = dirfh,
.name = name,
};
struct nfs4_lookup_res res = {
.server = server,
.fattr = fattr,
.fh = fhandle,
};
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_LOOKUP],
.rpc_argp = &args,
.rpc_resp = &res,
};
nfs_fattr_init(fattr);
dprintk("NFS call lookupfh %s\n", name->name);
status = rpc_call_sync(server->client, &msg, 0);
dprintk("NFS reply lookupfh: %d\n", status);
if (status == -NFS4ERR_MOVED)
status = -EREMOTE;
return status;
}
static int nfs4_proc_lookupfh(struct nfs_server *server, struct nfs_fh *dirfh,
struct qstr *name, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
{
struct nfs4_exception exception = { };
int err;
do {
err = nfs4_handle_exception(server,
_nfs4_proc_lookupfh(server, dirfh, name,
fhandle, fattr),
&exception);
} while (exception.retry);
return err;
}
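
nfs4_proc_lookupfh() wraps the raw RPC in nfs4_handle_exception(), which decides per error code whether the caller should retry (after recovery or a delay) or give up, and the do/while loops until exception.retry is clear. The control flow reduced to a user-space sketch is shown below; the error value, raw_operation() and the retry policy are invented for illustration.

/* Sketch of the "do the call, let an exception handler decide on retry"
 * loop used by the nfs4_proc_*() wrappers.
 */
#include <stdio.h>

#define ERR_DELAY -101                  /* transient error: retry */

struct exception { int retry; };

static int attempts;

static int raw_operation(void)
{
    /* fail transiently twice, then succeed */
    return ++attempts < 3 ? ERR_DELAY : 0;
}

static int handle_exception(int err, struct exception *exc)
{
    exc->retry = 0;
    if (err == ERR_DELAY) {             /* transient: ask the caller to retry */
        exc->retry = 1;
        return 0;
    }
    return err;                         /* pass permanent errors through */
}

int main(void)
{
    struct exception exc = { 0 };
    int err;

    do {
        err = handle_exception(raw_operation(), &exc);
    } while (exc.retry);

    printf("finished after %d attempt(s), err = %d\n", attempts, err);
    return 0;
}
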
static int _nfs4_proc_lookup(struct inode *dir, struct qstr *name,
struct nfs_fh *fhandle, struct nfs_fattr *fattr)
{
@ -1881,7 +1883,7 @@ nfs4_proc_create(struct inode *dir, struct dentry *dentry, struct iattr *sattr,
struct rpc_cred *cred;
int status = 0;
cred = rpcauth_lookupcred(NFS_SERVER(dir)->client->cl_auth, 0);
cred = rpcauth_lookupcred(NFS_CLIENT(dir)->cl_auth, 0);
if (IS_ERR(cred)) {
status = PTR_ERR(cred);
goto out;
@ -2089,24 +2091,24 @@ static int nfs4_proc_link(struct inode *inode, struct inode *dir, struct qstr *n
return err;
}
static int _nfs4_proc_symlink(struct inode *dir, struct qstr *name,
struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
static int _nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
struct page *page, unsigned int len, struct iattr *sattr)
{
struct nfs_server *server = NFS_SERVER(dir);
struct nfs_fattr dir_fattr;
struct nfs_fh fhandle;
struct nfs_fattr fattr, dir_fattr;
struct nfs4_create_arg arg = {
.dir_fh = NFS_FH(dir),
.server = server,
.name = name,
.name = &dentry->d_name,
.attrs = sattr,
.ftype = NF4LNK,
.bitmask = server->attr_bitmask,
};
struct nfs4_create_res res = {
.server = server,
.fh = fhandle,
.fattr = fattr,
.fh = &fhandle,
.fattr = &fattr,
.dir_fattr = &dir_fattr,
};
struct rpc_message msg = {
@ -2116,29 +2118,32 @@ static int _nfs4_proc_symlink(struct inode *dir, struct qstr *name,
};
int status;
if (path->len > NFS4_MAXPATHLEN)
if (len > NFS4_MAXPATHLEN)
return -ENAMETOOLONG;
arg.u.symlink = path;
nfs_fattr_init(fattr);
arg.u.symlink.pages = &page;
arg.u.symlink.len = len;
nfs_fattr_init(&fattr);
nfs_fattr_init(&dir_fattr);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
if (!status)
if (!status) {
update_changeattr(dir, &res.dir_cinfo);
nfs_post_op_update_inode(dir, res.dir_fattr);
nfs_post_op_update_inode(dir, res.dir_fattr);
status = nfs_instantiate(dentry, &fhandle, &fattr);
}
return status;
}
static int nfs4_proc_symlink(struct inode *dir, struct qstr *name,
struct qstr *path, struct iattr *sattr, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
static int nfs4_proc_symlink(struct inode *dir, struct dentry *dentry,
struct page *page, unsigned int len, struct iattr *sattr)
{
struct nfs4_exception exception = { };
int err;
do {
err = nfs4_handle_exception(NFS_SERVER(dir),
_nfs4_proc_symlink(dir, name, path, sattr,
fhandle, fattr),
_nfs4_proc_symlink(dir, dentry, page,
len, sattr),
&exception);
} while (exception.retry);
return err;
@ -2521,7 +2526,7 @@ static void nfs4_proc_commit_setup(struct nfs_write_data *data, int how)
*/
static void nfs4_renew_done(struct rpc_task *task, void *data)
{
struct nfs4_client *clp = (struct nfs4_client *)task->tk_msg.rpc_argp;
struct nfs_client *clp = (struct nfs_client *)task->tk_msg.rpc_argp;
unsigned long timestamp = (unsigned long)data;
if (task->tk_status < 0) {
@ -2543,7 +2548,7 @@ static const struct rpc_call_ops nfs4_renew_ops = {
.rpc_call_done = nfs4_renew_done,
};
int nfs4_proc_async_renew(struct nfs4_client *clp, struct rpc_cred *cred)
int nfs4_proc_async_renew(struct nfs_client *clp, struct rpc_cred *cred)
{
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_RENEW],
@ -2555,7 +2560,7 @@ int nfs4_proc_async_renew(struct nfs4_client *clp, struct rpc_cred *cred)
&nfs4_renew_ops, (void *)jiffies);
}
int nfs4_proc_renew(struct nfs4_client *clp, struct rpc_cred *cred)
int nfs4_proc_renew(struct nfs_client *clp, struct rpc_cred *cred)
{
struct rpc_message msg = {
.rpc_proc = &nfs4_procedures[NFSPROC4_CLNT_RENEW],
@ -2770,7 +2775,7 @@ static int __nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t bufl
return -EOPNOTSUPP;
nfs_inode_return_delegation(inode);
buf_to_pages(buf, buflen, arg.acl_pages, &arg.acl_pgbase);
ret = rpc_call_sync(NFS_SERVER(inode)->client, &msg, 0);
ret = rpc_call_sync(NFS_CLIENT(inode), &msg, 0);
if (ret == 0)
nfs4_write_cached_acl(inode, buf, buflen);
return ret;
@ -2791,7 +2796,7 @@ static int nfs4_proc_set_acl(struct inode *inode, const void *buf, size_t buflen
static int
nfs4_async_handle_error(struct rpc_task *task, const struct nfs_server *server)
{
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
if (!clp || task->tk_status >= 0)
return 0;
@ -2828,7 +2833,7 @@ static int nfs4_wait_bit_interruptible(void *word)
return 0;
}
static int nfs4_wait_clnt_recover(struct rpc_clnt *clnt, struct nfs4_client *clp)
static int nfs4_wait_clnt_recover(struct rpc_clnt *clnt, struct nfs_client *clp)
{
sigset_t oldset;
int res;
@ -2871,7 +2876,7 @@ static int nfs4_delay(struct rpc_clnt *clnt, long *timeout)
*/
int nfs4_handle_exception(const struct nfs_server *server, int errorcode, struct nfs4_exception *exception)
{
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
int ret = errorcode;
exception->retry = 0;
@ -2886,6 +2891,7 @@ int nfs4_handle_exception(const struct nfs_server *server, int errorcode, struct
if (ret == 0)
exception->retry = 1;
break;
case -NFS4ERR_FILE_OPEN:
case -NFS4ERR_GRACE:
case -NFS4ERR_DELAY:
ret = nfs4_delay(server->client, &exception->timeout);
@ -2898,7 +2904,7 @@ int nfs4_handle_exception(const struct nfs_server *server, int errorcode, struct
return nfs4_map_errors(ret);
}
int nfs4_proc_setclientid(struct nfs4_client *clp, u32 program, unsigned short port, struct rpc_cred *cred)
int nfs4_proc_setclientid(struct nfs_client *clp, u32 program, unsigned short port, struct rpc_cred *cred)
{
nfs4_verifier sc_verifier;
struct nfs4_setclientid setclientid = {
@ -2922,7 +2928,7 @@ int nfs4_proc_setclientid(struct nfs4_client *clp, u32 program, unsigned short p
for(;;) {
setclientid.sc_name_len = scnprintf(setclientid.sc_name,
sizeof(setclientid.sc_name), "%s/%u.%u.%u.%u %s %u",
clp->cl_ipaddr, NIPQUAD(clp->cl_addr.s_addr),
clp->cl_ipaddr, NIPQUAD(clp->cl_addr.sin_addr),
cred->cr_ops->cr_name,
clp->cl_id_uniquifier);
setclientid.sc_netid_len = scnprintf(setclientid.sc_netid,
@ -2945,7 +2951,7 @@ int nfs4_proc_setclientid(struct nfs4_client *clp, u32 program, unsigned short p
return status;
}
static int _nfs4_proc_setclientid_confirm(struct nfs4_client *clp, struct rpc_cred *cred)
static int _nfs4_proc_setclientid_confirm(struct nfs_client *clp, struct rpc_cred *cred)
{
struct nfs_fsinfo fsinfo;
struct rpc_message msg = {
@ -2969,7 +2975,7 @@ static int _nfs4_proc_setclientid_confirm(struct nfs4_client *clp, struct rpc_cr
return status;
}
int nfs4_proc_setclientid_confirm(struct nfs4_client *clp, struct rpc_cred *cred)
int nfs4_proc_setclientid_confirm(struct nfs_client *clp, struct rpc_cred *cred)
{
long timeout;
int err;
@ -3077,7 +3083,7 @@ int nfs4_proc_delegreturn(struct inode *inode, struct rpc_cred *cred, const nfs4
switch (err) {
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_EXPIRED:
nfs4_schedule_state_recovery(server->nfs4_state);
nfs4_schedule_state_recovery(server->nfs_client);
case 0:
return 0;
}
@ -3106,7 +3112,7 @@ static int _nfs4_proc_getlk(struct nfs4_state *state, int cmd, struct file_lock
{
struct inode *inode = state->inode;
struct nfs_server *server = NFS_SERVER(inode);
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
struct nfs_lockt_args arg = {
.fh = NFS_FH(inode),
.fl = request,
@ -3231,7 +3237,7 @@ static void nfs4_locku_done(struct rpc_task *task, void *data)
break;
case -NFS4ERR_STALE_STATEID:
case -NFS4ERR_EXPIRED:
nfs4_schedule_state_recovery(calldata->server->nfs4_state);
nfs4_schedule_state_recovery(calldata->server->nfs_client);
break;
default:
if (nfs4_async_handle_error(task, calldata->server) == -EAGAIN) {
@ -3343,7 +3349,7 @@ static struct nfs4_lockdata *nfs4_alloc_lockdata(struct file_lock *fl,
if (p->arg.lock_seqid == NULL)
goto out_free;
p->arg.lock_stateid = &lsp->ls_stateid;
p->arg.lock_owner.clientid = server->nfs4_state->cl_clientid;
p->arg.lock_owner.clientid = server->nfs_client->cl_clientid;
p->arg.lock_owner.id = lsp->ls_id;
p->lsp = lsp;
atomic_inc(&lsp->ls_count);
@ -3513,7 +3519,7 @@ static int nfs4_lock_expired(struct nfs4_state *state, struct file_lock *request
static int _nfs4_proc_setlk(struct nfs4_state *state, int cmd, struct file_lock *request)
{
struct nfs4_client *clp = state->owner->so_client;
struct nfs_client *clp = state->owner->so_client;
unsigned char fl_flags = request->fl_flags;
int status;
@ -3715,7 +3721,7 @@ static struct inode_operations nfs4_file_inode_operations = {
.listxattr = nfs4_listxattr,
};
struct nfs_rpc_ops nfs_v4_clientops = {
const struct nfs_rpc_ops nfs_v4_clientops = {
.version = 4, /* protocol version */
.dentry_ops = &nfs4_dentry_operations,
.dir_inode_ops = &nfs4_dir_inode_operations,
@ -3723,6 +3729,7 @@ struct nfs_rpc_ops nfs_v4_clientops = {
.getroot = nfs4_proc_get_root,
.getattr = nfs4_proc_getattr,
.setattr = nfs4_proc_setattr,
.lookupfh = nfs4_proc_lookupfh,
.lookup = nfs4_proc_lookup,
.access = nfs4_proc_access,
.readlink = nfs4_proc_readlink,
@ -3743,6 +3750,7 @@ struct nfs_rpc_ops nfs_v4_clientops = {
.statfs = nfs4_proc_statfs,
.fsinfo = nfs4_proc_fsinfo,
.pathconf = nfs4_proc_pathconf,
.set_capabilities = nfs4_server_capabilities,
.decode_dirent = nfs4_decode_dirent,
.read_setup = nfs4_proc_read_setup,
.read_done = nfs4_read_done,

View file

@ -61,7 +61,7 @@
void
nfs4_renew_state(void *data)
{
struct nfs4_client *clp = (struct nfs4_client *)data;
struct nfs_client *clp = (struct nfs_client *)data;
struct rpc_cred *cred;
long lease, timeout;
unsigned long last, now;
@ -108,7 +108,7 @@ out:
/* Must be called with clp->cl_sem locked for writes */
void
nfs4_schedule_state_renewal(struct nfs4_client *clp)
nfs4_schedule_state_renewal(struct nfs_client *clp)
{
long timeout;
@ -121,32 +121,20 @@ nfs4_schedule_state_renewal(struct nfs4_client *clp)
__FUNCTION__, (timeout + HZ - 1) / HZ);
cancel_delayed_work(&clp->cl_renewd);
schedule_delayed_work(&clp->cl_renewd, timeout);
set_bit(NFS_CS_RENEWD, &clp->cl_res_state);
spin_unlock(&clp->cl_lock);
}
void
nfs4_renewd_prepare_shutdown(struct nfs_server *server)
{
struct nfs4_client *clp = server->nfs4_state;
if (!clp)
return;
flush_scheduled_work();
down_write(&clp->cl_sem);
if (!list_empty(&server->nfs4_siblings))
list_del_init(&server->nfs4_siblings);
up_write(&clp->cl_sem);
}
/* Must be called with clp->cl_sem locked for writes */
void
nfs4_kill_renewd(struct nfs4_client *clp)
nfs4_kill_renewd(struct nfs_client *clp)
{
down_read(&clp->cl_sem);
if (!list_empty(&clp->cl_superblocks)) {
up_read(&clp->cl_sem);
return;
}
cancel_delayed_work(&clp->cl_renewd);
up_read(&clp->cl_sem);
flush_scheduled_work();

View file

@ -50,149 +50,15 @@
#include "nfs4_fs.h"
#include "callback.h"
#include "delegation.h"
#include "internal.h"
#define OPENOWNER_POOL_SIZE 8
const nfs4_stateid zero_stateid;
static DEFINE_SPINLOCK(state_spinlock);
static LIST_HEAD(nfs4_clientid_list);
void
init_nfsv4_state(struct nfs_server *server)
{
server->nfs4_state = NULL;
INIT_LIST_HEAD(&server->nfs4_siblings);
}
void
destroy_nfsv4_state(struct nfs_server *server)
{
kfree(server->mnt_path);
server->mnt_path = NULL;
if (server->nfs4_state) {
nfs4_put_client(server->nfs4_state);
server->nfs4_state = NULL;
}
}
/*
* nfs4_get_client(): returns an empty client structure
* nfs4_put_client(): drops reference to client structure
*
* Since these are allocated/deallocated very rarely, we don't
* bother putting them in a slab cache...
*/
static struct nfs4_client *
nfs4_alloc_client(struct in_addr *addr)
{
struct nfs4_client *clp;
if (nfs_callback_up() < 0)
return NULL;
if ((clp = kzalloc(sizeof(*clp), GFP_KERNEL)) == NULL) {
nfs_callback_down();
return NULL;
}
memcpy(&clp->cl_addr, addr, sizeof(clp->cl_addr));
init_rwsem(&clp->cl_sem);
INIT_LIST_HEAD(&clp->cl_delegations);
INIT_LIST_HEAD(&clp->cl_state_owners);
INIT_LIST_HEAD(&clp->cl_unused);
spin_lock_init(&clp->cl_lock);
atomic_set(&clp->cl_count, 1);
INIT_WORK(&clp->cl_renewd, nfs4_renew_state, clp);
INIT_LIST_HEAD(&clp->cl_superblocks);
rpc_init_wait_queue(&clp->cl_rpcwaitq, "NFS4 client");
clp->cl_rpcclient = ERR_PTR(-EINVAL);
clp->cl_boot_time = CURRENT_TIME;
clp->cl_state = 1 << NFS4CLNT_LEASE_EXPIRED;
return clp;
}
static void
nfs4_free_client(struct nfs4_client *clp)
{
struct nfs4_state_owner *sp;
while (!list_empty(&clp->cl_unused)) {
sp = list_entry(clp->cl_unused.next,
struct nfs4_state_owner,
so_list);
list_del(&sp->so_list);
kfree(sp);
}
BUG_ON(!list_empty(&clp->cl_state_owners));
nfs_idmap_delete(clp);
if (!IS_ERR(clp->cl_rpcclient))
rpc_shutdown_client(clp->cl_rpcclient);
kfree(clp);
nfs_callback_down();
}
static struct nfs4_client *__nfs4_find_client(struct in_addr *addr)
{
struct nfs4_client *clp;
list_for_each_entry(clp, &nfs4_clientid_list, cl_servers) {
if (memcmp(&clp->cl_addr, addr, sizeof(clp->cl_addr)) == 0) {
atomic_inc(&clp->cl_count);
return clp;
}
}
return NULL;
}
struct nfs4_client *nfs4_find_client(struct in_addr *addr)
{
struct nfs4_client *clp;
spin_lock(&state_spinlock);
clp = __nfs4_find_client(addr);
spin_unlock(&state_spinlock);
return clp;
}
struct nfs4_client *
nfs4_get_client(struct in_addr *addr)
{
struct nfs4_client *clp, *new = NULL;
spin_lock(&state_spinlock);
for (;;) {
clp = __nfs4_find_client(addr);
if (clp != NULL)
break;
clp = new;
if (clp != NULL) {
list_add(&clp->cl_servers, &nfs4_clientid_list);
new = NULL;
break;
}
spin_unlock(&state_spinlock);
new = nfs4_alloc_client(addr);
spin_lock(&state_spinlock);
if (new == NULL)
break;
}
spin_unlock(&state_spinlock);
if (new)
nfs4_free_client(new);
return clp;
}
void
nfs4_put_client(struct nfs4_client *clp)
{
if (!atomic_dec_and_lock(&clp->cl_count, &state_spinlock))
return;
list_del(&clp->cl_servers);
spin_unlock(&state_spinlock);
BUG_ON(!list_empty(&clp->cl_superblocks));
rpc_wake_up(&clp->cl_rpcwaitq);
nfs4_kill_renewd(clp);
nfs4_free_client(clp);
}
static int nfs4_init_client(struct nfs4_client *clp, struct rpc_cred *cred)
static int nfs4_init_client(struct nfs_client *clp, struct rpc_cred *cred)
{
int status = nfs4_proc_setclientid(clp, NFS4_CALLBACK,
nfs_callback_tcpport, cred);
@ -204,13 +70,13 @@ static int nfs4_init_client(struct nfs4_client *clp, struct rpc_cred *cred)
}
u32
nfs4_alloc_lockowner_id(struct nfs4_client *clp)
nfs4_alloc_lockowner_id(struct nfs_client *clp)
{
return clp->cl_lockowner_id ++;
}
static struct nfs4_state_owner *
nfs4_client_grab_unused(struct nfs4_client *clp, struct rpc_cred *cred)
nfs4_client_grab_unused(struct nfs_client *clp, struct rpc_cred *cred)
{
struct nfs4_state_owner *sp = NULL;
@ -224,7 +90,7 @@ nfs4_client_grab_unused(struct nfs4_client *clp, struct rpc_cred *cred)
return sp;
}
struct rpc_cred *nfs4_get_renew_cred(struct nfs4_client *clp)
struct rpc_cred *nfs4_get_renew_cred(struct nfs_client *clp)
{
struct nfs4_state_owner *sp;
struct rpc_cred *cred = NULL;
@ -238,7 +104,7 @@ struct rpc_cred *nfs4_get_renew_cred(struct nfs4_client *clp)
return cred;
}
struct rpc_cred *nfs4_get_setclientid_cred(struct nfs4_client *clp)
struct rpc_cred *nfs4_get_setclientid_cred(struct nfs_client *clp)
{
struct nfs4_state_owner *sp;
@ -251,7 +117,7 @@ struct rpc_cred *nfs4_get_setclientid_cred(struct nfs4_client *clp)
}
static struct nfs4_state_owner *
nfs4_find_state_owner(struct nfs4_client *clp, struct rpc_cred *cred)
nfs4_find_state_owner(struct nfs_client *clp, struct rpc_cred *cred)
{
struct nfs4_state_owner *sp, *res = NULL;
@ -294,7 +160,7 @@ nfs4_alloc_state_owner(void)
void
nfs4_drop_state_owner(struct nfs4_state_owner *sp)
{
struct nfs4_client *clp = sp->so_client;
struct nfs_client *clp = sp->so_client;
spin_lock(&clp->cl_lock);
list_del_init(&sp->so_list);
spin_unlock(&clp->cl_lock);
@ -306,7 +172,7 @@ nfs4_drop_state_owner(struct nfs4_state_owner *sp)
*/
struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *server, struct rpc_cred *cred)
{
struct nfs4_client *clp = server->nfs4_state;
struct nfs_client *clp = server->nfs_client;
struct nfs4_state_owner *sp, *new;
get_rpccred(cred);
@ -337,7 +203,7 @@ struct nfs4_state_owner *nfs4_get_state_owner(struct nfs_server *server, struct
*/
void nfs4_put_state_owner(struct nfs4_state_owner *sp)
{
struct nfs4_client *clp = sp->so_client;
struct nfs_client *clp = sp->so_client;
struct rpc_cred *cred = sp->so_cred;
if (!atomic_dec_and_lock(&sp->so_count, &clp->cl_lock))
@ -540,7 +406,7 @@ __nfs4_find_lock_state(struct nfs4_state *state, fl_owner_t fl_owner)
static struct nfs4_lock_state *nfs4_alloc_lock_state(struct nfs4_state *state, fl_owner_t fl_owner)
{
struct nfs4_lock_state *lsp;
struct nfs4_client *clp = state->owner->so_client;
struct nfs_client *clp = state->owner->so_client;
lsp = kzalloc(sizeof(*lsp), GFP_KERNEL);
if (lsp == NULL)
@ -752,7 +618,7 @@ out:
static int reclaimer(void *);
static inline void nfs4_clear_recover_bit(struct nfs4_client *clp)
static inline void nfs4_clear_recover_bit(struct nfs_client *clp)
{
smp_mb__before_clear_bit();
clear_bit(NFS4CLNT_STATE_RECOVER, &clp->cl_state);
@ -764,25 +630,25 @@ static inline void nfs4_clear_recover_bit(struct nfs4_client *clp)
/*
* State recovery routine
*/
static void nfs4_recover_state(struct nfs4_client *clp)
static void nfs4_recover_state(struct nfs_client *clp)
{
struct task_struct *task;
__module_get(THIS_MODULE);
atomic_inc(&clp->cl_count);
task = kthread_run(reclaimer, clp, "%u.%u.%u.%u-reclaim",
NIPQUAD(clp->cl_addr));
NIPQUAD(clp->cl_addr.sin_addr));
if (!IS_ERR(task))
return;
nfs4_clear_recover_bit(clp);
nfs4_put_client(clp);
nfs_put_client(clp);
module_put(THIS_MODULE);
}
/*
* Schedule a state recovery attempt
*/
void nfs4_schedule_state_recovery(struct nfs4_client *clp)
void nfs4_schedule_state_recovery(struct nfs_client *clp)
{
if (!clp)
return;
@ -879,7 +745,7 @@ out_err:
return status;
}
static void nfs4_state_mark_reclaim(struct nfs4_client *clp)
static void nfs4_state_mark_reclaim(struct nfs_client *clp)
{
struct nfs4_state_owner *sp;
struct nfs4_state *state;
@ -903,7 +769,7 @@ static void nfs4_state_mark_reclaim(struct nfs4_client *clp)
static int reclaimer(void *ptr)
{
struct nfs4_client *clp = ptr;
struct nfs_client *clp = ptr;
struct nfs4_state_owner *sp;
struct nfs4_state_recovery_ops *ops;
struct rpc_cred *cred;
@ -970,12 +836,12 @@ out:
if (status == -NFS4ERR_CB_PATH_DOWN)
nfs_handle_cb_pathdown(clp);
nfs4_clear_recover_bit(clp);
nfs4_put_client(clp);
nfs_put_client(clp);
module_put_and_exit(0);
return 0;
out_error:
printk(KERN_WARNING "Error: state recovery failed on NFSv4 server %u.%u.%u.%u with error %d\n",
NIPQUAD(clp->cl_addr.s_addr), -status);
NIPQUAD(clp->cl_addr.sin_addr), -status);
set_bit(NFS4CLNT_LEASE_EXPIRED, &clp->cl_state);
goto out;
}

View file

@ -58,7 +58,7 @@
/* Mapping from NFS error code to "errno" error code. */
#define errno_NFSERR_IO EIO
static int nfs_stat_to_errno(int);
static int nfs4_stat_to_errno(int);
/* NFSv4 COMPOUND tags are only wanted for debugging purposes */
#ifdef DEBUG
@ -128,7 +128,7 @@ static int nfs_stat_to_errno(int);
#define decode_link_maxsz (op_decode_hdr_maxsz + 5)
#define encode_symlink_maxsz (op_encode_hdr_maxsz + \
1 + nfs4_name_maxsz + \
nfs4_path_maxsz + \
1 + \
nfs4_fattr_maxsz)
#define decode_symlink_maxsz (op_decode_hdr_maxsz + 8)
#define encode_create_maxsz (op_encode_hdr_maxsz + \
@ -529,7 +529,7 @@ static int encode_attrs(struct xdr_stream *xdr, const struct iattr *iap, const s
if (iap->ia_valid & ATTR_MODE)
len += 4;
if (iap->ia_valid & ATTR_UID) {
owner_namelen = nfs_map_uid_to_name(server->nfs4_state, iap->ia_uid, owner_name);
owner_namelen = nfs_map_uid_to_name(server->nfs_client, iap->ia_uid, owner_name);
if (owner_namelen < 0) {
printk(KERN_WARNING "nfs: couldn't resolve uid %d to string\n",
iap->ia_uid);
@ -541,7 +541,7 @@ static int encode_attrs(struct xdr_stream *xdr, const struct iattr *iap, const s
len += 4 + (XDR_QUADLEN(owner_namelen) << 2);
}
if (iap->ia_valid & ATTR_GID) {
owner_grouplen = nfs_map_gid_to_group(server->nfs4_state, iap->ia_gid, owner_group);
owner_grouplen = nfs_map_gid_to_group(server->nfs_client, iap->ia_gid, owner_group);
if (owner_grouplen < 0) {
printk(KERN_WARNING "nfs4: couldn't resolve gid %d to string\n",
iap->ia_gid);
@ -673,9 +673,9 @@ static int encode_create(struct xdr_stream *xdr, const struct nfs4_create_arg *c
switch (create->ftype) {
case NF4LNK:
RESERVE_SPACE(4 + create->u.symlink->len);
WRITE32(create->u.symlink->len);
WRITEMEM(create->u.symlink->name, create->u.symlink->len);
RESERVE_SPACE(4);
WRITE32(create->u.symlink.len);
xdr_write_pages(xdr, create->u.symlink.pages, 0, create->u.symlink.len);
break;
case NF4BLK: case NF4CHR:
@ -1160,7 +1160,7 @@ static int encode_rename(struct xdr_stream *xdr, const struct qstr *oldname, con
return 0;
}
static int encode_renew(struct xdr_stream *xdr, const struct nfs4_client *client_stateid)
static int encode_renew(struct xdr_stream *xdr, const struct nfs_client *client_stateid)
{
uint32_t *p;
@ -1246,7 +1246,7 @@ static int encode_setclientid(struct xdr_stream *xdr, const struct nfs4_setclien
return 0;
}
static int encode_setclientid_confirm(struct xdr_stream *xdr, const struct nfs4_client *client_state)
static int encode_setclientid_confirm(struct xdr_stream *xdr, const struct nfs_client *client_state)
{
uint32_t *p;
@ -1945,7 +1945,7 @@ static int nfs4_xdr_enc_server_caps(struct rpc_rqst *req, uint32_t *p, const str
/*
* a RENEW request
*/
static int nfs4_xdr_enc_renew(struct rpc_rqst *req, uint32_t *p, struct nfs4_client *clp)
static int nfs4_xdr_enc_renew(struct rpc_rqst *req, uint32_t *p, struct nfs_client *clp)
{
struct xdr_stream xdr;
struct compound_hdr hdr = {
@ -1975,7 +1975,7 @@ static int nfs4_xdr_enc_setclientid(struct rpc_rqst *req, uint32_t *p, struct nf
/*
* a SETCLIENTID_CONFIRM request
*/
static int nfs4_xdr_enc_setclientid_confirm(struct rpc_rqst *req, uint32_t *p, struct nfs4_client *clp)
static int nfs4_xdr_enc_setclientid_confirm(struct rpc_rqst *req, uint32_t *p, struct nfs_client *clp)
{
struct xdr_stream xdr;
struct compound_hdr hdr = {
@ -2127,12 +2127,12 @@ static int decode_op_hdr(struct xdr_stream *xdr, enum nfs_opnum4 expected)
}
READ32(nfserr);
if (nfserr != NFS_OK)
return -nfs_stat_to_errno(nfserr);
return -nfs4_stat_to_errno(nfserr);
return 0;
}
/* Dummy routine */
static int decode_ace(struct xdr_stream *xdr, void *ace, struct nfs4_client *clp)
static int decode_ace(struct xdr_stream *xdr, void *ace, struct nfs_client *clp)
{
uint32_t *p;
unsigned int strlen;
@ -2636,7 +2636,7 @@ static int decode_attr_nlink(struct xdr_stream *xdr, uint32_t *bitmap, uint32_t
return 0;
}
static int decode_attr_owner(struct xdr_stream *xdr, uint32_t *bitmap, struct nfs4_client *clp, int32_t *uid)
static int decode_attr_owner(struct xdr_stream *xdr, uint32_t *bitmap, struct nfs_client *clp, int32_t *uid)
{
uint32_t len, *p;
@ -2660,7 +2660,7 @@ static int decode_attr_owner(struct xdr_stream *xdr, uint32_t *bitmap, struct nf
return 0;
}
static int decode_attr_group(struct xdr_stream *xdr, uint32_t *bitmap, struct nfs4_client *clp, int32_t *gid)
static int decode_attr_group(struct xdr_stream *xdr, uint32_t *bitmap, struct nfs_client *clp, int32_t *gid)
{
uint32_t len, *p;
@ -3051,9 +3051,9 @@ static int decode_getfattr(struct xdr_stream *xdr, struct nfs_fattr *fattr, cons
fattr->mode |= fmode;
if ((status = decode_attr_nlink(xdr, bitmap, &fattr->nlink)) != 0)
goto xdr_error;
if ((status = decode_attr_owner(xdr, bitmap, server->nfs4_state, &fattr->uid)) != 0)
if ((status = decode_attr_owner(xdr, bitmap, server->nfs_client, &fattr->uid)) != 0)
goto xdr_error;
if ((status = decode_attr_group(xdr, bitmap, server->nfs4_state, &fattr->gid)) != 0)
if ((status = decode_attr_group(xdr, bitmap, server->nfs_client, &fattr->gid)) != 0)
goto xdr_error;
if ((status = decode_attr_rdev(xdr, bitmap, &fattr->rdev)) != 0)
goto xdr_error;
@ -3254,7 +3254,7 @@ static int decode_delegation(struct xdr_stream *xdr, struct nfs_openres *res)
if (decode_space_limit(xdr, &res->maxsize) < 0)
return -EIO;
}
return decode_ace(xdr, NULL, res->server->nfs4_state);
return decode_ace(xdr, NULL, res->server->nfs_client);
}
static int decode_open(struct xdr_stream *xdr, struct nfs_openres *res)
@ -3565,7 +3565,7 @@ static int decode_setattr(struct xdr_stream *xdr, struct nfs_setattrres *res)
return 0;
}
static int decode_setclientid(struct xdr_stream *xdr, struct nfs4_client *clp)
static int decode_setclientid(struct xdr_stream *xdr, struct nfs_client *clp)
{
uint32_t *p;
uint32_t opnum;
@ -3598,7 +3598,7 @@ static int decode_setclientid(struct xdr_stream *xdr, struct nfs4_client *clp)
READ_BUF(len);
return -NFSERR_CLID_INUSE;
} else
return -nfs_stat_to_errno(nfserr);
return -nfs4_stat_to_errno(nfserr);
return 0;
}
@ -4256,7 +4256,7 @@ static int nfs4_xdr_dec_fsinfo(struct rpc_rqst *req, uint32_t *p, struct nfs_fsi
if (!status)
status = decode_fsinfo(&xdr, fsinfo);
if (!status)
status = -nfs_stat_to_errno(hdr.status);
status = -nfs4_stat_to_errno(hdr.status);
return status;
}
@ -4335,7 +4335,7 @@ static int nfs4_xdr_dec_renew(struct rpc_rqst *rqstp, uint32_t *p, void *dummy)
* a SETCLIENTID request
*/
static int nfs4_xdr_dec_setclientid(struct rpc_rqst *req, uint32_t *p,
struct nfs4_client *clp)
struct nfs_client *clp)
{
struct xdr_stream xdr;
struct compound_hdr hdr;
@ -4346,7 +4346,7 @@ static int nfs4_xdr_dec_setclientid(struct rpc_rqst *req, uint32_t *p,
if (!status)
status = decode_setclientid(&xdr, clp);
if (!status)
status = -nfs_stat_to_errno(hdr.status);
status = -nfs4_stat_to_errno(hdr.status);
return status;
}
@ -4368,7 +4368,7 @@ static int nfs4_xdr_dec_setclientid_confirm(struct rpc_rqst *req, uint32_t *p, s
if (!status)
status = decode_fsinfo(&xdr, fsinfo);
if (!status)
status = -nfs_stat_to_errno(hdr.status);
status = -nfs4_stat_to_errno(hdr.status);
return status;
}
@ -4521,7 +4521,7 @@ static struct {
* This one is used jointly by NFSv2 and NFSv3.
*/
static int
nfs_stat_to_errno(int stat)
nfs4_stat_to_errno(int stat)
{
int i;
for (i = 0; nfs_errtbl[i].stat != -1; i++) {

View file

@ -66,14 +66,14 @@ nfs_proc_get_root(struct nfs_server *server, struct nfs_fh *fhandle,
dprintk("%s: call getattr\n", __FUNCTION__);
nfs_fattr_init(fattr);
status = rpc_call_sync(server->client_sys, &msg, 0);
status = rpc_call_sync(server->nfs_client->cl_rpcclient, &msg, 0);
dprintk("%s: reply getattr: %d\n", __FUNCTION__, status);
if (status)
return status;
dprintk("%s: call statfs\n", __FUNCTION__);
msg.rpc_proc = &nfs_procedures[NFSPROC_STATFS];
msg.rpc_resp = &fsinfo;
status = rpc_call_sync(server->client_sys, &msg, 0);
status = rpc_call_sync(server->nfs_client->cl_rpcclient, &msg, 0);
dprintk("%s: reply statfs: %d\n", __FUNCTION__, status);
if (status)
return status;
@ -425,16 +425,17 @@ nfs_proc_link(struct inode *inode, struct inode *dir, struct qstr *name)
}
static int
nfs_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
struct iattr *sattr, struct nfs_fh *fhandle,
struct nfs_fattr *fattr)
nfs_proc_symlink(struct inode *dir, struct dentry *dentry, struct page *page,
unsigned int len, struct iattr *sattr)
{
struct nfs_fh fhandle;
struct nfs_fattr fattr;
struct nfs_symlinkargs arg = {
.fromfh = NFS_FH(dir),
.fromname = name->name,
.fromlen = name->len,
.topath = path->name,
.tolen = path->len,
.fromname = dentry->d_name.name,
.fromlen = dentry->d_name.len,
.pages = &page,
.pathlen = len,
.sattr = sattr
};
struct rpc_message msg = {
@ -443,13 +444,25 @@ nfs_proc_symlink(struct inode *dir, struct qstr *name, struct qstr *path,
};
int status;
if (path->len > NFS2_MAXPATHLEN)
if (len > NFS2_MAXPATHLEN)
return -ENAMETOOLONG;
dprintk("NFS call symlink %s -> %s\n", name->name, path->name);
nfs_fattr_init(fattr);
fhandle->size = 0;
dprintk("NFS call symlink %s\n", dentry->d_name.name);
status = rpc_call_sync(NFS_CLIENT(dir), &msg, 0);
nfs_mark_for_revalidate(dir);
/*
* V2 SYMLINK requests don't return any attributes. Setting the
* filehandle size to zero indicates to nfs_instantiate that it
* should fill in the data with a LOOKUP call on the wire.
*/
if (status == 0) {
nfs_fattr_init(&fattr);
fhandle.size = 0;
status = nfs_instantiate(dentry, &fhandle, &fattr);
}
dprintk("NFS reply symlink: %d\n", status);
return status;
}
@ -671,7 +684,7 @@ nfs_proc_lock(struct file *filp, int cmd, struct file_lock *fl)
}
struct nfs_rpc_ops nfs_v2_clientops = {
const struct nfs_rpc_ops nfs_v2_clientops = {
.version = 2, /* protocol version */
.dentry_ops = &nfs_dentry_operations,
.dir_inode_ops = &nfs_dir_inode_operations,

View file

@ -171,7 +171,7 @@ static int nfs_readpage_sync(struct nfs_open_context *ctx, struct inode *inode,
rdata->args.offset = page_offset(page) + rdata->args.pgbase;
dprintk("NFS: nfs_proc_read(%s, (%s/%Ld), %Lu, %u)\n",
NFS_SERVER(inode)->hostname,
NFS_SERVER(inode)->nfs_client->cl_hostname,
inode->i_sb->s_id,
(long long)NFS_FILEID(inode),
(unsigned long long)rdata->args.pgbase,
@ -568,8 +568,13 @@ int nfs_readpage_result(struct rpc_task *task, struct nfs_read_data *data)
nfs_add_stats(data->inode, NFSIOS_SERVERREADBYTES, resp->count);
/* Is this a short read? */
if (task->tk_status >= 0 && resp->count < argp->count && !resp->eof) {
if (task->tk_status < 0) {
if (task->tk_status == -ESTALE) {
set_bit(NFS_INO_STALE, &NFS_FLAGS(data->inode));
nfs_mark_for_revalidate(data->inode);
}
} else if (resp->count < argp->count && !resp->eof) {
/* This is a short read! */
nfs_inc_stats(data->inode, NFSIOS_SHORTREAD);
/* Has the server at least made some progress? */
if (resp->count != 0) {
@ -616,6 +621,10 @@ int nfs_readpage(struct file *file, struct page *page)
if (error)
goto out_error;
error = -ESTALE;
if (NFS_STALE(inode))
goto out_error;
if (file == NULL) {
ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
if (ctx == NULL)
@ -678,7 +687,7 @@ int nfs_readpages(struct file *filp, struct address_space *mapping,
};
struct inode *inode = mapping->host;
struct nfs_server *server = NFS_SERVER(inode);
int ret;
int ret = -ESTALE;
dprintk("NFS: nfs_readpages (%s/%Ld %d)\n",
inode->i_sb->s_id,
@ -686,6 +695,9 @@ int nfs_readpages(struct file *filp, struct address_space *mapping,
nr_pages);
nfs_inc_stats(inode, NFSIOS_VFSREADPAGES);
if (NFS_STALE(inode))
goto out;
if (filp == NULL) {
desc.ctx = nfs_find_open_context(inode, NULL, FMODE_READ);
if (desc.ctx == NULL)
@ -701,6 +713,7 @@ int nfs_readpages(struct file *filp, struct address_space *mapping,
ret = err;
}
put_nfs_open_context(desc.ctx);
out:
return ret;
}

Diff not shown because of its large size.

View file

@ -396,6 +396,7 @@ int nfs_writepages(struct address_space *mapping, struct writeback_control *wbc)
out:
clear_bit(BDI_write_congested, &bdi->state);
wake_up_all(&nfs_write_congestion);
writeback_congestion_end();
return err;
}
@ -1252,7 +1253,13 @@ int nfs_writeback_done(struct rpc_task *task, struct nfs_write_data *data)
dprintk("NFS: %4d nfs_writeback_done (status %d)\n",
task->tk_pid, task->tk_status);
/* Call the NFS version-specific code */
/*
* ->write_done will attempt to use post-op attributes to detect
* conflicting writes by other clients. A strict interpretation
* of close-to-open would allow us to continue caching even if
* another writer had changed the file, but some applications
* depend on tighter cache coherency when writing.
*/
status = NFS_PROTO(data->inode)->write_done(task, data);
if (status != 0)
return status;
@ -1273,7 +1280,7 @@ int nfs_writeback_done(struct rpc_task *task, struct nfs_write_data *data)
if (time_before(complain, jiffies)) {
dprintk("NFS: faulty NFS server %s:"
" (committed = %d) != (stable = %d)\n",
NFS_SERVER(data->inode)->hostname,
NFS_SERVER(data->inode)->nfs_client->cl_hostname,
resp->verf->committed, argp->stable);
complain = jiffies + 300 * HZ;
}

View file

@ -375,16 +375,28 @@ nfsd4_probe_callback(struct nfs4_client *clp)
{
struct sockaddr_in addr;
struct nfs4_callback *cb = &clp->cl_callback;
struct rpc_timeout timeparms;
struct rpc_xprt * xprt;
struct rpc_timeout timeparms = {
.to_initval = (NFSD_LEASE_TIME/4) * HZ,
.to_retries = 5,
.to_maxval = (NFSD_LEASE_TIME/2) * HZ,
.to_exponential = 1,
};
struct rpc_program * program = &cb->cb_program;
struct rpc_stat * stat = &cb->cb_stat;
struct rpc_clnt * clnt;
struct rpc_create_args args = {
.protocol = IPPROTO_TCP,
.address = (struct sockaddr *)&addr,
.addrsize = sizeof(addr),
.timeout = &timeparms,
.servername = clp->cl_name.data,
.program = program,
.version = nfs_cb_version[1]->number,
.authflavor = RPC_AUTH_UNIX, /* XXX: need AUTH_GSS... */
.flags = (RPC_CLNT_CREATE_NOPING),
};
struct rpc_message msg = {
.rpc_proc = &nfs4_cb_procedures[NFSPROC4_CLNT_CB_NULL],
.rpc_argp = clp,
};
char hostname[32];
int status;
if (atomic_read(&cb->cb_set))
@ -396,51 +408,27 @@ nfsd4_probe_callback(struct nfs4_client *clp)
addr.sin_port = htons(cb->cb_port);
addr.sin_addr.s_addr = htonl(cb->cb_addr);
/* Initialize timeout */
timeparms.to_initval = (NFSD_LEASE_TIME/4) * HZ;
timeparms.to_retries = 0;
timeparms.to_maxval = (NFSD_LEASE_TIME/2) * HZ;
timeparms.to_exponential = 1;
/* Create RPC transport */
xprt = xprt_create_proto(IPPROTO_TCP, &addr, &timeparms);
if (IS_ERR(xprt)) {
dprintk("NFSD: couldn't create callback transport!\n");
goto out_err;
}
/* Initialize rpc_program */
program->name = "nfs4_cb";
program->number = cb->cb_prog;
program->nrvers = ARRAY_SIZE(nfs_cb_version);
program->version = nfs_cb_version;
program->stats = stat;
program->stats = &cb->cb_stat;
/* Initialize rpc_stat */
memset(stat, 0, sizeof(struct rpc_stat));
stat->program = program;
memset(program->stats, 0, sizeof(cb->cb_stat));
program->stats->program = program;
/* Create RPC client
*
* XXX AUTH_UNIX only - need AUTH_GSS....
*/
sprintf(hostname, "%u.%u.%u.%u", NIPQUAD(addr.sin_addr.s_addr));
clnt = rpc_new_client(xprt, hostname, program, 1, RPC_AUTH_UNIX);
if (IS_ERR(clnt)) {
/* Create RPC client */
cb->cb_client = rpc_create(&args);
if (!cb->cb_client) {
dprintk("NFSD: couldn't create callback client\n");
goto out_err;
}
clnt->cl_intr = 0;
clnt->cl_softrtry = 1;
/* Kick rpciod, put the call on the wire. */
if (rpciod_up() != 0) {
dprintk("nfsd: couldn't start rpciod for callbacks!\n");
if (rpciod_up() != 0)
goto out_clnt;
}
cb->cb_client = clnt;
/* the task holds a reference to the nfs4_client struct */
atomic_inc(&clp->cl_count);
@ -448,7 +436,7 @@ nfsd4_probe_callback(struct nfs4_client *clp)
msg.rpc_cred = nfsd4_lookupcred(clp,0);
if (IS_ERR(msg.rpc_cred))
goto out_rpciod;
status = rpc_call_async(clnt, &msg, RPC_TASK_ASYNC, &nfs4_cb_null_ops, NULL);
status = rpc_call_async(cb->cb_client, &msg, RPC_TASK_ASYNC, &nfs4_cb_null_ops, NULL);
put_rpccred(msg.rpc_cred);
if (status != 0) {
@ -462,7 +450,7 @@ out_rpciod:
rpciod_down();
cb->cb_client = NULL;
out_clnt:
rpc_shutdown_client(clnt);
rpc_shutdown_client(cb->cb_client);
out_err:
dprintk("NFSD: warning: no callback path to client %.*s\n",
(int)clp->cl_name.len, clp->cl_name.data);

View file

@ -86,7 +86,7 @@ void ppc4xx_init(unsigned long r3, unsigned long r4, unsigned long r5,
#define PCI_DRAM_OFFSET 0
#endif
#elif CONFIG_44x
#elif defined(CONFIG_44x)
#if defined(CONFIG_BAMBOO)
#include <platforms/4xx/bamboo.h>

View file

@ -748,6 +748,7 @@ extern void blk_queue_invalidate_tags(request_queue_t *);
extern long blk_congestion_wait(int rw, long timeout);
extern struct blk_queue_tag *blk_init_tags(int);
extern void blk_free_tags(struct blk_queue_tag *);
extern void blk_congestion_end(int rw);
extern void blk_rq_bio_prep(request_queue_t *, struct request *, struct bio *);
extern int blkdev_issue_flush(struct block_device *, sector_t *);

View file

@ -114,7 +114,7 @@ extern void *__init alloc_large_system_hash(const char *tablename,
#else
#define HASHDIST_DEFAULT 0
#endif
extern int __initdata hashdist; /* Distribute hashes across NUMA nodes? */
extern int hashdist; /* Distribute hashes across NUMA nodes? */
#endif /* _LINUX_BOOTMEM_H */

View file

@ -221,6 +221,7 @@ static inline int dname_external(struct dentry *dentry)
*/
extern void d_instantiate(struct dentry *, struct inode *);
extern struct dentry * d_instantiate_unique(struct dentry *, struct inode *);
extern struct dentry * d_materialise_unique(struct dentry *, struct inode *);
extern void d_delete(struct dentry *);
/* allocate/de-allocate */

View file

@ -438,6 +438,7 @@ struct dccp_ackvec;
* @dccps_role - Role of this sock, one of %dccp_role
* @dccps_ndp_count - number of Non Data Packets since last data packet
* @dccps_hc_rx_ackvec - rx half connection ack vector
* @dccps_xmit_timer - timer for when CCID is not ready to send
*/
struct dccp_sock {
/* inet_connection_sock has to be the first member of dccp_sock */
@ -470,6 +471,7 @@ struct dccp_sock {
enum dccp_role dccps_role:2;
__u8 dccps_hc_rx_insert_options:1;
__u8 dccps_hc_tx_insert_options:1;
struct timer_list dccps_xmit_timer;
};
static inline struct dccp_sock *dccp_sk(const struct sock *sk)

include/linux/fib_rules.h (new file, 65 lines)
View file

@ -0,0 +1,65 @@
#ifndef __LINUX_FIB_RULES_H
#define __LINUX_FIB_RULES_H
#include <linux/types.h>
#include <linux/rtnetlink.h>
/* rule is permanent, and cannot be deleted */
#define FIB_RULE_PERMANENT 1
struct fib_rule_hdr
{
__u8 family;
__u8 dst_len;
__u8 src_len;
__u8 tos;
__u8 table;
__u8 res1; /* reserved */
__u8 res2; /* reserved */
__u8 action;
__u32 flags;
};
enum
{
FRA_UNSPEC,
FRA_DST, /* destination address */
FRA_SRC, /* source address */
FRA_IFNAME, /* interface name */
FRA_UNUSED1,
FRA_UNUSED2,
FRA_PRIORITY, /* priority/preference */
FRA_UNUSED3,
FRA_UNUSED4,
FRA_UNUSED5,
FRA_FWMARK, /* netfilter mark */
FRA_FLOW, /* flow/class id */
FRA_UNUSED6,
FRA_UNUSED7,
FRA_UNUSED8,
FRA_TABLE, /* Extended table id */
FRA_FWMASK, /* mask for netfilter mark */
__FRA_MAX
};
#define FRA_MAX (__FRA_MAX - 1)
enum
{
FR_ACT_UNSPEC,
FR_ACT_TO_TBL, /* Pass to fixed table */
FR_ACT_RES1,
FR_ACT_RES2,
FR_ACT_RES3,
FR_ACT_RES4,
FR_ACT_BLACKHOLE, /* Drop without notification */
FR_ACT_UNREACHABLE, /* Drop with ENETUNREACH */
FR_ACT_PROHIBIT, /* Drop with EACCES */
__FR_ACT_MAX,
};
#define FR_ACT_MAX (__FR_ACT_MAX - 1)
#endif
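
As an illustrative aside (not part of this patch): a minimal userspace sketch of how the new header might be used, assuming it has been installed into the userspace include path. It only fills in a struct fib_rule_hdr the way an RTM_NEWRULE request would (an IPv4 rule that hands the lookup to the main table); the enclosing rtnetlink message and any FRA_* attributes such as FRA_PRIORITY are omitted, and the chosen field values are example values only.

#include <stdio.h>
#include <string.h>
#include <sys/socket.h>		/* AF_INET */
#include <linux/rtnetlink.h>	/* RT_TABLE_MAIN, RTM_NEWRULE */
#include <linux/fib_rules.h>

int main(void)
{
	struct fib_rule_hdr frh;	/* example values only */

	memset(&frh, 0, sizeof(frh));
	frh.family = AF_INET;		/* IPv4 rule */
	frh.table  = RT_TABLE_MAIN;	/* "lookup main" */
	frh.action = FR_ACT_TO_TBL;	/* pass the lookup to that table */
	frh.flags  = 0;			/* not FIB_RULE_PERMANENT */

	printf("fib_rule_hdr: %zu bytes, family=%u, action=%u, table=%u\n",
	       sizeof(frh), frh.family, frh.action, frh.table);
	return 0;
}

In a real caller this header would sit directly after the nlmsghdr of an RTM_NEWRULE or RTM_DELRULE message, followed by the FRA_* attributes defined above.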

View file

@ -25,10 +25,10 @@
struct sock_filter /* Filter block */
{
__u16 code; /* Actual filter code */
__u8 jt; /* Jump true */
__u8 jf; /* Jump false */
__u32 k; /* Generic multiuse field */
__u16 code; /* Actual filter code */
__u8 jt; /* Jump true */
__u8 jf; /* Jump false */
__u32 k; /* Generic multiuse field */
};
struct sock_fprog /* Required for SO_ATTACH_FILTER. */
@ -41,8 +41,9 @@ struct sock_fprog /* Required for SO_ATTACH_FILTER. */
struct sk_filter
{
atomic_t refcnt;
unsigned int len; /* Number of filter blocks */
struct sock_filter insns[0];
unsigned int len; /* Number of filter blocks */
struct rcu_head rcu;
struct sock_filter insns[0];
};
static inline unsigned int sk_filter_len(struct sk_filter *fp)
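
As an illustrative aside (not part of this patch): a minimal userspace sketch of the sock_filter/sock_fprog layout above in action, attaching a one-instruction classic BPF program ("return 0xffffffff", i.e. accept the whole packet) with SO_ATTACH_FILTER. The UDP socket is an arbitrary choice and error handling is reduced to perror(); treat it as a sketch, not a reference implementation.

#include <stdio.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/filter.h>

int main(void)
{
	/* BPF_RET | BPF_K: return the constant in 'k' = bytes to accept */
	struct sock_filter insns[] = {
		{ BPF_RET | BPF_K, 0, 0, 0xffffffff },
	};
	struct sock_fprog prog = {
		.len	= sizeof(insns) / sizeof(insns[0]),
		.filter	= insns,
	};
	int fd = socket(AF_INET, SOCK_DGRAM, 0);

	if (fd < 0) {
		perror("socket");
		return 1;
	}
	if (setsockopt(fd, SOL_SOCKET, SO_ATTACH_FILTER,
		       &prog, sizeof(prog)) < 0) {
		perror("setsockopt(SO_ATTACH_FILTER)");
		close(fd);
		return 1;
	}
	printf("attached a %u-instruction filter\n", (unsigned int)prog.len);
	close(fd);
	return 0;
}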

View file

@ -16,6 +16,8 @@ struct genlmsghdr {
#define GENL_HDRLEN NLMSG_ALIGN(sizeof(struct genlmsghdr))
#define GENL_ADMIN_PERM 0x01
/*
* List of reserved static generic netlink identifiers:
*/
@ -43,9 +45,25 @@ enum {
CTRL_ATTR_UNSPEC,
CTRL_ATTR_FAMILY_ID,
CTRL_ATTR_FAMILY_NAME,
CTRL_ATTR_VERSION,
CTRL_ATTR_HDRSIZE,
CTRL_ATTR_MAXATTR,
CTRL_ATTR_OPS,
__CTRL_ATTR_MAX,
};
#define CTRL_ATTR_MAX (__CTRL_ATTR_MAX - 1)
enum {
CTRL_ATTR_OP_UNSPEC,
CTRL_ATTR_OP_ID,
CTRL_ATTR_OP_FLAGS,
CTRL_ATTR_OP_POLICY,
CTRL_ATTR_OP_DOIT,
CTRL_ATTR_OP_DUMPIT,
__CTRL_ATTR_OP_MAX,
};
#define CTRL_ATTR_OP_MAX (__CTRL_ATTR_OP_MAX - 1)
#endif /* __LINUX_GENERIC_NETLINK_H */

Some files were not shown because too many files changed.