Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6

* git://git.kernel.org/pub/scm/linux/kernel/git/davem/net-next-2.6: (1075 commits)
  myri10ge: update driver version number to 1.4.3-1.369
  r8169: add shutdown handler
  r8169: preliminary 8168d support
  r8169: support additional 8168cp chipset
  r8169: change default behavior for mildly identified 8168c chipsets
  r8169: add a new 8168cp flavor
  r8169: add a new 8168c flavor (bis)
  r8169: add a new 8168c flavor
  r8169: sync existing 8168 device hardware start sequences with vendor driver
  r8169: 8168b Tx performance tweak
  r8169: make room for more specific 8168 hardware start procedure
  r8169: shuffle some registers handling around (8168 operation only)
  r8169: new phy init parameters for the 8168b
  r8169: update phy init parameters
  r8169: wake up the PHY of the 8168
  af_key: fix SADB_X_SPDDELETE response
  ath9k: Fix return code when ath9k_hw_setpower() fails on reset
  ath9k: remove nasty FAIL macro from ath9k_hw_reset()
  gre: minor cleanups in netlink interface
  gre: fix copy and paste error
  ...
Linus Torvalds 2008-10-11 09:33:18 -07:00
Parents: 86ed5a93b8 6861ff35ec
Commit: 4dd9ec4946
893 changed files with 91029 additions and 41742 deletions


@@ -145,7 +145,6 @@ usage should require reading the full document.
     this though and the recommendation to allow only a single
     interface in STA mode at first!
   </para>
-!Finclude/net/mac80211.h ieee80211_if_types
 !Finclude/net/mac80211.h ieee80211_if_init_conf
 !Finclude/net/mac80211.h ieee80211_if_conf
 </chapter>
@@ -177,8 +176,7 @@ usage should require reading the full document.
 <title>functions/definitions</title>
 !Finclude/net/mac80211.h ieee80211_rx_status
 !Finclude/net/mac80211.h mac80211_rx_flags
-!Finclude/net/mac80211.h ieee80211_tx_control
-!Finclude/net/mac80211.h ieee80211_tx_status_flags
+!Finclude/net/mac80211.h ieee80211_tx_info
 !Finclude/net/mac80211.h ieee80211_rx
 !Finclude/net/mac80211.h ieee80211_rx_irqsafe
 !Finclude/net/mac80211.h ieee80211_tx_status
@@ -189,12 +187,11 @@ usage should require reading the full document.
 !Finclude/net/mac80211.h ieee80211_ctstoself_duration
 !Finclude/net/mac80211.h ieee80211_generic_frame_duration
 !Finclude/net/mac80211.h ieee80211_get_hdrlen_from_skb
-!Finclude/net/mac80211.h ieee80211_get_hdrlen
+!Finclude/net/mac80211.h ieee80211_hdrlen
 !Finclude/net/mac80211.h ieee80211_wake_queue
 !Finclude/net/mac80211.h ieee80211_stop_queue
-!Finclude/net/mac80211.h ieee80211_start_queues
-!Finclude/net/mac80211.h ieee80211_stop_queues
 !Finclude/net/mac80211.h ieee80211_wake_queues
+!Finclude/net/mac80211.h ieee80211_stop_queues
 </sect1>
 </chapter>
@@ -230,8 +227,7 @@ usage should require reading the full document.
 <title>Multiple queues and QoS support</title>
 <para>TBD</para>
 !Finclude/net/mac80211.h ieee80211_tx_queue_params
-!Finclude/net/mac80211.h ieee80211_tx_queue_stats_data
-!Finclude/net/mac80211.h ieee80211_tx_queue
+!Finclude/net/mac80211.h ieee80211_tx_queue_stats
 </chapter>
 <chapter id="AP">


@@ -6,6 +6,24 @@ be removed from this file.
 ---------------------------
+What: old static regulatory information and ieee80211_regdom module parameter
+When: 2.6.29
+Why:  The old regulatory infrastructure has been replaced with a new one
+      which does not require statically defined regulatory domains. We do
+      not want to keep static regulatory domains in the kernel due to the
+      dynamic nature of regulatory law and localization. We kept around
+      the old static definitions for the regulatory domains of:
+        * US
+        * JP
+        * EU
+      and used by default the US when CONFIG_WIRELESS_OLD_REGULATORY was
+      set. We also kept around the ieee80211_regdom module parameter in case
+      some applications were relying on it. Changing regulatory domains
+      can now be done instead by using nl80211, as is done with iw.
+Who:  Luis R. Rodriguez <lrodriguez@atheros.com>
+---------------------------
 What: dev->power.power_state
 When: July 2007
 Why:  Broken design for runtime control over driver power states, confusing
@@ -232,6 +250,9 @@ What (Why):
 - xt_mark match revision 0
   (superseded by xt_mark match revision 1)
+- xt_recent: the old ipt_recent proc dir
+  (superseded by /proc/net/xt_recent)
 When: January 2009 or Linux 2.7.0, whichever comes first
 Why:  Superseded by newer revisions or modules
 Who:  Jan Engelhardt <jengelh@computergmbh.de>


@@ -0,0 +1,46 @@
Copyright (c) 2003-2008 QLogic Corporation
QLogic Linux Networking HBA Driver
This program includes a device driver for Linux 2.6 that may be
distributed with QLogic hardware specific firmware binary file.
You may modify and redistribute the device driver code under the
GNU General Public License as published by the Free Software
Foundation (version 2 or a later version).
You may redistribute the hardware specific firmware binary file
under the following terms:
1. Redistribution of source code (only if applicable),
must retain the above copyright notice, this list of
conditions and the following disclaimer.
2. Redistribution in binary form must reproduce the above
copyright notice, this list of conditions and the
following disclaimer in the documentation and/or other
materials provided with the distribution.
3. The name of QLogic Corporation may not be used to
   endorse or promote products derived from this software
   without specific prior written permission.
REGARDLESS OF WHAT LICENSING MECHANISM IS USED OR APPLICABLE,
THIS PROGRAM IS PROVIDED BY QLOGIC CORPORATION "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A
PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
USER ACKNOWLEDGES AND AGREES THAT USE OF THIS PROGRAM WILL NOT
CREATE OR GIVE GROUNDS FOR A LICENSE BY IMPLICATION, ESTOPPEL, OR
OTHERWISE IN ANY INTELLECTUAL PROPERTY RIGHTS (PATENT, COPYRIGHT,
TRADE SECRET, MASK WORK, OR OTHER PROPRIETARY RIGHT) EMBODIED IN
ANY OTHER QLOGIC HARDWARE OR SOFTWARE EITHER SOLELY OR IN
COMBINATION WITH THIS PROGRAM.


@@ -35,8 +35,9 @@ This file contains
 6.1 general settings
 6.2 local loopback of sent frames
 6.3 CAN controller hardware filters
-6.4 currently supported CAN hardware
-6.5 todo
+6.4 The virtual CAN driver (vcan)
+6.5 currently supported CAN hardware
+6.6 todo
 7 Credits
@@ -584,7 +585,42 @@ solution for a couple of reasons:
 @133MHz with four SJA1000 CAN controllers from 2002 under heavy bus
 load without any problems ...
-6.4 currently supported CAN hardware (September 2007)
+6.4 The virtual CAN driver (vcan)
+
+  Similar to the network loopback devices, vcan offers a virtual local
+  CAN interface. A fully qualified address on CAN consists of
+
+  - a unique CAN Identifier (CAN ID)
+  - the CAN bus this CAN ID is transmitted on (e.g. can0)
+
+  so in common use cases more than one virtual CAN interface is needed.
+
+  The virtual CAN interfaces allow the transmission and reception of CAN
+  frames without real CAN controller hardware. Virtual CAN network
+  devices are usually named 'vcanX', like vcan0 vcan1 vcan2 ...
+  When compiled as a module the virtual CAN driver module is called vcan.ko
+
+  Since Linux Kernel version 2.6.24 the vcan driver supports the Kernel
+  netlink interface to create vcan network devices. The creation and
+  removal of vcan network devices can be managed with the ip(8) tool:
+
+  - Create a virtual CAN network interface:
+       ip link add type vcan
+
+  - Create a virtual CAN network interface with a specific name 'vcan42':
+       ip link add dev vcan42 type vcan
+
+  - Remove a (virtual CAN) network interface 'vcan42':
+       ip link del vcan42
+
+  The tool 'vcan' from the SocketCAN SVN repository on BerliOS is obsolete.
+
+  Virtual CAN network device creation in older Kernels:
+  In Linux Kernel versions < 2.6.24 the vcan driver creates 4 vcan
+  netdevices at module load time by default. This value can be changed
+  with the module parameter 'numdev'. E.g. 'modprobe vcan numdev=8'
+6.5 currently supported CAN hardware
 On the project website http://developer.berlios.de/projects/socketcan
 there are different drivers available:
@@ -603,7 +639,7 @@ solution for a couple of reasons:
 Please check the Mailing Lists on the berlios OSS project website.
-6.5 todo (September 2007)
+6.6 todo
 The configuration interface for CAN network drivers is still an open
 issue that has not been finalized in the socketcan project. Also the


@@ -24,4 +24,56 @@ netif_{start|stop|wake}_subqueue() functions to manage each queue while the
 device is still operational. netdev->queue_lock is still used when the device
 comes online or when it's completely shut down (unregister_netdev(), etc.).
-Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>
+Section 2: Qdisc support for multiqueue devices
+-----------------------------------------------
+
+Currently two qdiscs are optimized for multiqueue devices. The first is the
+default pfifo_fast qdisc. This qdisc supports one qdisc per hardware queue.
+A new round-robin qdisc, sch_multiq, also supports multiple hardware queues. The
+qdisc is responsible for classifying the skb's and then directing the skb's to
+bands and queues based on the value in skb->queue_mapping. Use this field in
+the base driver to determine which queue to send the skb to.
+
+sch_multiq has been added for hardware that wishes to avoid head-of-line
+blocking. It will cycle through the bands and verify that the hardware queue
+associated with the band is not stopped prior to dequeuing a packet.
+
+On qdisc load, the number of bands is based on the number of queues on the
+hardware. Once the association is made, any skb with skb->queue_mapping set
+will be queued to the band associated with the hardware queue.
+
+Section 3: Brief howto using MULTIQ for multiqueue devices
+----------------------------------------------------------
+
+The userspace command 'tc', part of the iproute2 package, is used to configure
+qdiscs. To add the MULTIQ qdisc to your network device, assuming the device
+is called eth0, run the following command:
+
+# tc qdisc add dev eth0 root handle 1: multiq
+
+The qdisc will allocate the number of bands to equal the number of queues that
+the device reports, and bring the qdisc online. Assuming eth0 has 4 Tx
+queues, the band mapping would look like:
+
+band 0 => queue 0
+band 1 => queue 1
+band 2 => queue 2
+band 3 => queue 3
+
+Traffic will begin flowing through each queue based on either the simple_tx_hash
+function or based on netdev->select_queue() if you have it defined.
+
+The behavior of tc filters remains the same. However a new tc action,
+skbedit, has been added. Assuming you wanted to route all traffic to a
+specific host, for example 192.168.0.3, through a specific queue you could use
+this action and establish a filter such as:
+
+tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
+	match ip dst 192.168.0.3 \
+	action skbedit queue_mapping 3
+
+Author: Alexander Duyck <alexander.h.duyck@intel.com>
+Original Author: Peter P. Waskiewicz Jr. <peter.p.waskiewicz.jr@intel.com>


@@ -0,0 +1,175 @@
Linux Phonet protocol family
============================
Introduction
------------
Phonet is a packet protocol used by Nokia cellular modems for both IPC
and RPC. With the Linux Phonet socket family, Linux host processes can
receive and send messages from/to the modem, or any other external
device attached to the modem. The modem takes care of routing.
Phonet packets can be exchanged through various hardware connections
depending on the device, such as:
- USB with the CDC Phonet interface,
- infrared,
- Bluetooth,
- an RS232 serial port (with a dedicated "FBUS" line discipline),
- the SSI bus with some TI OMAP processors.
Packet format
-------------
Phonet packets have a common header as follows:
struct phonethdr {
uint8_t pn_media; /* Media type (link-layer identifier) */
uint8_t pn_rdev; /* Receiver device ID */
uint8_t pn_sdev; /* Sender device ID */
uint8_t pn_res; /* Resource ID or function */
uint16_t pn_length; /* Big-endian message byte length (minus 6) */
uint8_t pn_robj; /* Receiver object ID */
uint8_t pn_sobj; /* Sender object ID */
};
On Linux, the link-layer header includes the pn_media byte (see below).
The next 7 bytes are part of the network-layer header.
The device ID is split: the 6 higher-order bits constitute the device
address, while the 2 lower-order bits are used for multiplexing, as are
the 8-bit object identifiers. As such, Phonet can be considered as a
network layer with 6 bits of address space and 10 bits for transport
protocol (much like port numbers in the IP world).
The modem always has address number zero. All other devices have their
own 6-bit address.
Link layer
----------
Phonet links are always point-to-point links. The link layer header
consists of a single Phonet media type byte. It uniquely identifies the
link through which the packet is transmitted, from the modem's
perspective. Each Phonet network device shall prepend and set the media
type byte as appropriate. For convenience, a common phonet_header_ops
link-layer header operations structure is provided. It sets the
media type according to the network device hardware address.
Linux Phonet network interfaces support a dedicated link layer packet
type (ETH_P_PHONET) which is out of the Ethernet type range. They can
only send and receive Phonet packets.
The virtual TUN tunnel device driver can also be used for Phonet. This
requires IFF_TUN mode, _without_ the IFF_NO_PI flag. In this case,
there is no link-layer header, so there is no Phonet media type byte.
Note that Phonet interfaces are not allowed to re-order packets, so
only the (default) Linux FIFO qdisc should be used with them.
Network layer
-------------
The Phonet socket address family maps the Phonet packet header:
struct sockaddr_pn {
sa_family_t spn_family; /* AF_PHONET */
uint8_t spn_obj; /* Object ID */
uint8_t spn_dev; /* Device ID */
uint8_t spn_resource; /* Resource or function */
uint8_t spn_zero[...]; /* Padding */
};
The resource field is only used when sending and receiving;
it is ignored by bind() and getsockname().
Low-level datagram protocol
---------------------------
Applications can send Phonet messages using the Phonet datagram socket
protocol from the PF_PHONET family. Each socket is bound to one of the
2^10 object IDs available, and can send and receive packets with any
other peer.
struct sockaddr_pn addr = { .spn_family = AF_PHONET, };
ssize_t len;
socklen_t addrlen = sizeof(addr);
int fd;
fd = socket(PF_PHONET, SOCK_DGRAM, 0);
bind(fd, (struct sockaddr *)&addr, sizeof(addr));
/* ... */
sendto(fd, msg, msglen, 0, (struct sockaddr *)&addr, sizeof(addr));
len = recvfrom(fd, buf, sizeof(buf), 0,
(struct sockaddr *)&addr, &addrlen);
This protocol follows the SOCK_DGRAM connection-less semantics.
However, connect() and getpeername() are not supported, as they did
not seem useful for Phonet usage (they could be added easily).
Phonet Pipe protocol
--------------------
The Phonet Pipe protocol is a simple sequenced packet protocol
with end-to-end congestion control. It uses the passive listening
socket paradigm. The listening socket is bound to a unique free object
ID. Each listening socket can handle up to 255 simultaneous
connections, one per accept()'d socket.
int lfd, cfd;
lfd = socket(PF_PHONET, SOCK_SEQPACKET, PN_PROTO_PIPE);
listen (lfd, INT_MAX);
/* ... */
cfd = accept(lfd, NULL, NULL);
for (;;)
{
char buf[...];
ssize_t len = read(cfd, buf, sizeof(buf));
/* ... */
write(cfd, msg, msglen);
}
Connections are established between two endpoints by a "third party"
application. This means that both endpoints are passive; so connect()
is not possible.
WARNING:
When polling a connected pipe socket for writability, there is an
intrinsic race condition whereby writability might be lost between the
polling and the writing system calls. In this case, the socket will
block until writing becomes possible again, unless non-blocking mode
is enabled.
The pipe protocol provides two socket options at the SOL_PNPIPE level:
PNPIPE_ENCAP accepts one integer value (int) of:
PNPIPE_ENCAP_NONE: The socket operates normally (default).
PNPIPE_ENCAP_IP: The socket is used as a backend for a virtual IP
interface. This requires CAP_NET_ADMIN capability. GPRS data
support on Nokia modems can use this. Note that the socket cannot
be reliably poll()'d or read() from while in this mode.
PNPIPE_IFINDEX is a read-only integer value. It contains the
interface index of the network interface created by PNPIPE_ENCAP,
or zero if encapsulation is off.
Authors
-------
Linux Phonet was initially written by Sakari Ailus.
Other contributors include Mikä Liljeberg, Andras Domokos,
Carlos Chinea and Rémi Denis-Courmont.
Copyright (C) 2008 Nokia Corporation.


@@ -0,0 +1,194 @@
Linux wireless regulatory documentation
---------------------------------------
This document gives a brief review over how the Linux wireless
regulatory infrastructure works.
More up to date information can be obtained at the project's web page:
http://wireless.kernel.org/en/developers/Regulatory
Keeping regulatory domains in userspace
---------------------------------------
Due to the dynamic nature of regulatory domains we keep them
in userspace and provide a framework for userspace to upload
to the kernel one regulatory domain to be used as the central
core regulatory domain all wireless devices should adhere to.
How to get regulatory domains to the kernel
-------------------------------------------
Userspace gets a regulatory domain in the kernel by having
a userspace agent build it and send it via nl80211. Only
expected regulatory domains will be respected by the kernel.
A currently available userspace agent which can accomplish this
is CRDA - the central regulatory domain agent. It is documented here:
http://wireless.kernel.org/en/developers/Regulatory/CRDA
Essentially the kernel will send a udev event when it knows
it needs a new regulatory domain. A udev rule can be put in place
to trigger crda to send the respective regulatory domain for a
specific ISO/IEC 3166 alpha2.
Below is an example udev rule which can be used:
# Example file, should be put in /etc/udev/rules.d/regulatory.rules
KERNEL=="regulatory*", ACTION=="change", SUBSYSTEM=="platform", RUN+="/sbin/crda"
The alpha2 is passed as an environment variable under the variable COUNTRY.
Who asks for regulatory domains?
--------------------------------
* Users
Users can use iw:
http://wireless.kernel.org/en/users/Documentation/iw
An example:
# set regulatory domain to "Costa Rica"
iw reg set CR
This will request the kernel to set the regulatory domain to
the specified alpha2. The kernel in turn will then ask userspace
to provide a regulatory domain for the alpha2 specified by the user
by sending a uevent.
* Wireless subsystems for Country Information elements
The kernel will send a uevent to inform userspace a new
regulatory domain is required. More on this to be added
as its integration is added.
* Drivers
If drivers determine they need a specific regulatory domain
set they can inform the wireless core using regulatory_hint().
They have two options -- they either provide an alpha2 so that
crda can provide back a regulatory domain for that country or
they can build their own regulatory domain based on internal
custom knowledge so the wireless core can respect it.
*Most* drivers will rely on the first mechanism of providing a
regulatory hint with an alpha2. For these drivers there is an additional
check that can be used to ensure compliance based on custom EEPROM
regulatory data. This additional check can be used by drivers by
registering on its struct wiphy a reg_notifier() callback. This notifier
is called when the core's regulatory domain has been changed. The driver
can use this to review the changes made and also review who made them
(driver, user, country IE) and determine what to allow based on its
internal EEPROM data. Device drivers wishing to be capable of world
roaming should use this callback. More on world roaming will be
added to this document when its support is enabled.
Device drivers who provide their own built regulatory domain
do not need a callback as the channels registered by them are
the only ones that will be allowed and therefore *additional*
channels cannot be enabled.
Example code - drivers hinting an alpha2:
------------------------------------------
This example comes from the zd1211rw device driver. You can start
by having a mapping of your device's EEPROM country/regulatory
domain value to a specific alpha2 as follows:
static struct zd_reg_alpha2_map reg_alpha2_map[] = {
{ ZD_REGDOMAIN_FCC, "US" },
{ ZD_REGDOMAIN_IC, "CA" },
{ ZD_REGDOMAIN_ETSI, "DE" }, /* Generic ETSI, use most restrictive */
{ ZD_REGDOMAIN_JAPAN, "JP" },
{ ZD_REGDOMAIN_JAPAN_ADD, "JP" },
{ ZD_REGDOMAIN_SPAIN, "ES" },
	{ ZD_REGDOMAIN_FRANCE, "FR" },
};
Then you can define a routine to map your read EEPROM value to an alpha2,
as follows:
static int zd_reg2alpha2(u8 regdomain, char *alpha2)
{
unsigned int i;
struct zd_reg_alpha2_map *reg_map;
for (i = 0; i < ARRAY_SIZE(reg_alpha2_map); i++) {
reg_map = &reg_alpha2_map[i];
if (regdomain == reg_map->reg) {
alpha2[0] = reg_map->alpha2[0];
alpha2[1] = reg_map->alpha2[1];
return 0;
}
}
return 1;
}
Lastly, you can then hint to the core of your discovered alpha2, if a match
was found. You need to do this after you have registered your wiphy. You
are expected to do this during initialization.
r = zd_reg2alpha2(mac->regdomain, alpha2);
if (!r)
regulatory_hint(hw->wiphy, alpha2, NULL);
Example code - drivers providing a built in regulatory domain:
--------------------------------------------------------------
If you have regulatory information you can obtain from your
driver and you *need* to use it, we let you build a regulatory domain
structure and pass it to the wireless core. To do this you should
kmalloc() a structure big enough to hold your regulatory domain
structure and you should then fill it with your data. Finally you simply
call regulatory_hint() with the regulatory domain structure in it.
Below is a simple example, with a regulatory domain cached using the stack.
Your implementation may vary (read EEPROM cache instead, for example).
Example cache of some regulatory domain
struct ieee80211_regdomain mydriver_jp_regdom = {
.n_reg_rules = 3,
.alpha2 = "JP",
//.alpha2 = "99", /* If I have no alpha2 to map it to */
.reg_rules = {
/* IEEE 802.11b/g, channels 1..14 */
REG_RULE(2412-20, 2484+20, 40, 6, 20, 0),
/* IEEE 802.11a, channels 34..48 */
REG_RULE(5170-20, 5240+20, 40, 6, 20,
NL80211_RRF_PASSIVE_SCAN),
/* IEEE 802.11a, channels 52..64 */
REG_RULE(5260-20, 5320+20, 40, 6, 20,
NL80211_RRF_NO_IBSS |
NL80211_RRF_DFS),
}
};
Then in some part of your code after your wiphy has been registered:
int r;
struct ieee80211_regdomain *rd;
int size_of_regd;
int num_rules = mydriver_jp_regdom.n_reg_rules;
unsigned int i;
size_of_regd = sizeof(struct ieee80211_regdomain) +
(num_rules * sizeof(struct ieee80211_reg_rule));
rd = kzalloc(size_of_regd, GFP_KERNEL);
if (!rd)
return -ENOMEM;
memcpy(rd, &mydriver_jp_regdom, sizeof(struct ieee80211_regdomain));
for (i=0; i < num_rules; i++) {
memcpy(&rd->reg_rules[i], &mydriver_jp_regdom.reg_rules[i],
sizeof(struct ieee80211_reg_rule));
}
r = regulatory_hint(hw->wiphy, NULL, rd);
if (r) {
kfree(rd);
return r;
}


@@ -0,0 +1,85 @@
Transparent proxy support
=========================
This feature adds Linux 2.2-like transparent proxy support to current kernels.
To use it, enable NETFILTER_TPROXY, the socket match and the TPROXY target in
your kernel config. You will need policy routing too, so be sure to enable that
as well.
1. Making non-local sockets work
================================
The idea is that you identify packets with destination address matching a local
socket on your box, set the packet mark to a certain value, and then match on that
value using policy routing to have those packets delivered locally:
# iptables -t mangle -N DIVERT
# iptables -t mangle -A PREROUTING -p tcp -m socket -j DIVERT
# iptables -t mangle -A DIVERT -j MARK --set-mark 1
# iptables -t mangle -A DIVERT -j ACCEPT
# ip rule add fwmark 1 lookup 100
# ip route add local 0.0.0.0/0 dev lo table 100
Because of certain restrictions in the IPv4 routing output code you'll have to
modify your application to allow it to send datagrams _from_ non-local IP
addresses. All you have to do is enable the (SOL_IP, IP_TRANSPARENT) socket
option before calling bind:
fd = socket(AF_INET, SOCK_STREAM, 0);
/* - 8< -*/
int value = 1;
setsockopt(fd, SOL_IP, IP_TRANSPARENT, &value, sizeof(value));
/* - 8< -*/
name.sin_family = AF_INET;
name.sin_port = htons(0xCAFE);
name.sin_addr.s_addr = htonl(0xDEADBEEF);
bind(fd, &name, sizeof(name));
A trivial patch for netcat is available here:
http://people.netfilter.org/hidden/tproxy/netcat-ip_transparent-support.patch
2. Redirecting traffic
======================
Transparent proxying often involves "intercepting" traffic on a router. This is
usually done with the iptables REDIRECT target; however, there are serious
limitations of that method. One of the major issues is that it actually
modifies the packets to change the destination address -- which might not be
acceptable in certain situations. (Think of proxying UDP for example: you won't
be able to find out the original destination address. Even in case of TCP
getting the original destination address is racy.)
The 'TPROXY' target provides similar functionality without relying on NAT. Simply
add rules like this to the iptables ruleset above:
# iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY \
--tproxy-mark 0x1/0x1 --on-port 50080
Note that for this to work you'll have to modify the proxy to enable (SOL_IP,
IP_TRANSPARENT) for the listening socket.
3. Iptables extensions
======================
To use tproxy you'll need to have the 'socket' and 'TPROXY' modules
compiled for iptables. A patched version of iptables is available
here: http://git.balabit.hu/?p=bazsi/iptables-tproxy.git
4. Application support
======================
4.1. Squid
----------
Squid 3.HEAD has support built-in. To use it, pass
'--enable-linux-netfilter' to configure and set the 'tproxy' option on
the HTTP listener you redirect traffic to with the TPROXY iptables
target.
For more information please consult the following page on the Squid
wiki: http://wiki.squid-cache.org/Features/Tproxy4


@@ -341,6 +341,8 @@ key that does nothing by itself, as well as any hot key that is type-specific
 3.1 Guidelines for wireless device drivers
 ------------------------------------------
+(in this text, rfkill->foo means the foo field of struct rfkill).
+
 1. Each independent transmitter in a wireless device (usually there is only one
 transmitter per device) should have a SINGLE rfkill class attached to it.
@@ -363,10 +365,32 @@ This rule exists because users of the rfkill subsystem expect to get (and set,
 when possible) the overall transmitter rfkill state, not of a particular rfkill
 line.
-5. During suspend, the rfkill class will attempt to soft-block the radio
-through a call to rfkill->toggle_radio, and will try to restore its previous
-state during resume. After a rfkill class is suspended, it will *not* call
-rfkill->toggle_radio until it is resumed.
+5. The wireless device driver MUST NOT leave the transmitter enabled during
+suspend and hibernation unless:
+
+	5.1. The transmitter has to be enabled for some sort of functionality
+	like wake-on-wireless-packet or autonomous packet forwarding in a mesh
+	network, and that functionality is enabled for this suspend/hibernation
+	cycle.
+
+AND
+
+	5.2. The device was not on a user-requested BLOCKED state before
+	the suspend (i.e. the driver must NOT unblock a device, not even
+	to support wake-on-wireless-packet or remain in the mesh).
+
+In other words, there is absolutely no allowed scenario where a driver can
+automatically take action to unblock a rfkill controller (obviously, this deals
+with scenarios where soft-blocking or both soft and hard blocking is happening.
+Scenarios where hardware rfkill lines are the only ones blocking the
+transmitter are outside of this rule, since the wireless device driver does not
+control its input hardware rfkill lines in the first place).
+
+6. During resume, rfkill will try to restore its previous state.
+
+7. After a rfkill class is suspended, it will *not* call rfkill->toggle_radio
+until it is resumed.
 Example of a WLAN wireless driver connected to the rfkill subsystem:
 --------------------------------------------------------------------


@@ -1048,6 +1048,13 @@ L:	cbe-oss-dev@ozlabs.org
 W:	http://www.ibm.com/developerworks/power/cell/
 S:	Supported
+CISCO 10G ETHERNET DRIVER
+P:	Scott Feldman
+M:	scofeldm@cisco.com
+P:	Joe Eykholt
+M:	jeykholt@cisco.com
+S:	Supported
+
 CFAG12864B LCD DRIVER
 P:	Miguel Ojeda Sandonis
 M:	miguel.ojeda.sandonis@gmail.com
@@ -2319,6 +2326,12 @@ L:	video4linux-list@redhat.com
 W:	http://www.ivtvdriver.org
 S:	Maintained
+JME NETWORK DRIVER
+P:	Guo-Fu Tseng
+M:	cooldavid@cooldavid.org
+L:	netdev@vger.kernel.org
+S:	Maintained
+
 JOURNALLING FLASH FILE SYSTEM V2 (JFFS2)
 P:	David Woodhouse
 M:	dwmw2@infradead.org
@ -3384,6 +3397,13 @@ M: linux-driver@qlogic.com
L: netdev@vger.kernel.org L: netdev@vger.kernel.org
S: Supported S: Supported
QLOGIC QLGE 10Gb ETHERNET DRIVER
P: Ron Mercer
M: linux-driver@qlogic.com
M: ron.mercer@qlogic.com
L: netdev@vger.kernel.org
S: Supported
QNX4 FILESYSTEM QNX4 FILESYSTEM
P: Anders Larsen P: Anders Larsen
M: al@alarsen.net M: al@alarsen.net
@ -4336,6 +4356,12 @@ L: linux-usb@vger.kernel.org
W: http://www.connecttech.com W: http://www.connecttech.com
S: Supported S: Supported
USB SMSC95XX ETHERNET DRIVER
P: Steve Glendinning
M: steve.glendinning@smsc.com
L: netdev@vger.kernel.org
S: Supported
USB SN9C1xx DRIVER USB SN9C1xx DRIVER
P: Luca Risolia P: Luca Risolia
M: luca.risolia@studio.unibo.it M: luca.risolia@studio.unibo.it


@ -25,7 +25,7 @@
#include "common.h" #include "common.h"
static struct mv643xx_eth_platform_data db88f6281_ge00_data = { static struct mv643xx_eth_platform_data db88f6281_ge00_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
static struct mv_sata_platform_data db88f6281_sata_data = { static struct mv_sata_platform_data db88f6281_sata_data = {


@ -30,7 +30,7 @@
#define RD88F6192_GPIO_USB_VBUS 10 #define RD88F6192_GPIO_USB_VBUS 10
static struct mv643xx_eth_platform_data rd88f6192_ge00_data = { static struct mv643xx_eth_platform_data rd88f6192_ge00_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
static struct mv_sata_platform_data rd88f6192_sata_data = { static struct mv_sata_platform_data rd88f6192_sata_data = {


@ -69,7 +69,7 @@ static struct platform_device rd88f6281_nand_flash = {
}; };
static struct mv643xx_eth_platform_data rd88f6281_ge00_data = { static struct mv643xx_eth_platform_data rd88f6281_ge00_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000, .speed = SPEED_1000,
.duplex = DUPLEX_FULL, .duplex = DUPLEX_FULL,
}; };


@ -67,7 +67,7 @@ static struct platform_device lb88rc8480_boot_flash = {
}; };
static struct mv643xx_eth_platform_data lb88rc8480_ge0_data = { static struct mv643xx_eth_platform_data lb88rc8480_ge0_data = {
.phy_addr = 1, .phy_addr = MV643XX_ETH_PHY_ADDR(1),
.mac_addr = { 0x00, 0x50, 0x43, 0x11, 0x22, 0x33 }, .mac_addr = { 0x00, 0x50, 0x43, 0x11, 0x22, 0x33 },
}; };


@ -330,6 +330,7 @@ void __init mv78xx0_ge00_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge01_shared_data = { struct mv643xx_eth_shared_platform_data mv78xx0_ge01_shared_data = {
.t_clk = 0, .t_clk = 0,
.dram = &mv78xx0_mbus_dram_info, .dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
}; };
static struct resource mv78xx0_ge01_shared_resources[] = { static struct resource mv78xx0_ge01_shared_resources[] = {
@ -370,7 +371,6 @@ static struct platform_device mv78xx0_ge01 = {
void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data) void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data)
{ {
eth_data->shared = &mv78xx0_ge01_shared; eth_data->shared = &mv78xx0_ge01_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge01.dev.platform_data = eth_data; mv78xx0_ge01.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge01_shared); platform_device_register(&mv78xx0_ge01_shared);
@ -384,6 +384,7 @@ void __init mv78xx0_ge01_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge10_shared_data = { struct mv643xx_eth_shared_platform_data mv78xx0_ge10_shared_data = {
.t_clk = 0, .t_clk = 0,
.dram = &mv78xx0_mbus_dram_info, .dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
}; };
static struct resource mv78xx0_ge10_shared_resources[] = { static struct resource mv78xx0_ge10_shared_resources[] = {
@ -424,7 +425,6 @@ static struct platform_device mv78xx0_ge10 = {
void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data) void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data)
{ {
eth_data->shared = &mv78xx0_ge10_shared; eth_data->shared = &mv78xx0_ge10_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge10.dev.platform_data = eth_data; mv78xx0_ge10.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge10_shared); platform_device_register(&mv78xx0_ge10_shared);
@ -438,6 +438,7 @@ void __init mv78xx0_ge10_init(struct mv643xx_eth_platform_data *eth_data)
struct mv643xx_eth_shared_platform_data mv78xx0_ge11_shared_data = { struct mv643xx_eth_shared_platform_data mv78xx0_ge11_shared_data = {
.t_clk = 0, .t_clk = 0,
.dram = &mv78xx0_mbus_dram_info, .dram = &mv78xx0_mbus_dram_info,
.shared_smi = &mv78xx0_ge00_shared,
}; };
static struct resource mv78xx0_ge11_shared_resources[] = { static struct resource mv78xx0_ge11_shared_resources[] = {
@ -478,7 +479,6 @@ static struct platform_device mv78xx0_ge11 = {
void __init mv78xx0_ge11_init(struct mv643xx_eth_platform_data *eth_data) void __init mv78xx0_ge11_init(struct mv643xx_eth_platform_data *eth_data)
{ {
eth_data->shared = &mv78xx0_ge11_shared; eth_data->shared = &mv78xx0_ge11_shared;
eth_data->shared_smi = &mv78xx0_ge00_shared;
mv78xx0_ge11.dev.platform_data = eth_data; mv78xx0_ge11.dev.platform_data = eth_data;
platform_device_register(&mv78xx0_ge11_shared); platform_device_register(&mv78xx0_ge11_shared);


@ -19,19 +19,19 @@
#include "common.h" #include "common.h"
static struct mv643xx_eth_platform_data db78x00_ge00_data = { static struct mv643xx_eth_platform_data db78x00_ge00_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
static struct mv643xx_eth_platform_data db78x00_ge01_data = { static struct mv643xx_eth_platform_data db78x00_ge01_data = {
.phy_addr = 9, .phy_addr = MV643XX_ETH_PHY_ADDR(9),
}; };
static struct mv643xx_eth_platform_data db78x00_ge10_data = { static struct mv643xx_eth_platform_data db78x00_ge10_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
}; };
static struct mv643xx_eth_platform_data db78x00_ge11_data = { static struct mv643xx_eth_platform_data db78x00_ge11_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
}; };
static struct mv_sata_platform_data db78x00_sata_data = { static struct mv_sata_platform_data db78x00_sata_data = {


@ -285,7 +285,7 @@ subsys_initcall(db88f5281_pci_init);
* Ethernet * Ethernet
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data db88f5281_eth_data = { static struct mv643xx_eth_platform_data db88f5281_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
/***************************************************************************** /*****************************************************************************


@ -79,7 +79,7 @@ subsys_initcall(dns323_pci_init);
*/ */
static struct mv643xx_eth_platform_data dns323_eth_data = { static struct mv643xx_eth_platform_data dns323_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
/**************************************************************************** /****************************************************************************


@ -161,7 +161,7 @@ subsys_initcall(kurobox_pro_pci_init);
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data kurobox_pro_eth_data = { static struct mv643xx_eth_platform_data kurobox_pro_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
/***************************************************************************** /*****************************************************************************


@ -109,7 +109,7 @@ subsys_initcall(mss2_pci_init);
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data mss2_eth_data = { static struct mv643xx_eth_platform_data mss2_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
/***************************************************************************** /*****************************************************************************


@ -39,7 +39,7 @@
* Ethernet * Ethernet
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data mv2120_eth_data = { static struct mv643xx_eth_platform_data mv2120_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
static struct mv_sata_platform_data mv2120_sata_data = { static struct mv_sata_platform_data mv2120_sata_data = {


@ -88,7 +88,7 @@ static struct orion5x_mpp_mode rd88f5181l_fxo_mpp_modes[] __initdata = {
}; };
static struct mv643xx_eth_platform_data rd88f5181l_fxo_eth_data = { static struct mv643xx_eth_platform_data rd88f5181l_fxo_eth_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000, .speed = SPEED_1000,
.duplex = DUPLEX_FULL, .duplex = DUPLEX_FULL,
}; };


@ -89,7 +89,7 @@ static struct orion5x_mpp_mode rd88f5181l_ge_mpp_modes[] __initdata = {
}; };
static struct mv643xx_eth_platform_data rd88f5181l_ge_eth_data = { static struct mv643xx_eth_platform_data rd88f5181l_ge_eth_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000, .speed = SPEED_1000,
.duplex = DUPLEX_FULL, .duplex = DUPLEX_FULL,
}; };


@ -221,7 +221,7 @@ subsys_initcall(rd88f5182_pci_init);
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data rd88f5182_eth_data = { static struct mv643xx_eth_platform_data rd88f5182_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
/***************************************************************************** /*****************************************************************************


@ -103,8 +103,7 @@ static struct platform_device ts78xx_nor_boot_flash = {
* Ethernet * Ethernet
****************************************************************************/ ****************************************************************************/
static struct mv643xx_eth_platform_data ts78xx_eth_data = { static struct mv643xx_eth_platform_data ts78xx_eth_data = {
.phy_addr = 0, .phy_addr = MV643XX_ETH_PHY_ADDR(0),
.force_phy_addr = 1,
}; };
/***************************************************************************** /*****************************************************************************


@ -48,7 +48,7 @@ void qnap_tsx09_power_off(void)
****************************************************************************/ ****************************************************************************/
struct mv643xx_eth_platform_data qnap_tsx09_eth_data = { struct mv643xx_eth_platform_data qnap_tsx09_eth_data = {
.phy_addr = 8, .phy_addr = MV643XX_ETH_PHY_ADDR(8),
}; };
static int __init qnap_tsx09_parse_hex_nibble(char n) static int __init qnap_tsx09_parse_hex_nibble(char n)


@ -92,7 +92,7 @@ static struct platform_device wnr854t_nor_flash = {
}; };
static struct mv643xx_eth_platform_data wnr854t_eth_data = { static struct mv643xx_eth_platform_data wnr854t_eth_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000, .speed = SPEED_1000,
.duplex = DUPLEX_FULL, .duplex = DUPLEX_FULL,
}; };


@ -100,7 +100,7 @@ static struct platform_device wrt350n_v2_nor_flash = {
}; };
static struct mv643xx_eth_platform_data wrt350n_v2_eth_data = { static struct mv643xx_eth_platform_data wrt350n_v2_eth_data = {
.phy_addr = -1, .phy_addr = MV643XX_ETH_PHY_NONE,
.speed = SPEED_1000, .speed = SPEED_1000,
.duplex = DUPLEX_FULL, .duplex = DUPLEX_FULL,
}; };


@ -68,6 +68,10 @@
#define SDR0_UART3 0x0123 #define SDR0_UART3 0x0123
#define SDR0_CUST0 0x4000 #define SDR0_CUST0 0x4000
/* SDRs (460EX/460GT) */
#define SDR0_ETH_CFG 0x4103
#define SDR0_ETH_CFG_ECS 0x00000100 /* EMAC int clk source */
/* /*
* All those DCR register addresses are offsets from the base address * All those DCR register addresses are offsets from the base address
* for the SRAM0 controller (e.g. 0x20 on 440GX). The base address is * for the SRAM0 controller (e.g. 0x20 on 440GX). The base address is


@ -137,7 +137,7 @@ static int __devinit ep8248e_mdio_probe(struct of_device *ofdev,
bus->irq[i] = -1; bus->irq[i] = -1;
bus->name = "ep8248e-mdio-bitbang"; bus->name = "ep8248e-mdio-bitbang";
bus->dev = &ofdev->dev; bus->parent = &ofdev->dev;
snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start); snprintf(bus->id, MII_BUS_ID_SIZE, "%x", res.start);
return mdiobus_register(bus); return mdiobus_register(bus);


@ -230,7 +230,7 @@ static int __devinit gpio_mdio_probe(struct of_device *ofdev,
if (!priv) if (!priv)
goto out; goto out;
new_bus = kzalloc(sizeof(struct mii_bus), GFP_KERNEL); new_bus = mdiobus_alloc();
if (!new_bus) if (!new_bus)
goto out_free_priv; goto out_free_priv;
@ -272,7 +272,7 @@ static int __devinit gpio_mdio_probe(struct of_device *ofdev,
prop = of_get_property(np, "mdio-pin", NULL); prop = of_get_property(np, "mdio-pin", NULL);
priv->mdio_pin = *prop; priv->mdio_pin = *prop;
new_bus->dev = dev; new_bus->parent = dev;
dev_set_drvdata(dev, new_bus); dev_set_drvdata(dev, new_bus);
err = mdiobus_register(new_bus); err = mdiobus_register(new_bus);
@ -306,7 +306,7 @@ static int gpio_mdio_remove(struct of_device *dev)
kfree(bus->priv); kfree(bus->priv);
bus->priv = NULL; bus->priv = NULL;
kfree(bus); mdiobus_free(bus);
return 0; return 0;
} }


@ -293,10 +293,8 @@ static int __init mv64x60_eth_device_setup(struct device_node *np, int id,
return -ENODEV; return -ENODEV;
prop = of_get_property(phy, "reg", NULL); prop = of_get_property(phy, "reg", NULL);
if (prop) { if (prop)
pdata.force_phy_addr = 1; pdata.phy_addr = MV643XX_ETH_PHY_ADDR(*prop);
pdata.phy_addr = *prop;
}
of_node_put(phy); of_node_put(phy);


@ -260,6 +260,9 @@ config ACPI_ASUS
config ACPI_TOSHIBA config ACPI_TOSHIBA
tristate "Toshiba Laptop Extras" tristate "Toshiba Laptop Extras"
depends on X86 depends on X86
select INPUT_POLLDEV
select NET
select RFKILL
select BACKLIGHT_CLASS_DEVICE select BACKLIGHT_CLASS_DEVICE
---help--- ---help---
This driver adds support for access to certain system settings This driver adds support for access to certain system settings


@ -3,6 +3,7 @@
* *
* *
* Copyright (C) 2002-2004 John Belmonte * Copyright (C) 2002-2004 John Belmonte
* Copyright (C) 2008 Philip Langdale
* *
* This program is free software; you can redistribute it and/or modify * This program is free software; you can redistribute it and/or modify
* it under the terms of the GNU General Public License as published by * it under the terms of the GNU General Public License as published by
@ -33,7 +34,7 @@
* *
*/ */
#define TOSHIBA_ACPI_VERSION "0.18" #define TOSHIBA_ACPI_VERSION "0.19"
#define PROC_INTERFACE_VERSION 1 #define PROC_INTERFACE_VERSION 1
#include <linux/kernel.h> #include <linux/kernel.h>
@ -42,6 +43,9 @@
#include <linux/types.h> #include <linux/types.h>
#include <linux/proc_fs.h> #include <linux/proc_fs.h>
#include <linux/backlight.h> #include <linux/backlight.h>
#include <linux/platform_device.h>
#include <linux/rfkill.h>
#include <linux/input-polldev.h>
#include <asm/uaccess.h> #include <asm/uaccess.h>
@ -90,6 +94,7 @@ MODULE_LICENSE("GPL");
#define HCI_VIDEO_OUT 0x001c #define HCI_VIDEO_OUT 0x001c
#define HCI_HOTKEY_EVENT 0x001e #define HCI_HOTKEY_EVENT 0x001e
#define HCI_LCD_BRIGHTNESS 0x002a #define HCI_LCD_BRIGHTNESS 0x002a
#define HCI_WIRELESS 0x0056
/* field definitions */ /* field definitions */
#define HCI_LCD_BRIGHTNESS_BITS 3 #define HCI_LCD_BRIGHTNESS_BITS 3
@ -98,9 +103,14 @@ MODULE_LICENSE("GPL");
#define HCI_VIDEO_OUT_LCD 0x1 #define HCI_VIDEO_OUT_LCD 0x1
#define HCI_VIDEO_OUT_CRT 0x2 #define HCI_VIDEO_OUT_CRT 0x2
#define HCI_VIDEO_OUT_TV 0x4 #define HCI_VIDEO_OUT_TV 0x4
#define HCI_WIRELESS_KILL_SWITCH 0x01
#define HCI_WIRELESS_BT_PRESENT 0x0f
#define HCI_WIRELESS_BT_ATTACH 0x40
#define HCI_WIRELESS_BT_POWER 0x80
static const struct acpi_device_id toshiba_device_ids[] = { static const struct acpi_device_id toshiba_device_ids[] = {
{"TOS6200", 0}, {"TOS6200", 0},
{"TOS6208", 0},
{"TOS1900", 0}, {"TOS1900", 0},
{"", 0}, {"", 0},
}; };
@ -193,7 +203,7 @@ static acpi_status hci_raw(const u32 in[HCI_WORDS], u32 out[HCI_WORDS])
return status; return status;
} }
/* common hci tasks (get or set one value) /* common hci tasks (get or set one or two value)
* *
* In addition to the ACPI status, the HCI system returns a result which * In addition to the ACPI status, the HCI system returns a result which
* may be useful (such as "not supported"). * may be useful (such as "not supported").
@ -218,6 +228,152 @@ static acpi_status hci_read1(u32 reg, u32 * out1, u32 * result)
return status; return status;
} }
static acpi_status hci_write2(u32 reg, u32 in1, u32 in2, u32 *result)
{
u32 in[HCI_WORDS] = { HCI_SET, reg, in1, in2, 0, 0 };
u32 out[HCI_WORDS];
acpi_status status = hci_raw(in, out);
*result = (status == AE_OK) ? out[0] : HCI_FAILURE;
return status;
}
static acpi_status hci_read2(u32 reg, u32 *out1, u32 *out2, u32 *result)
{
u32 in[HCI_WORDS] = { HCI_GET, reg, *out1, *out2, 0, 0 };
u32 out[HCI_WORDS];
acpi_status status = hci_raw(in, out);
*out1 = out[2];
*out2 = out[3];
*result = (status == AE_OK) ? out[0] : HCI_FAILURE;
return status;
}
struct toshiba_acpi_dev {
struct platform_device *p_dev;
struct rfkill *rfk_dev;
struct input_polled_dev *poll_dev;
const char *bt_name;
const char *rfk_name;
bool last_rfk_state;
struct mutex mutex;
};
static struct toshiba_acpi_dev toshiba_acpi = {
.bt_name = "Toshiba Bluetooth",
.rfk_name = "Toshiba RFKill Switch",
.last_rfk_state = false,
};
/* Bluetooth rfkill handlers */
static u32 hci_get_bt_present(bool *present)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
if (hci_result == HCI_SUCCESS)
*present = (value & HCI_WIRELESS_BT_PRESENT) ? true : false;
return hci_result;
}
static u32 hci_get_bt_on(bool *on)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0x0001;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
if (hci_result == HCI_SUCCESS)
*on = (value & HCI_WIRELESS_BT_POWER) &&
(value & HCI_WIRELESS_BT_ATTACH);
return hci_result;
}
static u32 hci_get_radio_state(bool *radio_state)
{
u32 hci_result;
u32 value, value2;
value = 0;
value2 = 0x0001;
hci_read2(HCI_WIRELESS, &value, &value2, &hci_result);
*radio_state = value & HCI_WIRELESS_KILL_SWITCH;
return hci_result;
}
static int bt_rfkill_toggle_radio(void *data, enum rfkill_state state)
{
u32 result1, result2;
u32 value;
bool radio_state;
struct toshiba_acpi_dev *dev = data;
value = (state == RFKILL_STATE_UNBLOCKED);
if (hci_get_radio_state(&radio_state) != HCI_SUCCESS)
return -EFAULT;
switch (state) {
case RFKILL_STATE_UNBLOCKED:
if (!radio_state)
return -EPERM;
break;
case RFKILL_STATE_SOFT_BLOCKED:
break;
default:
return -EINVAL;
}
mutex_lock(&dev->mutex);
hci_write2(HCI_WIRELESS, value, HCI_WIRELESS_BT_POWER, &result1);
hci_write2(HCI_WIRELESS, value, HCI_WIRELESS_BT_ATTACH, &result2);
mutex_unlock(&dev->mutex);
if (result1 != HCI_SUCCESS || result2 != HCI_SUCCESS)
return -EFAULT;
return 0;
}
static void bt_poll_rfkill(struct input_polled_dev *poll_dev)
{
bool state_changed;
bool new_rfk_state;
bool value;
u32 hci_result;
struct toshiba_acpi_dev *dev = poll_dev->private;
hci_result = hci_get_radio_state(&value);
if (hci_result != HCI_SUCCESS)
return; /* Can't do anything useful */
new_rfk_state = value;
mutex_lock(&dev->mutex);
state_changed = new_rfk_state != dev->last_rfk_state;
dev->last_rfk_state = new_rfk_state;
mutex_unlock(&dev->mutex);
if (unlikely(state_changed)) {
rfkill_force_state(dev->rfk_dev,
new_rfk_state ?
RFKILL_STATE_SOFT_BLOCKED :
RFKILL_STATE_HARD_BLOCKED);
input_report_switch(poll_dev->input, SW_RFKILL_ALL,
new_rfk_state);
}
}
static struct proc_dir_entry *toshiba_proc_dir /*= 0*/ ; static struct proc_dir_entry *toshiba_proc_dir /*= 0*/ ;
static struct backlight_device *toshiba_backlight_device; static struct backlight_device *toshiba_backlight_device;
static int force_fan; static int force_fan;
@ -547,6 +703,14 @@ static struct backlight_ops toshiba_backlight_data = {
static void toshiba_acpi_exit(void) static void toshiba_acpi_exit(void)
{ {
if (toshiba_acpi.poll_dev) {
input_unregister_polled_device(toshiba_acpi.poll_dev);
input_free_polled_device(toshiba_acpi.poll_dev);
}
if (toshiba_acpi.rfk_dev)
rfkill_unregister(toshiba_acpi.rfk_dev);
if (toshiba_backlight_device) if (toshiba_backlight_device)
backlight_device_unregister(toshiba_backlight_device); backlight_device_unregister(toshiba_backlight_device);
@ -555,6 +719,8 @@ static void toshiba_acpi_exit(void)
if (toshiba_proc_dir) if (toshiba_proc_dir)
remove_proc_entry(PROC_TOSHIBA, acpi_root_dir); remove_proc_entry(PROC_TOSHIBA, acpi_root_dir);
platform_device_unregister(toshiba_acpi.p_dev);
return; return;
} }
@ -562,6 +728,10 @@ static int __init toshiba_acpi_init(void)
{ {
acpi_status status = AE_OK; acpi_status status = AE_OK;
u32 hci_result; u32 hci_result;
bool bt_present;
bool bt_on;
bool radio_on;
int ret = 0;
if (acpi_disabled) if (acpi_disabled)
return -ENODEV; return -ENODEV;
@ -578,6 +748,18 @@ static int __init toshiba_acpi_init(void)
TOSHIBA_ACPI_VERSION); TOSHIBA_ACPI_VERSION);
printk(MY_INFO " HCI method: %s\n", method_hci); printk(MY_INFO " HCI method: %s\n", method_hci);
mutex_init(&toshiba_acpi.mutex);
toshiba_acpi.p_dev = platform_device_register_simple("toshiba_acpi",
-1, NULL, 0);
if (IS_ERR(toshiba_acpi.p_dev)) {
ret = PTR_ERR(toshiba_acpi.p_dev);
printk(MY_ERR "unable to register platform device\n");
toshiba_acpi.p_dev = NULL;
toshiba_acpi_exit();
return ret;
}
force_fan = 0; force_fan = 0;
key_event_valid = 0; key_event_valid = 0;
@ -586,19 +768,23 @@ static int __init toshiba_acpi_init(void)
toshiba_proc_dir = proc_mkdir(PROC_TOSHIBA, acpi_root_dir); toshiba_proc_dir = proc_mkdir(PROC_TOSHIBA, acpi_root_dir);
if (!toshiba_proc_dir) { if (!toshiba_proc_dir) {
status = AE_ERROR; toshiba_acpi_exit();
return -ENODEV;
} else { } else {
toshiba_proc_dir->owner = THIS_MODULE; toshiba_proc_dir->owner = THIS_MODULE;
status = add_device(); status = add_device();
if (ACPI_FAILURE(status)) if (ACPI_FAILURE(status)) {
remove_proc_entry(PROC_TOSHIBA, acpi_root_dir); toshiba_acpi_exit();
return -ENODEV;
}
} }
toshiba_backlight_device = backlight_device_register("toshiba",NULL, toshiba_backlight_device = backlight_device_register("toshiba",
&toshiba_acpi.p_dev->dev,
NULL, NULL,
&toshiba_backlight_data); &toshiba_backlight_data);
if (IS_ERR(toshiba_backlight_device)) { if (IS_ERR(toshiba_backlight_device)) {
int ret = PTR_ERR(toshiba_backlight_device); ret = PTR_ERR(toshiba_backlight_device);
printk(KERN_ERR "Could not register toshiba backlight device\n"); printk(KERN_ERR "Could not register toshiba backlight device\n");
toshiba_backlight_device = NULL; toshiba_backlight_device = NULL;
@ -607,7 +793,66 @@ static int __init toshiba_acpi_init(void)
} }
toshiba_backlight_device->props.max_brightness = HCI_LCD_BRIGHTNESS_LEVELS - 1; toshiba_backlight_device->props.max_brightness = HCI_LCD_BRIGHTNESS_LEVELS - 1;
return (ACPI_SUCCESS(status)) ? 0 : -ENODEV; /* Register rfkill switch for Bluetooth */
if (hci_get_bt_present(&bt_present) == HCI_SUCCESS && bt_present) {
toshiba_acpi.rfk_dev = rfkill_allocate(&toshiba_acpi.p_dev->dev,
RFKILL_TYPE_BLUETOOTH);
if (!toshiba_acpi.rfk_dev) {
printk(MY_ERR "unable to allocate rfkill device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
toshiba_acpi.rfk_dev->name = toshiba_acpi.bt_name;
toshiba_acpi.rfk_dev->toggle_radio = bt_rfkill_toggle_radio;
toshiba_acpi.rfk_dev->user_claim_unsupported = 1;
toshiba_acpi.rfk_dev->data = &toshiba_acpi;
if (hci_get_bt_on(&bt_on) == HCI_SUCCESS && bt_on) {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_UNBLOCKED;
} else if (hci_get_radio_state(&radio_on) == HCI_SUCCESS &&
radio_on) {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_SOFT_BLOCKED;
} else {
toshiba_acpi.rfk_dev->state = RFKILL_STATE_HARD_BLOCKED;
}
ret = rfkill_register(toshiba_acpi.rfk_dev);
if (ret) {
printk(MY_ERR "unable to register rfkill device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
}
/* Register input device for kill switch */
toshiba_acpi.poll_dev = input_allocate_polled_device();
if (!toshiba_acpi.poll_dev) {
printk(MY_ERR "unable to allocate kill-switch input device\n");
toshiba_acpi_exit();
return -ENOMEM;
}
toshiba_acpi.poll_dev->private = &toshiba_acpi;
toshiba_acpi.poll_dev->poll = bt_poll_rfkill;
toshiba_acpi.poll_dev->poll_interval = 1000; /* msecs */
toshiba_acpi.poll_dev->input->name = toshiba_acpi.rfk_name;
toshiba_acpi.poll_dev->input->id.bustype = BUS_HOST;
toshiba_acpi.poll_dev->input->id.vendor = 0x0930; /* Toshiba USB ID */
set_bit(EV_SW, toshiba_acpi.poll_dev->input->evbit);
set_bit(SW_RFKILL_ALL, toshiba_acpi.poll_dev->input->swbit);
input_report_switch(toshiba_acpi.poll_dev->input, SW_RFKILL_ALL, TRUE);
ret = input_register_polled_device(toshiba_acpi.poll_dev);
if (ret) {
printk(MY_ERR "unable to register kill-switch input device\n");
rfkill_free(toshiba_acpi.rfk_dev);
toshiba_acpi.rfk_dev = NULL;
toshiba_acpi_exit();
return ret;
}
return 0;
} }
module_init(toshiba_acpi_init); module_init(toshiba_acpi_init);


@ -1270,7 +1270,7 @@ static int comp_tx(struct eni_dev *eni_dev,int *pcr,int reserved,int *pre,
if (*pre < 3) (*pre)++; /* else fail later */ if (*pre < 3) (*pre)++; /* else fail later */
div = pre_div[*pre]*-*pcr; div = pre_div[*pre]*-*pcr;
DPRINTK("max div %d\n",div); DPRINTK("max div %d\n",div);
*res = (TS_CLOCK+div-1)/div-1; *res = DIV_ROUND_UP(TS_CLOCK, div)-1;
} }
if (*res < 0) *res = 0; if (*res < 0) *res = 0;
if (*res > MID_SEG_MAX_RATE) *res = MID_SEG_MAX_RATE; if (*res > MID_SEG_MAX_RATE) *res = MID_SEG_MAX_RATE;


@ -635,7 +635,7 @@ static int make_rate (const hrz_dev * dev, u32 c, rounding r,
// take care of rounding // take care of rounding
switch (r) { switch (r) {
case round_down: case round_down:
pre = (br+(c<<div)-1)/(c<<div); pre = DIV_ROUND_UP(br, c<<div);
// but p must be non-zero // but p must be non-zero
if (!pre) if (!pre)
pre = 1; pre = 1;
@ -668,7 +668,7 @@ static int make_rate (const hrz_dev * dev, u32 c, rounding r,
// take care of rounding // take care of rounding
switch (r) { switch (r) {
case round_down: case round_down:
pre = (br+(c<<div)-1)/(c<<div); pre = DIV_ROUND_UP(br, c<<div);
break; break;
case round_nearest: case round_nearest:
pre = (br+(c<<div)/2)/(c<<div); pre = (br+(c<<div)/2)/(c<<div);
@ -698,7 +698,7 @@ got_it:
if (bits) if (bits)
*bits = (div<<CLOCK_SELECT_SHIFT) | (pre-1); *bits = (div<<CLOCK_SELECT_SHIFT) | (pre-1);
if (actual) { if (actual) {
*actual = (br + (pre<<div) - 1) / (pre<<div); *actual = DIV_ROUND_UP(br, pre<<div);
PRINTD (DBG_QOS, "actual rate: %u", *actual); PRINTD (DBG_QOS, "actual rate: %u", *actual);
} }
return 0; return 0;
@ -1967,7 +1967,7 @@ static int __devinit hrz_init (hrz_dev * dev) {
// Set the max AAL5 cell count to be just enough to contain the // Set the max AAL5 cell count to be just enough to contain the
// largest AAL5 frame that the user wants to receive // largest AAL5 frame that the user wants to receive
wr_regw (dev, MAX_AAL5_CELL_COUNT_OFF, wr_regw (dev, MAX_AAL5_CELL_COUNT_OFF,
(max_rx_size + ATM_AAL5_TRAILER + ATM_CELL_PAYLOAD - 1) / ATM_CELL_PAYLOAD); DIV_ROUND_UP(max_rx_size + ATM_AAL5_TRAILER, ATM_CELL_PAYLOAD));
// Enable receive // Enable receive
wr_regw (dev, RX_CONFIG_OFF, rd_regw (dev, RX_CONFIG_OFF) | RX_ENABLE); wr_regw (dev, RX_CONFIG_OFF, rd_regw (dev, RX_CONFIG_OFF) | RX_ENABLE);


@ -1114,11 +1114,8 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
rpp = &vc->rcv.rx_pool; rpp = &vc->rcv.rx_pool;
__skb_queue_tail(&rpp->queue, skb);
rpp->len += skb->len; rpp->len += skb->len;
if (!rpp->count++)
rpp->first = skb;
*rpp->last = skb;
rpp->last = &skb->next;
if (stat & SAR_RSQE_EPDU) { if (stat & SAR_RSQE_EPDU) {
unsigned char *l1l2; unsigned char *l1l2;
@ -1145,7 +1142,7 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
atomic_inc(&vcc->stats->rx_err); atomic_inc(&vcc->stats->rx_err);
return; return;
} }
if (rpp->count > 1) { if (skb_queue_len(&rpp->queue) > 1) {
struct sk_buff *sb; struct sk_buff *sb;
skb = dev_alloc_skb(rpp->len); skb = dev_alloc_skb(rpp->len);
@ -1161,12 +1158,9 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
dev_kfree_skb(skb); dev_kfree_skb(skb);
return; return;
} }
sb = rpp->first; skb_queue_walk(&rpp->queue, sb)
for (i = 0; i < rpp->count; i++) {
memcpy(skb_put(skb, sb->len), memcpy(skb_put(skb, sb->len),
sb->data, sb->len); sb->data, sb->len);
sb = sb->next;
}
recycle_rx_pool_skb(card, rpp); recycle_rx_pool_skb(card, rpp);
@ -1180,7 +1174,6 @@ dequeue_rx(struct idt77252_dev *card, struct rsq_entry *rsqe)
return; return;
} }
skb->next = NULL;
flush_rx_pool(card, rpp); flush_rx_pool(card, rpp);
if (!atm_charge(vcc, skb->truesize)) { if (!atm_charge(vcc, skb->truesize)) {
@ -1918,25 +1911,18 @@ recycle_rx_skb(struct idt77252_dev *card, struct sk_buff *skb)
static void static void
flush_rx_pool(struct idt77252_dev *card, struct rx_pool *rpp) flush_rx_pool(struct idt77252_dev *card, struct rx_pool *rpp)
{ {
skb_queue_head_init(&rpp->queue);
rpp->len = 0; rpp->len = 0;
rpp->count = 0;
rpp->first = NULL;
rpp->last = &rpp->first;
} }
static void static void
recycle_rx_pool_skb(struct idt77252_dev *card, struct rx_pool *rpp) recycle_rx_pool_skb(struct idt77252_dev *card, struct rx_pool *rpp)
{ {
struct sk_buff *skb, *next; struct sk_buff *skb, *tmp;
int i;
skb = rpp->first; skb_queue_walk_safe(&rpp->queue, skb, tmp)
for (i = 0; i < rpp->count; i++) {
next = skb->next;
skb->next = NULL;
recycle_rx_skb(card, skb); recycle_rx_skb(card, skb);
skb = next;
}
flush_rx_pool(card, rpp); flush_rx_pool(card, rpp);
} }
@ -2537,7 +2523,7 @@ idt77252_close(struct atm_vcc *vcc)
waitfor_idle(card); waitfor_idle(card);
spin_unlock_irqrestore(&card->cmd_lock, flags); spin_unlock_irqrestore(&card->cmd_lock, flags);
if (vc->rcv.rx_pool.count) { if (skb_queue_len(&vc->rcv.rx_pool.queue) != 0) {
DPRINTK("%s: closing a VC with pending rx buffers.\n", DPRINTK("%s: closing a VC with pending rx buffers.\n",
card->name); card->name);
@ -2970,7 +2956,7 @@ close_card_oam(struct idt77252_dev *card)
waitfor_idle(card); waitfor_idle(card);
spin_unlock_irqrestore(&card->cmd_lock, flags); spin_unlock_irqrestore(&card->cmd_lock, flags);
if (vc->rcv.rx_pool.count) { if (skb_queue_len(&vc->rcv.rx_pool.queue) != 0) {
DPRINTK("%s: closing a VC " DPRINTK("%s: closing a VC "
"with pending rx buffers.\n", "with pending rx buffers.\n",
card->name); card->name);


@ -173,10 +173,8 @@ struct scq_info
}; };
struct rx_pool { struct rx_pool {
struct sk_buff *first; struct sk_buff_head queue;
struct sk_buff **last;
unsigned int len; unsigned int len;
unsigned int count;
}; };
struct aal1 { struct aal1 {


@ -496,8 +496,8 @@ static int open_rx_first(struct atm_vcc *vcc)
vcc->qos.rxtp.max_sdu = 65464; vcc->qos.rxtp.max_sdu = 65464;
/* fix this - we may want to receive 64kB SDUs /* fix this - we may want to receive 64kB SDUs
later */ later */
cells = (vcc->qos.rxtp.max_sdu+ATM_AAL5_TRAILER+ cells = DIV_ROUND_UP(vcc->qos.rxtp.max_sdu + ATM_AAL5_TRAILER,
ATM_CELL_PAYLOAD-1)/ATM_CELL_PAYLOAD; ATM_CELL_PAYLOAD);
zatm_vcc->pool = pool_index(cells*ATM_CELL_PAYLOAD); zatm_vcc->pool = pool_index(cells*ATM_CELL_PAYLOAD);
} }
else { else {
@ -820,7 +820,7 @@ static int alloc_shaper(struct atm_dev *dev,int *pcr,int min,int max,int ubr)
} }
else { else {
i = 255; i = 255;
m = (ATM_OC3_PCR*255+max-1)/max; m = DIV_ROUND_UP(ATM_OC3_PCR*255, max);
} }
} }
if (i > m) { if (i > m) {
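The zatm hunks swap the open-coded `(n + d - 1) / d` round-up for the kernel's `DIV_ROUND_UP` macro; the two are equivalent for positive integers. A small self-contained sketch (the macro matches the `<linux/kernel.h>` definition; the ATM constants mirror the values the code above relies on, and `cells_for_sdu()` is a made-up helper, not a kernel function):

```c
#include <assert.h>

/* Same definition as the kernel's DIV_ROUND_UP in <linux/kernel.h>:
 * integer division rounded up instead of truncated. */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* ATM cell geometry as used in the hunk above. */
#define ATM_CELL_PAYLOAD 48
#define ATM_AAL5_TRAILER 8

static int cells_for_sdu(int max_sdu)
{
	/* was: (max_sdu + ATM_AAL5_TRAILER + ATM_CELL_PAYLOAD - 1)
	 *      / ATM_CELL_PAYLOAD */
	return DIV_ROUND_UP(max_sdu + ATM_AAL5_TRAILER, ATM_CELL_PAYLOAD);
}
```

Beyond readability, the macro keeps the rounding intent in one audited place instead of repeating the `+ d - 1` trick at every call site.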


@ -159,11 +159,8 @@ struct aoedev {
sector_t ssize; sector_t ssize;
struct timer_list timer; struct timer_list timer;
spinlock_t lock; spinlock_t lock;
struct sk_buff *sendq_hd; /* packets needing to be sent, list head */ struct sk_buff_head sendq;
struct sk_buff *sendq_tl; struct sk_buff_head skbpool;
struct sk_buff *skbpool_hd;
struct sk_buff *skbpool_tl;
int nskbpool;
mempool_t *bufpool; /* for deadlock-free Buf allocation */ mempool_t *bufpool; /* for deadlock-free Buf allocation */
struct list_head bufq; /* queue of bios to work on */ struct list_head bufq; /* queue of bios to work on */
struct buf *inprocess; /* the one we're currently working on */ struct buf *inprocess; /* the one we're currently working on */
@ -199,7 +196,7 @@ int aoedev_flush(const char __user *str, size_t size);
int aoenet_init(void); int aoenet_init(void);
void aoenet_exit(void); void aoenet_exit(void);
void aoenet_xmit(struct sk_buff *); void aoenet_xmit(struct sk_buff_head *);
int is_aoe_netif(struct net_device *ifp); int is_aoe_netif(struct net_device *ifp);
int set_aoe_iflist(const char __user *str, size_t size); int set_aoe_iflist(const char __user *str, size_t size);


@ -158,9 +158,9 @@ aoeblk_release(struct inode *inode, struct file *filp)
static int static int
aoeblk_make_request(struct request_queue *q, struct bio *bio) aoeblk_make_request(struct request_queue *q, struct bio *bio)
{ {
struct sk_buff_head queue;
struct aoedev *d; struct aoedev *d;
struct buf *buf; struct buf *buf;
struct sk_buff *sl;
ulong flags; ulong flags;
blk_queue_bounce(q, &bio); blk_queue_bounce(q, &bio);
@ -213,11 +213,11 @@ aoeblk_make_request(struct request_queue *q, struct bio *bio)
list_add_tail(&buf->bufs, &d->bufq); list_add_tail(&buf->bufs, &d->bufq);
aoecmd_work(d); aoecmd_work(d);
sl = d->sendq_hd; __skb_queue_head_init(&queue);
d->sendq_hd = d->sendq_tl = NULL; skb_queue_splice_init(&d->sendq, &queue);
spin_unlock_irqrestore(&d->lock, flags); spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl); aoenet_xmit(&queue);
return 0; return 0;
} }
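The pattern introduced in `aoeblk_make_request` (and repeated below in `rexmit_timer` and `aoecmd_ata_rsp`) is: splice the device's pending queue onto a private on-stack queue while the lock is held, drop the lock, then transmit. A userspace sketch of the O(1) splice-and-reinit step (invented types; note the kernel's `skb_queue_splice_init()` splices at the head of the destination, which is equivalent here only because the on-stack destination starts empty — this sketch splices at the tail):

```c
#include <assert.h>

struct node { struct node *next, *prev; };
struct queue { struct node *next, *prev; unsigned int qlen; };

static void queue_init(struct queue *q)
{
	q->next = q->prev = (struct node *)q;
	q->qlen = 0;
}

static void queue_tail(struct queue *q, struct node *n)
{
	n->next = (struct node *)q;
	n->prev = q->prev;
	q->prev->next = n;
	q->prev = n;
	q->qlen++;
}

/* Move every element of src onto the tail of dst in O(1) and leave
 * src empty, so the caller can drop its lock before transmitting. */
static void queue_splice_init(struct queue *src, struct queue *dst)
{
	struct node *first, *last;

	if (src->qlen == 0)
		return;
	first = src->next;
	last = src->prev;
	first->prev = dst->prev;	/* stitch src's chain after dst's tail */
	dst->prev->next = first;
	last->next = (struct node *)dst;
	dst->prev = last;
	dst->qlen += src->qlen;
	queue_init(src);		/* src is now empty and reusable */
}
```

The payoff is lock hold time: only the constant-time splice happens under the spinlock, while the potentially long per-packet transmit loop runs with interrupts restored.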


@ -9,6 +9,7 @@
#include <linux/completion.h> #include <linux/completion.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/smp_lock.h> #include <linux/smp_lock.h>
#include <linux/skbuff.h>
#include "aoe.h" #include "aoe.h"
enum { enum {
@ -103,7 +104,12 @@ loop:
spin_lock_irqsave(&d->lock, flags); spin_lock_irqsave(&d->lock, flags);
goto loop; goto loop;
} }
aoenet_xmit(skb); if (skb) {
struct sk_buff_head queue;
__skb_queue_head_init(&queue);
__skb_queue_tail(&queue, skb);
aoenet_xmit(&queue);
}
aoecmd_cfg(major, minor); aoecmd_cfg(major, minor);
return 0; return 0;
} }


@ -114,29 +114,22 @@ ifrotate(struct aoetgt *t)
static void static void
skb_pool_put(struct aoedev *d, struct sk_buff *skb) skb_pool_put(struct aoedev *d, struct sk_buff *skb)
{ {
if (!d->skbpool_hd) __skb_queue_tail(&d->skbpool, skb);
d->skbpool_hd = skb;
else
d->skbpool_tl->next = skb;
d->skbpool_tl = skb;
} }
static struct sk_buff * static struct sk_buff *
skb_pool_get(struct aoedev *d) skb_pool_get(struct aoedev *d)
{ {
struct sk_buff *skb; struct sk_buff *skb = skb_peek(&d->skbpool);
skb = d->skbpool_hd;
if (skb && atomic_read(&skb_shinfo(skb)->dataref) == 1) { if (skb && atomic_read(&skb_shinfo(skb)->dataref) == 1) {
d->skbpool_hd = skb->next; __skb_unlink(skb, &d->skbpool);
skb->next = NULL;
return skb; return skb;
} }
if (d->nskbpool < NSKBPOOLMAX if (skb_queue_len(&d->skbpool) < NSKBPOOLMAX &&
&& (skb = new_skb(ETH_ZLEN))) { (skb = new_skb(ETH_ZLEN)))
d->nskbpool++;
return skb; return skb;
}
return NULL; return NULL;
} }
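The rewritten `skb_pool_get()` uses the peek-then-unlink idiom: `skb_peek()` looks at the head without removing it, and the buffer is `__skb_unlink()`ed only when it is actually reusable (`dataref == 1`); otherwise a fresh buffer is allocated while the pool is below `NSKBPOOLMAX`. A userspace sketch of that flow (all names invented; a plain `refcnt` field stands in for the `skb_shinfo` dataref check):

```c
#include <assert.h>
#include <stdlib.h>

struct buf { struct buf *next, *prev; int refcnt; };
struct pool { struct buf *next, *prev; unsigned int qlen; };

static void pool_init(struct pool *p)
{
	p->next = p->prev = (struct buf *)p;
	p->qlen = 0;
}

static void pool_put(struct pool *p, struct buf *b)
{
	b->next = (struct buf *)p;
	b->prev = p->prev;
	p->prev->next = b;
	p->prev = b;
	p->qlen++;
}

static struct buf *pool_peek(struct pool *p)
{
	return p->qlen ? p->next : NULL;	/* look, don't remove */
}

/* Reuse the head buffer only if nobody else holds a reference;
 * otherwise fall back to a fresh allocation, bounded by 'max'. */
static struct buf *pool_get(struct pool *p, unsigned int max)
{
	struct buf *b = pool_peek(p);

	if (b && b->refcnt == 1) {
		b->prev->next = b->next;	/* the __skb_unlink() step */
		b->next->prev = b->prev;
		p->qlen--;
		return b;
	}
	if (p->qlen < max)
		return calloc(1, sizeof(struct buf));
	return NULL;
}
```

Peeking first matters because a still-referenced head buffer must stay queued: it may become reusable later, and unlinking it early would leak it or hand out shared memory.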
@ -293,29 +286,22 @@ aoecmd_ata_rw(struct aoedev *d)
skb->dev = t->ifp->nd; skb->dev = t->ifp->nd;
skb = skb_clone(skb, GFP_ATOMIC); skb = skb_clone(skb, GFP_ATOMIC);
if (skb) { if (skb)
if (d->sendq_hd) __skb_queue_tail(&d->sendq, skb);
d->sendq_tl->next = skb;
else
d->sendq_hd = skb;
d->sendq_tl = skb;
}
return 1; return 1;
} }
/* some callers cannot sleep, and they can call this function, /* some callers cannot sleep, and they can call this function,
* transmitting the packets later, when interrupts are on * transmitting the packets later, when interrupts are on
*/ */
static struct sk_buff * static void
aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail) aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff_head *queue)
{ {
struct aoe_hdr *h; struct aoe_hdr *h;
struct aoe_cfghdr *ch; struct aoe_cfghdr *ch;
struct sk_buff *skb, *sl, *sl_tail; struct sk_buff *skb;
struct net_device *ifp; struct net_device *ifp;
sl = sl_tail = NULL;
read_lock(&dev_base_lock); read_lock(&dev_base_lock);
for_each_netdev(&init_net, ifp) { for_each_netdev(&init_net, ifp) {
dev_hold(ifp); dev_hold(ifp);
@ -329,8 +315,7 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)
} }
skb_put(skb, sizeof *h + sizeof *ch); skb_put(skb, sizeof *h + sizeof *ch);
skb->dev = ifp; skb->dev = ifp;
if (sl_tail == NULL) __skb_queue_tail(queue, skb);
sl_tail = skb;
h = (struct aoe_hdr *) skb_mac_header(skb); h = (struct aoe_hdr *) skb_mac_header(skb);
memset(h, 0, sizeof *h + sizeof *ch); memset(h, 0, sizeof *h + sizeof *ch);
@ -342,16 +327,10 @@ aoecmd_cfg_pkts(ushort aoemajor, unsigned char aoeminor, struct sk_buff **tail)
h->minor = aoeminor; h->minor = aoeminor;
h->cmd = AOECMD_CFG; h->cmd = AOECMD_CFG;
skb->next = sl;
sl = skb;
cont: cont:
dev_put(ifp); dev_put(ifp);
} }
read_unlock(&dev_base_lock); read_unlock(&dev_base_lock);
if (tail != NULL)
*tail = sl_tail;
return sl;
} }
static void static void
@ -406,11 +385,7 @@ resend(struct aoedev *d, struct aoetgt *t, struct frame *f)
skb = skb_clone(skb, GFP_ATOMIC); skb = skb_clone(skb, GFP_ATOMIC);
if (skb == NULL) if (skb == NULL)
return; return;
if (d->sendq_hd) __skb_queue_tail(&d->sendq, skb);
d->sendq_tl->next = skb;
else
d->sendq_hd = skb;
d->sendq_tl = skb;
} }
static int static int
@ -508,16 +483,15 @@ ata_scnt(unsigned char *packet) {
static void static void
rexmit_timer(ulong vp) rexmit_timer(ulong vp)
{ {
struct sk_buff_head queue;
struct aoedev *d; struct aoedev *d;
struct aoetgt *t, **tt, **te; struct aoetgt *t, **tt, **te;
struct aoeif *ifp; struct aoeif *ifp;
struct frame *f, *e; struct frame *f, *e;
struct sk_buff *sl;
register long timeout; register long timeout;
ulong flags, n; ulong flags, n;
d = (struct aoedev *) vp; d = (struct aoedev *) vp;
sl = NULL;
/* timeout is always ~150% of the moving average */ /* timeout is always ~150% of the moving average */
timeout = d->rttavg; timeout = d->rttavg;
@ -589,7 +563,7 @@ rexmit_timer(ulong vp)
} }
} }
if (d->sendq_hd) { if (!skb_queue_empty(&d->sendq)) {
n = d->rttavg <<= 1; n = d->rttavg <<= 1;
if (n > MAXTIMER) if (n > MAXTIMER)
d->rttavg = MAXTIMER; d->rttavg = MAXTIMER;
@ -600,15 +574,15 @@ rexmit_timer(ulong vp)
aoecmd_work(d); aoecmd_work(d);
} }
sl = d->sendq_hd; __skb_queue_head_init(&queue);
d->sendq_hd = d->sendq_tl = NULL; skb_queue_splice_init(&d->sendq, &queue);
d->timer.expires = jiffies + TIMERTICK; d->timer.expires = jiffies + TIMERTICK;
add_timer(&d->timer); add_timer(&d->timer);
spin_unlock_irqrestore(&d->lock, flags); spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl); aoenet_xmit(&queue);
} }
/* enters with d->lock held */ /* enters with d->lock held */
@ -772,12 +746,12 @@ diskstats(struct gendisk *disk, struct bio *bio, ulong duration, sector_t sector
void void
aoecmd_ata_rsp(struct sk_buff *skb) aoecmd_ata_rsp(struct sk_buff *skb)
{ {
struct sk_buff_head queue;
struct aoedev *d; struct aoedev *d;
struct aoe_hdr *hin, *hout; struct aoe_hdr *hin, *hout;
struct aoe_atahdr *ahin, *ahout; struct aoe_atahdr *ahin, *ahout;
struct frame *f; struct frame *f;
struct buf *buf; struct buf *buf;
struct sk_buff *sl;
struct aoetgt *t; struct aoetgt *t;
struct aoeif *ifp; struct aoeif *ifp;
register long n; register long n;
@ -898,21 +872,21 @@ aoecmd_ata_rsp(struct sk_buff *skb)
aoecmd_work(d); aoecmd_work(d);
xmit: xmit:
sl = d->sendq_hd; __skb_queue_head_init(&queue);
d->sendq_hd = d->sendq_tl = NULL; skb_queue_splice_init(&d->sendq, &queue);
spin_unlock_irqrestore(&d->lock, flags); spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl); aoenet_xmit(&queue);
} }
void void
aoecmd_cfg(ushort aoemajor, unsigned char aoeminor) aoecmd_cfg(ushort aoemajor, unsigned char aoeminor)
{ {
struct sk_buff *sl; struct sk_buff_head queue;
sl = aoecmd_cfg_pkts(aoemajor, aoeminor, NULL); __skb_queue_head_init(&queue);
aoecmd_cfg_pkts(aoemajor, aoeminor, &queue);
aoenet_xmit(sl); aoenet_xmit(&queue);
} }
struct sk_buff * struct sk_buff *
@ -1081,7 +1055,12 @@ aoecmd_cfg_rsp(struct sk_buff *skb)
spin_unlock_irqrestore(&d->lock, flags); spin_unlock_irqrestore(&d->lock, flags);
aoenet_xmit(sl); if (sl) {
struct sk_buff_head queue;
__skb_queue_head_init(&queue);
__skb_queue_tail(&queue, sl);
aoenet_xmit(&queue);
}
} }
void void


@ -188,14 +188,12 @@ skbfree(struct sk_buff *skb)
static void static void
skbpoolfree(struct aoedev *d) skbpoolfree(struct aoedev *d)
{ {
struct sk_buff *skb; struct sk_buff *skb, *tmp;
while ((skb = d->skbpool_hd)) { skb_queue_walk_safe(&d->skbpool, skb, tmp)
d->skbpool_hd = skb->next;
skb->next = NULL;
skbfree(skb); skbfree(skb);
}
d->skbpool_tl = NULL; __skb_queue_head_init(&d->skbpool);
} }
/* find it or malloc it */ /* find it or malloc it */
@ -217,6 +215,8 @@ aoedev_by_sysminor_m(ulong sysminor)
goto out; goto out;
INIT_WORK(&d->work, aoecmd_sleepwork); INIT_WORK(&d->work, aoecmd_sleepwork);
spin_lock_init(&d->lock); spin_lock_init(&d->lock);
skb_queue_head_init(&d->sendq);
skb_queue_head_init(&d->skbpool);
init_timer(&d->timer); init_timer(&d->timer);
d->timer.data = (ulong) d; d->timer.data = (ulong) d;
d->timer.function = dummy_timer; d->timer.function = dummy_timer;


@ -7,6 +7,7 @@
#include <linux/hdreg.h> #include <linux/hdreg.h>
#include <linux/blkdev.h> #include <linux/blkdev.h>
#include <linux/module.h> #include <linux/module.h>
#include <linux/skbuff.h>
#include "aoe.h" #include "aoe.h"
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");


@ -95,13 +95,12 @@ mac_addr(char addr[6])
} }
void void
aoenet_xmit(struct sk_buff *sl) aoenet_xmit(struct sk_buff_head *queue)
{ {
struct sk_buff *skb; struct sk_buff *skb, *tmp;
while ((skb = sl)) { skb_queue_walk_safe(queue, skb, tmp) {
sl = sl->next; __skb_unlink(skb, queue);
skb->next = skb->prev = NULL;
dev_queue_xmit(skb); dev_queue_xmit(skb);
} }
} }


@ -352,14 +352,14 @@ static int bcsp_flush(struct hci_uart *hu)
/* Remove ack'ed packets */ /* Remove ack'ed packets */
static void bcsp_pkt_cull(struct bcsp_struct *bcsp) static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
{ {
struct sk_buff *skb, *tmp;
unsigned long flags; unsigned long flags;
struct sk_buff *skb;
int i, pkts_to_be_removed; int i, pkts_to_be_removed;
u8 seqno; u8 seqno;
spin_lock_irqsave(&bcsp->unack.lock, flags); spin_lock_irqsave(&bcsp->unack.lock, flags);
pkts_to_be_removed = bcsp->unack.qlen; pkts_to_be_removed = skb_queue_len(&bcsp->unack);
seqno = bcsp->msgq_txseq; seqno = bcsp->msgq_txseq;
while (pkts_to_be_removed) { while (pkts_to_be_removed) {
@ -373,19 +373,19 @@ static void bcsp_pkt_cull(struct bcsp_struct *bcsp)
BT_ERR("Peer acked invalid packet"); BT_ERR("Peer acked invalid packet");
BT_DBG("Removing %u pkts out of %u, up to seqno %u", BT_DBG("Removing %u pkts out of %u, up to seqno %u",
pkts_to_be_removed, bcsp->unack.qlen, (seqno - 1) & 0x07); pkts_to_be_removed, skb_queue_len(&bcsp->unack),
(seqno - 1) & 0x07);
for (i = 0, skb = ((struct sk_buff *) &bcsp->unack)->next; i < pkts_to_be_removed i = 0;
&& skb != (struct sk_buff *) &bcsp->unack; i++) { skb_queue_walk_safe(&bcsp->unack, skb, tmp) {
struct sk_buff *nskb; if (i++ >= pkts_to_be_removed)
break;
nskb = skb->next;
__skb_unlink(skb, &bcsp->unack); __skb_unlink(skb, &bcsp->unack);
kfree_skb(skb); kfree_skb(skb);
skb = nskb;
} }
if (bcsp->unack.qlen == 0) if (skb_queue_empty(&bcsp->unack))
del_timer(&bcsp->tbcsp); del_timer(&bcsp->tbcsp);
spin_unlock_irqrestore(&bcsp->unack.lock, flags); spin_unlock_irqrestore(&bcsp->unack.lock, flags);


@ -70,8 +70,8 @@ static inline void _urb_queue_head(struct _urb_queue *q, struct _urb *_urb)
{ {
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&q->lock, flags); spin_lock_irqsave(&q->lock, flags);
/* _urb_unlink needs to know which spinlock to use, thus mb(). */ /* _urb_unlink needs to know which spinlock to use, thus smp_mb(). */
_urb->queue = q; mb(); list_add(&_urb->list, &q->head); _urb->queue = q; smp_mb(); list_add(&_urb->list, &q->head);
spin_unlock_irqrestore(&q->lock, flags); spin_unlock_irqrestore(&q->lock, flags);
} }
@ -79,8 +79,8 @@ static inline void _urb_queue_tail(struct _urb_queue *q, struct _urb *_urb)
{ {
unsigned long flags; unsigned long flags;
spin_lock_irqsave(&q->lock, flags); spin_lock_irqsave(&q->lock, flags);
/* _urb_unlink needs to know which spinlock to use, thus mb(). */ /* _urb_unlink needs to know which spinlock to use, thus smp_mb(). */
_urb->queue = q; mb(); list_add_tail(&_urb->list, &q->head); _urb->queue = q; smp_mb(); list_add_tail(&_urb->list, &q->head);
spin_unlock_irqrestore(&q->lock, flags); spin_unlock_irqrestore(&q->lock, flags);
} }
@ -89,7 +89,7 @@ static inline void _urb_unlink(struct _urb *_urb)
struct _urb_queue *q; struct _urb_queue *q;
unsigned long flags; unsigned long flags;
mb(); smp_mb();
q = _urb->queue; q = _urb->queue;
/* If q is NULL, it will die at easy-to-debug NULL pointer dereference. /* If q is NULL, it will die at easy-to-debug NULL pointer dereference.
No need to BUG(). */ No need to BUG(). */
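The hci_usb hunks narrow `mb()` to `smp_mb()`: the ordering here only has to be visible to other CPUs (the `_urb->queue` store must be published before the list linkage that makes the urb findable), not to device I/O, and `smp_mb()` compiles away on uniprocessor kernels while `mb()` does not. A loose userspace analogue of that publish/observe pairing, using a C11 seq_cst fence where the kernel uses `smp_mb()` (the flag/payload names are made up; in the driver the two sides run on different CPUs, and this sketch merely exercises the same code shape sequentially):

```c
#include <assert.h>
#include <stdatomic.h>

static int payload;		/* plain data, written before publication */
static atomic_int published;	/* publication flag */

static void producer(void)
{
	payload = 42;
	/* kernel: smp_mb() -- order the data store before the flag store */
	atomic_thread_fence(memory_order_seq_cst);
	atomic_store_explicit(&published, 1, memory_order_relaxed);
}

static int consumer(void)
{
	while (!atomic_load_explicit(&published, memory_order_relaxed))
		;		/* spin until the flag becomes visible */
	/* kernel: smp_mb() -- order the flag load before the data load */
	atomic_thread_fence(memory_order_seq_cst);
	return payload;
}
```

The fence must appear on both sides: ordering the producer's stores is useless if the consumer may still reorder its loads, which is why `_urb_unlink()` above carries its own `smp_mb()`.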


@ -828,15 +828,18 @@ static int old_capi_manufacturer(unsigned int cmd, void __user *data)
return -ESRCH; return -ESRCH;
if (card->load_firmware == NULL) { if (card->load_firmware == NULL) {
printk(KERN_DEBUG "kcapi: load: no load function\n"); printk(KERN_DEBUG "kcapi: load: no load function\n");
capi_ctr_put(card);
return -ESRCH; return -ESRCH;
} }
if (ldef.t4file.len <= 0) { if (ldef.t4file.len <= 0) {
printk(KERN_DEBUG "kcapi: load: invalid parameter: length of t4file is %d ?\n", ldef.t4file.len); printk(KERN_DEBUG "kcapi: load: invalid parameter: length of t4file is %d ?\n", ldef.t4file.len);
capi_ctr_put(card);
return -EINVAL; return -EINVAL;
} }
if (ldef.t4file.data == NULL) { if (ldef.t4file.data == NULL) {
printk(KERN_DEBUG "kcapi: load: invalid parameter: dataptr is 0\n"); printk(KERN_DEBUG "kcapi: load: invalid parameter: dataptr is 0\n");
capi_ctr_put(card);
return -EINVAL; return -EINVAL;
} }
@ -849,6 +852,7 @@ static int old_capi_manufacturer(unsigned int cmd, void __user *data)
if (card->cardstate != CARD_DETECTED) { if (card->cardstate != CARD_DETECTED) {
printk(KERN_INFO "kcapi: load: contr=%d not in detect state\n", ldef.contr); printk(KERN_INFO "kcapi: load: contr=%d not in detect state\n", ldef.contr);
capi_ctr_put(card);
return -EBUSY; return -EBUSY;
} }
card->cardstate = CARD_LOADING; card->cardstate = CARD_LOADING;


@ -183,8 +183,8 @@
#define D_FREG_MASK 0xF #define D_FREG_MASK 0xF
struct zt { struct zt {
unsigned short z1; /* Z1 pointer 16 Bit */ __le16 z1; /* Z1 pointer 16 Bit */
unsigned short z2; /* Z2 pointer 16 Bit */ __le16 z2; /* Z2 pointer 16 Bit */
}; };
struct dfifo { struct dfifo {

Просмотреть файл

@ -43,7 +43,7 @@ MODULE_LICENSE("GPL");
module_param(debug, uint, 0); module_param(debug, uint, 0);
static LIST_HEAD(HFClist); static LIST_HEAD(HFClist);
DEFINE_RWLOCK(HFClock); static DEFINE_RWLOCK(HFClock);
enum { enum {
HFC_CCD_2BD0, HFC_CCD_2BD0,
@ -88,7 +88,7 @@ struct hfcPCI_hw {
unsigned char bswapped; unsigned char bswapped;
unsigned char protocol; unsigned char protocol;
int nt_timer; int nt_timer;
unsigned char *pci_io; /* start of PCI IO memory */ unsigned char __iomem *pci_io; /* start of PCI IO memory */
dma_addr_t dmahandle; dma_addr_t dmahandle;
void *fifos; /* FIFO memory */ void *fifos; /* FIFO memory */
int last_bfifo_cnt[2]; int last_bfifo_cnt[2];
@ -153,7 +153,7 @@ release_io_hfcpci(struct hfc_pci *hc)
pci_write_config_word(hc->pdev, PCI_COMMAND, 0); pci_write_config_word(hc->pdev, PCI_COMMAND, 0);
del_timer(&hc->hw.timer); del_timer(&hc->hw.timer);
pci_free_consistent(hc->pdev, 0x8000, hc->hw.fifos, hc->hw.dmahandle); pci_free_consistent(hc->pdev, 0x8000, hc->hw.fifos, hc->hw.dmahandle);
iounmap((void *)hc->hw.pci_io); iounmap(hc->hw.pci_io);
} }
/* /*
@ -366,8 +366,7 @@ static void hfcpci_clear_fifo_tx(struct hfc_pci *hc, int fifo)
bzt->f2 = MAX_B_FRAMES; bzt->f2 = MAX_B_FRAMES;
bzt->f1 = bzt->f2; /* init F pointers to remain constant */ bzt->f1 = bzt->f2; /* init F pointers to remain constant */
bzt->za[MAX_B_FRAMES].z1 = cpu_to_le16(B_FIFO_SIZE + B_SUB_VAL - 1); bzt->za[MAX_B_FRAMES].z1 = cpu_to_le16(B_FIFO_SIZE + B_SUB_VAL - 1);
bzt->za[MAX_B_FRAMES].z2 = cpu_to_le16( bzt->za[MAX_B_FRAMES].z2 = cpu_to_le16(B_FIFO_SIZE + B_SUB_VAL - 2);
le16_to_cpu(bzt->za[MAX_B_FRAMES].z1 - 1));
if (fifo_state) if (fifo_state)
hc->hw.fifo_en |= fifo_state; hc->hw.fifo_en |= fifo_state;
Write_hfc(hc, HFCPCI_FIFO_EN, hc->hw.fifo_en); Write_hfc(hc, HFCPCI_FIFO_EN, hc->hw.fifo_en);
@ -482,7 +481,7 @@ receive_dmsg(struct hfc_pci *hc)
df->f2 = ((df->f2 + 1) & MAX_D_FRAMES) | df->f2 = ((df->f2 + 1) & MAX_D_FRAMES) |
(MAX_D_FRAMES + 1); /* next buffer */ (MAX_D_FRAMES + 1); /* next buffer */
df->za[df->f2 & D_FREG_MASK].z2 = df->za[df->f2 & D_FREG_MASK].z2 =
cpu_to_le16((zp->z2 + rcnt) & (D_FIFO_SIZE - 1)); cpu_to_le16((le16_to_cpu(zp->z2) + rcnt) & (D_FIFO_SIZE - 1));
} else { } else {
dch->rx_skb = mI_alloc_skb(rcnt - 3, GFP_ATOMIC); dch->rx_skb = mI_alloc_skb(rcnt - 3, GFP_ATOMIC);
if (!dch->rx_skb) { if (!dch->rx_skb) {
@ -523,10 +522,10 @@ receive_dmsg(struct hfc_pci *hc)
/* /*
* check for transparent receive data and read max one threshold size if avail * check for transparent receive data and read max one threshold size if avail
*/ */
int static int
hfcpci_empty_fifo_trans(struct bchannel *bch, struct bzfifo *bz, u_char *bdata) hfcpci_empty_fifo_trans(struct bchannel *bch, struct bzfifo *bz, u_char *bdata)
{ {
unsigned short *z1r, *z2r; __le16 *z1r, *z2r;
int new_z2, fcnt, maxlen; int new_z2, fcnt, maxlen;
u_char *ptr, *ptr1; u_char *ptr, *ptr1;
@ -576,7 +575,7 @@ hfcpci_empty_fifo_trans(struct bchannel *bch, struct bzfifo *bz, u_char *bdata)
/* /*
* B-channel main receive routine * B-channel main receive routine
*/ */
void static void
main_rec_hfcpci(struct bchannel *bch) main_rec_hfcpci(struct bchannel *bch)
{ {
struct hfc_pci *hc = bch->hw; struct hfc_pci *hc = bch->hw;
@ -724,7 +723,7 @@ hfcpci_fill_fifo(struct bchannel *bch)
struct bzfifo *bz; struct bzfifo *bz;
u_char *bdata; u_char *bdata;
u_char new_f1, *src, *dst; u_char new_f1, *src, *dst;
unsigned short *z1t, *z2t; __le16 *z1t, *z2t;
if ((bch->debug & DEBUG_HW_BCHANNEL) && !(bch->debug & DEBUG_HW_BFIFO)) if ((bch->debug & DEBUG_HW_BCHANNEL) && !(bch->debug & DEBUG_HW_BFIFO))
printk(KERN_DEBUG "%s\n", __func__); printk(KERN_DEBUG "%s\n", __func__);
@ -1679,7 +1678,7 @@ hfcpci_l2l1B(struct mISDNchannel *ch, struct sk_buff *skb)
* called for card init message * called for card init message
*/ */
void static void
inithfcpci(struct hfc_pci *hc) inithfcpci(struct hfc_pci *hc)
{ {
printk(KERN_DEBUG "inithfcpci: entered\n"); printk(KERN_DEBUG "inithfcpci: entered\n");
@ -1966,7 +1965,7 @@ setup_hw(struct hfc_pci *hc)
printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n"); printk(KERN_WARNING "HFC-PCI: No IRQ for PCI card found\n");
return 1; return 1;
} }
hc->hw.pci_io = (char *)(ulong)hc->pdev->resource[1].start; hc->hw.pci_io = (char __iomem *)(unsigned long)hc->pdev->resource[1].start;
if (!hc->hw.pci_io) { if (!hc->hw.pci_io) {
printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n"); printk(KERN_WARNING "HFC-PCI: No IO-Mem for PCI card found\n");


@ -1533,8 +1533,10 @@ static int isdn_ppp_mp_bundle_array_init(void)
int sz = ISDN_MAX_CHANNELS*sizeof(ippp_bundle); int sz = ISDN_MAX_CHANNELS*sizeof(ippp_bundle);
if( (isdn_ppp_bundle_arr = kzalloc(sz, GFP_KERNEL)) == NULL ) if( (isdn_ppp_bundle_arr = kzalloc(sz, GFP_KERNEL)) == NULL )
return -ENOMEM; return -ENOMEM;
for( i = 0; i < ISDN_MAX_CHANNELS; i++ ) for (i = 0; i < ISDN_MAX_CHANNELS; i++) {
spin_lock_init(&isdn_ppp_bundle_arr[i].lock); spin_lock_init(&isdn_ppp_bundle_arr[i].lock);
skb_queue_head_init(&isdn_ppp_bundle_arr[i].frags);
}
return 0; return 0;
} }
@ -1567,7 +1569,7 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )
if ((lp->netdev->pb = isdn_ppp_mp_bundle_alloc()) == NULL) if ((lp->netdev->pb = isdn_ppp_mp_bundle_alloc()) == NULL)
return -ENOMEM; return -ENOMEM;
lp->next = lp->last = lp; /* nobody else in a queue */ lp->next = lp->last = lp; /* nobody else in a queue */
lp->netdev->pb->frags = NULL; skb_queue_head_init(&lp->netdev->pb->frags);
lp->netdev->pb->frames = 0; lp->netdev->pb->frames = 0;
lp->netdev->pb->seq = UINT_MAX; lp->netdev->pb->seq = UINT_MAX;
} }
@ -1579,28 +1581,29 @@ static int isdn_ppp_mp_init( isdn_net_local * lp, ippp_bundle * add_to )
static u32 isdn_ppp_mp_get_seq( int short_seq, static u32 isdn_ppp_mp_get_seq( int short_seq,
struct sk_buff * skb, u32 last_seq ); struct sk_buff * skb, u32 last_seq );
static struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp, static void isdn_ppp_mp_discard(ippp_bundle *mp, struct sk_buff *from,
struct sk_buff * from, struct sk_buff * to ); struct sk_buff *to);
static void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp, static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
struct sk_buff * from, struct sk_buff * to ); struct sk_buff *from, struct sk_buff *to,
static void isdn_ppp_mp_free_skb( ippp_bundle * mp, struct sk_buff * skb ); u32 lastseq);
static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb);
static void isdn_ppp_mp_print_recv_pkt( int slot, struct sk_buff * skb ); static void isdn_ppp_mp_print_recv_pkt( int slot, struct sk_buff * skb );
static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp, static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
struct sk_buff *skb) struct sk_buff *skb)
{ {
struct ippp_struct *is; struct sk_buff *newfrag, *frag, *start, *nextf;
isdn_net_local * lpq;
ippp_bundle * mp;
isdn_mppp_stats * stats;
struct sk_buff * newfrag, * frag, * start, *nextf;
u32 newseq, minseq, thisseq; u32 newseq, minseq, thisseq;
isdn_mppp_stats *stats;
struct ippp_struct *is;
unsigned long flags; unsigned long flags;
isdn_net_local *lpq;
ippp_bundle *mp;
int slot; int slot;
spin_lock_irqsave(&net_dev->pb->lock, flags); spin_lock_irqsave(&net_dev->pb->lock, flags);
mp = net_dev->pb; mp = net_dev->pb;
stats = &mp->stats; stats = &mp->stats;
slot = lp->ppp_slot; slot = lp->ppp_slot;
if (slot < 0 || slot >= ISDN_MAX_CHANNELS) { if (slot < 0 || slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d)\n", printk(KERN_ERR "%s: lp->ppp_slot(%d)\n",
@ -1611,20 +1614,19 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
return; return;
} }
is = ippp_table[slot]; is = ippp_table[slot];
if( ++mp->frames > stats->max_queue_len ) if (++mp->frames > stats->max_queue_len)
stats->max_queue_len = mp->frames; stats->max_queue_len = mp->frames;
if (is->debug & 0x8) if (is->debug & 0x8)
isdn_ppp_mp_print_recv_pkt(lp->ppp_slot, skb); isdn_ppp_mp_print_recv_pkt(lp->ppp_slot, skb);
newseq = isdn_ppp_mp_get_seq(is->mpppcfg & SC_IN_SHORT_SEQ, newseq = isdn_ppp_mp_get_seq(is->mpppcfg & SC_IN_SHORT_SEQ,
skb, is->last_link_seqno); skb, is->last_link_seqno);
/* if this packet seq # is less than last already processed one, /* if this packet seq # is less than last already processed one,
* toss it right away, but check for sequence start case first * toss it right away, but check for sequence start case first
*/ */
if( mp->seq > MP_LONGSEQ_MAX && (newseq & MP_LONGSEQ_MAXBIT) ) { if (mp->seq > MP_LONGSEQ_MAX && (newseq & MP_LONGSEQ_MAXBIT)) {
mp->seq = newseq; /* the first packet: required for mp->seq = newseq; /* the first packet: required for
* rfc1990 non-compliant clients -- * rfc1990 non-compliant clients --
* prevents constant packet toss */ * prevents constant packet toss */
@ -1634,7 +1636,7 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
spin_unlock_irqrestore(&mp->lock, flags); spin_unlock_irqrestore(&mp->lock, flags);
return; return;
} }
/* find the minimum received sequence number over all links */ /* find the minimum received sequence number over all links */
is->last_link_seqno = minseq = newseq; is->last_link_seqno = minseq = newseq;
for (lpq = net_dev->queue;;) { for (lpq = net_dev->queue;;) {
@ -1655,22 +1657,31 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* packets */ * packets */
newfrag = skb; newfrag = skb;
/* if this new fragment is before the first one, then enqueue it now. */ /* Insert new fragment into the proper sequence slot. */
if ((frag = mp->frags) == NULL || MP_LT(newseq, MP_SEQ(frag))) { skb_queue_walk(&mp->frags, frag) {
newfrag->next = frag; if (MP_SEQ(frag) == newseq) {
mp->frags = frag = newfrag; isdn_ppp_mp_free_skb(mp, newfrag);
newfrag = NULL; newfrag = NULL;
} break;
}
if (MP_LT(newseq, MP_SEQ(frag))) {
__skb_queue_before(&mp->frags, frag, newfrag);
newfrag = NULL;
break;
}
}
if (newfrag)
__skb_queue_tail(&mp->frags, newfrag);
start = MP_FLAGS(frag) & MP_BEGIN_FRAG && frag = skb_peek(&mp->frags);
MP_SEQ(frag) == mp->seq ? frag : NULL; start = ((MP_FLAGS(frag) & MP_BEGIN_FRAG) &&
(MP_SEQ(frag) == mp->seq)) ? frag : NULL;
if (!start)
goto check_overflow;
/* /* main fragment traversing loop
* main fragment traversing loop
* *
* try to accomplish several tasks: * try to accomplish several tasks:
* - insert new fragment into the proper sequence slot (once that's done
* newfrag will be set to NULL)
* - reassemble any complete fragment sequence (non-null 'start' * - reassemble any complete fragment sequence (non-null 'start'
* indicates there is a continguous sequence present) * indicates there is a continguous sequence present)
* - discard any incomplete sequences that are below minseq -- due * - discard any incomplete sequences that are below minseq -- due
@ -1679,71 +1690,46 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* come to complete such sequence and it should be discarded * come to complete such sequence and it should be discarded
* *
* loop completes when we accomplished the following tasks: * loop completes when we accomplished the following tasks:
* - new fragment is inserted in the proper sequence ('newfrag' is
* set to NULL)
* - we hit a gap in the sequence, so no reassembly/processing is * - we hit a gap in the sequence, so no reassembly/processing is
* possible ('start' would be set to NULL) * possible ('start' would be set to NULL)
* *
* algorithm for this code is derived from code in the book * algorithm for this code is derived from code in the book
* 'PPP Design And Debugging' by James Carlson (Addison-Wesley) * 'PPP Design And Debugging' by James Carlson (Addison-Wesley)
*/ */
while (start != NULL || newfrag != NULL) { skb_queue_walk_safe(&mp->frags, frag, nextf) {
thisseq = MP_SEQ(frag);
thisseq = MP_SEQ(frag); /* check for misplaced start */
nextf = frag->next; if (start != frag && (MP_FLAGS(frag) & MP_BEGIN_FRAG)) {
printk(KERN_WARNING"isdn_mppp(seq %d): new "
/* drop any duplicate fragments */ "BEGIN flag with no prior END", thisseq);
if (newfrag != NULL && thisseq == newseq) { stats->seqerrs++;
isdn_ppp_mp_free_skb(mp, newfrag); stats->frame_drops++;
newfrag = NULL; isdn_ppp_mp_discard(mp, start, frag);
} start = frag;
} else if (MP_LE(thisseq, minseq)) {
/* insert new fragment before next element if possible. */ if (MP_FLAGS(frag) & MP_BEGIN_FRAG)
if (newfrag != NULL && (nextf == NULL ||
MP_LT(newseq, MP_SEQ(nextf)))) {
newfrag->next = nextf;
frag->next = nextf = newfrag;
newfrag = NULL;
}
if (start != NULL) {
/* check for misplaced start */
if (start != frag && (MP_FLAGS(frag) & MP_BEGIN_FRAG)) {
printk(KERN_WARNING"isdn_mppp(seq %d): new "
"BEGIN flag with no prior END", thisseq);
stats->seqerrs++;
stats->frame_drops++;
start = isdn_ppp_mp_discard(mp, start,frag);
nextf = frag->next;
}
} else if (MP_LE(thisseq, minseq)) {
if (MP_FLAGS(frag) & MP_BEGIN_FRAG)
start = frag; start = frag;
else { else {
if (MP_FLAGS(frag) & MP_END_FRAG) if (MP_FLAGS(frag) & MP_END_FRAG)
stats->frame_drops++; stats->frame_drops++;
if( mp->frags == frag ) __skb_unlink(skb, &mp->frags);
mp->frags = nextf;
isdn_ppp_mp_free_skb(mp, frag); isdn_ppp_mp_free_skb(mp, frag);
frag = nextf;
continue; continue;
} }
} }
/* if start is non-null and we have end fragment, then
* we have full reassembly sequence -- reassemble
* and process packet now
*/
if (start != NULL && (MP_FLAGS(frag) & MP_END_FRAG)) {
minseq = mp->seq = (thisseq+1) & MP_LONGSEQ_MASK;
/* Reassemble the packet then dispatch it */
isdn_ppp_mp_reassembly(net_dev, lp, start, nextf);
start = NULL;
frag = NULL;
mp->frags = nextf; /* if we have end fragment, then we have full reassembly
} * sequence -- reassemble and process packet now
*/
if (MP_FLAGS(frag) & MP_END_FRAG) {
minseq = mp->seq = (thisseq+1) & MP_LONGSEQ_MASK;
/* Reassemble the packet then dispatch it */
isdn_ppp_mp_reassembly(net_dev, lp, start, frag, thisseq);
start = NULL;
frag = NULL;
}
/* check if need to update start pointer: if we just /* check if need to update start pointer: if we just
* reassembled the packet and sequence is contiguous * reassembled the packet and sequence is contiguous
@ -1754,26 +1740,25 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* below low watermark and set start to the next frag or * below low watermark and set start to the next frag or
* clear start ptr. * clear start ptr.
*/ */
if (nextf != NULL && if (nextf != (struct sk_buff *)&mp->frags &&
((thisseq+1) & MP_LONGSEQ_MASK) == MP_SEQ(nextf)) { ((thisseq+1) & MP_LONGSEQ_MASK) == MP_SEQ(nextf)) {
/* if we just reassembled and the next one is here, /* if we just reassembled and the next one is here,
* then start another reassembly. */ * then start another reassembly.
*/
if (frag == NULL) { if (frag == NULL) {
if (MP_FLAGS(nextf) & MP_BEGIN_FRAG) if (MP_FLAGS(nextf) & MP_BEGIN_FRAG)
start = nextf; start = nextf;
else else {
{ printk(KERN_WARNING"isdn_mppp(seq %d):"
printk(KERN_WARNING"isdn_mppp(seq %d):" " END flag with no following "
" END flag with no following " "BEGIN", thisseq);
"BEGIN", thisseq);
stats->seqerrs++; stats->seqerrs++;
} }
} }
} else {
} else { if (nextf != (struct sk_buff *)&mp->frags &&
if ( nextf != NULL && frag != NULL && frag != NULL &&
MP_LT(thisseq, minseq)) { MP_LT(thisseq, minseq)) {
/* we've got a break in the sequence /* we've got a break in the sequence
* and we not at the end yet * and we not at the end yet
* and we did not just reassembled * and we did not just reassembled
@ -1782,41 +1767,39 @@ static void isdn_ppp_mp_receive(isdn_net_dev * net_dev, isdn_net_local * lp,
* discard all the frames below low watermark * discard all the frames below low watermark
* and start over */ * and start over */
stats->frame_drops++; stats->frame_drops++;
mp->frags = isdn_ppp_mp_discard(mp,start,nextf); isdn_ppp_mp_discard(mp, start, nextf);
} }
/* break in the sequence, no reassembly */ /* break in the sequence, no reassembly */
start = NULL; start = NULL;
} }
if (!start)
frag = nextf; break;
} /* while -- main loop */ }
if (mp->frags == NULL) check_overflow:
mp->frags = frag;
/* rather straighforward way to deal with (not very) possible /* rather straighforward way to deal with (not very) possible
* queue overflow */ * queue overflow
*/
if (mp->frames > MP_MAX_QUEUE_LEN) { if (mp->frames > MP_MAX_QUEUE_LEN) {
stats->overflows++; stats->overflows++;
while (mp->frames > MP_MAX_QUEUE_LEN) { skb_queue_walk_safe(&mp->frags, frag, nextf) {
frag = mp->frags->next; if (mp->frames <= MP_MAX_QUEUE_LEN)
isdn_ppp_mp_free_skb(mp, mp->frags); break;
mp->frags = frag; __skb_unlink(frag, &mp->frags);
isdn_ppp_mp_free_skb(mp, frag);
} }
} }
spin_unlock_irqrestore(&mp->lock, flags); spin_unlock_irqrestore(&mp->lock, flags);
} }
static void isdn_ppp_mp_cleanup( isdn_net_local * lp ) static void isdn_ppp_mp_cleanup(isdn_net_local *lp)
{ {
struct sk_buff * frag = lp->netdev->pb->frags; struct sk_buff *skb, *tmp;
struct sk_buff * nextfrag;
while( frag ) { skb_queue_walk_safe(&lp->netdev->pb->frags, skb, tmp) {
nextfrag = frag->next; __skb_unlink(skb, &lp->netdev->pb->frags);
isdn_ppp_mp_free_skb(lp->netdev->pb, frag); isdn_ppp_mp_free_skb(lp->netdev->pb, skb);
frag = nextfrag;
} }
lp->netdev->pb->frags = NULL;
} }
static u32 isdn_ppp_mp_get_seq( int short_seq, static u32 isdn_ppp_mp_get_seq( int short_seq,
@@ -1853,72 +1836,115 @@ static u32 isdn_ppp_mp_get_seq( int short_seq,
return seq; return seq;
} }
struct sk_buff * isdn_ppp_mp_discard( ippp_bundle * mp, static void isdn_ppp_mp_discard(ippp_bundle *mp, struct sk_buff *from,
struct sk_buff * from, struct sk_buff * to ) struct sk_buff *to)
{ {
if( from ) if (from) {
while (from != to) { struct sk_buff *skb, *tmp;
struct sk_buff * next = from->next; int freeing = 0;
isdn_ppp_mp_free_skb(mp, from);
from = next; skb_queue_walk_safe(&mp->frags, skb, tmp) {
if (skb == to)
break;
if (skb == from)
freeing = 1;
if (!freeing)
continue;
__skb_unlink(skb, &mp->frags);
isdn_ppp_mp_free_skb(mp, skb);
} }
return from; }
} }
void isdn_ppp_mp_reassembly( isdn_net_dev * net_dev, isdn_net_local * lp, static unsigned int calc_tot_len(struct sk_buff_head *queue,
struct sk_buff * from, struct sk_buff * to ) struct sk_buff *from, struct sk_buff *to)
{ {
ippp_bundle * mp = net_dev->pb; unsigned int tot_len = 0;
int proto; struct sk_buff *skb;
struct sk_buff * skb; int found_start = 0;
skb_queue_walk(queue, skb) {
if (skb == from)
found_start = 1;
if (!found_start)
continue;
tot_len += skb->len - MP_HEADER_LEN;
if (skb == to)
break;
}
return tot_len;
}
/* Reassemble packet using fragments in the reassembly queue from
* 'from' until 'to', inclusive.
*/
static void isdn_ppp_mp_reassembly(isdn_net_dev *net_dev, isdn_net_local *lp,
struct sk_buff *from, struct sk_buff *to,
u32 lastseq)
{
ippp_bundle *mp = net_dev->pb;
unsigned int tot_len; unsigned int tot_len;
struct sk_buff *skb;
int proto;
if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) { if (lp->ppp_slot < 0 || lp->ppp_slot >= ISDN_MAX_CHANNELS) {
printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n", printk(KERN_ERR "%s: lp->ppp_slot(%d) out of range\n",
__func__, lp->ppp_slot); __func__, lp->ppp_slot);
return; return;
} }
if( MP_FLAGS(from) == (MP_BEGIN_FRAG | MP_END_FRAG) ) {
if( ippp_table[lp->ppp_slot]->debug & 0x40 ) tot_len = calc_tot_len(&mp->frags, from, to);
if (MP_FLAGS(from) == (MP_BEGIN_FRAG | MP_END_FRAG)) {
if (ippp_table[lp->ppp_slot]->debug & 0x40)
printk(KERN_DEBUG "isdn_mppp: reassembly: frame %d, " printk(KERN_DEBUG "isdn_mppp: reassembly: frame %d, "
"len %d\n", MP_SEQ(from), from->len ); "len %d\n", MP_SEQ(from), from->len);
skb = from; skb = from;
skb_pull(skb, MP_HEADER_LEN); skb_pull(skb, MP_HEADER_LEN);
__skb_unlink(skb, &mp->frags);
mp->frames--; mp->frames--;
} else { } else {
struct sk_buff * frag; struct sk_buff *walk, *tmp;
int n; int found_start = 0;
for(tot_len=n=0, frag=from; frag != to; frag=frag->next, n++) if (ippp_table[lp->ppp_slot]->debug & 0x40)
tot_len += frag->len - MP_HEADER_LEN;
if( ippp_table[lp->ppp_slot]->debug & 0x40 )
printk(KERN_DEBUG"isdn_mppp: reassembling frames %d " printk(KERN_DEBUG"isdn_mppp: reassembling frames %d "
"to %d, len %d\n", MP_SEQ(from), "to %d, len %d\n", MP_SEQ(from), lastseq,
(MP_SEQ(from)+n-1) & MP_LONGSEQ_MASK, tot_len ); tot_len);
if( (skb = dev_alloc_skb(tot_len)) == NULL ) {
skb = dev_alloc_skb(tot_len);
if (!skb)
printk(KERN_ERR "isdn_mppp: cannot allocate sk buff " printk(KERN_ERR "isdn_mppp: cannot allocate sk buff "
"of size %d\n", tot_len); "of size %d\n", tot_len);
isdn_ppp_mp_discard(mp, from, to);
return;
}
while( from != to ) { found_start = 0;
unsigned int len = from->len - MP_HEADER_LEN; skb_queue_walk_safe(&mp->frags, walk, tmp) {
if (walk == from)
found_start = 1;
if (!found_start)
continue;
skb_copy_from_linear_data_offset(from, MP_HEADER_LEN, if (skb) {
skb_put(skb,len), unsigned int len = walk->len - MP_HEADER_LEN;
len); skb_copy_from_linear_data_offset(walk, MP_HEADER_LEN,
frag = from->next; skb_put(skb, len),
isdn_ppp_mp_free_skb(mp, from); len);
from = frag; }
__skb_unlink(walk, &mp->frags);
isdn_ppp_mp_free_skb(mp, walk);
if (walk == to)
break;
} }
} }
if (!skb)
return;
proto = isdn_ppp_strip_proto(skb); proto = isdn_ppp_strip_proto(skb);
isdn_ppp_push_higher(net_dev, lp, skb, proto); isdn_ppp_push_higher(net_dev, lp, skb, proto);
} }
static void isdn_ppp_mp_free_skb(ippp_bundle * mp, struct sk_buff * skb) static void isdn_ppp_mp_free_skb(ippp_bundle *mp, struct sk_buff *skb)
{ {
dev_kfree_skb(skb); dev_kfree_skb(skb);
mp->frames--; mp->frames--;


@@ -124,18 +124,6 @@ mISDN_read(struct file *filep, char *buf, size_t count, loff_t *off)
return ret; return ret;
} }
static loff_t
mISDN_llseek(struct file *filep, loff_t offset, int orig)
{
return -ESPIPE;
}
static ssize_t
mISDN_write(struct file *filep, const char *buf, size_t count, loff_t *off)
{
return -EOPNOTSUPP;
}
static unsigned int static unsigned int
mISDN_poll(struct file *filep, poll_table *wait) mISDN_poll(struct file *filep, poll_table *wait)
{ {
@@ -157,8 +145,9 @@ mISDN_poll(struct file *filep, poll_table *wait)
} }
static void static void
dev_expire_timer(struct mISDNtimer *timer) dev_expire_timer(unsigned long data)
{ {
struct mISDNtimer *timer = (void *)data;
u_long flags; u_long flags;
spin_lock_irqsave(&timer->dev->lock, flags); spin_lock_irqsave(&timer->dev->lock, flags);
@@ -191,7 +180,7 @@ misdn_add_timer(struct mISDNtimerdev *dev, int timeout)
spin_unlock_irqrestore(&dev->lock, flags); spin_unlock_irqrestore(&dev->lock, flags);
timer->dev = dev; timer->dev = dev;
timer->tl.data = (long)timer; timer->tl.data = (long)timer;
timer->tl.function = (void *) dev_expire_timer; timer->tl.function = dev_expire_timer;
init_timer(&timer->tl); init_timer(&timer->tl);
timer->tl.expires = jiffies + ((HZ * (u_long)timeout) / 1000); timer->tl.expires = jiffies + ((HZ * (u_long)timeout) / 1000);
add_timer(&timer->tl); add_timer(&timer->tl);
@@ -211,6 +200,9 @@ misdn_del_timer(struct mISDNtimerdev *dev, int id)
list_for_each_entry(timer, &dev->pending, list) { list_for_each_entry(timer, &dev->pending, list) {
if (timer->id == id) { if (timer->id == id) {
list_del_init(&timer->list); list_del_init(&timer->list);
/* RED-PEN AK: race -- timer can be still running on
* other CPU. Needs reference count I think
*/
del_timer(&timer->tl); del_timer(&timer->tl);
ret = timer->id; ret = timer->id;
kfree(timer); kfree(timer);
@@ -268,9 +260,7 @@ mISDN_ioctl(struct inode *inode, struct file *filep, unsigned int cmd,
} }
static struct file_operations mISDN_fops = { static struct file_operations mISDN_fops = {
.llseek = mISDN_llseek,
.read = mISDN_read, .read = mISDN_read,
.write = mISDN_write,
.poll = mISDN_poll, .poll = mISDN_poll,
.ioctl = mISDN_ioctl, .ioctl = mISDN_ioctl,
.open = mISDN_open, .open = mISDN_open,


@@ -130,12 +130,12 @@ static const char filename[] = __FILE__;
static const char timeout_msg[] = "*** timeout at %s:%s (line %d) ***\n"; static const char timeout_msg[] = "*** timeout at %s:%s (line %d) ***\n";
#define TIMEOUT_MSG(lineno) \ #define TIMEOUT_MSG(lineno) \
printk(timeout_msg, filename,__FUNCTION__,(lineno)) printk(timeout_msg, filename,__func__,(lineno))
static const char invalid_pcb_msg[] = static const char invalid_pcb_msg[] =
"*** invalid pcb length %d at %s:%s (line %d) ***\n"; "*** invalid pcb length %d at %s:%s (line %d) ***\n";
#define INVALID_PCB_MSG(len) \ #define INVALID_PCB_MSG(len) \
printk(invalid_pcb_msg, (len),filename,__FUNCTION__,__LINE__) printk(invalid_pcb_msg, (len),filename,__func__,__LINE__)
static char search_msg[] __initdata = KERN_INFO "%s: Looking for 3c505 adapter at address %#x..."; static char search_msg[] __initdata = KERN_INFO "%s: Looking for 3c505 adapter at address %#x...";


@@ -127,7 +127,6 @@ MODULE_PARM_DESC (multicast_filter_limit, "8139cp: maximum number of filtered mu
(CP)->tx_tail - (CP)->tx_head - 1) (CP)->tx_tail - (CP)->tx_head - 1)
#define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/ #define PKT_BUF_SZ 1536 /* Size of each temporary Rx buffer.*/
#define RX_OFFSET 2
#define CP_INTERNAL_PHY 32 #define CP_INTERNAL_PHY 32
/* The following settings are log_2(bytes)-4: 0 == 16 bytes .. 6==1024, 7==end of packet. */ /* The following settings are log_2(bytes)-4: 0 == 16 bytes .. 6==1024, 7==end of packet. */
@@ -552,14 +551,14 @@ rx_status_loop:
printk(KERN_DEBUG "%s: rx slot %d status 0x%x len %d\n", printk(KERN_DEBUG "%s: rx slot %d status 0x%x len %d\n",
dev->name, rx_tail, status, len); dev->name, rx_tail, status, len);
buflen = cp->rx_buf_sz + RX_OFFSET; buflen = cp->rx_buf_sz + NET_IP_ALIGN;
new_skb = dev_alloc_skb (buflen); new_skb = netdev_alloc_skb(dev, buflen);
if (!new_skb) { if (!new_skb) {
dev->stats.rx_dropped++; dev->stats.rx_dropped++;
goto rx_next; goto rx_next;
} }
skb_reserve(new_skb, RX_OFFSET); skb_reserve(new_skb, NET_IP_ALIGN);
dma_unmap_single(&cp->pdev->dev, mapping, dma_unmap_single(&cp->pdev->dev, mapping,
buflen, PCI_DMA_FROMDEVICE); buflen, PCI_DMA_FROMDEVICE);
@@ -1051,19 +1050,20 @@ static void cp_init_hw (struct cp_private *cp)
cpw8_f(Cfg9346, Cfg9346_Lock); cpw8_f(Cfg9346, Cfg9346_Lock);
} }
static int cp_refill_rx (struct cp_private *cp) static int cp_refill_rx(struct cp_private *cp)
{ {
struct net_device *dev = cp->dev;
unsigned i; unsigned i;
for (i = 0; i < CP_RX_RING_SIZE; i++) { for (i = 0; i < CP_RX_RING_SIZE; i++) {
struct sk_buff *skb; struct sk_buff *skb;
dma_addr_t mapping; dma_addr_t mapping;
skb = dev_alloc_skb(cp->rx_buf_sz + RX_OFFSET); skb = netdev_alloc_skb(dev, cp->rx_buf_sz + NET_IP_ALIGN);
if (!skb) if (!skb)
goto err_out; goto err_out;
skb_reserve(skb, RX_OFFSET); skb_reserve(skb, NET_IP_ALIGN);
mapping = dma_map_single(&cp->pdev->dev, skb->data, mapping = dma_map_single(&cp->pdev->dev, skb->data,
cp->rx_buf_sz, PCI_DMA_FROMDEVICE); cp->rx_buf_sz, PCI_DMA_FROMDEVICE);


@@ -309,7 +309,7 @@ enum RTL8139_registers {
Cfg9346 = 0x50, Cfg9346 = 0x50,
Config0 = 0x51, Config0 = 0x51,
Config1 = 0x52, Config1 = 0x52,
FlashReg = 0x54, TimerInt = 0x54,
MediaStatus = 0x58, MediaStatus = 0x58,
Config3 = 0x59, Config3 = 0x59,
Config4 = 0x5A, /* absent on RTL-8139A */ Config4 = 0x5A, /* absent on RTL-8139A */
@@ -325,6 +325,7 @@
FIFOTMS = 0x70, /* FIFO Control and test. */ FIFOTMS = 0x70, /* FIFO Control and test. */
CSCR = 0x74, /* Chip Status and Configuration Register. */ CSCR = 0x74, /* Chip Status and Configuration Register. */
PARA78 = 0x78, PARA78 = 0x78,
FlashReg = 0xD4, /* Communication with Flash ROM, four bytes. */
PARA7c = 0x7c, /* Magic transceiver parameter register. */ PARA7c = 0x7c, /* Magic transceiver parameter register. */
Config5 = 0xD8, /* absent on RTL-8139A */ Config5 = 0xD8, /* absent on RTL-8139A */
}; };
@@ -1722,13 +1723,18 @@ static int rtl8139_start_xmit (struct sk_buff *skb, struct net_device *dev)
} }
spin_lock_irqsave(&tp->lock, flags); spin_lock_irqsave(&tp->lock, flags);
/*
* Writing to TxStatus triggers a DMA transfer of the data
* copied to tp->tx_buf[entry] above. Use a memory barrier
* to make sure that the device sees the updated data.
*/
wmb();
RTL_W32_F (TxStatus0 + (entry * sizeof (u32)), RTL_W32_F (TxStatus0 + (entry * sizeof (u32)),
tp->tx_flag | max(len, (unsigned int)ETH_ZLEN)); tp->tx_flag | max(len, (unsigned int)ETH_ZLEN));
dev->trans_start = jiffies; dev->trans_start = jiffies;
tp->cur_tx++; tp->cur_tx++;
wmb();
if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx) if ((tp->cur_tx - NUM_TX_DESC) == tp->dirty_tx)
netif_stop_queue (dev); netif_stop_queue (dev);
@@ -2009,9 +2015,9 @@ no_early_rx:
/* Malloc up new buffer, compatible with net-2e. */ /* Malloc up new buffer, compatible with net-2e. */
/* Omit the four octet CRC from the length. */ /* Omit the four octet CRC from the length. */
skb = dev_alloc_skb (pkt_size + 2); skb = netdev_alloc_skb(dev, pkt_size + NET_IP_ALIGN);
if (likely(skb)) { if (likely(skb)) {
skb_reserve (skb, 2); /* 16 byte align the IP fields. */ skb_reserve (skb, NET_IP_ALIGN); /* 16 byte align the IP fields. */
#if RX_BUF_IDX == 3 #if RX_BUF_IDX == 3
wrap_copy(skb, rx_ring, ring_offset+4, pkt_size); wrap_copy(skb, rx_ring, ring_offset+4, pkt_size);
#else #else


@@ -1813,7 +1813,7 @@ config FEC2
config FEC_MPC52xx config FEC_MPC52xx
tristate "MPC52xx FEC driver" tristate "MPC52xx FEC driver"
depends on PPC_MERGE && PPC_MPC52xx && PPC_BESTCOMM_FEC depends on PPC_MPC52xx && PPC_BESTCOMM_FEC
select CRC32 select CRC32
select PHYLIB select PHYLIB
---help--- ---help---
@@ -1840,6 +1840,17 @@ config NE_H8300
Say Y here if you want to use the NE2000 compatible Say Y here if you want to use the NE2000 compatible
controller on the Renesas H8/300 processor. controller on the Renesas H8/300 processor.
config ATL2
tristate "Atheros L2 Fast Ethernet support"
depends on PCI
select CRC32
select MII
help
This driver supports the Atheros L2 fast ethernet adapter.
To compile this driver as a module, choose M here. The module
will be called atl2.
source "drivers/net/fs_enet/Kconfig" source "drivers/net/fs_enet/Kconfig"
endif # NET_ETHERNET endif # NET_ETHERNET
@@ -1927,15 +1938,6 @@ config E1000
To compile this driver as a module, choose M here. The module To compile this driver as a module, choose M here. The module
will be called e1000. will be called e1000.
config E1000_DISABLE_PACKET_SPLIT
bool "Disable Packet Split for PCI express adapters"
depends on E1000
help
Say Y here if you want to use the legacy receive path for PCI express
hardware.
If in doubt, say N.
config E1000E config E1000E
tristate "Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support" tristate "Intel(R) PRO/1000 PCI-Express Gigabit Ethernet support"
depends on PCI && (!SPARC32 || BROKEN) depends on PCI && (!SPARC32 || BROKEN)
@@ -2046,6 +2048,7 @@ config R8169
tristate "Realtek 8169 gigabit ethernet support" tristate "Realtek 8169 gigabit ethernet support"
depends on PCI depends on PCI
select CRC32 select CRC32
select MII
---help--- ---help---
Say Y here if you have a Realtek 8169 PCI Gigabit Ethernet adapter. Say Y here if you have a Realtek 8169 PCI Gigabit Ethernet adapter.
@@ -2262,7 +2265,7 @@ config UGETH_TX_ON_DEMAND
config MV643XX_ETH config MV643XX_ETH
tristate "Marvell Discovery (643XX) and Orion ethernet support" tristate "Marvell Discovery (643XX) and Orion ethernet support"
depends on MV64360 || MV64X60 || (PPC_MULTIPLATFORM && PPC32) || PLAT_ORION depends on MV64360 || MV64X60 || (PPC_MULTIPLATFORM && PPC32) || PLAT_ORION
select MII select PHYLIB
help help
This driver supports the gigabit ethernet MACs in the This driver supports the gigabit ethernet MACs in the
Marvell Discovery PPC/MIPS chipset family (MV643XX) and Marvell Discovery PPC/MIPS chipset family (MV643XX) and
@@ -2281,12 +2284,13 @@ config QLA3XXX
will be called qla3xxx. will be called qla3xxx.
config ATL1 config ATL1
tristate "Attansic L1 Gigabit Ethernet support (EXPERIMENTAL)" tristate "Atheros/Attansic L1 Gigabit Ethernet support"
depends on PCI && EXPERIMENTAL depends on PCI
select CRC32 select CRC32
select MII select MII
help help
This driver supports the Attansic L1 gigabit ethernet adapter. This driver supports the Atheros/Attansic L1 gigabit ethernet
adapter.
To compile this driver as a module, choose M here. The module To compile this driver as a module, choose M here. The module
will be called atl1. will be called atl1.
@@ -2302,6 +2306,18 @@ config ATL1E
To compile this driver as a module, choose M here. The module To compile this driver as a module, choose M here. The module
will be called atl1e. will be called atl1e.
config JME
tristate "JMicron(R) PCI-Express Gigabit Ethernet support"
depends on PCI
select CRC32
select MII
---help---
This driver supports the PCI-Express gigabit ethernet adapters
based on JMicron JMC250 chipset.
To compile this driver as a module, choose M here. The module
will be called jme.
endif # NETDEV_1000 endif # NETDEV_1000
# #
@@ -2377,10 +2393,18 @@ config EHEA
To compile the driver as a module, choose M here. The module To compile the driver as a module, choose M here. The module
will be called ehea. will be called ehea.
config ENIC
tristate "E, the Cisco 10G Ethernet NIC"
depends on PCI && INET
select INET_LRO
help
This enables the support for the Cisco 10G Ethernet card.
config IXGBE config IXGBE
tristate "Intel(R) 10GbE PCI Express adapters support" tristate "Intel(R) 10GbE PCI Express adapters support"
depends on PCI && INET depends on PCI && INET
select INET_LRO select INET_LRO
select INTEL_IOATDMA
---help--- ---help---
This driver supports Intel(R) 10GbE PCI Express family of This driver supports Intel(R) 10GbE PCI Express family of
adapters. For more information on how to identify your adapter, go adapters. For more information on how to identify your adapter, go
@@ -2432,6 +2456,7 @@ config MYRI10GE
select FW_LOADER select FW_LOADER
select CRC32 select CRC32
select INET_LRO select INET_LRO
select INTEL_IOATDMA
---help--- ---help---
This driver supports Myricom Myri-10G Dual Protocol interface in This driver supports Myricom Myri-10G Dual Protocol interface in
Ethernet mode. If the eeprom on your board is not recent enough, Ethernet mode. If the eeprom on your board is not recent enough,
@@ -2496,6 +2521,15 @@ config BNX2X
To compile this driver as a module, choose M here: the module To compile this driver as a module, choose M here: the module
will be called bnx2x. This is recommended. will be called bnx2x. This is recommended.
config QLGE
tristate "QLogic QLGE 10Gb Ethernet Driver Support"
depends on PCI
help
This driver supports QLogic ISP8XXX 10Gb Ethernet cards.
To compile this driver as a module, choose M here: the module
will be called qlge.
source "drivers/net/sfc/Kconfig" source "drivers/net/sfc/Kconfig"
endif # NETDEV_10000 endif # NETDEV_10000


@@ -15,9 +15,12 @@ obj-$(CONFIG_EHEA) += ehea/
obj-$(CONFIG_CAN) += can/ obj-$(CONFIG_CAN) += can/
obj-$(CONFIG_BONDING) += bonding/ obj-$(CONFIG_BONDING) += bonding/
obj-$(CONFIG_ATL1) += atlx/ obj-$(CONFIG_ATL1) += atlx/
obj-$(CONFIG_ATL2) += atlx/
obj-$(CONFIG_ATL1E) += atl1e/ obj-$(CONFIG_ATL1E) += atl1e/
obj-$(CONFIG_GIANFAR) += gianfar_driver.o obj-$(CONFIG_GIANFAR) += gianfar_driver.o
obj-$(CONFIG_TEHUTI) += tehuti.o obj-$(CONFIG_TEHUTI) += tehuti.o
obj-$(CONFIG_ENIC) += enic/
obj-$(CONFIG_JME) += jme.o
gianfar_driver-objs := gianfar.o \ gianfar_driver-objs := gianfar.o \
gianfar_ethtool.o \ gianfar_ethtool.o \
@@ -111,7 +114,7 @@ obj-$(CONFIG_EL2) += 3c503.o 8390p.o
obj-$(CONFIG_NE2000) += ne.o 8390p.o obj-$(CONFIG_NE2000) += ne.o 8390p.o
obj-$(CONFIG_NE2_MCA) += ne2.o 8390p.o obj-$(CONFIG_NE2_MCA) += ne2.o 8390p.o
obj-$(CONFIG_HPLAN) += hp.o 8390p.o obj-$(CONFIG_HPLAN) += hp.o 8390p.o
obj-$(CONFIG_HPLAN_PLUS) += hp-plus.o 8390p.o obj-$(CONFIG_HPLAN_PLUS) += hp-plus.o 8390.o
obj-$(CONFIG_ULTRA) += smc-ultra.o 8390.o obj-$(CONFIG_ULTRA) += smc-ultra.o 8390.o
obj-$(CONFIG_ULTRAMCA) += smc-mca.o 8390.o obj-$(CONFIG_ULTRAMCA) += smc-mca.o 8390.o
obj-$(CONFIG_ULTRA32) += smc-ultra32.o 8390.o obj-$(CONFIG_ULTRA32) += smc-ultra32.o 8390.o
@@ -128,6 +131,7 @@ obj-$(CONFIG_AX88796) += ax88796.o
obj-$(CONFIG_TSI108_ETH) += tsi108_eth.o obj-$(CONFIG_TSI108_ETH) += tsi108_eth.o
obj-$(CONFIG_MV643XX_ETH) += mv643xx_eth.o obj-$(CONFIG_MV643XX_ETH) += mv643xx_eth.o
obj-$(CONFIG_QLA3XXX) += qla3xxx.o obj-$(CONFIG_QLA3XXX) += qla3xxx.o
obj-$(CONFIG_QLGE) += qlge/
obj-$(CONFIG_PPP) += ppp_generic.o obj-$(CONFIG_PPP) += ppp_generic.o
obj-$(CONFIG_PPP_ASYNC) += ppp_async.o obj-$(CONFIG_PPP_ASYNC) += ppp_async.o


@@ -442,24 +442,24 @@ static int arcnet_open(struct net_device *dev)
BUGMSG(D_NORMAL, "WARNING! Station address FF may confuse " BUGMSG(D_NORMAL, "WARNING! Station address FF may confuse "
"DOS networking programs!\n"); "DOS networking programs!\n");
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (ASTATUS() & RESETflag) { if (ASTATUS() & RESETflag) {
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
ACOMMAND(CFLAGScmd | RESETclear); ACOMMAND(CFLAGScmd | RESETclear);
} }
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
/* make sure we're ready to receive IRQ's. */ /* make sure we're ready to receive IRQ's. */
AINTMASK(0); AINTMASK(0);
udelay(1); /* give it time to set the mask before udelay(1); /* give it time to set the mask before
* we reset it again. (may not even be * we reset it again. (may not even be
* necessary) * necessary)
*/ */
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->intmask = NORXflag | RECONflag; lp->intmask = NORXflag | RECONflag;
AINTMASK(lp->intmask); AINTMASK(lp->intmask);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
netif_start_queue(dev); netif_start_queue(dev);
@@ -670,14 +670,14 @@ static int arcnet_send_packet(struct sk_buff *skb, struct net_device *dev)
freeskb = 0; freeskb = 0;
} }
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__FUNCTION__,ASTATUS()); BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__func__,ASTATUS());
/* make sure we didn't ignore a TX IRQ while we were in here */ /* make sure we didn't ignore a TX IRQ while we were in here */
AINTMASK(0); AINTMASK(0);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->intmask |= TXFREEflag|EXCNAKflag; lp->intmask |= TXFREEflag|EXCNAKflag;
AINTMASK(lp->intmask); AINTMASK(lp->intmask);
BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__FUNCTION__,ASTATUS()); BUGMSG(D_DEBUG, "%s: %d: %s, status: %x\n",__FILE__,__LINE__,__func__,ASTATUS());
spin_unlock_irqrestore(&lp->lock, flags); spin_unlock_irqrestore(&lp->lock, flags);
if (freeskb) { if (freeskb) {
@@ -798,7 +798,7 @@ irqreturn_t arcnet_interrupt(int irq, void *dev_id)
diagstatus = (status >> 8) & 0xFF; diagstatus = (status >> 8) & 0xFF;
BUGMSG(D_DEBUG, "%s: %d: %s: status=%x\n", BUGMSG(D_DEBUG, "%s: %d: %s: status=%x\n",
__FILE__,__LINE__,__FUNCTION__,status); __FILE__,__LINE__,__func__,status);
didsomething = 0; didsomething = 0;
/* /*


@@ -238,15 +238,15 @@ static int com20020_reset(struct net_device *dev, int really_reset)
u_char inbyte; u_char inbyte;
BUGMSG(D_DEBUG, "%s: %d: %s: dev: %p, lp: %p, dev->name: %s\n", BUGMSG(D_DEBUG, "%s: %d: %s: dev: %p, lp: %p, dev->name: %s\n",
__FILE__,__LINE__,__FUNCTION__,dev,lp,dev->name); __FILE__,__LINE__,__func__,dev,lp,dev->name);
BUGMSG(D_INIT, "Resetting %s (status=%02Xh)\n", BUGMSG(D_INIT, "Resetting %s (status=%02Xh)\n",
dev->name, ASTATUS()); dev->name, ASTATUS());
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
lp->config = TXENcfg | (lp->timeout << 3) | (lp->backplane << 2); lp->config = TXENcfg | (lp->timeout << 3) | (lp->backplane << 2);
/* power-up defaults */ /* power-up defaults */
SETCONF; SETCONF;
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (really_reset) { if (really_reset) {
/* reset the card */ /* reset the card */
@@ -254,22 +254,22 @@ static int com20020_reset(struct net_device *dev, int really_reset)
mdelay(RESETtime * 2); /* COM20020 seems to be slower sometimes */ mdelay(RESETtime * 2); /* COM20020 seems to be slower sometimes */
} }
/* clear flags & end reset */ /* clear flags & end reset */
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
ACOMMAND(CFLAGScmd | RESETclear | CONFIGclear); ACOMMAND(CFLAGScmd | RESETclear | CONFIGclear);
/* verify that the ARCnet signature byte is present */ /* verify that the ARCnet signature byte is present */
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
com20020_copy_from_card(dev, 0, 0, &inbyte, 1); com20020_copy_from_card(dev, 0, 0, &inbyte, 1);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
if (inbyte != TESTvalue) { if (inbyte != TESTvalue) {
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
BUGMSG(D_NORMAL, "reset failed: TESTvalue not present.\n"); BUGMSG(D_NORMAL, "reset failed: TESTvalue not present.\n");
return 1; return 1;
} }
/* enable extended (512-byte) packets */ /* enable extended (512-byte) packets */
ACOMMAND(CONFIGcmd | EXTconf); ACOMMAND(CONFIGcmd | EXTconf);
BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__FUNCTION__); BUGMSG(D_DEBUG, "%s: %d: %s\n",__FILE__,__LINE__,__func__);
/* done! return success. */ /* done! return success. */
return 0; return 0;


@@ -397,7 +397,7 @@ static int atl1e_phy_setup_autoneg_adv(struct atl1e_hw *hw)
*/ */
int atl1e_phy_commit(struct atl1e_hw *hw) int atl1e_phy_commit(struct atl1e_hw *hw)
{ {
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter; struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev; struct pci_dev *pdev = adapter->pdev;
int ret_val; int ret_val;
u16 phy_data; u16 phy_data;
@@ -431,7 +431,7 @@ int atl1e_phy_commit(struct atl1e_hw *hw)
int atl1e_phy_init(struct atl1e_hw *hw) int atl1e_phy_init(struct atl1e_hw *hw)
{ {
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter; struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev; struct pci_dev *pdev = adapter->pdev;
s32 ret_val; s32 ret_val;
u16 phy_val; u16 phy_val;
@@ -525,7 +525,7 @@ int atl1e_phy_init(struct atl1e_hw *hw)
*/ */
int atl1e_reset_hw(struct atl1e_hw *hw) int atl1e_reset_hw(struct atl1e_hw *hw)
{ {
struct atl1e_adapter *adapter = (struct atl1e_adapter *)hw->adapter; struct atl1e_adapter *adapter = hw->adapter;
struct pci_dev *pdev = adapter->pdev; struct pci_dev *pdev = adapter->pdev;
u32 idle_status_data = 0; u32 idle_status_data = 0;


@@ -2390,9 +2390,7 @@ static int __devinit atl1e_probe(struct pci_dev *pdev,
} }
/* Init GPHY as early as possible due to power saving issue */ /* Init GPHY as early as possible due to power saving issue */
spin_lock(&adapter->mdio_lock);
atl1e_phy_init(&adapter->hw); atl1e_phy_init(&adapter->hw);
spin_unlock(&adapter->mdio_lock);
/* reset the controller to /* reset the controller to
* put the device in a known good starting state */ * put the device in a known good starting state */
err = atl1e_reset_hw(&adapter->hw); err = atl1e_reset_hw(&adapter->hw);


@@ -1 +1,3 @@
obj-$(CONFIG_ATL1) += atl1.o obj-$(CONFIG_ATL1) += atl1.o
obj-$(CONFIG_ATL2) += atl2.o


@@ -24,16 +24,12 @@
* file called COPYING. * file called COPYING.
* *
* Contact Information: * Contact Information:
* Xiong Huang <xiong_huang@attansic.com> * Xiong Huang <xiong.huang@atheros.com>
* Attansic Technology Corp. 3F 147, Xianzheng 9th Road, Zhubei, * Jie Yang <jie.yang@atheros.com>
* Xinzhu 302, TAIWAN, REPUBLIC OF CHINA
*
* Chris Snook <csnook@redhat.com> * Chris Snook <csnook@redhat.com>
* Jay Cliburn <jcliburn@gmail.com> * Jay Cliburn <jcliburn@gmail.com>
* *
* This version is adapted from the Attansic reference driver for * This version is adapted from the Attansic reference driver.
* inclusion in the Linux kernel. It is currently under heavy development.
* A very incomplete list of things that need to be dealt with:
* *
* TODO: * TODO:
* Add more ethtool functions. * Add more ethtool functions.
@@ -2109,7 +2105,6 @@ static u16 atl1_tpd_avail(struct atl1_tpd_ring *tpd_ring)
static int atl1_tso(struct atl1_adapter *adapter, struct sk_buff *skb, static int atl1_tso(struct atl1_adapter *adapter, struct sk_buff *skb,
struct tx_packet_desc *ptpd) struct tx_packet_desc *ptpd)
{ {
/* spinlock held */
u8 hdr_len, ip_off; u8 hdr_len, ip_off;
u32 real_len; u32 real_len;
int err; int err;
@@ -2196,7 +2191,6 @@ static int atl1_tx_csum(struct atl1_adapter *adapter, struct sk_buff *skb,
static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb, static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
struct tx_packet_desc *ptpd) struct tx_packet_desc *ptpd)
{ {
/* spinlock held */
struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring; struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
struct atl1_buffer *buffer_info; struct atl1_buffer *buffer_info;
u16 buf_len = skb->len; u16 buf_len = skb->len;
@@ -2303,7 +2297,6 @@ static void atl1_tx_map(struct atl1_adapter *adapter, struct sk_buff *skb,
static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count, static void atl1_tx_queue(struct atl1_adapter *adapter, u16 count,
struct tx_packet_desc *ptpd) struct tx_packet_desc *ptpd)
{ {
/* spinlock held */
struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring; struct atl1_tpd_ring *tpd_ring = &adapter->tpd_ring;
struct atl1_buffer *buffer_info; struct atl1_buffer *buffer_info;
struct tx_packet_desc *tpd; struct tx_packet_desc *tpd;
@@ -2361,7 +2354,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
struct tx_packet_desc *ptpd; struct tx_packet_desc *ptpd;
u16 frag_size; u16 frag_size;
u16 vlan_tag; u16 vlan_tag;
unsigned long flags;
unsigned int nr_frags = 0; unsigned int nr_frags = 0;
unsigned int mss = 0; unsigned int mss = 0;
unsigned int f; unsigned int f;
@@ -2399,18 +2391,9 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
} }
} }
if (!spin_trylock_irqsave(&adapter->lock, flags)) {
/* Can't get lock - tell upper layer to requeue */
if (netif_msg_tx_queued(adapter))
dev_printk(KERN_DEBUG, &adapter->pdev->dev,
"tx locked\n");
return NETDEV_TX_LOCKED;
}
if (atl1_tpd_avail(&adapter->tpd_ring) < count) { if (atl1_tpd_avail(&adapter->tpd_ring) < count) {
/* not enough descriptors */ /* not enough descriptors */
netif_stop_queue(netdev); netif_stop_queue(netdev);
spin_unlock_irqrestore(&adapter->lock, flags);
if (netif_msg_tx_queued(adapter)) if (netif_msg_tx_queued(adapter))
dev_printk(KERN_DEBUG, &adapter->pdev->dev, dev_printk(KERN_DEBUG, &adapter->pdev->dev,
"tx busy\n"); "tx busy\n");
@@ -2432,7 +2415,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
tso = atl1_tso(adapter, skb, ptpd); tso = atl1_tso(adapter, skb, ptpd);
if (tso < 0) { if (tso < 0) {
spin_unlock_irqrestore(&adapter->lock, flags);
dev_kfree_skb_any(skb); dev_kfree_skb_any(skb);
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }
@@ -2440,7 +2422,6 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
if (!tso) { if (!tso) {
ret_val = atl1_tx_csum(adapter, skb, ptpd); ret_val = atl1_tx_csum(adapter, skb, ptpd);
if (ret_val < 0) { if (ret_val < 0) {
spin_unlock_irqrestore(&adapter->lock, flags);
dev_kfree_skb_any(skb); dev_kfree_skb_any(skb);
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }
@@ -2449,7 +2430,7 @@ static int atl1_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
atl1_tx_map(adapter, skb, ptpd); atl1_tx_map(adapter, skb, ptpd);
atl1_tx_queue(adapter, count, ptpd); atl1_tx_queue(adapter, count, ptpd);
atl1_update_mailbox(adapter); atl1_update_mailbox(adapter);
spin_unlock_irqrestore(&adapter->lock, flags); mmiowb();
netdev->trans_start = jiffies; netdev->trans_start = jiffies;
return NETDEV_TX_OK; return NETDEV_TX_OK;
} }
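Taken together, the atl1 hunks above remove the driver's NETIF_F_LLTX locking scheme: the `spin_trylock_irqsave()` / `NETDEV_TX_LOCKED` path disappears, and the final `spin_unlock_irqrestore()` becomes `mmiowb()`, which orders the posted mailbox write before another CPU can take the TX queue lock that the core now holds around hard_start_xmit. A rough, non-compilable sketch of the before/after shape (only the function names taken from the diff are real; the rest is schematic):

```c
/* Before (LLTX): the driver serializes xmit itself and may punt. */
if (!spin_trylock_irqsave(&adapter->lock, flags))
	return NETDEV_TX_LOCKED;	/* core will requeue and retry */
/* ... build and queue descriptors ... */
spin_unlock_irqrestore(&adapter->lock, flags);
return NETDEV_TX_OK;

/* After: the core already holds the TX queue lock, so the driver
 * only has to order its MMIO mailbox write before that lock drops. */
/* ... build and queue descriptors ... */
atl1_update_mailbox(adapter);
mmiowb();	/* keep the mailbox write ordered across CPUs */
return NETDEV_TX_OK;
```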
@@ -2642,6 +2623,7 @@ static void atl1_down(struct atl1_adapter *adapter)
{ {
struct net_device *netdev = adapter->netdev; struct net_device *netdev = adapter->netdev;
netif_stop_queue(netdev);
del_timer_sync(&adapter->watchdog_timer); del_timer_sync(&adapter->watchdog_timer);
del_timer_sync(&adapter->phy_config_timer); del_timer_sync(&adapter->phy_config_timer);
adapter->phy_timer_pending = false; adapter->phy_timer_pending = false;
@@ -2655,7 +2637,6 @@ static void atl1_down(struct atl1_adapter *adapter)
adapter->link_speed = SPEED_0; adapter->link_speed = SPEED_0;
adapter->link_duplex = -1; adapter->link_duplex = -1;
netif_carrier_off(netdev); netif_carrier_off(netdev);
netif_stop_queue(netdev);
atl1_clean_tx_ring(adapter); atl1_clean_tx_ring(adapter);
atl1_clean_rx_ring(adapter); atl1_clean_rx_ring(adapter);
@@ -2724,6 +2705,8 @@ static int atl1_open(struct net_device *netdev)
struct atl1_adapter *adapter = netdev_priv(netdev); struct atl1_adapter *adapter = netdev_priv(netdev);
int err; int err;
netif_carrier_off(netdev);
/* allocate transmit descriptors */ /* allocate transmit descriptors */
err = atl1_setup_ring_resources(adapter); err = atl1_setup_ring_resources(adapter);
if (err) if (err)
@@ -3022,7 +3005,6 @@ static int __devinit atl1_probe(struct pci_dev *pdev,
netdev->features = NETIF_F_HW_CSUM; netdev->features = NETIF_F_HW_CSUM;
netdev->features |= NETIF_F_SG; netdev->features |= NETIF_F_SG;
netdev->features |= (NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX); netdev->features |= (NETIF_F_HW_VLAN_TX | NETIF_F_HW_VLAN_RX);
netdev->features |= NETIF_F_LLTX;
/* /*
* patch for some L1 of old version, * patch for some L1 of old version,

3119
drivers/net/atlx/atl2.c Normal file

The diff for this file is not shown because of its large size.

529
drivers/net/atlx/atl2.h Normal file

@@ -0,0 +1,529 @@
/* atl2.h -- atl2 driver definitions
*
* Copyright(c) 2007 Atheros Corporation. All rights reserved.
* Copyright(c) 2006 xiong huang <xiong.huang@atheros.com>
* Copyright(c) 2007 Chris Snook <csnook@redhat.com>
*
* Derived from Intel e1000 driver
* Copyright(c) 1999 - 2005 Intel Corporation. All rights reserved.
*
* This program is free software; you can redistribute it and/or modify it
* under the terms of the GNU General Public License as published by the Free
* Software Foundation; either version 2 of the License, or (at your option)
* any later version.
*
* This program is distributed in the hope that it will be useful, but WITHOUT
* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for
* more details.
*
* You should have received a copy of the GNU General Public License along with
* this program; if not, write to the Free Software Foundation, Inc., 59
* Temple Place - Suite 330, Boston, MA 02111-1307, USA.
*/
#ifndef _ATL2_H_
#define _ATL2_H_
#include <asm/atomic.h>
#include <linux/netdevice.h>
#ifndef _ATL2_HW_H_
#define _ATL2_HW_H_
#ifndef _ATL2_OSDEP_H_
#define _ATL2_OSDEP_H_
#include <linux/pci.h>
#include <linux/delay.h>
#include <linux/interrupt.h>
#include <linux/if_ether.h>
#include "atlx.h"
#ifdef ETHTOOL_OPS_COMPAT
extern int ethtool_ioctl(struct ifreq *ifr);
#endif
#define PCI_COMMAND_REGISTER PCI_COMMAND
#define CMD_MEM_WRT_INVALIDATE PCI_COMMAND_INVALIDATE
#define ETH_ADDR_LEN ETH_ALEN
#define ATL2_WRITE_REG(a, reg, value) (iowrite32((value), \
((a)->hw_addr + (reg))))
#define ATL2_WRITE_FLUSH(a) (ioread32((a)->hw_addr))
#define ATL2_READ_REG(a, reg) (ioread32((a)->hw_addr + (reg)))
#define ATL2_WRITE_REGB(a, reg, value) (iowrite8((value), \
((a)->hw_addr + (reg))))
#define ATL2_READ_REGB(a, reg) (ioread8((a)->hw_addr + (reg)))
#define ATL2_WRITE_REGW(a, reg, value) (iowrite16((value), \
((a)->hw_addr + (reg))))
#define ATL2_READ_REGW(a, reg) (ioread16((a)->hw_addr + (reg)))
#define ATL2_WRITE_REG_ARRAY(a, reg, offset, value) \
(iowrite32((value), (((a)->hw_addr + (reg)) + ((offset) << 2))))
#define ATL2_READ_REG_ARRAY(a, reg, offset) \
(ioread32(((a)->hw_addr + (reg)) + ((offset) << 2)))
#endif /* _ATL2_OSDEP_H_ */
struct atl2_adapter;
struct atl2_hw;
/* function prototype */
static s32 atl2_reset_hw(struct atl2_hw *hw);
static s32 atl2_read_mac_addr(struct atl2_hw *hw);
static s32 atl2_init_hw(struct atl2_hw *hw);
static s32 atl2_get_speed_and_duplex(struct atl2_hw *hw, u16 *speed,
u16 *duplex);
static u32 atl2_hash_mc_addr(struct atl2_hw *hw, u8 *mc_addr);
static void atl2_hash_set(struct atl2_hw *hw, u32 hash_value);
static s32 atl2_read_phy_reg(struct atl2_hw *hw, u16 reg_addr, u16 *phy_data);
static s32 atl2_write_phy_reg(struct atl2_hw *hw, u32 reg_addr, u16 phy_data);
static void atl2_read_pci_cfg(struct atl2_hw *hw, u32 reg, u16 *value);
static void atl2_write_pci_cfg(struct atl2_hw *hw, u32 reg, u16 *value);
static void atl2_set_mac_addr(struct atl2_hw *hw);
static bool atl2_read_eeprom(struct atl2_hw *hw, u32 Offset, u32 *pValue);
static bool atl2_write_eeprom(struct atl2_hw *hw, u32 offset, u32 value);
static s32 atl2_phy_init(struct atl2_hw *hw);
static int atl2_check_eeprom_exist(struct atl2_hw *hw);
static void atl2_force_ps(struct atl2_hw *hw);
/* register definition */
/* Block IDLE Status Register */
#define IDLE_STATUS_RXMAC 1 /* 1: RXMAC is non-IDLE */
#define IDLE_STATUS_TXMAC 2 /* 1: TXMAC is non-IDLE */
#define IDLE_STATUS_DMAR 8 /* 1: DMAR is non-IDLE */
#define IDLE_STATUS_DMAW 4 /* 1: DMAW is non-IDLE */
/* MDIO Control Register */
#define MDIO_WAIT_TIMES 10
/* MAC Control Register */
#define MAC_CTRL_DBG_TX_BKPRESURE 0x100000 /* 1: TX max backoff */
#define MAC_CTRL_MACLP_CLK_PHY 0x8000000 /* 1: 25MHz from phy */
#define MAC_CTRL_HALF_LEFT_BUF_SHIFT 28
#define MAC_CTRL_HALF_LEFT_BUF_MASK 0xF /* MAC retry buf x32B */
/* Internal SRAM Partition Register */
#define REG_SRAM_TXRAM_END 0x1500 /* Internal tail address of TXRAM
* default: 2byte*1024 */
#define REG_SRAM_RXRAM_END 0x1502 /* Internal tail address of RXRAM
* default: 2byte*1024 */
/* Descriptor Control register */
#define REG_TXD_BASE_ADDR_LO 0x1544 /* The base address of the Transmit
* Data Mem low 32-bit(dword align) */
#define REG_TXD_MEM_SIZE 0x1548 /* Transmit Data Memory size(by
* double word , max 256KB) */
#define REG_TXS_BASE_ADDR_LO 0x154C /* The base address of the Transmit
* Status Memory low 32-bit(dword word
* align) */
#define REG_TXS_MEM_SIZE 0x1550 /* double word unit, max 4*2047
* bytes. */
#define REG_RXD_BASE_ADDR_LO 0x1554 /* The base address of the Transmit
* Status Memory low 32-bit(unit 8
* bytes) */
#define REG_RXD_BUF_NUM 0x1558 /* Receive Data & Status Memory buffer
* number (unit 1536bytes, max
* 1536*2047) */
/* DMAR Control Register */
#define REG_DMAR 0x1580
#define DMAR_EN 0x1 /* 1: Enable DMAR */
/* TX Cut-Through (early tx threshold) Control Register */
#define REG_TX_CUT_THRESH 0x1590 /* TxMac begin transmit packet
* threshold(unit word) */
/* DMAW Control Register */
#define REG_DMAW 0x15A0
#define DMAW_EN 0x1
/* Flow control register */
#define REG_PAUSE_ON_TH 0x15A8 /* RXD high watermark of overflow
* threshold configuration register */
#define REG_PAUSE_OFF_TH 0x15AA /* RXD lower watermark of overflow
* threshold configuration register */
/* Mailbox Register */
#define REG_MB_TXD_WR_IDX 0x15f0 /* double word align */
#define REG_MB_RXD_RD_IDX 0x15F4 /* RXD Read index (unit: 1536 bytes) */
/* Interrupt Status Register */
#define ISR_TIMER 1 /* Interrupt when Timer counts down to zero */
#define ISR_MANUAL 2 /* Software manual interrupt, for debug. Set
* when SW_MAN_INT_EN is set in Table 51
* Selene Master Control Register
* (Offset 0x1400). */
#define ISR_RXF_OV 4 /* RXF overflow interrupt */
#define ISR_TXF_UR 8 /* TXF underrun interrupt */
#define ISR_TXS_OV 0x10 /* Internal transmit status buffer full
* interrupt */
#define ISR_RXS_OV 0x20 /* Internal receive status buffer full
* interrupt */
#define ISR_LINK_CHG 0x40 /* Link Status Change Interrupt */
#define ISR_HOST_TXD_UR 0x80
#define ISR_HOST_RXD_OV 0x100 /* Host rx data memory full , one pulse */
#define ISR_DMAR_TO_RST 0x200 /* DMAR op timeout interrupt. SW should
* do Reset */
#define ISR_DMAW_TO_RST 0x400
#define ISR_PHY 0x800 /* phy interrupt */
#define ISR_TS_UPDATE 0x10000 /* interrupt after new tx pkt status written
* to host */
#define ISR_RS_UPDATE 0x20000 /* interrupt after new rx pkt status written
* to host. */
#define ISR_TX_EARLY 0x40000 /* interrupt when txmac begin transmit one
* packet */
#define ISR_TX_EVENT (ISR_TXF_UR | ISR_TXS_OV | ISR_HOST_TXD_UR |\
ISR_TS_UPDATE | ISR_TX_EARLY)
#define ISR_RX_EVENT (ISR_RXF_OV | ISR_RXS_OV | ISR_HOST_RXD_OV |\
ISR_RS_UPDATE)
#define IMR_NORMAL_MASK (\
/*ISR_LINK_CHG |*/\
ISR_MANUAL |\
ISR_DMAR_TO_RST |\
ISR_DMAW_TO_RST |\
ISR_PHY |\
ISR_PHY_LINKDOWN |\
ISR_TS_UPDATE |\
ISR_RS_UPDATE)
/* Receive MAC Statistics Registers */
#define REG_STS_RX_PAUSE 0x1700 /* Num pause packets received */
#define REG_STS_RXD_OV 0x1704 /* Num frames dropped due to RX
* FIFO overflow */
#define REG_STS_RXS_OV 0x1708 /* Num frames dropped due to RX
* Status Buffer Overflow */
#define REG_STS_RX_FILTER 0x170C /* Num packets dropped due to
* address filtering */
/* MII definitions */
/* PHY Common Register */
#define MII_SMARTSPEED 0x14
#define MII_DBG_ADDR 0x1D
#define MII_DBG_DATA 0x1E
/* PCI Command Register Bit Definitions */
#define PCI_REG_COMMAND 0x04
#define CMD_IO_SPACE 0x0001
#define CMD_MEMORY_SPACE 0x0002
#define CMD_BUS_MASTER 0x0004
#define MEDIA_TYPE_100M_FULL 1
#define MEDIA_TYPE_100M_HALF 2
#define MEDIA_TYPE_10M_FULL 3
#define MEDIA_TYPE_10M_HALF 4
#define AUTONEG_ADVERTISE_SPEED_DEFAULT 0x000F /* Everything */
/* The size (in bytes) of an Ethernet packet */
#define ENET_HEADER_SIZE 14
#define MAXIMUM_ETHERNET_FRAME_SIZE 1518 /* with FCS */
#define MINIMUM_ETHERNET_FRAME_SIZE 64 /* with FCS */
#define ETHERNET_FCS_SIZE 4
#define MAX_JUMBO_FRAME_SIZE 0x2000
#define VLAN_SIZE 4
struct tx_pkt_header {
unsigned pkt_size:11;
unsigned:4; /* reserved */
unsigned ins_vlan:1; /* txmac should insert vlan */
unsigned short vlan; /* vlan tag */
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define TX_PKT_HEADER_SIZE_MASK 0x7FF
#define TX_PKT_HEADER_SIZE_SHIFT 0
#define TX_PKT_HEADER_INS_VLAN_MASK 0x1
#define TX_PKT_HEADER_INS_VLAN_SHIFT 15
#define TX_PKT_HEADER_VLAN_TAG_MASK 0xFFFF
#define TX_PKT_HEADER_VLAN_TAG_SHIFT 16
struct tx_pkt_status {
unsigned pkt_size:11;
unsigned:5; /* reserved */
unsigned ok:1; /* current packet transmitted without error */
unsigned bcast:1; /* broadcast packet */
unsigned mcast:1; /* multicast packet */
unsigned pause:1; /* transmitted a pause frame */
unsigned ctrl:1;
unsigned defer:1; /* current packet is xmitted with defer */
unsigned exc_defer:1;
unsigned single_col:1;
unsigned multi_col:1;
unsigned late_col:1;
unsigned abort_col:1;
unsigned underun:1; /* current packet is aborted
* due to txram underrun */
unsigned:3; /* reserved */
unsigned update:1; /* always 1'b1 in tx_status_buf */
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define TX_PKT_STATUS_SIZE_MASK 0x7FF
#define TX_PKT_STATUS_SIZE_SHIFT 0
#define TX_PKT_STATUS_OK_MASK 0x1
#define TX_PKT_STATUS_OK_SHIFT 16
#define TX_PKT_STATUS_BCAST_MASK 0x1
#define TX_PKT_STATUS_BCAST_SHIFT 17
#define TX_PKT_STATUS_MCAST_MASK 0x1
#define TX_PKT_STATUS_MCAST_SHIFT 18
#define TX_PKT_STATUS_PAUSE_MASK 0x1
#define TX_PKT_STATUS_PAUSE_SHIFT 19
#define TX_PKT_STATUS_CTRL_MASK 0x1
#define TX_PKT_STATUS_CTRL_SHIFT 20
#define TX_PKT_STATUS_DEFER_MASK 0x1
#define TX_PKT_STATUS_DEFER_SHIFT 21
#define TX_PKT_STATUS_EXC_DEFER_MASK 0x1
#define TX_PKT_STATUS_EXC_DEFER_SHIFT 22
#define TX_PKT_STATUS_SINGLE_COL_MASK 0x1
#define TX_PKT_STATUS_SINGLE_COL_SHIFT 23
#define TX_PKT_STATUS_MULTI_COL_MASK 0x1
#define TX_PKT_STATUS_MULTI_COL_SHIFT 24
#define TX_PKT_STATUS_LATE_COL_MASK 0x1
#define TX_PKT_STATUS_LATE_COL_SHIFT 25
#define TX_PKT_STATUS_ABORT_COL_MASK 0x1
#define TX_PKT_STATUS_ABORT_COL_SHIFT 26
#define TX_PKT_STATUS_UNDERRUN_MASK 0x1
#define TX_PKT_STATUS_UNDERRUN_SHIFT 27
#define TX_PKT_STATUS_UPDATE_MASK 0x1
#define TX_PKT_STATUS_UPDATE_SHIFT 31
struct rx_pkt_status {
unsigned pkt_size:11; /* packet size, max 2047 bytes */
unsigned:5; /* reserved */
unsigned ok:1; /* current packet received ok without error */
unsigned bcast:1; /* current packet is broadcast */
unsigned mcast:1; /* current packet is multicast */
unsigned pause:1;
unsigned ctrl:1;
unsigned crc:1; /* received a packet with crc error */
unsigned code:1; /* received a packet with code error */
unsigned runt:1; /* received a packet less than 64 bytes
* with good crc */
unsigned frag:1; /* received a packet less than 64 bytes
* with bad crc */
unsigned trunc:1; /* current frame truncated due to rxram full */
unsigned align:1; /* this packet has an alignment error */
unsigned vlan:1; /* this packet has vlan */
unsigned:3; /* reserved */
unsigned update:1;
unsigned short vtag; /* vlan tag */
unsigned:16;
};
/* FIXME: replace above bitfields with MASK/SHIFT defines below */
#define RX_PKT_STATUS_SIZE_MASK 0x7FF
#define RX_PKT_STATUS_SIZE_SHIFT 0
#define RX_PKT_STATUS_OK_MASK 0x1
#define RX_PKT_STATUS_OK_SHIFT 16
#define RX_PKT_STATUS_BCAST_MASK 0x1
#define RX_PKT_STATUS_BCAST_SHIFT 17
#define RX_PKT_STATUS_MCAST_MASK 0x1
#define RX_PKT_STATUS_MCAST_SHIFT 18
#define RX_PKT_STATUS_PAUSE_MASK 0x1
#define RX_PKT_STATUS_PAUSE_SHIFT 19
#define RX_PKT_STATUS_CTRL_MASK 0x1
#define RX_PKT_STATUS_CTRL_SHIFT 20
#define RX_PKT_STATUS_CRC_MASK 0x1
#define RX_PKT_STATUS_CRC_SHIFT 21
#define RX_PKT_STATUS_CODE_MASK 0x1
#define RX_PKT_STATUS_CODE_SHIFT 22
#define RX_PKT_STATUS_RUNT_MASK 0x1
#define RX_PKT_STATUS_RUNT_SHIFT 23
#define RX_PKT_STATUS_FRAG_MASK 0x1
#define RX_PKT_STATUS_FRAG_SHIFT 24
#define RX_PKT_STATUS_TRUNK_MASK 0x1
#define RX_PKT_STATUS_TRUNK_SHIFT 25
#define RX_PKT_STATUS_ALIGN_MASK 0x1
#define RX_PKT_STATUS_ALIGN_SHIFT 26
#define RX_PKT_STATUS_VLAN_MASK 0x1
#define RX_PKT_STATUS_VLAN_SHIFT 27
#define RX_PKT_STATUS_UPDATE_MASK 0x1
#define RX_PKT_STATUS_UPDATE_SHIFT 31
#define RX_PKT_STATUS_VLAN_TAG_MASK 0xFFFF
#define RX_PKT_STATUS_VLAN_TAG_SHIFT 32
struct rx_desc {
struct rx_pkt_status status;
unsigned char packet[1536-sizeof(struct rx_pkt_status)];
};
enum atl2_speed_duplex {
atl2_10_half = 0,
atl2_10_full = 1,
atl2_100_half = 2,
atl2_100_full = 3
};
struct atl2_spi_flash_dev {
const char *manu_name; /* manufacturer id */
/* op-code */
u8 cmdWRSR;
u8 cmdREAD;
u8 cmdPROGRAM;
u8 cmdWREN;
u8 cmdWRDI;
u8 cmdRDSR;
u8 cmdRDID;
u8 cmdSECTOR_ERASE;
u8 cmdCHIP_ERASE;
};
/* Structure containing variables used by the shared code (atl2_hw.c) */
struct atl2_hw {
u8 __iomem *hw_addr;
void *back;
u8 preamble_len;
u8 max_retry; /* Retransmission maximum, afterwards the
* packet will be discarded. */
u8 jam_ipg; /* IPG to start JAM for collision based flow
* control in half-duplex mode. In unit of
* 8-bit time. */
u8 ipgt; /* Desired back to back inter-packet gap. The
* default is 96-bit time. */
u8 min_ifg; /* Minimum number of IFG to enforce in between
* RX frames. Frame gap below such IFG is
* dropped. */
u8 ipgr1; /* 64bit Carrier-Sense window */
u8 ipgr2; /* 96-bit IPG window */
u8 retry_buf; /* When half-duplex mode, should hold some
* bytes for mac retry. (8*4 bytes unit) */
u16 fc_rxd_hi;
u16 fc_rxd_lo;
u16 lcol; /* Collision Window */
u16 max_frame_size;
u16 MediaType;
u16 autoneg_advertised;
u16 pci_cmd_word;
u16 mii_autoneg_adv_reg;
u32 mem_rang;
u32 txcw;
u32 mc_filter_type;
u32 num_mc_addrs;
u32 collision_delta;
u32 tx_packet_delta;
u16 phy_spd_default;
u16 device_id;
u16 vendor_id;
u16 subsystem_id;
u16 subsystem_vendor_id;
u8 revision_id;
/* spi flash */
u8 flash_vendor;
u8 dma_fairness;
u8 mac_addr[NODE_ADDRESS_SIZE];
u8 perm_mac_addr[NODE_ADDRESS_SIZE];
/* FIXME */
/* bool phy_preamble_sup; */
bool phy_configured;
};
#endif /* _ATL2_HW_H_ */
struct atl2_ring_header {
/* pointer to the descriptor ring memory */
void *desc;
/* physical address of the descriptor ring */
dma_addr_t dma;
/* length of descriptor ring in bytes */
unsigned int size;
};
/* board specific private data structure */
struct atl2_adapter {
/* OS defined structs */
struct net_device *netdev;
struct pci_dev *pdev;
struct net_device_stats net_stats;
#ifdef NETIF_F_HW_VLAN_TX
struct vlan_group *vlgrp;
#endif
u32 wol;
u16 link_speed;
u16 link_duplex;
spinlock_t stats_lock;
struct work_struct reset_task;
struct work_struct link_chg_task;
struct timer_list watchdog_timer;
struct timer_list phy_config_timer;
unsigned long cfg_phy;
bool mac_disabled;
/* All Descriptor memory */
dma_addr_t ring_dma;
void *ring_vir_addr;
int ring_size;
struct tx_pkt_header *txd_ring;
dma_addr_t txd_dma;
struct tx_pkt_status *txs_ring;
dma_addr_t txs_dma;
struct rx_desc *rxd_ring;
dma_addr_t rxd_dma;
u32 txd_ring_size; /* bytes per unit */
u32 txs_ring_size; /* dwords per unit */
u32 rxd_ring_size; /* 1536 bytes per unit */
/* read /write ptr: */
/* host */
u32 txd_write_ptr;
u32 txs_next_clear;
u32 rxd_read_ptr;
/* nic */
atomic_t txd_read_ptr;
atomic_t txs_write_ptr;
u32 rxd_write_ptr;
/* Interrupt Moderator timer ( 2us resolution) */
u16 imt;
/* Interrupt Clear timer (2us resolution) */
u16 ict;
unsigned long flags;
/* structs defined in atl2_hw.h */
u32 bd_number; /* board number */
bool pci_using_64;
bool have_msi;
struct atl2_hw hw;
u32 usr_cmd;
/* FIXME */
/* u32 regs_buff[ATL2_REGS_LEN]; */
u32 pci_state[16];
u32 *config_space;
};
enum atl2_state_t {
__ATL2_TESTING,
__ATL2_RESETTING,
__ATL2_DOWN
};
#endif /* _ATL2_H_ */


@@ -105,7 +105,6 @@ static void atlx_check_for_link(struct atlx_adapter *adapter)
netdev->name); netdev->name);
adapter->link_speed = SPEED_0; adapter->link_speed = SPEED_0;
netif_carrier_off(netdev); netif_carrier_off(netdev);
netif_stop_queue(netdev);
} }
} }
schedule_work(&adapter->link_chg_task); schedule_work(&adapter->link_chg_task);


@@ -290,7 +290,7 @@ static int mii_probe (struct net_device *dev)
if(aup->mac_id == 0) { /* get PHY0 */ if(aup->mac_id == 0) { /* get PHY0 */
# if defined(AU1XXX_PHY0_ADDR) # if defined(AU1XXX_PHY0_ADDR)
phydev = au_macs[AU1XXX_PHY0_BUSID]->mii_bus.phy_map[AU1XXX_PHY0_ADDR]; phydev = au_macs[AU1XXX_PHY0_BUSID]->mii_bus->phy_map[AU1XXX_PHY0_ADDR];
# else # else
printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n", printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
dev->name); dev->name);
@@ -298,7 +298,7 @@ static int mii_probe (struct net_device *dev)
# endif /* defined(AU1XXX_PHY0_ADDR) */ # endif /* defined(AU1XXX_PHY0_ADDR) */
} else if (aup->mac_id == 1) { /* get PHY1 */ } else if (aup->mac_id == 1) { /* get PHY1 */
# if defined(AU1XXX_PHY1_ADDR) # if defined(AU1XXX_PHY1_ADDR)
phydev = au_macs[AU1XXX_PHY1_BUSID]->mii_bus.phy_map[AU1XXX_PHY1_ADDR]; phydev = au_macs[AU1XXX_PHY1_BUSID]->mii_bus->phy_map[AU1XXX_PHY1_ADDR];
# else # else
printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n", printk (KERN_INFO DRV_NAME ":%s: using PHY-less setup\n",
dev->name); dev->name);
@@ -311,8 +311,8 @@ static int mii_probe (struct net_device *dev)
/* find the first (lowest address) PHY on the current MAC's MII bus */ /* find the first (lowest address) PHY on the current MAC's MII bus */
for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++)
if (aup->mii_bus.phy_map[phy_addr]) { if (aup->mii_bus->phy_map[phy_addr]) {
phydev = aup->mii_bus.phy_map[phy_addr]; phydev = aup->mii_bus->phy_map[phy_addr];
# if !defined(AU1XXX_PHY_SEARCH_HIGHEST_ADDR) # if !defined(AU1XXX_PHY_SEARCH_HIGHEST_ADDR)
break; /* break out with first one found */ break; /* break out with first one found */
# endif # endif
@@ -331,7 +331,7 @@ static int mii_probe (struct net_device *dev)
* the MAC0 MII bus */ * the MAC0 MII bus */
for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) { for (phy_addr = 0; phy_addr < PHY_MAX_ADDR; phy_addr++) {
struct phy_device *const tmp_phydev = struct phy_device *const tmp_phydev =
au_macs[0]->mii_bus.phy_map[phy_addr]; au_macs[0]->mii_bus->phy_map[phy_addr];
if (!tmp_phydev) if (!tmp_phydev)
continue; /* no PHY here... */ continue; /* no PHY here... */
@@ -653,6 +653,8 @@ static struct net_device * au1000_probe(int port_num)
aup = dev->priv; aup = dev->priv;
spin_lock_init(&aup->lock);
/* Allocate the data buffers */ /* Allocate the data buffers */
/* Snooping works fine with eth on all au1xxx */ /* Snooping works fine with eth on all au1xxx */
aup->vaddr = (u32)dma_alloc_noncoherent(NULL, MAX_BUF_SIZE * aup->vaddr = (u32)dma_alloc_noncoherent(NULL, MAX_BUF_SIZE *
@@ -696,28 +698,32 @@ static struct net_device * au1000_probe(int port_num)
*aup->enable = 0; *aup->enable = 0;
aup->mac_enabled = 0; aup->mac_enabled = 0;
aup->mii_bus.priv = dev; aup->mii_bus = mdiobus_alloc();
aup->mii_bus.read = mdiobus_read; if (aup->mii_bus == NULL)
aup->mii_bus.write = mdiobus_write; goto err_out;
aup->mii_bus.reset = mdiobus_reset;
aup->mii_bus.name = "au1000_eth_mii"; aup->mii_bus->priv = dev;
snprintf(aup->mii_bus.id, MII_BUS_ID_SIZE, "%x", aup->mac_id); aup->mii_bus->read = mdiobus_read;
aup->mii_bus.irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL); aup->mii_bus->write = mdiobus_write;
aup->mii_bus->reset = mdiobus_reset;
aup->mii_bus->name = "au1000_eth_mii";
snprintf(aup->mii_bus->id, MII_BUS_ID_SIZE, "%x", aup->mac_id);
aup->mii_bus->irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for(i = 0; i < PHY_MAX_ADDR; ++i) for(i = 0; i < PHY_MAX_ADDR; ++i)
aup->mii_bus.irq[i] = PHY_POLL; aup->mii_bus->irq[i] = PHY_POLL;
/* if known, set corresponding PHY IRQs */ /* if known, set corresponding PHY IRQs */
#if defined(AU1XXX_PHY_STATIC_CONFIG) #if defined(AU1XXX_PHY_STATIC_CONFIG)
# if defined(AU1XXX_PHY0_IRQ) # if defined(AU1XXX_PHY0_IRQ)
if (AU1XXX_PHY0_BUSID == aup->mac_id) if (AU1XXX_PHY0_BUSID == aup->mac_id)
aup->mii_bus.irq[AU1XXX_PHY0_ADDR] = AU1XXX_PHY0_IRQ; aup->mii_bus->irq[AU1XXX_PHY0_ADDR] = AU1XXX_PHY0_IRQ;
# endif # endif
# if defined(AU1XXX_PHY1_IRQ) # if defined(AU1XXX_PHY1_IRQ)
if (AU1XXX_PHY1_BUSID == aup->mac_id) if (AU1XXX_PHY1_BUSID == aup->mac_id)
aup->mii_bus.irq[AU1XXX_PHY1_ADDR] = AU1XXX_PHY1_IRQ; aup->mii_bus->irq[AU1XXX_PHY1_ADDR] = AU1XXX_PHY1_IRQ;
# endif # endif
#endif #endif
mdiobus_register(&aup->mii_bus); mdiobus_register(aup->mii_bus);
if (mii_probe(dev) != 0) { if (mii_probe(dev) != 0) {
goto err_out; goto err_out;
@@ -753,7 +759,6 @@ static struct net_device * au1000_probe(int port_num)
aup->tx_db_inuse[i] = pDB; aup->tx_db_inuse[i] = pDB;
} }
spin_lock_init(&aup->lock);
dev->base_addr = base; dev->base_addr = base;
dev->irq = irq; dev->irq = irq;
dev->open = au1000_open; dev->open = au1000_open;
@@ -774,6 +779,11 @@ static struct net_device * au1000_probe(int port_num)
return dev; return dev;
err_out: err_out:
if (aup->mii_bus != NULL) {
mdiobus_unregister(aup->mii_bus);
mdiobus_free(aup->mii_bus);
}
/* here we should have a valid dev plus aup-> register addresses /* here we should have a valid dev plus aup-> register addresses
* so we can reset the mac properly.*/ * so we can reset the mac properly.*/
reset_mac(dev); reset_mac(dev);
@@ -1004,6 +1014,8 @@ static void __exit au1000_cleanup_module(void)
if (dev) { if (dev) {
aup = (struct au1000_private *) dev->priv; aup = (struct au1000_private *) dev->priv;
unregister_netdev(dev); unregister_netdev(dev);
mdiobus_unregister(aup->mii_bus);
mdiobus_free(aup->mii_bus);
for (j = 0; j < NUM_RX_DMA; j++) for (j = 0; j < NUM_RX_DMA; j++)
if (aup->rx_db_inuse[j]) if (aup->rx_db_inuse[j])
ReleaseDB(aup, aup->rx_db_inuse[j]); ReleaseDB(aup, aup->rx_db_inuse[j]);
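The au1000 conversion above (mirrored by the bfin_mac hunks further down) switches the embedded `struct mii_bus` to a pointer obtained from the mdiobus core, so every bus now follows the same alloc → register → unregister → free lifecycle. A non-compilable sketch of that pattern; the bus name, `bus_num`, and the error labels are illustrative, the mdiobus calls are the ones used in the diff:

```c
/* probe: allocate, fill in ops, then register */
bus = mdiobus_alloc();
if (bus == NULL)
	goto err_out;
bus->priv  = dev;
bus->read  = mdiobus_read;
bus->write = mdiobus_write;
bus->reset = mdiobus_reset;
bus->name  = "example_mii";
snprintf(bus->id, MII_BUS_ID_SIZE, "%x", bus_num);
if (mdiobus_register(bus))
	goto err_free;

/* ... PHYs are now reachable via bus->phy_map[addr] ... */

/* remove: unregister first, then free.  mdiobus_free() pairs with
 * mdiobus_alloc(); it must never be called on a bus that was
 * embedded in another structure, which is why the embedded
 * struct mii_bus had to go. */
mdiobus_unregister(bus);
err_free:
	mdiobus_free(bus);
```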


@@ -106,7 +106,7 @@ struct au1000_private {
int old_duplex; int old_duplex;
struct phy_device *phy_dev; struct phy_device *phy_dev;
struct mii_bus mii_bus; struct mii_bus *mii_bus;
/* These variables are just for quick access to certain regs addresses. */ /* These variables are just for quick access to certain regs addresses. */
volatile mac_reg_t *mac; /* mac registers */ volatile mac_reg_t *mac; /* mac registers */


@@ -153,7 +153,7 @@ static void ax_reset_8390(struct net_device *dev)
while ((ei_inb(addr + EN0_ISR) & ENISR_RESET) == 0) { while ((ei_inb(addr + EN0_ISR) & ENISR_RESET) == 0) {
if (jiffies - reset_start_time > 2*HZ/100) { if (jiffies - reset_start_time > 2*HZ/100) {
dev_warn(&ax->dev->dev, "%s: %s did not complete.\n", dev_warn(&ax->dev->dev, "%s: %s did not complete.\n",
__FUNCTION__, dev->name); __func__, dev->name);
break; break;
} }
} }
@@ -173,7 +173,7 @@ static void ax_get_8390_hdr(struct net_device *dev, struct e8390_pkt_hdr *hdr,
if (ei_status.dmaing) { if (ei_status.dmaing) {
dev_err(&ax->dev->dev, "%s: DMAing conflict in %s " dev_err(&ax->dev->dev, "%s: DMAing conflict in %s "
"[DMAstat:%d][irqlock:%d].\n", "[DMAstat:%d][irqlock:%d].\n",
dev->name, __FUNCTION__, dev->name, __func__,
ei_status.dmaing, ei_status.irqlock); ei_status.dmaing, ei_status.irqlock);
return; return;
} }
@@ -215,7 +215,7 @@ static void ax_block_input(struct net_device *dev, int count,
dev_err(&ax->dev->dev, dev_err(&ax->dev->dev,
"%s: DMAing conflict in %s " "%s: DMAing conflict in %s "
"[DMAstat:%d][irqlock:%d].\n", "[DMAstat:%d][irqlock:%d].\n",
dev->name, __FUNCTION__, dev->name, __func__,
ei_status.dmaing, ei_status.irqlock); ei_status.dmaing, ei_status.irqlock);
return; return;
} }
@@ -260,7 +260,7 @@ static void ax_block_output(struct net_device *dev, int count,
if (ei_status.dmaing) { if (ei_status.dmaing) {
dev_err(&ax->dev->dev, "%s: DMAing conflict in %s." dev_err(&ax->dev->dev, "%s: DMAing conflict in %s."
"[DMAstat:%d][irqlock:%d]\n", "[DMAstat:%d][irqlock:%d]\n",
dev->name, __FUNCTION__, dev->name, __func__,
ei_status.dmaing, ei_status.irqlock); ei_status.dmaing, ei_status.irqlock);
return; return;
} }
@@ -396,7 +396,7 @@ ax_phy_issueaddr(struct net_device *dev, int phy_addr, int reg, int opc)
{ {
if (phy_debug) if (phy_debug)
pr_debug("%s: dev %p, %04x, %04x, %d\n", pr_debug("%s: dev %p, %04x, %04x, %d\n",
__FUNCTION__, dev, phy_addr, reg, opc); __func__, dev, phy_addr, reg, opc);
ax_mii_ei_outbits(dev, 0x3f, 6); /* pre-amble */ ax_mii_ei_outbits(dev, 0x3f, 6); /* pre-amble */
ax_mii_ei_outbits(dev, 1, 2); /* frame-start */ ax_mii_ei_outbits(dev, 1, 2); /* frame-start */
@@ -422,7 +422,7 @@ ax_phy_read(struct net_device *dev, int phy_addr, int reg)
spin_unlock_irqrestore(&ei_local->page_lock, flags); spin_unlock_irqrestore(&ei_local->page_lock, flags);
if (phy_debug) if (phy_debug)
pr_debug("%s: %04x.%04x => read %04x\n", __FUNCTION__, pr_debug("%s: %04x.%04x => read %04x\n", __func__,
phy_addr, reg, result); phy_addr, reg, result);
return result; return result;
@@ -436,7 +436,7 @@ ax_phy_write(struct net_device *dev, int phy_addr, int reg, int value)
unsigned long flags; unsigned long flags;
dev_dbg(&ax->dev->dev, "%s: %p, %04x, %04x %04x\n", dev_dbg(&ax->dev->dev, "%s: %p, %04x, %04x %04x\n",
__FUNCTION__, dev, phy_addr, reg, value); __func__, dev, phy_addr, reg, value);
spin_lock_irqsave(&ei->page_lock, flags); spin_lock_irqsave(&ei->page_lock, flags);


@@ -398,7 +398,7 @@ static int mii_probe(struct net_device *dev)
/* search for connect PHY device */ /* search for connect PHY device */
for (i = 0; i < PHY_MAX_ADDR; i++) { for (i = 0; i < PHY_MAX_ADDR; i++) {
struct phy_device *const tmp_phydev = lp->mii_bus.phy_map[i]; struct phy_device *const tmp_phydev = lp->mii_bus->phy_map[i];
if (!tmp_phydev) if (!tmp_phydev)
continue; /* no PHY here... */ continue; /* no PHY here... */
@@ -811,7 +811,7 @@ static void bfin_mac_enable(void)
{ {
u32 opmode; u32 opmode;
pr_debug("%s: %s\n", DRV_NAME, __FUNCTION__); pr_debug("%s: %s\n", DRV_NAME, __func__);
/* Set RX DMA */ /* Set RX DMA */
bfin_write_DMA1_NEXT_DESC_PTR(&(rx_list_head->desc_a)); bfin_write_DMA1_NEXT_DESC_PTR(&(rx_list_head->desc_a));
@@ -847,7 +847,7 @@ static void bfin_mac_enable(void)
/* Our watchdog timed out. Called by the networking layer */ /* Our watchdog timed out. Called by the networking layer */
static void bfin_mac_timeout(struct net_device *dev) static void bfin_mac_timeout(struct net_device *dev)
{ {
pr_debug("%s: %s\n", dev->name, __FUNCTION__); pr_debug("%s: %s\n", dev->name, __func__);
bfin_mac_disable(); bfin_mac_disable();
@@ -949,7 +949,7 @@ static int bfin_mac_open(struct net_device *dev)
{ {
struct bfin_mac_local *lp = netdev_priv(dev); struct bfin_mac_local *lp = netdev_priv(dev);
int retval; int retval;
pr_debug("%s: %s\n", dev->name, __FUNCTION__); pr_debug("%s: %s\n", dev->name, __func__);
/* /*
* Check that the address is valid. If its not, refuse * Check that the address is valid. If its not, refuse
@@ -989,7 +989,7 @@ static int bfin_mac_open(struct net_device *dev)
static int bfin_mac_close(struct net_device *dev) static int bfin_mac_close(struct net_device *dev)
{ {
struct bfin_mac_local *lp = netdev_priv(dev); struct bfin_mac_local *lp = netdev_priv(dev);
pr_debug("%s: %s\n", dev->name, __FUNCTION__); pr_debug("%s: %s\n", dev->name, __func__);
netif_stop_queue(dev); netif_stop_queue(dev);
netif_carrier_off(dev); netif_carrier_off(dev);
@@ -1058,17 +1058,21 @@ static int __devinit bfin_mac_probe(struct platform_device *pdev)
setup_mac_addr(ndev->dev_addr); setup_mac_addr(ndev->dev_addr);
/* MDIO bus initial */ /* MDIO bus initial */
lp->mii_bus.priv = ndev; lp->mii_bus = mdiobus_alloc();
lp->mii_bus.read = mdiobus_read; if (lp->mii_bus == NULL)
lp->mii_bus.write = mdiobus_write; goto out_err_mdiobus_alloc;
lp->mii_bus.reset = mdiobus_reset;
lp->mii_bus.name = "bfin_mac_mdio";
snprintf(lp->mii_bus.id, MII_BUS_ID_SIZE, "0");
lp->mii_bus.irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for (i = 0; i < PHY_MAX_ADDR; ++i)
lp->mii_bus.irq[i] = PHY_POLL;
rc = mdiobus_register(&lp->mii_bus); lp->mii_bus->priv = ndev;
lp->mii_bus->read = mdiobus_read;
lp->mii_bus->write = mdiobus_write;
lp->mii_bus->reset = mdiobus_reset;
lp->mii_bus->name = "bfin_mac_mdio";
snprintf(lp->mii_bus->id, MII_BUS_ID_SIZE, "0");
lp->mii_bus->irq = kmalloc(sizeof(int)*PHY_MAX_ADDR, GFP_KERNEL);
for (i = 0; i < PHY_MAX_ADDR; ++i)
lp->mii_bus->irq[i] = PHY_POLL;
rc = mdiobus_register(lp->mii_bus);
if (rc) { if (rc) {
dev_err(&pdev->dev, "Cannot register MDIO bus!\n"); dev_err(&pdev->dev, "Cannot register MDIO bus!\n");
goto out_err_mdiobus_register; goto out_err_mdiobus_register;
@@ -1121,8 +1125,10 @@ out_err_reg_ndev:
free_irq(IRQ_MAC_RX, ndev); free_irq(IRQ_MAC_RX, ndev);
out_err_request_irq: out_err_request_irq:
out_err_mii_probe: out_err_mii_probe:
mdiobus_unregister(&lp->mii_bus); mdiobus_unregister(lp->mii_bus);
out_err_mdiobus_register: out_err_mdiobus_register:
mdiobus_free(lp->mii_bus);
out_err_mdiobus_alloc:
peripheral_free_list(pin_req); peripheral_free_list(pin_req);
out_err_setup_pin_mux: out_err_setup_pin_mux:
out_err_probe_mac: out_err_probe_mac:
@@ -1139,7 +1145,8 @@ static int __devexit bfin_mac_remove(struct platform_device *pdev)
platform_set_drvdata(pdev, NULL); platform_set_drvdata(pdev, NULL);
mdiobus_unregister(&lp->mii_bus); mdiobus_unregister(lp->mii_bus);
mdiobus_free(lp->mii_bus);
unregister_netdev(ndev); unregister_netdev(ndev);


@@ -66,7 +66,7 @@ struct bfin_mac_local {
int old_duplex; int old_duplex;
struct phy_device *phydev; struct phy_device *phydev;
struct mii_bus mii_bus; struct mii_bus *mii_bus;
}; };
extern void bfin_get_ether_addr(char *addr); extern void bfin_get_ether_addr(char *addr);


@@ -57,8 +57,8 @@
#define DRV_MODULE_NAME "bnx2" #define DRV_MODULE_NAME "bnx2"
#define PFX DRV_MODULE_NAME ": " #define PFX DRV_MODULE_NAME ": "
#define DRV_MODULE_VERSION "1.8.0" #define DRV_MODULE_VERSION "1.8.1"
#define DRV_MODULE_RELDATE "Aug 14, 2008" #define DRV_MODULE_RELDATE "Oct 7, 2008"
#define RUN_AT(x) (jiffies + (x)) #define RUN_AT(x) (jiffies + (x))
@@ -69,7 +69,7 @@ static char version[] __devinitdata =
"Broadcom NetXtreme II Gigabit Ethernet Driver " DRV_MODULE_NAME " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n"; "Broadcom NetXtreme II Gigabit Ethernet Driver " DRV_MODULE_NAME " v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
MODULE_AUTHOR("Michael Chan <mchan@broadcom.com>"); MODULE_AUTHOR("Michael Chan <mchan@broadcom.com>");
MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709 Driver"); MODULE_DESCRIPTION("Broadcom NetXtreme II BCM5706/5708/5709/5716 Driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_VERSION(DRV_MODULE_VERSION); MODULE_VERSION(DRV_MODULE_VERSION);
@@ -1127,7 +1127,7 @@ bnx2_init_all_rx_contexts(struct bnx2 *bp)
} }
} }
static int static void
bnx2_set_mac_link(struct bnx2 *bp) bnx2_set_mac_link(struct bnx2 *bp)
{ {
u32 val; u32 val;
@@ -1193,8 +1193,6 @@ bnx2_set_mac_link(struct bnx2 *bp)
if (CHIP_NUM(bp) == CHIP_NUM_5709) if (CHIP_NUM(bp) == CHIP_NUM_5709)
bnx2_init_all_rx_contexts(bp); bnx2_init_all_rx_contexts(bp);
return 0;
} }
static void static void
@@ -2478,6 +2476,11 @@ bnx2_alloc_rx_page(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
return -ENOMEM; return -ENOMEM;
mapping = pci_map_page(bp->pdev, page, 0, PAGE_SIZE, mapping = pci_map_page(bp->pdev, page, 0, PAGE_SIZE,
PCI_DMA_FROMDEVICE); PCI_DMA_FROMDEVICE);
if (pci_dma_mapping_error(bp->pdev, mapping)) {
__free_page(page);
return -EIO;
}
rx_pg->page = page; rx_pg->page = page;
pci_unmap_addr_set(rx_pg, mapping, mapping); pci_unmap_addr_set(rx_pg, mapping, mapping);
rxbd->rx_bd_haddr_hi = (u64) mapping >> 32; rxbd->rx_bd_haddr_hi = (u64) mapping >> 32;
@@ -2520,6 +2523,10 @@ bnx2_alloc_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, u16 index)
mapping = pci_map_single(bp->pdev, skb->data, bp->rx_buf_use_size, mapping = pci_map_single(bp->pdev, skb->data, bp->rx_buf_use_size,
PCI_DMA_FROMDEVICE); PCI_DMA_FROMDEVICE);
if (pci_dma_mapping_error(bp->pdev, mapping)) {
dev_kfree_skb(skb);
return -EIO;
}
rx_buf->skb = skb; rx_buf->skb = skb;
pci_unmap_addr_set(rx_buf, mapping, mapping); pci_unmap_addr_set(rx_buf, mapping, mapping);
@@ -2594,7 +2601,7 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
sw_cons = txr->tx_cons; sw_cons = txr->tx_cons;
while (sw_cons != hw_cons) { while (sw_cons != hw_cons) {
struct sw_bd *tx_buf; struct sw_tx_bd *tx_buf;
struct sk_buff *skb; struct sk_buff *skb;
int i, last; int i, last;
@@ -2619,21 +2626,13 @@ bnx2_tx_int(struct bnx2 *bp, struct bnx2_napi *bnapi, int budget)
} }
} }
pci_unmap_single(bp->pdev, pci_unmap_addr(tx_buf, mapping), skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
skb_headlen(skb), PCI_DMA_TODEVICE);
tx_buf->skb = NULL; tx_buf->skb = NULL;
last = skb_shinfo(skb)->nr_frags; last = skb_shinfo(skb)->nr_frags;
for (i = 0; i < last; i++) { for (i = 0; i < last; i++) {
sw_cons = NEXT_TX_BD(sw_cons); sw_cons = NEXT_TX_BD(sw_cons);
pci_unmap_page(bp->pdev,
pci_unmap_addr(
&txr->tx_buf_ring[TX_RING_IDX(sw_cons)],
mapping),
skb_shinfo(skb)->frags[i].size,
PCI_DMA_TODEVICE);
} }
sw_cons = NEXT_TX_BD(sw_cons); sw_cons = NEXT_TX_BD(sw_cons);
@@ -2674,11 +2673,31 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
{ {
struct sw_pg *cons_rx_pg, *prod_rx_pg; struct sw_pg *cons_rx_pg, *prod_rx_pg;
struct rx_bd *cons_bd, *prod_bd; struct rx_bd *cons_bd, *prod_bd;
dma_addr_t mapping;
int i; int i;
u16 hw_prod = rxr->rx_pg_prod, prod; u16 hw_prod, prod;
u16 cons = rxr->rx_pg_cons; u16 cons = rxr->rx_pg_cons;
cons_rx_pg = &rxr->rx_pg_ring[cons];
/* The caller was unable to allocate a new page to replace the
* last one in the frags array, so we need to recycle that page
* and then free the skb.
*/
if (skb) {
struct page *page;
struct skb_shared_info *shinfo;
shinfo = skb_shinfo(skb);
shinfo->nr_frags--;
page = shinfo->frags[shinfo->nr_frags].page;
shinfo->frags[shinfo->nr_frags].page = NULL;
cons_rx_pg->page = page;
dev_kfree_skb(skb);
}
hw_prod = rxr->rx_pg_prod;
for (i = 0; i < count; i++) { for (i = 0; i < count; i++) {
prod = RX_PG_RING_IDX(hw_prod); prod = RX_PG_RING_IDX(hw_prod);
@@ -2687,20 +2706,6 @@ bnx2_reuse_rx_skb_pages(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr,
cons_bd = &rxr->rx_pg_desc_ring[RX_RING(cons)][RX_IDX(cons)]; cons_bd = &rxr->rx_pg_desc_ring[RX_RING(cons)][RX_IDX(cons)];
prod_bd = &rxr->rx_pg_desc_ring[RX_RING(prod)][RX_IDX(prod)]; prod_bd = &rxr->rx_pg_desc_ring[RX_RING(prod)][RX_IDX(prod)];
if (i == 0 && skb) {
struct page *page;
struct skb_shared_info *shinfo;
shinfo = skb_shinfo(skb);
shinfo->nr_frags--;
page = shinfo->frags[shinfo->nr_frags].page;
shinfo->frags[shinfo->nr_frags].page = NULL;
mapping = pci_map_page(bp->pdev, page, 0, PAGE_SIZE,
PCI_DMA_FROMDEVICE);
cons_rx_pg->page = page;
pci_unmap_addr_set(cons_rx_pg, mapping, mapping);
dev_kfree_skb(skb);
}
if (prod != cons) { if (prod != cons) {
prod_rx_pg->page = cons_rx_pg->page; prod_rx_pg->page = cons_rx_pg->page;
cons_rx_pg->page = NULL; cons_rx_pg->page = NULL;
@@ -2786,6 +2791,8 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
skb_put(skb, hdr_len); skb_put(skb, hdr_len);
for (i = 0; i < pages; i++) { for (i = 0; i < pages; i++) {
dma_addr_t mapping_old;
frag_len = min(frag_size, (unsigned int) PAGE_SIZE); frag_len = min(frag_size, (unsigned int) PAGE_SIZE);
if (unlikely(frag_len <= 4)) { if (unlikely(frag_len <= 4)) {
unsigned int tail = 4 - frag_len; unsigned int tail = 4 - frag_len;
@@ -2808,9 +2815,10 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
} }
rx_pg = &rxr->rx_pg_ring[pg_cons]; rx_pg = &rxr->rx_pg_ring[pg_cons];
pci_unmap_page(bp->pdev, pci_unmap_addr(rx_pg, mapping), /* Don't unmap yet. If we're unable to allocate a new
PAGE_SIZE, PCI_DMA_FROMDEVICE); * page, we need to recycle the page and the DMA addr.
*/
mapping_old = pci_unmap_addr(rx_pg, mapping);
if (i == pages - 1) if (i == pages - 1)
frag_len -= 4; frag_len -= 4;
@@ -2827,6 +2835,9 @@ bnx2_rx_skb(struct bnx2 *bp, struct bnx2_rx_ring_info *rxr, struct sk_buff *skb,
return err; return err;
} }
pci_unmap_page(bp->pdev, mapping_old,
PAGE_SIZE, PCI_DMA_FROMDEVICE);
frag_size -= frag_len; frag_size -= frag_len;
skb->data_len += frag_len; skb->data_len += frag_len;
skb->truesize += frag_len; skb->truesize += frag_len;
@@ -3250,6 +3261,9 @@ bnx2_set_rx_mode(struct net_device *dev)
struct dev_addr_list *uc_ptr; struct dev_addr_list *uc_ptr;
int i; int i;
if (!netif_running(dev))
return;
spin_lock_bh(&bp->phy_lock); spin_lock_bh(&bp->phy_lock);
rx_mode = bp->rx_mode & ~(BNX2_EMAC_RX_MODE_PROMISCUOUS | rx_mode = bp->rx_mode & ~(BNX2_EMAC_RX_MODE_PROMISCUOUS |
@@ -4970,31 +4984,20 @@ bnx2_free_tx_skbs(struct bnx2 *bp)
continue; continue;
for (j = 0; j < TX_DESC_CNT; ) { for (j = 0; j < TX_DESC_CNT; ) {
struct sw_bd *tx_buf = &txr->tx_buf_ring[j]; struct sw_tx_bd *tx_buf = &txr->tx_buf_ring[j];
struct sk_buff *skb = tx_buf->skb; struct sk_buff *skb = tx_buf->skb;
int k, last;
if (skb == NULL) { if (skb == NULL) {
j++; j++;
continue; continue;
} }
pci_unmap_single(bp->pdev, skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
pci_unmap_addr(tx_buf, mapping),
skb_headlen(skb), PCI_DMA_TODEVICE);
tx_buf->skb = NULL; tx_buf->skb = NULL;
last = skb_shinfo(skb)->nr_frags; j += skb_shinfo(skb)->nr_frags + 1;
for (k = 0; k < last; k++) {
tx_buf = &txr->tx_buf_ring[j + k + 1];
pci_unmap_page(bp->pdev,
pci_unmap_addr(tx_buf, mapping),
skb_shinfo(skb)->frags[j].size,
PCI_DMA_TODEVICE);
}
dev_kfree_skb(skb); dev_kfree_skb(skb);
j += k + 1;
} }
} }
} }
@@ -5074,6 +5077,21 @@ bnx2_init_nic(struct bnx2 *bp, int reset_phy)
return 0; return 0;
} }
static int
bnx2_shutdown_chip(struct bnx2 *bp)
{
u32 reset_code;
if (bp->flags & BNX2_FLAG_NO_WOL)
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
return bnx2_reset_chip(bp, reset_code);
}
static int static int
bnx2_test_registers(struct bnx2 *bp) bnx2_test_registers(struct bnx2 *bp)
{ {
@@ -5357,8 +5375,11 @@ bnx2_run_loopback(struct bnx2 *bp, int loopback_mode)
for (i = 14; i < pkt_size; i++) for (i = 14; i < pkt_size; i++)
packet[i] = (unsigned char) (i & 0xff); packet[i] = (unsigned char) (i & 0xff);
map = pci_map_single(bp->pdev, skb->data, pkt_size, if (skb_dma_map(&bp->pdev->dev, skb, DMA_TO_DEVICE)) {
PCI_DMA_TODEVICE); dev_kfree_skb(skb);
return -EIO;
}
map = skb_shinfo(skb)->dma_maps[0];
REG_WR(bp, BNX2_HC_COMMAND, REG_WR(bp, BNX2_HC_COMMAND,
bp->hc_cmd | BNX2_HC_COMMAND_COAL_NOW_WO_INT); bp->hc_cmd | BNX2_HC_COMMAND_COAL_NOW_WO_INT);
@@ -5393,7 +5414,7 @@ bnx2_run_loopback(struct bnx2 *bp, int loopback_mode)
udelay(5); udelay(5);
pci_unmap_single(bp->pdev, map, pkt_size, PCI_DMA_TODEVICE); skb_dma_unmap(&bp->pdev->dev, skb, DMA_TO_DEVICE);
dev_kfree_skb(skb); dev_kfree_skb(skb);
if (bnx2_get_hw_tx_cons(tx_napi) != txr->tx_prod) if (bnx2_get_hw_tx_cons(tx_napi) != txr->tx_prod)
@@ -5508,6 +5529,9 @@ bnx2_test_link(struct bnx2 *bp)
{ {
u32 bmsr; u32 bmsr;
if (!netif_running(bp->dev))
return -ENODEV;
if (bp->phy_flags & BNX2_PHY_FLAG_REMOTE_PHY_CAP) { if (bp->phy_flags & BNX2_PHY_FLAG_REMOTE_PHY_CAP) {
if (bp->link_up) if (bp->link_up)
return 0; return 0;
@@ -5600,7 +5624,7 @@ bnx2_5706_serdes_timer(struct bnx2 *bp)
} else if ((bp->link_up == 0) && (bp->autoneg & AUTONEG_SPEED)) { } else if ((bp->link_up == 0) && (bp->autoneg & AUTONEG_SPEED)) {
u32 bmcr; u32 bmcr;
bp->current_interval = bp->timer_interval; bp->current_interval = BNX2_TIMER_INTERVAL;
bnx2_read_phy(bp, bp->mii_bmcr, &bmcr); bnx2_read_phy(bp, bp->mii_bmcr, &bmcr);
@@ -5629,7 +5653,7 @@ bnx2_5706_serdes_timer(struct bnx2 *bp)
bp->phy_flags &= ~BNX2_PHY_FLAG_PARALLEL_DETECT; bp->phy_flags &= ~BNX2_PHY_FLAG_PARALLEL_DETECT;
} }
} else } else
bp->current_interval = bp->timer_interval; bp->current_interval = BNX2_TIMER_INTERVAL;
if (check_link) { if (check_link) {
u32 val; u32 val;
@@ -5674,11 +5698,11 @@ bnx2_5708_serdes_timer(struct bnx2 *bp)
} else { } else {
bnx2_disable_forced_2g5(bp); bnx2_disable_forced_2g5(bp);
bp->serdes_an_pending = 2; bp->serdes_an_pending = 2;
bp->current_interval = bp->timer_interval; bp->current_interval = BNX2_TIMER_INTERVAL;
} }
} else } else
bp->current_interval = bp->timer_interval; bp->current_interval = BNX2_TIMER_INTERVAL;
spin_unlock(&bp->phy_lock); spin_unlock(&bp->phy_lock);
} }
@@ -5951,13 +5975,14 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
dma_addr_t mapping; dma_addr_t mapping;
struct tx_bd *txbd; struct tx_bd *txbd;
struct sw_bd *tx_buf; struct sw_tx_bd *tx_buf;
u32 len, vlan_tag_flags, last_frag, mss; u32 len, vlan_tag_flags, last_frag, mss;
u16 prod, ring_prod; u16 prod, ring_prod;
int i; int i;
struct bnx2_napi *bnapi; struct bnx2_napi *bnapi;
struct bnx2_tx_ring_info *txr; struct bnx2_tx_ring_info *txr;
struct netdev_queue *txq; struct netdev_queue *txq;
struct skb_shared_info *sp;
/* Determine which tx ring we will be placed on */ /* Determine which tx ring we will be placed on */
i = skb_get_queue_mapping(skb); i = skb_get_queue_mapping(skb);
@@ -5989,7 +6014,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
} }
#endif #endif
if ((mss = skb_shinfo(skb)->gso_size)) { if ((mss = skb_shinfo(skb)->gso_size)) {
u32 tcp_opt_len, ip_tcp_len; u32 tcp_opt_len;
struct iphdr *iph; struct iphdr *iph;
vlan_tag_flags |= TX_BD_FLAGS_SW_LSO; vlan_tag_flags |= TX_BD_FLAGS_SW_LSO;
@@ -6013,21 +6038,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
mss |= (tcp_off & 0xc) << TX_BD_TCP6_OFF2_SHL; mss |= (tcp_off & 0xc) << TX_BD_TCP6_OFF2_SHL;
} }
} else { } else {
if (skb_header_cloned(skb) &&
pskb_expand_head(skb, 0, 0, GFP_ATOMIC)) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
ip_tcp_len = ip_hdrlen(skb) + sizeof(struct tcphdr);
iph = ip_hdr(skb); iph = ip_hdr(skb);
iph->check = 0;
iph->tot_len = htons(mss + ip_tcp_len + tcp_opt_len);
tcp_hdr(skb)->check = ~csum_tcpudp_magic(iph->saddr,
iph->daddr, 0,
IPPROTO_TCP,
0);
if (tcp_opt_len || (iph->ihl > 5)) { if (tcp_opt_len || (iph->ihl > 5)) {
vlan_tag_flags |= ((iph->ihl - 5) + vlan_tag_flags |= ((iph->ihl - 5) +
(tcp_opt_len >> 2)) << 8; (tcp_opt_len >> 2)) << 8;
@@ -6036,11 +6047,16 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
} else } else
mss = 0; mss = 0;
mapping = pci_map_single(bp->pdev, skb->data, len, PCI_DMA_TODEVICE); if (skb_dma_map(&bp->pdev->dev, skb, DMA_TO_DEVICE)) {
dev_kfree_skb(skb);
return NETDEV_TX_OK;
}
sp = skb_shinfo(skb);
mapping = sp->dma_maps[0];
tx_buf = &txr->tx_buf_ring[ring_prod]; tx_buf = &txr->tx_buf_ring[ring_prod];
tx_buf->skb = skb; tx_buf->skb = skb;
pci_unmap_addr_set(tx_buf, mapping, mapping);
txbd = &txr->tx_desc_ring[ring_prod]; txbd = &txr->tx_desc_ring[ring_prod];
@@ -6059,10 +6075,7 @@ bnx2_start_xmit(struct sk_buff *skb, struct net_device *dev)
txbd = &txr->tx_desc_ring[ring_prod]; txbd = &txr->tx_desc_ring[ring_prod];
len = frag->size; len = frag->size;
mapping = pci_map_page(bp->pdev, frag->page, frag->page_offset, mapping = sp->dma_maps[i + 1];
len, PCI_DMA_TODEVICE);
pci_unmap_addr_set(&txr->tx_buf_ring[ring_prod],
mapping, mapping);
txbd->tx_bd_haddr_hi = (u64) mapping >> 32; txbd->tx_bd_haddr_hi = (u64) mapping >> 32;
txbd->tx_bd_haddr_lo = (u64) mapping & 0xffffffff; txbd->tx_bd_haddr_lo = (u64) mapping & 0xffffffff;
@@ -6097,20 +6110,13 @@ static int
bnx2_close(struct net_device *dev) bnx2_close(struct net_device *dev)
{ {
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
u32 reset_code;
cancel_work_sync(&bp->reset_task); cancel_work_sync(&bp->reset_task);
bnx2_disable_int_sync(bp); bnx2_disable_int_sync(bp);
bnx2_napi_disable(bp); bnx2_napi_disable(bp);
del_timer_sync(&bp->timer); del_timer_sync(&bp->timer);
if (bp->flags & BNX2_FLAG_NO_WOL) bnx2_shutdown_chip(bp);
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
bnx2_reset_chip(bp, reset_code);
bnx2_free_irq(bp); bnx2_free_irq(bp);
bnx2_free_skbs(bp); bnx2_free_skbs(bp);
bnx2_free_mem(bp); bnx2_free_mem(bp);
@@ -6479,6 +6485,9 @@ bnx2_nway_reset(struct net_device *dev)
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
u32 bmcr; u32 bmcr;
if (!netif_running(dev))
return -EAGAIN;
if (!(bp->autoneg & AUTONEG_SPEED)) { if (!(bp->autoneg & AUTONEG_SPEED)) {
return -EINVAL; return -EINVAL;
} }
@@ -6534,6 +6543,9 @@ bnx2_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
int rc; int rc;
if (!netif_running(dev))
return -EAGAIN;
/* parameters already validated in ethtool_get_eeprom */ /* parameters already validated in ethtool_get_eeprom */
rc = bnx2_nvram_read(bp, eeprom->offset, eebuf, eeprom->len); rc = bnx2_nvram_read(bp, eeprom->offset, eebuf, eeprom->len);
@@ -6548,6 +6560,9 @@ bnx2_set_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom,
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
int rc; int rc;
if (!netif_running(dev))
return -EAGAIN;
/* parameters already validated in ethtool_set_eeprom */ /* parameters already validated in ethtool_set_eeprom */
rc = bnx2_nvram_write(bp, eeprom->offset, eebuf, eeprom->len); rc = bnx2_nvram_write(bp, eeprom->offset, eebuf, eeprom->len);
@@ -6712,11 +6727,11 @@ bnx2_set_pauseparam(struct net_device *dev, struct ethtool_pauseparam *epause)
bp->autoneg &= ~AUTONEG_FLOW_CTRL; bp->autoneg &= ~AUTONEG_FLOW_CTRL;
} }
spin_lock_bh(&bp->phy_lock); if (netif_running(dev)) {
spin_lock_bh(&bp->phy_lock);
bnx2_setup_phy(bp, bp->phy_port); bnx2_setup_phy(bp, bp->phy_port);
spin_unlock_bh(&bp->phy_lock);
spin_unlock_bh(&bp->phy_lock); }
return 0; return 0;
} }
@@ -6907,6 +6922,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
{ {
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
bnx2_set_power_state(bp, PCI_D0);
memset(buf, 0, sizeof(u64) * BNX2_NUM_TESTS); memset(buf, 0, sizeof(u64) * BNX2_NUM_TESTS);
if (etest->flags & ETH_TEST_FL_OFFLINE) { if (etest->flags & ETH_TEST_FL_OFFLINE) {
int i; int i;
@@ -6926,9 +6943,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
if ((buf[2] = bnx2_test_loopback(bp)) != 0) if ((buf[2] = bnx2_test_loopback(bp)) != 0)
etest->flags |= ETH_TEST_FL_FAILED; etest->flags |= ETH_TEST_FL_FAILED;
if (!netif_running(bp->dev)) { if (!netif_running(bp->dev))
bnx2_reset_chip(bp, BNX2_DRV_MSG_CODE_RESET); bnx2_shutdown_chip(bp);
}
else { else {
bnx2_init_nic(bp, 1); bnx2_init_nic(bp, 1);
bnx2_netif_start(bp); bnx2_netif_start(bp);
@@ -6956,6 +6972,8 @@ bnx2_self_test(struct net_device *dev, struct ethtool_test *etest, u64 *buf)
etest->flags |= ETH_TEST_FL_FAILED; etest->flags |= ETH_TEST_FL_FAILED;
} }
if (!netif_running(bp->dev))
bnx2_set_power_state(bp, PCI_D3hot);
} }
static void static void
@@ -7021,6 +7039,8 @@ bnx2_phys_id(struct net_device *dev, u32 data)
int i; int i;
u32 save; u32 save;
bnx2_set_power_state(bp, PCI_D0);
if (data == 0) if (data == 0)
data = 2; data = 2;
@@ -7045,6 +7065,10 @@ bnx2_phys_id(struct net_device *dev, u32 data)
} }
REG_WR(bp, BNX2_EMAC_LED, 0); REG_WR(bp, BNX2_EMAC_LED, 0);
REG_WR(bp, BNX2_MISC_CFG, save); REG_WR(bp, BNX2_MISC_CFG, save);
if (!netif_running(dev))
bnx2_set_power_state(bp, PCI_D3hot);
return 0; return 0;
} }
@@ -7516,8 +7540,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
bp->stats_ticks = USEC_PER_SEC & BNX2_HC_STATS_TICKS_HC_STAT_TICKS; bp->stats_ticks = USEC_PER_SEC & BNX2_HC_STATS_TICKS_HC_STAT_TICKS;
bp->timer_interval = HZ; bp->current_interval = BNX2_TIMER_INTERVAL;
bp->current_interval = HZ;
bp->phy_addr = 1; bp->phy_addr = 1;
@@ -7607,7 +7630,7 @@ bnx2_init_board(struct pci_dev *pdev, struct net_device *dev)
bp->req_flow_ctrl = FLOW_CTRL_RX | FLOW_CTRL_TX; bp->req_flow_ctrl = FLOW_CTRL_RX | FLOW_CTRL_TX;
init_timer(&bp->timer); init_timer(&bp->timer);
bp->timer.expires = RUN_AT(bp->timer_interval); bp->timer.expires = RUN_AT(BNX2_TIMER_INTERVAL);
bp->timer.data = (unsigned long) bp; bp->timer.data = (unsigned long) bp;
bp->timer.function = bnx2_timer; bp->timer.function = bnx2_timer;
@@ -7720,7 +7743,6 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
memcpy(dev->dev_addr, bp->mac_addr, 6); memcpy(dev->dev_addr, bp->mac_addr, 6);
memcpy(dev->perm_addr, bp->mac_addr, 6); memcpy(dev->perm_addr, bp->mac_addr, 6);
bp->name = board_info[ent->driver_data].name;
dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG; dev->features |= NETIF_F_IP_CSUM | NETIF_F_SG;
if (CHIP_NUM(bp) == CHIP_NUM_5709) if (CHIP_NUM(bp) == CHIP_NUM_5709)
@@ -7747,7 +7769,7 @@ bnx2_init_one(struct pci_dev *pdev, const struct pci_device_id *ent)
printk(KERN_INFO "%s: %s (%c%d) %s found at mem %lx, " printk(KERN_INFO "%s: %s (%c%d) %s found at mem %lx, "
"IRQ %d, node addr %s\n", "IRQ %d, node addr %s\n",
dev->name, dev->name,
bp->name, board_info[ent->driver_data].name,
((CHIP_ID(bp) & 0xf000) >> 12) + 'A', ((CHIP_ID(bp) & 0xf000) >> 12) + 'A',
((CHIP_ID(bp) & 0x0ff0) >> 4), ((CHIP_ID(bp) & 0x0ff0) >> 4),
bnx2_bus_string(bp, str), bnx2_bus_string(bp, str),
@@ -7781,7 +7803,6 @@ bnx2_suspend(struct pci_dev *pdev, pm_message_t state)
{ {
struct net_device *dev = pci_get_drvdata(pdev); struct net_device *dev = pci_get_drvdata(pdev);
struct bnx2 *bp = netdev_priv(dev); struct bnx2 *bp = netdev_priv(dev);
u32 reset_code;
/* PCI register 4 needs to be saved whether netif_running() or not. /* PCI register 4 needs to be saved whether netif_running() or not.
* MSI address and data need to be saved if using MSI and * MSI address and data need to be saved if using MSI and
@@ -7795,13 +7816,7 @@ bnx2_suspend(struct pci_dev *pdev, pm_message_t state)
bnx2_netif_stop(bp); bnx2_netif_stop(bp);
netif_device_detach(dev); netif_device_detach(dev);
del_timer_sync(&bp->timer); del_timer_sync(&bp->timer);
if (bp->flags & BNX2_FLAG_NO_WOL) bnx2_shutdown_chip(bp);
reset_code = BNX2_DRV_MSG_CODE_UNLOAD_LNK_DN;
else if (bp->wol)
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_WOL;
else
reset_code = BNX2_DRV_MSG_CODE_SUSPEND_NO_WOL;
bnx2_reset_chip(bp, reset_code);
bnx2_free_skbs(bp); bnx2_free_skbs(bp);
bnx2_set_power_state(bp, pci_choose_state(pdev, state)); bnx2_set_power_state(bp, pci_choose_state(pdev, state));
return 0; return 0;


@@ -6526,10 +6526,14 @@ struct sw_pg {
DECLARE_PCI_UNMAP_ADDR(mapping) DECLARE_PCI_UNMAP_ADDR(mapping)
}; };
struct sw_tx_bd {
struct sk_buff *skb;
};
#define SW_RXBD_RING_SIZE (sizeof(struct sw_bd) * RX_DESC_CNT) #define SW_RXBD_RING_SIZE (sizeof(struct sw_bd) * RX_DESC_CNT)
#define SW_RXPG_RING_SIZE (sizeof(struct sw_pg) * RX_DESC_CNT) #define SW_RXPG_RING_SIZE (sizeof(struct sw_pg) * RX_DESC_CNT)
#define RXBD_RING_SIZE (sizeof(struct rx_bd) * RX_DESC_CNT) #define RXBD_RING_SIZE (sizeof(struct rx_bd) * RX_DESC_CNT)
#define SW_TXBD_RING_SIZE (sizeof(struct sw_bd) * TX_DESC_CNT) #define SW_TXBD_RING_SIZE (sizeof(struct sw_tx_bd) * TX_DESC_CNT)
#define TXBD_RING_SIZE (sizeof(struct tx_bd) * TX_DESC_CNT) #define TXBD_RING_SIZE (sizeof(struct tx_bd) * TX_DESC_CNT)
/* Buffered flash (Atmel: AT45DB011B) specific information */ /* Buffered flash (Atmel: AT45DB011B) specific information */
@@ -6609,7 +6613,7 @@ struct bnx2_tx_ring_info {
u32 tx_bseq_addr; u32 tx_bseq_addr;
struct tx_bd *tx_desc_ring; struct tx_bd *tx_desc_ring;
struct sw_bd *tx_buf_ring; struct sw_tx_bd *tx_buf_ring;
u16 tx_cons; u16 tx_cons;
u16 hw_tx_cons; u16 hw_tx_cons;
@@ -6654,6 +6658,8 @@ struct bnx2_napi {
struct bnx2_tx_ring_info tx_ring; struct bnx2_tx_ring_info tx_ring;
}; };
#define BNX2_TIMER_INTERVAL HZ
struct bnx2 { struct bnx2 {
/* Fields used in the tx and intr/napi performance paths are grouped */ /* Fields used in the tx and intr/napi performance paths are grouped */
/* together in the beginning of the structure. */ /* together in the beginning of the structure. */
@@ -6701,9 +6707,6 @@ struct bnx2 {
/* End of fields used in the performance code paths. */ /* End of fields used in the performance code paths. */
char *name;
int timer_interval;
int current_interval; int current_interval;
struct timer_list timer; struct timer_list timer;
struct work_struct reset_task; struct work_struct reset_task;

(Diff for this file not shown because of its large size.)


@@ -59,8 +59,8 @@
#include "bnx2x.h" #include "bnx2x.h"
#include "bnx2x_init.h" #include "bnx2x_init.h"
#define DRV_MODULE_VERSION "1.45.21" #define DRV_MODULE_VERSION "1.45.22"
#define DRV_MODULE_RELDATE "2008/09/03" #define DRV_MODULE_RELDATE "2008/09/09"
#define BNX2X_BC_VER 0x040200 #define BNX2X_BC_VER 0x040200
/* Time in jiffies before concluding the transmitter is hung */ /* Time in jiffies before concluding the transmitter is hung */
@@ -649,15 +649,16 @@ static void bnx2x_int_disable(struct bnx2x *bp)
BNX2X_ERR("BUG! proper val not read from IGU!\n"); BNX2X_ERR("BUG! proper val not read from IGU!\n");
} }
static void bnx2x_int_disable_sync(struct bnx2x *bp) static void bnx2x_int_disable_sync(struct bnx2x *bp, int disable_hw)
{ {
int msix = (bp->flags & USING_MSIX_FLAG) ? 1 : 0; int msix = (bp->flags & USING_MSIX_FLAG) ? 1 : 0;
int i; int i;
/* disable interrupt handling */ /* disable interrupt handling */
atomic_inc(&bp->intr_sem); atomic_inc(&bp->intr_sem);
/* prevent the HW from sending interrupts */ if (disable_hw)
bnx2x_int_disable(bp); /* prevent the HW from sending interrupts */
bnx2x_int_disable(bp);
/* make sure all ISRs are done */ /* make sure all ISRs are done */
if (msix) { if (msix) {
@@ -6086,9 +6087,9 @@ static void bnx2x_netif_start(struct bnx2x *bp)
} }
} }
static void bnx2x_netif_stop(struct bnx2x *bp) static void bnx2x_netif_stop(struct bnx2x *bp, int disable_hw)
{ {
bnx2x_int_disable_sync(bp); bnx2x_int_disable_sync(bp, disable_hw);
if (netif_running(bp->dev)) { if (netif_running(bp->dev)) {
bnx2x_napi_disable(bp); bnx2x_napi_disable(bp);
netif_tx_disable(bp->dev); netif_tx_disable(bp->dev);
@@ -6475,7 +6476,7 @@ load_rings_free:
for_each_queue(bp, i) for_each_queue(bp, i)
bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE); bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
load_int_disable: load_int_disable:
bnx2x_int_disable_sync(bp); bnx2x_int_disable_sync(bp, 1);
/* Release IRQs */ /* Release IRQs */
bnx2x_free_irq(bp); bnx2x_free_irq(bp);
load_error: load_error:
@@ -6650,7 +6651,7 @@ static int bnx2x_nic_unload(struct bnx2x *bp, int unload_mode)
bp->rx_mode = BNX2X_RX_MODE_NONE; bp->rx_mode = BNX2X_RX_MODE_NONE;
bnx2x_set_storm_rx_mode(bp); bnx2x_set_storm_rx_mode(bp);
bnx2x_netif_stop(bp); bnx2x_netif_stop(bp, 1);
if (!netif_running(bp->dev)) if (!netif_running(bp->dev))
bnx2x_napi_disable(bp); bnx2x_napi_disable(bp);
del_timer_sync(&bp->timer); del_timer_sync(&bp->timer);
@@ -8791,7 +8792,7 @@ static int bnx2x_test_loopback(struct bnx2x *bp, u8 link_up)
if (!netif_running(bp->dev)) if (!netif_running(bp->dev))
return BNX2X_LOOPBACK_FAILED; return BNX2X_LOOPBACK_FAILED;
bnx2x_netif_stop(bp); bnx2x_netif_stop(bp, 1);
if (bnx2x_run_loopback(bp, BNX2X_MAC_LOOPBACK, link_up)) { if (bnx2x_run_loopback(bp, BNX2X_MAC_LOOPBACK, link_up)) {
DP(NETIF_MSG_PROBE, "MAC loopback failed\n"); DP(NETIF_MSG_PROBE, "MAC loopback failed\n");
@@ -10346,6 +10347,74 @@ static int bnx2x_resume(struct pci_dev *pdev)
return rc; return rc;
} }
static int bnx2x_eeh_nic_unload(struct bnx2x *bp)
{
int i;
bp->state = BNX2X_STATE_ERROR;
bp->rx_mode = BNX2X_RX_MODE_NONE;
bnx2x_netif_stop(bp, 0);
del_timer_sync(&bp->timer);
bp->stats_state = STATS_STATE_DISABLED;
DP(BNX2X_MSG_STATS, "stats_state - DISABLED\n");
/* Release IRQs */
bnx2x_free_irq(bp);
if (CHIP_IS_E1(bp)) {
struct mac_configuration_cmd *config =
bnx2x_sp(bp, mcast_config);
for (i = 0; i < config->hdr.length_6b; i++)
CAM_INVALIDATE(config->config_table[i]);
}
/* Free SKBs, SGEs, TPA pool and driver internals */
bnx2x_free_skbs(bp);
for_each_queue(bp, i)
bnx2x_free_rx_sge_range(bp, bp->fp + i, NUM_RX_SGE);
bnx2x_free_mem(bp);
bp->state = BNX2X_STATE_CLOSED;
netif_carrier_off(bp->dev);
return 0;
}
static void bnx2x_eeh_recover(struct bnx2x *bp)
{
u32 val;
mutex_init(&bp->port.phy_mutex);
bp->common.shmem_base = REG_RD(bp, MISC_REG_SHARED_MEM_ADDR);
bp->link_params.shmem_base = bp->common.shmem_base;
BNX2X_DEV_INFO("shmem offset is 0x%x\n", bp->common.shmem_base);
if (!bp->common.shmem_base ||
(bp->common.shmem_base < 0xA0000) ||
(bp->common.shmem_base >= 0xC0000)) {
BNX2X_DEV_INFO("MCP not active\n");
bp->flags |= NO_MCP_FLAG;
return;
}
val = SHMEM_RD(bp, validity_map[BP_PORT(bp)]);
if ((val & (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
!= (SHR_MEM_VALIDITY_DEV_INFO | SHR_MEM_VALIDITY_MB))
BNX2X_ERR("BAD MCP validity signature\n");
if (!BP_NOMCP(bp)) {
bp->fw_seq = (SHMEM_RD(bp, func_mb[BP_FUNC(bp)].drv_mb_header)
& DRV_MSG_SEQ_NUMBER_MASK);
BNX2X_DEV_INFO("fw_seq 0x%08x\n", bp->fw_seq);
}
}
/** /**
* bnx2x_io_error_detected - called when PCI error is detected * bnx2x_io_error_detected - called when PCI error is detected
* @pdev: Pointer to PCI device * @pdev: Pointer to PCI device
@@ -10365,7 +10434,7 @@ static pci_ers_result_t bnx2x_io_error_detected(struct pci_dev *pdev,
netif_device_detach(dev); netif_device_detach(dev);
if (netif_running(dev)) if (netif_running(dev))
bnx2x_nic_unload(bp, UNLOAD_CLOSE); bnx2x_eeh_nic_unload(bp);
pci_disable_device(pdev); pci_disable_device(pdev);
@@ -10420,8 +10489,10 @@ static void bnx2x_io_resume(struct pci_dev *pdev)
rtnl_lock(); rtnl_lock();
bnx2x_eeh_recover(bp);
if (netif_running(dev)) if (netif_running(dev))
bnx2x_nic_load(bp, LOAD_OPEN); bnx2x_nic_load(bp, LOAD_NORMAL);
netif_device_attach(dev); netif_device_attach(dev);


@@ -38,6 +38,7 @@
#include <linux/in.h> #include <linux/in.h>
#include <net/ipx.h> #include <net/ipx.h>
#include <net/arp.h> #include <net/arp.h>
#include <net/ipv6.h>
#include <asm/byteorder.h> #include <asm/byteorder.h>
#include "bonding.h" #include "bonding.h"
#include "bond_alb.h" #include "bond_alb.h"
@@ -81,6 +82,7 @@
#define RLB_PROMISC_TIMEOUT 10*ALB_TIMER_TICKS_PER_SEC #define RLB_PROMISC_TIMEOUT 10*ALB_TIMER_TICKS_PER_SEC
static const u8 mac_bcast[ETH_ALEN] = {0xff,0xff,0xff,0xff,0xff,0xff}; static const u8 mac_bcast[ETH_ALEN] = {0xff,0xff,0xff,0xff,0xff,0xff};
static const u8 mac_v6_allmcast[ETH_ALEN] = {0x33,0x33,0x00,0x00,0x00,0x01};
static const int alb_delta_in_ticks = HZ / ALB_TIMER_TICKS_PER_SEC; static const int alb_delta_in_ticks = HZ / ALB_TIMER_TICKS_PER_SEC;
#pragma pack(1) #pragma pack(1)
@@ -710,7 +712,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
struct arp_pkt *arp = arp_pkt(skb); struct arp_pkt *arp = arp_pkt(skb);
struct slave *tx_slave = NULL; struct slave *tx_slave = NULL;
if (arp->op_code == __constant_htons(ARPOP_REPLY)) { if (arp->op_code == htons(ARPOP_REPLY)) {
/* the arp must be sent on the selected /* the arp must be sent on the selected
* rx channel * rx channel
*/ */
@@ -719,7 +721,7 @@ static struct slave *rlb_arp_xmit(struct sk_buff *skb, struct bonding *bond)
memcpy(arp->mac_src,tx_slave->dev->dev_addr, ETH_ALEN); memcpy(arp->mac_src,tx_slave->dev->dev_addr, ETH_ALEN);
} }
dprintk("Server sent ARP Reply packet\n"); dprintk("Server sent ARP Reply packet\n");
} else if (arp->op_code == __constant_htons(ARPOP_REQUEST)) { } else if (arp->op_code == htons(ARPOP_REQUEST)) {
/* Create an entry in the rx_hashtbl for this client as a /* Create an entry in the rx_hashtbl for this client as a
* place holder. * place holder.
* When the arp reply is received the entry will be updated * When the arp reply is received the entry will be updated
@@ -1290,6 +1292,7 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
u32 hash_index = 0; u32 hash_index = 0;
const u8 *hash_start = NULL; const u8 *hash_start = NULL;
int res = 1; int res = 1;
struct ipv6hdr *ip6hdr;
skb_reset_mac_header(skb); skb_reset_mac_header(skb);
eth_data = eth_hdr(skb); eth_data = eth_hdr(skb);
@@ -1319,11 +1322,32 @@ int bond_alb_xmit(struct sk_buff *skb, struct net_device *bond_dev)
} }
break; break;
case ETH_P_IPV6: case ETH_P_IPV6:
/* IPv6 doesn't really use broadcast mac address, but leave
* that here just in case.
*/
if (memcmp(eth_data->h_dest, mac_bcast, ETH_ALEN) == 0) { if (memcmp(eth_data->h_dest, mac_bcast, ETH_ALEN) == 0) {
do_tx_balance = 0; do_tx_balance = 0;
break; break;
} }
/* IPv6 uses all-nodes multicast as an equivalent to
* broadcasts in IPv4.
*/
if (memcmp(eth_data->h_dest, mac_v6_allmcast, ETH_ALEN) == 0) {
do_tx_balance = 0;
break;
}
/* Additionally, DAD probes should not be tx-balanced as that
* will lead to false positives for duplicate addresses and
* prevent address configuration from working.
*/
ip6hdr = ipv6_hdr(skb);
if (ipv6_addr_any(&ip6hdr->saddr)) {
do_tx_balance = 0;
break;
}
hash_start = (char *)&(ipv6_hdr(skb)->daddr); hash_start = (char *)&(ipv6_hdr(skb)->daddr);
hash_size = sizeof(ipv6_hdr(skb)->daddr); hash_size = sizeof(ipv6_hdr(skb)->daddr);
break; break;


@@ -3702,7 +3702,7 @@ static int bond_xmit_hash_policy_l23(struct sk_buff *skb,
struct ethhdr *data = (struct ethhdr *)skb->data; struct ethhdr *data = (struct ethhdr *)skb->data;
struct iphdr *iph = ip_hdr(skb); struct iphdr *iph = ip_hdr(skb);
if (skb->protocol == __constant_htons(ETH_P_IP)) { if (skb->protocol == htons(ETH_P_IP)) {
return ((ntohl(iph->saddr ^ iph->daddr) & 0xffff) ^ return ((ntohl(iph->saddr ^ iph->daddr) & 0xffff) ^
(data->h_dest[5] ^ bond_dev->dev_addr[5])) % count; (data->h_dest[5] ^ bond_dev->dev_addr[5])) % count;
} }
@ -3723,8 +3723,8 @@ static int bond_xmit_hash_policy_l34(struct sk_buff *skb,
__be16 *layer4hdr = (__be16 *)((u32 *)iph + iph->ihl); __be16 *layer4hdr = (__be16 *)((u32 *)iph + iph->ihl);
int layer4_xor = 0; int layer4_xor = 0;
if (skb->protocol == __constant_htons(ETH_P_IP)) { if (skb->protocol == htons(ETH_P_IP)) {
if (!(iph->frag_off & __constant_htons(IP_MF|IP_OFFSET)) && if (!(iph->frag_off & htons(IP_MF|IP_OFFSET)) &&
(iph->protocol == IPPROTO_TCP || (iph->protocol == IPPROTO_TCP ||
iph->protocol == IPPROTO_UDP)) { iph->protocol == IPPROTO_UDP)) {
layer4_xor = ntohs((*layer4hdr ^ *(layer4hdr + 1))); layer4_xor = ntohs((*layer4hdr ^ *(layer4hdr + 1)));
@ -4493,6 +4493,12 @@ static void bond_ethtool_get_drvinfo(struct net_device *bond_dev,
static const struct ethtool_ops bond_ethtool_ops = { static const struct ethtool_ops bond_ethtool_ops = {
.get_drvinfo = bond_ethtool_get_drvinfo, .get_drvinfo = bond_ethtool_get_drvinfo,
.get_link = ethtool_op_get_link,
.get_tx_csum = ethtool_op_get_tx_csum,
.get_sg = ethtool_op_get_sg,
.get_tso = ethtool_op_get_tso,
.get_ufo = ethtool_op_get_ufo,
.get_flags = ethtool_op_get_flags,
}; };
/* /*


@ -32,7 +32,7 @@
#ifdef BONDING_DEBUG #ifdef BONDING_DEBUG
#define dprintk(fmt, args...) \ #define dprintk(fmt, args...) \
printk(KERN_DEBUG \ printk(KERN_DEBUG \
DRV_NAME ": %s() %d: " fmt, __FUNCTION__, __LINE__ , ## args ) DRV_NAME ": %s() %d: " fmt, __func__, __LINE__ , ## args )
#else #else
#define dprintk(fmt, args...) #define dprintk(fmt, args...)
#endif /* BONDING_DEBUG */ #endif /* BONDING_DEBUG */
@ -333,5 +333,13 @@ void bond_change_active_slave(struct bonding *bond, struct slave *new_active);
void bond_register_arp(struct bonding *); void bond_register_arp(struct bonding *);
void bond_unregister_arp(struct bonding *); void bond_unregister_arp(struct bonding *);
/* exported from bond_main.c */
extern struct list_head bond_dev_list;
extern struct bond_parm_tbl bond_lacp_tbl[];
extern struct bond_parm_tbl bond_mode_tbl[];
extern struct bond_parm_tbl xmit_hashtype_tbl[];
extern struct bond_parm_tbl arp_validate_tbl[];
extern struct bond_parm_tbl fail_over_mac_tbl[];
#endif /* _LINUX_BONDING_H */ #endif /* _LINUX_BONDING_H */


@ -74,6 +74,7 @@
#include <linux/slab.h> #include <linux/slab.h>
#include <linux/delay.h> #include <linux/delay.h>
#include <linux/init.h> #include <linux/init.h>
#include <linux/vmalloc.h>
#include <linux/ioport.h> #include <linux/ioport.h>
#include <linux/pci.h> #include <linux/pci.h>
#include <linux/mm.h> #include <linux/mm.h>
@ -91,6 +92,7 @@
#include <linux/ip.h> #include <linux/ip.h>
#include <linux/tcp.h> #include <linux/tcp.h>
#include <linux/mutex.h> #include <linux/mutex.h>
#include <linux/firmware.h>
#include <net/checksum.h> #include <net/checksum.h>
@ -197,6 +199,7 @@ static int link_mode;
MODULE_AUTHOR("Adrian Sun (asun@darksunrising.com)"); MODULE_AUTHOR("Adrian Sun (asun@darksunrising.com)");
MODULE_DESCRIPTION("Sun Cassini(+) ethernet driver"); MODULE_DESCRIPTION("Sun Cassini(+) ethernet driver");
MODULE_LICENSE("GPL"); MODULE_LICENSE("GPL");
MODULE_FIRMWARE("sun/cassini.bin");
module_param(cassini_debug, int, 0); module_param(cassini_debug, int, 0);
MODULE_PARM_DESC(cassini_debug, "Cassini bitmapped debugging message enable value"); MODULE_PARM_DESC(cassini_debug, "Cassini bitmapped debugging message enable value");
module_param(link_mode, int, 0); module_param(link_mode, int, 0);
@ -812,9 +815,44 @@ static int cas_reset_mii_phy(struct cas *cp)
return (limit <= 0); return (limit <= 0);
} }
static int cas_saturn_firmware_init(struct cas *cp)
{
const struct firmware *fw;
const char fw_name[] = "sun/cassini.bin";
int err;
if (PHY_NS_DP83065 != cp->phy_id)
return 0;
err = request_firmware(&fw, fw_name, &cp->pdev->dev);
if (err) {
printk(KERN_ERR "cassini: Failed to load firmware \"%s\"\n",
fw_name);
return err;
}
if (fw->size < 2) {
printk(KERN_ERR "cassini: bogus length %zu in \"%s\"\n",
fw->size, fw_name);
err = -EINVAL;
goto out;
}
cp->fw_load_addr = fw->data[1] << 8 | fw->data[0];
cp->fw_size = fw->size - 2;
cp->fw_data = vmalloc(cp->fw_size);
if (!cp->fw_data) {
err = -ENOMEM;
printk(KERN_ERR "cassini: \"%s\" Failed %d\n", fw_name, err);
goto out;
}
memcpy(cp->fw_data, &fw->data[2], cp->fw_size);
out:
release_firmware(fw);
return err;
}
static void cas_saturn_firmware_load(struct cas *cp) static void cas_saturn_firmware_load(struct cas *cp)
{ {
cas_saturn_patch_t *patch = cas_saturn_patch; int i;
cas_phy_powerdown(cp); cas_phy_powerdown(cp);
@ -833,11 +871,9 @@ static void cas_saturn_firmware_load(struct cas *cp)
/* download new firmware */ /* download new firmware */
cas_phy_write(cp, DP83065_MII_MEM, 0x1); cas_phy_write(cp, DP83065_MII_MEM, 0x1);
cas_phy_write(cp, DP83065_MII_REGE, patch->addr); cas_phy_write(cp, DP83065_MII_REGE, cp->fw_load_addr);
while (patch->addr) { for (i = 0; i < cp->fw_size; i++)
cas_phy_write(cp, DP83065_MII_REGD, patch->val); cas_phy_write(cp, DP83065_MII_REGD, cp->fw_data[i]);
patch++;
}
/* enable firmware */ /* enable firmware */
cas_phy_write(cp, DP83065_MII_REGE, 0x8ff8); cas_phy_write(cp, DP83065_MII_REGE, 0x8ff8);
@ -2182,7 +2218,7 @@ static inline void cas_rx_flow_pkt(struct cas *cp, const u64 *words,
* do any additional locking here. stick the buffer * do any additional locking here. stick the buffer
* at the end. * at the end.
*/ */
__skb_insert(skb, flow->prev, (struct sk_buff *) flow, flow); __skb_queue_tail(flow, skb);
if (words[0] & RX_COMP1_RELEASE_FLOW) { if (words[0] & RX_COMP1_RELEASE_FLOW) {
while ((skb = __skb_dequeue(flow))) { while ((skb = __skb_dequeue(flow))) {
cas_skb_release(skb); cas_skb_release(skb);
@ -5108,6 +5144,9 @@ static int __devinit cas_init_one(struct pci_dev *pdev,
cas_reset(cp, 0); cas_reset(cp, 0);
if (cas_check_invariants(cp)) if (cas_check_invariants(cp))
goto err_out_iounmap; goto err_out_iounmap;
if (cp->cas_flags & CAS_FLAG_SATURN)
if (cas_saturn_firmware_init(cp))
goto err_out_iounmap;
cp->init_block = (struct cas_init_block *) cp->init_block = (struct cas_init_block *)
pci_alloc_consistent(pdev, sizeof(struct cas_init_block), pci_alloc_consistent(pdev, sizeof(struct cas_init_block),
@ -5217,6 +5256,9 @@ static void __devexit cas_remove_one(struct pci_dev *pdev)
cp = netdev_priv(dev); cp = netdev_priv(dev);
unregister_netdev(dev); unregister_netdev(dev);
if (cp->fw_data)
vfree(cp->fw_data);
mutex_lock(&cp->pm_mutex); mutex_lock(&cp->pm_mutex);
flush_scheduled_work(); flush_scheduled_work();
if (cp->hw_running) if (cp->hw_running)

The diff for this file is not shown because of its large size.


@ -302,13 +302,7 @@ static int cpmac_mdio_reset(struct mii_bus *bus)
static int mii_irqs[PHY_MAX_ADDR] = { PHY_POLL, }; static int mii_irqs[PHY_MAX_ADDR] = { PHY_POLL, };
static struct mii_bus cpmac_mii = { static struct mii_bus *cpmac_mii;
.name = "cpmac-mii",
.read = cpmac_mdio_read,
.write = cpmac_mdio_write,
.reset = cpmac_mdio_reset,
.irq = mii_irqs,
};
static int cpmac_config(struct net_device *dev, struct ifmap *map) static int cpmac_config(struct net_device *dev, struct ifmap *map)
{ {
@ -1116,7 +1110,7 @@ static int __devinit cpmac_probe(struct platform_device *pdev)
for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) { for (phy_id = 0; phy_id < PHY_MAX_ADDR; phy_id++) {
if (!(pdata->phy_mask & (1 << phy_id))) if (!(pdata->phy_mask & (1 << phy_id)))
continue; continue;
if (!cpmac_mii.phy_map[phy_id]) if (!cpmac_mii->phy_map[phy_id])
continue; continue;
break; break;
} }
@ -1168,7 +1162,7 @@ static int __devinit cpmac_probe(struct platform_device *pdev)
priv->msg_enable = netif_msg_init(debug_level, 0xff); priv->msg_enable = netif_msg_init(debug_level, 0xff);
memcpy(dev->dev_addr, pdata->dev_addr, sizeof(dev->dev_addr)); memcpy(dev->dev_addr, pdata->dev_addr, sizeof(dev->dev_addr));
priv->phy = phy_connect(dev, cpmac_mii.phy_map[phy_id]->dev.bus_id, priv->phy = phy_connect(dev, cpmac_mii->phy_map[phy_id]->dev.bus_id,
&cpmac_adjust_link, 0, PHY_INTERFACE_MODE_MII); &cpmac_adjust_link, 0, PHY_INTERFACE_MODE_MII);
if (IS_ERR(priv->phy)) { if (IS_ERR(priv->phy)) {
if (netif_msg_drv(priv)) if (netif_msg_drv(priv))
@ -1216,11 +1210,22 @@ int __devinit cpmac_init(void)
u32 mask; u32 mask;
int i, res; int i, res;
cpmac_mii.priv = ioremap(AR7_REGS_MDIO, 256); cpmac_mii = mdiobus_alloc();
if (cpmac_mii == NULL)
return -ENOMEM;
if (!cpmac_mii.priv) { cpmac_mii->name = "cpmac-mii";
cpmac_mii->read = cpmac_mdio_read;
cpmac_mii->write = cpmac_mdio_write;
cpmac_mii->reset = cpmac_mdio_reset;
cpmac_mii->irq = mii_irqs;
cpmac_mii->priv = ioremap(AR7_REGS_MDIO, 256);
if (!cpmac_mii->priv) {
printk(KERN_ERR "Can't ioremap mdio registers\n"); printk(KERN_ERR "Can't ioremap mdio registers\n");
return -ENXIO; res = -ENXIO;
goto fail_alloc;
} }
#warning FIXME: unhardcode gpio&reset bits #warning FIXME: unhardcode gpio&reset bits
@ -1230,10 +1235,10 @@ int __devinit cpmac_init(void)
ar7_device_reset(AR7_RESET_BIT_CPMAC_HI); ar7_device_reset(AR7_RESET_BIT_CPMAC_HI);
ar7_device_reset(AR7_RESET_BIT_EPHY); ar7_device_reset(AR7_RESET_BIT_EPHY);
cpmac_mii.reset(&cpmac_mii); cpmac_mii->reset(cpmac_mii);
for (i = 0; i < 300000; i++) for (i = 0; i < 300000; i++)
if ((mask = cpmac_read(cpmac_mii.priv, CPMAC_MDIO_ALIVE))) if ((mask = cpmac_read(cpmac_mii->priv, CPMAC_MDIO_ALIVE)))
break; break;
else else
cpu_relax(); cpu_relax();
@ -1244,10 +1249,10 @@ int __devinit cpmac_init(void)
mask = 0; mask = 0;
} }
cpmac_mii.phy_mask = ~(mask | 0x80000000); cpmac_mii->phy_mask = ~(mask | 0x80000000);
snprintf(cpmac_mii.id, MII_BUS_ID_SIZE, "0"); snprintf(cpmac_mii->id, MII_BUS_ID_SIZE, "0");
res = mdiobus_register(&cpmac_mii); res = mdiobus_register(cpmac_mii);
if (res) if (res)
goto fail_mii; goto fail_mii;
@ -1258,10 +1263,13 @@ int __devinit cpmac_init(void)
return 0; return 0;
fail_cpmac: fail_cpmac:
mdiobus_unregister(&cpmac_mii); mdiobus_unregister(cpmac_mii);
fail_mii: fail_mii:
iounmap(cpmac_mii.priv); iounmap(cpmac_mii->priv);
fail_alloc:
mdiobus_free(cpmac_mii);
return res; return res;
} }
@ -1269,8 +1277,9 @@ fail_mii:
void __devexit cpmac_exit(void) void __devexit cpmac_exit(void)
{ {
platform_driver_unregister(&cpmac_driver); platform_driver_unregister(&cpmac_driver);
mdiobus_unregister(&cpmac_mii); mdiobus_unregister(cpmac_mii);
iounmap(cpmac_mii.priv); mdiobus_free(cpmac_mii);
iounmap(cpmac_mii->priv);
} }
module_init(cpmac_init); module_init(cpmac_init);


@ -1397,9 +1397,7 @@ net_open(struct net_device *dev)
release_dma: release_dma:
#if ALLOW_DMA #if ALLOW_DMA
free_dma(dev->dma); free_dma(dev->dma);
#endif
release_irq: release_irq:
#if ALLOW_DMA
release_dma_buff(lp); release_dma_buff(lp);
#endif #endif
writereg(dev, PP_LineCTL, readreg(dev, PP_LineCTL) & ~(SERIAL_TX_ON | SERIAL_RX_ON)); writereg(dev, PP_LineCTL, readreg(dev, PP_LineCTL) & ~(SERIAL_TX_ON | SERIAL_RX_ON));


@ -54,7 +54,6 @@ struct port_info {
struct adapter *adapter; struct adapter *adapter;
struct vlan_group *vlan_grp; struct vlan_group *vlan_grp;
struct sge_qset *qs; struct sge_qset *qs;
const struct port_type_info *port_type;
u8 port_id; u8 port_id;
u8 rx_csum_offload; u8 rx_csum_offload;
u8 nqsets; u8 nqsets;
@ -124,8 +123,7 @@ struct sge_rspq { /* state for an SGE response queue */
dma_addr_t phys_addr; /* physical address of the ring */ dma_addr_t phys_addr; /* physical address of the ring */
unsigned int cntxt_id; /* SGE context id for the response q */ unsigned int cntxt_id; /* SGE context id for the response q */
spinlock_t lock; /* guards response processing */ spinlock_t lock; /* guards response processing */
struct sk_buff *rx_head; /* offload packet receive queue head */ struct sk_buff_head rx_queue; /* offload packet receive queue */
struct sk_buff *rx_tail; /* offload packet receive queue tail */
struct sk_buff *pg_skb; /* used to build frag list in napi handler */ struct sk_buff *pg_skb; /* used to build frag list in napi handler */
unsigned long offload_pkts; unsigned long offload_pkts;
@ -241,6 +239,7 @@ struct adapter {
unsigned int check_task_cnt; unsigned int check_task_cnt;
struct delayed_work adap_check_task; struct delayed_work adap_check_task;
struct work_struct ext_intr_handler_task; struct work_struct ext_intr_handler_task;
struct work_struct fatal_error_handler_task;
struct dentry *debugfs_root; struct dentry *debugfs_root;
@ -282,9 +281,11 @@ int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb);
void t3_os_ext_intr_handler(struct adapter *adapter); void t3_os_ext_intr_handler(struct adapter *adapter);
void t3_os_link_changed(struct adapter *adapter, int port_id, int link_status, void t3_os_link_changed(struct adapter *adapter, int port_id, int link_status,
int speed, int duplex, int fc); int speed, int duplex, int fc);
void t3_os_phymod_changed(struct adapter *adap, int port_id);
void t3_sge_start(struct adapter *adap); void t3_sge_start(struct adapter *adap);
void t3_sge_stop(struct adapter *adap); void t3_sge_stop(struct adapter *adap);
void t3_stop_sge_timers(struct adapter *adap);
void t3_free_sge_resources(struct adapter *adap); void t3_free_sge_resources(struct adapter *adap);
void t3_sge_err_intr_handler(struct adapter *adapter); void t3_sge_err_intr_handler(struct adapter *adapter);
irq_handler_t t3_intr_handler(struct adapter *adap, int polling); irq_handler_t t3_intr_handler(struct adapter *adap, int polling);

The diff for this file is not shown because of its large size.


@ -193,22 +193,13 @@ struct mdio_ops {
struct adapter_info { struct adapter_info {
unsigned char nports; /* # of ports */ unsigned char nports; /* # of ports */
unsigned char phy_base_addr; /* MDIO PHY base address */ unsigned char phy_base_addr; /* MDIO PHY base address */
unsigned char mdien;
unsigned char mdiinv;
unsigned int gpio_out; /* GPIO output settings */ unsigned int gpio_out; /* GPIO output settings */
unsigned int gpio_intr; /* GPIO IRQ enable mask */ unsigned char gpio_intr[MAX_NPORTS]; /* GPIO PHY IRQ pins */
unsigned long caps; /* adapter capabilities */ unsigned long caps; /* adapter capabilities */
const struct mdio_ops *mdio_ops; /* MDIO operations */ const struct mdio_ops *mdio_ops; /* MDIO operations */
const char *desc; /* product description */ const char *desc; /* product description */
}; };
struct port_type_info {
void (*phy_prep)(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *ops);
unsigned int caps;
const char *desc;
};
struct mc5_stats { struct mc5_stats {
unsigned long parity_err; unsigned long parity_err;
unsigned long active_rgn_full; unsigned long active_rgn_full;
@ -358,6 +349,7 @@ struct qset_params { /* SGE queue set parameters */
unsigned int jumbo_size; /* # of entries in jumbo free list */ unsigned int jumbo_size; /* # of entries in jumbo free list */
unsigned int txq_size[SGE_TXQ_PER_SET]; /* Tx queue sizes */ unsigned int txq_size[SGE_TXQ_PER_SET]; /* Tx queue sizes */
unsigned int cong_thres; /* FL congestion threshold */ unsigned int cong_thres; /* FL congestion threshold */
unsigned int vector; /* Interrupt (line or vector) number */
}; };
struct sge_params { struct sge_params {
@ -525,12 +517,25 @@ enum {
MAC_RXFIFO_SIZE = 32768 MAC_RXFIFO_SIZE = 32768
}; };
/* IEEE 802.3ae specified MDIO devices */ /* IEEE 802.3 specified MDIO devices */
enum { enum {
MDIO_DEV_PMA_PMD = 1, MDIO_DEV_PMA_PMD = 1,
MDIO_DEV_WIS = 2, MDIO_DEV_WIS = 2,
MDIO_DEV_PCS = 3, MDIO_DEV_PCS = 3,
MDIO_DEV_XGXS = 4 MDIO_DEV_XGXS = 4,
MDIO_DEV_ANEG = 7,
MDIO_DEV_VEND1 = 30,
MDIO_DEV_VEND2 = 31
};
/* LASI control and status registers */
enum {
RX_ALARM_CTRL = 0x9000,
TX_ALARM_CTRL = 0x9001,
LASI_CTRL = 0x9002,
RX_ALARM_STAT = 0x9003,
TX_ALARM_STAT = 0x9004,
LASI_STAT = 0x9005
}; };
/* PHY loopback direction */ /* PHY loopback direction */
@ -542,12 +547,23 @@ enum {
/* PHY interrupt types */ /* PHY interrupt types */
enum { enum {
cphy_cause_link_change = 1, cphy_cause_link_change = 1,
cphy_cause_fifo_error = 2 cphy_cause_fifo_error = 2,
cphy_cause_module_change = 4,
};
/* PHY module types */
enum {
phy_modtype_none,
phy_modtype_sr,
phy_modtype_lr,
phy_modtype_lrm,
phy_modtype_twinax,
phy_modtype_twinax_long,
phy_modtype_unknown
}; };
/* PHY operations */ /* PHY operations */
struct cphy_ops { struct cphy_ops {
void (*destroy)(struct cphy *phy);
int (*reset)(struct cphy *phy, int wait); int (*reset)(struct cphy *phy, int wait);
int (*intr_enable)(struct cphy *phy); int (*intr_enable)(struct cphy *phy);
@ -568,8 +584,12 @@ struct cphy_ops {
/* A PHY instance */ /* A PHY instance */
struct cphy { struct cphy {
int addr; /* PHY address */ u8 addr; /* PHY address */
u8 modtype; /* PHY module type */
short priv; /* scratch pad */
unsigned int caps; /* PHY capabilities */
struct adapter *adapter; /* associated adapter */ struct adapter *adapter; /* associated adapter */
const char *desc; /* PHY description */
unsigned long fifo_errors; /* FIFO over/under-flows */ unsigned long fifo_errors; /* FIFO over/under-flows */
const struct cphy_ops *ops; /* PHY operations */ const struct cphy_ops *ops; /* PHY operations */
int (*mdio_read)(struct adapter *adapter, int phy_addr, int mmd_addr, int (*mdio_read)(struct adapter *adapter, int phy_addr, int mmd_addr,
@ -594,10 +614,13 @@ static inline int mdio_write(struct cphy *phy, int mmd, int reg,
/* Convenience initializer */ /* Convenience initializer */
static inline void cphy_init(struct cphy *phy, struct adapter *adapter, static inline void cphy_init(struct cphy *phy, struct adapter *adapter,
int phy_addr, struct cphy_ops *phy_ops, int phy_addr, struct cphy_ops *phy_ops,
const struct mdio_ops *mdio_ops) const struct mdio_ops *mdio_ops,
unsigned int caps, const char *desc)
{ {
phy->adapter = adapter;
phy->addr = phy_addr; phy->addr = phy_addr;
phy->caps = caps;
phy->adapter = adapter;
phy->desc = desc;
phy->ops = phy_ops; phy->ops = phy_ops;
if (mdio_ops) { if (mdio_ops) {
phy->mdio_read = mdio_ops->read; phy->mdio_read = mdio_ops->read;
@ -668,7 +691,12 @@ int t3_mdio_change_bits(struct cphy *phy, int mmd, int reg, unsigned int clear,
unsigned int set); unsigned int set);
int t3_phy_reset(struct cphy *phy, int mmd, int wait); int t3_phy_reset(struct cphy *phy, int mmd, int wait);
int t3_phy_advertise(struct cphy *phy, unsigned int advert); int t3_phy_advertise(struct cphy *phy, unsigned int advert);
int t3_phy_advertise_fiber(struct cphy *phy, unsigned int advert);
int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex); int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex);
int t3_phy_lasi_intr_enable(struct cphy *phy);
int t3_phy_lasi_intr_disable(struct cphy *phy);
int t3_phy_lasi_intr_clear(struct cphy *phy);
int t3_phy_lasi_intr_handler(struct cphy *phy);
void t3_intr_enable(struct adapter *adapter); void t3_intr_enable(struct adapter *adapter);
void t3_intr_disable(struct adapter *adapter); void t3_intr_disable(struct adapter *adapter);
@ -698,6 +726,7 @@ int t3_check_fw_version(struct adapter *adapter, int *must_load);
int t3_init_hw(struct adapter *adapter, u32 fw_params); int t3_init_hw(struct adapter *adapter, u32 fw_params);
void mac_prep(struct cmac *mac, struct adapter *adapter, int index); void mac_prep(struct cmac *mac, struct adapter *adapter, int index);
void early_hw_init(struct adapter *adapter, const struct adapter_info *ai); void early_hw_init(struct adapter *adapter, const struct adapter_info *ai);
int t3_reset_adapter(struct adapter *adapter);
int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai, int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
int reset); int reset);
int t3_replay_prep_adapter(struct adapter *adapter); int t3_replay_prep_adapter(struct adapter *adapter);
@ -774,14 +803,16 @@ int t3_sge_read_rspq(struct adapter *adapter, unsigned int id, u32 data[4]);
int t3_sge_cqcntxt_op(struct adapter *adapter, unsigned int id, unsigned int op, int t3_sge_cqcntxt_op(struct adapter *adapter, unsigned int id, unsigned int op,
unsigned int credits); unsigned int credits);
void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter, int t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops); int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter, int t3_ael1002_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops); int phy_addr, const struct mdio_ops *mdio_ops);
void t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter, int t3_ael1006_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops); int phy_addr, const struct mdio_ops *mdio_ops);
void t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter, int phy_addr, int t3_ael2005_phy_prep(struct cphy *phy, struct adapter *adapter,
const struct mdio_ops *mdio_ops); int phy_addr, const struct mdio_ops *mdio_ops);
void t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter, int t3_qt2045_phy_prep(struct cphy *phy, struct adapter *adapter, int phy_addr,
int phy_addr, const struct mdio_ops *mdio_ops); const struct mdio_ops *mdio_ops);
int t3_xaui_direct_phy_prep(struct cphy *phy, struct adapter *adapter,
int phy_addr, const struct mdio_ops *mdio_ops);
#endif /* __CHELSIO_COMMON_H */ #endif /* __CHELSIO_COMMON_H */


@ -92,6 +92,8 @@ struct ch_qset_params {
int32_t polling; int32_t polling;
int32_t lro; int32_t lro;
int32_t cong_thres; int32_t cong_thres;
int32_t vector;
int32_t qnum;
}; };
struct ch_pktsched_params { struct ch_pktsched_params {


@ -208,6 +208,31 @@ void t3_os_link_changed(struct adapter *adapter, int port_id, int link_stat,
} }
} }
/**
* t3_os_phymod_changed - handle PHY module changes
* @phy: the PHY reporting the module change
* @mod_type: new module type
*
* This is the OS-dependent handler for PHY module changes. It is
* invoked when a PHY module is removed or inserted for any OS-specific
* processing.
*/
void t3_os_phymod_changed(struct adapter *adap, int port_id)
{
static const char *mod_str[] = {
NULL, "SR", "LR", "LRM", "TWINAX", "TWINAX", "unknown"
};
const struct net_device *dev = adap->port[port_id];
const struct port_info *pi = netdev_priv(dev);
if (pi->phy.modtype == phy_modtype_none)
printk(KERN_INFO "%s: PHY module unplugged\n", dev->name);
else
printk(KERN_INFO "%s: %s PHY module inserted\n", dev->name,
mod_str[pi->phy.modtype]);
}
static void cxgb_set_rxmode(struct net_device *dev) static void cxgb_set_rxmode(struct net_device *dev)
{ {
struct t3_rx_mode rm; struct t3_rx_mode rm;
@ -274,10 +299,10 @@ static void name_msix_vecs(struct adapter *adap)
for (i = 0; i < pi->nqsets; i++, msi_idx++) { for (i = 0; i < pi->nqsets; i++, msi_idx++) {
snprintf(adap->msix_info[msi_idx].desc, n, snprintf(adap->msix_info[msi_idx].desc, n,
"%s (queue %d)", d->name, i); "%s-%d", d->name, pi->first_qset + i);
adap->msix_info[msi_idx].desc[n] = 0; adap->msix_info[msi_idx].desc[n] = 0;
} }
} }
} }
static int request_msix_data_irqs(struct adapter *adap) static int request_msix_data_irqs(struct adapter *adap)
@ -306,6 +331,22 @@ static int request_msix_data_irqs(struct adapter *adap)
return 0; return 0;
} }
static void free_irq_resources(struct adapter *adapter)
{
if (adapter->flags & USING_MSIX) {
int i, n = 0;
free_irq(adapter->msix_info[0].vec, adapter);
for_each_port(adapter, i)
n += adap2pinfo(adapter, i)->nqsets;
for (i = 0; i < n; ++i)
free_irq(adapter->msix_info[i + 1].vec,
&adapter->sge.qs[i]);
} else
free_irq(adapter->pdev->irq, adapter);
}
static int await_mgmt_replies(struct adapter *adap, unsigned long init_cnt, static int await_mgmt_replies(struct adapter *adap, unsigned long init_cnt,
unsigned long n) unsigned long n)
{ {
@ -473,12 +514,16 @@ static int setup_sge_qsets(struct adapter *adap)
struct port_info *pi = netdev_priv(dev); struct port_info *pi = netdev_priv(dev);
pi->qs = &adap->sge.qs[pi->first_qset]; pi->qs = &adap->sge.qs[pi->first_qset];
for (j = 0; j < pi->nqsets; ++j, ++qset_idx) { for (j = pi->first_qset; j < pi->first_qset + pi->nqsets;
++j, ++qset_idx) {
if (!pi->rx_csum_offload)
adap->params.sge.qset[qset_idx].lro = 0;
err = t3_sge_alloc_qset(adap, qset_idx, 1, err = t3_sge_alloc_qset(adap, qset_idx, 1,
(adap->flags & USING_MSIX) ? qset_idx + 1 : (adap->flags & USING_MSIX) ? qset_idx + 1 :
irq_idx, irq_idx,
&adap->params.sge.qset[qset_idx], ntxq, dev); &adap->params.sge.qset[qset_idx], ntxq, dev);
if (err) { if (err) {
t3_stop_sge_timers(adap);
t3_free_sge_resources(adap); t3_free_sge_resources(adap);
return err; return err;
} }
@ -739,11 +784,12 @@ static void init_port_mtus(struct adapter *adapter)
t3_write_reg(adapter, A_TP_MTU_PORT_TABLE, mtus); t3_write_reg(adapter, A_TP_MTU_PORT_TABLE, mtus);
} }
static void send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo, static int send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo,
int hi, int port) int hi, int port)
{ {
struct sk_buff *skb; struct sk_buff *skb;
struct mngt_pktsched_wr *req; struct mngt_pktsched_wr *req;
int ret;
skb = alloc_skb(sizeof(*req), GFP_KERNEL | __GFP_NOFAIL); skb = alloc_skb(sizeof(*req), GFP_KERNEL | __GFP_NOFAIL);
req = (struct mngt_pktsched_wr *)skb_put(skb, sizeof(*req)); req = (struct mngt_pktsched_wr *)skb_put(skb, sizeof(*req));
@ -754,20 +800,28 @@ static void send_pktsched_cmd(struct adapter *adap, int sched, int qidx, int lo,
req->min = lo; req->min = lo;
req->max = hi; req->max = hi;
req->binding = port; req->binding = port;
t3_mgmt_tx(adap, skb); ret = t3_mgmt_tx(adap, skb);
return ret;
} }
static void bind_qsets(struct adapter *adap) static int bind_qsets(struct adapter *adap)
{ {
int i, j; int i, j, err = 0;
for_each_port(adap, i) { for_each_port(adap, i) {
const struct port_info *pi = adap2pinfo(adap, i); const struct port_info *pi = adap2pinfo(adap, i);
for (j = 0; j < pi->nqsets; ++j) for (j = 0; j < pi->nqsets; ++j) {
send_pktsched_cmd(adap, 1, pi->first_qset + j, -1, int ret = send_pktsched_cmd(adap, 1,
-1, i); pi->first_qset + j, -1,
-1, i);
if (ret)
err = ret;
}
} }
return err;
} }
#define FW_FNAME "t3fw-%d.%d.%d.bin" #define FW_FNAME "t3fw-%d.%d.%d.bin"
@ -891,6 +945,13 @@ static int cxgb_up(struct adapter *adap)
goto out; goto out;
} }
/*
* Clear interrupts now to catch errors if t3_init_hw fails.
* We clear them again later as initialization may trigger
* conditions that can interrupt.
*/
t3_intr_clear(adap);
err = t3_init_hw(adap, 0); err = t3_init_hw(adap, 0);
if (err) if (err)
goto out; goto out;
@ -946,9 +1007,16 @@ static int cxgb_up(struct adapter *adap)
t3_write_reg(adap, A_TP_INT_ENABLE, 0x7fbfffff); t3_write_reg(adap, A_TP_INT_ENABLE, 0x7fbfffff);
} }
if ((adap->flags & (USING_MSIX | QUEUES_BOUND)) == USING_MSIX) if (!(adap->flags & QUEUES_BOUND)) {
bind_qsets(adap); err = bind_qsets(adap);
adap->flags |= QUEUES_BOUND; if (err) {
CH_ERR(adap, "failed to bind qsets, err %d\n", err);
t3_intr_disable(adap);
free_irq_resources(adap);
goto out;
}
adap->flags |= QUEUES_BOUND;
}
out: out:
return err; return err;
@ -967,19 +1035,7 @@ static void cxgb_down(struct adapter *adapter)
t3_intr_disable(adapter); t3_intr_disable(adapter);
spin_unlock_irq(&adapter->work_lock); spin_unlock_irq(&adapter->work_lock);
if (adapter->flags & USING_MSIX) { free_irq_resources(adapter);
int i, n = 0;
free_irq(adapter->msix_info[0].vec, adapter);
for_each_port(adapter, i)
n += adap2pinfo(adapter, i)->nqsets;
for (i = 0; i < n; ++i)
free_irq(adapter->msix_info[i + 1].vec,
&adapter->sge.qs[i]);
} else
free_irq(adapter->pdev->irq, adapter);
flush_workqueue(cxgb3_wq); /* wait for external IRQ handler */ flush_workqueue(cxgb3_wq); /* wait for external IRQ handler */
quiesce_rx(adapter); quiesce_rx(adapter);
} }
@ -1100,9 +1156,9 @@ static int cxgb_close(struct net_device *dev)
netif_carrier_off(dev); netif_carrier_off(dev);
t3_mac_disable(&pi->mac, MAC_DIRECTION_TX | MAC_DIRECTION_RX); t3_mac_disable(&pi->mac, MAC_DIRECTION_TX | MAC_DIRECTION_RX);
spin_lock(&adapter->work_lock); /* sync with update task */ spin_lock_irq(&adapter->work_lock); /* sync with update task */
clear_bit(pi->port_id, &adapter->open_device_map); clear_bit(pi->port_id, &adapter->open_device_map);
spin_unlock(&adapter->work_lock); spin_unlock_irq(&adapter->work_lock);
if (!(adapter->open_device_map & PORT_MASK)) if (!(adapter->open_device_map & PORT_MASK))
cancel_rearming_delayed_workqueue(cxgb3_wq, cancel_rearming_delayed_workqueue(cxgb3_wq,
@ -1284,8 +1340,8 @@ static unsigned long collect_sge_port_stats(struct adapter *adapter,
int i; int i;
unsigned long tot = 0; unsigned long tot = 0;
for (i = 0; i < p->nqsets; ++i) for (i = p->first_qset; i < p->first_qset + p->nqsets; ++i)
tot += adapter->sge.qs[i + p->first_qset].port_stats[idx]; tot += adapter->sge.qs[i].port_stats[idx];
return tot; return tot;
} }
@ -1485,11 +1541,22 @@ static int speed_duplex_to_caps(int speed, int duplex)
static int set_settings(struct net_device *dev, struct ethtool_cmd *cmd) static int set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
{ {
int cap;
struct port_info *p = netdev_priv(dev); struct port_info *p = netdev_priv(dev);
struct link_config *lc = &p->link_config; struct link_config *lc = &p->link_config;
if (!(lc->supported & SUPPORTED_Autoneg)) if (!(lc->supported & SUPPORTED_Autoneg)) {
return -EOPNOTSUPP; /* can't change speed/duplex */ /*
* PHY offers a single speed/duplex. See if that's what's
* being requested.
*/
if (cmd->autoneg == AUTONEG_DISABLE) {
cap = speed_duplex_to_caps(cmd->speed, cmd->duplex);
if (lc->supported & cap)
return 0;
}
return -EINVAL;
}
if (cmd->autoneg == AUTONEG_DISABLE) { if (cmd->autoneg == AUTONEG_DISABLE) {
int cap = speed_duplex_to_caps(cmd->speed, cmd->duplex); int cap = speed_duplex_to_caps(cmd->speed, cmd->duplex);
@ -1568,8 +1635,10 @@ static int set_rx_csum(struct net_device *dev, u32 data)
struct adapter *adap = p->adapter; struct adapter *adap = p->adapter;
int i; int i;
for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) {
adap->params.sge.qset[i].lro = 0;
adap->sge.qs[i].lro_enabled = 0; adap->sge.qs[i].lro_enabled = 0;
}
} }
return 0; return 0;
} }
@ -1775,6 +1844,8 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
int i; int i;
struct qset_params *q; struct qset_params *q;
struct ch_qset_params t; struct ch_qset_params t;
int q1 = pi->first_qset;
int nqsets = pi->nqsets;
if (!capable(CAP_NET_ADMIN)) if (!capable(CAP_NET_ADMIN))
return -EPERM; return -EPERM;
@ -1797,6 +1868,16 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
|| !in_range(t.rspq_size, MIN_RSPQ_ENTRIES, || !in_range(t.rspq_size, MIN_RSPQ_ENTRIES,
MAX_RSPQ_ENTRIES)) MAX_RSPQ_ENTRIES))
return -EINVAL; return -EINVAL;
if ((adapter->flags & FULL_INIT_DONE) && t.lro > 0)
for_each_port(adapter, i) {
pi = adap2pinfo(adapter, i);
if (t.qset_idx >= pi->first_qset &&
t.qset_idx < pi->first_qset + pi->nqsets &&
!pi->rx_csum_offload)
return -EINVAL;
}
if ((adapter->flags & FULL_INIT_DONE) && if ((adapter->flags & FULL_INIT_DONE) &&
(t.rspq_size >= 0 || t.fl_size[0] >= 0 || (t.rspq_size >= 0 || t.fl_size[0] >= 0 ||
t.fl_size[1] >= 0 || t.txq_size[0] >= 0 || t.fl_size[1] >= 0 || t.txq_size[0] >= 0 ||
@ -1804,6 +1885,20 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
t.polling >= 0 || t.cong_thres >= 0)) t.polling >= 0 || t.cong_thres >= 0))
return -EBUSY; return -EBUSY;
/* Allow setting of any available qset when offload enabled */
if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
q1 = 0;
for_each_port(adapter, i) {
pi = adap2pinfo(adapter, i);
nqsets += pi->first_qset + pi->nqsets;
}
}
if (t.qset_idx < q1)
return -EINVAL;
if (t.qset_idx > q1 + nqsets - 1)
return -EINVAL;
q = &adapter->params.sge.qset[t.qset_idx]; q = &adapter->params.sge.qset[t.qset_idx];
if (t.rspq_size >= 0) if (t.rspq_size >= 0)
@@ -1853,13 +1948,26 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
 	case CHELSIO_GET_QSET_PARAMS:{
 		struct qset_params *q;
 		struct ch_qset_params t;
+		int q1 = pi->first_qset;
+		int nqsets = pi->nqsets;
+		int i;
 
 		if (copy_from_user(&t, useraddr, sizeof(t)))
 			return -EFAULT;
-		if (t.qset_idx >= SGE_QSETS)
+
+		/* Display qsets for all ports when offload enabled */
+		if (test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map)) {
+			q1 = 0;
+			for_each_port(adapter, i) {
+				pi = adap2pinfo(adapter, i);
+				nqsets = pi->first_qset + pi->nqsets;
+			}
+		}
+
+		if (t.qset_idx >= nqsets)
 			return -EINVAL;
-		q = &adapter->params.sge.qset[t.qset_idx];
+
+		q = &adapter->params.sge.qset[q1 + t.qset_idx];
 		t.rspq_size = q->rspq_size;
 		t.txq_size[0] = q->txq_size[0];
 		t.txq_size[1] = q->txq_size[1];
@@ -1870,6 +1978,12 @@ static int cxgb_extension_ioctl(struct net_device *dev, void __user *useraddr)
 		t.lro = q->lro;
 		t.intr_lat = q->coalesce_usecs;
 		t.cong_thres = q->cong_thres;
+		t.qnum = q1;
+
+		if (adapter->flags & USING_MSIX)
+			t.vector = adapter->msix_info[q1 + t.qset_idx + 1].vec;
+		else
+			t.vector = adapter->pdev->irq;
 
 		if (copy_to_user(useraddr, &t, sizeof(t)))
 			return -EFAULT;
@@ -2117,7 +2231,7 @@ static int cxgb_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
 			mmd = data->phy_id >> 8;
 			if (!mmd)
 				mmd = MDIO_DEV_PCS;
-			else if (mmd > MDIO_DEV_XGXS)
+			else if (mmd > MDIO_DEV_VEND2)
 				return -EINVAL;
 
 			ret =
@@ -2143,7 +2257,7 @@ static int cxgb_ioctl(struct net_device *dev, struct ifreq *req, int cmd)
 			mmd = data->phy_id >> 8;
 			if (!mmd)
 				mmd = MDIO_DEV_PCS;
-			else if (mmd > MDIO_DEV_XGXS)
+			else if (mmd > MDIO_DEV_VEND2)
 				return -EINVAL;
 
 			ret =
@@ -2215,8 +2329,8 @@ static void t3_synchronize_rx(struct adapter *adap, const struct port_info *p)
 {
 	int i;
 
-	for (i = 0; i < p->nqsets; i++) {
-		struct sge_rspq *q = &adap->sge.qs[i + p->first_qset].rspq;
+	for (i = p->first_qset; i < p->first_qset + p->nqsets; i++) {
+		struct sge_rspq *q = &adap->sge.qs[i].rspq;
 
 		spin_lock_irq(&q->lock);
 		spin_unlock_irq(&q->lock);
@@ -2290,7 +2404,7 @@ static void check_link_status(struct adapter *adapter)
 		struct net_device *dev = adapter->port[i];
 		struct port_info *p = netdev_priv(dev);
 
-		if (!(p->port_type->caps & SUPPORTED_IRQ) && netif_running(dev))
+		if (!(p->phy.caps & SUPPORTED_IRQ) && netif_running(dev))
 			t3_link_changed(adapter, i);
 	}
 }
@@ -2355,10 +2469,10 @@ static void t3_adap_check_task(struct work_struct *work)
 		check_t3b2_mac(adapter);
 
 	/* Schedule the next check update if any port is active. */
-	spin_lock(&adapter->work_lock);
+	spin_lock_irq(&adapter->work_lock);
 	if (adapter->open_device_map & PORT_MASK)
 		schedule_chk_task(adapter);
-	spin_unlock(&adapter->work_lock);
+	spin_unlock_irq(&adapter->work_lock);
 }
 
 /*
@@ -2403,6 +2517,96 @@ void t3_os_ext_intr_handler(struct adapter *adapter)
 	spin_unlock(&adapter->work_lock);
 }
 
+static int t3_adapter_error(struct adapter *adapter, int reset)
+{
+	int i, ret = 0;
+
+	/* Stop all ports */
+	for_each_port(adapter, i) {
+		struct net_device *netdev = adapter->port[i];
+
+		if (netif_running(netdev))
+			cxgb_close(netdev);
+	}
+
+	if (is_offload(adapter) &&
+	    test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map))
+		offload_close(&adapter->tdev);
+
+	/* Stop SGE timers */
+	t3_stop_sge_timers(adapter);
+
+	adapter->flags &= ~FULL_INIT_DONE;
+
+	if (reset)
+		ret = t3_reset_adapter(adapter);
+
+	pci_disable_device(adapter->pdev);
+
+	return ret;
+}
+
+static int t3_reenable_adapter(struct adapter *adapter)
+{
+	if (pci_enable_device(adapter->pdev)) {
+		dev_err(&adapter->pdev->dev,
+			"Cannot re-enable PCI device after reset.\n");
+		goto err;
+	}
+	pci_set_master(adapter->pdev);
+	pci_restore_state(adapter->pdev);
+
+	/* Free sge resources */
+	t3_free_sge_resources(adapter);
+
+	if (t3_replay_prep_adapter(adapter))
+		goto err;
+
+	return 0;
+err:
+	return -1;
+}
+
+static void t3_resume_ports(struct adapter *adapter)
+{
+	int i;
+
+	/* Restart the ports */
+	for_each_port(adapter, i) {
+		struct net_device *netdev = adapter->port[i];
+
+		if (netif_running(netdev)) {
+			if (cxgb_open(netdev)) {
+				dev_err(&adapter->pdev->dev,
+					"can't bring device back up"
+					" after reset\n");
+				continue;
+			}
+		}
+	}
+}
+
+/*
+ * processes a fatal error.
+ * Bring the ports down, reset the chip, bring the ports back up.
+ */
+static void fatal_error_task(struct work_struct *work)
+{
+	struct adapter *adapter = container_of(work, struct adapter,
+					       fatal_error_handler_task);
+	int err = 0;
+
+	rtnl_lock();
+	err = t3_adapter_error(adapter, 1);
+	if (!err)
+		err = t3_reenable_adapter(adapter);
+	if (!err)
+		t3_resume_ports(adapter);
+
+	CH_ALERT(adapter, "adapter reset %s\n", err ? "failed" : "succeeded");
+	rtnl_unlock();
+}
+
 void t3_fatal_err(struct adapter *adapter)
 {
 	unsigned int fw_status[4];
@@ -2413,7 +2617,11 @@ void t3_fatal_err(struct adapter *adapter)
 		t3_write_reg(adapter, A_XGM_RX_CTRL, 0);
 		t3_write_reg(adapter, XGM_REG(A_XGM_TX_CTRL, 1), 0);
 		t3_write_reg(adapter, XGM_REG(A_XGM_RX_CTRL, 1), 0);
+
+		spin_lock(&adapter->work_lock);
 		t3_intr_disable(adapter);
+		queue_work(cxgb3_wq, &adapter->fatal_error_handler_task);
+		spin_unlock(&adapter->work_lock);
 	}
 	CH_ALERT(adapter, "encountered fatal error, operation suspended\n");
 	if (!t3_cim_ctl_blk_read(adapter, 0xa0, 4, fw_status))
@@ -2435,23 +2643,9 @@ static pci_ers_result_t t3_io_error_detected(struct pci_dev *pdev,
 					     pci_channel_state_t state)
 {
 	struct adapter *adapter = pci_get_drvdata(pdev);
-	int i;
+	int ret;
 
-	/* Stop all ports */
-	for_each_port(adapter, i) {
-		struct net_device *netdev = adapter->port[i];
-
-		if (netif_running(netdev))
-			cxgb_close(netdev);
-	}
-
-	if (is_offload(adapter) &&
-	    test_bit(OFFLOAD_DEVMAP_BIT, &adapter->open_device_map))
-		offload_close(&adapter->tdev);
-
-	adapter->flags &= ~FULL_INIT_DONE;
-
-	pci_disable_device(pdev);
+	ret = t3_adapter_error(adapter, 0);
 
 	/* Request a slot reset. */
 	return PCI_ERS_RESULT_NEED_RESET;
@@ -2467,22 +2661,9 @@ static pci_ers_result_t t3_io_slot_reset(struct pci_dev *pdev)
 {
 	struct adapter *adapter = pci_get_drvdata(pdev);
 
-	if (pci_enable_device(pdev)) {
-		dev_err(&pdev->dev,
-			"Cannot re-enable PCI device after reset.\n");
-		goto err;
-	}
-	pci_set_master(pdev);
-	pci_restore_state(pdev);
-
-	/* Free sge resources */
-	t3_free_sge_resources(adapter);
-
-	if (t3_replay_prep_adapter(adapter))
-		goto err;
-
-	return PCI_ERS_RESULT_RECOVERED;
+	if (!t3_reenable_adapter(adapter))
+		return PCI_ERS_RESULT_RECOVERED;
 
-err:
 	return PCI_ERS_RESULT_DISCONNECT;
 }
@@ -2496,22 +2677,8 @@ err:
 static void t3_io_resume(struct pci_dev *pdev)
 {
 	struct adapter *adapter = pci_get_drvdata(pdev);
-	int i;
 
-	/* Restart the ports */
-	for_each_port(adapter, i) {
-		struct net_device *netdev = adapter->port[i];
-
-		if (netif_running(netdev)) {
-			if (cxgb_open(netdev)) {
-				dev_err(&pdev->dev,
-					"can't bring device back up"
-					" after reset\n");
-				continue;
-			}
-			netif_device_attach(netdev);
-		}
-	}
+	t3_resume_ports(adapter);
 }
 
 static struct pci_error_handlers t3_err_handler = {
@@ -2520,6 +2687,42 @@ static struct pci_error_handlers t3_err_handler = {
 	.resume = t3_io_resume,
 };
 
+/*
+ * Set the number of qsets based on the number of CPUs and the number of ports,
+ * not to exceed the number of available qsets, assuming there are enough qsets
+ * per port in HW.
+ */
+static void set_nqsets(struct adapter *adap)
+{
+	int i, j = 0;
+	int num_cpus = num_online_cpus();
+	int hwports = adap->params.nports;
+	int nqsets = SGE_QSETS;
+
+	if (adap->params.rev > 0) {
+		if (hwports == 2 &&
+		    (hwports * nqsets > SGE_QSETS ||
+		     num_cpus >= nqsets / hwports))
+			nqsets /= hwports;
+		if (nqsets > num_cpus)
+			nqsets = num_cpus;
+		if (nqsets < 1 || hwports == 4)
+			nqsets = 1;
+	} else
+		nqsets = 1;
+
+	for_each_port(adap, i) {
+		struct port_info *pi = adap2pinfo(adap, i);
+
+		pi->first_qset = j;
+		pi->nqsets = nqsets;
+		j = pi->first_qset + nqsets;
+
+		dev_info(&adap->pdev->dev,
+			 "Port %d using %d queue sets.\n", i, nqsets);
+	}
+}
+
 static int __devinit cxgb_enable_msix(struct adapter *adap)
 {
 	struct msix_entry entries[SGE_QSETS + 1];
@@ -2564,7 +2767,7 @@ static void __devinit print_port_info(struct adapter *adap,
 		if (!test_bit(i, &adap->registered_device_map))
 			continue;
 		printk(KERN_INFO "%s: %s %s %sNIC (rev %d) %s%s\n",
-		       dev->name, ai->desc, pi->port_type->desc,
+		       dev->name, ai->desc, pi->phy.desc,
 		       is_offload(adap) ? "R" : "", adap->params.rev, buf,
 		       (adap->flags & USING_MSIX) ? " MSI-X" :
 		       (adap->flags & USING_MSI) ? " MSI" : "");
@@ -2660,6 +2863,7 @@ static int __devinit init_one(struct pci_dev *pdev,
 	INIT_LIST_HEAD(&adapter->adapter_list);
 	INIT_WORK(&adapter->ext_intr_handler_task, ext_intr_task);
+	INIT_WORK(&adapter->fatal_error_handler_task, fatal_error_task);
 	INIT_DELAYED_WORK(&adapter->adap_check_task, t3_adap_check_task);
 
 	for (i = 0; i < ai->nports; ++i) {
@@ -2677,9 +2881,6 @@ static int __devinit init_one(struct pci_dev *pdev,
 		pi = netdev_priv(netdev);
 		pi->adapter = adapter;
 		pi->rx_csum_offload = 1;
-		pi->nqsets = 1;
-		pi->first_qset = i;
-		pi->activity = 0;
 		pi->port_id = i;
 		netif_carrier_off(netdev);
 		netdev->irq = pdev->irq;
@@ -2756,6 +2957,8 @@ static int __devinit init_one(struct pci_dev *pdev,
 	else if (msi > 0 && pci_enable_msi(pdev) == 0)
 		adapter->flags |= USING_MSI;
 
+	set_nqsets(adapter);
+
 	err = sysfs_create_group(&adapter->port[0]->dev.kobj,
 				 &cxgb3_attr_group);
@@ -2801,6 +3004,7 @@ static void __devexit remove_one(struct pci_dev *pdev)
 			if (test_bit(i, &adapter->registered_device_map))
 				unregister_netdev(adapter->port[i]);
 
+		t3_stop_sge_timers(adapter);
 		t3_free_sge_resources(adapter);
 		cxgb_disable_msi(adapter);

@@ -1018,7 +1018,7 @@ static void set_l2t_ix(struct t3cdev *tdev, u32 tid, struct l2t_entry *e)
 	skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
 	if (!skb) {
-		printk(KERN_ERR "%s: cannot allocate skb!\n", __FUNCTION__);
+		printk(KERN_ERR "%s: cannot allocate skb!\n", __func__);
 		return;
 	}
 	skb->priority = CPL_PRIORITY_CONTROL;
@@ -1049,14 +1049,14 @@ void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
 		return;
 	if (!is_offloading(newdev)) {
 		printk(KERN_WARNING "%s: Redirect to non-offload "
-		       "device ignored.\n", __FUNCTION__);
+		       "device ignored.\n", __func__);
 		return;
 	}
 	tdev = dev2t3cdev(olddev);
 	BUG_ON(!tdev);
 	if (tdev != dev2t3cdev(newdev)) {
 		printk(KERN_WARNING "%s: Redirect to different "
-		       "offload device ignored.\n", __FUNCTION__);
+		       "offload device ignored.\n", __func__);
 		return;
 	}
@@ -1064,7 +1064,7 @@ void cxgb_redirect(struct dst_entry *old, struct dst_entry *new)
 	e = t3_l2t_get(tdev, new->neighbour, newdev);
 	if (!e) {
 		printk(KERN_ERR "%s: couldn't allocate new l2t entry!\n",
-		       __FUNCTION__);
+		       __func__);
 		return;
 	}

@@ -86,6 +86,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
 				  struct l2t_entry *e)
 {
 	struct cpl_l2t_write_req *req;
+	struct sk_buff *tmp;
 
 	if (!skb) {
 		skb = alloc_skb(sizeof(*req), GFP_ATOMIC);
@@ -103,13 +104,11 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
 	memcpy(req->dst_mac, e->dmac, sizeof(req->dst_mac));
 	skb->priority = CPL_PRIORITY_CONTROL;
 	cxgb3_ofld_send(dev, skb);
-	while (e->arpq_head) {
-		skb = e->arpq_head;
-		e->arpq_head = skb->next;
-		skb->next = NULL;
+
+	skb_queue_walk_safe(&e->arpq, skb, tmp) {
+		__skb_unlink(skb, &e->arpq);
 		cxgb3_ofld_send(dev, skb);
 	}
-	e->arpq_tail = NULL;
 	e->state = L2T_STATE_VALID;
 
 	return 0;
@@ -121,12 +120,7 @@ static int setup_l2e_send_pending(struct t3cdev *dev, struct sk_buff *skb,
  */
 static inline void arpq_enqueue(struct l2t_entry *e, struct sk_buff *skb)
 {
-	skb->next = NULL;
-	if (e->arpq_head)
-		e->arpq_tail->next = skb;
-	else
-		e->arpq_head = skb;
-	e->arpq_tail = skb;
+	__skb_queue_tail(&e->arpq, skb);
 }
 
 int t3_l2t_send_slow(struct t3cdev *dev, struct sk_buff *skb,
@@ -167,7 +161,7 @@ again:
 			break;
 
 		spin_lock_bh(&e->lock);
-		if (e->arpq_head)
+		if (!skb_queue_empty(&e->arpq))
 			setup_l2e_send_pending(dev, skb, e);
 		else	/* we lost the race */
 			__kfree_skb(skb);
@@ -357,14 +351,14 @@ EXPORT_SYMBOL(t3_l2t_get);
  * XXX: maybe we should abandon the latter behavior and just require a failure
  * handler.
  */
-static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)
+static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff_head *arpq)
 {
-	while (arpq) {
-		struct sk_buff *skb = arpq;
+	struct sk_buff *skb, *tmp;
+
+	skb_queue_walk_safe(arpq, skb, tmp) {
 		struct l2t_skb_cb *cb = L2T_SKB_CB(skb);
 
-		arpq = skb->next;
-		skb->next = NULL;
+		__skb_unlink(skb, arpq);
 		if (cb->arp_failure_handler)
 			cb->arp_failure_handler(dev, skb);
 		else
@@ -378,8 +372,8 @@ static void handle_failed_resolution(struct t3cdev *dev, struct sk_buff *arpq)
  */
 void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
 {
+	struct sk_buff_head arpq;
 	struct l2t_entry *e;
-	struct sk_buff *arpq = NULL;
 	struct l2t_data *d = L2DATA(dev);
 	u32 addr = *(u32 *) neigh->primary_key;
 	int ifidx = neigh->dev->ifindex;
@@ -395,6 +389,8 @@ void t3_l2t_update(struct t3cdev *dev, struct neighbour *neigh)
 	return;
 
 found:
+	__skb_queue_head_init(&arpq);
+
 	read_unlock(&d->lock);
 	if (atomic_read(&e->refcnt)) {
 		if (neigh != e->neigh)
@@ -402,8 +398,7 @@ found:
 		if (e->state == L2T_STATE_RESOLVING) {
 			if (neigh->nud_state & NUD_FAILED) {
-				arpq = e->arpq_head;
-				e->arpq_head = e->arpq_tail = NULL;
+				skb_queue_splice_init(&e->arpq, &arpq);
 			} else if (neigh->nud_state & (NUD_CONNECTED|NUD_STALE))
 				setup_l2e_send_pending(dev, NULL, e);
 		} else {
@@ -415,8 +410,8 @@ found:
 	}
 	spin_unlock_bh(&e->lock);
 
-	if (arpq)
-		handle_failed_resolution(dev, arpq);
+	if (!skb_queue_empty(&arpq))
+		handle_failed_resolution(dev, &arpq);
 }
 
 struct l2t_data *t3_init_l2t(unsigned int l2t_capacity)

@@ -64,8 +64,7 @@ struct l2t_entry {
 	struct neighbour *neigh;	/* associated neighbour */
 	struct l2t_entry *first;	/* start of hash chain */
 	struct l2t_entry *next;	/* next l2t_entry on chain */
-	struct sk_buff *arpq_head;	/* queue of packets awaiting resolution */
-	struct sk_buff *arpq_tail;
+	struct sk_buff_head arpq;	/* queue of packets awaiting resolution */
 	spinlock_t lock;
 	atomic_t refcnt;	/* entry reference count */
 	u8 dmac[6];	/* neighbour's MAC address */

@@ -573,6 +573,10 @@
 #define V_GPIO10(x) ((x) << S_GPIO10)
 #define F_GPIO10    V_GPIO10(1U)
 
+#define S_GPIO9    9
+#define V_GPIO9(x) ((x) << S_GPIO9)
+#define F_GPIO9    V_GPIO9(1U)
+
 #define S_GPIO7    7
 #define V_GPIO7(x) ((x) << S_GPIO7)
 #define F_GPIO7    V_GPIO7(1U)

@@ -351,7 +351,8 @@ static void free_rx_bufs(struct pci_dev *pdev, struct sge_fl *q)
 			pci_unmap_single(pdev, pci_unmap_addr(d, dma_addr),
 					 q->buf_size, PCI_DMA_FROMDEVICE);
 		if (q->use_pages) {
-			put_page(d->pg_chunk.page);
+			if (d->pg_chunk.page)
+				put_page(d->pg_chunk.page);
 			d->pg_chunk.page = NULL;
 		} else {
 			kfree_skb(d->skb);
@@ -583,7 +584,7 @@ static void t3_reset_qset(struct sge_qset *q)
 	memset(q->fl, 0, sizeof(struct sge_fl) * SGE_RXQ_PER_SET);
 	memset(q->txq, 0, sizeof(struct sge_txq) * SGE_TXQ_PER_SET);
 	q->txq_stopped = 0;
-	memset(&q->tx_reclaim_timer, 0, sizeof(q->tx_reclaim_timer));
+	q->tx_reclaim_timer.function = NULL; /* for t3_stop_sge_timers() */
 	kfree(q->lro_frag_tbl);
 	q->lro_nfrags = q->lro_frag_len = 0;
 }
@@ -603,9 +604,6 @@ static void t3_free_qset(struct adapter *adapter, struct sge_qset *q)
 	int i;
 	struct pci_dev *pdev = adapter->pdev;
 
-	if (q->tx_reclaim_timer.function)
-		del_timer_sync(&q->tx_reclaim_timer);
-
 	for (i = 0; i < SGE_RXQ_PER_SET; ++i)
 		if (q->fl[i].desc) {
 			spin_lock_irq(&adapter->sge.reg_lock);
@@ -1704,16 +1702,15 @@ int t3_offload_tx(struct t3cdev *tdev, struct sk_buff *skb)
  */
 static inline void offload_enqueue(struct sge_rspq *q, struct sk_buff *skb)
 {
-	skb->next = skb->prev = NULL;
-	if (q->rx_tail)
-		q->rx_tail->next = skb;
-	else {
+	int was_empty = skb_queue_empty(&q->rx_queue);
+
+	__skb_queue_tail(&q->rx_queue, skb);
+
+	if (was_empty) {
 		struct sge_qset *qs = rspq_to_qset(q);
 
 		napi_schedule(&qs->napi);
-		q->rx_head = skb;
 	}
-	q->rx_tail = skb;
 }
 
 /**
@@ -1754,26 +1751,29 @@ static int ofld_poll(struct napi_struct *napi, int budget)
 	int work_done = 0;
 
 	while (work_done < budget) {
-		struct sk_buff *head, *tail, *skbs[RX_BUNDLE_SIZE];
+		struct sk_buff *skb, *tmp, *skbs[RX_BUNDLE_SIZE];
+		struct sk_buff_head queue;
 		int ngathered;
 
 		spin_lock_irq(&q->lock);
-		head = q->rx_head;
-		if (!head) {
+		__skb_queue_head_init(&queue);
+		skb_queue_splice_init(&q->rx_queue, &queue);
+		if (skb_queue_empty(&queue)) {
 			napi_complete(napi);
 			spin_unlock_irq(&q->lock);
 			return work_done;
 		}
-		tail = q->rx_tail;
-		q->rx_head = q->rx_tail = NULL;
 		spin_unlock_irq(&q->lock);
 
-		for (ngathered = 0; work_done < budget && head; work_done++) {
-			prefetch(head->data);
-			skbs[ngathered] = head;
-			head = head->next;
-			skbs[ngathered]->next = NULL;
+		ngathered = 0;
+		skb_queue_walk_safe(&queue, skb, tmp) {
+			if (work_done >= budget)
+				break;
+			work_done++;
+
+			__skb_unlink(skb, &queue);
+			prefetch(skb->data);
+			skbs[ngathered] = skb;
 			if (++ngathered == RX_BUNDLE_SIZE) {
 				q->offload_bundles++;
 				adapter->tdev.recv(&adapter->tdev, skbs,
@@ -1781,12 +1781,10 @@ static int ofld_poll(struct napi_struct *napi, int budget)
 				ngathered = 0;
 			}
 		}
-		if (head) {	/* splice remaining packets back onto Rx queue */
+		if (!skb_queue_empty(&queue)) {
+			/* splice remaining packets back onto Rx queue */
 			spin_lock_irq(&q->lock);
-			tail->next = q->rx_head;
-			if (!q->rx_head)
-				q->rx_tail = tail;
-			q->rx_head = head;
+			skb_queue_splice(&queue, &q->rx_queue);
 			spin_unlock_irq(&q->lock);
 		}
 		deliver_partial_bundle(&adapter->tdev, q, skbs, ngathered);
@@ -1937,38 +1935,6 @@ static inline int lro_frame_ok(const struct cpl_rx_pkt *p)
 		eh->h_proto == htons(ETH_P_IP) && ih->ihl == (sizeof(*ih) >> 2);
 }
 
-#define TCP_FLAG_MASK (TCP_FLAG_CWR | TCP_FLAG_ECE | TCP_FLAG_URG |\
-                       TCP_FLAG_ACK | TCP_FLAG_PSH | TCP_FLAG_RST |\
-                       TCP_FLAG_SYN | TCP_FLAG_FIN)
-#define TSTAMP_WORD ((TCPOPT_NOP << 24) | (TCPOPT_NOP << 16) |\
-                     (TCPOPT_TIMESTAMP << 8) | TCPOLEN_TIMESTAMP)
-
-/**
- *	lro_segment_ok - check if a TCP segment is eligible for LRO
- *	@tcph: the TCP header of the packet
- *
- *	Returns true if a TCP packet is eligible for LRO.  This requires that
- *	the packet have only the ACK flag set and no TCP options besides
- *	time stamps.
- */
-static inline int lro_segment_ok(const struct tcphdr *tcph)
-{
-	int optlen;
-
-	if (unlikely((tcp_flag_word(tcph) & TCP_FLAG_MASK) != TCP_FLAG_ACK))
-		return 0;
-
-	optlen = (tcph->doff << 2) - sizeof(*tcph);
-	if (optlen) {
-		const u32 *opt = (const u32 *)(tcph + 1);
-
-		if (optlen != TCPOLEN_TSTAMP_ALIGNED ||
-		    *opt != htonl(TSTAMP_WORD) || !opt[2])
-			return 0;
-	}
-	return 1;
-}
-
 static int t3_get_lro_header(void **eh, void **iph, void **tcph,
 			     u64 *hdr_flags, void *priv)
 {
@@ -1981,9 +1947,6 @@ static int t3_get_lro_header(void **eh, void **iph, void **tcph,
 	*iph = (struct iphdr *)((struct ethhdr *)*eh + 1);
 	*tcph = (struct tcphdr *)((struct iphdr *)*iph + 1);
 
-	if (!lro_segment_ok(*tcph))
-		return -1;
-
 	*hdr_flags = LRO_IPV4 | LRO_TCP;
 	return 0;
 }
@@ -2878,9 +2841,7 @@ int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,
 	struct net_lro_mgr *lro_mgr = &q->lro_mgr;
 
 	init_qset_cntxt(q, id);
-	init_timer(&q->tx_reclaim_timer);
-	q->tx_reclaim_timer.data = (unsigned long)q;
-	q->tx_reclaim_timer.function = sge_timer_cb;
+	setup_timer(&q->tx_reclaim_timer, sge_timer_cb, (unsigned long)q);
 
 	q->fl[0].desc = alloc_ring(adapter->pdev, p->fl_size,
 				   sizeof(struct rx_desc),
@@ -2934,6 +2895,7 @@ int t3_sge_alloc_qset(struct adapter *adapter, unsigned int id, int nports,
 	q->rspq.gen = 1;
 	q->rspq.size = p->rspq_size;
 	spin_lock_init(&q->rspq.lock);
+	skb_queue_head_init(&q->rspq.rx_queue);
 
 	q->txq[TXQ_ETH].stop_thres = nports *
 	    flits_to_desc(sgl_len(MAX_SKB_FRAGS + 1) + 3);
@@ -3042,6 +3004,24 @@ err:
 	return ret;
 }
 
+/**
+ *	t3_stop_sge_timers - stop SGE timer call backs
+ *	@adap: the adapter
+ *
+ *	Stops each SGE queue set's timer call back
+ */
+void t3_stop_sge_timers(struct adapter *adap)
+{
+	int i;
+
+	for (i = 0; i < SGE_QSETS; ++i) {
+		struct sge_qset *q = &adap->sge.qs[i];
+
+		if (q->tx_reclaim_timer.function)
+			del_timer_sync(&q->tx_reclaim_timer);
+	}
+}
+
 /**
  *	t3_free_sge_resources - free SGE resources
  *	@adap: the adapter

@@ -194,21 +194,18 @@ int t3_mc7_bd_read(struct mc7 *mc7, unsigned int start, unsigned int n,
 static void mi1_init(struct adapter *adap, const struct adapter_info *ai)
 {
 	u32 clkdiv = adap->params.vpd.cclk / (2 * adap->params.vpd.mdc) - 1;
-	u32 val = F_PREEN | V_MDIINV(ai->mdiinv) | V_MDIEN(ai->mdien) |
-	    V_CLKDIV(clkdiv);
+	u32 val = F_PREEN | V_CLKDIV(clkdiv);
 
-	if (!(ai->caps & SUPPORTED_10000baseT_Full))
-		val |= V_ST(1);
 	t3_write_reg(adap, A_MI1_CFG, val);
 }
 
-#define MDIO_ATTEMPTS 10
+#define MDIO_ATTEMPTS 20
 
 /*
- * MI1 read/write operations for direct-addressed PHYs.
+ * MI1 read/write operations for clause 22 PHYs.
 */
-static int mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
-		    int reg_addr, unsigned int *valp)
+static int t3_mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
+		       int reg_addr, unsigned int *valp)
 {
 	int ret;
 	u32 addr = V_REGADDR(reg_addr) | V_PHYADDR(phy_addr);
@@ -217,16 +214,17 @@ static int mi1_read(struct adapter *adapter, int phy_addr, int mmd_addr,
 		return -EINVAL;
 
 	mutex_lock(&adapter->mdio_lock);
+	t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), V_ST(1));
 	t3_write_reg(adapter, A_MI1_ADDR, addr);
 	t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(2));
-	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
+	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 10);
 	if (!ret)
 		*valp = t3_read_reg(adapter, A_MI1_DATA);
 	mutex_unlock(&adapter->mdio_lock);
 	return ret;
 }
 
-static int mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
-		     int reg_addr, unsigned int val)
+static int t3_mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
+			int reg_addr, unsigned int val)
 {
 	int ret;
@@ -236,19 +234,37 @@ static int mi1_write(struct adapter *adapter, int phy_addr, int mmd_addr,
 		return -EINVAL;
 
 	mutex_lock(&adapter->mdio_lock);
+	t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), V_ST(1));
 	t3_write_reg(adapter, A_MI1_ADDR, addr);
 	t3_write_reg(adapter, A_MI1_DATA, val);
 	t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(1));
-	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
+	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 10);
 	mutex_unlock(&adapter->mdio_lock);
 	return ret;
 }
 
 static const struct mdio_ops mi1_mdio_ops = {
-	mi1_read,
-	mi1_write
+	t3_mi1_read,
+	t3_mi1_write
 };
 
+/*
+ * Performs the address cycle for clause 45 PHYs.
+ * Must be called with the MDIO_LOCK held.
+ */
+static int mi1_wr_addr(struct adapter *adapter, int phy_addr, int mmd_addr,
+		       int reg_addr)
+{
+	u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
+
+	t3_set_reg_field(adapter, A_MI1_CFG, V_ST(M_ST), 0);
+	t3_write_reg(adapter, A_MI1_ADDR, addr);
+	t3_write_reg(adapter, A_MI1_DATA, reg_addr);
+	t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
+	return t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
+			       MDIO_ATTEMPTS, 10);
+}
+
 /*
  * MI1 read/write operations for indirect-addressed PHYs.
  */
@@ -256,17 +272,13 @@ static int mi1_ext_read(struct adapter *adapter, int phy_addr, int mmd_addr,
 			int reg_addr, unsigned int *valp)
 {
 	int ret;
-	u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
 
 	mutex_lock(&adapter->mdio_lock);
-	t3_write_reg(adapter, A_MI1_ADDR, addr);
-	t3_write_reg(adapter, A_MI1_DATA, reg_addr);
-	t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
-	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
+	ret = mi1_wr_addr(adapter, phy_addr, mmd_addr, reg_addr);
 	if (!ret) {
 		t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(3));
 		ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
-				      MDIO_ATTEMPTS, 20);
+				      MDIO_ATTEMPTS, 10);
 		if (!ret)
 			*valp = t3_read_reg(adapter, A_MI1_DATA);
 	}
@@ -278,18 +290,14 @@ static int mi1_ext_write(struct adapter *adapter, int phy_addr, int mmd_addr,
 			 int reg_addr, unsigned int val)
 {
 	int ret;
-	u32 addr = V_REGADDR(mmd_addr) | V_PHYADDR(phy_addr);
 
 	mutex_lock(&adapter->mdio_lock);
-	t3_write_reg(adapter, A_MI1_ADDR, addr);
-	t3_write_reg(adapter, A_MI1_DATA, reg_addr);
-	t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(0));
-	ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0, MDIO_ATTEMPTS, 20);
+	ret = mi1_wr_addr(adapter, phy_addr, mmd_addr, reg_addr);
 	if (!ret) {
 		t3_write_reg(adapter, A_MI1_DATA, val);
 		t3_write_reg(adapter, A_MI1_OP, V_MDI_OP(1));
 		ret = t3_wait_op_done(adapter, A_MI1_OP, F_BUSY, 0,
-				      MDIO_ATTEMPTS, 20);
+				      MDIO_ATTEMPTS, 10);
 	}
 	mutex_unlock(&adapter->mdio_lock);
 	return ret;
@ -399,6 +407,29 @@ int t3_phy_advertise(struct cphy *phy, unsigned int advert)
return mdio_write(phy, 0, MII_ADVERTISE, val); return mdio_write(phy, 0, MII_ADVERTISE, val);
} }
/**
* t3_phy_advertise_fiber - set fiber PHY advertisement register
* @phy: the PHY to operate on
* @advert: bitmap of capabilities the PHY should advertise
*
* Sets a fiber PHY's advertisement register to advertise the
* requested capabilities.
*/
int t3_phy_advertise_fiber(struct cphy *phy, unsigned int advert)
{
unsigned int val = 0;
if (advert & ADVERTISED_1000baseT_Half)
val |= ADVERTISE_1000XHALF;
if (advert & ADVERTISED_1000baseT_Full)
val |= ADVERTISE_1000XFULL;
if (advert & ADVERTISED_Pause)
val |= ADVERTISE_1000XPAUSE;
if (advert & ADVERTISED_Asym_Pause)
val |= ADVERTISE_1000XPSE_ASYM;
return mdio_write(phy, 0, MII_ADVERTISE, val);
}
 /**
  * t3_set_phy_speed_duplex - force PHY speed and duplex
  * @phy: the PHY to operate on
@@ -434,27 +465,52 @@ int t3_set_phy_speed_duplex(struct cphy *phy, int speed, int duplex)
 	return mdio_write(phy, 0, MII_BMCR, ctl);
 }
int t3_phy_lasi_intr_enable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 1);
}
int t3_phy_lasi_intr_disable(struct cphy *phy)
{
return mdio_write(phy, MDIO_DEV_PMA_PMD, LASI_CTRL, 0);
}
int t3_phy_lasi_intr_clear(struct cphy *phy)
{
u32 val;
return mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &val);
}
int t3_phy_lasi_intr_handler(struct cphy *phy)
{
unsigned int status;
int err = mdio_read(phy, MDIO_DEV_PMA_PMD, LASI_STAT, &status);
if (err)
return err;
return (status & 1) ? cphy_cause_link_change : 0;
}
 static const struct adapter_info t3_adap_info[] = {
-	{2, 0, 0, 0,
+	{2, 0,
 	 F_GPIO2_OEN | F_GPIO4_OEN |
-	 F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
-	 0,
+	 F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, { S_GPIO3, S_GPIO5 }, 0,
 	 &mi1_mdio_ops, "Chelsio PE9000"},
-	{2, 0, 0, 0,
+	{2, 0,
 	 F_GPIO2_OEN | F_GPIO4_OEN |
-	 F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, F_GPIO3 | F_GPIO5,
-	 0,
+	 F_GPIO2_OUT_VAL | F_GPIO4_OUT_VAL, { S_GPIO3, S_GPIO5 }, 0,
 	 &mi1_mdio_ops, "Chelsio T302"},
-	{1, 0, 0, 0,
+	{1, 0,
 	 F_GPIO1_OEN | F_GPIO6_OEN | F_GPIO7_OEN | F_GPIO10_OEN |
 	 F_GPIO11_OEN | F_GPIO1_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL,
-	 0, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
+	 { 0 }, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
 	 &mi1_mdio_ext_ops, "Chelsio T310"},
-	{2, 0, 0, 0,
+	{2, 0,
 	 F_GPIO1_OEN | F_GPIO2_OEN | F_GPIO4_OEN | F_GPIO5_OEN | F_GPIO6_OEN |
 	 F_GPIO7_OEN | F_GPIO10_OEN | F_GPIO11_OEN | F_GPIO1_OUT_VAL |
-	 F_GPIO5_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL, 0,
-	 SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
+	 F_GPIO5_OUT_VAL | F_GPIO6_OUT_VAL | F_GPIO10_OUT_VAL,
+	 { S_GPIO9, S_GPIO3 }, SUPPORTED_10000baseT_Full | SUPPORTED_AUI,
 	 &mi1_mdio_ext_ops, "Chelsio T320"},
 };
@@ -467,28 +523,22 @@ const struct adapter_info *t3_get_adapter_info(unsigned int id)
 	return id < ARRAY_SIZE(t3_adap_info) ? &t3_adap_info[id] : NULL;
 }
-#define CAPS_1G (SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Full | \
-		 SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_MII)
-#define CAPS_10G (SUPPORTED_10000baseT_Full | SUPPORTED_AUI)
+struct port_type_info {
+	int (*phy_prep)(struct cphy *phy, struct adapter *adapter,
+			int phy_addr, const struct mdio_ops *ops);
+};
 
 static const struct port_type_info port_types[] = {
-	{NULL},
-	{t3_ael1002_phy_prep, CAPS_10G | SUPPORTED_FIBRE,
-	 "10GBASE-XR"},
-	{t3_vsc8211_phy_prep, CAPS_1G | SUPPORTED_TP | SUPPORTED_IRQ,
-	 "10/100/1000BASE-T"},
-	{NULL, CAPS_1G | SUPPORTED_TP | SUPPORTED_IRQ,
-	 "10/100/1000BASE-T"},
-	{t3_xaui_direct_phy_prep, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
-	{NULL, CAPS_10G, "10GBASE-KX4"},
-	{t3_qt2045_phy_prep, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
-	{t3_ael1006_phy_prep, CAPS_10G | SUPPORTED_FIBRE,
-	 "10GBASE-SR"},
-	{NULL, CAPS_10G | SUPPORTED_TP, "10GBASE-CX4"},
+	{ NULL },
+	{ t3_ael1002_phy_prep },
+	{ t3_vsc8211_phy_prep },
+	{ NULL },
+	{ t3_xaui_direct_phy_prep },
+	{ t3_ael2005_phy_prep },
+	{ t3_qt2045_phy_prep },
+	{ t3_ael1006_phy_prep },
+	{ NULL },
 };
 
-#undef CAPS_1G
-#undef CAPS_10G
-
 #define VPD_ENTRY(name, len) \
 	u8 name##_kword[2]; u8 name##_len; u8 name##_data[len]
@@ -1132,6 +1182,15 @@ void t3_link_changed(struct adapter *adapter, int port_id)
 
 	phy->ops->get_link_status(phy, &link_ok, &speed, &duplex, &fc);
 
+	if (lc->requested_fc & PAUSE_AUTONEG)
+		fc &= lc->requested_fc;
+	else
+		fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
+
+	if (link_ok == lc->link_ok && speed == lc->speed &&
+	    duplex == lc->duplex && fc == lc->fc)
+		return;	/* nothing changed */
+
 	if (link_ok != lc->link_ok && adapter->params.rev > 0 &&
 	    uses_xaui(adapter)) {
 		if (link_ok)
@@ -1142,10 +1201,6 @@ void t3_link_changed(struct adapter *adapter, int port_id)
 	lc->link_ok = link_ok;
 	lc->speed = speed < 0 ? SPEED_INVALID : speed;
 	lc->duplex = duplex < 0 ? DUPLEX_INVALID : duplex;
-	if (lc->requested_fc & PAUSE_AUTONEG)
-		fc &= lc->requested_fc;
-	else
-		fc = lc->requested_fc & (PAUSE_RX | PAUSE_TX);
 
 	if (link_ok && speed >= 0 && lc->autoneg == AUTONEG_ENABLE) {
 		/* Set MAC speed, duplex, and flow control to match PHY. */
@@ -1191,7 +1246,6 @@ int t3_link_start(struct cphy *phy, struct cmac *mac, struct link_config *lc)
 						   fc);
 			/* Also disables autoneg */
 			phy->ops->set_speed_duplex(phy, lc->speed, lc->duplex);
-			phy->ops->reset(phy, 0);
 		} else
 			phy->ops->autoneg_enable(phy);
 	} else {
@@ -1221,7 +1275,7 @@ struct intr_info {
 	unsigned int mask;	/* bits to check in interrupt status */
 	const char *msg;	/* message to print or NULL */
 	short stat_idx;		/* stat counter to increment or -1 */
-	unsigned short fatal:1;	/* whether the condition reported is fatal */
+	unsigned short fatal;	/* whether the condition reported is fatal */
 };
 /**
@@ -1682,25 +1736,23 @@ static int mac_intr_handler(struct adapter *adap, unsigned int idx)
  */
 int t3_phy_intr_handler(struct adapter *adapter)
 {
-	u32 mask, gpi = adapter_info(adapter)->gpio_intr;
 	u32 i, cause = t3_read_reg(adapter, A_T3DBG_INT_CAUSE);
 
 	for_each_port(adapter, i) {
 		struct port_info *p = adap2pinfo(adapter, i);
 
-		mask = gpi - (gpi & (gpi - 1));
-		gpi -= mask;
-
-		if (!(p->port_type->caps & SUPPORTED_IRQ))
+		if (!(p->phy.caps & SUPPORTED_IRQ))
 			continue;
 
-		if (cause & mask) {
+		if (cause & (1 << adapter_info(adapter)->gpio_intr[i])) {
 			int phy_cause = p->phy.ops->intr_handler(&p->phy);
 
 			if (phy_cause & cphy_cause_link_change)
 				t3_link_changed(adapter, i);
 			if (phy_cause & cphy_cause_fifo_error)
 				p->phy.fifo_errors++;
+			if (phy_cause & cphy_cause_module_change)
+				t3_os_phymod_changed(adapter, i);
 		}
 	}
@@ -1763,6 +1815,17 @@ int t3_slow_intr_handler(struct adapter *adapter)
 	return 1;
 }
 
+static unsigned int calc_gpio_intr(struct adapter *adap)
+{
+	unsigned int i, gpi_intr = 0;
+
+	for_each_port(adap, i)
+		if ((adap2pinfo(adap, i)->phy.caps & SUPPORTED_IRQ) &&
+		    adapter_info(adap)->gpio_intr[i])
+			gpi_intr |= 1 << adapter_info(adap)->gpio_intr[i];
+	return gpi_intr;
+}
+
 /**
  * t3_intr_enable - enable interrupts
  * @adapter: the adapter whose interrupts should be enabled
@@ -1805,10 +1868,8 @@ void t3_intr_enable(struct adapter *adapter)
 		t3_write_reg(adapter, A_ULPTX_INT_ENABLE, ULPTX_INTR_MASK);
 	}
 
-	t3_write_reg(adapter, A_T3DBG_GPIO_ACT_LOW,
-		     adapter_info(adapter)->gpio_intr);
-	t3_write_reg(adapter, A_T3DBG_INT_ENABLE,
-		     adapter_info(adapter)->gpio_intr);
+	t3_write_reg(adapter, A_T3DBG_INT_ENABLE, calc_gpio_intr(adapter));
 	if (is_pcie(adapter))
 		t3_write_reg(adapter, A_PCIE_INT_ENABLE, PCIE_INTR_MASK);
 	else
@@ -3329,6 +3390,8 @@ int t3_init_hw(struct adapter *adapter, u32 fw_params)
 	init_hw_for_avail_ports(adapter, adapter->params.nports);
 	t3_sge_init(adapter, &adapter->params.sge);
 
+	t3_write_reg(adapter, A_T3DBG_GPIO_ACT_LOW, calc_gpio_intr(adapter));
+
 	t3_write_reg(adapter, A_CIM_HOST_ACC_DATA, vpd->uclk | fw_params);
 	t3_write_reg(adapter, A_CIM_BOOT_CFG,
 		     V_BOOTADDR(FW_FLASH_BOOT_ADDR >> 2));
@@ -3488,7 +3551,7 @@ void early_hw_init(struct adapter *adapter, const struct adapter_info *ai)
  * Older PCIe cards lose their config space during reset, PCI-X
  * ones don't.
  */
-static int t3_reset_adapter(struct adapter *adapter)
+int t3_reset_adapter(struct adapter *adapter)
 {
 	int i, save_and_restore_pcie =
 	    adapter->params.rev < T3_REV_B2 && is_pcie(adapter);
@@ -3556,7 +3619,7 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
 		    int reset)
 {
 	int ret;
-	unsigned int i, j = 0;
+	unsigned int i, j = -1;
 
 	get_pci_mode(adapter, &adapter->params.pci);
@@ -3620,16 +3683,18 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
 
 	for_each_port(adapter, i) {
 		u8 hw_addr[6];
+		const struct port_type_info *pti;
 		struct port_info *p = adap2pinfo(adapter, i);
 
-		while (!adapter->params.vpd.port_type[j])
-			++j;
+		while (!adapter->params.vpd.port_type[++j])
+			;
 
-		p->port_type = &port_types[adapter->params.vpd.port_type[j]];
-		p->port_type->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
-				       ai->mdio_ops);
+		pti = &port_types[adapter->params.vpd.port_type[j]];
+		ret = pti->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
+				    ai->mdio_ops);
+		if (ret)
+			return ret;
 		mac_prep(&p->mac, adapter, j);
-		++j;
 
 		/*
 		 * The VPD EEPROM stores the base Ethernet address for the
@@ -3643,9 +3708,9 @@ int t3_prep_adapter(struct adapter *adapter, const struct adapter_info *ai,
 		       ETH_ALEN);
 		memcpy(adapter->port[i]->perm_addr, hw_addr,
 		       ETH_ALEN);
-		init_link_config(&p->link_config, p->port_type->caps);
+		init_link_config(&p->link_config, p->phy.caps);
 		p->phy.ops->power_down(&p->phy, 1);
-		if (!(p->port_type->caps & SUPPORTED_IRQ))
+		if (!(p->phy.caps & SUPPORTED_IRQ))
 			adapter->params.linkpoll_period = 10;
 	}
@@ -3661,7 +3726,7 @@ void t3_led_ready(struct adapter *adapter)
 int t3_replay_prep_adapter(struct adapter *adapter)
 {
 	const struct adapter_info *ai = adapter->params.info;
-	unsigned int i, j = 0;
+	unsigned int i, j = -1;
 	int ret;
 
 	early_hw_init(adapter, ai);
@@ -3670,15 +3735,17 @@ int t3_replay_prep_adapter(struct adapter *adapter)
 		return ret;
 
 	for_each_port(adapter, i) {
+		const struct port_type_info *pti;
 		struct port_info *p = adap2pinfo(adapter, i);
 
-		while (!adapter->params.vpd.port_type[j])
-			++j;
+		while (!adapter->params.vpd.port_type[++j])
+			;
 
-		p->port_type->phy_prep(&p->phy, adapter, ai->phy_base_addr + j,
-				       ai->mdio_ops);
+		pti = &port_types[adapter->params.vpd.port_type[j]];
+		ret = pti->phy_prep(&p->phy, adapter, p->phy.addr, NULL);
+		if (ret)
+			return ret;
 		p->phy.ops->power_down(&p->phy, 1);
-		++j;
 	}
 
 	return 0;


@@ -33,28 +33,40 @@
 /* VSC8211 PHY specific registers. */
 enum {
+	VSC8211_SIGDET_CTRL = 19,
+	VSC8211_EXT_CTRL = 23,
 	VSC8211_INTR_ENABLE = 25,
 	VSC8211_INTR_STATUS = 26,
+	VSC8211_LED_CTRL = 27,
 	VSC8211_AUX_CTRL_STAT = 28,
+	VSC8211_EXT_PAGE_AXS = 31,
 };
 
 enum {
 	VSC_INTR_RX_ERR = 1 << 0,
 	VSC_INTR_MS_ERR = 1 << 1,	/* master/slave resolution error */
 	VSC_INTR_CABLE = 1 << 2,	/* cable impairment */
 	VSC_INTR_FALSE_CARR = 1 << 3,	/* false carrier */
 	VSC_INTR_MEDIA_CHG = 1 << 4,	/* AMS media change */
 	VSC_INTR_RX_FIFO = 1 << 5,	/* Rx FIFO over/underflow */
 	VSC_INTR_TX_FIFO = 1 << 6,	/* Tx FIFO over/underflow */
 	VSC_INTR_DESCRAMBL = 1 << 7,	/* descrambler lock-lost */
 	VSC_INTR_SYMBOL_ERR = 1 << 8,	/* symbol error */
 	VSC_INTR_NEG_DONE = 1 << 10,	/* autoneg done */
 	VSC_INTR_NEG_ERR = 1 << 11,	/* autoneg error */
+	VSC_INTR_DPLX_CHG = 1 << 12,	/* duplex change */
 	VSC_INTR_LINK_CHG = 1 << 13,	/* link change */
+	VSC_INTR_SPD_CHG = 1 << 14,	/* speed change */
 	VSC_INTR_ENABLE = 1 << 15,	/* interrupt enable */
 };
 
+enum {
+	VSC_CTRL_CLAUSE37_VIEW = 1 << 4,   /* Switch to Clause 37 view */
+	VSC_CTRL_MEDIA_MODE_HI = 0xf000    /* High part of media mode select */
+};
+
 #define CFG_CHG_INTR_MASK (VSC_INTR_LINK_CHG | VSC_INTR_NEG_ERR | \
+			   VSC_INTR_DPLX_CHG | VSC_INTR_SPD_CHG | \
 			   VSC_INTR_NEG_DONE)
 #define INTR_MASK (CFG_CHG_INTR_MASK | VSC_INTR_TX_FIFO | VSC_INTR_RX_FIFO | \
 		   VSC_INTR_ENABLE)
@@ -184,6 +196,112 @@ static int vsc8211_get_link_status(struct cphy *cphy, int *link_ok,
 	return 0;
 }
static int vsc8211_get_link_status_fiber(struct cphy *cphy, int *link_ok,
int *speed, int *duplex, int *fc)
{
unsigned int bmcr, status, lpa, adv;
int err, sp = -1, dplx = -1, pause = 0;
err = mdio_read(cphy, 0, MII_BMCR, &bmcr);
if (!err)
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
if (link_ok) {
/*
* BMSR_LSTATUS is latch-low, so if it is 0 we need to read it
* once more to get the current link state.
*/
if (!(status & BMSR_LSTATUS))
err = mdio_read(cphy, 0, MII_BMSR, &status);
if (err)
return err;
*link_ok = (status & BMSR_LSTATUS) != 0;
}
if (!(bmcr & BMCR_ANENABLE)) {
dplx = (bmcr & BMCR_FULLDPLX) ? DUPLEX_FULL : DUPLEX_HALF;
if (bmcr & BMCR_SPEED1000)
sp = SPEED_1000;
else if (bmcr & BMCR_SPEED100)
sp = SPEED_100;
else
sp = SPEED_10;
} else if (status & BMSR_ANEGCOMPLETE) {
err = mdio_read(cphy, 0, MII_LPA, &lpa);
if (!err)
err = mdio_read(cphy, 0, MII_ADVERTISE, &adv);
if (err)
return err;
if (adv & lpa & ADVERTISE_1000XFULL) {
dplx = DUPLEX_FULL;
sp = SPEED_1000;
} else if (adv & lpa & ADVERTISE_1000XHALF) {
dplx = DUPLEX_HALF;
sp = SPEED_1000;
}
if (fc && dplx == DUPLEX_FULL) {
if (lpa & adv & ADVERTISE_1000XPAUSE)
pause = PAUSE_RX | PAUSE_TX;
else if ((lpa & ADVERTISE_1000XPAUSE) &&
(adv & lpa & ADVERTISE_1000XPSE_ASYM))
pause = PAUSE_TX;
else if ((lpa & ADVERTISE_1000XPSE_ASYM) &&
(adv & ADVERTISE_1000XPAUSE))
pause = PAUSE_RX;
}
}
if (speed)
*speed = sp;
if (duplex)
*duplex = dplx;
if (fc)
*fc = pause;
return 0;
}
/*
* Enable/disable auto MDI/MDI-X in forced link speed mode.
*/
static int vsc8211_set_automdi(struct cphy *phy, int enable)
{
int err;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0x52b5);
if (err)
return err;
err = mdio_write(phy, 0, 18, 0x12);
if (err)
return err;
err = mdio_write(phy, 0, 17, enable ? 0x2803 : 0x3003);
if (err)
return err;
err = mdio_write(phy, 0, 16, 0x87fa);
if (err)
return err;
err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0);
if (err)
return err;
return 0;
}
int vsc8211_set_speed_duplex(struct cphy *phy, int speed, int duplex)
{
int err;
err = t3_set_phy_speed_duplex(phy, speed, duplex);
if (!err)
err = vsc8211_set_automdi(phy, 1);
return err;
}
 static int vsc8211_power_down(struct cphy *cphy, int enable)
 {
 	return t3_mdio_change_bits(cphy, 0, MII_BMCR, BMCR_PDOWN,
@@ -221,8 +339,66 @@ static struct cphy_ops vsc8211_ops = {
 	.power_down = vsc8211_power_down,
 };
 
+static struct cphy_ops vsc8211_fiber_ops = {
+	.reset = vsc8211_reset,
+	.intr_enable = vsc8211_intr_enable,
+	.intr_disable = vsc8211_intr_disable,
+	.intr_clear = vsc8211_intr_clear,
+	.intr_handler = vsc8211_intr_handler,
+	.autoneg_enable = vsc8211_autoneg_enable,
+	.autoneg_restart = vsc8211_autoneg_restart,
+	.advertise = t3_phy_advertise_fiber,
+	.set_speed_duplex = t3_set_phy_speed_duplex,
+	.get_link_status = vsc8211_get_link_status_fiber,
+	.power_down = vsc8211_power_down,
+};
+
-void t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
-			 int phy_addr, const struct mdio_ops *mdio_ops)
+int t3_vsc8211_phy_prep(struct cphy *phy, struct adapter *adapter,
+			int phy_addr, const struct mdio_ops *mdio_ops)
 {
-	cphy_init(phy, adapter, phy_addr, &vsc8211_ops, mdio_ops);
+	int err;
+	unsigned int val;
+
+	cphy_init(phy, adapter, phy_addr, &vsc8211_ops, mdio_ops,
+		  SUPPORTED_10baseT_Full | SUPPORTED_100baseT_Full |
+		  SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg | SUPPORTED_MII |
+		  SUPPORTED_TP | SUPPORTED_IRQ, "10/100/1000BASE-T");
+	msleep(20);       /* PHY needs ~10ms to start responding to MDIO */
+
+	err = mdio_read(phy, 0, VSC8211_EXT_CTRL, &val);
+	if (err)
+		return err;
+	if (val & VSC_CTRL_MEDIA_MODE_HI) {
+		/* copper interface, just need to configure the LEDs */
+		return mdio_write(phy, 0, VSC8211_LED_CTRL, 0x100);
+	}
+
+	phy->caps = SUPPORTED_1000baseT_Full | SUPPORTED_Autoneg |
+		    SUPPORTED_MII | SUPPORTED_FIBRE | SUPPORTED_IRQ;
+	phy->desc = "1000BASE-X";
+	phy->ops = &vsc8211_fiber_ops;
+
+	err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 1);
+	if (err)
+		return err;
+	err = mdio_write(phy, 0, VSC8211_SIGDET_CTRL, 1);
+	if (err)
+		return err;
+	err = mdio_write(phy, 0, VSC8211_EXT_PAGE_AXS, 0);
+	if (err)
+		return err;
+	err = mdio_write(phy, 0, VSC8211_EXT_CTRL,
+			 val | VSC_CTRL_CLAUSE37_VIEW);
+	if (err)
+		return err;
+	err = vsc8211_reset(phy, 0);
+	if (err)
+		return err;
+
+	udelay(5);		/* delay after reset before next SMI */
+	return 0;
 }


@@ -191,7 +191,7 @@ MODULE_PARM_DESC(use_io, "Force use of i/o access mode");
 #define DPRINTK(nlevel, klevel, fmt, args...) \
 	(void)((NETIF_MSG_##nlevel & nic->msg_enable) && \
 	printk(KERN_##klevel PFX "%s: %s: " fmt, nic->netdev->name, \
-		__FUNCTION__ , ## args))
+		__func__ , ## args))
 
 #define INTEL_8255X_ETHERNET_DEVICE(device_id, ich) {\
 	PCI_VENDOR_ID_INTEL, device_id, PCI_ANY_ID, PCI_ANY_ID, \

Просмотреть файл

@@ -155,8 +155,6 @@ do { \
 #endif
 
 #define E1000_MNG_VLAN_NONE	(-1)
-/* Number of packet split data buffers (not including the header buffer) */
-#define PS_PAGE_BUFFERS (MAX_PS_BUFFERS - 1)
 
 /* wrapper around a pointer to a socket buffer,
  * so a DMA handle can be stored along with the buffer */
@@ -168,14 +166,6 @@ struct e1000_buffer {
 	u16 next_to_watch;
 };
 
-struct e1000_ps_page {
-	struct page *ps_page[PS_PAGE_BUFFERS];
-};
-
-struct e1000_ps_page_dma {
-	u64 ps_page_dma[PS_PAGE_BUFFERS];
-};
-
 struct e1000_tx_ring {
 	/* pointer to the descriptor ring memory */
 	void *desc;
@@ -213,9 +203,6 @@ struct e1000_rx_ring {
 	unsigned int next_to_clean;
 	/* array of buffer information structs */
 	struct e1000_buffer *buffer_info;
-	/* arrays of page information for packet split */
-	struct e1000_ps_page *ps_page;
-	struct e1000_ps_page_dma *ps_page_dma;
 
 	/* cpu for rx queue */
 	int cpu;
@@ -228,8 +215,6 @@ struct e1000_rx_ring {
 	((((R)->next_to_clean > (R)->next_to_use) \
 	  ? 0 : (R)->count) + (R)->next_to_clean - (R)->next_to_use - 1)
 
-#define E1000_RX_DESC_PS(R, i) \
-	(&(((union e1000_rx_desc_packet_split *)((R).desc))[i]))
 #define E1000_RX_DESC_EXT(R, i) \
 	(&(((union e1000_rx_desc_extended *)((R).desc))[i]))
 #define E1000_GET_DESC(R, i, type)	(&(((struct type *)((R).desc))[i]))
@@ -311,10 +296,8 @@ struct e1000_adapter {
 	u32 rx_int_delay;
 	u32 rx_abs_int_delay;
 	bool rx_csum;
-	unsigned int rx_ps_pages;
 	u32 gorcl;
 	u64 gorcl_old;
-	u16 rx_ps_bsize0;
 
 	/* OS defined structs */
 	struct net_device *netdev;


@@ -137,15 +137,9 @@ static int e1000_clean(struct napi_struct *napi, int budget);
 static bool e1000_clean_rx_irq(struct e1000_adapter *adapter,
 			       struct e1000_rx_ring *rx_ring,
 			       int *work_done, int work_to_do);
-static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
-				  struct e1000_rx_ring *rx_ring,
-				  int *work_done, int work_to_do);
 static void e1000_alloc_rx_buffers(struct e1000_adapter *adapter,
 				   struct e1000_rx_ring *rx_ring,
 				   int cleaned_count);
-static void e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
-				      struct e1000_rx_ring *rx_ring,
-				      int cleaned_count);
 static int e1000_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd);
 static int e1000_mii_ioctl(struct net_device *netdev, struct ifreq *ifr,
 			   int cmd);
@@ -1331,7 +1325,6 @@ static int __devinit e1000_sw_init(struct e1000_adapter *adapter)
 	pci_read_config_word(pdev, PCI_COMMAND, &hw->pci_cmd_word);
 
 	adapter->rx_buffer_len = MAXIMUM_ETHERNET_VLAN_SIZE;
-	adapter->rx_ps_bsize0 = E1000_RXBUFFER_128;
 	hw->max_frame_size = netdev->mtu +
 			     ENET_HEADER_SIZE + ETHERNET_FCS_SIZE;
 	hw->min_frame_size = MINIMUM_ETHERNET_FRAME_SIZE;
@@ -1815,26 +1808,6 @@ static int e1000_setup_rx_resources(struct e1000_adapter *adapter,
 	}
 	memset(rxdr->buffer_info, 0, size);
 
-	rxdr->ps_page = kcalloc(rxdr->count, sizeof(struct e1000_ps_page),
-				GFP_KERNEL);
-	if (!rxdr->ps_page) {
-		vfree(rxdr->buffer_info);
-		DPRINTK(PROBE, ERR,
-		"Unable to allocate memory for the receive descriptor ring\n");
-		return -ENOMEM;
-	}
-
-	rxdr->ps_page_dma = kcalloc(rxdr->count,
-				    sizeof(struct e1000_ps_page_dma),
-				    GFP_KERNEL);
-	if (!rxdr->ps_page_dma) {
-		vfree(rxdr->buffer_info);
-		kfree(rxdr->ps_page);
-		DPRINTK(PROBE, ERR,
-		"Unable to allocate memory for the receive descriptor ring\n");
-		return -ENOMEM;
-	}
-
 	if (hw->mac_type <= e1000_82547_rev_2)
 		desc_len = sizeof(struct e1000_rx_desc);
 	else
@@ -1852,8 +1825,6 @@ static int e1000_setup_rx_resources(struct e1000_adapter *adapter,
 		"Unable to allocate memory for the receive descriptor ring\n");
 setup_rx_desc_die:
 		vfree(rxdr->buffer_info);
-		kfree(rxdr->ps_page);
-		kfree(rxdr->ps_page_dma);
 		return -ENOMEM;
 	}
@@ -1932,11 +1903,7 @@ int e1000_setup_all_rx_resources(struct e1000_adapter *adapter)
 static void e1000_setup_rctl(struct e1000_adapter *adapter)
 {
 	struct e1000_hw *hw = &adapter->hw;
-	u32 rctl, rfctl;
-	u32 psrctl = 0;
-#ifndef CONFIG_E1000_DISABLE_PACKET_SPLIT
-	u32 pages = 0;
-#endif
+	u32 rctl;
 
 	rctl = er32(RCTL);
@@ -1988,55 +1955,6 @@ static void e1000_setup_rctl(struct e1000_adapter *adapter)
 		break;
 	}
 
-#ifndef CONFIG_E1000_DISABLE_PACKET_SPLIT
-	/* 82571 and greater support packet-split where the protocol
-	 * header is placed in skb->data and the packet data is
-	 * placed in pages hanging off of skb_shinfo(skb)->nr_frags.
-	 * In the case of a non-split, skb->data is linearly filled,
-	 * followed by the page buffers.  Therefore, skb->data is
-	 * sized to hold the largest protocol header.
-	 */
-	/* allocations using alloc_page take too long for regular MTU
-	 * so only enable packet split for jumbo frames */
-	pages = PAGE_USE_COUNT(adapter->netdev->mtu);
-	if ((hw->mac_type >= e1000_82571) && (pages <= 3) &&
-	    PAGE_SIZE <= 16384 && (rctl & E1000_RCTL_LPE))
-		adapter->rx_ps_pages = pages;
-	else
-		adapter->rx_ps_pages = 0;
-#endif
-
-	if (adapter->rx_ps_pages) {
-		/* Configure extra packet-split registers */
-		rfctl = er32(RFCTL);
-		rfctl |= E1000_RFCTL_EXTEN;
-		/* disable packet split support for IPv6 extension headers,
-		 * because some malformed IPv6 headers can hang the RX */
-		rfctl |= (E1000_RFCTL_IPV6_EX_DIS |
-			  E1000_RFCTL_NEW_IPV6_EXT_DIS);
-
-		ew32(RFCTL, rfctl);
-
-		rctl |= E1000_RCTL_DTYP_PS;
-
-		psrctl |= adapter->rx_ps_bsize0 >>
-			E1000_PSRCTL_BSIZE0_SHIFT;
-
-		switch (adapter->rx_ps_pages) {
-		case 3:
-			psrctl |= PAGE_SIZE <<
-				E1000_PSRCTL_BSIZE3_SHIFT;
-		case 2:
-			psrctl |= PAGE_SIZE <<
-				E1000_PSRCTL_BSIZE2_SHIFT;
-		case 1:
-			psrctl |= PAGE_SIZE >>
-				E1000_PSRCTL_BSIZE1_SHIFT;
-			break;
-		}
-
-		ew32(PSRCTL, psrctl);
-	}
-
 	ew32(RCTL, rctl);
 }
@@ -2053,18 +1971,10 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 	struct e1000_hw *hw = &adapter->hw;
 	u32 rdlen, rctl, rxcsum, ctrl_ext;
 
-	if (adapter->rx_ps_pages) {
-		/* this is a 32 byte descriptor */
-		rdlen = adapter->rx_ring[0].count *
-			sizeof(union e1000_rx_desc_packet_split);
-		adapter->clean_rx = e1000_clean_rx_irq_ps;
-		adapter->alloc_rx_buf = e1000_alloc_rx_buffers_ps;
-	} else {
-		rdlen = adapter->rx_ring[0].count *
-			sizeof(struct e1000_rx_desc);
-		adapter->clean_rx = e1000_clean_rx_irq;
-		adapter->alloc_rx_buf = e1000_alloc_rx_buffers;
-	}
+	rdlen = adapter->rx_ring[0].count *
+		sizeof(struct e1000_rx_desc);
+	adapter->clean_rx = e1000_clean_rx_irq;
+	adapter->alloc_rx_buf = e1000_alloc_rx_buffers;
 
 	/* disable receives while setting up the descriptors */
 	rctl = er32(RCTL);
@@ -2109,28 +2019,14 @@ static void e1000_configure_rx(struct e1000_adapter *adapter)
 	/* Enable 82543 Receive Checksum Offload for TCP and UDP */
 	if (hw->mac_type >= e1000_82543) {
 		rxcsum = er32(RXCSUM);
-		if (adapter->rx_csum) {
+		if (adapter->rx_csum)
 			rxcsum |= E1000_RXCSUM_TUOFL;
-
-			/* Enable 82571 IPv4 payload checksum for UDP fragments
-			 * Must be used in conjunction with packet-split. */
-			if ((hw->mac_type >= e1000_82571) &&
-			    (adapter->rx_ps_pages)) {
-				rxcsum |= E1000_RXCSUM_IPPCSE;
-			}
-		} else {
-			rxcsum &= ~E1000_RXCSUM_TUOFL;
+		else
 			/* don't need to clear IPPCSE as it defaults to 0 */
-		}
+			rxcsum &= ~E1000_RXCSUM_TUOFL;
+
 		ew32(RXCSUM, rxcsum);
 	}
 
-	/* enable early receives on 82573, only takes effect if using > 2048
-	 * byte total frame size.  for example only for jumbo frames */
-#define E1000_ERT_2048 0x100
-	if (hw->mac_type == e1000_82573)
-		ew32(ERT, E1000_ERT_2048);
-
 	/* Enable Receives */
 	ew32(RCTL, rctl);
 }
@@ -2256,10 +2152,6 @@ static void e1000_free_rx_resources(struct e1000_adapter *adapter,
 
 	vfree(rx_ring->buffer_info);
 	rx_ring->buffer_info = NULL;
-	kfree(rx_ring->ps_page);
-	rx_ring->ps_page = NULL;
-	kfree(rx_ring->ps_page_dma);
-	rx_ring->ps_page_dma = NULL;
 
 	pci_free_consistent(pdev, rx_ring->size, rx_ring->desc, rx_ring->dma);
@@ -2292,11 +2184,9 @@ static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
 {
 	struct e1000_hw *hw = &adapter->hw;
 	struct e1000_buffer *buffer_info;
-	struct e1000_ps_page *ps_page;
-	struct e1000_ps_page_dma *ps_page_dma;
 	struct pci_dev *pdev = adapter->pdev;
 	unsigned long size;
-	unsigned int i, j;
+	unsigned int i;
 
 	/* Free all the Rx ring sk_buffs */
 	for (i = 0; i < rx_ring->count; i++) {
@@ -2310,25 +2200,10 @@ static void e1000_clean_rx_ring(struct e1000_adapter *adapter,
 			dev_kfree_skb(buffer_info->skb);
 			buffer_info->skb = NULL;
 		}
-		ps_page = &rx_ring->ps_page[i];
-		ps_page_dma = &rx_ring->ps_page_dma[i];
-		for (j = 0; j < adapter->rx_ps_pages; j++) {
-			if (!ps_page->ps_page[j]) break;
-			pci_unmap_page(pdev,
-				       ps_page_dma->ps_page_dma[j],
-				       PAGE_SIZE, PCI_DMA_FROMDEVICE);
-			ps_page_dma->ps_page_dma[j] = 0;
-			put_page(ps_page->ps_page[j]);
-			ps_page->ps_page[j] = NULL;
-		}
 	}
 
 	size = sizeof(struct e1000_buffer) * rx_ring->count;
 	memset(rx_ring->buffer_info, 0, size);
-	size = sizeof(struct e1000_ps_page) * rx_ring->count;
-	memset(rx_ring->ps_page, 0, size);
-	size = sizeof(struct e1000_ps_page_dma) * rx_ring->count;
-	memset(rx_ring->ps_page_dma, 0, size);
 
 	/* Zero out the descriptor ring */
@@ -2998,32 +2873,49 @@ static bool e1000_tx_csum(struct e1000_adapter *adapter,
 	struct e1000_buffer *buffer_info;
 	unsigned int i;
 	u8 css;
+	u32 cmd_len = E1000_TXD_CMD_DEXT;

-	if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
-		css = skb_transport_offset(skb);
-
-		i = tx_ring->next_to_use;
-		buffer_info = &tx_ring->buffer_info[i];
-		context_desc = E1000_CONTEXT_DESC(*tx_ring, i);
-
-		context_desc->lower_setup.ip_config = 0;
-		context_desc->upper_setup.tcp_fields.tucss = css;
-		context_desc->upper_setup.tcp_fields.tucso =
-					css + skb->csum_offset;
-		context_desc->upper_setup.tcp_fields.tucse = 0;
-		context_desc->tcp_seg_setup.data = 0;
-		context_desc->cmd_and_length = cpu_to_le32(E1000_TXD_CMD_DEXT);
-
-		buffer_info->time_stamp = jiffies;
-		buffer_info->next_to_watch = i;
-
-		if (unlikely(++i == tx_ring->count)) i = 0;
-		tx_ring->next_to_use = i;
-
-		return true;
+	if (skb->ip_summed != CHECKSUM_PARTIAL)
+		return false;
+
+	switch (skb->protocol) {
+	case __constant_htons(ETH_P_IP):
+		if (ip_hdr(skb)->protocol == IPPROTO_TCP)
+			cmd_len |= E1000_TXD_CMD_TCP;
+		break;
+	case __constant_htons(ETH_P_IPV6):
+		/* XXX not handling all IPV6 headers */
+		if (ipv6_hdr(skb)->nexthdr == IPPROTO_TCP)
+			cmd_len |= E1000_TXD_CMD_TCP;
+		break;
+	default:
+		if (unlikely(net_ratelimit()))
+			DPRINTK(DRV, WARNING,
+				"checksum_partial proto=%x!\n", skb->protocol);
+		break;
 	}
-	return false;
+
+	css = skb_transport_offset(skb);
+
+	i = tx_ring->next_to_use;
+	buffer_info = &tx_ring->buffer_info[i];
+	context_desc = E1000_CONTEXT_DESC(*tx_ring, i);
+
+	context_desc->lower_setup.ip_config = 0;
+	context_desc->upper_setup.tcp_fields.tucss = css;
+	context_desc->upper_setup.tcp_fields.tucso = css + skb->csum_offset;
+	context_desc->upper_setup.tcp_fields.tucse = 0;
+	context_desc->tcp_seg_setup.data = 0;
+	context_desc->cmd_and_length = cpu_to_le32(cmd_len);
+
+	buffer_info->time_stamp = jiffies;
+	buffer_info->next_to_watch = i;
+
+	if (unlikely(++i == tx_ring->count)) i = 0;
+	tx_ring->next_to_use = i;
+
+	return true;
 }
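The reworked e1000_tx_csum() builds the context descriptor's command length incrementally: it starts from the extended-descriptor flag and ORs in the TCP bit only when the packet is TCP over IPv4 or IPv6. A minimal sketch of that decision, with flag values assumed to mirror the driver's E1000_TXD_CMD_* constants (the driver itself inspects ip_hdr()/ipv6_hdr(); here a bare L4 protocol number stands in):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values mirroring the driver's E1000_TXD_CMD_* flags. */
#define TXD_CMD_DEXT 0x20000000u  /* extended (context) descriptor */
#define TXD_CMD_TCP  0x01000000u  /* TCP checksum command bit */

#define ETHERTYPE_IP   0x0800
#define ETHERTYPE_IPV6 0x86DD
#define L4_PROTO_TCP   6

/* Sketch of the switch in e1000_tx_csum(): start from DEXT and OR in
 * the TCP bit only for TCP-over-IPv4/IPv6; unknown ethertypes keep the
 * bare DEXT command (the driver also rate-limits a warning there). */
uint32_t tx_csum_cmd_len(uint16_t ethertype, uint8_t l4proto)
{
    uint32_t cmd_len = TXD_CMD_DEXT;

    switch (ethertype) {
    case ETHERTYPE_IP:
    case ETHERTYPE_IPV6:
        if (l4proto == L4_PROTO_TCP)
            cmd_len |= TXD_CMD_TCP;
        break;
    default:
        break;
    }
    return cmd_len;
}
```

The net effect is that a checksum-partial UDP packet still gets a valid context descriptor, just without the TCP hint.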
#define E1000_MAX_TXD_PWR 12 #define E1000_MAX_TXD_PWR 12
@ -4234,181 +4126,6 @@ next_desc:
return cleaned; return cleaned;
} }
/**
* e1000_clean_rx_irq_ps - Send received data up the network stack; packet split
* @adapter: board private structure
**/
static bool e1000_clean_rx_irq_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int *work_done, int work_to_do)
{
union e1000_rx_desc_packet_split *rx_desc, *next_rxd;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
struct e1000_buffer *buffer_info, *next_buffer;
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
struct sk_buff *skb;
unsigned int i, j;
u32 length, staterr;
int cleaned_count = 0;
bool cleaned = false;
unsigned int total_rx_bytes=0, total_rx_packets=0;
i = rx_ring->next_to_clean;
rx_desc = E1000_RX_DESC_PS(*rx_ring, i);
staterr = le32_to_cpu(rx_desc->wb.middle.status_error);
buffer_info = &rx_ring->buffer_info[i];
while (staterr & E1000_RXD_STAT_DD) {
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
if (unlikely(*work_done >= work_to_do))
break;
(*work_done)++;
skb = buffer_info->skb;
/* in the packet split case this is header only */
prefetch(skb->data - NET_IP_ALIGN);
if (++i == rx_ring->count) i = 0;
next_rxd = E1000_RX_DESC_PS(*rx_ring, i);
prefetch(next_rxd);
next_buffer = &rx_ring->buffer_info[i];
cleaned = true;
cleaned_count++;
pci_unmap_single(pdev, buffer_info->dma,
buffer_info->length,
PCI_DMA_FROMDEVICE);
if (unlikely(!(staterr & E1000_RXD_STAT_EOP))) {
E1000_DBG("%s: Packet Split buffers didn't pick up"
" the full packet\n", netdev->name);
dev_kfree_skb_irq(skb);
goto next_desc;
}
if (unlikely(staterr & E1000_RXDEXT_ERR_FRAME_ERR_MASK)) {
dev_kfree_skb_irq(skb);
goto next_desc;
}
length = le16_to_cpu(rx_desc->wb.middle.length0);
if (unlikely(!length)) {
E1000_DBG("%s: Last part of the packet spanning"
" multiple descriptors\n", netdev->name);
dev_kfree_skb_irq(skb);
goto next_desc;
}
/* Good Receive */
skb_put(skb, length);
{
/* this looks ugly, but it seems compiler issues make it
more efficient than reusing j */
int l1 = le16_to_cpu(rx_desc->wb.upper.length[0]);
/* page alloc/put takes too long and effects small packet
* throughput, so unsplit small packets and save the alloc/put*/
if (l1 && (l1 <= copybreak) && ((length + l1) <= adapter->rx_ps_bsize0)) {
u8 *vaddr;
/* there is no documentation about how to call
* kmap_atomic, so we can't hold the mapping
* very long */
pci_dma_sync_single_for_cpu(pdev,
ps_page_dma->ps_page_dma[0],
PAGE_SIZE,
PCI_DMA_FROMDEVICE);
vaddr = kmap_atomic(ps_page->ps_page[0],
KM_SKB_DATA_SOFTIRQ);
memcpy(skb_tail_pointer(skb), vaddr, l1);
kunmap_atomic(vaddr, KM_SKB_DATA_SOFTIRQ);
pci_dma_sync_single_for_device(pdev,
ps_page_dma->ps_page_dma[0],
PAGE_SIZE, PCI_DMA_FROMDEVICE);
/* remove the CRC */
l1 -= 4;
skb_put(skb, l1);
goto copydone;
} /* if */
}
for (j = 0; j < adapter->rx_ps_pages; j++) {
length = le16_to_cpu(rx_desc->wb.upper.length[j]);
if (!length)
break;
pci_unmap_page(pdev, ps_page_dma->ps_page_dma[j],
PAGE_SIZE, PCI_DMA_FROMDEVICE);
ps_page_dma->ps_page_dma[j] = 0;
skb_fill_page_desc(skb, j, ps_page->ps_page[j], 0,
length);
ps_page->ps_page[j] = NULL;
skb->len += length;
skb->data_len += length;
skb->truesize += length;
}
/* strip the ethernet crc, problem is we're using pages now so
* this whole operation can get a little cpu intensive */
pskb_trim(skb, skb->len - 4);
copydone:
total_rx_bytes += skb->len;
total_rx_packets++;
e1000_rx_checksum(adapter, staterr,
le16_to_cpu(rx_desc->wb.lower.hi_dword.csum_ip.csum), skb);
skb->protocol = eth_type_trans(skb, netdev);
if (likely(rx_desc->wb.upper.header_status &
cpu_to_le16(E1000_RXDPS_HDRSTAT_HDRSP)))
adapter->rx_hdr_split++;
if (unlikely(adapter->vlgrp && (staterr & E1000_RXD_STAT_VP))) {
vlan_hwaccel_receive_skb(skb, adapter->vlgrp,
le16_to_cpu(rx_desc->wb.middle.vlan));
} else {
netif_receive_skb(skb);
}
netdev->last_rx = jiffies;
next_desc:
rx_desc->wb.middle.status_error &= cpu_to_le32(~0xFF);
buffer_info->skb = NULL;
/* return some buffers to hardware, one at a time is too slow */
if (unlikely(cleaned_count >= E1000_RX_BUFFER_WRITE)) {
adapter->alloc_rx_buf(adapter, rx_ring, cleaned_count);
cleaned_count = 0;
}
/* use prefetched values */
rx_desc = next_rxd;
buffer_info = next_buffer;
staterr = le32_to_cpu(rx_desc->wb.middle.status_error);
}
rx_ring->next_to_clean = i;
cleaned_count = E1000_DESC_UNUSED(rx_ring);
if (cleaned_count)
adapter->alloc_rx_buf(adapter, rx_ring, cleaned_count);
adapter->total_rx_packets += total_rx_packets;
adapter->total_rx_bytes += total_rx_bytes;
adapter->net_stats.rx_bytes += total_rx_bytes;
adapter->net_stats.rx_packets += total_rx_packets;
return cleaned;
}
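The comment in the removed e1000_clean_rx_irq_ps() explains why small packets are "unsplit": page alloc/put dominates small-packet cost, so if the first data fragment fits under copybreak the driver memcpy's it into the header skb instead of attaching the page. A sketch of just that predicate (parameter names are illustrative, not the driver's):

```c
#include <stdbool.h>

/* Sketch of the unsplit test in e1000_clean_rx_irq_ps(): copy the
 * first fragment into the header buffer only when it is non-empty,
 * at or under the copybreak threshold, and the header plus fragment
 * still fits in the bsize0 header buffer. */
bool should_unsplit(int l1, int hdr_len, int copybreak, int bsize0)
{
    return l1 != 0 && l1 <= copybreak && hdr_len + l1 <= bsize0;
}
```

When the predicate holds, the page stays in the ring for reuse and only a short memcpy is paid.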
/** /**
* e1000_alloc_rx_buffers - Replace used receive buffers; legacy & extended * e1000_alloc_rx_buffers - Replace used receive buffers; legacy & extended
* @adapter: address of board private structure * @adapter: address of board private structure
@ -4520,104 +4237,6 @@ map_skb:
} }
} }
/**
* e1000_alloc_rx_buffers_ps - Replace used receive buffers; packet split
* @adapter: address of board private structure
**/
static void e1000_alloc_rx_buffers_ps(struct e1000_adapter *adapter,
struct e1000_rx_ring *rx_ring,
int cleaned_count)
{
struct e1000_hw *hw = &adapter->hw;
struct net_device *netdev = adapter->netdev;
struct pci_dev *pdev = adapter->pdev;
union e1000_rx_desc_packet_split *rx_desc;
struct e1000_buffer *buffer_info;
struct e1000_ps_page *ps_page;
struct e1000_ps_page_dma *ps_page_dma;
struct sk_buff *skb;
unsigned int i, j;
i = rx_ring->next_to_use;
buffer_info = &rx_ring->buffer_info[i];
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
while (cleaned_count--) {
rx_desc = E1000_RX_DESC_PS(*rx_ring, i);
for (j = 0; j < PS_PAGE_BUFFERS; j++) {
if (j < adapter->rx_ps_pages) {
if (likely(!ps_page->ps_page[j])) {
ps_page->ps_page[j] =
alloc_page(GFP_ATOMIC);
if (unlikely(!ps_page->ps_page[j])) {
adapter->alloc_rx_buff_failed++;
goto no_buffers;
}
ps_page_dma->ps_page_dma[j] =
pci_map_page(pdev,
ps_page->ps_page[j],
0, PAGE_SIZE,
PCI_DMA_FROMDEVICE);
}
/* Refresh the desc even if buffer_addrs didn't
* change because each write-back erases
* this info.
*/
rx_desc->read.buffer_addr[j+1] =
cpu_to_le64(ps_page_dma->ps_page_dma[j]);
} else
rx_desc->read.buffer_addr[j+1] = ~cpu_to_le64(0);
}
skb = netdev_alloc_skb(netdev,
adapter->rx_ps_bsize0 + NET_IP_ALIGN);
if (unlikely(!skb)) {
adapter->alloc_rx_buff_failed++;
break;
}
/* Make buffer alignment 2 beyond a 16 byte boundary
* this will result in a 16 byte aligned IP header after
* the 14 byte MAC header is removed
*/
skb_reserve(skb, NET_IP_ALIGN);
buffer_info->skb = skb;
buffer_info->length = adapter->rx_ps_bsize0;
buffer_info->dma = pci_map_single(pdev, skb->data,
adapter->rx_ps_bsize0,
PCI_DMA_FROMDEVICE);
rx_desc->read.buffer_addr[0] = cpu_to_le64(buffer_info->dma);
if (unlikely(++i == rx_ring->count)) i = 0;
buffer_info = &rx_ring->buffer_info[i];
ps_page = &rx_ring->ps_page[i];
ps_page_dma = &rx_ring->ps_page_dma[i];
}
no_buffers:
if (likely(rx_ring->next_to_use != i)) {
rx_ring->next_to_use = i;
if (unlikely(i-- == 0)) i = (rx_ring->count - 1);
/* Force memory writes to complete before letting h/w
* know there are new descriptors to fetch. (Only
* applicable for weak-ordered memory model archs,
* such as IA-64). */
wmb();
/* Hardware increments by 16 bytes, but packet split
* descriptors are 32 bytes...so we increment tail
* twice as much.
*/
writel(i<<1, hw->hw_addr + rx_ring->rdt);
}
}
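The tail write at the end of the removed e1000_alloc_rx_buffers_ps() is doubled because, as its comment notes, the hardware tail register advances in 16-byte units while packet-split descriptors are 32 bytes. Combined with the wrap-around decrement just before it, the written value can be sketched as:

```c
#include <stdint.h>

/* Sketch of the RDT computation in e1000_alloc_rx_buffers_ps():
 * step back from next_to_use to the last filled descriptor (with
 * ring wrap), then double the index because 32-byte packet-split
 * descriptors sit on a 16-byte tail granularity (writel(i << 1)). */
uint32_t ps_tail_value(uint32_t next_to_use, uint32_t ring_count)
{
    uint32_t i = next_to_use ? next_to_use - 1 : ring_count - 1;

    return i << 1;
}
```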
/** /**
* e1000_smartspeed - Workaround for SmartSpeed on 82541 and 82547 controllers. * e1000_smartspeed - Workaround for SmartSpeed on 82541 and 82547 controllers.
* @adapter: * @adapter:


@ -38,6 +38,7 @@
* 82573V Gigabit Ethernet Controller (Copper) * 82573V Gigabit Ethernet Controller (Copper)
* 82573E Gigabit Ethernet Controller (Copper) * 82573E Gigabit Ethernet Controller (Copper)
* 82573L Gigabit Ethernet Controller * 82573L Gigabit Ethernet Controller
* 82574L Gigabit Network Connection
*/ */
#include <linux/netdevice.h> #include <linux/netdevice.h>
@ -54,6 +55,8 @@
#define E1000_GCR_L1_ACT_WITHOUT_L0S_RX 0x08000000 #define E1000_GCR_L1_ACT_WITHOUT_L0S_RX 0x08000000
#define E1000_NVM_INIT_CTRL2_MNGM 0x6000 /* Manageability Operation Mode mask */
static s32 e1000_get_phy_id_82571(struct e1000_hw *hw); static s32 e1000_get_phy_id_82571(struct e1000_hw *hw);
static s32 e1000_setup_copper_link_82571(struct e1000_hw *hw); static s32 e1000_setup_copper_link_82571(struct e1000_hw *hw);
static s32 e1000_setup_fiber_serdes_link_82571(struct e1000_hw *hw); static s32 e1000_setup_fiber_serdes_link_82571(struct e1000_hw *hw);
@ -63,6 +66,8 @@ static s32 e1000_fix_nvm_checksum_82571(struct e1000_hw *hw);
static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw); static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw);
static s32 e1000_setup_link_82571(struct e1000_hw *hw); static s32 e1000_setup_link_82571(struct e1000_hw *hw);
static void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw); static void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw);
static bool e1000_check_mng_mode_82574(struct e1000_hw *hw);
static s32 e1000_led_on_82574(struct e1000_hw *hw);
/** /**
* e1000_init_phy_params_82571 - Init PHY func ptrs. * e1000_init_phy_params_82571 - Init PHY func ptrs.
@ -92,6 +97,9 @@ static s32 e1000_init_phy_params_82571(struct e1000_hw *hw)
case e1000_82573: case e1000_82573:
phy->type = e1000_phy_m88; phy->type = e1000_phy_m88;
break; break;
case e1000_82574:
phy->type = e1000_phy_bm;
break;
default: default:
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
break; break;
@ -111,6 +119,10 @@ static s32 e1000_init_phy_params_82571(struct e1000_hw *hw)
if (phy->id != M88E1111_I_PHY_ID) if (phy->id != M88E1111_I_PHY_ID)
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
break; break;
case e1000_82574:
if (phy->id != BME1000_E_PHY_ID_R2)
return -E1000_ERR_PHY;
break;
default: default:
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
break; break;
@ -150,6 +162,7 @@ static s32 e1000_init_nvm_params_82571(struct e1000_hw *hw)
switch (hw->mac.type) { switch (hw->mac.type) {
case e1000_82573: case e1000_82573:
case e1000_82574:
if (((eecd >> 15) & 0x3) == 0x3) { if (((eecd >> 15) & 0x3) == 0x3) {
nvm->type = e1000_nvm_flash_hw; nvm->type = e1000_nvm_flash_hw;
nvm->word_size = 2048; nvm->word_size = 2048;
@ -245,6 +258,17 @@ static s32 e1000_init_mac_params_82571(struct e1000_adapter *adapter)
break; break;
} }
switch (hw->mac.type) {
case e1000_82574:
func->check_mng_mode = e1000_check_mng_mode_82574;
func->led_on = e1000_led_on_82574;
break;
default:
func->check_mng_mode = e1000e_check_mng_mode_generic;
func->led_on = e1000e_led_on_generic;
break;
}
return 0; return 0;
} }
@ -330,6 +354,8 @@ static s32 e1000_get_variants_82571(struct e1000_adapter *adapter)
static s32 e1000_get_phy_id_82571(struct e1000_hw *hw) static s32 e1000_get_phy_id_82571(struct e1000_hw *hw)
{ {
struct e1000_phy_info *phy = &hw->phy; struct e1000_phy_info *phy = &hw->phy;
s32 ret_val;
u16 phy_id = 0;
switch (hw->mac.type) { switch (hw->mac.type) {
case e1000_82571: case e1000_82571:
@ -345,6 +371,20 @@ static s32 e1000_get_phy_id_82571(struct e1000_hw *hw)
case e1000_82573: case e1000_82573:
return e1000e_get_phy_id(hw); return e1000e_get_phy_id(hw);
break; break;
case e1000_82574:
ret_val = e1e_rphy(hw, PHY_ID1, &phy_id);
if (ret_val)
return ret_val;
phy->id = (u32)(phy_id << 16);
udelay(20);
ret_val = e1e_rphy(hw, PHY_ID2, &phy_id);
if (ret_val)
return ret_val;
phy->id |= (u32)(phy_id);
phy->revision = (u32)(phy_id & ~PHY_REVISION_MASK);
break;
default: default:
return -E1000_ERR_PHY; return -E1000_ERR_PHY;
break; break;
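The new 82574 branch of e1000_get_phy_id_82571() assembles the 32-bit PHY id from two 16-bit register reads: PHY_ID1 supplies the high word, PHY_ID2 the low word, and the revision is the part of PHY_ID2 outside PHY_REVISION_MASK. A sketch of the bit composition (mask value assumed to match the driver's PHY_REVISION_MASK):

```c
#include <stdint.h>

#define PHY_REVISION_MASK 0xFFFFFFF0u  /* assumed driver value */

/* High word from PHY_ID1, low word from PHY_ID2, as in the 82574
 * case of e1000_get_phy_id_82571(). */
uint32_t compose_phy_id(uint16_t id1, uint16_t id2)
{
    return ((uint32_t)id1 << 16) | id2;
}

/* Revision lives in the low bits of PHY_ID2 outside the mask. */
uint32_t phy_revision(uint16_t id2)
{
    return (uint32_t)id2 & ~PHY_REVISION_MASK;
}
```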
@ -421,7 +461,7 @@ static s32 e1000_acquire_nvm_82571(struct e1000_hw *hw)
if (ret_val) if (ret_val)
return ret_val; return ret_val;
-	if (hw->mac.type != e1000_82573)
+	if (hw->mac.type != e1000_82573 && hw->mac.type != e1000_82574)
ret_val = e1000e_acquire_nvm(hw); ret_val = e1000e_acquire_nvm(hw);
if (ret_val) if (ret_val)
@ -461,6 +501,7 @@ static s32 e1000_write_nvm_82571(struct e1000_hw *hw, u16 offset, u16 words,
switch (hw->mac.type) { switch (hw->mac.type) {
case e1000_82573: case e1000_82573:
case e1000_82574:
ret_val = e1000_write_nvm_eewr_82571(hw, offset, words, data); ret_val = e1000_write_nvm_eewr_82571(hw, offset, words, data);
break; break;
case e1000_82571: case e1000_82571:
@ -735,7 +776,7 @@ static s32 e1000_reset_hw_82571(struct e1000_hw *hw)
* Must acquire the MDIO ownership before MAC reset. * Must acquire the MDIO ownership before MAC reset.
* Ownership defaults to firmware after a reset. * Ownership defaults to firmware after a reset.
*/ */
-	if (hw->mac.type == e1000_82573) {
+	if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
extcnf_ctrl = er32(EXTCNF_CTRL); extcnf_ctrl = er32(EXTCNF_CTRL);
extcnf_ctrl |= E1000_EXTCNF_CTRL_MDIO_SW_OWNERSHIP; extcnf_ctrl |= E1000_EXTCNF_CTRL_MDIO_SW_OWNERSHIP;
@ -776,7 +817,7 @@ static s32 e1000_reset_hw_82571(struct e1000_hw *hw)
* Need to wait for Phy configuration completion before accessing * Need to wait for Phy configuration completion before accessing
* NVM and Phy. * NVM and Phy.
*/ */
-	if (hw->mac.type == e1000_82573)
+	if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574)
msleep(25); msleep(25);
/* Clear any pending interrupt events. */ /* Clear any pending interrupt events. */
@ -843,7 +884,7 @@ static s32 e1000_init_hw_82571(struct e1000_hw *hw)
ew32(TXDCTL(0), reg_data); ew32(TXDCTL(0), reg_data);
/* ...for both queues. */ /* ...for both queues. */
-	if (mac->type != e1000_82573) {
+	if (mac->type != e1000_82573 && mac->type != e1000_82574) {
reg_data = er32(TXDCTL(1)); reg_data = er32(TXDCTL(1));
reg_data = (reg_data & ~E1000_TXDCTL_WTHRESH) | reg_data = (reg_data & ~E1000_TXDCTL_WTHRESH) |
E1000_TXDCTL_FULL_TX_DESC_WB | E1000_TXDCTL_FULL_TX_DESC_WB |
@ -918,19 +959,28 @@ static void e1000_initialize_hw_bits_82571(struct e1000_hw *hw)
} }
/* Device Control */ /* Device Control */
-	if (hw->mac.type == e1000_82573) {
+	if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
reg = er32(CTRL); reg = er32(CTRL);
reg &= ~(1 << 29); reg &= ~(1 << 29);
ew32(CTRL, reg); ew32(CTRL, reg);
} }
/* Extended Device Control */ /* Extended Device Control */
-	if (hw->mac.type == e1000_82573) {
+	if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
reg = er32(CTRL_EXT); reg = er32(CTRL_EXT);
reg &= ~(1 << 23); reg &= ~(1 << 23);
reg |= (1 << 22); reg |= (1 << 22);
ew32(CTRL_EXT, reg); ew32(CTRL_EXT, reg);
} }
/* PCI-Ex Control Register */
if (hw->mac.type == e1000_82574) {
reg = er32(GCR);
reg |= (1 << 22);
ew32(GCR, reg);
}
return;
} }
/** /**
@ -947,7 +997,7 @@ void e1000e_clear_vfta(struct e1000_hw *hw)
u32 vfta_offset = 0; u32 vfta_offset = 0;
u32 vfta_bit_in_reg = 0; u32 vfta_bit_in_reg = 0;
-	if (hw->mac.type == e1000_82573) {
+	if (hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) {
if (hw->mng_cookie.vlan_id != 0) { if (hw->mng_cookie.vlan_id != 0) {
/* /*
* The VFTA is a 4096b bit-field, each identifying * The VFTA is a 4096b bit-field, each identifying
@ -975,6 +1025,48 @@ void e1000e_clear_vfta(struct e1000_hw *hw)
} }
} }
/**
* e1000_check_mng_mode_82574 - Check manageability is enabled
* @hw: pointer to the HW structure
*
* Reads the NVM Initialization Control Word 2 and returns true
* (>0) if any manageability is enabled, else false (0).
**/
static bool e1000_check_mng_mode_82574(struct e1000_hw *hw)
{
u16 data;
e1000_read_nvm(hw, NVM_INIT_CONTROL2_REG, 1, &data);
return (data & E1000_NVM_INIT_CTRL2_MNGM) != 0;
}
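As the kerneldoc above states, e1000_check_mng_mode_82574() reads NVM Initialization Control Word 2 and reports manageability enabled when any mode bits under E1000_NVM_INIT_CTRL2_MNGM (0x6000) are set. The bit test itself reduces to:

```c
#include <stdbool.h>
#include <stdint.h>

#define NVM_INIT_CTRL2_MNGM 0x6000  /* manageability operation mode mask */

/* Sketch of e1000_check_mng_mode_82574(): any non-zero value in the
 * two MNGM bits of Init Control Word 2 means manageability is on. */
bool mng_mode_enabled(uint16_t init_ctrl2)
{
    return (init_ctrl2 & NVM_INIT_CTRL2_MNGM) != 0;
}
```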
/**
* e1000_led_on_82574 - Turn LED on
* @hw: pointer to the HW structure
*
* Turn LED on.
**/
static s32 e1000_led_on_82574(struct e1000_hw *hw)
{
u32 ctrl;
u32 i;
ctrl = hw->mac.ledctl_mode2;
if (!(E1000_STATUS_LU & er32(STATUS))) {
/*
* If no link, then turn LED on by setting the invert bit
* for each LED that's "on" (0x0E) in ledctl_mode2.
*/
for (i = 0; i < 4; i++)
if (((hw->mac.ledctl_mode2 >> (i * 8)) & 0xFF) ==
E1000_LEDCTL_MODE_LED_ON)
ctrl |= (E1000_LEDCTL_LED0_IVRT << (i * 8));
}
ew32(LEDCTL, ctrl);
return 0;
}
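e1000_led_on_82574() handles the no-link case by flipping the invert bit for every LED whose mode byte in ledctl_mode2 is programmed "LED on". The register packs four LEDs one byte each; a sketch of the loop, assuming the usual e1000 LEDCTL layout (mode 0x0E = LED on, invert at bit 6 of each byte):

```c
#include <stdint.h>

/* Assumed values matching the e1000 LEDCTL layout: one byte per LED,
 * mode in the low byte, invert bit at bit 6 of that byte. */
#define LEDCTL_MODE_LED_ON 0x0E
#define LEDCTL_LED0_IVRT   0x00000040u

/* Sketch of the no-link path in e1000_led_on_82574(): set the invert
 * bit for every LED whose mode byte equals "LED on". */
uint32_t led_on_no_link(uint32_t ledctl_mode2)
{
    uint32_t ctrl = ledctl_mode2;
    int i;

    for (i = 0; i < 4; i++)
        if (((ledctl_mode2 >> (i * 8)) & 0xFF) == LEDCTL_MODE_LED_ON)
            ctrl |= LEDCTL_LED0_IVRT << (i * 8);
    return ctrl;
}
```

With link up, the function writes ledctl_mode2 unchanged; the inversion is only needed because an "on" LED with no link would otherwise stay dark.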
/** /**
* e1000_update_mc_addr_list_82571 - Update Multicast addresses * e1000_update_mc_addr_list_82571 - Update Multicast addresses
* @hw: pointer to the HW structure * @hw: pointer to the HW structure
@ -1018,7 +1110,8 @@ static s32 e1000_setup_link_82571(struct e1000_hw *hw)
* the default flow control setting, so we explicitly * the default flow control setting, so we explicitly
* set it to full. * set it to full.
*/ */
-	if (hw->mac.type == e1000_82573)
+	if ((hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) &&
+	    hw->fc.type == e1000_fc_default)
 		hw->fc.type = e1000_fc_full;
return e1000e_setup_link(hw); return e1000e_setup_link(hw);
@ -1045,6 +1138,7 @@ static s32 e1000_setup_copper_link_82571(struct e1000_hw *hw)
switch (hw->phy.type) { switch (hw->phy.type) {
case e1000_phy_m88: case e1000_phy_m88:
case e1000_phy_bm:
ret_val = e1000e_copper_link_setup_m88(hw); ret_val = e1000e_copper_link_setup_m88(hw);
break; break;
case e1000_phy_igp_2: case e1000_phy_igp_2:
@ -1114,11 +1208,10 @@ static s32 e1000_valid_led_default_82571(struct e1000_hw *hw, u16 *data)
return ret_val; return ret_val;
} }
-	if (hw->mac.type == e1000_82573 &&
+	if ((hw->mac.type == e1000_82573 || hw->mac.type == e1000_82574) &&
 	    *data == ID_LED_RESERVED_F746)
 		*data = ID_LED_DEFAULT_82573;
-	else if (*data == ID_LED_RESERVED_0000 ||
-		 *data == ID_LED_RESERVED_FFFF)
+	else if (*data == ID_LED_RESERVED_0000 || *data == ID_LED_RESERVED_FFFF)
 		*data = ID_LED_DEFAULT;
return 0; return 0;
@ -1265,13 +1358,13 @@ static void e1000_clear_hw_cntrs_82571(struct e1000_hw *hw)
} }
 static struct e1000_mac_operations e82571_mac_ops = {
-	.mng_mode_enab = E1000_MNG_IAMT_MODE << E1000_FWSM_MODE_SHIFT,
+	/* .check_mng_mode: mac type dependent */
 	/* .check_for_link: media type dependent */
 	.cleanup_led = e1000e_cleanup_led_generic,
 	.clear_hw_cntrs = e1000_clear_hw_cntrs_82571,
 	.get_bus_info = e1000e_get_bus_info_pcie,
 	/* .get_link_up_info: media type dependent */
-	.led_on = e1000e_led_on_generic,
+	/* .led_on: mac type dependent */
 	.led_off = e1000e_led_off_generic,
 	.update_mc_addr_list = e1000_update_mc_addr_list_82571,
 	.reset_hw = e1000_reset_hw_82571,
@ -1312,6 +1405,22 @@ static struct e1000_phy_operations e82_phy_ops_m88 = {
.write_phy_reg = e1000e_write_phy_reg_m88, .write_phy_reg = e1000e_write_phy_reg_m88,
}; };
static struct e1000_phy_operations e82_phy_ops_bm = {
.acquire_phy = e1000_get_hw_semaphore_82571,
.check_reset_block = e1000e_check_reset_block_generic,
.commit_phy = e1000e_phy_sw_reset,
.force_speed_duplex = e1000e_phy_force_speed_duplex_m88,
.get_cfg_done = e1000e_get_cfg_done,
.get_cable_length = e1000e_get_cable_length_m88,
.get_phy_info = e1000e_get_phy_info_m88,
.read_phy_reg = e1000e_read_phy_reg_bm2,
.release_phy = e1000_put_hw_semaphore_82571,
.reset_phy = e1000e_phy_hw_reset_generic,
.set_d0_lplu_state = e1000_set_d0_lplu_state_82571,
.set_d3_lplu_state = e1000e_set_d3_lplu_state,
.write_phy_reg = e1000e_write_phy_reg_bm2,
};
static struct e1000_nvm_operations e82571_nvm_ops = { static struct e1000_nvm_operations e82571_nvm_ops = {
.acquire_nvm = e1000_acquire_nvm_82571, .acquire_nvm = e1000_acquire_nvm_82571,
.read_nvm = e1000e_read_nvm_eerd, .read_nvm = e1000e_read_nvm_eerd,
@ -1375,3 +1484,21 @@ struct e1000_info e1000_82573_info = {
.nvm_ops = &e82571_nvm_ops, .nvm_ops = &e82571_nvm_ops,
}; };
struct e1000_info e1000_82574_info = {
.mac = e1000_82574,
.flags = FLAG_HAS_HW_VLAN_FILTER
| FLAG_HAS_MSIX
| FLAG_HAS_JUMBO_FRAMES
| FLAG_HAS_WOL
| FLAG_APME_IN_CTRL3
| FLAG_RX_CSUM_ENABLED
| FLAG_HAS_SMART_POWER_DOWN
| FLAG_HAS_AMT
| FLAG_HAS_CTRLEXT_ON_LOAD,
.pba = 20,
.get_variants = e1000_get_variants_82571,
.mac_ops = &e82571_mac_ops,
.phy_ops = &e82_phy_ops_bm,
.nvm_ops = &e82571_nvm_ops,
};


@ -71,9 +71,11 @@
#define E1000_CTRL_EXT_RO_DIS 0x00020000 /* Relaxed Ordering disable */ #define E1000_CTRL_EXT_RO_DIS 0x00020000 /* Relaxed Ordering disable */
#define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000 #define E1000_CTRL_EXT_LINK_MODE_MASK 0x00C00000
#define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000 #define E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES 0x00C00000
#define E1000_CTRL_EXT_EIAME 0x01000000
#define E1000_CTRL_EXT_DRV_LOAD 0x10000000 /* Driver loaded bit for FW */ #define E1000_CTRL_EXT_DRV_LOAD 0x10000000 /* Driver loaded bit for FW */
#define E1000_CTRL_EXT_IAME 0x08000000 /* Interrupt acknowledge Auto-mask */ #define E1000_CTRL_EXT_IAME 0x08000000 /* Interrupt acknowledge Auto-mask */
#define E1000_CTRL_EXT_INT_TIMER_CLR 0x20000000 /* Clear Interrupt timers after IMS clear */ #define E1000_CTRL_EXT_INT_TIMER_CLR 0x20000000 /* Clear Interrupt timers after IMS clear */
#define E1000_CTRL_EXT_PBA_CLR 0x80000000 /* PBA Clear */
/* Receive Descriptor bit definitions */ /* Receive Descriptor bit definitions */
#define E1000_RXD_STAT_DD 0x01 /* Descriptor Done */ #define E1000_RXD_STAT_DD 0x01 /* Descriptor Done */
@ -299,6 +301,7 @@
#define E1000_RXCSUM_IPPCSE 0x00001000 /* IP payload checksum enable */ #define E1000_RXCSUM_IPPCSE 0x00001000 /* IP payload checksum enable */
/* Header split receive */ /* Header split receive */
#define E1000_RFCTL_ACK_DIS 0x00001000
#define E1000_RFCTL_EXTEN 0x00008000 #define E1000_RFCTL_EXTEN 0x00008000
#define E1000_RFCTL_IPV6_EX_DIS 0x00010000 #define E1000_RFCTL_IPV6_EX_DIS 0x00010000
#define E1000_RFCTL_NEW_IPV6_EXT_DIS 0x00020000 #define E1000_RFCTL_NEW_IPV6_EXT_DIS 0x00020000
@ -363,6 +366,11 @@
#define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */ #define E1000_ICR_RXDMT0 0x00000010 /* Rx desc min. threshold (0) */
#define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */ #define E1000_ICR_RXT0 0x00000080 /* Rx timer intr (ring 0) */
#define E1000_ICR_INT_ASSERTED 0x80000000 /* If this bit asserted, the driver should claim the interrupt */ #define E1000_ICR_INT_ASSERTED 0x80000000 /* If this bit asserted, the driver should claim the interrupt */
#define E1000_ICR_RXQ0 0x00100000 /* Rx Queue 0 Interrupt */
#define E1000_ICR_RXQ1 0x00200000 /* Rx Queue 1 Interrupt */
#define E1000_ICR_TXQ0 0x00400000 /* Tx Queue 0 Interrupt */
#define E1000_ICR_TXQ1 0x00800000 /* Tx Queue 1 Interrupt */
#define E1000_ICR_OTHER 0x01000000 /* Other Interrupts */
/* /*
* This defines the bits that are set in the Interrupt Mask * This defines the bits that are set in the Interrupt Mask
@ -386,6 +394,11 @@
#define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */ #define E1000_IMS_RXSEQ E1000_ICR_RXSEQ /* Rx sequence error */
#define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */ #define E1000_IMS_RXDMT0 E1000_ICR_RXDMT0 /* Rx desc min. threshold */
#define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */ #define E1000_IMS_RXT0 E1000_ICR_RXT0 /* Rx timer intr */
#define E1000_IMS_RXQ0 E1000_ICR_RXQ0 /* Rx Queue 0 Interrupt */
#define E1000_IMS_RXQ1 E1000_ICR_RXQ1 /* Rx Queue 1 Interrupt */
#define E1000_IMS_TXQ0 E1000_ICR_TXQ0 /* Tx Queue 0 Interrupt */
#define E1000_IMS_TXQ1 E1000_ICR_TXQ1 /* Tx Queue 1 Interrupt */
#define E1000_IMS_OTHER E1000_ICR_OTHER /* Other Interrupts */
/* Interrupt Cause Set */ /* Interrupt Cause Set */
#define E1000_ICS_LSC E1000_ICR_LSC /* Link Status Change */ #define E1000_ICS_LSC E1000_ICR_LSC /* Link Status Change */
@ -505,6 +518,7 @@
#define NWAY_LPAR_ASM_DIR 0x0800 /* LP Asymmetric Pause Direction bit */ #define NWAY_LPAR_ASM_DIR 0x0800 /* LP Asymmetric Pause Direction bit */
/* Autoneg Expansion Register */ /* Autoneg Expansion Register */
#define NWAY_ER_LP_NWAY_CAPS 0x0001 /* LP has Auto Neg Capability */
/* 1000BASE-T Control Register */ /* 1000BASE-T Control Register */
#define CR_1000T_HD_CAPS 0x0100 /* Advertise 1000T HD capability */ #define CR_1000T_HD_CAPS 0x0100 /* Advertise 1000T HD capability */
@ -540,6 +554,7 @@
#define E1000_EECD_DO 0x00000008 /* NVM Data Out */ #define E1000_EECD_DO 0x00000008 /* NVM Data Out */
#define E1000_EECD_REQ 0x00000040 /* NVM Access Request */ #define E1000_EECD_REQ 0x00000040 /* NVM Access Request */
#define E1000_EECD_GNT 0x00000080 /* NVM Access Grant */ #define E1000_EECD_GNT 0x00000080 /* NVM Access Grant */
#define E1000_EECD_PRES 0x00000100 /* NVM Present */
#define E1000_EECD_SIZE 0x00000200 /* NVM Size (0=64 word 1=256 word) */ #define E1000_EECD_SIZE 0x00000200 /* NVM Size (0=64 word 1=256 word) */
/* NVM Addressing bits based on type (0-small, 1-large) */ /* NVM Addressing bits based on type (0-small, 1-large) */
#define E1000_EECD_ADDR_BITS 0x00000400 #define E1000_EECD_ADDR_BITS 0x00000400

Some files were not shown because too many files changed in this diff.