Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net

Conflicts:
	drivers/net/ethernet/emulex/benet/be.h
	drivers/net/netconsole.c
	net/bridge/br_private.h

Three mostly trivial conflicts.

The net/bridge/br_private.h conflict was a function signature (argument
addition) change overlapping with the extern removals from Joe Perches.
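
As a rough sketch of that conflict shape (the identifiers below are illustrative, not the actual bridge prototypes): one side added an argument to a declaration while the other dropped the extern keyword, and the resolution keeps both changes:

	/* before: extern int br_example_update(struct net_bridge *br);
	 * one branch added a second argument, the other removed "extern";
	 * the merged declaration keeps both changes:
	 */
	int br_example_update(struct net_bridge *br, bool notify);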

In drivers/net/netconsole.c we had one change adjusting a printk message
whilst another changed "printk(KERN_INFO" into "pr_info(".
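The resolution takes the reworded message in its pr_info() form; roughly (the actual netconsole string is not reproduced here):

	/* one side:   printk(KERN_INFO "netconsole: reworded message\n");
	 * other side: pr_info("netconsole: original message\n");
	 * merged result:
	 */
	pr_info("netconsole: reworded message\n");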

Lastly, the emulex change was a new inline function addition overlapping
with Joe Perches's extern removals.
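The inline in question can be seen in the be.h hunk further down; it is simply kept alongside the now extern-less prototypes:

	static inline int fw_major_num(const char *fw_ver)
	{
		int fw_major = 0;

		sscanf(fw_ver, "%d.", &fw_major);
		return fw_major;
	}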

Signed-off-by: David S. Miller <davem@davemloft.net>
Committed by: David S. Miller, 2013-11-04 13:48:30 -05:00
Parents: f421436a59 be408cd3e1
Commit: 394efd19d5
189 changed files: 1496 additions and 1169 deletions


@ -18,8 +18,8 @@ Introduction
Datagram Congestion Control Protocol (DCCP) is an unreliable, connection
oriented protocol designed to solve issues present in UDP and TCP, particularly
for real-time and multimedia (streaming) traffic.
It divides into a base protocol (RFC 4340) and plugable congestion control
modules called CCIDs. Like plugable TCP congestion control, at least one CCID
It divides into a base protocol (RFC 4340) and pluggable congestion control
modules called CCIDs. Like pluggable TCP congestion control, at least one CCID
needs to be enabled in order for the protocol to function properly. In the Linux
implementation, this is the TCP-like CCID2 (RFC 4341). Additional CCIDs, such as
the TCP-friendly CCID3 (RFC 4342), are optional.


@ -103,7 +103,7 @@ Additional Configurations
PRO/100 Family of Adapters is e100.
As an example, if you install the e100 driver for two PRO/100 adapters
(eth0 and eth1), add the following to a configuraton file in /etc/modprobe.d/
(eth0 and eth1), add the following to a configuration file in /etc/modprobe.d/
alias eth0 e100
alias eth1 e100


@ -4,7 +4,7 @@
Introduction
============
The IEEE 802.15.4 working group focuses on standartization of bottom
The IEEE 802.15.4 working group focuses on standardization of bottom
two layers: Medium Access Control (MAC) and Physical (PHY). And there
are mainly two options available for upper layers:
- ZigBee - proprietary protocol from ZigBee Alliance
@ -66,7 +66,7 @@ net_device, with .type = ARPHRD_IEEE802154. Data is exchanged with socket family
code via plain sk_buffs. On skb reception skb->cb must contain additional
info as described in the struct ieee802154_mac_cb. During packet transmission
the skb->cb is used to provide additional data to device's header_ops->create
function. Be aware, that this data can be overriden later (when socket code
function. Be aware that this data can be overridden later (when socket code
submits skb to qdisc), so if you need something from that cb later, you should
store info in the skb->data on your own.


@ -197,7 +197,7 @@ state information because the file format is subject to change. It is
implemented to provide extra debug information to help diagnose
problems.) Users should use the netlink API.
/proc/net/pppol2tp is also provided for backwards compaibility with
/proc/net/pppol2tp is also provided for backwards compatibility with
the original pppol2tp driver. It lists information about L2TPv2
tunnels and sessions only. Its use is discouraged.


@ -4,23 +4,23 @@ Information you need to know about netdev
Q: What is netdev?
A: It is a mailing list for all network related linux stuff. This includes
A: It is a mailing list for all network-related Linux stuff. This includes
anything found under net/ (i.e. core code like IPv6) and drivers/net
(i.e. hardware specific drivers) in the linux source tree.
(i.e. hardware specific drivers) in the Linux source tree.
Note that some subsystems (e.g. wireless drivers) which have a high volume
of traffic have their own specific mailing lists.
The netdev list is managed (like many other linux mailing lists) through
The netdev list is managed (like many other Linux mailing lists) through
VGER ( http://vger.kernel.org/ ) and archives can be found below:
http://marc.info/?l=linux-netdev
http://www.spinics.net/lists/netdev/
Aside from subsystems like that mentioned above, all network related linux
development (i.e. RFC, review, comments, etc) takes place on netdev.
Aside from subsystems like that mentioned above, all network-related Linux
development (i.e. RFC, review, comments, etc.) takes place on netdev.
Q: How do the changes posted to netdev make their way into linux?
Q: How do the changes posted to netdev make their way into Linux?
A: There are always two trees (git repositories) in play. Both are driven
by David Miller, the main network maintainer. There is the "net" tree,
@ -35,7 +35,7 @@ A: There are always two trees (git repositories) in play. Both are driven
Q: How often do changes from these trees make it to the mainline Linus tree?
A: To understand this, you need to know a bit of background information
on the cadence of linux development. Each new release starts off with
on the cadence of Linux development. Each new release starts off with
a two week "merge window" where the main maintainers feed their new
stuff to Linus for merging into the mainline tree. After the two weeks,
the merge window is closed, and it is called/tagged "-rc1". No new
@ -46,7 +46,7 @@ A: To understand this, you need to know a bit of background information
things are in a state of churn), and a week after the last vX.Y-rcN
was done, the official "vX.Y" is released.
Relating that to netdev: At the beginning of the 2 week merge window,
Relating that to netdev: At the beginning of the 2-week merge window,
the net-next tree will be closed - no new changes/features. The
accumulated new content of the past ~10 weeks will be passed onto
mainline/Linus via a pull request for vX.Y -- at the same time,
@ -59,16 +59,16 @@ A: To understand this, you need to know a bit of background information
IMPORTANT: Do not send new net-next content to netdev during the
period during which net-next tree is closed.
Shortly after the two weeks have passed, (and vX.Y-rc1 is released) the
Shortly after the two weeks have passed (and vX.Y-rc1 is released), the
tree for net-next reopens to collect content for the next (vX.Y+1) release.
If you aren't subscribed to netdev and/or are simply unsure if net-next
has re-opened yet, simply check the net-next git repository link above for
any new networking related commits.
any new networking-related commits.
The "net" tree continues to collect fixes for the vX.Y content, and
is fed back to Linus at regular (~weekly) intervals. Meaning that the
focus for "net" is on stablilization and bugfixes.
focus for "net" is on stabilization and bugfixes.
Finally, the vX.Y gets released, and the whole cycle starts over.
@ -217,7 +217,7 @@ A: Attention to detail. Re-read your own work as if you were the
to why it happens, and then if necessary, explain why the fix proposed
is the best way to get things done. Don't mangle whitespace, and as
is common, don't mis-indent function arguments that span multiple lines.
If it is your 1st patch, mail it to yourself so you can test apply
If it is your first patch, mail it to yourself so you can test apply
it to an unpatched tree to confirm infrastructure didn't mangle it.
Finally, go back and read Documentation/SubmittingPatches to be


@ -45,7 +45,7 @@ processing.
Conversion of the reception path involves calling poll() on the file
descriptor, once the socket is readable the frames from the ring are
processsed in order until no more messages are available, as indicated by
processed in order until no more messages are available, as indicated by
a status word in the frame header.
On kernel side, in order to make use of memory mapped I/O on receive, the
@ -56,7 +56,7 @@ Dumps of kernel databases automatically support memory mapped I/O.
Conversion of the transmit path involves changing message construction to
use memory from the TX ring instead of (usually) a buffer declared on the
stack and setting up the frame header approriately. Optionally poll() can
stack and setting up the frame header appropriately. Optionally poll() can
be used to wait for free frames in the TX ring.
Structured and definitions for using memory mapped I/O are contained in
@ -231,7 +231,7 @@ Ring setup:
if (setsockopt(fd, NETLINK_TX_RING, &req, sizeof(req)) < 0)
exit(1)
/* Calculate size of each invididual ring */
/* Calculate size of each individual ring */
ring_size = req.nm_block_nr * req.nm_block_size;
/* Map RX/TX rings. The TX ring is located after the RX ring */


@ -89,8 +89,8 @@ packets. The name 'carrier' and the inversion are historical, think of
it as lower layer.
Note that for certain kind of soft-devices, which are not managing any
real hardware, there is possible to set this bit from userpsace.
One should use TVL IFLA_CARRIER to do so.
real hardware, it is possible to set this bit from userspace. One
should use TVL IFLA_CARRIER to do so.
netif_carrier_ok() can be used to query that bit.


@ -144,7 +144,7 @@ An overview of the RxRPC protocol:
(*) Calls use ACK packets to handle reliability. Data packets are also
explicitly sequenced per call.
(*) There are two types of positive acknowledgement: hard-ACKs and soft-ACKs.
(*) There are two types of positive acknowledgment: hard-ACKs and soft-ACKs.
A hard-ACK indicates to the far side that all the data received to a point
has been received and processed; a soft-ACK indicates that the data has
been received but may yet be discarded and re-requested. The sender may


@ -160,7 +160,7 @@ Where:
o pmt: core has the embedded power module (optional).
o force_sf_dma_mode: force DMA to use the Store and Forward mode
instead of the Threshold.
o force_thresh_dma_mode: force DMA to use the Shreshold mode other than
o force_thresh_dma_mode: force DMA to use the Threshold mode other than
the Store and Forward mode.
o riwt_off: force to disable the RX watchdog feature and switch to NAPI mode.
o fix_mac_speed: this callback is used for modifying some syscfg registers
@ -175,7 +175,7 @@ Where:
registers.
o custom_cfg/custom_data: this is a custom configuration that can be passed
while initializing the resources.
o bsp_priv: another private poiter.
o bsp_priv: another private pointer.
For MDIO bus The we have:
@ -271,7 +271,7 @@ reset procedure etc).
o dwmac1000_dma.c: dma functions for the GMAC chip;
o dwmac1000.h: specific header file for the GMAC;
o dwmac100_core: MAC 100 core and dma code;
o dwmac100_dma.c: dma funtions for the MAC chip;
o dwmac100_dma.c: dma functions for the MAC chip;
o dwmac1000.h: specific header file for the MAC;
o dwmac_lib.c: generic DMA functions shared among chips;
o enh_desc.c: functions for handling enhanced descriptors;
@ -364,4 +364,4 @@ Auto-negotiated Link Parter Ability.
10) TODO:
o XGMAC is not supported.
o Complete the TBI & RTBI support.
o extened VLAN support for 3.70a SYNP GMAC.
o extend VLAN support for 3.70a SYNP GMAC.


@ -68,7 +68,7 @@ Module parameters
There are several parameters which may be provided to the driver when
its module is loaded. These are usually placed in /etc/modprobe.d/*.conf
configuretion files. Example:
configuration files. Example:
options 3c59x debug=3 rx_copybreak=300
@ -178,7 +178,7 @@ max_interrupt_work=N
The driver's interrupt service routine can handle many receive and
transmit packets in a single invocation. It does this in a loop.
The value of max_interrupt_work governs how mnay times the interrupt
The value of max_interrupt_work governs how many times the interrupt
service routine will loop. The default value is 32 loops. If this
is exceeded the interrupt service routine gives up and generates a
warning message "eth0: Too much work in interrupt".


@ -105,7 +105,7 @@ reduced by the following measures or a combination thereof:
later.
The lapb module interface was modified to support this. Its
data_indication() method should now transparently pass the
netif_rx() return value to the (lapb mopdule) caller.
netif_rx() return value to the (lapb module) caller.
(2) Drivers for kernel versions 2.2.x should always check the global
variable netdev_dropping when a new frame is received. The driver
should only call netif_rx() if netdev_dropping is zero. Otherwise


@ -1009,6 +1009,7 @@ ARM/Marvell Armada 370 and Armada XP SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Gregory Clement <gregory.clement@free-electrons.com>
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-mvebu/
@ -1016,6 +1017,7 @@ F: arch/arm/mach-mvebu/
ARM/Marvell Dove/Kirkwood/MV78xx0/Orion SOC support
M: Jason Cooper <jason@lakedaemon.net>
M: Andrew Lunn <andrew@lunn.ch>
M: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-dove/
@ -1148,6 +1150,13 @@ F: drivers/net/ethernet/i825xx/ether1*
F: drivers/net/ethernet/seeq/ether3*
F: drivers/scsi/arm/
ARM/Rockchip SoC support
M: Heiko Stuebner <heiko@sntech.de>
L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers)
S: Maintained
F: arch/arm/mach-rockchip/
F: drivers/*/*rockchip*
ARM/SHARK MACHINE SUPPORT
M: Alexander Schulz <alex@shark-linux.de>
W: http://www.shark-linux.de/shark.html
@ -2719,6 +2728,8 @@ T: git git://git.linaro.org/people/sumitsemwal/linux-dma-buf.git
DMA GENERIC OFFLOAD ENGINE SUBSYSTEM
M: Vinod Koul <vinod.koul@intel.com>
M: Dan Williams <dan.j.williams@intel.com>
L: dmaengine@vger.kernel.org
Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
S: Supported
F: drivers/dma/
F: include/linux/dma*
@ -2822,7 +2833,7 @@ M: Terje Bergström <tbergstrom@nvidia.com>
L: dri-devel@lists.freedesktop.org
L: linux-tegra@vger.kernel.org
T: git git://anongit.freedesktop.org/tegra/linux.git
S: Maintained
S: Supported
F: drivers/gpu/host1x/
F: include/uapi/drm/tegra_drm.h
F: Documentation/devicetree/bindings/gpu/nvidia,tegra20-host1x.txt
@ -4358,7 +4369,10 @@ F: arch/x86/kernel/microcode_intel.c
INTEL I/OAT DMA DRIVER
M: Dan Williams <dan.j.williams@intel.com>
S: Maintained
M: Dave Jiang <dave.jiang@intel.com>
L: dmaengine@vger.kernel.org
Q: https://patchwork.kernel.org/project/linux-dmaengine/list/
S: Supported
F: drivers/dma/ioat*
INTEL IOMMU (VT-d)
@ -8310,14 +8324,72 @@ L: linux-media@vger.kernel.org
S: Maintained
F: drivers/media/rc/ttusbir.c
TEGRA SUPPORT
TEGRA ARCHITECTURE SUPPORT
M: Stephen Warren <swarren@wwwdotorg.org>
M: Thierry Reding <thierry.reding@gmail.com>
L: linux-tegra@vger.kernel.org
Q: http://patchwork.ozlabs.org/project/linux-tegra/list/
T: git git://git.kernel.org/pub/scm/linux/kernel/git/swarren/linux-tegra.git
S: Supported
N: [^a-z]tegra
TEGRA ASOC DRIVER
M: Stephen Warren <swarren@wwwdotorg.org>
S: Supported
F: sound/soc/tegra/
TEGRA CLOCK DRIVER
M: Peter De Schrijver <pdeschrijver@nvidia.com>
M: Prashant Gaikwad <pgaikwad@nvidia.com>
S: Supported
F: drivers/clk/tegra/
TEGRA DMA DRIVER
M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/dma/tegra20-apb-dma.c
TEGRA GPIO DRIVER
M: Stephen Warren <swarren@wwwdotorg.org>
S: Supported
F: drivers/gpio/gpio-tegra.c
TEGRA I2C DRIVER
M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/i2c/busses/i2c-tegra.c
TEGRA IOMMU DRIVERS
M: Hiroshi Doyu <hdoyu@nvidia.com>
S: Supported
F: drivers/iommu/tegra*
TEGRA KBC DRIVER
M: Rakesh Iyer <riyer@nvidia.com>
M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/input/keyboard/tegra-kbc.c
TEGRA PINCTRL DRIVER
M: Stephen Warren <swarren@wwwdotorg.org>
S: Supported
F: drivers/pinctrl/pinctrl-tegra*
TEGRA PWM DRIVER
M: Thierry Reding <thierry.reding@gmail.com>
S: Supported
F: drivers/pwm/pwm-tegra.c
TEGRA SERIAL DRIVER
M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/tty/serial/serial-tegra.c
TEGRA SPI DRIVER
M: Laxman Dewangan <ldewangan@nvidia.com>
S: Supported
F: drivers/spi/spi-tegra*
TEHUTI ETHERNET DRIVER
M: Andy Gospodarek <andy@greyhouse.net>
L: netdev@vger.kernel.org
@ -8853,61 +8925,14 @@ W: http://pegasus2.sourceforge.net/
S: Maintained
F: drivers/net/usb/rtl8150.c
USB SERIAL BELKIN F5U103 DRIVER
M: William Greathouse <wgreathouse@smva.com>
USB SERIAL SUBSYSTEM
M: Johan Hovold <jhovold@gmail.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/serial/belkin_sa.*
USB SERIAL CYPRESS M8 DRIVER
M: Lonnie Mendez <dignome@gmail.com>
L: linux-usb@vger.kernel.org
S: Maintained
W: http://geocities.com/i0xox0i
W: http://firstlight.net/cvs
F: drivers/usb/serial/cypress_m8.*
USB SERIAL CYBERJACK DRIVER
M: Matthias Bruestle and Harald Welte <support@reiner-sct.com>
W: http://www.reiner-sct.de/support/treiber_cyberjack.php
S: Maintained
F: drivers/usb/serial/cyberjack.c
USB SERIAL DIGI ACCELEPORT DRIVER
M: Peter Berger <pberger@brimson.com>
M: Al Borchers <alborchers@steinerpoint.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/serial/digi_acceleport.c
USB SERIAL DRIVER
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
S: Supported
F: Documentation/usb/usb-serial.txt
F: drivers/usb/serial/generic.c
F: drivers/usb/serial/usb-serial.c
F: drivers/usb/serial/
F: include/linux/usb/serial.h
USB SERIAL EMPEG EMPEG-CAR MARK I/II DRIVER
M: Gary Brubaker <xavyer@ix.netcom.com>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/serial/empeg.c
USB SERIAL KEYSPAN DRIVER
M: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
L: linux-usb@vger.kernel.org
S: Maintained
F: drivers/usb/serial/*keyspan*
USB SERIAL WHITEHEAT DRIVER
M: Support Department <support@connecttech.com>
L: linux-usb@vger.kernel.org
W: http://www.connecttech.com
S: Supported
F: drivers/usb/serial/whiteheat*
USB SMSC75XX ETHERNET DRIVER
M: Steve Glendinning <steve.glendinning@shawell.net>
L: netdev@vger.kernel.org


@ -1,7 +1,7 @@
VERSION = 3
PATCHLEVEL = 12
SUBLEVEL = 0
EXTRAVERSION = -rc6
EXTRAVERSION =
NAME = One Giant Leap for Frogkind
# *DOCUMENTATION*


@ -17,7 +17,7 @@
#include <asm/pgalloc.h>
#include <asm/mmu.h>
static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long address)
static int handle_vmalloc_fault(unsigned long address)
{
/*
* Synchronize this task's top level page-table
@ -27,7 +27,7 @@ static int handle_vmalloc_fault(struct mm_struct *mm, unsigned long address)
pud_t *pud, *pud_k;
pmd_t *pmd, *pmd_k;
pgd = pgd_offset_fast(mm, address);
pgd = pgd_offset_fast(current->active_mm, address);
pgd_k = pgd_offset_k(address);
if (!pgd_present(*pgd_k))
@ -72,7 +72,7 @@ void do_page_fault(struct pt_regs *regs, unsigned long address)
* nothing more.
*/
if (address >= VMALLOC_START && address <= VMALLOC_END) {
ret = handle_vmalloc_fault(mm, address);
ret = handle_vmalloc_fault(address);
if (unlikely(ret))
goto bad_area_nosemaphore;
else


@ -9,11 +9,6 @@
model = "ARM Integrator/CP";
compatible = "arm,integrator-cp";
aliases {
arm,timer-primary = &timer2;
arm,timer-secondary = &timer1;
};
chosen {
bootargs = "root=/dev/ram0 console=ttyAMA0,38400n8 earlyprintk";
};
@ -24,14 +19,18 @@
};
timer0: timer@13000000 {
/* TIMER0 runs @ 25MHz */
compatible = "arm,integrator-cp-timer";
status = "disabled";
};
timer1: timer@13000100 {
/* TIMER1 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
};
timer2: timer@13000200 {
/* TIMER2 runs @ 1MHz */
compatible = "arm,integrator-cp-timer";
};


@ -971,11 +971,11 @@ static const struct mips_perf_event mipsxx74Kcore_cache_map
[C(LL)] = {
[C(OP_READ)] = {
[C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P },
[C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
[C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P },
},
[C(OP_WRITE)] = {
[C(RESULT_ACCESS)] = { 0x1c, CNTR_ODD, P },
[C(RESULT_MISS)] = { 0x1d, CNTR_EVEN | CNTR_ODD, P },
[C(RESULT_MISS)] = { 0x1d, CNTR_EVEN, P },
},
},
[C(ITLB)] = {


@ -473,7 +473,7 @@ static void __init fill_ipi_map(void)
{
int cpu;
for (cpu = 0; cpu < NR_CPUS; cpu++) {
for (cpu = 0; cpu < nr_cpu_ids; cpu++) {
fill_ipi_map1(gic_resched_int_base, cpu, GIC_CPU_INT1);
fill_ipi_map1(gic_call_int_base, cpu, GIC_CPU_INT2);
}
@ -574,8 +574,9 @@ void __init arch_init_irq(void)
/* FIXME */
int i;
#if defined(CONFIG_MIPS_MT_SMP)
gic_call_int_base = GIC_NUM_INTRS - NR_CPUS;
gic_resched_int_base = gic_call_int_base - NR_CPUS;
gic_call_int_base = GIC_NUM_INTRS -
(NR_CPUS - nr_cpu_ids) * 2 - nr_cpu_ids;
gic_resched_int_base = gic_call_int_base - nr_cpu_ids;
fill_ipi_map();
#endif
gic_init(GIC_BASE_ADDR, GIC_ADDRSPACE_SZ, gic_intr_map,
@ -599,7 +600,7 @@ void __init arch_init_irq(void)
printk("CPU%d: status register now %08x\n", smp_processor_id(), read_c0_status());
write_c0_status(0x1100dc00);
printk("CPU%d: status register frc %08x\n", smp_processor_id(), read_c0_status());
for (i = 0; i < NR_CPUS; i++) {
for (i = 0; i < nr_cpu_ids; i++) {
arch_init_ipiirq(MIPS_GIC_IRQ_BASE +
GIC_RESCHED_INT(i), &irq_resched);
arch_init_ipiirq(MIPS_GIC_IRQ_BASE +


@ -126,7 +126,7 @@ static int rt_timer_probe(struct platform_device *pdev)
return -ENOENT;
}
rt->membase = devm_request_and_ioremap(&pdev->dev, res);
rt->membase = devm_ioremap_resource(&pdev->dev, res);
if (IS_ERR(rt->membase))
return PTR_ERR(rt->membase);


@ -195,6 +195,8 @@ common_stext:
ldw MEM_PDC_HI(%r0),%r6
depd %r6, 31, 32, %r3 /* move to upper word */
mfctl %cr30,%r6 /* PCX-W2 firmware bug */
ldo PDC_PSW(%r0),%arg0 /* 21 */
ldo PDC_PSW_SET_DEFAULTS(%r0),%arg1 /* 2 */
ldo PDC_PSW_WIDE_BIT(%r0),%arg2 /* 2 */
@ -203,6 +205,8 @@ common_stext:
copy %r0,%arg3
stext_pdc_ret:
mtctl %r6,%cr30 /* restore task thread info */
/* restore rfi target address*/
ldd TI_TASK-THREAD_SZ_ALGN(%sp), %r10
tophys_r1 %r10


@ -40,9 +40,11 @@ static ssize_t exitcode_proc_write(struct file *file,
const char __user *buffer, size_t count, loff_t *pos)
{
char *end, buf[sizeof("nnnnn\0")];
size_t size;
int tmp;
if (copy_from_user(buf, buffer, count))
size = min(count, sizeof(buf));
if (copy_from_user(buf, buffer, size))
return -EFAULT;
tmp = simple_strtol(buf, &end, 0);


@ -128,7 +128,8 @@ do { \
do { \
typedef typeof(var) pao_T__; \
const int pao_ID__ = (__builtin_constant_p(val) && \
((val) == 1 || (val) == -1)) ? (val) : 0; \
((val) == 1 || (val) == -1)) ? \
(int)(val) : 0; \
if (0) { \
pao_T__ pao_tmp__; \
pao_tmp__ = (val); \


@ -1276,16 +1276,16 @@ void perf_events_lapic_init(void)
static int __kprobes
perf_event_nmi_handler(unsigned int cmd, struct pt_regs *regs)
{
int ret;
u64 start_clock;
u64 finish_clock;
int ret;
if (!atomic_read(&active_events))
return NMI_DONE;
start_clock = local_clock();
start_clock = sched_clock();
ret = x86_pmu.handle_irq(regs);
finish_clock = local_clock();
finish_clock = sched_clock();
perf_sample_event_took(finish_clock - start_clock);


@ -609,7 +609,7 @@ static struct dentry *d_kvm_debug;
struct dentry *kvm_init_debugfs(void)
{
d_kvm_debug = debugfs_create_dir("kvm", NULL);
d_kvm_debug = debugfs_create_dir("kvm-guest", NULL);
if (!d_kvm_debug)
printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");


@ -113,10 +113,10 @@ static int __kprobes nmi_handle(unsigned int type, struct pt_regs *regs, bool b2
u64 before, delta, whole_msecs;
int remainder_ns, decimal_msecs, thishandled;
before = local_clock();
before = sched_clock();
thishandled = a->handler(type, regs);
handled += thishandled;
delta = local_clock() - before;
delta = sched_clock() - before;
trace_nmi_handler(a->handler, (int)delta, thishandled);
if (delta < nmi_longest_ns)


@ -1122,7 +1122,7 @@ ENDPROC(fast_syscall_spill_registers)
* a3: exctable, original value in excsave1
*/
fast_syscall_spill_registers_fixup:
ENTRY(fast_syscall_spill_registers_fixup)
rsr a2, windowbase # get current windowbase (a2 is saved)
xsr a0, depc # restore depc and a0
@ -1134,22 +1134,26 @@ fast_syscall_spill_registers_fixup:
*/
xsr a3, excsave1 # get spill-mask
slli a2, a3, 1 # shift left by one
slli a3, a3, 1 # shift left by one
slli a3, a2, 32-WSBITS
src a2, a2, a3 # a1 = xxwww1yyxxxwww1yy......
slli a2, a3, 32-WSBITS
src a2, a3, a2 # a2 = xxwww1yyxxxwww1yy......
wsr a2, windowstart # set corrected windowstart
rsr a3, excsave1
l32i a2, a3, EXC_TABLE_DOUBLE_SAVE # restore a2
l32i a3, a3, EXC_TABLE_PARAM # original WB (in user task)
srli a3, a3, 1
rsr a2, excsave1
l32i a2, a2, EXC_TABLE_DOUBLE_SAVE # restore a2
xsr a2, excsave1
s32i a3, a2, EXC_TABLE_DOUBLE_SAVE # save a3
l32i a3, a2, EXC_TABLE_PARAM # original WB (in user task)
xsr a2, excsave1
/* Return to the original (user task) WINDOWBASE.
* We leave the following frame behind:
* a0, a1, a2 same
* a3: trashed (saved in excsave_1)
* a3: trashed (saved in EXC_TABLE_DOUBLE_SAVE)
* depc: depc (we have to return to that address)
* excsave_1: a3
* excsave_1: exctable
*/
wsr a3, windowbase
@ -1159,9 +1163,9 @@ fast_syscall_spill_registers_fixup:
* a0: return address
* a1: used, stack pointer
* a2: kernel stack pointer
* a3: available, saved in EXCSAVE_1
* a3: available
* depc: exception address
* excsave: a3
* excsave: exctable
* Note: This frame might be the same as above.
*/
@ -1181,9 +1185,12 @@ fast_syscall_spill_registers_fixup:
rsr a0, exccause
addx4 a0, a0, a3 # find entry in table
l32i a0, a0, EXC_TABLE_FAST_USER # load handler
l32i a3, a3, EXC_TABLE_DOUBLE_SAVE
jx a0
fast_syscall_spill_registers_fixup_return:
ENDPROC(fast_syscall_spill_registers_fixup)
ENTRY(fast_syscall_spill_registers_fixup_return)
/* When we return here, all registers have been restored (a2: DEPC) */
@ -1191,13 +1198,13 @@ fast_syscall_spill_registers_fixup_return:
/* Restore fixup handler. */
xsr a3, excsave1
movi a2, fast_syscall_spill_registers_fixup
s32i a2, a3, EXC_TABLE_FIXUP
s32i a0, a3, EXC_TABLE_DOUBLE_SAVE
rsr a2, windowbase
s32i a2, a3, EXC_TABLE_PARAM
l32i a2, a3, EXC_TABLE_KSTK
rsr a2, excsave1
s32i a3, a2, EXC_TABLE_DOUBLE_SAVE
movi a3, fast_syscall_spill_registers_fixup
s32i a3, a2, EXC_TABLE_FIXUP
rsr a3, windowbase
s32i a3, a2, EXC_TABLE_PARAM
l32i a2, a2, EXC_TABLE_KSTK
/* Load WB at the time the exception occurred. */
@ -1206,8 +1213,12 @@ fast_syscall_spill_registers_fixup_return:
wsr a3, windowbase
rsync
rsr a3, excsave1
l32i a3, a3, EXC_TABLE_DOUBLE_SAVE
rfde
ENDPROC(fast_syscall_spill_registers_fixup_return)
/*
* spill all registers.


@ -341,7 +341,7 @@ static int setup_frame(int sig, struct k_sigaction *ka, siginfo_t *info,
sp = regs->areg[1];
if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && ! on_sig_stack(sp)) {
if ((ka->sa.sa_flags & SA_ONSTACK) != 0 && sas_ss_flags(sp) == 0) {
sp = current->sas_ss_sp + current->sas_ss_size;
}

Просмотреть файл

@ -737,7 +737,8 @@ static int __init iss_net_setup(char *str)
return 1;
}
if ((new = alloc_bootmem(sizeof new)) == NULL) {
new = alloc_bootmem(sizeof(*new));
if (new == NULL) {
printk("Alloc_bootmem failed\n");
return 1;
}


@ -27,6 +27,14 @@
*/
#define SRC_CR 0x00U
#define SRC_CR_T0_ENSEL BIT(15)
#define SRC_CR_T1_ENSEL BIT(17)
#define SRC_CR_T2_ENSEL BIT(19)
#define SRC_CR_T3_ENSEL BIT(21)
#define SRC_CR_T4_ENSEL BIT(23)
#define SRC_CR_T5_ENSEL BIT(25)
#define SRC_CR_T6_ENSEL BIT(27)
#define SRC_CR_T7_ENSEL BIT(29)
#define SRC_XTALCR 0x0CU
#define SRC_XTALCR_XTALTIMEN BIT(20)
#define SRC_XTALCR_SXTALDIS BIT(19)
@ -543,6 +551,19 @@ void __init nomadik_clk_init(void)
__func__, np->name);
return;
}
/* Set all timers to use the 2.4 MHz TIMCLK */
val = readl(src_base + SRC_CR);
val |= SRC_CR_T0_ENSEL;
val |= SRC_CR_T1_ENSEL;
val |= SRC_CR_T2_ENSEL;
val |= SRC_CR_T3_ENSEL;
val |= SRC_CR_T4_ENSEL;
val |= SRC_CR_T5_ENSEL;
val |= SRC_CR_T6_ENSEL;
val |= SRC_CR_T7_ENSEL;
writel(val, src_base + SRC_CR);
val = readl(src_base + SRC_XTALCR);
pr_info("SXTALO is %s\n",
(val & SRC_XTALCR_SXTALDIS) ? "disabled" : "enabled");


@ -39,8 +39,8 @@ static const struct coreclk_ratio a370_coreclk_ratios[] __initconst = {
};
static const u32 a370_tclk_freqs[] __initconst = {
16600000,
20000000,
166000000,
200000000,
};
static u32 __init a370_get_tclk_freq(void __iomem *sar)


@ -49,7 +49,7 @@
#define SOCFPGA_L4_SP_CLK "l4_sp_clk"
#define SOCFPGA_NAND_CLK "nand_clk"
#define SOCFPGA_NAND_X_CLK "nand_x_clk"
#define SOCFPGA_MMC_CLK "mmc_clk"
#define SOCFPGA_MMC_CLK "sdmmc_clk"
#define SOCFPGA_DB_CLK "gpio_db_clk"
#define div_mask(width) ((1 << (width)) - 1)


@ -107,7 +107,7 @@ static int icst_set_rate(struct clk_hw *hw, unsigned long rate,
vco = icst_hz_to_vco(icst->params, rate);
icst->rate = icst_hz(icst->params, vco);
vco_set(icst->vcoreg, icst->lockreg, vco);
vco_set(icst->lockreg, icst->vcoreg, vco);
return 0;
}


@ -986,12 +986,12 @@ static int __init acpi_cpufreq_init(void)
{
int ret;
if (acpi_disabled)
return -ENODEV;
/* don't keep reloading if cpufreq_driver exists */
if (cpufreq_get_current_driver())
return 0;
if (acpi_disabled)
return 0;
return -EEXIST;
pr_debug("acpi_cpufreq_init\n");


@ -48,7 +48,7 @@ static inline int32_t div_fp(int32_t x, int32_t y)
}
struct sample {
int core_pct_busy;
int32_t core_pct_busy;
u64 aperf;
u64 mperf;
int freq;
@ -68,7 +68,7 @@ struct _pid {
int32_t i_gain;
int32_t d_gain;
int deadband;
int last_err;
int32_t last_err;
};
struct cpudata {
@ -153,16 +153,15 @@ static inline void pid_d_gain_set(struct _pid *pid, int percent)
pid->d_gain = div_fp(int_tofp(percent), int_tofp(100));
}
static signed int pid_calc(struct _pid *pid, int busy)
static signed int pid_calc(struct _pid *pid, int32_t busy)
{
signed int err, result;
signed int result;
int32_t pterm, dterm, fp_error;
int32_t integral_limit;
err = pid->setpoint - busy;
fp_error = int_tofp(err);
fp_error = int_tofp(pid->setpoint) - busy;
if (abs(err) <= pid->deadband)
if (abs(fp_error) <= int_tofp(pid->deadband))
return 0;
pterm = mul_fp(pid->p_gain, fp_error);
@ -176,8 +175,8 @@ static signed int pid_calc(struct _pid *pid, int busy)
if (pid->integral < -integral_limit)
pid->integral = -integral_limit;
dterm = mul_fp(pid->d_gain, (err - pid->last_err));
pid->last_err = err;
dterm = mul_fp(pid->d_gain, fp_error - pid->last_err);
pid->last_err = fp_error;
result = pterm + mul_fp(pid->integral, pid->i_gain) + dterm;
@ -367,12 +366,13 @@ static int intel_pstate_turbo_pstate(void)
static void intel_pstate_get_min_max(struct cpudata *cpu, int *min, int *max)
{
int max_perf = cpu->pstate.turbo_pstate;
int max_perf_adj;
int min_perf;
if (limits.no_turbo)
max_perf = cpu->pstate.max_pstate;
max_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
*max = clamp_t(int, max_perf,
max_perf_adj = fp_toint(mul_fp(int_tofp(max_perf), limits.max_perf));
*max = clamp_t(int, max_perf_adj,
cpu->pstate.min_pstate, cpu->pstate.turbo_pstate);
min_perf = fp_toint(mul_fp(int_tofp(max_perf), limits.min_perf));
@ -436,8 +436,9 @@ static inline void intel_pstate_calc_busy(struct cpudata *cpu,
struct sample *sample)
{
u64 core_pct;
core_pct = div64_u64(sample->aperf * 100, sample->mperf);
sample->freq = cpu->pstate.max_pstate * core_pct * 1000;
core_pct = div64_u64(int_tofp(sample->aperf * 100),
sample->mperf);
sample->freq = fp_toint(cpu->pstate.max_pstate * core_pct * 1000);
sample->core_pct_busy = core_pct;
}
@ -469,22 +470,19 @@ static inline void intel_pstate_set_sample_time(struct cpudata *cpu)
mod_timer_pinned(&cpu->timer, jiffies + delay);
}
static inline int intel_pstate_get_scaled_busy(struct cpudata *cpu)
static inline int32_t intel_pstate_get_scaled_busy(struct cpudata *cpu)
{
int32_t busy_scaled;
int32_t core_busy, max_pstate, current_pstate;
core_busy = int_tofp(cpu->samples[cpu->sample_ptr].core_pct_busy);
core_busy = cpu->samples[cpu->sample_ptr].core_pct_busy;
max_pstate = int_tofp(cpu->pstate.max_pstate);
current_pstate = int_tofp(cpu->pstate.current_pstate);
busy_scaled = mul_fp(core_busy, div_fp(max_pstate, current_pstate));
return fp_toint(busy_scaled);
return mul_fp(core_busy, div_fp(max_pstate, current_pstate));
}
static inline void intel_pstate_adjust_busy_pstate(struct cpudata *cpu)
{
int busy_scaled;
int32_t busy_scaled;
struct _pid *pid;
signed int ctl = 0;
int steps;


@ -305,6 +305,7 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
edma_alloc_slot(EDMA_CTLR(echan->ch_num),
EDMA_SLOT_ANY);
if (echan->slot[i] < 0) {
kfree(edesc);
dev_err(dev, "Failed to allocate slot\n");
kfree(edesc);
return NULL;
@ -346,6 +347,7 @@ static struct dma_async_tx_descriptor *edma_prep_slave_sg(
ccnt = sg_dma_len(sg) / (acnt * bcnt);
if (ccnt > (SZ_64K - 1)) {
dev_err(dev, "Exceeded max SG segment size\n");
kfree(edesc);
return NULL;
}
cidx = acnt * bcnt;


@ -61,7 +61,7 @@ static int drm_version(struct drm_device *dev, void *data,
/** Ioctl table */
static const struct drm_ioctl_desc drm_ioctls[] = {
DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED),
DRM_IOCTL_DEF(DRM_IOCTL_VERSION, drm_version, DRM_UNLOCKED|DRM_RENDER_ALLOW),
DRM_IOCTL_DEF(DRM_IOCTL_GET_UNIQUE, drm_getunique, 0),
DRM_IOCTL_DEF(DRM_IOCTL_GET_MAGIC, drm_getmagic, 0),
DRM_IOCTL_DEF(DRM_IOCTL_IRQ_BUSID, drm_irq_by_busid, DRM_MASTER|DRM_ROOT_ONLY),


@ -83,8 +83,7 @@ static bool intel_crt_get_hw_state(struct intel_encoder *encoder,
return true;
}
static void intel_crt_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
static unsigned int intel_crt_get_flags(struct intel_encoder *encoder)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crt *crt = intel_encoder_to_crt(encoder);
@ -102,7 +101,25 @@ static void intel_crt_get_config(struct intel_encoder *encoder,
else
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
return flags;
}
static void intel_crt_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder);
}
static void hsw_crt_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
intel_ddi_get_config(encoder, pipe_config);
pipe_config->adjusted_mode.flags &= ~(DRM_MODE_FLAG_PHSYNC |
DRM_MODE_FLAG_NHSYNC |
DRM_MODE_FLAG_PVSYNC |
DRM_MODE_FLAG_NVSYNC);
pipe_config->adjusted_mode.flags |= intel_crt_get_flags(encoder);
}
/* Note: The caller is required to filter out dpms modes not supported by the
@ -799,7 +816,10 @@ void intel_crt_init(struct drm_device *dev)
crt->base.mode_set = intel_crt_mode_set;
crt->base.disable = intel_disable_crt;
crt->base.enable = intel_enable_crt;
crt->base.get_config = intel_crt_get_config;
if (IS_HASWELL(dev))
crt->base.get_config = hsw_crt_get_config;
else
crt->base.get_config = intel_crt_get_config;
if (I915_HAS_HOTPLUG(dev))
crt->base.hpd_pin = HPD_CRT;
if (HAS_DDI(dev))


@ -1249,8 +1249,8 @@ static void intel_ddi_hot_plug(struct intel_encoder *intel_encoder)
intel_dp_check_link_status(intel_dp);
}
static void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config)
{
struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
struct intel_crtc *intel_crtc = to_intel_crtc(encoder->base.crtc);
@ -1268,6 +1268,23 @@ static void intel_ddi_get_config(struct intel_encoder *encoder,
flags |= DRM_MODE_FLAG_NVSYNC;
pipe_config->adjusted_mode.flags |= flags;
switch (temp & TRANS_DDI_BPC_MASK) {
case TRANS_DDI_BPC_6:
pipe_config->pipe_bpp = 18;
break;
case TRANS_DDI_BPC_8:
pipe_config->pipe_bpp = 24;
break;
case TRANS_DDI_BPC_10:
pipe_config->pipe_bpp = 30;
break;
case TRANS_DDI_BPC_12:
pipe_config->pipe_bpp = 36;
break;
default:
break;
}
}
static void intel_ddi_destroy(struct drm_encoder *encoder)


@ -2327,9 +2327,10 @@ static void intel_fdi_normal_train(struct drm_crtc *crtc)
FDI_FE_ERRC_ENABLE);
}
static bool pipe_has_enabled_pch(struct intel_crtc *intel_crtc)
static bool pipe_has_enabled_pch(struct intel_crtc *crtc)
{
return intel_crtc->base.enabled && intel_crtc->config.has_pch_encoder;
return crtc->base.enabled && crtc->active &&
crtc->config.has_pch_encoder;
}
static void ivb_modeset_global_resources(struct drm_device *dev)
@ -2979,6 +2980,48 @@ static void ironlake_pch_transcoder_set_timings(struct intel_crtc *crtc,
I915_READ(VSYNCSHIFT(cpu_transcoder)));
}
static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t temp;
temp = I915_READ(SOUTH_CHICKEN1);
if (temp & FDI_BC_BIFURCATION_SELECT)
return;
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
temp |= FDI_BC_BIFURCATION_SELECT;
DRM_DEBUG_KMS("enabling fdi C rx\n");
I915_WRITE(SOUTH_CHICKEN1, temp);
POSTING_READ(SOUTH_CHICKEN1);
}
static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
switch (intel_crtc->pipe) {
case PIPE_A:
break;
case PIPE_B:
if (intel_crtc->config.fdi_lanes > 2)
WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
else
cpt_enable_fdi_bc_bifurcation(dev);
break;
case PIPE_C:
cpt_enable_fdi_bc_bifurcation(dev);
break;
default:
BUG();
}
}
/*
* Enable PCH resources required for PCH ports:
* - PCH PLLs
@ -2997,6 +3040,9 @@ static void ironlake_pch_enable(struct drm_crtc *crtc)
assert_pch_transcoder_disabled(dev_priv, pipe);
if (IS_IVYBRIDGE(dev))
ivybridge_update_fdi_bc_bifurcation(intel_crtc);
/* Write the TU size bits before fdi link training, so that error
* detection works. */
I915_WRITE(FDI_RX_TUSIZE1(pipe),
@ -4983,6 +5029,22 @@ static bool i9xx_get_pipe_config(struct intel_crtc *crtc,
if (!(tmp & PIPECONF_ENABLE))
return false;
if (IS_G4X(dev) || IS_VALLEYVIEW(dev)) {
switch (tmp & PIPECONF_BPC_MASK) {
case PIPECONF_6BPC:
pipe_config->pipe_bpp = 18;
break;
case PIPECONF_8BPC:
pipe_config->pipe_bpp = 24;
break;
case PIPECONF_10BPC:
pipe_config->pipe_bpp = 30;
break;
default:
break;
}
}
intel_get_pipe_timings(crtc, pipe_config);
i9xx_get_pfit_config(crtc, pipe_config);
@ -5576,48 +5638,6 @@ static bool ironlake_compute_clocks(struct drm_crtc *crtc,
return true;
}
static void cpt_enable_fdi_bc_bifurcation(struct drm_device *dev)
{
struct drm_i915_private *dev_priv = dev->dev_private;
uint32_t temp;
temp = I915_READ(SOUTH_CHICKEN1);
if (temp & FDI_BC_BIFURCATION_SELECT)
return;
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_B)) & FDI_RX_ENABLE);
WARN_ON(I915_READ(FDI_RX_CTL(PIPE_C)) & FDI_RX_ENABLE);
temp |= FDI_BC_BIFURCATION_SELECT;
DRM_DEBUG_KMS("enabling fdi C rx\n");
I915_WRITE(SOUTH_CHICKEN1, temp);
POSTING_READ(SOUTH_CHICKEN1);
}
static void ivybridge_update_fdi_bc_bifurcation(struct intel_crtc *intel_crtc)
{
struct drm_device *dev = intel_crtc->base.dev;
struct drm_i915_private *dev_priv = dev->dev_private;
switch (intel_crtc->pipe) {
case PIPE_A:
break;
case PIPE_B:
if (intel_crtc->config.fdi_lanes > 2)
WARN_ON(I915_READ(SOUTH_CHICKEN1) & FDI_BC_BIFURCATION_SELECT);
else
cpt_enable_fdi_bc_bifurcation(dev);
break;
case PIPE_C:
cpt_enable_fdi_bc_bifurcation(dev);
break;
default:
BUG();
}
}
int ironlake_get_lanes_required(int target_clock, int link_bw, int bpp)
{
/*
@ -5811,9 +5831,6 @@ static int ironlake_crtc_mode_set(struct drm_crtc *crtc,
&intel_crtc->config.fdi_m_n);
}
if (IS_IVYBRIDGE(dev))
ivybridge_update_fdi_bc_bifurcation(intel_crtc);
ironlake_set_pipeconf(crtc);
/* Set up the display plane register */
@ -5881,6 +5898,23 @@ static bool ironlake_get_pipe_config(struct intel_crtc *crtc,
if (!(tmp & PIPECONF_ENABLE))
return false;
switch (tmp & PIPECONF_BPC_MASK) {
case PIPECONF_6BPC:
pipe_config->pipe_bpp = 18;
break;
case PIPECONF_8BPC:
pipe_config->pipe_bpp = 24;
break;
case PIPECONF_10BPC:
pipe_config->pipe_bpp = 30;
break;
case PIPECONF_12BPC:
pipe_config->pipe_bpp = 36;
break;
default:
break;
}
if (I915_READ(PCH_TRANSCONF(crtc->pipe)) & TRANS_ENABLE) {
struct intel_shared_dpll *pll;
@ -8612,6 +8646,9 @@ intel_pipe_config_compare(struct drm_device *dev,
PIPE_CONF_CHECK_X(dpll_hw_state.fp0);
PIPE_CONF_CHECK_X(dpll_hw_state.fp1);
if (IS_G4X(dev) || INTEL_INFO(dev)->gen >= 5)
PIPE_CONF_CHECK_I(pipe_bpp);
#undef PIPE_CONF_CHECK_X
#undef PIPE_CONF_CHECK_I
#undef PIPE_CONF_CHECK_FLAGS


@ -1401,6 +1401,26 @@ static void intel_dp_get_config(struct intel_encoder *encoder,
else
pipe_config->port_clock = 270000;
}
if (is_edp(intel_dp) && dev_priv->vbt.edp_bpp &&
pipe_config->pipe_bpp > dev_priv->vbt.edp_bpp) {
/*
* This is a big fat ugly hack.
*
* Some machines in UEFI boot mode provide us a VBT that has 18
* bpp and 1.62 GHz link bandwidth for eDP, which for reasons
* unknown we fail to light up. Yet the same BIOS boots up with
* 24 bpp and 2.7 GHz link. Use the same bpp as the BIOS uses as
* max, not what it tells us to use.
*
* Note: This will still be broken if the eDP panel is not lit
* up by the BIOS, and thus we can't get the mode at module
* load.
*/
DRM_DEBUG_KMS("pipe has %d bpp for eDP panel, overriding BIOS-provided max %d bpp\n",
pipe_config->pipe_bpp, dev_priv->vbt.edp_bpp);
dev_priv->vbt.edp_bpp = pipe_config->pipe_bpp;
}
}
static bool is_edp_psr(struct intel_dp *intel_dp)


@ -765,6 +765,8 @@ extern void intel_ddi_prepare_link_retrain(struct drm_encoder *encoder);
extern bool
intel_ddi_connector_get_hw_state(struct intel_connector *intel_connector);
extern void intel_ddi_fdi_disable(struct drm_crtc *crtc);
extern void intel_ddi_get_config(struct intel_encoder *encoder,
struct intel_crtc_config *pipe_config);
extern void intel_display_handle_reset(struct drm_device *dev);
extern bool intel_set_cpu_fifo_underrun_reporting(struct drm_device *dev,


@ -698,6 +698,22 @@ static const struct dmi_system_id intel_no_lvds[] = {
DMI_MATCH(DMI_PRODUCT_NAME, "ESPRIMO Q900"),
},
},
{
.callback = intel_no_lvds_dmi_callback,
.ident = "Intel D410PT",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
DMI_MATCH(DMI_BOARD_NAME, "D410PT"),
},
},
{
.callback = intel_no_lvds_dmi_callback,
.ident = "Intel D425KT",
.matches = {
DMI_MATCH(DMI_BOARD_VENDOR, "Intel"),
DMI_EXACT_MATCH(DMI_BOARD_NAME, "D425KT"),
},
},
{
.callback = intel_no_lvds_dmi_callback,
.ident = "Intel D510MO",


@ -291,6 +291,7 @@ void evergreen_hdmi_setmode(struct drm_encoder *encoder, struct drm_display_mode
/* fglrx clears sth in AFMT_AUDIO_PACKET_CONTROL2 here */
WREG32(HDMI_ACR_PACKET_CONTROL + offset,
HDMI_ACR_SOURCE | /* select SW CTS value */
HDMI_ACR_AUTO_SEND); /* allow hw to sent ACR packets when required */
evergreen_hdmi_update_ACR(encoder, mode->clock);


@ -2635,7 +2635,7 @@ int kv_dpm_init(struct radeon_device *rdev)
pi->caps_sclk_ds = true;
pi->enable_auto_thermal_throttling = true;
pi->disable_nb_ps3_in_battery = false;
pi->bapm_enable = true;
pi->bapm_enable = false;
pi->voltage_drop_t = 0;
pi->caps_sclk_throttle_low_notification = false;
pi->caps_fps = false; /* true? */


@ -1272,8 +1272,8 @@ struct radeon_blacklist_clocks
struct radeon_clock_and_voltage_limits {
u32 sclk;
u32 mclk;
u32 vddc;
u32 vddci;
u16 vddc;
u16 vddci;
};
struct radeon_clock_array {


@ -594,7 +594,7 @@ isert_connect_release(struct isert_conn *isert_conn)
pr_debug("Entering isert_connect_release(): >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\n");
if (device->use_frwr)
if (device && device->use_frwr)
isert_conn_free_frwr_pool(isert_conn);
if (isert_conn->conn_qp) {


@ -1734,6 +1734,7 @@ EXPORT_SYMBOL_GPL(input_class);
*/
struct input_dev *input_allocate_device(void)
{
static atomic_t input_no = ATOMIC_INIT(0);
struct input_dev *dev;
dev = kzalloc(sizeof(struct input_dev), GFP_KERNEL);
@ -1743,9 +1744,13 @@ struct input_dev *input_allocate_device(void)
device_initialize(&dev->dev);
mutex_init(&dev->mutex);
spin_lock_init(&dev->event_lock);
init_timer(&dev->timer);
INIT_LIST_HEAD(&dev->h_list);
INIT_LIST_HEAD(&dev->node);
dev_set_name(&dev->dev, "input%ld",
(unsigned long) atomic_inc_return(&input_no) - 1);
__module_get(THIS_MODULE);
}
@ -2019,7 +2024,6 @@ static void devm_input_device_unregister(struct device *dev, void *res)
*/
int input_register_device(struct input_dev *dev)
{
static atomic_t input_no = ATOMIC_INIT(0);
struct input_devres *devres = NULL;
struct input_handler *handler;
unsigned int packet_size;
@ -2059,7 +2063,6 @@ int input_register_device(struct input_dev *dev)
* If delay and period are pre-set by the driver, then autorepeating
* is handled by the driver itself and we don't do it in input.c.
*/
init_timer(&dev->timer);
if (!dev->rep[REP_DELAY] && !dev->rep[REP_PERIOD]) {
dev->timer.data = (long) dev;
dev->timer.function = input_repeat_key;
@ -2073,9 +2076,6 @@ int input_register_device(struct input_dev *dev)
if (!dev->setkeycode)
dev->setkeycode = input_default_setkeycode;
dev_set_name(&dev->dev, "input%ld",
(unsigned long) atomic_inc_return(&input_no) - 1);
error = device_add(&dev->dev);
if (error)
goto err_free_vals;


@ -786,10 +786,17 @@ static int pxa27x_keypad_probe(struct platform_device *pdev)
input_dev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REP);
input_set_capability(input_dev, EV_MSC, MSC_SCAN);
if (pdata)
if (pdata) {
error = pxa27x_keypad_build_keycode(keypad);
else
} else {
error = pxa27x_keypad_build_keycode_from_dt(keypad);
/*
* Data that we get from DT resides in dynamically
* allocated memory so we need to update our pdata
* pointer.
*/
pdata = keypad->pdata;
}
if (error) {
dev_err(&pdev->dev, "failed to build keycode\n");
goto failed_put_clk;


@ -351,7 +351,9 @@ static void cm109_urb_irq_callback(struct urb *urb)
if (status) {
if (status == -ESHUTDOWN)
return;
dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status);
dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n",
__func__, status);
goto out;
}
/* Special keys */
@ -418,8 +420,12 @@ static void cm109_urb_ctl_callback(struct urb *urb)
dev->ctl_data->byte[2],
dev->ctl_data->byte[3]);
if (status)
dev_err(&dev->intf->dev, "%s: urb status %d\n", __func__, status);
if (status) {
if (status == -ESHUTDOWN)
return;
dev_err_ratelimited(&dev->intf->dev, "%s: urb status %d\n",
__func__, status);
}
spin_lock(&dev->ctl_submit_lock);
@ -427,7 +433,7 @@ static void cm109_urb_ctl_callback(struct urb *urb)
if (likely(!dev->shutdown)) {
if (dev->buzzer_pending) {
if (dev->buzzer_pending || status) {
dev->buzzer_pending = 0;
dev->ctl_urb_pending = 1;
cm109_submit_buzz_toggle(dev);


@ -103,6 +103,7 @@ static const struct alps_model_info alps_model_data[] = {
/* Dell Latitude E5500, E6400, E6500, Precision M4400 */
{ { 0x62, 0x02, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED },
{ { 0x73, 0x00, 0x14 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_DUALPOINT }, /* Dell XT2 */
{ { 0x73, 0x02, 0x50 }, 0x00, ALPS_PROTO_V2, 0xcf, 0xcf, ALPS_FOUR_BUTTONS }, /* Dell Vostro 1400 */
{ { 0x52, 0x01, 0x14 }, 0x00, ALPS_PROTO_V2, 0xff, 0xff,
ALPS_PASS | ALPS_DUALPOINT | ALPS_PS2_INTERLEAVED }, /* Toshiba Tecra A11-11L */


@ -223,21 +223,26 @@ static int i8042_flush(void)
{
unsigned long flags;
unsigned char data, str;
int i = 0;
int count = 0;
int retval = 0;
spin_lock_irqsave(&i8042_lock, flags);
while (((str = i8042_read_status()) & I8042_STR_OBF) && (i < I8042_BUFFER_SIZE)) {
udelay(50);
data = i8042_read_data();
i++;
dbg("%02x <- i8042 (flush, %s)\n",
data, str & I8042_STR_AUXDATA ? "aux" : "kbd");
while ((str = i8042_read_status()) & I8042_STR_OBF) {
if (count++ < I8042_BUFFER_SIZE) {
udelay(50);
data = i8042_read_data();
dbg("%02x <- i8042 (flush, %s)\n",
data, str & I8042_STR_AUXDATA ? "aux" : "kbd");
} else {
retval = -EIO;
break;
}
}
spin_unlock_irqrestore(&i8042_lock, flags);
return i;
return retval;
}
/*
@ -849,7 +854,7 @@ static int __init i8042_check_aux(void)
static int i8042_controller_check(void)
{
if (i8042_flush() == I8042_BUFFER_SIZE) {
if (i8042_flush()) {
pr_err("No controller found\n");
return -ENODEV;
}


@ -1031,6 +1031,7 @@ static void wacom_destroy_leds(struct wacom *wacom)
}
static enum power_supply_property wacom_battery_props[] = {
POWER_SUPPLY_PROP_SCOPE,
POWER_SUPPLY_PROP_CAPACITY
};
@ -1042,6 +1043,9 @@ static int wacom_battery_get_property(struct power_supply *psy,
int ret = 0;
switch (psp) {
case POWER_SUPPLY_PROP_SCOPE:
val->intval = POWER_SUPPLY_SCOPE_DEVICE;
break;
case POWER_SUPPLY_PROP_CAPACITY:
val->intval =
wacom->wacom_wac.battery_capacity * 100 / 31;


@ -2054,6 +2054,12 @@ static const struct wacom_features wacom_features_0x101 =
static const struct wacom_features wacom_features_0x10D =
{ "Wacom ISDv4 10D", WACOM_PKGLEN_MTTPC, 26202, 16325, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
static const struct wacom_features wacom_features_0x10E =
{ "Wacom ISDv4 10E", WACOM_PKGLEN_MTTPC, 27760, 15694, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
static const struct wacom_features wacom_features_0x10F =
{ "Wacom ISDv4 10F", WACOM_PKGLEN_MTTPC, 27760, 15694, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
static const struct wacom_features wacom_features_0x4001 =
{ "Wacom ISDv4 4001", WACOM_PKGLEN_MTTPC, 26202, 16325, 255,
0, MTTPC, WACOM_INTUOS_RES, WACOM_INTUOS_RES };
@ -2248,6 +2254,8 @@ const struct usb_device_id wacom_ids[] = {
{ USB_DEVICE_WACOM(0x100) },
{ USB_DEVICE_WACOM(0x101) },
{ USB_DEVICE_WACOM(0x10D) },
{ USB_DEVICE_WACOM(0x10E) },
{ USB_DEVICE_WACOM(0x10F) },
{ USB_DEVICE_WACOM(0x300) },
{ USB_DEVICE_WACOM(0x301) },
{ USB_DEVICE_WACOM(0x304) },


@ -8111,6 +8111,7 @@ static int md_set_badblocks(struct badblocks *bb, sector_t s, int sectors,
u64 *p;
int lo, hi;
int rv = 1;
unsigned long flags;
if (bb->shift < 0)
/* badblocks are disabled */
@ -8125,7 +8126,7 @@ static int md_set_badblocks(struct badblocks *bb, sector_t s, int sectors,
sectors = next - s;
}
write_seqlock_irq(&bb->lock);
write_seqlock_irqsave(&bb->lock, flags);
p = bb->page;
lo = 0;
@ -8241,7 +8242,7 @@ static int md_set_badblocks(struct badblocks *bb, sector_t s, int sectors,
bb->changed = 1;
if (!acknowledged)
bb->unacked_exist = 1;
write_sequnlock_irq(&bb->lock);
write_sequnlock_irqrestore(&bb->lock, flags);
return rv;
}


@ -1479,6 +1479,7 @@ static int raid1_spare_active(struct mddev *mddev)
}
}
if (rdev
&& rdev->recovery_offset == MaxSector
&& !test_bit(Faulty, &rdev->flags)
&& !test_and_set_bit(In_sync, &rdev->flags)) {
count++;


@ -1782,6 +1782,7 @@ static int raid10_spare_active(struct mddev *mddev)
}
sysfs_notify_dirent_safe(tmp->replacement->sysfs_state);
} else if (tmp->rdev
&& tmp->rdev->recovery_offset == MaxSector
&& !test_bit(Faulty, &tmp->rdev->flags)
&& !test_and_set_bit(In_sync, &tmp->rdev->flags)) {
count++;


@ -778,6 +778,12 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
bi->bi_io_vec[0].bv_len = STRIPE_SIZE;
bi->bi_io_vec[0].bv_offset = 0;
bi->bi_size = STRIPE_SIZE;
/*
* If this is discard request, set bi_vcnt 0. We don't
* want to confuse SCSI because SCSI will replace payload
*/
if (rw & REQ_DISCARD)
bi->bi_vcnt = 0;
if (rrdev)
set_bit(R5_DOUBLE_LOCKED, &sh->dev[i].flags);
@ -816,6 +822,12 @@ static void ops_run_io(struct stripe_head *sh, struct stripe_head_state *s)
rbi->bi_io_vec[0].bv_len = STRIPE_SIZE;
rbi->bi_io_vec[0].bv_offset = 0;
rbi->bi_size = STRIPE_SIZE;
/*
* If this is discard request, set bi_vcnt 0. We don't
* want to confuse SCSI because SCSI will replace payload
*/
if (rw & REQ_DISCARD)
rbi->bi_vcnt = 0;
if (conf->mddev->gendisk)
trace_block_bio_remap(bdev_get_queue(rbi->bi_bdev),
rbi, disk_devt(conf->mddev->gendisk),
@ -2910,6 +2922,14 @@ static void handle_stripe_clean_event(struct r5conf *conf,
}
/* now that discard is done we can proceed with any sync */
clear_bit(STRIPE_DISCARD, &sh->state);
/*
* SCSI discard will change some bio fields and the stripe has
* no updated data, so remove it from hash list and the stripe
* will be reinitialized
*/
spin_lock_irq(&conf->device_lock);
remove_hash(sh);
spin_unlock_irq(&conf->device_lock);
if (test_bit(STRIPE_SYNC_REQUESTED, &sh->state))
set_bit(STRIPE_HANDLE, &sh->state);


@ -349,7 +349,7 @@ static int legacy_set_geometry(struct gpmi_nand_data *this)
int common_nfc_set_geometry(struct gpmi_nand_data *this)
{
return set_geometry_by_ecc_info(this) ? 0 : legacy_set_geometry(this);
return legacy_set_geometry(this);
}
struct dma_chan *get_dma_chan(struct gpmi_nand_data *this)


@ -1320,7 +1320,12 @@ static int pxa3xx_nand_probe(struct platform_device *pdev)
for (cs = 0; cs < pdata->num_cs; cs++) {
struct mtd_info *mtd = info->host[cs]->mtd;
mtd->name = pdev->name;
/*
* The mtd name matches the one used in 'mtdparts' kernel
* parameter. This name cannot be changed or otherwise
* user's mtd partitions configuration would get broken.
*/
mtd->name = "pxa3xx_nand-0";
info->cs = cs;
ret = pxa3xx_nand_scan(mtd);
if (ret) {


@ -814,9 +814,6 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
msg_ctrl_save = priv->read_reg(priv,
C_CAN_IFACE(MSGCTRL_REG, 0));
if (msg_ctrl_save & IF_MCONT_EOB)
return num_rx_pkts;
if (msg_ctrl_save & IF_MCONT_MSGLST) {
c_can_handle_lost_msg_obj(dev, 0, msg_obj);
num_rx_pkts++;
@ -824,6 +821,9 @@ static int c_can_do_rx_poll(struct net_device *dev, int quota)
continue;
}
if (msg_ctrl_save & IF_MCONT_EOB)
return num_rx_pkts;
if (!(msg_ctrl_save & IF_MCONT_NEWDAT))
continue;


@ -1544,9 +1544,9 @@ static int kvaser_usb_init_one(struct usb_interface *intf,
return 0;
}
static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
struct usb_endpoint_descriptor **in,
struct usb_endpoint_descriptor **out)
static int kvaser_usb_get_endpoints(const struct usb_interface *intf,
struct usb_endpoint_descriptor **in,
struct usb_endpoint_descriptor **out)
{
const struct usb_host_interface *iface_desc;
struct usb_endpoint_descriptor *endpoint;
@ -1557,12 +1557,18 @@ static void kvaser_usb_get_endpoints(const struct usb_interface *intf,
for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
endpoint = &iface_desc->endpoint[i].desc;
if (usb_endpoint_is_bulk_in(endpoint))
if (!*in && usb_endpoint_is_bulk_in(endpoint))
*in = endpoint;
if (usb_endpoint_is_bulk_out(endpoint))
if (!*out && usb_endpoint_is_bulk_out(endpoint))
*out = endpoint;
/* use first bulk endpoint for in and out */
if (*in && *out)
return 0;
}
return -ENODEV;
}
static int kvaser_usb_probe(struct usb_interface *intf,
@ -1576,8 +1582,8 @@ static int kvaser_usb_probe(struct usb_interface *intf,
if (!dev)
return -ENOMEM;
kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
if (!dev->bulk_in || !dev->bulk_out) {
err = kvaser_usb_get_endpoints(intf, &dev->bulk_in, &dev->bulk_out);
if (err) {
dev_err(&intf->dev, "Cannot get usb endpoint(s)");
return err;
}


@ -252,25 +252,33 @@ static int bgmac_dma_rx_skb_for_slot(struct bgmac *bgmac,
struct bgmac_slot_info *slot)
{
struct device *dma_dev = bgmac->core->dma_dev;
struct sk_buff *skb;
dma_addr_t dma_addr;
struct bgmac_rx_header *rx;
/* Alloc skb */
slot->skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE);
if (!slot->skb)
skb = netdev_alloc_skb(bgmac->net_dev, BGMAC_RX_BUF_SIZE);
if (!skb)
return -ENOMEM;
/* Poison - if everything goes fine, hardware will overwrite it */
rx = (struct bgmac_rx_header *)slot->skb->data;
rx = (struct bgmac_rx_header *)skb->data;
rx->len = cpu_to_le16(0xdead);
rx->flags = cpu_to_le16(0xbeef);
/* Map skb for the DMA */
slot->dma_addr = dma_map_single(dma_dev, slot->skb->data,
BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, slot->dma_addr)) {
dma_addr = dma_map_single(dma_dev, skb->data,
BGMAC_RX_BUF_SIZE, DMA_FROM_DEVICE);
if (dma_mapping_error(dma_dev, dma_addr)) {
bgmac_err(bgmac, "DMA mapping error\n");
dev_kfree_skb(skb);
return -ENOMEM;
}
/* Update the slot */
slot->skb = skb;
slot->dma_addr = dma_addr;
if (slot->dma_addr & 0xC0000000)
bgmac_warn(bgmac, "DMA address using 0xC0000000 bit(s), it may need translation trick\n");


@ -2545,10 +2545,6 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
}
}
/* Allocated memory for FW statistics */
if (bnx2x_alloc_fw_stats_mem(bp))
LOAD_ERROR_EXIT(bp, load_error0);
/* need to be done after alloc mem, since it's self adjusting to amount
* of memory available for RSS queues
*/
@ -2558,6 +2554,10 @@ int bnx2x_nic_load(struct bnx2x *bp, int load_mode)
LOAD_ERROR_EXIT(bp, load_error0);
}
/* Allocated memory for FW statistics */
if (bnx2x_alloc_fw_stats_mem(bp))
LOAD_ERROR_EXIT(bp, load_error0);
/* request pf to initialize status blocks */
if (IS_VF(bp)) {
rc = bnx2x_vfpf_init(bp);
@ -2812,8 +2812,8 @@ load_error1:
if (IS_PF(bp))
bnx2x_clear_pf_load(bp);
load_error0:
bnx2x_free_fp_mem(bp);
bnx2x_free_fw_stats_mem(bp);
bnx2x_free_fp_mem(bp);
bnx2x_free_mem(bp);
return rc;

View file

@ -2018,6 +2018,8 @@ failed:
void bnx2x_iov_remove_one(struct bnx2x *bp)
{
int vf_idx;
/* if SRIOV is not enabled there's nothing to do */
if (!IS_SRIOV(bp))
return;
@ -2026,6 +2028,18 @@ void bnx2x_iov_remove_one(struct bnx2x *bp)
pci_disable_sriov(bp->pdev);
DP(BNX2X_MSG_IOV, "sriov disabled\n");
/* disable access to all VFs */
for (vf_idx = 0; vf_idx < bp->vfdb->sriov.total; vf_idx++) {
bnx2x_pretend_func(bp,
HW_VF_HANDLE(bp,
bp->vfdb->sriov.first_vf_in_pf +
vf_idx));
DP(BNX2X_MSG_IOV, "disabling internal access for vf %d\n",
bp->vfdb->sriov.first_vf_in_pf + vf_idx);
bnx2x_vf_enable_internal(bp, 0);
bnx2x_pretend_func(bp, BP_ABS_FUNC(bp));
}
/* free vf database */
__bnx2x_iov_free_vfdb(bp);
}
@ -3197,7 +3211,7 @@ int bnx2x_enable_sriov(struct bnx2x *bp)
* the "acquire" messages to appear on the VF PF channel.
*/
DP(BNX2X_MSG_IOV, "about to call enable sriov\n");
pci_disable_sriov(bp->pdev);
bnx2x_disable_sriov(bp);
rc = pci_enable_sriov(bp->pdev, req_vfs);
if (rc) {
BNX2X_ERR("pci_enable_sriov failed with %d\n", rc);

View file

@ -1599,7 +1599,8 @@ static void write_ofld_wr(struct adapter *adap, struct sk_buff *skb,
flits = skb_transport_offset(skb) / 8;
sgp = ndesc == 1 ? (struct sg_ent *)&d->flit[flits] : sgl;
sgl_flits = make_sgl(skb, sgp, skb_transport_header(skb),
skb->tail - skb->transport_header,
skb_tail_pointer(skb) -
skb_transport_header(skb),
adap->pdev);
if (need_skb_unmap()) {
setup_deferred_unmapping(skb, adap->pdev, sgp, sgl_flits);

View file

@ -840,6 +840,16 @@ int be_load_fw(struct be_adapter *adapter, u8 *func);
bool be_is_wol_supported(struct be_adapter *adapter);
bool be_pause_supported(struct be_adapter *adapter);
u32 be_get_fw_log_level(struct be_adapter *adapter);
static inline int fw_major_num(const char *fw_ver)
{
int fw_major = 0;
sscanf(fw_ver, "%d.", &fw_major);
return fw_major;
}
int be_update_queues(struct be_adapter *adapter);
int be_poll(struct napi_struct *napi, int budget);
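The new fw_major_num() helper above relies on sscanf() stopping at the first '.' of the dotted firmware version string. A minimal user-space sketch of the same parsing, with invented sample version strings for illustration:

/*
 * Hedged sketch (user-space, not driver code): extract the leading major
 * number from a dotted firmware version string, as fw_major_num() does.
 */
#include <stdio.h>

static int fw_major_num(const char *fw_ver)
{
        int fw_major = 0;

        /* reads digits up to the first non-digit; the '.' is just a check */
        sscanf(fw_ver, "%d.", &fw_major);
        return fw_major;
}

int main(void)
{
        printf("%d\n", fw_major_num("4.2.318.0"));  /* prints 4 */
        printf("%d\n", fw_major_num("3.102.44.0")); /* prints 3 */
        return 0;
}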

View file

@ -3379,6 +3379,12 @@ static int be_setup(struct be_adapter *adapter)
be_cmd_get_fw_ver(adapter, adapter->fw_ver, adapter->fw_on_flash);
if (BE2_chip(adapter) && fw_major_num(adapter->fw_ver) < 4) {
dev_err(dev, "Firmware on card is old(%s), IRQs may not work.",
adapter->fw_ver);
dev_err(dev, "Please upgrade firmware to version >= 4.0\n");
}
if (adapter->vlans_added)
be_vid_config(adapter);

View file

@ -263,7 +263,9 @@ static inline void mal_schedule_poll(struct mal_instance *mal)
{
if (likely(napi_schedule_prep(&mal->napi))) {
MAL_DBG2(mal, "schedule_poll" NL);
spin_lock(&mal->lock);
mal_disable_eob_irq(mal);
spin_unlock(&mal->lock);
__napi_schedule(&mal->napi);
} else
MAL_DBG2(mal, "already in poll" NL);
@ -442,15 +444,13 @@ static int mal_poll(struct napi_struct *napi, int budget)
if (unlikely(mc->ops->peek_rx(mc->dev) ||
test_bit(MAL_COMMAC_RX_STOPPED, &mc->flags))) {
MAL_DBG2(mal, "rotting packet" NL);
if (napi_reschedule(napi))
mal_disable_eob_irq(mal);
else
MAL_DBG2(mal, "already in poll list" NL);
if (budget > 0)
goto again;
else
if (!napi_reschedule(napi))
goto more_work;
spin_lock_irqsave(&mal->lock, flags);
mal_disable_eob_irq(mal);
spin_unlock_irqrestore(&mal->lock, flags);
goto again;
}
mc->ops->poll_tx(mc->dev);
}

View file

@ -1691,7 +1691,7 @@ static void mlx4_master_deactivate_admin_state(struct mlx4_priv *priv, int slave
vp_oper->vlan_idx = NO_INDX;
}
if (NO_INDX != vp_oper->mac_idx) {
__mlx4_unregister_mac(&priv->dev, port, vp_oper->mac_idx);
__mlx4_unregister_mac(&priv->dev, port, vp_oper->state.mac);
vp_oper->mac_idx = NO_INDX;
}
}

View file

@ -2276,9 +2276,9 @@ int qlcnic_83xx_get_nic_info(struct qlcnic_adapter *adapter,
temp = (cmd.rsp.arg[8] & 0x7FFE0000) >> 17;
npar_info->max_linkspeed_reg_offset = temp;
}
if (npar_info->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS)
memcpy(ahw->extra_capability, &cmd.rsp.arg[16],
sizeof(ahw->extra_capability));
memcpy(ahw->extra_capability, &cmd.rsp.arg[16],
sizeof(ahw->extra_capability));
out:
qlcnic_free_mbx_args(&cmd);

View file

@ -785,8 +785,6 @@ void qlcnic_82xx_config_intr_coalesce(struct qlcnic_adapter *adapter)
#define QLCNIC_ENABLE_IPV4_LRO 1
#define QLCNIC_ENABLE_IPV6_LRO 2
#define QLCNIC_NO_DEST_IPV4_CHECK (1 << 8)
#define QLCNIC_NO_DEST_IPV6_CHECK (2 << 8)
int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int enable)
{
@ -806,11 +804,10 @@ int qlcnic_82xx_config_hw_lro(struct qlcnic_adapter *adapter, int enable)
word = 0;
if (enable) {
word = QLCNIC_ENABLE_IPV4_LRO | QLCNIC_NO_DEST_IPV4_CHECK;
word = QLCNIC_ENABLE_IPV4_LRO;
if (adapter->ahw->extra_capability[0] &
QLCNIC_FW_CAP2_HW_LRO_IPV6)
word |= QLCNIC_ENABLE_IPV6_LRO |
QLCNIC_NO_DEST_IPV6_CHECK;
word |= QLCNIC_ENABLE_IPV6_LRO;
}
req.words[0] = cpu_to_le64(word);

View file

@ -1133,7 +1133,10 @@ qlcnic_initialize_nic(struct qlcnic_adapter *adapter)
if (err == -EIO)
return err;
adapter->ahw->extra_capability[0] = temp;
} else {
adapter->ahw->extra_capability[0] = 0;
}
adapter->ahw->max_mac_filters = nic_info.max_mac_filters;
adapter->ahw->max_mtu = nic_info.max_mtu;
@ -2161,8 +2164,7 @@ void qlcnic_set_drv_version(struct qlcnic_adapter *adapter)
else if (qlcnic_83xx_check(adapter))
fw_cmd = QLCNIC_CMD_83XX_SET_DRV_VER;
if ((ahw->capabilities & QLCNIC_FW_CAPABILITY_MORE_CAPS) &&
(ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER))
if (ahw->extra_capability[0] & QLCNIC_FW_CAPABILITY_SET_DRV_VER)
qlcnic_fw_cmd_set_drv_version(adapter, fw_cmd);
}

View file

@ -312,6 +312,7 @@ static ssize_t store_enabled(struct netconsole_target *nt,
const char *buf,
size_t count)
{
unsigned long flags;
int enabled;
int err;
@ -326,9 +327,7 @@ static ssize_t store_enabled(struct netconsole_target *nt,
return -EINVAL;
}
mutex_lock(&nt->mutex);
if (enabled) { /* 1 */
/*
* Skip netpoll_parse_options() -- all the attributes are
* already configured via configfs. Just print them out.
@ -336,19 +335,22 @@ static ssize_t store_enabled(struct netconsole_target *nt,
netpoll_print_options(&nt->np);
err = netpoll_setup(&nt->np);
if (err) {
mutex_unlock(&nt->mutex);
if (err)
return err;
}
pr_info("network logging started\n");
pr_info("netconsole: network logging started\n");
} else { /* 0 */
/* We need to disable the netconsole before cleaning it up
* otherwise we might end up in write_msg() with
* nt->np.dev == NULL and nt->enabled == 1
*/
spin_lock_irqsave(&target_list_lock, flags);
nt->enabled = 0;
spin_unlock_irqrestore(&target_list_lock, flags);
netpoll_cleanup(&nt->np);
}
nt->enabled = enabled;
mutex_unlock(&nt->mutex);
return strnlen(buf, count);
}
@ -559,8 +561,10 @@ static ssize_t netconsole_target_attr_store(struct config_item *item,
struct netconsole_target_attr *na =
container_of(attr, struct netconsole_target_attr, attr);
mutex_lock(&nt->mutex);
if (na->store)
ret = na->store(nt, buf, count);
mutex_unlock(&nt->mutex);
return ret;
}

View file

@ -78,7 +78,6 @@
#define AX_MEDIUM_STATUS_MODE 0x22
#define AX_MEDIUM_GIGAMODE 0x01
#define AX_MEDIUM_FULL_DUPLEX 0x02
#define AX_MEDIUM_ALWAYS_ONE 0x04
#define AX_MEDIUM_EN_125MHZ 0x08
#define AX_MEDIUM_RXFLOW_CTRLEN 0x10
#define AX_MEDIUM_TXFLOW_CTRLEN 0x20
@ -1065,8 +1064,8 @@ static int ax88179_bind(struct usbnet *dev, struct usb_interface *intf)
/* Configure default medium type => giga */
*tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE |
AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE;
AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX |
AX_MEDIUM_GIGAMODE;
ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE,
2, 2, tmp16);
@ -1225,7 +1224,7 @@ static int ax88179_link_reset(struct usbnet *dev)
}
mode = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE;
AX_MEDIUM_RXFLOW_CTRLEN;
ax88179_read_cmd(dev, AX_ACCESS_MAC, PHYSICAL_LINK_STATUS,
1, 1, &link_sts);
@ -1339,8 +1338,8 @@ static int ax88179_reset(struct usbnet *dev)
/* Configure default medium type => giga */
*tmp16 = AX_MEDIUM_RECEIVE_EN | AX_MEDIUM_TXFLOW_CTRLEN |
AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_ALWAYS_ONE |
AX_MEDIUM_FULL_DUPLEX | AX_MEDIUM_GIGAMODE;
AX_MEDIUM_RXFLOW_CTRLEN | AX_MEDIUM_FULL_DUPLEX |
AX_MEDIUM_GIGAMODE;
ax88179_write_cmd(dev, AX_ACCESS_MAC, AX_MEDIUM_STATUS_MODE,
2, 2, tmp16);

View file

@ -1160,11 +1160,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
{
struct virtnet_info *vi = container_of(nfb, struct virtnet_info, nb);
mutex_lock(&vi->config_lock);
if (!vi->config_enable)
goto done;
switch(action & ~CPU_TASKS_FROZEN) {
case CPU_ONLINE:
case CPU_DOWN_FAILED:
@ -1178,8 +1173,6 @@ static int virtnet_cpu_callback(struct notifier_block *nfb,
break;
}
done:
mutex_unlock(&vi->config_lock);
return NOTIFY_OK;
}
@ -1747,6 +1740,8 @@ static int virtnet_freeze(struct virtio_device *vdev)
struct virtnet_info *vi = vdev->priv;
int i;
unregister_hotcpu_notifier(&vi->nb);
/* Prevent config work handler from accessing the device */
mutex_lock(&vi->config_lock);
vi->config_enable = false;
@ -1795,6 +1790,10 @@ static int virtnet_restore(struct virtio_device *vdev)
virtnet_set_queues(vi, vi->curr_queue_pairs);
rtnl_unlock();
err = register_hotcpu_notifier(&vi->nb);
if (err)
return err;
return 0;
}
#endif

View file

@ -148,10 +148,6 @@ static int enslave( struct net_device *, struct net_device * );
static int emancipate( struct net_device * );
#endif
#ifdef __i386__
#define ASM_CRC 1
#endif
static const char version[] =
"Granch SBNI12 driver ver 5.0.1 Jun 22 2001 Denis I.Timofeev.\n";
@ -1551,88 +1547,6 @@ __setup( "sbni=", sbni_setup );
/* -------------------------------------------------------------------------- */
#ifdef ASM_CRC
static u32
calc_crc32( u32 crc, u8 *p, u32 len )
{
register u32 _crc;
_crc = crc;
__asm__ __volatile__ (
"xorl %%ebx, %%ebx\n"
"movl %2, %%esi\n"
"movl %3, %%ecx\n"
"movl $crc32tab, %%edi\n"
"shrl $2, %%ecx\n"
"jz 1f\n"
".align 4\n"
"0:\n"
"movb %%al, %%bl\n"
"movl (%%esi), %%edx\n"
"shrl $8, %%eax\n"
"xorb %%dl, %%bl\n"
"shrl $8, %%edx\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb %%dl, %%bl\n"
"shrl $8, %%edx\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb %%dl, %%bl\n"
"movb %%dh, %%dl\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb %%dl, %%bl\n"
"addl $4, %%esi\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"decl %%ecx\n"
"jnz 0b\n"
"1:\n"
"movl %3, %%ecx\n"
"andl $3, %%ecx\n"
"jz 2f\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb (%%esi), %%bl\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"decl %%ecx\n"
"jz 2f\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb 1(%%esi), %%bl\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"decl %%ecx\n"
"jz 2f\n"
"movb %%al, %%bl\n"
"shrl $8, %%eax\n"
"xorb 2(%%esi), %%bl\n"
"xorl (%%edi,%%ebx,4), %%eax\n"
"2:\n"
: "=a" (_crc)
: "0" (_crc), "g" (p), "g" (len)
: "bx", "cx", "dx", "si", "di"
);
return _crc;
}
#else /* ASM_CRC */
static u32
calc_crc32( u32 crc, u8 *p, u32 len )
{
@ -1642,9 +1556,6 @@ calc_crc32( u32 crc, u8 *p, u32 len )
return crc;
}
#endif /* ASM_CRC */
static u32 crc32tab[] __attribute__ ((aligned(8))) = {
0xD202EF8D, 0xA505DF1B, 0x3C0C8EA1, 0x4B0BBE37,
0xD56F2B94, 0xA2681B02, 0x3B614AB8, 0x4C667A2E,

View file

@ -169,6 +169,7 @@ struct xenvif {
unsigned long credit_usec;
unsigned long remaining_credit;
struct timer_list credit_timeout;
u64 credit_window_start;
/* Statistics */
unsigned long rx_gso_checksum_fixup;

View file

@ -316,8 +316,7 @@ struct xenvif *xenvif_alloc(struct device *parent, domid_t domid,
vif->credit_bytes = vif->remaining_credit = ~0UL;
vif->credit_usec = 0UL;
init_timer(&vif->credit_timeout);
/* Initialize 'expires' now: it's used to track the credit window. */
vif->credit_timeout.expires = jiffies;
vif->credit_window_start = get_jiffies_64();
dev->netdev_ops = &xenvif_netdev_ops;
dev->hw_features = NETIF_F_SG |

View file

@ -1380,9 +1380,8 @@ static int checksum_setup(struct xenvif *vif, struct sk_buff *skb)
static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
{
unsigned long now = jiffies;
unsigned long next_credit =
vif->credit_timeout.expires +
u64 now = get_jiffies_64();
u64 next_credit = vif->credit_window_start +
msecs_to_jiffies(vif->credit_usec / 1000);
/* Timer could already be pending in rare cases. */
@ -1390,8 +1389,8 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
return true;
/* Passed the point where we can replenish credit? */
if (time_after_eq(now, next_credit)) {
vif->credit_timeout.expires = now;
if (time_after_eq64(now, next_credit)) {
vif->credit_window_start = now;
tx_add_credit(vif);
}
@ -1403,6 +1402,7 @@ static bool tx_credit_exceeded(struct xenvif *vif, unsigned size)
tx_credit_callback;
mod_timer(&vif->credit_timeout,
next_credit);
vif->credit_window_start = next_credit;
return true;
}

View file

@ -552,9 +552,8 @@ static void __ref enable_slot(struct acpiphp_slot *slot)
struct acpiphp_func *func;
int max, pass;
LIST_HEAD(add_list);
int nr_found;
nr_found = acpiphp_rescan_slot(slot);
acpiphp_rescan_slot(slot);
max = acpiphp_max_busnr(bus);
for (pass = 0; pass < 2; pass++) {
list_for_each_entry(dev, &bus->devices, bus_list) {
@ -574,9 +573,6 @@ static void __ref enable_slot(struct acpiphp_slot *slot)
}
}
__pci_bus_assign_resources(bus, &add_list, NULL);
/* Nothing more to do here if there are no new devices on this bus. */
if (!nr_found && (slot->flags & SLOT_ENABLED))
return;
acpiphp_sanitize_bus(bus);
acpiphp_set_hpp_values(bus);

View file

@ -696,7 +696,7 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
while ((pci_device = pci_get_device(PCI_VENDOR_ID_BUSLOGIC,
PCI_DEVICE_ID_BUSLOGIC_MULTIMASTER,
pci_device)) != NULL) {
struct blogic_adapter *adapter = adapter;
struct blogic_adapter *host_adapter = adapter;
struct blogic_adapter_info adapter_info;
enum blogic_isa_ioport mod_ioaddr_req;
unsigned char bus;
@ -744,9 +744,9 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
known and enabled, note that the particular Standard ISA I/O
Address should not be probed.
*/
adapter->io_addr = io_addr;
blogic_intreset(adapter);
if (blogic_cmd(adapter, BLOGIC_INQ_PCI_INFO, NULL, 0,
host_adapter->io_addr = io_addr;
blogic_intreset(host_adapter);
if (blogic_cmd(host_adapter, BLOGIC_INQ_PCI_INFO, NULL, 0,
&adapter_info, sizeof(adapter_info)) ==
sizeof(adapter_info)) {
if (adapter_info.isa_port < 6)
@ -762,7 +762,7 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
I/O Address assigned at system initialization.
*/
mod_ioaddr_req = BLOGIC_IO_DISABLE;
blogic_cmd(adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req,
blogic_cmd(host_adapter, BLOGIC_MOD_IOADDR, &mod_ioaddr_req,
sizeof(mod_ioaddr_req), NULL, 0);
/*
For the first MultiMaster Host Adapter enumerated,
@ -779,12 +779,12 @@ static int __init blogic_init_mm_probeinfo(struct blogic_adapter *adapter)
fetch_localram.offset = BLOGIC_AUTOSCSI_BASE + 45;
fetch_localram.count = sizeof(autoscsi_byte45);
blogic_cmd(adapter, BLOGIC_FETCH_LOCALRAM,
blogic_cmd(host_adapter, BLOGIC_FETCH_LOCALRAM,
&fetch_localram, sizeof(fetch_localram),
&autoscsi_byte45,
sizeof(autoscsi_byte45));
blogic_cmd(adapter, BLOGIC_GET_BOARD_ID, NULL, 0, &id,
sizeof(id));
blogic_cmd(host_adapter, BLOGIC_GET_BOARD_ID, NULL, 0,
&id, sizeof(id));
if (id.fw_ver_digit1 == '5')
force_scan_order =
autoscsi_byte45.force_scan_order;

View file

@ -771,6 +771,8 @@ static long aac_compat_do_ioctl(struct aac_dev *dev, unsigned cmd, unsigned long
static int aac_compat_ioctl(struct scsi_device *sdev, int cmd, void __user *arg)
{
struct aac_dev *dev = (struct aac_dev *)sdev->host->hostdata;
if (!capable(CAP_SYS_RAWIO))
return -EPERM;
return aac_compat_do_ioctl(dev, cmd, (unsigned long)arg);
}

View file

@ -20,7 +20,7 @@
* | Device Discovery | 0x2095 | 0x2020-0x2022, |
* | | | 0x2011-0x2012, |
* | | | 0x2016 |
* | Queue Command and IO tracing | 0x3058 | 0x3006-0x300b |
* | Queue Command and IO tracing | 0x3059 | 0x3006-0x300b |
* | | | 0x3027-0x3028 |
* | | | 0x303d-0x3041 |
* | | | 0x302d,0x3033 |

View file

@ -1957,6 +1957,15 @@ qla2x00_status_entry(scsi_qla_host_t *vha, struct rsp_que *rsp, void *pkt)
que = MSW(sts->handle);
req = ha->req_q_map[que];
/* Check for invalid queue pointer */
if (req == NULL ||
que >= find_first_zero_bit(ha->req_qid_map, ha->max_req_queues)) {
ql_dbg(ql_dbg_io, vha, 0x3059,
"Invalid status handle (0x%x): Bad req pointer. req=%p, "
"que=%u.\n", sts->handle, req, que);
return;
}
/* Validate handle. */
if (handle < req->num_outstanding_cmds)
sp = req->outstanding_cmds[handle];

View file

@ -2854,6 +2854,7 @@ static void sd_probe_async(void *data, async_cookie_t cookie)
gd->events |= DISK_EVENT_MEDIA_CHANGE;
}
blk_pm_runtime_init(sdp->request_queue, dev);
add_disk(gd);
if (sdkp->capacity)
sd_dif_config_host(sdkp);
@ -2862,7 +2863,6 @@ static void sd_probe_async(void *data, async_cookie_t cookie)
sd_printk(KERN_NOTICE, sdkp, "Attached SCSI %sdisk\n",
sdp->removable ? "removable " : "");
blk_pm_runtime_init(sdp->request_queue, dev);
scsi_autopm_put_device(sdp);
put_device(&sdkp->dev);
}

View file

@ -105,8 +105,11 @@ static int scatter_elem_sz_prev = SG_SCATTER_SZ;
static int sg_add(struct device *, struct class_interface *);
static void sg_remove(struct device *, struct class_interface *);
static DEFINE_SPINLOCK(sg_open_exclusive_lock);
static DEFINE_IDR(sg_index_idr);
static DEFINE_RWLOCK(sg_index_lock);
static DEFINE_RWLOCK(sg_index_lock); /* Also used to lock
file descriptor list for device */
static struct class_interface sg_interface = {
.add_dev = sg_add,
@ -143,7 +146,8 @@ typedef struct sg_request { /* SG_MAX_QUEUE requests outstanding per file */
} Sg_request;
typedef struct sg_fd { /* holds the state of a file descriptor */
struct list_head sfd_siblings; /* protected by sfd_lock of device */
/* sfd_siblings is protected by sg_index_lock */
struct list_head sfd_siblings;
struct sg_device *parentdp; /* owning device */
wait_queue_head_t read_wait; /* queue read until command done */
rwlock_t rq_list_lock; /* protect access to list in req_arr */
@ -166,12 +170,13 @@ typedef struct sg_fd { /* holds the state of a file descriptor */
typedef struct sg_device { /* holds the state of each scsi generic device */
struct scsi_device *device;
wait_queue_head_t o_excl_wait; /* queue open() when O_EXCL in use */
int sg_tablesize; /* adapter's max scatter-gather table size */
u32 index; /* device index number */
spinlock_t sfd_lock; /* protect file descriptor list for device */
/* sfds is protected by sg_index_lock */
struct list_head sfds;
struct rw_semaphore o_sem; /* exclude open should hold this rwsem */
volatile char detached; /* 0->attached, 1->detached pending removal */
/* exclude protected by sg_open_exclusive_lock */
char exclude; /* opened for exclusive access */
char sgdebug; /* 0->off, 1->sense, 9->dump dev, 10-> all devs */
struct gendisk *disk;
@ -220,14 +225,35 @@ static int sg_allow_access(struct file *filp, unsigned char *cmd)
return blk_verify_command(cmd, filp->f_mode & FMODE_WRITE);
}
static int get_exclude(Sg_device *sdp)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&sg_open_exclusive_lock, flags);
ret = sdp->exclude;
spin_unlock_irqrestore(&sg_open_exclusive_lock, flags);
return ret;
}
static int set_exclude(Sg_device *sdp, char val)
{
unsigned long flags;
spin_lock_irqsave(&sg_open_exclusive_lock, flags);
sdp->exclude = val;
spin_unlock_irqrestore(&sg_open_exclusive_lock, flags);
return val;
}
static int sfds_list_empty(Sg_device *sdp)
{
unsigned long flags;
int ret;
spin_lock_irqsave(&sdp->sfd_lock, flags);
read_lock_irqsave(&sg_index_lock, flags);
ret = list_empty(&sdp->sfds);
spin_unlock_irqrestore(&sdp->sfd_lock, flags);
read_unlock_irqrestore(&sg_index_lock, flags);
return ret;
}
@ -239,6 +265,7 @@ sg_open(struct inode *inode, struct file *filp)
struct request_queue *q;
Sg_device *sdp;
Sg_fd *sfp;
int res;
int retval;
nonseekable_open(inode, filp);
@ -267,52 +294,54 @@ sg_open(struct inode *inode, struct file *filp)
goto error_out;
}
if ((flags & O_EXCL) && (O_RDONLY == (flags & O_ACCMODE))) {
retval = -EPERM; /* Can't lock it with read only access */
if (flags & O_EXCL) {
if (O_RDONLY == (flags & O_ACCMODE)) {
retval = -EPERM; /* Can't lock it with read only access */
goto error_out;
}
if (!sfds_list_empty(sdp) && (flags & O_NONBLOCK)) {
retval = -EBUSY;
goto error_out;
}
res = wait_event_interruptible(sdp->o_excl_wait,
((!sfds_list_empty(sdp) || get_exclude(sdp)) ? 0 : set_exclude(sdp, 1)));
if (res) {
retval = res; /* -ERESTARTSYS because signal hit process */
goto error_out;
}
} else if (get_exclude(sdp)) { /* some other fd has an exclusive lock on dev */
if (flags & O_NONBLOCK) {
retval = -EBUSY;
goto error_out;
}
res = wait_event_interruptible(sdp->o_excl_wait, !get_exclude(sdp));
if (res) {
retval = res; /* -ERESTARTSYS because signal hit process */
goto error_out;
}
}
if (sdp->detached) {
retval = -ENODEV;
goto error_out;
}
if (flags & O_NONBLOCK) {
if (flags & O_EXCL) {
if (!down_write_trylock(&sdp->o_sem)) {
retval = -EBUSY;
goto error_out;
}
} else {
if (!down_read_trylock(&sdp->o_sem)) {
retval = -EBUSY;
goto error_out;
}
}
} else {
if (flags & O_EXCL)
down_write(&sdp->o_sem);
else
down_read(&sdp->o_sem);
}
/* Since write lock is held, no need to check sfd_list */
if (flags & O_EXCL)
sdp->exclude = 1; /* used by release lock */
if (sfds_list_empty(sdp)) { /* no existing opens on this device */
sdp->sgdebug = 0;
q = sdp->device->request_queue;
sdp->sg_tablesize = queue_max_segments(q);
}
sfp = sg_add_sfp(sdp, dev);
if (!IS_ERR(sfp))
if ((sfp = sg_add_sfp(sdp, dev)))
filp->private_data = sfp;
/* retval is already provably zero at this point because of the
* check after retval = scsi_autopm_get_device(sdp->device))
*/
else {
retval = PTR_ERR(sfp);
if (flags & O_EXCL) {
sdp->exclude = 0; /* undo if error */
up_write(&sdp->o_sem);
} else
up_read(&sdp->o_sem);
set_exclude(sdp, 0); /* undo if error */
wake_up_interruptible(&sdp->o_excl_wait);
}
retval = -ENOMEM;
goto error_out;
}
retval = 0;
error_out:
if (retval) {
scsi_autopm_put_device(sdp->device);
sdp_put:
scsi_device_put(sdp->device);
@ -329,18 +358,13 @@ sg_release(struct inode *inode, struct file *filp)
{
Sg_device *sdp;
Sg_fd *sfp;
int excl;
if ((!(sfp = (Sg_fd *) filp->private_data)) || (!(sdp = sfp->parentdp)))
return -ENXIO;
SCSI_LOG_TIMEOUT(3, printk("sg_release: %s\n", sdp->disk->disk_name));
excl = sdp->exclude;
sdp->exclude = 0;
if (excl)
up_write(&sdp->o_sem);
else
up_read(&sdp->o_sem);
set_exclude(sdp, 0);
wake_up_interruptible(&sdp->o_excl_wait);
scsi_autopm_put_device(sdp->device);
kref_put(&sfp->f_ref, sg_remove_sfp);
@ -1391,9 +1415,8 @@ static Sg_device *sg_alloc(struct gendisk *disk, struct scsi_device *scsidp)
disk->first_minor = k;
sdp->disk = disk;
sdp->device = scsidp;
spin_lock_init(&sdp->sfd_lock);
INIT_LIST_HEAD(&sdp->sfds);
init_rwsem(&sdp->o_sem);
init_waitqueue_head(&sdp->o_excl_wait);
sdp->sg_tablesize = queue_max_segments(q);
sdp->index = k;
kref_init(&sdp->d_ref);
@ -1526,13 +1549,11 @@ static void sg_remove(struct device *cl_dev, struct class_interface *cl_intf)
/* Need a write lock to set sdp->detached. */
write_lock_irqsave(&sg_index_lock, iflags);
spin_lock(&sdp->sfd_lock);
sdp->detached = 1;
list_for_each_entry(sfp, &sdp->sfds, sfd_siblings) {
wake_up_interruptible(&sfp->read_wait);
kill_fasync(&sfp->async_qp, SIGPOLL, POLL_HUP);
}
spin_unlock(&sdp->sfd_lock);
write_unlock_irqrestore(&sg_index_lock, iflags);
sysfs_remove_link(&scsidp->sdev_gendev.kobj, "generic");
@ -2043,7 +2064,7 @@ sg_add_sfp(Sg_device * sdp, int dev)
sfp = kzalloc(sizeof(*sfp), GFP_ATOMIC | __GFP_NOWARN);
if (!sfp)
return ERR_PTR(-ENOMEM);
return NULL;
init_waitqueue_head(&sfp->read_wait);
rwlock_init(&sfp->rq_list_lock);
@ -2057,13 +2078,9 @@ sg_add_sfp(Sg_device * sdp, int dev)
sfp->cmd_q = SG_DEF_COMMAND_Q;
sfp->keep_orphan = SG_DEF_KEEP_ORPHAN;
sfp->parentdp = sdp;
spin_lock_irqsave(&sdp->sfd_lock, iflags);
if (sdp->detached) {
spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
return ERR_PTR(-ENODEV);
}
write_lock_irqsave(&sg_index_lock, iflags);
list_add_tail(&sfp->sfd_siblings, &sdp->sfds);
spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
write_unlock_irqrestore(&sg_index_lock, iflags);
SCSI_LOG_TIMEOUT(3, printk("sg_add_sfp: sfp=0x%p\n", sfp));
if (unlikely(sg_big_buff != def_reserved_size))
sg_big_buff = def_reserved_size;
@ -2113,9 +2130,10 @@ static void sg_remove_sfp(struct kref *kref)
struct sg_device *sdp = sfp->parentdp;
unsigned long iflags;
spin_lock_irqsave(&sdp->sfd_lock, iflags);
write_lock_irqsave(&sg_index_lock, iflags);
list_del(&sfp->sfd_siblings);
spin_unlock_irqrestore(&sdp->sfd_lock, iflags);
write_unlock_irqrestore(&sg_index_lock, iflags);
wake_up_interruptible(&sdp->o_excl_wait);
INIT_WORK(&sfp->ew.work, sg_remove_sfp_usercontext);
schedule_work(&sfp->ew.work);
@ -2502,7 +2520,7 @@ static int sg_proc_seq_show_devstrs(struct seq_file *s, void *v)
return 0;
}
/* must be called while holding sg_index_lock and sfd_lock */
/* must be called while holding sg_index_lock */
static void sg_proc_debug_helper(struct seq_file *s, Sg_device * sdp)
{
int k, m, new_interface, blen, usg;
@ -2587,26 +2605,22 @@ static int sg_proc_seq_show_debug(struct seq_file *s, void *v)
read_lock_irqsave(&sg_index_lock, iflags);
sdp = it ? sg_lookup_dev(it->index) : NULL;
if (sdp) {
spin_lock(&sdp->sfd_lock);
if (!list_empty(&sdp->sfds)) {
struct scsi_device *scsidp = sdp->device;
if (sdp && !list_empty(&sdp->sfds)) {
struct scsi_device *scsidp = sdp->device;
seq_printf(s, " >>> device=%s ", sdp->disk->disk_name);
if (sdp->detached)
seq_printf(s, "detached pending close ");
else
seq_printf
(s, "scsi%d chan=%d id=%d lun=%d em=%d",
scsidp->host->host_no,
scsidp->channel, scsidp->id,
scsidp->lun,
scsidp->host->hostt->emulated);
seq_printf(s, " sg_tablesize=%d excl=%d\n",
sdp->sg_tablesize, sdp->exclude);
sg_proc_debug_helper(s, sdp);
}
spin_unlock(&sdp->sfd_lock);
seq_printf(s, " >>> device=%s ", sdp->disk->disk_name);
if (sdp->detached)
seq_printf(s, "detached pending close ");
else
seq_printf
(s, "scsi%d chan=%d id=%d lun=%d em=%d",
scsidp->host->host_no,
scsidp->channel, scsidp->id,
scsidp->lun,
scsidp->host->hostt->emulated);
seq_printf(s, " sg_tablesize=%d excl=%d\n",
sdp->sg_tablesize, get_exclude(sdp));
sg_proc_debug_helper(s, sdp);
}
read_unlock_irqrestore(&sg_index_lock, iflags);
return 0;

View file

@ -1960,6 +1960,7 @@ cntrlEnd:
BCM_DEBUG_PRINT(Adapter, DBG_TYPE_OTHERS, OSAL_DBG, DBG_LVL_ALL, "Called IOCTL_BCM_GET_DEVICE_DRIVER_INFO\n");
memset(&DevInfo, 0, sizeof(DevInfo));
DevInfo.MaxRDMBufferSize = BUFFER_4K;
DevInfo.u32DSDStartOffset = EEPROM_CALPARAM_START;
DevInfo.u32RxAlignmentCorrection = 0;

View file

@ -155,6 +155,9 @@ static ssize_t oz_cdev_write(struct file *filp, const char __user *buf,
struct oz_app_hdr *app_hdr;
struct oz_serial_ctx *ctx;
if (count > sizeof(ei->data) - sizeof(*elt) - sizeof(*app_hdr))
return -EINVAL;
spin_lock_bh(&g_cdev.lock);
pd = g_cdev.active_pd;
if (pd)

View file

@ -1063,7 +1063,7 @@ static int mp_wait_modem_status(struct sb_uart_state *state, unsigned long arg)
static int mp_get_count(struct sb_uart_state *state, struct serial_icounter_struct *icnt)
{
struct serial_icounter_struct icount;
struct serial_icounter_struct icount = {};
struct sb_uart_icount cnow;
struct sb_uart_port *port = state->port;

View file

@ -570,6 +570,7 @@ int wvlan_uil_put_info(struct uilreq *urq, struct wl_private *lp)
ltv_t *pLtv;
bool_t ltvAllocated = FALSE;
ENCSTRCT sEncryption;
size_t len;
#ifdef USE_WDS
hcf_16 hcfPort = HCF_PORT_0;
@ -686,7 +687,8 @@ int wvlan_uil_put_info(struct uilreq *urq, struct wl_private *lp)
break;
case CFG_CNF_OWN_NAME:
memset(lp->StationName, 0, sizeof(lp->StationName));
memcpy((void *)lp->StationName, (void *)&pLtv->u.u8[2], (size_t)pLtv->u.u16[0]);
len = min_t(size_t, pLtv->u.u16[0], sizeof(lp->StationName));
strlcpy(lp->StationName, &pLtv->u.u8[2], len);
pLtv->u.u16[0] = CNV_INT_TO_LITTLE(pLtv->u.u16[0]);
break;
case CFG_CNF_LOAD_BALANCING:
@ -1783,6 +1785,7 @@ int wvlan_set_station_nickname(struct net_device *dev,
{
struct wl_private *lp = wl_priv(dev);
unsigned long flags;
size_t len;
int ret = 0;
/*------------------------------------------------------------------------*/
@ -1793,8 +1796,8 @@ int wvlan_set_station_nickname(struct net_device *dev,
wl_lock(lp, &flags);
memset(lp->StationName, 0, sizeof(lp->StationName));
memcpy(lp->StationName, extra, wrqu->data.length);
len = min_t(size_t, wrqu->data.length, sizeof(lp->StationName));
strlcpy(lp->StationName, extra, len);
/* Commit the adapter parameters */
wl_apply(lp);

View file

@ -134,10 +134,10 @@ static int pscsi_pmode_enable_hba(struct se_hba *hba, unsigned long mode_flag)
* pSCSI Host ID and enable for phba mode
*/
sh = scsi_host_lookup(phv->phv_host_id);
if (IS_ERR(sh)) {
if (!sh) {
pr_err("pSCSI: Unable to locate SCSI Host for"
" phv_host_id: %d\n", phv->phv_host_id);
return PTR_ERR(sh);
return -EINVAL;
}
phv->phv_lld_host = sh;
@ -515,10 +515,10 @@ static int pscsi_configure_device(struct se_device *dev)
sh = phv->phv_lld_host;
} else {
sh = scsi_host_lookup(pdv->pdv_host_id);
if (IS_ERR(sh)) {
if (!sh) {
pr_err("pSCSI: Unable to locate"
" pdv_host_id: %d\n", pdv->pdv_host_id);
return PTR_ERR(sh);
return -EINVAL;
}
}
} else {

View file

@ -263,6 +263,11 @@ sbc_setup_write_same(struct se_cmd *cmd, unsigned char *flags, struct sbc_ops *o
sectors, cmd->se_dev->dev_attrib.max_write_same_len);
return TCM_INVALID_CDB_FIELD;
}
/* We always have ANC_SUP == 0 so setting ANCHOR is always an error */
if (flags[0] & 0x10) {
pr_warn("WRITE SAME with ANCHOR not supported\n");
return TCM_INVALID_CDB_FIELD;
}
/*
* Special case for WRITE_SAME w/ UNMAP=1 that ends up getting
* translated into block discard requests within backend code.

View file

@ -82,6 +82,9 @@ static int target_xcopy_locate_se_dev_e4(struct se_cmd *se_cmd, struct xcopy_op
mutex_lock(&g_device_mutex);
list_for_each_entry(se_dev, &g_device_list, g_dev_node) {
if (!se_dev->dev_attrib.emulate_3pc)
continue;
memset(&tmp_dev_wwn[0], 0, XCOPY_NAA_IEEE_REGEX_LEN);
target_xcopy_gen_naa_ieee(se_dev, &tmp_dev_wwn[0]);
@ -357,6 +360,7 @@ struct xcopy_pt_cmd {
struct se_cmd se_cmd;
struct xcopy_op *xcopy_op;
struct completion xpt_passthrough_sem;
unsigned char sense_buffer[TRANSPORT_SENSE_BUFFER];
};
static struct se_port xcopy_pt_port;
@ -675,7 +679,8 @@ static int target_xcopy_issue_pt_cmd(struct xcopy_pt_cmd *xpt_cmd)
pr_debug("target_xcopy_issue_pt_cmd(): SCSI status: 0x%02x\n",
se_cmd->scsi_status);
return 0;
return (se_cmd->scsi_status) ? -EINVAL : 0;
}
static int target_xcopy_read_source(
@ -708,7 +713,7 @@ static int target_xcopy_read_source(
(unsigned long long)src_lba, src_sectors, length);
transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
DMA_FROM_DEVICE, 0, NULL);
DMA_FROM_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
xop->src_pt_cmd = xpt_cmd;
rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, src_dev, &cdb[0],
@ -768,7 +773,7 @@ static int target_xcopy_write_destination(
(unsigned long long)dst_lba, dst_sectors, length);
transport_init_se_cmd(se_cmd, &xcopy_pt_tfo, NULL, length,
DMA_TO_DEVICE, 0, NULL);
DMA_TO_DEVICE, 0, &xpt_cmd->sense_buffer[0]);
xop->dst_pt_cmd = xpt_cmd;
rc = target_xcopy_setup_pt_cmd(xpt_cmd, xop, dst_dev, &cdb[0],
@ -884,30 +889,42 @@ out:
sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
{
struct se_device *dev = se_cmd->se_dev;
struct xcopy_op *xop = NULL;
unsigned char *p = NULL, *seg_desc;
unsigned int list_id, list_id_usage, sdll, inline_dl, sa;
sense_reason_t ret = TCM_INVALID_PARAMETER_LIST;
int rc;
unsigned short tdll;
if (!dev->dev_attrib.emulate_3pc) {
pr_err("EXTENDED_COPY operation explicitly disabled\n");
return TCM_UNSUPPORTED_SCSI_OPCODE;
}
sa = se_cmd->t_task_cdb[1] & 0x1f;
if (sa != 0x00) {
pr_err("EXTENDED_COPY(LID4) not supported\n");
return TCM_UNSUPPORTED_SCSI_OPCODE;
}
xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL);
if (!xop) {
pr_err("Unable to allocate xcopy_op\n");
return TCM_OUT_OF_RESOURCES;
}
xop->xop_se_cmd = se_cmd;
p = transport_kmap_data_sg(se_cmd);
if (!p) {
pr_err("transport_kmap_data_sg() failed in target_do_xcopy\n");
kfree(xop);
return TCM_OUT_OF_RESOURCES;
}
list_id = p[0];
if (list_id != 0x00) {
pr_err("XCOPY with non zero list_id: 0x%02x\n", list_id);
goto out;
}
list_id_usage = (p[1] & 0x18);
list_id_usage = (p[1] & 0x18) >> 3;
/*
* Determine TARGET DESCRIPTOR LIST LENGTH + SEGMENT DESCRIPTOR LIST LENGTH
*/
@ -920,13 +937,6 @@ sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
goto out;
}
xop = kzalloc(sizeof(struct xcopy_op), GFP_KERNEL);
if (!xop) {
pr_err("Unable to allocate xcopy_op\n");
goto out;
}
xop->xop_se_cmd = se_cmd;
pr_debug("Processing XCOPY with list_id: 0x%02x list_id_usage: 0x%02x"
" tdll: %hu sdll: %u inline_dl: %u\n", list_id, list_id_usage,
tdll, sdll, inline_dl);
@ -935,6 +945,17 @@ sense_reason_t target_do_xcopy(struct se_cmd *se_cmd)
if (rc <= 0)
goto out;
if (xop->src_dev->dev_attrib.block_size !=
xop->dst_dev->dev_attrib.block_size) {
pr_err("XCOPY: Non matching src_dev block_size: %u + dst_dev"
" block_size: %u currently unsupported\n",
xop->src_dev->dev_attrib.block_size,
xop->dst_dev->dev_attrib.block_size);
xcopy_pt_undepend_remotedev(xop);
ret = TCM_LOGICAL_UNIT_COMMUNICATION_FAILURE;
goto out;
}
pr_debug("XCOPY: Processed %d target descriptors, length: %u\n", rc,
rc * XCOPY_TARGET_DESC_LEN);
seg_desc = &p[16];
@ -957,7 +978,7 @@ out:
if (p)
transport_kunmap_data_sg(se_cmd);
kfree(xop);
return TCM_INVALID_CDB_FIELD;
return ret;
}
static sense_reason_t target_rcr_operating_parameters(struct se_cmd *se_cmd)

View file

@ -1499,7 +1499,7 @@ static void atmel_set_ops(struct uart_port *port)
/*
* Get ip name usart or uart
*/
static int atmel_get_ip_name(struct uart_port *port)
static void atmel_get_ip_name(struct uart_port *port)
{
struct atmel_uart_port *atmel_port = to_atmel_uart_port(port);
int name = UART_GET_IP_NAME(port);
@ -1518,10 +1518,7 @@ static int atmel_get_ip_name(struct uart_port *port)
atmel_port->is_usart = false;
} else {
dev_err(port->dev, "Not supported ip name, set to uart\n");
return -EINVAL;
}
return 0;
}
/*
@ -2405,9 +2402,7 @@ static int atmel_serial_probe(struct platform_device *pdev)
/*
* Get port name of usart or uart
*/
ret = atmel_get_ip_name(&port->uart);
if (ret < 0)
goto err_add_port;
atmel_get_ip_name(&port->uart);
return 0;

View file

@ -642,16 +642,29 @@ static int uio_mmap_physical(struct vm_area_struct *vma)
{
struct uio_device *idev = vma->vm_private_data;
int mi = uio_find_mem_index(vma);
struct uio_mem *mem;
if (mi < 0)
return -EINVAL;
mem = idev->info->mem + mi;
if (vma->vm_end - vma->vm_start > mem->size)
return -EINVAL;
vma->vm_ops = &uio_physical_vm_ops;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
/*
* We cannot use the vm_iomap_memory() helper here,
* because vma->vm_pgoff is the map index we looked
* up above in uio_find_mem_index(), rather than an
* actual page offset into the mmap.
*
* So we just do the physical mmap without a page
* offset.
*/
return remap_pfn_range(vma,
vma->vm_start,
idev->info->mem[mi].addr >> PAGE_SHIFT,
mem->addr >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
}
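As the comment in the hunk explains, uio_mmap_physical() treats vma->vm_pgoff as a memory-region index rather than a byte offset, so user space selects region N by passing N * page_size as the mmap offset and maps at most the region's size. A hedged usage sketch; the device node name and the 4 KiB length are assumptions:

/* Hedged usage sketch (user-space): map UIO memory region 0. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        int fd = open("/dev/uio0", O_RDWR);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        long page = sysconf(_SC_PAGESIZE);
        size_t len = 4096;      /* must not exceed the region's size */
        int region = 0;         /* selects idev->info->mem[0] */

        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
                       fd, (off_t)region * page);
        if (p == MAP_FAILED) {
                perror("mmap");
                close(fd);
                return 1;
        }

        /* ... access the device memory through p ... */

        munmap(p, len);
        close(fd);
        return 0;
}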

View file

@ -904,6 +904,7 @@ static struct usb_device_id id_table_combined [] = {
{ USB_DEVICE(FTDI_VID, FTDI_LUMEL_PD12_PID) },
/* Crucible Devices */
{ USB_DEVICE(FTDI_VID, FTDI_CT_COMET_PID) },
{ USB_DEVICE(FTDI_VID, FTDI_Z3X_PID) },
{ } /* Terminating entry */
};

View file

@ -1307,3 +1307,9 @@
* Manufacturer: Crucible Technologies
*/
#define FTDI_CT_COMET_PID 0x8e08
/*
* Product: Z3X Box
* Manufacturer: Smart GSM Team
*/
#define FTDI_Z3X_PID 0x0011

View file

@ -4,11 +4,6 @@
* Copyright (C) 2001-2007 Greg Kroah-Hartman (greg@kroah.com)
* Copyright (C) 2003 IBM Corp.
*
* Copyright (C) 2009, 2013 Frank Schäfer <fschaefer.oss@googlemail.com>
* - fixes, improvements and documentation for the baud rate encoding methods
* Copyright (C) 2013 Reinhard Max <max@suse.de>
* - fixes and improvements for the divisor based baud rate encoding method
*
* Original driver for 2.2.x by anonymous
*
* This program is free software; you can redistribute it and/or
@ -134,18 +129,10 @@ MODULE_DEVICE_TABLE(usb, id_table);
enum pl2303_type {
type_0, /* H version ? */
type_1, /* H version ? */
HX_TA, /* HX(A) / X(A) / TA version */ /* TODO: improve */
HXD_EA_RA_SA, /* HXD / EA / RA / SA version */ /* TODO: improve */
TB, /* TB version */
HX_CLONE, /* Cheap and less functional clone of the HX chip */
type_0, /* don't know the difference between type 0 and */
type_1, /* type 1, until someone from prolific tells us... */
HX, /* HX version of the pl2303 chip */
};
/*
* NOTE: don't know the difference between type 0 and type 1,
* until someone from Prolific tells us...
* TODO: distinguish between X/HX, TA and HXD, EA, RA, SA variants
*/
struct pl2303_serial_private {
enum pl2303_type type;
@ -185,7 +172,6 @@ static int pl2303_startup(struct usb_serial *serial)
{
struct pl2303_serial_private *spriv;
enum pl2303_type type = type_0;
char *type_str = "unknown (treating as type_0)";
unsigned char *buf;
spriv = kzalloc(sizeof(*spriv), GFP_KERNEL);
@ -198,53 +184,15 @@ static int pl2303_startup(struct usb_serial *serial)
return -ENOMEM;
}
if (serial->dev->descriptor.bDeviceClass == 0x02) {
if (serial->dev->descriptor.bDeviceClass == 0x02)
type = type_0;
type_str = "type_0";
} else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40) {
/*
* NOTE: The bcdDevice version is the only difference between
* the device descriptors of the X/HX, HXD, EA, RA, SA, TA, TB
*/
if (le16_to_cpu(serial->dev->descriptor.bcdDevice) == 0x300) {
/* Check if the device is a clone */
pl2303_vendor_read(0x9494, 0, serial, buf);
/*
* NOTE: Not sure if this read is really needed.
* The HX returns 0x00, the clone 0x02, but the Windows
* driver seems to ignore the value and continues.
*/
pl2303_vendor_write(0x0606, 0xaa, serial);
pl2303_vendor_read(0x8686, 0, serial, buf);
if (buf[0] != 0xaa) {
type = HX_CLONE;
type_str = "X/HX clone (limited functionality)";
} else {
type = HX_TA;
type_str = "X/HX/TA";
}
pl2303_vendor_write(0x0606, 0x00, serial);
} else if (le16_to_cpu(serial->dev->descriptor.bcdDevice)
== 0x400) {
type = HXD_EA_RA_SA;
type_str = "HXD/EA/RA/SA";
} else if (le16_to_cpu(serial->dev->descriptor.bcdDevice)
== 0x500) {
type = TB;
type_str = "TB";
} else {
dev_info(&serial->interface->dev,
"unknown/unsupported device type\n");
kfree(spriv);
kfree(buf);
return -ENODEV;
}
} else if (serial->dev->descriptor.bDeviceClass == 0x00
|| serial->dev->descriptor.bDeviceClass == 0xFF) {
else if (serial->dev->descriptor.bMaxPacketSize0 == 0x40)
type = HX;
else if (serial->dev->descriptor.bDeviceClass == 0x00)
type = type_1;
type_str = "type_1";
}
dev_dbg(&serial->interface->dev, "device type: %s\n", type_str);
else if (serial->dev->descriptor.bDeviceClass == 0xFF)
type = type_1;
dev_dbg(&serial->interface->dev, "device type: %d\n", type);
spriv->type = type;
usb_set_serial_data(serial, spriv);
@ -259,10 +207,10 @@ static int pl2303_startup(struct usb_serial *serial)
pl2303_vendor_read(0x8383, 0, serial, buf);
pl2303_vendor_write(0, 1, serial);
pl2303_vendor_write(1, 0, serial);
if (type == type_0 || type == type_1)
pl2303_vendor_write(2, 0x24, serial);
else
if (type == HX)
pl2303_vendor_write(2, 0x44, serial);
else
pl2303_vendor_write(2, 0x24, serial);
kfree(buf);
return 0;
@ -316,174 +264,65 @@ static int pl2303_set_control_lines(struct usb_serial_port *port, u8 value)
return retval;
}
static int pl2303_baudrate_encode_direct(int baud, enum pl2303_type type,
u8 buf[4])
static void pl2303_encode_baudrate(struct tty_struct *tty,
struct usb_serial_port *port,
u8 buf[4])
{
/*
* NOTE: Only the values defined in baud_sup are supported !
* => if unsupported values are set, the PL2303 uses 9600 baud instead
* => HX clones just don't work at unsupported baud rates < 115200 baud,
* for baud rates > 115200 they run at 115200 baud
*/
const int baud_sup[] = { 75, 150, 300, 600, 1200, 1800, 2400, 3600,
4800, 7200, 9600, 14400, 19200, 28800, 38400,
57600, 115200, 230400, 460800, 614400, 921600,
1228800, 2457600, 3000000, 6000000, 12000000 };
/*
* NOTE: With the exception of type_0/1 devices, the following
* additional baud rates are supported (tested with HX rev. 3A only):
* 110*, 56000*, 128000, 134400, 161280, 201600, 256000*, 268800,
* 403200, 806400. (*: not HX and HX clones)
*
* Maximum values: HXD, TB: 12000000; HX, TA: 6000000;
* type_0+1: 1228800; RA: 921600; HX clones, SA: 115200
*
* As long as we are not using this encoding method for anything else
* than the type_0+1, HX and HX clone chips, there is no point in
* complicating the code to support them.
*/
4800, 7200, 9600, 14400, 19200, 28800, 38400,
57600, 115200, 230400, 460800, 500000, 614400,
921600, 1228800, 2457600, 3000000, 6000000 };
struct usb_serial *serial = port->serial;
struct pl2303_serial_private *spriv = usb_get_serial_data(serial);
int baud;
int i;
/*
* NOTE: Only the values defined in baud_sup are supported!
* => if unsupported values are set, the PL2303 seems to use
* 9600 baud (at least my PL2303X always does)
*/
baud = tty_get_baud_rate(tty);
dev_dbg(&port->dev, "baud requested = %d\n", baud);
if (!baud)
return;
/* Set baudrate to nearest supported value */
for (i = 0; i < ARRAY_SIZE(baud_sup); ++i) {
if (baud_sup[i] > baud)
break;
}
if (i == ARRAY_SIZE(baud_sup))
baud = baud_sup[i - 1];
else if (i > 0 && (baud_sup[i] - baud) > (baud - baud_sup[i - 1]))
baud = baud_sup[i - 1];
else
baud = baud_sup[i];
/* Respect the chip type specific baud rate limits */
/*
* FIXME: as long as we don't know how to distinguish between the
* HXD, EA, RA, and SA chip variants, allow the max. value of 12M.
*/
if (type == HX_TA)
baud = min_t(int, baud, 6000000);
else if (type == type_0 || type == type_1)
/* type_0, type_1 only support up to 1228800 baud */
if (spriv->type != HX)
baud = min_t(int, baud, 1228800);
else if (type == HX_CLONE)
baud = min_t(int, baud, 115200);
/* Direct (standard) baud rate encoding method */
put_unaligned_le32(baud, buf);
return baud;
}
static int pl2303_baudrate_encode_divisor(int baud, enum pl2303_type type,
u8 buf[4])
{
/*
* Divisor based baud rate encoding method
*
* NOTE: HX clones do NOT support this method.
* It's not clear if the type_0/1 chips support it.
*
* divisor = 12MHz * 32 / baudrate = 2^A * B
*
* with
*
* A = buf[1] & 0x0e
* B = buf[0] + (buf[1] & 0x01) << 8
*
* Special cases:
* => 8 < B < 16: device seems to work not properly
* => B <= 8: device uses the max. value B = 512 instead
*/
unsigned int A, B;
/*
* NOTE: The Windows driver allows maximum baud rates of 110% of the
* specified maximium value.
* Quick tests with early (2004) HX (rev. A) chips suggest, that even
* higher baud rates (up to the maximum of 24M baud !) are working fine,
* but that should really be tested carefully in "real life" scenarios
* before removing the upper limit completely.
* Baud rates smaller than the specified 75 baud are definitely working
* fine.
*/
if (type == type_0 || type == type_1)
baud = min_t(int, baud, 1228800 * 1.1);
else if (type == HX_TA)
baud = min_t(int, baud, 6000000 * 1.1);
else if (type == HXD_EA_RA_SA)
/* HXD, EA: 12Mbps; RA: 1Mbps; SA: 115200 bps */
/*
* FIXME: as long as we don't know how to distinguish between
* these chip variants, allow the max. of these values
*/
baud = min_t(int, baud, 12000000 * 1.1);
else if (type == TB)
baud = min_t(int, baud, 12000000 * 1.1);
/* Determine factors A and B */
A = 0;
B = 12000000 * 32 / baud; /* 12MHz */
B <<= 1; /* Add one bit for rounding */
while (B > (512 << 1) && A <= 14) {
A += 2;
B >>= 2;
}
if (A > 14) { /* max. divisor = min. baudrate reached */
A = 14;
B = 512;
/* => ~45.78 baud */
if (baud <= 115200) {
put_unaligned_le32(baud, buf);
} else {
B = (B + 1) >> 1; /* Round the last bit */
}
/* Handle special cases */
if (B == 512)
B = 0; /* also: 1 to 8 */
else if (B < 16)
/*
* NOTE: With the current algorithm this happens
* only for A=0 and means that the min. divisor
* (respectively: the max. baudrate) is reached.
* Apparently the formula for higher speeds is:
* baudrate = 12M * 32 / (2^buf[1]) / buf[0]
*/
B = 16; /* => 24 MBaud */
/* Encode the baud rate */
buf[3] = 0x80; /* Select divisor encoding method */
buf[2] = 0;
buf[1] = (A & 0x0e); /* A */
buf[1] |= ((B & 0x100) >> 8); /* MSB of B */
buf[0] = B & 0xff; /* 8 LSBs of B */
/* Calculate the actual/resulting baud rate */
if (B <= 8)
B = 512;
baud = 12000000 * 32 / ((1 << A) * B);
unsigned tmp = 12000000 * 32 / baud;
buf[3] = 0x80;
buf[2] = 0;
buf[1] = (tmp >= 256);
while (tmp >= 256) {
tmp >>= 2;
buf[1] <<= 1;
}
buf[0] = tmp;
}
return baud;
}
static void pl2303_encode_baudrate(struct tty_struct *tty,
struct usb_serial_port *port,
enum pl2303_type type,
u8 buf[4])
{
int baud;
baud = tty_get_baud_rate(tty);
dev_dbg(&port->dev, "baud requested = %d\n", baud);
if (!baud)
return;
/*
* There are two methods for setting/encoding the baud rate
* 1) Direct method: encodes the baud rate value directly
* => supported by all chip types
* 2) Divisor based method: encodes a divisor to a base value (12MHz*32)
* => not supported by HX clones (and likely type_0/1 chips)
*
* NOTE: Although the divisor based baud rate encoding method is much
* more flexible, some of the standard baud rate values can not be
* realized exactly. But the difference is very small (max. 0.2%) and
* the device likely uses the same baud rate generator for both methods
* so that there is likley no difference.
*/
if (type == type_0 || type == type_1 || type == HX_CLONE)
baud = pl2303_baudrate_encode_direct(baud, type, buf);
else
baud = pl2303_baudrate_encode_divisor(baud, type, buf);
/* Save resulting baud rate */
tty_encode_baud_rate(tty, baud, baud);
dev_dbg(&port->dev, "baud set = %d\n", baud);
@ -540,8 +379,8 @@ static void pl2303_set_termios(struct tty_struct *tty,
dev_dbg(&port->dev, "data bits = %d\n", buf[6]);
}
/* For reference: buf[0]:buf[3] baud rate value */
pl2303_encode_baudrate(tty, port, spriv->type, buf);
/* For reference buf[0]:buf[3] baud rate value */
pl2303_encode_baudrate(tty, port, &buf[0]);
/* For reference buf[4]=0 is 1 stop bits */
/* For reference buf[4]=1 is 1.5 stop bits */
@ -618,10 +457,10 @@ static void pl2303_set_termios(struct tty_struct *tty,
dev_dbg(&port->dev, "0xa1:0x21:0:0 %d - %7ph\n", i, buf);
if (C_CRTSCTS(tty)) {
if (spriv->type == type_0 || spriv->type == type_1)
pl2303_vendor_write(0x0, 0x41, serial);
else
if (spriv->type == HX)
pl2303_vendor_write(0x0, 0x61, serial);
else
pl2303_vendor_write(0x0, 0x41, serial);
} else {
pl2303_vendor_write(0x0, 0x0, serial);
}
@ -658,7 +497,7 @@ static int pl2303_open(struct tty_struct *tty, struct usb_serial_port *port)
struct pl2303_serial_private *spriv = usb_get_serial_data(serial);
int result;
if (spriv->type == type_0 || spriv->type == type_1) {
if (spriv->type != HX) {
usb_clear_halt(serial->dev, port->write_urb->pipe);
usb_clear_halt(serial->dev, port->read_urb->pipe);
} else {
@ -833,7 +672,6 @@ static void pl2303_break_ctl(struct tty_struct *tty, int break_state)
result = usb_control_msg(serial->dev, usb_sndctrlpipe(serial->dev, 0),
BREAK_REQUEST, BREAK_REQUEST_TYPE, state,
0, NULL, 0, 100);
/* NOTE: HX clones don't support sending breaks, -EPIPE is returned */
if (result)
dev_err(&port->dev, "error sending break = %d\n", result);
}
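The simplified pl2303_encode_baudrate() above writes rates up to 115200 directly into buf[0..3] and otherwise encodes a divisor of the 12 MHz * 32 reference clock. A user-space sketch of that packing, mirroring the hunk; it is an illustration only, not the driver itself:

/* Hedged sketch (user-space): pack a baud rate the way the new code does. */
#include <stdint.h>
#include <stdio.h>

static void pl2303_pack_baud(unsigned int baud, uint8_t buf[4])
{
        if (baud <= 115200) {
                /* direct little-endian encoding of the rate */
                buf[0] = baud & 0xff;
                buf[1] = (baud >> 8) & 0xff;
                buf[2] = (baud >> 16) & 0xff;
                buf[3] = (baud >> 24) & 0xff;
        } else {
                unsigned int tmp = 12000000 * 32 / baud;

                buf[3] = 0x80;          /* select divisor encoding */
                buf[2] = 0;
                buf[1] = (tmp >= 256);
                while (tmp >= 256) {
                        tmp >>= 2;
                        buf[1] <<= 1;
                }
                buf[0] = tmp;
        }
}

int main(void)
{
        uint8_t buf[4];

        pl2303_pack_baud(115200, buf);
        printf("115200  -> %02x %02x %02x %02x\n",
               buf[0], buf[1], buf[2], buf[3]);

        pl2303_pack_baud(3000000, buf);
        printf("3000000 -> %02x %02x %02x %02x\n",
               buf[0], buf[1], buf[2], buf[3]);
        return 0;
}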

View file

@ -1056,7 +1056,7 @@ vhost_scsi_handle_vq(struct vhost_scsi *vs, struct vhost_virtqueue *vq)
if (data_direction != DMA_NONE) {
ret = vhost_scsi_map_iov_to_sgl(cmd,
&vq->iov[data_first], data_num,
data_direction == DMA_TO_DEVICE);
data_direction == DMA_FROM_DEVICE);
if (unlikely(ret)) {
vq_err(vq, "Failed to map iov to sgl\n");
goto err_free;

View file

@ -361,37 +361,13 @@ void au1100fb_fb_rotate(struct fb_info *fbi, int angle)
int au1100fb_fb_mmap(struct fb_info *fbi, struct vm_area_struct *vma)
{
struct au1100fb_device *fbdev;
unsigned int len;
unsigned long start=0, off;
fbdev = to_au1100fb_device(fbi);
if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
return -EINVAL;
}
start = fbdev->fb_phys & PAGE_MASK;
len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
off = vma->vm_pgoff << PAGE_SHIFT;
if ((vma->vm_end - vma->vm_start + off) > len) {
return -EINVAL;
}
off += start;
vma->vm_pgoff = off >> PAGE_SHIFT;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pgprot_val(vma->vm_page_prot) |= (6 << 9); //CCA=6
if (io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot)) {
return -EAGAIN;
}
return 0;
return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
}
static struct fb_ops au1100fb_ops =

View file

@ -1233,34 +1233,13 @@ static int au1200fb_fb_blank(int blank_mode, struct fb_info *fbi)
* method mainly to allow the use of the TLB streaming flag (CCA=6)
*/
static int au1200fb_fb_mmap(struct fb_info *info, struct vm_area_struct *vma)
{
unsigned int len;
unsigned long start=0, off;
struct au1200fb_device *fbdev = info->par;
if (vma->vm_pgoff > (~0UL >> PAGE_SHIFT)) {
return -EINVAL;
}
start = fbdev->fb_phys & PAGE_MASK;
len = PAGE_ALIGN((start & ~PAGE_MASK) + fbdev->fb_len);
off = vma->vm_pgoff << PAGE_SHIFT;
if ((vma->vm_end - vma->vm_start + off) > len) {
return -EINVAL;
}
off += start;
vma->vm_pgoff = off >> PAGE_SHIFT;
vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
pgprot_val(vma->vm_page_prot) |= _CACHE_MASK; /* CCA=7 */
return io_remap_pfn_range(vma, vma->vm_start, off >> PAGE_SHIFT,
vma->vm_end - vma->vm_start,
vma->vm_page_prot);
return vm_iomap_memory(vma, fbdev->fb_phys, fbdev->fb_len);
}
static void set_global(u_int cmd, struct au1200_lcd_global_regs_t *pdata)

Some files were not shown because too many files have changed in this diff. Show more