Merge branch 'linus' into x86/core
Conflicts:
	arch/x86/mm/ioremap.c

Signed-off-by: Ingo Molnar <mingo@elte.hu>
This commit is contained in:
Commit
ae94b8075a
Some file diffs are not shown because of their large size.
@@ -148,9 +148,9 @@ tcp_available_congestion_control - STRING
 	but not loaded.
 
 tcp_base_mss - INTEGER
-	The initial value of search_low to be used by Packetization Layer
-	Path MTU Discovery (MTU probing).  If MTU probing is enabled,
-	this is the inital MSS used by the connection.
+	The initial value of search_low to be used by the packetization layer
+	Path MTU discovery (MTU probing).  If MTU probing is enabled,
+	this is the initial MSS used by the connection.
 
 tcp_congestion_control - STRING
 	Set the congestion control algorithm to be used for new
@@ -186,9 +186,8 @@ tcp_frto - INTEGER
 	where packet loss is typically due to random radio interference
 	rather than intermediate router congestion.  F-RTO is sender-side
 	only modification. Therefore it does not require any support from
-	the peer, but in a typical case, however, where wireless link is
-	the local access link and most of the data flows downlink, the
-	faraway servers should have F-RTO enabled to take advantage of it.
+	the peer.
 
 	If set to 1, basic version is enabled.  2 enables SACK enhanced
 	F-RTO if flow uses SACK.  The basic version can be used also when
 	SACK is in use though scenario(s) with it exists where F-RTO
@@ -276,7 +275,7 @@ tcp_mem - vector of 3 INTEGERs: min, pressure, max
 	memory.
 
 tcp_moderate_rcvbuf - BOOLEAN
-	If set, TCP performs receive buffer autotuning, attempting to
+	If set, TCP performs receive buffer auto-tuning, attempting to
 	automatically size the buffer (no greater than tcp_rmem[2]) to
 	match the size required by the path for full throughput.  Enabled by
 	default.
@@ -336,7 +335,7 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max
 	pressure.
 	Default: 8K
 
-	default: default size of receive buffer used by TCP sockets.
+	default: initial size of receive buffer used by TCP sockets.
 	This value overrides net.core.rmem_default used by other protocols.
 	Default: 87380 bytes. This value results in window of 65535 with
 	default setting of tcp_adv_win_scale and tcp_app_win:0 and a bit
@@ -344,8 +343,10 @@ tcp_rmem - vector of 3 INTEGERs: min, default, max
 
 	max: maximal size of receive buffer allowed for automatically
 	selected receiver buffers for TCP socket. This value does not override
-	net.core.rmem_max, "static" selection via SO_RCVBUF does not use this.
-	Default: 87380*2 bytes.
+	net.core.rmem_max.  Calling setsockopt() with SO_RCVBUF disables
+	automatic tuning of that socket's receive buffer size, in which
+	case this value is ignored.
+	Default: between 87380B and 4MB, depending on RAM size.
 
 tcp_sack - BOOLEAN
 	Enable select acknowledgments (SACKS).
@@ -358,7 +359,7 @@ tcp_slow_start_after_idle - BOOLEAN
 	Default: 1
 
 tcp_stdurg - BOOLEAN
-	Use the Host requirements interpretation of the TCP urg pointer field.
+	Use the Host requirements interpretation of the TCP urgent pointer field.
 	Most hosts use the older BSD interpretation, so if you turn this on
 	Linux might not communicate correctly with them.
 	Default: FALSE
@@ -371,12 +372,12 @@ tcp_synack_retries - INTEGER
 tcp_syncookies - BOOLEAN
 	Only valid when the kernel was compiled with CONFIG_SYNCOOKIES
 	Send out syncookies when the syn backlog queue of a socket
-	overflows. This is to prevent against the common 'syn flood attack'
+	overflows. This is to prevent against the common 'SYN flood attack'
 	Default: FALSE
 
 	Note, that syncookies is fallback facility.
 	It MUST NOT be used to help highly loaded servers to stand
-	against legal connection rate. If you see synflood warnings
+	against legal connection rate. If you see SYN flood warnings
 	in your logs, but investigation shows that they occur
 	because of overload with legal connections, you should tune
 	another parameters until this warning disappear.
@@ -386,7 +387,7 @@ tcp_syncookies - BOOLEAN
 	to use TCP extensions, can result in serious degradation
 	of some services (f.e. SMTP relaying), visible not by you,
 	but your clients and relays, contacting you. While you see
-	synflood warnings in logs not being really flooded, your server
+	SYN flood warnings in logs not being really flooded, your server
 	is seriously misconfigured.
 
 tcp_syn_retries - INTEGER
@@ -419,19 +420,21 @@ tcp_window_scaling - BOOLEAN
 	Enable window scaling as defined in RFC1323.
 
 tcp_wmem - vector of 3 INTEGERs: min, default, max
-	min: Amount of memory reserved for send buffers for TCP socket.
+	min: Amount of memory reserved for send buffers for TCP sockets.
 	Each TCP socket has rights to use it due to fact of its birth.
 	Default: 4K
 
-	default: Amount of memory allowed for send buffers for TCP socket
-	by default. This value overrides net.core.wmem_default used
-	by other protocols, it is usually lower than net.core.wmem_default.
+	default: initial size of send buffer used by TCP sockets.  This
+	value overrides net.core.wmem_default used by other protocols.
+	It is usually lower than net.core.wmem_default.
 	Default: 16K
 
-	max: Maximal amount of memory allowed for automatically selected
-	send buffers for TCP socket. This value does not override
-	net.core.wmem_max, "static" selection via SO_SNDBUF does not use this.
-	Default: 128K
+	max: Maximal amount of memory allowed for automatically tuned
+	send buffers for TCP sockets.  This value does not override
+	net.core.wmem_max.  Calling setsockopt() with SO_SNDBUF disables
+	automatic tuning of that socket's send buffer size, in which case
+	this value is ignored.
+	Default: between 64K and 4MB, depending on RAM size.
 
 tcp_workaround_signed_windows - BOOLEAN
 	If set, assume no receipt of a window scaling option means the
@@ -1060,24 +1063,193 @@ bridge-nf-filter-pppoe-tagged - BOOLEAN
 	Default: 1
 
 
+proc/sys/net/sctp/* Variables:
+
+addip_enable - BOOLEAN
+	Enable or disable extension of Dynamic Address Reconfiguration
+	(ADD-IP) functionality specified in RFC5061.  This extension provides
+	the ability to dynamically add and remove new addresses for the SCTP
+	associations.
+
+	1: Enable extension.
+
+	0: Disable extension.
+
+	Default: 0
+
+addip_noauth_enable - BOOLEAN
+	Dynamic Address Reconfiguration (ADD-IP) requires the use of
+	authentication to protect the operations of adding or removing new
+	addresses.  This requirement is mandated so that unauthorized hosts
+	would not be able to hijack associations.  However, older
+	implementations may not have implemented this requirement while
+	allowing the ADD-IP extension.  For reasons of interoperability,
+	we provide this variable to control the enforcement of the
+	authentication requirement.
+
+	1: Allow ADD-IP extension to be used without authentication.  This
+	   should only be set in a closed environment for interoperability
+	   with older implementations.
+
+	0: Enforce the authentication requirement
+
+	Default: 0
+
+auth_enable - BOOLEAN
+	Enable or disable Authenticated Chunks extension.  This extension
+	provides the ability to send and receive authenticated chunks and is
+	required for secure operation of Dynamic Address Reconfiguration
+	(ADD-IP) extension.
+
+	1: Enable this extension.
+	0: Disable this extension.
+
+	Default: 0
+
+prsctp_enable - BOOLEAN
+	Enable or disable the Partial Reliability extension (RFC3758) which
+	is used to notify peers that a given DATA should no longer be expected.
+
+	1: Enable extension
+	0: Disable
+
+	Default: 1
+
+max_burst - INTEGER
+	The limit of the number of new packets that can be initially sent.  It
+	controls how bursty the generated traffic can be.
+
+	Default: 4
+
+association_max_retrans - INTEGER
+	Set the maximum number for retransmissions that an association can
+	attempt deciding that the remote end is unreachable.  If this value
+	is exceeded, the association is terminated.
+
+	Default: 10
+
+max_init_retransmits - INTEGER
+	The maximum number of retransmissions of INIT and COOKIE-ECHO chunks
+	that an association will attempt before declaring the destination
+	unreachable and terminating.
+
+	Default: 8
+
+path_max_retrans - INTEGER
+	The maximum number of retransmissions that will be attempted on a given
+	path.  Once this threshold is exceeded, the path is considered
+	unreachable, and new traffic will use a different path when the
+	association is multihomed.
+
+	Default: 5
+
+rto_initial - INTEGER
+	The initial round trip timeout value in milliseconds that will be used
+	in calculating round trip times.  This is the initial time interval
+	for retransmissions.
+
+	Default: 3000
+
+rto_max - INTEGER
+	The maximum value (in milliseconds) of the round trip timeout.  This
+	is the largest time interval that can elapse between retransmissions.
+
+	Default: 60000
+
+rto_min - INTEGER
+	The minimum value (in milliseconds) of the round trip timeout.  This
+	is the smallest time interval that can elapse between retransmissions.
+
+	Default: 1000
+
+hb_interval - INTEGER
+	The interval (in milliseconds) between HEARTBEAT chunks.  These chunks
+	are sent at the specified interval on idle paths to probe the state of
+	a given path between 2 associations.
+
+	Default: 30000
+
+sack_timeout - INTEGER
+	The amount of time (in milliseconds) that the implementation will wait
+	to send a SACK.
+
+	Default: 200
+
+valid_cookie_life - INTEGER
+	The default lifetime of the SCTP cookie (in milliseconds).  The cookie
+	is used during association establishment.
+
+	Default: 60000
+
+cookie_preserve_enable - BOOLEAN
+	Enable or disable the ability to extend the lifetime of the SCTP cookie
+	that is used during the establishment phase of SCTP association
+
+	1: Enable cookie lifetime extension.
+	0: Disable
+
+	Default: 1
+
+rcvbuf_policy - INTEGER
+	Determines if the receive buffer is attributed to the socket or to
+	association.  SCTP supports the capability to create multiple
+	associations on a single socket.  When using this capability, it is
+	possible that a single stalled association that's buffering a lot
+	of data may block other associations from delivering their data by
+	consuming all of the receive buffer space.  To work around this,
+	the rcvbuf_policy could be set to attribute the receiver buffer space
+	to each association instead of the socket.  This prevents the described
+	blocking.
+
+	1: rcvbuf space is per association
+	0: rcvbuf space is per socket
+
+	Default: 0
+
+sndbuf_policy - INTEGER
+	Similar to rcvbuf_policy above, this applies to send buffer space.
+
+	1: Send buffer is tracked per association
+	0: Send buffer is tracked per socket.
+
+	Default: 0
+
+sctp_mem - vector of 3 INTEGERs: min, pressure, max
+	Number of pages allowed for queueing by all SCTP sockets.
+
+	min: Below this number of pages SCTP is not bothered about its
+	memory appetite. When amount of memory allocated by SCTP exceeds
+	this number, SCTP starts to moderate memory usage.
+
+	pressure: This value was introduced to follow format of tcp_mem.
+
+	max: Number of pages allowed for queueing by all SCTP sockets.
+
+	Default is calculated at boot time from amount of available memory.
+
+sctp_rmem - vector of 3 INTEGERs: min, default, max
+	See tcp_rmem for a description.
+
+sctp_wmem - vector of 3 INTEGERs: min, default, max
+	See tcp_wmem for a description.
+
 UNDOCUMENTED:
 
+/proc/sys/net/core/*
 dev_weight FIXME
 discovery_slots FIXME
 discovery_timeout FIXME
 fast_poll_increase FIXME
 ip6_queue_maxlen FIXME
 lap_keepalive_time FIXME
 lo_cong FIXME
 max_baud_rate FIXME
+
+/proc/sys/net/unix/*
 max_dgram_qlen FIXME
+
+/proc/sys/net/irda/*
+fast_poll_increase FIXME
+warn_noreply_time FIXME
+discovery_slots FIXME
+slot_timeout FIXME
+max_baud_rate FIXME
+discovery_timeout FIXME
+lap_keepalive_time FIXME
 max_noreply_time FIXME
 max_tx_data_size FIXME
 max_tx_window FIXME
 min_tx_turn_time FIXME
 mod_cong FIXME
 no_cong FIXME
 no_cong_thresh FIXME
 slot_timeout FIXME
 warn_noreply_time FIXME
 

@@ -76,6 +76,8 @@ struct of_device* of_platform_device_create(struct device_node *np,
 		return NULL;
 
 	dev->dma_mask = 0xffffffffUL;
+	dev->dev.coherent_dma_mask = DMA_32BIT_MASK;
+
 	dev->dev.bus = &of_platform_bus_type;
 
 	/* We do not fill the DMA ops for platform devices by default.

@@ -1,2 +1,3 @@
 vsyscall.lds
 vsyscall_32.lds
+vmlinux.lds

@@ -300,6 +300,29 @@ void __iomem *ioremap_cache(resource_size_t phys_addr, unsigned long size)
 }
 EXPORT_SYMBOL(ioremap_cache);
 
+static void __iomem *ioremap_default(resource_size_t phys_addr,
+					unsigned long size)
+{
+	unsigned long flags;
+	void *ret;
+	int err;
+
+	/*
+	 * - WB for WB-able memory and no other conflicting mappings
+	 * - UC_MINUS for non-WB-able memory with no other conflicting mappings
+	 * - Inherit from conflicting mappings otherwise
+	 */
+	err = reserve_memtype(phys_addr, phys_addr + size, -1, &flags);
+	if (err < 0)
+		return NULL;
+
+	ret = (void *) __ioremap_caller(phys_addr, size, flags,
+					__builtin_return_address(0));
+
+	free_memtype(phys_addr, phys_addr + size);
+	return (void __iomem *)ret;
+}
+
 /**
  * iounmap - Free a IO remapping
  *	@addr: virtual address from ioremap_*
@@ -365,7 +388,7 @@ void *xlate_dev_mem_ptr(unsigned long phys)
 	if (page_is_ram(start >> PAGE_SHIFT))
 		return __va(phys);
 
-	addr = (void __force *)ioremap(start, PAGE_SIZE);
+	addr = (void __force *)ioremap_default(start, PAGE_SIZE);
 	if (addr)
 		addr = (void *)((unsigned long)addr | (phys & ~PAGE_MASK));
 

@@ -117,6 +117,7 @@ static int chainiv_init(struct crypto_tfm *tfm)
 static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
 {
 	int queued;
+	int err = ctx->err;
 
 	if (!ctx->queue.qlen) {
 		smp_mb__before_clear_bit();
@@ -131,7 +132,7 @@ static int async_chainiv_schedule_work(struct async_chainiv_ctx *ctx)
 	BUG_ON(!queued);
 
 out:
-	return ctx->err;
+	return err;
 }
 
 static int async_chainiv_postpone_request(struct skcipher_givcrypt_request *req)
@@ -227,6 +228,7 @@ static void async_chainiv_do_postponed(struct work_struct *work)
 					postponed);
 	struct skcipher_givcrypt_request *req;
 	struct ablkcipher_request *subreq;
+	int err;
 
 	/* Only handle one request at a time to avoid hogging keventd. */
 	spin_lock_bh(&ctx->lock);
@@ -241,7 +243,11 @@ static void async_chainiv_do_postponed(struct work_struct *work)
 	subreq = skcipher_givcrypt_reqctx(req);
 	subreq->base.flags |= CRYPTO_TFM_REQ_MAY_SLEEP;
 
-	async_chainiv_givencrypt_tail(req);
+	err = async_chainiv_givencrypt_tail(req);
+
+	local_bh_disable();
+	skcipher_givcrypt_complete(req, err);
+	local_bh_enable();
 }
 
 static int async_chainiv_init(struct crypto_tfm *tfm)

@@ -586,12 +586,6 @@ static void test_cipher(char *algo, int enc,
 	j = 0;
 	for (i = 0; i < tcount; i++) {
 
-		data = kzalloc(template[i].ilen, GFP_KERNEL);
-		if (!data)
-			continue;
-
-		memcpy(data, template[i].input, template[i].ilen);
-
 		if (template[i].iv)
 			memcpy(iv, template[i].iv, MAX_IVLEN);
 		else
@@ -613,11 +607,9 @@ static void test_cipher(char *algo, int enc,
 				printk("setkey() failed flags=%x\n",
 				       crypto_ablkcipher_get_flags(tfm));
 
-			if (!template[i].fail) {
-				kfree(data);
+			if (!template[i].fail)
 				goto out;
-			}
 		}
 
 		temp = 0;
 		sg_init_table(sg, template[i].np);

@@ -29,14 +29,16 @@
 enum {
 	ATA_ACPI_FILTER_SETXFER	= 1 << 0,
 	ATA_ACPI_FILTER_LOCK	= 1 << 1,
+	ATA_ACPI_FILTER_DIPM	= 1 << 2,
 
 	ATA_ACPI_FILTER_DEFAULT	= ATA_ACPI_FILTER_SETXFER |
-				  ATA_ACPI_FILTER_LOCK,
+				  ATA_ACPI_FILTER_LOCK |
+				  ATA_ACPI_FILTER_DIPM,
 };
 
 static unsigned int ata_acpi_gtf_filter = ATA_ACPI_FILTER_DEFAULT;
 module_param_named(acpi_gtf_filter, ata_acpi_gtf_filter, int, 0644);
-MODULE_PARM_DESC(acpi_gtf_filter, "filter mask for ACPI _GTF commands, set to filter out (0x1=set xfermode, 0x2=lock/freeze lock)");
+MODULE_PARM_DESC(acpi_gtf_filter, "filter mask for ACPI _GTF commands, set to filter out (0x1=set xfermode, 0x2=lock/freeze lock, 0x4=DIPM)");
 
 #define NO_PORT_MULT		0xffff
 #define SATA_ADR(root, pmp)	(((root) << 16) | (pmp))
@@ -195,6 +197,10 @@ static void ata_acpi_handle_hotplug(struct ata_port *ap, struct ata_device *dev,
 		/* This device does not support hotplug */
 		return;
 
+	if (event == ACPI_NOTIFY_BUS_CHECK ||
+	    event == ACPI_NOTIFY_DEVICE_CHECK)
+		status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
+
 	spin_lock_irqsave(ap->lock, flags);
 
 	switch (event) {
@@ -202,7 +208,6 @@ static void ata_acpi_handle_hotplug(struct ata_port *ap, struct ata_device *dev,
 	case ACPI_NOTIFY_DEVICE_CHECK:
 		ata_ehi_push_desc(ehi, "ACPI event");
 
-		status = acpi_evaluate_integer(handle, "_STA", NULL, &sta);
 		if (ACPI_FAILURE(status)) {
 			ata_port_printk(ap, KERN_ERR,
 				"acpi: failed to determine bay status (0x%x)\n",
@@ -690,6 +695,14 @@ static int ata_acpi_filter_tf(const struct ata_taskfile *tf,
 			return 1;
 	}
 
+	if (ata_acpi_gtf_filter & ATA_ACPI_FILTER_DIPM) {
+		/* inhibit enabling DIPM */
+		if (tf->command == ATA_CMD_SET_FEATURES &&
+		    tf->feature == SETFEATURES_SATA_ENABLE &&
+		    tf->nsect == SATA_DIPM)
+			return 1;
+	}
+
 	return 0;
 }

@@ -56,6 +56,7 @@ static const struct sis_laptop sis_laptop[] = {
 	{ 0x5513, 0x1043, 0x1107 },	/* ASUS A6K */
 	{ 0x5513, 0x1734, 0x105F },	/* FSC Amilo A1630 */
 	{ 0x5513, 0x1071, 0x8640 },	/* EasyNote K5305 */
+	{ 0x5513, 0x1039, 0x5513 },	/* Targa Visionary 1000 */
 	/* end marker */
 	{ 0, }
 };

@@ -755,9 +755,8 @@ static ssize_t ipmi_write(struct file *file,
 		rv = ipmi_heartbeat();
 		if (rv)
 			return rv;
-		return 1;
 	}
-	return 0;
+	return len;
 }
 
 static ssize_t ipmi_read(struct file *file,

@@ -678,12 +678,13 @@ static int rtc_do_ioctl(unsigned int cmd, unsigned long arg, int kernel)
 		if (arg != (1<<tmp))
 			return -EINVAL;
 
-		rtc_freq = arg;
-
 		spin_lock_irqsave(&rtc_lock, flags);
 		if (hpet_set_periodic_freq(arg)) {
 			spin_unlock_irqrestore(&rtc_lock, flags);
 			return 0;
 		}
+		rtc_freq = arg;
 
 		val = CMOS_READ(RTC_FREQ_SELECT) & 0xf0;
 		val |= (16 - tmp);

@@ -623,6 +623,7 @@ static struct pnp_device_id tpm_pnp_tbl[] __devinitdata = {
 	{"IFX0102", 0},		/* Infineon */
 	{"BCM0101", 0},		/* Broadcom */
 	{"NSC1200", 0},		/* National */
+	{"ICO0102", 0},		/* Intel */
 	/* Add new here */
 	{"", 0},		/* User Specified */
 	{"", 0}			/* Terminator */

@@ -1096,7 +1096,9 @@ static ssize_t show_fw_ver(struct device *dev, struct device_attribute *attr, ch
 	struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
 
 	PDBG("%s dev 0x%p\n", __func__, dev);
+	rtnl_lock();
 	lldev->ethtool_ops->get_drvinfo(lldev, &info);
+	rtnl_unlock();
 	return sprintf(buf, "%s\n", info.fw_version);
 }
 
@@ -1109,7 +1111,9 @@ static ssize_t show_hca(struct device *dev, struct device_attribute *attr,
 	struct net_device *lldev = iwch_dev->rdev.t3cdev_p->lldev;
 
 	PDBG("%s dev 0x%p\n", __func__, dev);
+	rtnl_lock();
 	lldev->ethtool_ops->get_drvinfo(lldev, &info);
+	rtnl_unlock();
 	return sprintf(buf, "%s\n", info.driver);
 }
 

@@ -2017,12 +2017,7 @@ static int __handle_issuing_new_read_requests5(struct stripe_head *sh,
 		 */
 		s->uptodate++;
 		return 0; /* uptodate + compute == disks */
-	} else if ((s->uptodate < disks - 1) &&
-		test_bit(R5_Insync, &dev->flags)) {
-		/* Note: we hold off compute operations while checks are
-		 * in flight, but we still prefer 'compute' over 'read'
-		 * hence we only read if (uptodate < * disks-1)
-		 */
+	} else if (test_bit(R5_Insync, &dev->flags)) {
 		set_bit(R5_LOCKED, &dev->flags);
 		set_bit(R5_Wantread, &dev->flags);
 		if (!test_and_set_bit(STRIPE_OP_IO, &sh->ops.pending))

@@ -152,6 +152,7 @@ static chipio_t pnp_info;
 static const struct pnp_device_id nsc_ircc_pnp_table[] = {
 	{ .id = "NSC6001", .driver_data = 0 },
 	{ .id = "IBM0071", .driver_data = 0 },
+	{ .id = "HWPC224", .driver_data = 0 },
 	{ }
 };
 
@@ -1546,6 +1546,7 @@ static int via_ircc_net_open(struct net_device *dev)
 			IRDA_WARNING("%s, unable to allocate dma2=%d\n",
 				     driver_name, self->io.dma2);
 			free_irq(self->io.irq, self);
+			free_dma(self->io.dma);
 			return -EAGAIN;
 		}
 	}
@@ -1606,6 +1607,8 @@ static int via_ircc_net_close(struct net_device *dev)
 	EnAllInt(iobase, OFF);
 	free_irq(self->io.irq, dev);
 	free_dma(self->io.dma);
+	if (self->io.dma2 != self->io.dma)
+		free_dma(self->io.dma2);
 
 	return 0;
 }

@@ -602,6 +602,12 @@ static int tun_set_iff(struct net *net, struct file *file, struct ifreq *ifr)
 	tun->attached = 1;
 	get_net(dev_net(tun->dev));
 
+	/* Make sure persistent devices do not get stuck in
+	 * xoff state.
+	 */
+	if (netif_running(tun->dev))
+		netif_wake_queue(tun->dev);
+
 	strcpy(ifr->ifr_name, tun->dev->name);
 	return 0;
 
@@ -777,7 +777,9 @@ static int hostap_cs_suspend(struct pcmcia_device *link)
 	int dev_open = 0;
+	struct hostap_interface *iface = NULL;
 
-	if (dev)
-		iface = netdev_priv(dev);
+	if (!dev)
+		return -ENODEV;
+
+	iface = netdev_priv(dev);
 
 	PDEBUG(DEBUG_EXTRA, "%s: CS_EVENT_PM_SUSPEND\n", dev_info);
@@ -798,7 +800,9 @@ static int hostap_cs_resume(struct pcmcia_device *link)
 	int dev_open = 0;
+	struct hostap_interface *iface = NULL;
 
-	if (dev)
-		iface = netdev_priv(dev);
+	if (!dev)
+		return -ENODEV;
+
+	iface = netdev_priv(dev);
 
 	PDEBUG(DEBUG_EXTRA, "%s: CS_EVENT_PM_RESUME\n", dev_info);

@@ -449,7 +449,7 @@ static void iwl3945_dbg_report_frame(struct iwl3945_priv *priv,
 
 	if (print_summary) {
 		char *title;
-		u32 rate;
+		int rate;
 
 		if (hundred)
 			title = "100Frames";
@@ -487,7 +487,7 @@ static void iwl3945_dbg_report_frame(struct iwl3945_priv *priv,
 		 * but you can hack it to show more, if you'd like to. */
 		if (dataframe)
 			IWL_DEBUG_RX("%s: mhd=0x%04x, dst=0x%02x, "
-				"len=%u, rssi=%d, chnl=%d, rate=%u, \n",
+				"len=%u, rssi=%d, chnl=%d, rate=%d, \n",
 				title, fc, header->addr1[5],
 				length, rssi, channel, rate);
 		else {

@@ -567,11 +567,11 @@ static int lbs_process_bss(struct bss_descriptor *bss,
 	pos += 8;
 
 	/* beacon interval is 2 bytes long */
-	bss->beaconperiod = le16_to_cpup((void *) pos);
+	bss->beaconperiod = get_unaligned_le16(pos);
 	pos += 2;
 
 	/* capability information is 2 bytes long */
-	bss->capability = le16_to_cpup((void *) pos);
+	bss->capability = get_unaligned_le16(pos);
 	lbs_deb_scan("process_bss: capabilities 0x%04x\n", bss->capability);
 	pos += 2;
 
@@ -731,6 +731,17 @@ static int rt2400pci_init_registers(struct rt2x00_dev *rt2x00dev)
 			   (rt2x00dev->rx->data_size / 128));
 	rt2x00pci_register_write(rt2x00dev, CSR9, reg);
 
+	rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+	rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 0);
+	rt2x00_set_field32(&reg, CSR14_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCN, 0);
+	rt2x00_set_field32(&reg, CSR14_TCFP, 0);
+	rt2x00_set_field32(&reg, CSR14_TATIMW, 0);
+	rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, CSR14_CFP_COUNT_PRELOAD, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCM_PRELOAD, 0);
+	rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+
 	rt2x00pci_register_write(rt2x00dev, CNT3, 0x3f080000);
 
 	rt2x00pci_register_read(rt2x00dev, ARCSR0, &reg);

@@ -824,6 +824,17 @@ static int rt2500pci_init_registers(struct rt2x00_dev *rt2x00dev)
 	rt2x00_set_field32(&reg, CSR11_CW_SELECT, 0);
 	rt2x00pci_register_write(rt2x00dev, CSR11, reg);
 
+	rt2x00pci_register_read(rt2x00dev, CSR14, &reg);
+	rt2x00_set_field32(&reg, CSR14_TSF_COUNT, 0);
+	rt2x00_set_field32(&reg, CSR14_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCN, 0);
+	rt2x00_set_field32(&reg, CSR14_TCFP, 0);
+	rt2x00_set_field32(&reg, CSR14_TATIMW, 0);
+	rt2x00_set_field32(&reg, CSR14_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, CSR14_CFP_COUNT_PRELOAD, 0);
+	rt2x00_set_field32(&reg, CSR14_TBCM_PRELOAD, 0);
+	rt2x00pci_register_write(rt2x00dev, CSR14, reg);
+
 	rt2x00pci_register_write(rt2x00dev, CNT3, 0);
 
 	rt2x00pci_register_read(rt2x00dev, TXCSR8, &reg);

@@ -801,6 +801,13 @@ static int rt2500usb_init_registers(struct rt2x00_dev *rt2x00dev)
 	rt2x00_set_field16(&reg, TXRX_CSR8_BBP_ID1_VALID, 0);
 	rt2500usb_register_write(rt2x00dev, TXRX_CSR8, reg);
 
+	rt2500usb_register_read(rt2x00dev, TXRX_CSR19, &reg);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TSF_COUNT, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TSF_SYNC, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_TBCN, 0);
+	rt2x00_set_field16(&reg, TXRX_CSR19_BEACON_GEN, 0);
+	rt2500usb_register_write(rt2x00dev, TXRX_CSR19, reg);
+
 	rt2500usb_register_write(rt2x00dev, TXRX_CSR21, 0xe78f);
 	rt2500usb_register_write(rt2x00dev, MAC_CSR9, 0xff1d);
 
@@ -1201,6 +1201,15 @@ static int rt61pci_init_registers(struct rt2x00_dev *rt2x00dev)
 	rt2x00_set_field32(&reg, TXRX_CSR8_ACK_CTS_54MBS, 42);
 	rt2x00pci_register_write(rt2x00dev, TXRX_CSR8, reg);
 
+	rt2x00pci_register_read(rt2x00dev, TXRX_CSR9, &reg);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_INTERVAL, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TIMESTAMP_COMPENSATE, 0);
+	rt2x00pci_register_write(rt2x00dev, TXRX_CSR9, reg);
+
 	rt2x00pci_register_write(rt2x00dev, TXRX_CSR15, 0x0000000f);
 
 	rt2x00pci_register_write(rt2x00dev, MAC_CSR6, 0x00000fff);

@@ -1006,6 +1006,15 @@ static int rt73usb_init_registers(struct rt2x00_dev *rt2x00dev)
 	rt2x00_set_field32(&reg, TXRX_CSR8_ACK_CTS_54MBS, 42);
 	rt73usb_register_write(rt2x00dev, TXRX_CSR8, reg);
 
+	rt73usb_register_read(rt2x00dev, TXRX_CSR9, &reg);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_INTERVAL, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_TICKING, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TSF_SYNC, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TBTT_ENABLE, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_BEACON_GEN, 0);
+	rt2x00_set_field32(&reg, TXRX_CSR9_TIMESTAMP_COMPENSATE, 0);
+	rt73usb_register_write(rt2x00dev, TXRX_CSR9, reg);
+
 	rt73usb_register_write(rt2x00dev, TXRX_CSR15, 0x0000000f);
 
 	rt73usb_register_read(rt2x00dev, MAC_CSR6, &reg);

@@ -765,6 +765,7 @@ static void zd_op_remove_interface(struct ieee80211_hw *hw,
 {
 	struct zd_mac *mac = zd_hw_mac(hw);
 	mac->type = IEEE80211_IF_TYPE_INVALID;
+	zd_set_beacon_interval(&mac->chip, 0);
 	zd_write_mac_addr(&mac->chip, NULL);
 }
 
@@ -64,6 +64,7 @@ static struct usb_device_id usb_ids[] = {
 	{ USB_DEVICE(0x079b, 0x0062), .driver_info = DEVICE_ZD1211B },
 	{ USB_DEVICE(0x1582, 0x6003), .driver_info = DEVICE_ZD1211B },
 	{ USB_DEVICE(0x050d, 0x705c), .driver_info = DEVICE_ZD1211B },
+	{ USB_DEVICE(0x083a, 0xe506), .driver_info = DEVICE_ZD1211B },
 	{ USB_DEVICE(0x083a, 0x4505), .driver_info = DEVICE_ZD1211B },
 	{ USB_DEVICE(0x0471, 0x1236), .driver_info = DEVICE_ZD1211B },
 	{ USB_DEVICE(0x13b1, 0x0024), .driver_info = DEVICE_ZD1211B },

@@ -101,9 +101,9 @@ static int rio_device_probe(struct device *dev)
 		if (error >= 0) {
 			rdev->driver = rdrv;
 			error = 0;
 		} else
 			rio_dev_put(rdev);
 		}
 	}
 	return error;
 }

@@ -537,6 +537,13 @@ int ssb_pcicore_dev_irqvecs_enable(struct ssb_pcicore *pc,
 	int err = 0;
 	u32 tmp;
 
+	if (dev->bus->bustype != SSB_BUSTYPE_PCI) {
+		/* This SSB device is not on a PCI host-bus. So the IRQs are
+		 * not routed through the PCI core.
+		 * So we must not enable routing through the PCI core. */
+		goto out;
+	}
+
 	if (!pdev)
 		goto out;
 	bus = pdev->bus;

@@ -8,7 +8,7 @@
  * Bus Glue for AMD Alchemy Au1xxx
  *
  * Written by Christopher Hoover <ch@hpl.hp.com>
- * Based on fragments of previous driver by Rusell King et al.
+ * Based on fragments of previous driver by Russell King et al.
  *
  * Modified for LH7A404 from ohci-sa1111.c
  *  by Durgesh Pattamatta <pattamattad@sharpsec.com>

@@ -8,7 +8,7 @@
  * Bus Glue for Sharp LH7A404
  *
  * Written by Christopher Hoover <ch@hpl.hp.com>
- * Based on fragments of previous driver by Rusell King et al.
+ * Based on fragments of previous driver by Russell King et al.
  *
  * Modified for LH7A404 from ohci-sa1111.c
  *  by Durgesh Pattamatta <pattamattad@sharpsec.com>

@@ -8,7 +8,7 @@
  * USB Bus Glue for Samsung S3C2410
  *
  * Written by Christopher Hoover <ch@hpl.hp.com>
- * Based on fragments of previous driver by Rusell King et al.
+ * Based on fragments of previous driver by Russell King et al.
  *
  * Modified for S3C2410 from ohci-sa1111.c, ohci-omap.c and ohci-lh7a40.c
  * by Ben Dooks, <ben@simtec.co.uk>
@@ -8,7 +8,7 @@
  * SA1111 Bus Glue
  *
  * Written by Christopher Hoover <ch@hpl.hp.com>
- * Based on fragments of previous driver by Rusell King et al.
+ * Based on fragments of previous driver by Russell King et al.
  *
  * This file is licenced under the GPL.
  */
@@ -1324,7 +1324,7 @@ static int fsl_diu_suspend(struct of_device *ofdev, pm_message_t state)
 {
 	struct fsl_diu_data *machine_data;
 
-	machine_data = dev_get_drvdata(&dev->dev);
+	machine_data = dev_get_drvdata(&ofdev->dev);
 	disable_lcdc(machine_data->fsl_diu_info[0]);
 
 	return 0;
@@ -1334,7 +1334,7 @@ static int fsl_diu_resume(struct of_device *ofdev)
 {
 	struct fsl_diu_data *machine_data;
 
-	machine_data = dev_get_drvdata(&dev->dev);
+	machine_data = dev_get_drvdata(&ofdev->dev);
 	enable_lcdc(machine_data->fsl_diu_info[0]);
 
 	return 0;
@@ -610,7 +610,7 @@ int setup_arg_pages(struct linux_binprm *bprm,
 	bprm->exec -= stack_shift;
 
 	down_write(&mm->mmap_sem);
-	vm_flags = vma->vm_flags;
+	vm_flags = VM_STACK_FLAGS;
 
 	/*
 	 * Adjust stack execute permissions; explicitly enable for
@@ -606,7 +606,9 @@ static void dlm_init_lockres(struct dlm_ctxt *dlm,
 
 	res->last_used = 0;
 
+	spin_lock(&dlm->spinlock);
 	list_add_tail(&res->tracking, &dlm->tracking_list);
+	spin_unlock(&dlm->spinlock);
 
 	memset(res->lvb, 0, DLM_LVB_LEN);
 	memset(res->refmap, 0, sizeof(res->refmap));
@@ -1554,8 +1554,8 @@ out:
  */
 int ocfs2_file_lock(struct file *file, int ex, int trylock)
 {
-	int ret, level = ex ? LKM_EXMODE : LKM_PRMODE;
-	unsigned int lkm_flags = trylock ? LKM_NOQUEUE : 0;
+	int ret, level = ex ? DLM_LOCK_EX : DLM_LOCK_PR;
+	unsigned int lkm_flags = trylock ? DLM_LKF_NOQUEUE : 0;
 	unsigned long flags;
 	struct ocfs2_file_private *fp = file->private_data;
 	struct ocfs2_lock_res *lockres = &fp->fp_flock;
@@ -1582,7 +1582,7 @@ int ocfs2_file_lock(struct file *file, int ex, int trylock)
 	 * Get the lock at NLMODE to start - that way we
 	 * can cancel the upconvert request if need be.
 	 */
-	ret = ocfs2_lock_create(osb, lockres, LKM_NLMODE, 0);
+	ret = ocfs2_lock_create(osb, lockres, DLM_LOCK_NL, 0);
 	if (ret < 0) {
 		mlog_errno(ret);
 		goto out;
@@ -1597,7 +1597,7 @@ int ocfs2_file_lock(struct file *file, int ex, int trylock)
 	}
 
 	lockres->l_action = OCFS2_AST_CONVERT;
-	lkm_flags |= LKM_CONVERT;
+	lkm_flags |= DLM_LKF_CONVERT;
 	lockres->l_requested = level;
 	lockres_or_flags(lockres, OCFS2_LOCK_BUSY);
@@ -1664,7 +1664,7 @@ void ocfs2_file_unlock(struct file *file)
 	if (!(lockres->l_flags & OCFS2_LOCK_ATTACHED))
 		return;
 
-	if (lockres->l_level == LKM_NLMODE)
+	if (lockres->l_level == DLM_LOCK_NL)
 		return;
 
 	mlog(0, "Unlock: \"%s\" flags: 0x%lx, level: %d, act: %d\n",
@@ -1678,11 +1678,11 @@ void ocfs2_file_unlock(struct file *file)
 	lockres_or_flags(lockres, OCFS2_LOCK_BLOCKED);
 	lockres->l_blocking = DLM_LOCK_EX;
 
-	gen = ocfs2_prepare_downconvert(lockres, LKM_NLMODE);
+	gen = ocfs2_prepare_downconvert(lockres, DLM_LOCK_NL);
 	lockres_add_mask_waiter(lockres, &mw, OCFS2_LOCK_BUSY, 0);
 	spin_unlock_irqrestore(&lockres->l_lock, flags);
 
-	ret = ocfs2_downconvert_lock(osb, lockres, LKM_NLMODE, 0, gen);
+	ret = ocfs2_downconvert_lock(osb, lockres, DLM_LOCK_NL, 0, gen);
 	if (ret) {
 		mlog_errno(ret);
 		return;
@@ -2427,13 +2427,20 @@ restart:
 	if (iclog->ic_size - iclog->ic_offset < 2*sizeof(xlog_op_header_t)) {
 		xlog_state_switch_iclogs(log, iclog, iclog->ic_size);
 
-		/* If I'm the only one writing to this iclog, sync it to disk */
-		if (atomic_read(&iclog->ic_refcnt) == 1) {
+		/*
+		 * If I'm the only one writing to this iclog, sync it to disk.
+		 * We need to do an atomic compare and decrement here to avoid
+		 * racing with concurrent atomic_dec_and_lock() calls in
+		 * xlog_state_release_iclog() when there is more than one
+		 * reference to the iclog.
+		 */
+		if (!atomic_add_unless(&iclog->ic_refcnt, -1, 1)) {
+			/* we are the only one */
 			spin_unlock(&log->l_icloglock);
-			if ((error = xlog_state_release_iclog(log, iclog)))
+			error = xlog_state_release_iclog(log, iclog);
+			if (error)
 				return error;
 		} else {
-			atomic_dec(&iclog->ic_refcnt);
 			spin_unlock(&log->l_icloglock);
 		}
 		goto restart;
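The xlog hunk above replaces a racy `atomic_read()`-then-decrement with a single `atomic_add_unless(&refcnt, -1, 1)`. A minimal userspace sketch of that pattern, using C11 `<stdatomic.h>` (the names `add_unless` and `release_ref` are illustrative, not kernel API):

```c
#include <stdatomic.h>

/* Sketch of the kernel's atomic_add_unless(v, -1, 1) pattern: decrement
 * the refcount unless it is 1, in one atomic step, so "am I the last
 * holder?" and the decrement cannot race.  Returns 1 if the add
 * happened, 0 if the count already equalled u. */
static int add_unless(atomic_int *v, int a, int u)
{
	int c = atomic_load(v);

	while (c != u) {
		/* compare-and-swap; on failure c is reloaded for us */
		if (atomic_compare_exchange_weak(v, &c, c + a))
			return 1;
	}
	return 0;
}

/* Release one reference; returns 1 when the caller held the last
 * reference and should perform the final cleanup itself. */
static int release_ref(atomic_int *refcnt)
{
	if (!add_unless(refcnt, -1, 1))
		return 1;	/* we are the only one */
	return 0;
}
```

The point of the fix is that checking the count and dropping it are fused into one CAS loop, so two CPUs can no longer both observe `refcnt == 1`.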
@@ -2,7 +2,7 @@
  * Copyright (C) 2004-2006 Atmel Corporation
  *
  * Based on linux/include/asm-arm/setup.h
- * Copyright (C) 1997-1999 Russel King
+ * Copyright (C) 1997-1999 Russell King
  *
  * This program is free software; you can redistribute it and/or modify
  * it under the terms of the GNU General Public License version 2 as
@@ -339,6 +339,7 @@ struct xfrm_usersa_info {
 #define XFRM_STATE_NOPMTUDISC	4
 #define XFRM_STATE_WILDRECV	8
 #define XFRM_STATE_ICMP		16
+#define XFRM_STATE_AF_UNSPEC	32
 };
 
 struct xfrm_usersa_id {
@@ -79,7 +79,7 @@ static DEFINE_PER_CPU(struct kprobe *, kprobe_instance) = NULL;
  *
  * For such cases, we now have a blacklist
  */
-struct kprobe_blackpoint kprobe_blacklist[] = {
+static struct kprobe_blackpoint kprobe_blacklist[] = {
 	{"preempt_schedule",},
 	{NULL}    /* Terminator */
 };
@@ -670,7 +670,7 @@ static int acquire_console_semaphore_for_printk(unsigned int cpu)
 	return retval;
 }
 
-const char printk_recursion_bug_msg [] =
+static const char printk_recursion_bug_msg [] =
 	KERN_CRIT "BUG: recent printk recursion!\n";
 static int printk_recursion_bug;
@@ -925,7 +925,15 @@ void rcu_offline_cpu(int cpu)
 	spin_unlock_irqrestore(&rdp->lock, flags);
 }
 
-void __devinit rcu_online_cpu(int cpu)
+#else /* #ifdef CONFIG_HOTPLUG_CPU */
+
+void rcu_offline_cpu(int cpu)
+{
+}
+
+#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
+
+void __cpuinit rcu_online_cpu(int cpu)
 {
 	unsigned long flags;
 
@@ -934,18 +942,6 @@ void __cpuinit rcu_online_cpu(int cpu)
 	spin_unlock_irqrestore(&rcu_ctrlblk.fliplock, flags);
 }
 
-#else /* #ifdef CONFIG_HOTPLUG_CPU */
-
-void rcu_offline_cpu(int cpu)
-{
-}
-
-void __devinit rcu_online_cpu(int cpu)
-{
-}
-
-#endif /* #else #ifdef CONFIG_HOTPLUG_CPU */
-
 static void rcu_process_callbacks(struct softirq_action *unused)
 {
 	unsigned long flags;
@@ -5622,10 +5622,10 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
 	double_rq_lock(rq_src, rq_dest);
 	/* Already moved. */
 	if (task_cpu(p) != src_cpu)
-		goto out;
+		goto done;
 	/* Affinity changed (again). */
 	if (!cpu_isset(dest_cpu, p->cpus_allowed))
-		goto out;
+		goto fail;
 
 	on_rq = p->se.on_rq;
 	if (on_rq)
@@ -5636,8 +5636,9 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
 		activate_task(rq_dest, p, 0);
 		check_preempt_curr(rq_dest, p);
 	}
+done:
 	ret = 1;
-out:
+fail:
 	double_rq_unlock(rq_src, rq_dest);
 	return ret;
 }
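The `__migrate_task` hunk above splits one `out:` label into `done:` (success falls through and sets the return value) and `fail:` (skips it), while both still share the unlock. A hedged userspace sketch of the same control flow, with stand-in names since the real function needs runqueues:

```c
/* Sketch of the done:/fail: pattern: two labels share one cleanup path,
 * but only the success label sets the return value, so "already done"
 * and "cannot do" cases are no longer conflated. */
static int locked;		/* stand-in for the runqueue lock */

static int move_item(int already_moved, int allowed)
{
	int ret = 0;

	locked = 1;		/* double_rq_lock() stand-in */
	if (already_moved)
		goto done;	/* nothing to do, but report success */
	if (!allowed)
		goto fail;	/* report failure */

	/* ... the actual move would happen here ... */
done:
	ret = 1;
fail:
	locked = 0;		/* double_rq_unlock() stand-in */
	return ret;
}
```

With a single `out:` label, the "already moved" case wrongly returned 0; ordering the labels so success falls through `done:` into the shared cleanup fixes that without duplicating the unlock.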
@@ -1628,9 +1628,11 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	void **object;
 	struct kmem_cache_cpu *c;
 	unsigned long flags;
+	unsigned int objsize;
 
 	local_irq_save(flags);
 	c = get_cpu_slab(s, smp_processor_id());
+	objsize = c->objsize;
 	if (unlikely(!c->freelist || !node_match(c, node)))
 
 		object = __slab_alloc(s, gfpflags, node, addr, c);
@@ -1643,7 +1645,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	local_irq_restore(flags);
 
 	if (unlikely((gfpflags & __GFP_ZERO) && object))
-		memset(object, 0, c->objsize);
+		memset(object, 0, objsize);
 
 	return object;
 }
@@ -1359,17 +1359,17 @@ static int check_leaf(struct trie *t, struct leaf *l,
 			t->stats.semantic_match_miss++;
 #endif
 			if (err <= 0)
-				return plen;
+				return err;
 		}
 
-	return -1;
+	return 1;
 }
 
 static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp,
 			  struct fib_result *res)
 {
 	struct trie *t = (struct trie *) tb->tb_data;
-	int plen, ret = 0;
+	int ret;
 	struct node *n;
 	struct tnode *pn;
 	int pos, bits;
@@ -1393,10 +1393,7 @@ static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp,
 
 	/* Just a leaf? */
 	if (IS_LEAF(n)) {
-		plen = check_leaf(t, (struct leaf *)n, key, flp, res);
-		if (plen < 0)
-			goto failed;
-		ret = 0;
+		ret = check_leaf(t, (struct leaf *)n, key, flp, res);
 		goto found;
 	}
 
@@ -1421,11 +1418,9 @@ static int fn_trie_lookup(struct fib_table *tb, const struct flowi *flp,
 		}
 
 		if (IS_LEAF(n)) {
-			plen = check_leaf(t, (struct leaf *)n, key, flp, res);
-			if (plen < 0)
+			ret = check_leaf(t, (struct leaf *)n, key, flp, res);
+			if (ret > 0)
 				goto backtrace;
 
-			ret = 0;
 			goto found;
 		}
 
@@ -439,8 +439,8 @@ static unsigned char asn1_oid_decode(struct asn1_ctx *ctx,
 				     unsigned int *len)
 {
 	unsigned long subid;
-	unsigned int size;
 	unsigned long *optr;
+	size_t size;
 
 	size = eoc - ctx->pointer + 1;
@@ -224,7 +224,7 @@ static __init int tcpprobe_init(void)
 	if (bufsize < 0)
 		return -EINVAL;
 
-	tcp_probe.log = kcalloc(sizeof(struct tcp_log), bufsize, GFP_KERNEL);
+	tcp_probe.log = kcalloc(bufsize, sizeof(struct tcp_log), GFP_KERNEL);
 	if (!tcp_probe.log)
 		goto err0;
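The tcpprobe hunk above swaps the first two `kcalloc()` arguments into the documented `(count, size)` order. Userspace `calloc()` has the same signature, so the fix can be sketched there; `struct tcp_log_rec` and `alloc_log` are hypothetical stand-ins, not the kernel's types:

```c
#include <stdlib.h>

/* Hypothetical record standing in for the kernel's struct tcp_log. */
struct tcp_log_rec {
	unsigned int seq;
	unsigned int len;
};

/* calloc(), like kcalloc(), takes (element count, element size) in that
 * order.  Swapping them happens to allocate the same number of bytes,
 * but it defeats the allocator's per-element overflow checking and
 * misleads readers, which is why the argument order was fixed. */
static struct tcp_log_rec *alloc_log(size_t bufsize)
{
	return calloc(bufsize, sizeof(struct tcp_log_rec));
}
```

Both orders return a zeroed region of `bufsize * sizeof(struct tcp_log_rec)` bytes; the change is about correctness of intent and overflow detection, not behaviour on this call site.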
@@ -749,12 +749,12 @@ static void ipv6_del_addr(struct inet6_ifaddr *ifp)
 	}
 	write_unlock_bh(&idev->lock);
 
+	addrconf_del_timer(ifp);
+
 	ipv6_ifa_notify(RTM_DELADDR, ifp);
 
 	atomic_notifier_call_chain(&inet6addr_chain, NETDEV_DOWN, ifp);
 
-	addrconf_del_timer(ifp);
-
 	/*
 	 * Purge or update corresponding prefix
 	 *
@@ -445,7 +445,7 @@ looped_back:
 		kfree_skb(skb);
 		return -1;
 	}
-	if (!ipv6_chk_home_addr(&init_net, addr)) {
+	if (!ipv6_chk_home_addr(dev_net(skb->dst->dev), addr)) {
 		IP6_INC_STATS_BH(ip6_dst_idev(skb->dst),
 				 IPSTATS_MIB_INADDRERRORS);
 		kfree_skb(skb);
@@ -101,8 +101,8 @@ static int irda_nl_get_mode(struct sk_buff *skb, struct genl_info *info)
 
 	hdr = genlmsg_put(msg, info->snd_pid, info->snd_seq,
 			  &irda_nl_family, 0, IRDA_NL_CMD_GET_MODE);
-	if (IS_ERR(hdr)) {
-		ret = PTR_ERR(hdr);
+	if (hdr == NULL) {
+		ret = -EMSGSIZE;
 		goto err_out;
 	}
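The irnetlink hunk above fixes a classic error-convention mismatch: `genlmsg_put()` signals failure by returning `NULL`, not an `ERR_PTR()` value, so `IS_ERR(hdr)` never fired. A minimal userspace copy of the kernel's `ERR_PTR()`/`IS_ERR()` helpers makes the bug visible:

```c
#include <errno.h>
#include <stdint.h>
#include <stddef.h>

/* Userspace sketch of the kernel's ERR_PTR()/IS_ERR() helpers: small
 * negative errno values are encoded in the top page of the address
 * space.  NULL is not in that range, so IS_ERR(NULL) is false — which
 * is exactly why checking a NULL-returning function with IS_ERR()
 * silently skipped the error path. */
#define MAX_ERRNO	4095

static inline void *ERR_PTR(long error)
{
	return (void *)(intptr_t)error;
}

static inline int IS_ERR(const void *ptr)
{
	return (uintptr_t)ptr >= (uintptr_t)-MAX_ERRNO;
}
```

The rule of thumb the fix restores: check each function with the failure convention it actually uses (`== NULL` here), and pick an explicit errno such as `-EMSGSIZE` rather than `PTR_ERR(NULL)`, which would be 0, i.e. "success".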
@@ -530,8 +530,6 @@ static int ieee80211_stop(struct net_device *dev)
 		local->sta_hw_scanning = 0;
 	}
 
-	flush_workqueue(local->hw.workqueue);
-
 	sdata->u.sta.flags &= ~IEEE80211_STA_PRIVACY_INVOKED;
 	kfree(sdata->u.sta.extra_ie);
 	sdata->u.sta.extra_ie = NULL;
@@ -555,6 +553,8 @@ static int ieee80211_stop(struct net_device *dev)
 
 		ieee80211_led_radio(local, 0);
 
+		flush_workqueue(local->hw.workqueue);
+
 		tasklet_disable(&local->tx_pending_tasklet);
 		tasklet_disable(&local->tasklet);
 	}
@@ -547,15 +547,14 @@ static void ieee80211_set_associated(struct net_device *dev,
 			sdata->bss_conf.ht_bss_conf = &conf->ht_bss_conf;
 		}
 
-		netif_carrier_on(dev);
 		ifsta->flags |= IEEE80211_STA_PREV_BSSID_SET;
 		memcpy(ifsta->prev_bssid, sdata->u.sta.bssid, ETH_ALEN);
 		memcpy(wrqu.ap_addr.sa_data, sdata->u.sta.bssid, ETH_ALEN);
 		ieee80211_sta_send_associnfo(dev, ifsta);
 	} else {
-		netif_carrier_off(dev);
 		ieee80211_sta_tear_down_BA_sessions(dev, ifsta->bssid);
 		ifsta->flags &= ~IEEE80211_STA_ASSOCIATED;
+		netif_carrier_off(dev);
 		ieee80211_reset_erp_info(dev);
 
 		sdata->bss_conf.assoc_ht = 0;
@@ -569,6 +568,10 @@ static void ieee80211_set_associated(struct net_device *dev,
 
 	sdata->bss_conf.assoc = assoc;
 	ieee80211_bss_info_change_notify(sdata, changed);
 
+	if (assoc)
+		netif_carrier_on(dev);
+
 	wrqu.ap_addr.sa_family = ARPHRD_ETHER;
 	wireless_send_event(dev, SIOCGIWAP, &wrqu, NULL);
 }
@@ -3611,8 +3614,10 @@ static int ieee80211_sta_find_ibss(struct net_device *dev,
 	spin_unlock_bh(&local->sta_bss_lock);
 
 #ifdef CONFIG_MAC80211_IBSS_DEBUG
 	if (found)
 		printk(KERN_DEBUG "   sta_find_ibss: selected %s current "
-		       "%s\n", print_mac(mac, bssid), print_mac(mac2, ifsta->bssid));
+		       "%s\n", print_mac(mac, bssid),
+		       print_mac(mac2, ifsta->bssid));
 #endif /* CONFIG_MAC80211_IBSS_DEBUG */
 	if (found && memcmp(ifsta->bssid, bssid, ETH_ALEN) != 0 &&
 	    (bss = ieee80211_rx_bss_get(dev, bssid,
@@ -141,7 +141,6 @@ struct rc_pid_events_file_info {
 * rate behaviour values (lower means we should trust more what we learnt
 * about behaviour of rates, higher means we should trust more the natural
 * ordering of rates)
- * @fast_start: if Y, push high rates right after initialization
 */
 struct rc_pid_debugfs_entries {
 	struct dentry *dir;
@@ -154,7 +153,6 @@ struct rc_pid_debugfs_entries {
 	struct dentry *sharpen_factor;
 	struct dentry *sharpen_duration;
 	struct dentry *norm_offset;
-	struct dentry *fast_start;
 };
 
 void rate_control_pid_event_tx_status(struct rc_pid_event_buffer *buf,
@@ -267,9 +265,6 @@ struct rc_pid_info {
 	/* Normalization offset. */
 	unsigned int norm_offset;
 
-	/* Fast starst parameter. */
-	unsigned int fast_start;
-
 	/* Rates information. */
 	struct rc_pid_rateinfo *rinfo;
@@ -398,13 +398,25 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local)
 		return NULL;
 	}
 
+	pinfo->target = RC_PID_TARGET_PF;
+	pinfo->sampling_period = RC_PID_INTERVAL;
+	pinfo->coeff_p = RC_PID_COEFF_P;
+	pinfo->coeff_i = RC_PID_COEFF_I;
+	pinfo->coeff_d = RC_PID_COEFF_D;
+	pinfo->smoothing_shift = RC_PID_SMOOTHING_SHIFT;
+	pinfo->sharpen_factor = RC_PID_SHARPENING_FACTOR;
+	pinfo->sharpen_duration = RC_PID_SHARPENING_DURATION;
+	pinfo->norm_offset = RC_PID_NORM_OFFSET;
+	pinfo->rinfo = rinfo;
+	pinfo->oldrate = 0;
+
 	/* Sort the rates. This is optimized for the most common case (i.e.
 	 * almost-sorted CCK+OFDM rates). Kind of bubble-sort with reversed
 	 * mapping too. */
 	for (i = 0; i < sband->n_bitrates; i++) {
 		rinfo[i].index = i;
 		rinfo[i].rev_index = i;
-		if (pinfo->fast_start)
+		if (RC_PID_FAST_START)
 			rinfo[i].diff = 0;
 		else
 			rinfo[i].diff = i * pinfo->norm_offset;
@@ -425,19 +437,6 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local)
 			break;
 	}
 
-	pinfo->target = RC_PID_TARGET_PF;
-	pinfo->sampling_period = RC_PID_INTERVAL;
-	pinfo->coeff_p = RC_PID_COEFF_P;
-	pinfo->coeff_i = RC_PID_COEFF_I;
-	pinfo->coeff_d = RC_PID_COEFF_D;
-	pinfo->smoothing_shift = RC_PID_SMOOTHING_SHIFT;
-	pinfo->sharpen_factor = RC_PID_SHARPENING_FACTOR;
-	pinfo->sharpen_duration = RC_PID_SHARPENING_DURATION;
-	pinfo->norm_offset = RC_PID_NORM_OFFSET;
-	pinfo->fast_start = RC_PID_FAST_START;
-	pinfo->rinfo = rinfo;
-	pinfo->oldrate = 0;
-
 #ifdef CONFIG_MAC80211_DEBUGFS
 	de = &pinfo->dentries;
 	de->dir = debugfs_create_dir("rc80211_pid",
@@ -465,9 +464,6 @@ static void *rate_control_pid_alloc(struct ieee80211_local *local)
 	de->norm_offset = debugfs_create_u32("norm_offset",
 					     S_IRUSR | S_IWUSR, de->dir,
 					     &pinfo->norm_offset);
-	de->fast_start = debugfs_create_bool("fast_start",
-					     S_IRUSR | S_IWUSR, de->dir,
-					     &pinfo->fast_start);
 #endif
 
 	return pinfo;
@@ -479,7 +475,6 @@ static void rate_control_pid_free(void *priv)
 #ifdef CONFIG_MAC80211_DEBUGFS
 	struct rc_pid_debugfs_entries *de = &pinfo->dentries;
 
-	debugfs_remove(de->fast_start);
 	debugfs_remove(de->norm_offset);
 	debugfs_remove(de->sharpen_duration);
 	debugfs_remove(de->sharpen_factor);
@@ -844,10 +844,16 @@ static int tcp_packet(struct nf_conn *ct,
 		/* Attempt to reopen a closed/aborted connection.
 		 * Delete this connection and look up again. */
 		write_unlock_bh(&tcp_lock);
-		if (del_timer(&ct->timeout))
+
+		/* Only repeat if we can actually remove the timer.
+		 * Destruction may already be in progress in process
+		 * context and we must give it a chance to terminate.
+		 */
+		if (del_timer(&ct->timeout)) {
 			ct->timeout.function((unsigned long)ct);
-		return -NF_REPEAT;
+			return -NF_REPEAT;
+		}
+		return -NF_DROP;
 	}
 	/* Fall through */
 	case TCP_CONNTRACK_IGNORE:
 		/* Ignored packets:
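The conntrack hunk above enforces a timer-ownership rule: only the caller whose `del_timer()` actually removed the pending timer may run its callback and repeat; anyone else must assume destruction is already under way and drop. A hedged userspace sketch with a toy timer (names like `fake_timer` and `try_reopen` are illustrative, and the return values stand in for `-NF_REPEAT`/`-NF_DROP`):

```c
/* Toy timer: pending says whether the callback is still scheduled. */
struct fake_timer {
	int pending;
	int fired;
	void (*function)(struct fake_timer *);
};

/* Like the kernel's del_timer(): returns 1 only if we removed a
 * pending timer, i.e. we now own its teardown. */
static int del_timer(struct fake_timer *t)
{
	int was_pending = t->pending;

	t->pending = 0;
	return was_pending;
}

static void expire(struct fake_timer *t)
{
	t->fired++;
}

/* The reopen path: run the expiry and repeat only if we owned the
 * timer; otherwise drop, because destruction is already in progress. */
static int try_reopen(struct fake_timer *t)
{
	if (del_timer(t)) {
		t->function(t);
		return 1;	/* -NF_REPEAT analogue */
	}
	return 0;		/* -NF_DROP analogue */
}
```

Without the check, two contexts could both invoke the expiry function, which is exactly the double-destruction the fix prevents.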
@@ -584,12 +584,7 @@ list_start:
 	rcu_read_unlock();
 
 	genlmsg_end(ans_skb, data);
-
-	ret_val = genlmsg_reply(ans_skb, info);
-	if (ret_val != 0)
-		goto list_failure;
-
-	return 0;
+	return genlmsg_reply(ans_skb, info);
 
 list_retry:
 	/* XXX - this limit is a guesstimate */
@@ -386,11 +386,7 @@ static int netlbl_mgmt_listdef(struct sk_buff *skb, struct genl_info *info)
 	rcu_read_unlock();
 
 	genlmsg_end(ans_skb, data);
-
-	ret_val = genlmsg_reply(ans_skb, info);
-	if (ret_val != 0)
-		goto listdef_failure;
-	return 0;
+	return genlmsg_reply(ans_skb, info);
 
 listdef_failure_lock:
 	rcu_read_unlock();
@@ -501,11 +497,7 @@ static int netlbl_mgmt_version(struct sk_buff *skb, struct genl_info *info)
 		goto version_failure;
 
 	genlmsg_end(ans_skb, data);
-
-	ret_val = genlmsg_reply(ans_skb, info);
-	if (ret_val != 0)
-		goto version_failure;
-	return 0;
+	return genlmsg_reply(ans_skb, info);
 
 version_failure:
 	kfree_skb(ans_skb);
@@ -1107,11 +1107,7 @@ static int netlbl_unlabel_list(struct sk_buff *skb, struct genl_info *info)
 		goto list_failure;
 
 	genlmsg_end(ans_skb, data);
-
-	ret_val = genlmsg_reply(ans_skb, info);
-	if (ret_val != 0)
-		goto list_failure;
-	return 0;
+	return genlmsg_reply(ans_skb, info);
 
 list_failure:
 	kfree_skb(ans_skb);
@@ -5899,12 +5899,6 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 		return SCTP_IERROR_NO_DATA;
 	}
 
-	/* If definately accepting the DATA chunk, record its TSN, otherwise
-	 * wait for renege processing.
-	 */
-	if (SCTP_CMD_CHUNK_ULP == deliver)
-		sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_TSN, SCTP_U32(tsn));
-
 	chunk->data_accepted = 1;
 
 	/* Note: Some chunks may get overcounted (if we drop) or overcounted
@@ -5924,6 +5918,9 @@ static int sctp_eat_data(const struct sctp_association *asoc,
 	 * and discard the DATA chunk.
 	 */
 	if (ntohs(data_hdr->stream) >= asoc->c.sinit_max_instreams) {
+		/* Mark tsn as received even though we drop it */
+		sctp_add_cmd_sf(commands, SCTP_CMD_REPORT_TSN, SCTP_U32(tsn));
+
 		err = sctp_make_op_error(asoc, chunk, SCTP_ERROR_INV_STRM,
 					 &data_hdr->stream,
 					 sizeof(data_hdr->stream));
@@ -710,6 +710,11 @@ struct sctp_ulpevent *sctp_ulpevent_make_rcvmsg(struct sctp_association *asoc,
 	if (!skb)
 		goto fail;
 
+	/* Now that all memory allocations for this chunk succeeded, we
+	 * can mark it as received so the tsn_map is updated correctly.
+	 */
+	sctp_tsnmap_mark(&asoc->peer.tsn_map, ntohl(chunk->subh.data_hdr->tsn));
+
 	/* First calculate the padding, so we don't inadvertently
 	 * pass up the wrong length to the user.
 	 *
@@ -277,9 +277,8 @@ static void copy_from_user_state(struct xfrm_state *x, struct xfrm_usersa_info *
 	memcpy(&x->props.saddr, &p->saddr, sizeof(x->props.saddr));
 	x->props.flags = p->flags;
 
-	if (!x->sel.family)
+	if (!x->sel.family && !(p->flags & XFRM_STATE_AF_UNSPEC))
 		x->sel.family = p->family;
-
 }
 
 /*