This is needed to get kde-powersave to work properly on some G4
PowerBooks.
From: Olaf Hering <olh@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Thadeu Lima de Souza Cascardo <cascardo@holoscopio.com>
Signed-off-by: Samuel R. C. Vale <srcvale@holoscopio.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Michal Simek <monstr@monstr.eu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
The WM831x devices feature two software-controlled status LEDs with
hardware-assisted blinking.
The device can also autonomously control the LEDs based on a selection
of sources. This can be configured at boot time using either platform
data or the chip OTP. A sysfs file in the style of the one used for
triggers allows the control source to be configured at run time. Triggers
can't be used here since they can't depend on the implementation details
of a specific LED type.
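As a rough illustration of that trigger-style sysfs file, a control-source
attribute could look like the sketch below. The struct, field and helper
names here (wm831x_status, src, wm831x_status_set_src) are assumptions for
the example, not the driver's actual API; the point is the show/store
pattern that lists the available sources with the active one in brackets.

#include <linux/device.h>
#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/sysfs.h>

/* Hypothetical per-LED state; the real driver's structures differ. */
struct wm831x_status {
	int src;			/* currently selected control source */
};

static const char * const src_names[] = { "manual", "otp", "power", "charger" };

/* Assumed helper that would also update the chip register. */
static void wm831x_status_set_src(struct wm831x_status *led, int src)
{
	led->src = src;
}

static ssize_t src_show(struct device *dev, struct device_attribute *attr,
			char *buf)
{
	struct wm831x_status *led = dev_get_drvdata(dev);
	ssize_t len = 0;
	int i;

	for (i = 0; i < ARRAY_SIZE(src_names); i++)
		len += sprintf(buf + len, i == led->src ? "[%s] " : "%s ",
			       src_names[i]);
	buf[len - 1] = '\n';
	return len;
}

static ssize_t src_store(struct device *dev, struct device_attribute *attr,
			 const char *buf, size_t count)
{
	struct wm831x_status *led = dev_get_drvdata(dev);
	int i;

	for (i = 0; i < ARRAY_SIZE(src_names); i++)
		if (sysfs_streq(buf, src_names[i])) {
			wm831x_status_set_src(led, i);
			return count;
		}
	return -EINVAL;
}

static DEVICE_ATTR(src, 0644, src_show, src_store);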
Signed-off-by: Mark Brown <broonie@opensource.wolfsonmicro.com>
Signed-off-by: Richard Purdie <rpurdie@linux.intel.com>
* git://git.kernel.org/pub/scm/linux/kernel/git/agk/linux-2.6-dm:
dm snapshot: fix on disk chunk size validation
dm exception store: split set_chunk_size
dm snapshot: fix header corruption race on invalidation
dm snapshot: refactor zero_disk_area to use chunk_io
dm log: userspace add luid to distinguish between concurrent log instances
dm raid1: do not allow log_failure variable to unset after being set
dm log: remove incorrect field from userspace table output
dm log: fix userspace status output
dm stripe: expose correct io hints
dm table: add more context to terse warning messages
dm table: fix queue_limit checking device iterator
dm snapshot: implement iterate devices
dm multipath: fix oops when request based io fails when no paths
The whole write-room thing is something that is up to the _caller_ to
worry about, not the pty layer itself. The total buffer space will
still be limited by the buffering routines themselves, so there is no
advantage or need in having pty_write() artificially limit the size
somehow.
And what happened was that the caller (the n_tty line discipline, in
this case) may have verified that there is room for 2 bytes to be
written (for NL -> CRNL expansion), and it used to then do those writes
as two single-byte writes. And if the first byte written (CR) then
caused a new tty buffer to be allocated, pty_space() may have returned
zero when trying to write the second byte (LF), and then incorrectly
failed the write - leading to a lost newline character.
This should finally fix
http://bugzilla.kernel.org/show_bug.cgi?id=14015
Reported-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
When translating CR to CRNL in the n_tty line discipline, we did it as
two tty_put_char() calls. Which works, but is stupid, and has caused
problems before too with bad interactions with the write_room() logic.
The generic USB serial driver had that problem, for example.
Now the pty layer had similar issues after being moved to the generic
tty buffering code (in commit d945cb9cce20ac7143c2de8d88b187f62db99bdc:
"pty: Rework the pty layer to use the normal buffering logic").
So stop doing the silly separate two writes, and do it as a single write
instead. That's what the n_tty layer already does for the space
expansion of tabs (XTABS), and it means that we'll now always have just
a single write for the CRNL to match the single 'tty_write_room()' test,
which hopefully means that the next time somebody screws up buffering,
it won't cause weeks of debugging.
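A minimal sketch of the difference, for illustration only (this is not the
actual n_tty code; tty_put_char(), tty->ops->write() and tty_write_room()
are the real tty interfaces, the wrapper functions are made up):

#include <linux/tty.h>

/* Old pattern: two separate writes for the NL -> CRNL expansion.  If the
 * '\r' triggers allocation of a new tty buffer, the second write may see
 * no room and fail, silently dropping the '\n'. */
static void emit_crnl_two_writes(struct tty_struct *tty)
{
	tty_put_char(tty, '\r');
	tty_put_char(tty, '\n');
}

/* New pattern: a single write matching the single tty_write_room() check
 * done beforehand, so the pair is accepted or rejected as a unit. */
static void emit_crnl_single_write(struct tty_struct *tty)
{
	static const unsigned char crnl[] = "\r\n";

	tty->ops->write(tty, crnl, 2);
}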
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
If a target writes invalid status (typically status of a command that
already timed out), firewire-sbp2 attempts to put away an ORB that
doesn't exist. https://bugzilla.redhat.com/show_bug.cgi?id=519772
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
In dual-buffer DMA mode, no video frames are ever received from R5C832
by libdc1394. Fallback to packet-per-buffer DMA works reliably.
http://thread.gmane.org/gmane.linux.kernel.firewire.devel/13393/focus=13476
Reported-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
An Agere FW643 OHCI 1.1 card works fine for video reception from one
camera but fails early if receiving from two cameras. After a short
while, no IR IRQ events occur and the context control register does not
react anymore. This happens regardless of whether both IR DMA contexts are
dual-buffer or one is dual-buffer and the other packet-per-buffer.
This can be worked around by disabling dual buffer DMA mode entirely.
http://sourceforge.net/mailarchive/message.php?msg_name=4A7C0594.2020208%40gmail.com
(Reported by Samuel Audet.)
In another report (by Jonathan Cameron), an FW643 works OK with two
cameras in dual buffer mode. Whether this is due to different chip
revisions or different usage patterns (different video formats) is not
yet clear. However, as far as the current capabilities of
firewire-core's isochronous I/O interface are concerned, simply
switching off dual-buffer on non-working and working FW643s alike is not
a problem in practice. We only need to revisit this issue if we are
going to enhance the interface, e.g. so that applications can explicitly
choose modes.
Reported-by: Samuel Audet <samuel.audet@gmail.com>
Reported-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
This fixes a regression due to the post-2.6.30 commit 6fdc037094
("firewire: core: do not DMA-map stack addresses").
As David Moore noted, a previously correct sizeof() expression became
wrong since the commit changed its argument from an array to a pointer.
This resulted in an oops in ohci_cancel_packet in the shared workqueue
thread's context when an isochronous resource was to be freed.
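The sizeof() pitfall behind this oops is easy to reproduce outside the
driver. A standalone illustration (not the firewire code itself):

#include <stdio.h>

static void before(void)
{
	int payload[8];

	/* sizeof(payload) == 8 * sizeof(int): the whole array, as intended */
	printf("array argument:   %zu bytes\n", sizeof(payload));
}

static void after(int *payload)
{
	/* sizeof(payload) == sizeof(int *): only the pointer itself */
	printf("pointer argument: %zu bytes\n", sizeof(payload));
}

int main(void)
{
	int payload[8];

	before();
	after(payload);
	return 0;
}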
Reported-by: Jonathan Cameron <jic23@cam.ac.uk>
Signed-off-by: Stefan Richter <stefanr@s5r6.in-berlin.de>
Fix some problems seen in the chunk size processing when activating a
pre-existing snapshot.
For a new snapshot, the chunk size can either be supplied by the creator
or a default value can be used. For an existing snapshot, the
chunk size in the snapshot header on disk should always be used.
If someone attempts to load an existing snapshot and has the 'default
chunk size' option set, the kernel uses its default value even when it
is incorrect for the snapshot being loaded. This patch ensures the
correct on-disk value is always used.
Secondly, when the code does use the chunk size stored on the disk it is
prudent to revalidate it, so the code can exit cleanly if it got
corrupted as happened in
https://bugzilla.redhat.com/show_bug.cgi?id=461506 .
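A sketch of the kind of revalidation meant here; the helper name and the
exact set of checks are assumptions, not dm's actual code:

#include <linux/kernel.h>
#include <linux/log2.h>

static const char *validate_chunk_size(unsigned long chunk_size)
{
	if (!chunk_size)
		return "Chunk size must not be zero";

	if (!is_power_of_2(chunk_size))
		return "Chunk size must be a power of 2";

	return NULL;	/* looks sane, safe to use the on-disk value */
}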
Cc: stable@kernel.org
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Break the function set_chunk_size to two functions in preparation for
the fix in the following patch.
Cc: stable@kernel.org
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
If a persistent snapshot fills up, a race can corrupt the on-disk header
which causes a crash on any future attempt to activate the snapshot
(typically while booting). This patch fixes the race.
When the snapshot overflows, __invalidate_snapshot is called, which calls
the snapshot store method drop_snapshot. That goes to persistent_drop_snapshot,
which calls write_header. write_header constructs the new header in the
"area" location.
Concurrently, an existing kcopyd job may finish, calling copy_callback
and the commit_exception method, which goes to persistent_commit_exception.
persistent_commit_exception doesn't do locking, relying on the fact that
callbacks are single-threaded, but it can race with snapshot invalidation and
overwrite the header that is just being written while the snapshot is being
invalidated.
The result of this race is a corrupted header being written that can
lead to a crash on further reactivation (if chunk_size is zero in the
corrupted header).
The fix is to use separate memory areas for each.
See the bug: https://bugzilla.redhat.com/show_bug.cgi?id=461506
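The shape of the fix, sketched with assumed field names (the real struct
and names differ): give the header its own buffer so write_header() can
never scribble over the area an exception commit is still using.

/* Illustrative only; field names are assumptions. */
struct pstore_areas {
	void *area;		/* exception area used by the commit path */
	void *header_area;	/* private buffer used only by write_header() */
};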
Cc: stable@kernel.org
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Refactor chunk_io to prepare for the fix in the following patch.
Pass an area pointer to chunk_io and simplify zero_disk_area to use
chunk_io. No functional change.
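Approximate shape of the refactor (signatures and field names are
approximations, not the exact dm code):

/* chunk_io() now takes the memory area to transfer explicitly ... */
static int chunk_io(struct pstore *ps, void *area, chunk_t chunk,
		    int rw, int metadata);

/* ... so zero_disk_area() becomes a thin wrapper around it. */
static int zero_disk_area(struct pstore *ps, chunk_t chunk)
{
	memset(ps->zero_area, 0, ps->chunk_size_bytes);	/* assumed field */
	return chunk_io(ps, ps->zero_area, chunk, WRITE, 0);
}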
Cc: stable@kernel.org
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Device-mapper userspace logs (like the clustered log) are
identified by a universally unique identifier (UUID). This
identifier is used to associate requests from the kernel to
a specific log in userspace. The UUID must be unique everywhere,
since multiple machines may use this identifier when communicating
about a particular log, as is the case for cluster logs.
Sometimes, device-mapper/LVM may re-use a UUID. This is the
case during pvmoves, when moving from one segment of an LV
to another, or when resizing a mirror, etc. In these cases,
a new log is created with the same UUID and loaded in the
"inactive" slot. When a device-mapper "resume" is issued,
the "live" table is deactivated and the new "inactive" table
becomes "live". (The "inactive" table can also be removed
via a device-mapper 'clear' command.)
The above two issues were colliding. More than one log was being
created with the same UUID, and there was no way to distinguish
between them. So, sometimes the wrong log would be swapped
out during the exchange.
The solution is to create a locally unique identifier,
'luid', to go along with the UUID. This new identifier is used
to determine exactly which log is being referenced by the kernel
when the log exchange is made. The identifier is not
universally safe, but it does not need to be, since
create/destroy/suspend/resume operations are bound to a specific
machine; and these are the operations that make up the exchange.
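One way such a node-local identifier could be minted, as a hedged sketch
(the real implementation may differ): a counter that only has to be unique
within this machine.

#include <linux/atomic.h>
#include <linux/types.h>

static atomic64_t luid_counter = ATOMIC64_INIT(0);

/* Returns an id unique on this machine for the module's lifetime, which
 * is all that create/destroy/suspend/resume need. */
static u64 new_luid(void)
{
	return atomic64_inc_return(&luid_counter);
}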
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
This patch fixes a bug which was triggering a case where the primary leg
could not be changed on failure even when the mirror was in-sync.
The case involves the failure of the primary device along with
the transient failure of the log device. The problem is that
bios can be put on the 'failures' list (due to log failure)
before 'fail_mirror' is called due to the primary device failure.
Normally, this is fine, but if the log device failure is transient,
a subsequent iteration of the work thread, 'do_mirror', will
reset 'log_failure'. The 'do_failures' function then resets
the 'in_sync' variable when processing bios on the failures list.
The 'in_sync' variable is what is used to determine if the
primary device can be switched in the event of a failure. Since
this has been reset, the primary device is incorrectly assumed
to be not switchable.
The case has been seen in the cluster mirror context, where one
machine realizes the log device is dead before the other machines.
As the responsibilities of the server migrate from one node to
another (because the mirror is being reconfigured due to the failure),
the new server may think for a moment that the log device is fine -
thus resetting the 'log_failure' variable.
In any case, it is inappropriate for us to reset the 'log_failure'
variable. The above bug simply illustrates that it can actually
hurt us.
Cc: stable@kernel.org
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
The output of 'dmsetup table' includes an internal field that should not
be there. This patch removes it. To make the fix simpler, we first
reorder a constructor argument.
The 'device size' argument is generated internally. Currently it is
placed as the last space-separated word of the constructor string.
However, we need to use a version of the string without this word, so we
move it to the beginning instead so it is trivial to skip past it.
We keep a copy of the arguments passed to userspace for creating a log,
just in case we need to resend them. These are the same arguments that
are desired in the STATUSTYPE_TABLE request, except for one. When
creating the userspace log, the userspace daemon must know the size of
the mirror, so that is added to the arguments given in the constructor
table. We were printing this extra argument out as well, which is a
mistake.
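With the size as the first space-separated word, skipping it for table
output becomes trivial. An illustrative helper only (names are made up):

#include <linux/string.h>

/* ctr_str: saved constructor string whose first space-separated word is
 * the internally generated device size.  Return the arguments after it. */
static const char *table_args(const char *ctr_str)
{
	const char *p = strchr(ctr_str, ' ');

	return p ? p + 1 : ctr_str;
}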
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Fix 'dmsetup table' output.
There is a missing ' ' at the end of the string causing two
words to run together.
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Set sensible I/O hints for striped DM devices in the topology
infrastructure added for 2.6.31 for userspace tools to
obtain via sysfs.
Add .io_hints to 'struct target_type' to allow the I/O hints portion
(io_min and io_opt) of the 'struct queue_limits' to be set by each
target and implement this for dm-stripe.
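A sketch of what the dm-stripe hook might look like, assuming the
blk_limits_io_min()/blk_limits_io_opt() wrappers are available and using
assumed dm-stripe field names rather than the exact code:

static void stripe_io_hints(struct dm_target *ti, struct queue_limits *limits)
{
	struct stripe_c *sc = ti->private;
	unsigned int chunk_size = (sc->chunk_mask + 1) << SECTOR_SHIFT;	/* assumed field */

	blk_limits_io_min(limits, chunk_size);			/* one chunk */
	blk_limits_io_opt(limits, chunk_size * sc->stripes);	/* full stripe */
}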
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
A couple of recent warning messages make it difficult for the reader to
determine exactly what is wrong. This patch adds more information to
those messages.
The messages were added by these commits:
5dea271b6d ("dm table: pass correct dev area size
to device_area_is_valid")
ea9df47cc9 ("dm table: fix blk_stack_limits arg
to use bytes not sectors")
The patch also corrects references to logical_block_size in printk format
strings from %hu to %u.
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
The logic to check for valid device areas is inverted relative to proper
use with iterate_devices.
The iterate_devices method calls its callback for every underlying
device in the target. If any callback returns non-zero, iterate_devices
exits immediately. But the callback device_area_is_valid() returns 0 on
error and 1 on success. The overall effect is that an error is
issued only if every device is invalid.
This patch renames device_area_is_valid to device_area_is_invalid and
inverts the logic so that one invalid device is sufficient to raise
an error.
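In other words, with iterate_devices' early-exit convention the callback
must return non-zero for a bad device. A sketch of the inverted callback,
with the actual checks elided:

static int device_area_is_invalid(struct dm_target *ti, struct dm_dev *dev,
				  sector_t start, sector_t len, void *data)
{
	int invalid = 0;

	/* ... perform the real size/queue_limits checks here ... */

	/* Non-zero makes iterate_devices stop immediately and report the
	 * error, so one bad device is enough to fail the table. */
	return invalid;
}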
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Implement the .iterate_devices for the origin and snapshot targets.
dm-snapshot's lack of .iterate_devices resulted in the inability to
properly establish queue_limits for both targets.
With 4K sector drives: an unfortunate side-effect of not establishing
proper limits in either target's DM device was that IO to the devices
would fail even though both had been created without error.
Commit af4874e03e ("dm target:s introduce
iterate devices fn") in 2.6.31-rc1 should have implemented .iterate_devices
for dm-snap.c's origin and snapshot targets.
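For a single-device target the hook is small; a sketch of the origin
target's version (the ti->private layout is an assumption):

static int origin_iterate_devices(struct dm_target *ti,
				  iterate_devices_callout_fn fn, void *data)
{
	struct dm_dev *dev = ti->private;	/* assumed: origin's underlying device */

	return fn(ti, dev, 0, ti->len, data);
}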
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Compaq Presario R4000-series laptops are not sending a "volume up button
release" and "volume down button release" signal in the PS/2 protocol for
atkbd. The URL below has some of the confirmed reports:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/385477
Signed-off-by: Dave Andrews <jetdog330@hotmail.com>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Arithmetic conversion in the mask computation causes the upper word
of the second argument passed down to mtd->read_oob() to always be 0
('offs' is a 64-bit signed long long type and 'mtd->writesize' is a
32-bit unsigned int type).
This patch applies over the other one adding masking in nftl_write,
"nftl: write support is broken".
Signed-off-by: Dimitri Gorokhovik <dimitri.gorokhovik@free.fr>
Cc: Tim Gardner <tim.gardner@canonical.com>
Cc: Scott James Remnant <scott@canonical.com>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Write support is broken in NFTL. Fix it.
Signed-off-by: <dimitri.gorokhovik@free.fr>
Cc: Tim Gardner <tim.gardner@canonical.com>
Cc: Scott James Remnant <scott@canonical.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: David Woodhouse <David.Woodhouse@intel.com>
Commit 4bc5d34135 is broken and causes regressions:
(1) cpufreq_driver->resume() and ->suspend() were only called on
__powerpc__, but you could set them on all architectures. In fact,
->resume() was defined and used before the PPC-related commit
42d4dc3f4e complained about in 4bc5d34135.
(2) Therefore, the resume functions in acpi_cpufreq and speedstep-smi
would never be called.
(3) This means speedstep-smi would be unusable after suspend or resume.
The _real_ problem was calling cpufreq_driver->get() with interrupts
off, which re-enabled interrupts on some platforms. Why is ->get()
necessary?
Some systems like to change the CPU frequency behind our
back, especially during BIOS-intensive operations like suspend or
resume. If such systems also use a CPU frequency-dependent timing loop,
delays might be off by large factors. Therefore, we need to ascertain
as soon as possible that the CPU frequency is indeed at the speed we
think it is. You can do this in two ways: either set it anew, or try to
get it. The latter is what was done; the former has the same IRQ issue.
So, let's try something different: defer the checking to after interrupts
are re-enabled, by calling cpufreq_update_policy() (via schedule_work()).
Timings may be off until this later stage, so let's watch out for
resume regressions caused by the deferred handling of frequency changes
behind the kernel's back.
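As a hedged sketch of that deferral pattern (the work item is illustrative;
cpufreq_update_policy() and schedule_work() are the real interfaces):

#include <linux/cpu.h>
#include <linux/cpufreq.h>
#include <linux/workqueue.h>

static void cpufreq_resume_work_fn(struct work_struct *work)
{
	unsigned int cpu;

	/* Runs later, in process context with interrupts enabled, so it is
	 * safe to re-evaluate (and correct) the frequency here. */
	for_each_online_cpu(cpu)
		cpufreq_update_policy(cpu);
}

static DECLARE_WORK(cpufreq_resume_work, cpufreq_resume_work_fn);

static void cpufreq_resume_hook(void)
{
	/* Called early in resume with interrupts still off: just queue it. */
	schedule_work(&cpufreq_resume_work);
}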
Signed-off-by: Dominik Brodowski <linux@dominikbrodowski.net>
Signed-off-by: Dave Jones <davej@redhat.com>
Commit log for commit 517d3cc15b
("[libata] ata_piix: Enable parallel scan") says:
This patch turns on parallel scanning for the ata_piix driver.
This driver is used on most netbooks (no AHCI for cheap storage it seems).
The scan is the dominating time factor in the kernel boot for these
devices; with this flag it gets cut in half for the device I used
for testing (eeepc).
Alan took a look at the driver source and concluded that it ought to be safe
to do for this driver. Alan has also checked with the hardware team.
and it is all true but once we put all things together additional
constraints for PATA controllers show up (some hardware registers
have per-host not per-port atomicity) and we risk misprogramming
the controller.
I used the following test to check whether the issue is real:
@@ -736,8 +736,20 @@ static void piix_set_piomode(struct ata_
(timings[pio][1] << 8);
}
pci_write_config_word(dev, master_port, master_data);
- if (is_slave)
+ if (is_slave) {
+ if (ap->port_no == 0) {
+ u8 tmp = slave_data;
+
+ while (slave_data == tmp) {
+ pci_read_config_byte(dev, slave_port, &tmp);
+ msleep(50);
+ }
+
+ dev_printk(KERN_ERR, &dev->dev, "PATA parallel scan "
+ "race detected\n");
+ }
pci_write_config_byte(dev, slave_port, slave_data);
+ }
/* Ensure the UDMA bit is off - it will be turned back on if
UDMA is selected */
and it indeed triggered the error message.
Let's fix all such races by adding extra locking to the ->set_piomode
and ->set_dmamode methods for PATA controllers.
[ Alan: would be better to take the host lock in libata-core for these
cases so that we fix all the adapters in one swoop. "Looks fine as a
temporary quickfix tho" ]
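A sketch of the extra locking being proposed (abbreviated; the lock name is
an assumption, and per Alan's note the lock could equally live in
libata-core):

#include <linux/libata.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(piix_lock);	/* assumed: guards the shared timing registers */

static void piix_set_piomode(struct ata_port *ap, struct ata_device *adev)
{
	unsigned long flags;

	spin_lock_irqsave(&piix_lock, flags);

	/* ... compute and write the per-host IDETIM/SIDETIM config
	 * registers via pci_write_config_*() as before ... */

	spin_unlock_irqrestore(&piix_lock, flags);
}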
Cc: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Alan Cox <alan@linux.intel.com>
Cc: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel:
drm/i915: Improve CRTDDC mapping by using VBT info
drm/i915: Fix CPU-spinning hangs related to fence usage by using an LRU.
drm/i915: Set crtc/clone mask in different output devices
drm/i915: Always use SDVO_B detect bit for SDVO output detection.
drm/i915: Fix typo that broke SVID1 in intel_sdvo_multifunc_encoder()
drm/i915: Check if BIOS enabled dual-channel LVDS on 8xx, not only on 9xx
drm/i915: Set the multiplier for SDVO on G33 platform
We should call em28xx_ir_init(dev) only when disable_ir is true.
Signed-off-by: Shine Liu <shinel@foxmail.com>
Reviewed-by: Devin Heitmueller <dheitmueller@kernellabs.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
The order of indexes is reversed
Signed-off-by: Roel Kluin <roel.kluin@gmail.com>
Signed-off-by: Antoine Jacquet <royale@zerezo.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
[mchehab@redhat.com: fix merge conflict and a few CodingStyle issues]
Signed-off-by: Steve Gotthardt <gotthardt@gmail.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Previous changesets broke Hauppauge devices and their GPIO configurations.
This changeset restores the LED & LNA functionality.
Signed-off-by: Michael Krufky <mkrufky@kernellabs.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
An SR-IOV capable device includes an SR-IOV PCIe capability which
describes the Virtual Function (VF) BAR requirements. A typical SR-IOV
device can support multiple VFs whose BARs must be in a contiguous region,
effectively an array of VF BARs. The BAR reports the size requirement
for a single VF. We calculate the full range needed by simply multiplying
the VF BAR size with the number of possible VFs and create a resource
spanning the full range.
This all seems sane enough except it artificially inflates the alignment
requirement for the VF BAR. The VF BAR need only be aligned to the size
of a single BAR not the contiguous range of VF BARs. This can cause us
to fail to allocate resources for the BAR despite the fact that we
actually have enough space.
This patch adds a thin PCI specific layer over the generic
resource_alignment() function which is aware of the special nature of
VF BARs and does sorting and allocation based on the smaller alignment
requirement.
I recognize that while resource_alignment is generic, it's basically a
PCI helper. An alternative to this patch is to add PCI VF BAR specific
information to struct resource. I opted for the extra layer rather than
adding such PCI specific information to struct resource. This does
have the slight downside that we don't cache the BAR size and must
re-read it for each alignment query (this happens a small handful of
times during boot for each VF BAR).
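A sketch of that thin layer (helper names follow the description above;
the real code may differ):

#include <linux/ioport.h>
#include <linux/pci.h>

/* Use the single-VF BAR size as the alignment for IOV resources and fall
 * back to the generic resource_alignment() for everything else. */
static resource_size_t pci_resource_alignment(struct pci_dev *dev,
					      struct resource *res)
{
#ifdef CONFIG_PCI_IOV
	int resno = res - dev->resource;

	if (resno >= PCI_IOV_RESOURCES && resno <= PCI_IOV_RESOURCE_END)
		return pci_sriov_resource_alignment(dev, resno);
#endif
	return resource_alignment(res);
}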
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Yu Zhao <yu.zhao@intel.com>
Cc: stable@kernel.org
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Use VBT information to determine which DDC bus to use for CRTDDC.
Fall back to GPIOA if VBT info is not available.
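A sketch of the selection logic only (the dev_priv field name and the zero
sentinel are assumptions, and the snippet belongs inside the driver where
i915_drv.h and i915_reg.h are available):

static u32 crt_ddc_register(struct drm_i915_private *dev_priv)
{
	if (dev_priv->crt_ddc_bus)	/* VBT supplied a CRT DDC bus (assumed field) */
		return dev_priv->crt_ddc_bus;

	return GPIOA;			/* no VBT info: fall back to GPIOA */
}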
Signed-off-by: David Müller <d.mueller@elsoft.ch>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Eric Anholt <eric@anholt.net>
Tested on: 855 (David), and 945GM, 965GM, GM45, and G45 (anholt)