- 10 Nov, 2009 27 commits
-
-
Alan Cox authored
commit 575c9ed7 upstream. I've not touched the other stuff here but the word "locking" comes to mind. Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Mikulas Patocka authored
commit df96eee6 upstream. Use an unsigned integer chunk size. The maximum chunk size is 512kB and there will never be a need for a 4GB chunk size, so the number can be 32-bit. This fixes a compiler failure on 32-bit systems with large block devices. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com> Reviewed-by: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Mikulas Patocka authored
commit 3f2412dc upstream. If we are creating a snapshot with a memory-stored exception store, fail if the user didn't specify a chunk size. A zero chunk size would probably crash a lot of places in the rest of the snapshot code. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Jonathan Brassow <jbrassow@redhat.com> Reviewed-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Mikulas Patocka authored
commit 4c6fff44 upstream. This patch locks the snapshot when returning status. It fixes a race where it could return an invalid number of free chunks if someone was simultaneously modifying the snapshot. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Mikulas Patocka authored
commit 0e8c4e4e upstream. Properly close the device if failing because of an invalid chunk size. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Kiyoshi Ueda authored
commit f88fb981 upstream. Multiple instances of dec_pending() can run concurrently, so a lock is needed when it saves the first error code. I have never experienced an actual problem without locking and just found this during code inspection while implementing the barrier support patch for request-based dm. This patch adds the locking. I've done compile, boot and basic I/O testing. Signed-off-by: Kiyoshi Ueda <k-ueda@ct.jp.nec.com> Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
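As a rough illustration of the pattern this fix describes (not the actual drivers/md/dm.c code; the structure and field names below are made up for the sketch), the first error code is recorded under a spinlock so concurrent completions cannot race on it:

    #include <linux/spinlock.h>

    /* Illustrative per-io state: one error slot shared by all completions. */
    struct io_state {
        spinlock_t endio_lock;  /* initialized with spin_lock_init() at allocation */
        int error;
    };

    static void record_first_error(struct io_state *io, int error)
    {
        unsigned long flags;

        /* Without the lock, two concurrent completions could both see
         * io->error == 0 and race on which error code gets kept. */
        spin_lock_irqsave(&io->endio_lock, flags);
        if (!io->error)
            io->error = error;
        spin_unlock_irqrestore(&io->endio_lock, flags);
    }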
-
Zdenek Kabelac authored
commit 03022c54 upstream. Add the missing del_gendisk() to the error path taken when creation of the workqueue fails. Otherwise there is a resource leak and the following warning is shown: WARNING: at fs/sysfs/dir.c:487 sysfs_add_one+0xc5/0x160() sysfs: cannot create duplicate filename '/devices/virtual/block/dm-0' Signed-off-by: Zdenek Kabelac <zkabelac@redhat.com> Reviewed-by: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Andrew Morton authored
commit bca915aa upstream. mips: drivers/md/dm-log-userspace-base.c: In function `userspace_ctr': drivers/md/dm-log-userspace-base.c:159: warning: cast from pointer to integer of different size Cc: Jonathan Brassow <jbrassow@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Mikulas Patocka authored
commit 6d45d93e upstream. Avoid a race causing corruption when snapshots of the same origin have different chunk sizes by sorting the internal list of snapshots by chunk size, largest first. https://bugzilla.redhat.com/show_bug.cgi?id=182659 For example, let's have two snapshots with different chunk sizes. The first snapshot (1) has a small chunk size and the second snapshot (2) has a large chunk size. Let's have chunks A, B, C in these snapshots: snapshot1: ====A==== ====B==== snapshot2: ==========C========== (Chunk size is a power of 2. Chunks are aligned.) A write to the origin at a position within A and C comes along. It triggers reallocation of A, then reallocation of C and links them together using A as the 'primary' exception. Then another write to the origin comes along at a position within B and C. It creates a pending exception for B. C already has a reallocation in progress and it already has a primary exception (A), so nothing is done to it: B and C are not linked. If the reallocation of B finishes before the reallocation of C, then because there is no link with the pending exception for C it does not know to wait for it, and the second write is dispatched to the origin and causes data corruption in chunk C in snapshot2. To avoid this situation, we maintain snapshots sorted in descending order of chunk size. This leads to a guaranteed ordering on the links between the pending exceptions and avoids the problem explained above - both A and B now get linked to C. Signed-off-by: Mikulas Patocka <mpatocka@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
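To make the ordering concrete, here is a hedged sketch (illustrative types and function name, not the actual drivers/md/dm-snap.c code) of inserting a snapshot into a list kept sorted by chunk size, largest first:

    #include <linux/list.h>

    /* Illustrative types only; the real code lives in drivers/md/dm-snap.c. */
    struct snap {
        unsigned chunk_size;
        struct list_head list;
    };

    /*
     * Keep the origin's list of snapshots sorted by chunk size, largest
     * first, so pending exceptions are always linked toward the snapshot
     * with the biggest chunks (C in the example above).
     */
    static void insert_snapshot_sorted(struct list_head *snapshots, struct snap *new)
    {
        struct snap *s;

        list_for_each_entry(s, snapshots, list) {
            if (s->chunk_size < new->chunk_size) {
                /* insert immediately before the first smaller entry */
                list_add_tail(&new->list, &s->list);
                return;
            }
        }
        list_add_tail(&new->list, snapshots);   /* smallest so far: append at the end */
    }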
-
Jonathan Brassow authored
commit 034a186d upstream. While initializing the snapshot module, if we fail to register the snapshot target then we must back-out the exception store module initialization. Signed-off-by: Jonathan Brassow <jbrassow@redhat.com> Reviewed-by: Mikulas Patocka <mpatocka@redhat.com> Reviewed-by: Mike Snitzer <snitzer@redhat.com> Signed-off-by: Alasdair G Kergon <agk@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Dmitry Torokhov authored
commit 5f5eeff4 upstream. Apparently some Toshiba Protege M300 laptops identify themselves as "Portable PC" in DMI, so we need to add that to the DMI table as well. We need the DMI data so we can automatically lower the Synaptics reporting rate from 80 to 40 pps to avoid over-taxing their keyboard controllers. Tested-by: Rod Davison <roddavison@gmail.com> Signed-off-by: Dmitry Torokhov <dtor@mail.ru> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
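The fix amounts to another entry in the driver's DMI blacklist table; a hedged sketch of what such an entry looks like follows (the match strings are assumptions for illustration, not copied from the patch):

    #include <linux/dmi.h>

    /* Checked at init time with dmi_check_system(toshiba_dmi_table). */
    static const struct dmi_system_id toshiba_dmi_table[] = {
        {
            .ident = "Toshiba Portable PC",
            .matches = {
                DMI_MATCH(DMI_SYS_VENDOR, "TOSHIBA"),
                DMI_MATCH(DMI_PRODUCT_NAME, "Portable PC"),
            },
        },
        { }
    };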
-
Thomas Gleixner authored
[ Upstream commit 03717e3d ] After successfully registering the misc device the driver iounmaps the hardware registers and kfree's the device data structure. Ouch ! This was introduced with commit e42311d7 (riowatchdog: Convert to pure OF driver) and went unnoticed for more than a year :) Return success instead of dropping into the error cleanup code path. Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
David S. Miller authored
[ Upstream commit 09d3f3f0 ] Many years ago when this driver was written, it had a use, but these days it's nothing but trouble and distributions should not enable it in any situation. Pretty much every console device a sparc machine could see has a bonafide real driver, making the PROM console hack unnecessary. If any new device shows up, we should write a driver instead of depending upon this crutch to save us. We've been able to take care of this even when no chip documentation exists (sunxvr500, sunxvr2500) so there are no excuses. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
David S. Miller authored
[ Upstream commit c58543c8 ] With lots of virtual devices it's easy to generate a lot of events and chew up the kernel IRQ stack. Reported-by: hyl <heyongli@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Chuck Ebbert authored
revert commit 58a09b38 ("[libata] ahci: Restore SB600 SATA controller 64 bit DMA") Upstream commit 58a09b38 does nearly the same thing but this patch is simplified for 2.6.31 Disables 64-bit DMA for _all_ boards, unlike 2.6.32 which adds a whitelist. (The whitelist function requires a fairly large patch that touches unrelated code.) Doesn't revert the DMI part as other backported patches might need the exported symbol. Applies to 2.6.31.4 Signed-off-by: Chuck Ebbert <cebbert@redhat.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Thomas Gleixner authored
commit 11df6ddd upstream. The requeue_pi path doesn't use unqueue_me() (and the racy lock_ptr == NULL test) nor does it use the wake_list of futex_wake(), which were the reason for commit 41890f24 (futex: Handle spurious wake up). See the debugging discussion on LKML, Message-ID: <4AD4080C.20703@us.ibm.com>. The changes in this fix to the wait_requeue_pi path were considered to be a likely unnecessary, but harmless, safety net. But it turns out that due to the fact that for unknown $@#!*( reasons EWOULDBLOCK is defined as EAGAIN, we built an endless loop in the code path which correctly returns EWOULDBLOCK. Spurious wakeups in the wait_requeue_pi code path are unlikely, so we do the easy solution and return EWOULDBLOCK^WEAGAIN to user space and let it deal with the spurious wakeup. Cc: Darren Hart <dvhltc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: John Stultz <johnstul@linux.vnet.ibm.com> Cc: Dinakar Guniguntala <dino@in.ibm.com> LKML-Reference: <4AE23C74.1090502@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Darren Hart authored
commit 89061d3d upstream. When requeuing tasks from one futex to another, the reference held by the requeued task to the original futex location needs to be dropped eventually. Dropping the reference may ultimately lead to a call to "iput_final" and subsequently call into filesystem- specific code - which may be non-atomic. It is therefore safer to defer this drop operation until after the futex_hash_bucket spinlock has been dropped. Originally-From: Helge Bahmann <hcb@chaoticmind.net> Signed-off-by: Darren Hart <dvhltc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Eric Dumazet <eric.dumazet@gmail.com> Cc: Dinakar Guniguntala <dino@in.ibm.com> Cc: John Stultz <johnstul@linux.vnet.ibm.com> Cc: Sven-Thorsten Dietrich <sdietrich@novell.com> Cc: John Kacur <jkacur@redhat.com> LKML-Reference: <4AD7A298.5040802@us.ibm.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Darren Hart authored
commit 2bc87203 upstream. If userspace tries to perform a requeue_pi on a non-requeue_pi waiter, it will find the futex_q->requeue_pi_key to be NULL and OOPS. Check for NULL in match_futex() instead of doing explicit NULL pointer checks on all call sites. While match_futex(NULL, NULL) returning false is a little odd, it's still correct as we expect valid key references. Signed-off-by: Darren Hart <dvhltc@us.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@elte.hu> CC: Eric Dumazet <eric.dumazet@gmail.com> CC: Dinakar Guniguntala <dino@in.ibm.com> CC: John Stultz <johnstul@us.ibm.com> LKML-Reference: <4AD60687.10306@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Thomas Gleixner authored
commit d58e6576 upstream. The futex code does not handle spurious wake up in futex_wait and futex_wait_requeue_pi. The code assumes that any wake up which was not caused by futex_wake / requeue or by a timeout was caused by a signal wake up and returns one of the syscall restart error codes. In case of a spurious wake up the signal delivery code which deals with the restart error codes is not invoked and we return that error code to user space. That causes applications which actually check the return codes to fail. Blaise reported that on preempt-rt a python test program ran into an exception trap. -rt exposed that due to a built-in spurious wake up accelerator :) Solve this by checking signal_pending(current) in the wake up path and handling the spurious wake up case w/o returning to user space. Reported-by: Blaise Gassend <blaise@willowgarage.com> Debugged-by: Darren Hart <dvhltc@us.ibm.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
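The general shape of the fix (a hedged sketch, not the actual futex_wait() diff; struct wait_state and event_occurred() are hypothetical) is the classic interruptible wait loop that treats a wakeup with no event and no pending signal as spurious and simply sleeps again:

    #include <linux/sched.h>
    #include <linux/errno.h>

    /* Hypothetical state used only for this sketch. */
    struct wait_state {
        int event;                      /* set by the waker */
    };

    static bool event_occurred(struct wait_state *w)
    {
        return w->event != 0;
    }

    static int wait_for_event(struct wait_state *w)
    {
        for (;;) {
            set_current_state(TASK_INTERRUPTIBLE);
            if (event_occurred(w)) {
                __set_current_state(TASK_RUNNING);
                return 0;
            }
            schedule();

            if (signal_pending(current))
                return -ERESTARTSYS;    /* a real signal: let the signal code restart */
            /* neither the event nor a signal: spurious wake up, wait again */
        }
    }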
-
Andre Przywara authored
commit 1fdbd48c upstream. If the Linux kernel detects a C1E-capable AMD processor (K8 RevF and higher), it will access a certain MSR on every attempt to go to halt. Explicitly handle this read and return 0 to let KVM run a Linux guest with the native AMD host CPU propagated to the guest. Signed-off-by: Andre Przywara <andre.przywara@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Marcelo Tosatti authored
commit ace15464 upstream. hrtimer->base can be temporarily NULL due to racing hrtimer_start. See switch_hrtimer_base/lock_hrtimer_base. Use hrtimer_get_remaining which is robust against it. Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Yinghai Lu authored
commit 4223a4a1 upstream. Fix a (small) memory leak in one of the error paths of the NFS mount options parsing code. Regression introduced in 2.6.30 by commit a67d18f8 (NFS: load the rpc/rdma transport module automatically). Reported-by: Yinghai Lu <yinghai@kernel.org> Reported-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Tejun Heo authored
commit 6489e326 upstream. prereset doesn't bring the link online if hardreset is about to happen, and nv_hardreset() may skip if conditions are not right, so softreset may be entered with a non-working link status if the system firmware didn't bring it up before entering OS code, which can happen during resume. This patch makes nv_hardreset() bring up the link if it is skipping the reset. This bug was reported by frodone@gmail.com in the following bug entry: http://bugzilla.kernel.org/show_bug.cgi?id=14329 Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: frodone@gmail.com Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Tejun Heo authored
commit 4f7c2874 upstream. Commit 842faa6c fixed error handling during attach by not committing the detected device class to dev->class while attaching a new device. However, this change missed the PMP class check in the configuration loop, causing a new PMP device to go through ata_dev_configure() as if it were an ATA or ATAPI device. As a PMP device doesn't have regular IDENTIFY data, this makes ata_dev_configure() try to configure the PMP device using invalid data. For the most part, it wasn't too harmful and went unnoticed, but this ends up clearing dev->flags, which may have ATA_DFLAG_AN set by sata_pmp_attach(). This means that SATA_PMP_FEAT_NOTIFY ends up being disabled on PMPs, and on PMPs which honor the flag this breaks hotplug support. This problem was discovered and reported by Ethan Hsiao. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Ethan Hsiao <ethanhsiao@jmicron.com> Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Tejun Heo authored
commit f4b31db9 upstream. When an internal command fails, it should be failed directly without invoking EH. In the original implementation, this was accomplished by letting internal commands bypass failure handling in ata_qc_complete(). However, later changes added post-successful-completion handling to that code path and the success path is no longer adequate as the internal command failure path. One of the visible problems is that an internal command failure due to timeout or other freeze conditions would spuriously trigger WARN_ON_ONCE() in the success path. This patch updates the failure path such that internal command failure handling is contained there. Signed-off-by: Tejun Heo <tj@kernel.org> Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Yinghai Lu authored
commit 15b812f1 upstream. As reported in http://bugzilla.kernel.org/show_bug.cgi?id=13940 on some systems when ACPI is enabled, ACPI clears some BARs for some devices without reason, and the kernel then needs to allocate resources for those devices. It then apparently hits some undocumented resource conflict, resulting in non-working devices. Try to increase the alignment to get a safer range for unassigned devices. Signed-off-by: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-
Earl Chew authored
commit ad396024 upstream. This patch fixes a null pointer exception in pipe_rdwr_open() which generates the stack trace: > Unable to handle kernel NULL pointer dereference at 0000000000000028 RIP: > [<ffffffff802899a5>] pipe_rdwr_open+0x35/0x70 > [<ffffffff8028125c>] __dentry_open+0x13c/0x230 > [<ffffffff8028143d>] do_filp_open+0x2d/0x40 > [<ffffffff802814aa>] do_sys_open+0x5a/0x100 > [<ffffffff8021faf3>] sysenter_do_call+0x1b/0x67 The failure mode is triggered by an attempt to open an anonymous pipe via /proc/pid/fd/* as exemplified by this script: ============================================================= while : ; do { echo y ; sleep 1 ; } | { while read ; do echo z$REPLY; done ; } & PID=$! OUT=$(ps -efl | grep 'sleep 1' | grep -v grep | { read PID REST ; echo $PID; } ) OUT="${OUT%% *}" DELAY=$((RANDOM * 1000 / 32768)) usleep $((DELAY * 1000 + RANDOM % 1000 )) echo n > /proc/$OUT/fd/1 # Trigger defect done ============================================================= Note that the failure window is quite small and I could only reliably reproduce the defect by inserting a small delay in pipe_rdwr_open(). For example: static int pipe_rdwr_open(struct inode *inode, struct file *filp) { msleep(100); mutex_lock(&inode->i_mutex); Although the defect was observed in pipe_rdwr_open(), I think it makes sense to replicate the change through all the pipe_*_open() functions. The core of the change is to verify that inode->i_pipe has not been released before attempting to manipulate it. If inode->i_pipe is no longer present, return ENOENT to indicate so. The comment about potentially using atomic_t for i_pipe->readers and i_pipe->writers has also been removed because it is no longer relevant in this context. The inode->i_mutex lock must be used so that inode->i_pipe can be dealt with correctly. Signed-off-by: Earl Chew <earl_chew@agilent.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
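A sketch of roughly the shape the fix takes, as described above (details may differ from the upstream diff): check inode->i_pipe under inode->i_mutex and return -ENOENT if the pipe has already been released:

    #include <linux/fs.h>
    #include <linux/mutex.h>
    #include <linux/pipe_fs_i.h>

    /* Sketch only: bail out with -ENOENT if the pipe is already gone. */
    static int pipe_rdwr_open(struct inode *inode, struct file *filp)
    {
        int ret = -ENOENT;

        mutex_lock(&inode->i_mutex);
        if (inode->i_pipe) {            /* may already have been released */
            ret = 0;
            if (filp->f_mode & FMODE_READ)
                inode->i_pipe->readers++;
            if (filp->f_mode & FMODE_WRITE)
                inode->i_pipe->writers++;
        }
        mutex_unlock(&inode->i_mutex);

        return ret;                     /* -ENOENT if inode->i_pipe was NULL */
    }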
-
- 22 Oct, 2009 13 commits
-
-
Greg Kroah-Hartman authored
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Linus Torvalds authored
commit c8e33141 upstream. The locking logic in this function is extremely subtle, and it broke when we started doing potentially concurrent 'flush_to_ldisc()' calls in commit e043e42b ("pty: avoid forcing 'low_latency' tty flag"). The code in flush_to_ldisc() used to set 'tty->buf.head' to NULL, with the intention that this would then cause any other concurrent calls to not do anything (locking note: we have to drop the buf.lock over the call to ->receive_buf that can block, which is why we can have concurrency here at all in the first place). It also used to set the TTY_FLUSHING bit, which would then cause any concurrent 'tty_buffer_flush()' to not free all the tty buffers and clear 'tty->buf.tail'. And with 'buf.head' being NULL, and 'buf.tail' being non-NULL, new data would never touch 'buf.head'. Does that sound a bit too subtle? It was. If another concurrent call to 'flush_to_ldisc()' were to come in, the NULL buf.head would indeed cause it to not process the buffer list, but it would still clear TTY_FLUSHING afterwards, making the buffer protection against 'tty_buffer_flush()' no longer work. So this clears it all up. We depend purely on TTY_FLUSHING for handling re-entrancy, and stop playing games with the buffer list entirely. In fact, the buffer list handling is now robust enough that we could probably stop doing the whole "protect against 'tty_buffer_flush()'" thing entirely. However, Alan also points out that we would probably be better off simplifying the locking even further, and just take the tty ldisc_mutex around all the buffer flushing calls. That seems like a good idea, but in the meantime this is a conceptually minimal fix (with the patch itself being bigger than required just to clean the code up and make it readable). This fixes keyboard trouble under X: http://bugzilla.kernel.org/show_bug.cgi?id=14388 Reported-and-tested-by: Frédéric Meunier <fredlwm@gmail.com> Reported-and-tested-by: Boyan <btanastasov@yahoo.co.uk> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: Paul Fulghum <paulkf@microgate.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
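A minimal sketch of the re-entrancy guard described above (illustrative only; the real flush_to_ldisc() also manages tty->buf.lock and the buffer list):

    #include <linux/tty.h>
    #include <linux/bitops.h>

    /* Sketch of the guard only; the real function also handles tty->buf. */
    static void flush_to_ldisc_guarded(struct tty_struct *tty)
    {
        if (test_and_set_bit(TTY_FLUSHING, &tty->flags))
            return;     /* another flush_to_ldisc() is already running */

        /* ... walk tty->buf, dropping buf.lock around ->receive_buf() ... */

        clear_bit(TTY_FLUSHING, &tty->flags);
    }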
-
Johannes Berg authored
commit fbc44bf7 upstream. When receiving data frames, we can send them only to the interface they belong to based on transmitting station (this doesn't work for probe requests). Also, don't try to handle other frames for AP_VLAN at all since those interfaces should only receive data. Additionally, the transmit side must check that the station we're sending a frame to is actually on the interface we're transmitting on, and not transmit packets to functions that live on other interfaces, so validate that as well. Another bug fix is needed in sta_info.c where in the VLAN case when adding/removing stations we overwrite the sdata variable we still need. Signed-off-by: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jay Sternberg authored
commit 2facba76 upstream. The address stored in the next link address is a word address, but when reading the OTP blocks a byte address is used. Also, if the blocks are full and the last link pointer is not zero, then none of the blocks are valid, so return an error. The algorithm is simply: valid blocks have a next address, and that address's contents are zero. Using the wrong address for the next link address gets arbitrary data, obviously. In the cases seen, the first block is considered valid when it is not. If the block has in fact been invalidated, there may be old data, no data, bad data, or partial data; there is no way of telling. Without this patch it is possible that a device with valid OTP data is unable to work. Signed-off-by: Jay Sternberg <jay.e.sternberg@intel.com> Signed-off-by: Reinette Chatre <reinette.chatre@intel.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
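A hedged sketch of the addressing rule only (names are illustrative, not the iwlwifi driver code): the next-link value read from OTP is a word address, so it must be doubled to get the byte address used for the next OTP read:

    #include <linux/types.h>
    #include <asm/byteorder.h>

    /* The next-link value stored in OTP is a word address; convert it to a
     * byte address before using it for the next OTP read. */
    static u32 otp_next_link_byte_addr(__le16 stored_link_value)
    {
        return le16_to_cpu(stored_link_value) * sizeof(u16);
    }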
-
Benjamin Herrenschmidt authored
commit b8430e1b upstream. usb-storage: Workaround devices with bogus sense size Some devices, such as Huawei E169, advertise more than the standard amount of sense data, causing us to set US_FL_SANE_SENSE, assuming they support it. However, they subsequently fail the request sense with that size. This works around it generically. When a sense request fails due to a device returning an error, US_FL_SANE_SENSE was set, and that sense request used a larger sense size, we retry with a smaller size before giving up. Based on an original patch by Ben Efros <ben@pc-doctor.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Acked-by: Alan Stern <stern@rowland.harvard.edu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Darren Salt authored
commit 0af49167 upstream. This fixes a panic which is triggered when the hardware "disappears" from beneath the driver, i.e. when wireless is toggled off via Fn-F2 on various EeePC models. Ref. bug report http://bugzilla.kernel.org/show_bug.cgi?id=13390 panic http://bugzilla.kernel.org/attachment.cgi?id=21928 Signed-off-by: Darren Salt <linux@youmustbejoking.demon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Neil Brown authored
commit 83db93f4 upstream. sysfs_notify_dirent is a simple atomic operation that can be used to alert user-space that new data can be read from a sysfs attribute. Unfortunately it cannot currently be called from non-process context because of its use of spin_lock which is sometimes taken with interrupts enabled. So change all lockers of sysfs_open_dirent_lock to disable interrupts, thus making sysfs_notify_dirent safe to be called from non-process context (as drivers/md does in md_safemode_timeout). sysfs_get_open_dirent is (documented as being) only called from process context, so it uses spin_lock_irq. Other places use spin_lock_irqsave. The usage for sysfs_notify_dirent in md_safemode_timeout was introduced in 2.6.28, so this patch is suitable for that and more recent kernels. Reported-by: Joel Andres Granados <jgranado@redhat.com> Signed-off-by: NeilBrown <neilb@suse.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
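A sketch of the irq-safe locking described (the sysfs-internal types and field names are as recalled from fs/sysfs at the time and may differ slightly from the actual code):

    #include <linux/spinlock.h>

    static DEFINE_SPINLOCK(sysfs_open_dirent_lock);     /* sysfs-internal lock */

    /* Disable interrupts while holding the lock so this can be called from
     * non-process context (e.g. a timer) without deadlocking. */
    void sysfs_notify_dirent(struct sysfs_dirent *sd)
    {
        struct sysfs_open_dirent *od;
        unsigned long flags;

        spin_lock_irqsave(&sysfs_open_dirent_lock, flags);
        od = sd->s_attr.open;
        if (od) {
            atomic_inc(&od->event);
            wake_up_interruptible(&od->poll);
        }
        spin_unlock_irqrestore(&sysfs_open_dirent_lock, flags);
    }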
-
Michal Schmidt authored
commit d8e180dc upstream. When process accounting is enabled, every exiting process writes a log to the account file. In addition, every once in a while one of the exiting processes checks whether there's enough free space for the log. SELinux policy may or may not allow the exiting process to stat the fs. So unsuspecting processes start generating AVC denials just because someone enabled process accounting. For these filesystem operations, the exiting process's credentials should be temporarily switched to that of the process which enabled accounting, because it's really that process which wanted to have the accounting information logged. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Acked-by: David Howells <dhowells@redhat.com> Acked-by: Serge Hallyn <serue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: James Morris <jmorris@namei.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
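A hedged sketch of the credential switch described (the helper name is hypothetical; the accounting file's f_cred holds the credentials of the process that enabled accounting):

    #include <linux/cred.h>
    #include <linux/fs.h>

    static int do_free_space_check(struct file *file);  /* hypothetical: calls vfs_statfs() */

    /* Run the free-space check with the credentials of the process that
     * enabled accounting, then switch back to the exiting task's own creds. */
    static int check_space_as_acct_owner(struct file *file)
    {
        const struct cred *orig_cred;
        int res;

        orig_cred = override_creds(file->f_cred);
        res = do_free_space_check(file);
        revert_creds(orig_cred);

        return res;
    }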
-
Takashi Iwai authored
commit 18c40784 upstream. The client->driver pointer can be NULL when i2c-device probing fails in i2c_new_device(). This patch adds NULL checks for client->driver and returns an error instead of blindly assuming the driver is available. Reported-by: Tim Shepard <shep@alum.mit.edu> Cc: Jean Delvare <khali@linux-fr.org> Cc: Johannes Berg <johannes@sipsolutions.net> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
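A generic sketch of the check this adds (illustrative wrapper, not the sound driver's code):

    #include <linux/i2c.h>

    /* Generic shape of the added check; error handling details vary per driver. */
    static struct i2c_client *attach_client(struct i2c_adapter *adapter,
                                            struct i2c_board_info *info)
    {
        struct i2c_client *client = i2c_new_device(adapter, info);

        if (!client)
            return NULL;

        if (!client->driver) {
            /* probing failed: no driver is bound, don't dereference it */
            i2c_unregister_device(client);
            return NULL;
        }

        return client;
    }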
-
Jean Delvare authored
commit 18669eab upstream. When an ACPI resource conflict is detected, error messages are already printed by ACPI. There's no point in causing the driver core to print more error messages, so return one of the error codes for which no message is printed. This fixes bug #14293: http://bugzilla.kernel.org/show_bug.cgi?id=14293 Signed-off-by: Jean Delvare <khali@linux-fr.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jean Delvare authored
commit 6f6b35e1 upstream. If i2c device probing fails, then there is no driver to dereference after calling i2c_new_device(). Stop assuming that probing will always succeed, to avoid NULL pointer dereferences. We have easier access to the driver anyway. Signed-off-by: Jean Delvare <khali@linux-fr.org> Tested-by: Tim Shepard <shep@alum.mit.edu> Cc: Colin Leroy <colin@colino.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Jean Delvare authored
commit 05576a1e upstream. Signed-off-by: Jean Delvare <khali@linux-fr.org> Acked-by: Riku Voipio <riku.voipio@iki.fi> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-
Alexander Duyck authored
commit a825e00c upstream. There appears to have been a mixup in the maximum supported jumbo frame size between the 82574 and 82583, which ended up disabling jumbo frames on the 82574 as a result. This patch swaps the two values so that this issue is resolved. This patch fixes http://bugzilla.kernel.org/show_bug.cgi?id=14261 Signed-off-by: Alexander Duyck <alexander.h.duyck@intel.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Tim Gardner <tim.gardner@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
-