- 25 Sep, 2009 3 commits
-
-
Hans Reiser authored
add_to_page_cache_lru to be EXPORT_SYMBOL-ed. [bunk@stusta.de: unexport {,__}remove_from_page_cache] Signed-off-by: Adrian Bunk <bunk@stusta.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Artem Bityutskiy authored
generic implementation and changes fs-writeback.c:sync_sb_inodes() to call the filesystem's sync_inodes if it is defined, or the generic implementation otherwise. This new operation allows a filesystem to decide for itself what to flush. Reiser4 flushes dirty pages on the basis of atoms, not of inodes; sync_sb_inodes used to call the address-space flushing method (writepages) for every dirty inode, which forced reiser4 to commit atoms unnecessarily often and turned into a substantial slowdown. Having this method helped to fix that problem. akpm: this patch needs to be changed to remove the `sb' arg. Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com> Cc: Edward Shishkin <edward.shishkin@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
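A minimal sketch of how a filesystem might wire up such a hook, assuming the prototype implied by the changelog (the `sb' argument may yet go away per the akpm note above); the examplefs_* names are invented:

	#include <linux/fs.h>
	#include <linux/writeback.h>

	/* invented helper standing in for the filesystem's own flushing logic */
	static void examplefs_commit_dirty_atoms(struct super_block *sb,
						 struct writeback_control *wbc);

	/* flush dirty data in filesystem-chosen units (whole atoms, for
	 * reiser4) instead of per-inode ->writepages() calls */
	static void examplefs_sync_inodes(struct super_block *sb,
					  struct writeback_control *wbc)
	{
		examplefs_commit_dirty_atoms(sb, wbc);
	}

	static const struct super_operations examplefs_sops = {
		/* ... other operations ... */
		.sync_inodes	= examplefs_sync_inodes,
	};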
-
Edward Shishkin authored
"captured" by some atom. However, outside reiser4 context dirty page can not be captured in some cases, as it is accompanied with specific work (jnode creation, etc). Reiser4 recognizes such "anonymous" pages (i.e. pages that were dirtied outside of reiser4) by the tag PAGECACHE_TAG_DIRTY. Pages dirtied inside reiser4 context are not tagged at all: we don't need this. Indeed, once page is dirtied and captured, it is attached to a jnode (a special header to keep a track of transactions). reiser4_set_page_dirty_internal() was the internal reiser4 function that set dirty bit without tagging the page. Having such internal function led to real problems (incorrect task io accounting, etc. because of not updating this internal "friend"). Solution: The following patch adds a core library function that sets a dirty bit without tagging the page. Signed-off-by: Edward Shishkin<edward.shishkin@gmail.com> Cc: Nick Piggin <nickpiggin@yahoo.com.au> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 04 Jun, 2009 1 commit
-
-
Andrew Morton authored
Cc: David S. Miller <davem@davemloft.net> Cc: Florian Fainelli <florian@openwrt.org> Cc: Julius Volz <juliusv@google.com> Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com> Cc: Simon Horman <horms@verge.net.au> Cc: Takashi Iwai <tiwai@suse.de> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 24 Aug, 2009 1 commit
-
-
Florian Fainelli authored
lib/gcd.c, the latter also implementing the a < b case. Signed-off-by: Florian Fainelli <florian@openwrt.org> Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com> Cc: Takashi Iwai <tiwai@suse.de> Acked-by: Simon Horman <horms@verge.net.au> Cc: Julius Volz <juliusv@google.com> Cc: David S. Miller <davem@davemloft.net> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
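For reference, the generic helper boils down to Euclid's algorithm; a standalone sketch of what such a routine might look like (the exact lib/gcd.c prototype is assumed, not quoted):

	#include <stdio.h>

	/* Euclid's algorithm; an a < b input is handled naturally, because
	 * the first iteration's remainder effectively swaps the operands. */
	static unsigned long gcd(unsigned long a, unsigned long b)
	{
		while (b != 0) {
			unsigned long r = a % b;

			a = b;
			b = r;
		}
		return a;
	}

	int main(void)
	{
		printf("gcd(12, 18) = %lu\n", gcd(12, 18));	/* 6 */
		printf("gcd(4, 6)   = %lu\n", gcd(4, 6));	/* a < b: 2 */
		return 0;
	}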
-
- 04 Jun, 2009 1 commit
-
-
Florian Fainelli authored
Signed-off-by: Florian Fainelli <florian@openwrt.org> Cc: Sergei Shtylyov <sshtylyov@ru.mvista.com> Cc: Takashi Iwai <tiwai@suse.de> Cc: Simon Horman <horms@verge.net.au> Cc: Julius Volz <juliusv@google.com> Cc: David S. Miller <davem@davemloft.net> Cc: Patrick McHardy <kaber@trash.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 28 Oct, 2009 7 commits
-
-
Akinobu Mita authored
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
Acked-by: "David S. Miller" <davem@davemloft.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
Reviewed-by: Roland Dreier <rolandd@cisco.com> Cc: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: Lothar Wassmann <LW@KARO-electronics.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
1. s/iommu_area_free/bitmap_clear/
2. s/iommu_area_reserve/bitmap_set/
3. Use bitmap_find_next_zero_area instead of find_next_zero_area. This cannot be a simple substitution, because find_next_zero_area doesn't check the last bit of the limit in the bitmap.
4. Remove iommu_area_free, iommu_area_reserve, and find_next_zero_area.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Akinobu Mita authored
bitmap_set: Set a specified bit area
bitmap_clear: Clear a specified bit area
bitmap_find_next_zero_area: Find a free bit area
These are mostly stolen from the iommu helper. The differences are:
- Use find_next_bit instead of doing test_bit for each bit
- Rewrite bitmap_set and bitmap_clear instead of setting or clearing each bit individually
- Check the last bit of the limit (iommu-helper doesn't want to find such an area)
- The return value when there is no zero area: find_next_zero_area in the iommu helper returns -1, while bitmap_find_next_zero_area returns a value >= the bitmap size
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: "David S. Miller" <davem@davemloft.net> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Paul Mackerras <paulus@samba.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ingo Molnar <mingo@redhat.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Greg Kroah-Hartman <gregkh@suse.de> Cc: Lothar Wassmann <LW@KARO-electronics.de> Cc: Roland Dreier <rolandd@cisco.com> Cc: Yevgeny Petrilin <yevgenyp@mellanox.co.il> Cc: Tony Luck <tony.luck@intel.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
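A userspace sketch of the semantics described above (deliberately simplified to one bit at a time; the real helpers operate on unsigned long words and take an alignment mask):

	#include <stdio.h>

	#define NBITS 64

	static void bitmap_set(unsigned char *map, int start, int len)
	{
		for (int i = start; i < start + len; i++)
			map[i / 8] |= 1u << (i % 8);
	}

	static void bitmap_clear(unsigned char *map, int start, int len)
	{
		for (int i = start; i < start + len; i++)
			map[i / 8] &= ~(1u << (i % 8));
	}

	/* Returns the start of a run of 'len' zero bits, or a value >= size
	 * when no such area exists (the convention described above). */
	static int bitmap_find_next_zero_area(unsigned char *map, int size,
					      int start, int len)
	{
		for (int i = start; i + len <= size; i++) {
			int j;

			for (j = 0; j < len; j++)
				if (map[(i + j) / 8] & (1u << ((i + j) % 8)))
					break;
			if (j == len)
				return i;
		}
		return size;
	}

	int main(void)
	{
		unsigned char map[NBITS / 8] = { 0 };

		bitmap_set(map, 0, 5);
		printf("%d\n", bitmap_find_next_zero_area(map, NBITS, 0, 4)); /* 5 */
		bitmap_clear(map, 0, 5);
		printf("%d\n", bitmap_find_next_zero_area(map, NBITS, 0, 4)); /* 0 */
		return 0;
	}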
-
- 30 Oct, 2009 2 commits
-
-
Andrew Morton authored
Cc: Zach Brown <zach.brown@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Jeff Moyer authored
commit 848c4dd5
Author: Zach Brown <zach.brown@oracle.com>
Date:   Mon Aug 20 17:12:01 2007 -0700

    dio: zero struct dio with kzalloc instead of manually

    This patch uses kzalloc to zero all of struct dio rather than manually trying to track which fields we rely on being zero. It passed aio+dio stress testing and some bug regression testing on ext3. This patch was introduced by Linus in the conversation that led up to Badari's minimal fix to manually zero .map_bh.b_state in commit 6a648fa7. It makes the code a bit smaller. Maybe a couple fewer cachelines to load, if we're lucky:

           text    data     bss     dec     hex filename
        3285925  568506 1304616 5159047  4eb887 vmlinux
        3285797  568506 1304616 5158919  4eb807 vmlinux.patched

    I was unable to measure a stable difference in the number of cpu cycles spent in blockdev_direct_IO() when pushing aio+dio 256K reads at ~340MB/s. So the resulting intent of the patch isn't a performance gain but to avoid exposing ourselves to the risk of finding another field like .map_bh.b_state where we rely on zeroing but don't enforce it in the code.

Zach surmised that zeroing out the page array was what caused most of the problem, and suggested the approach taken in the attached patch for resolving the issue. Intel re-tested with this patch and saw a 0.6% performance gain (the original regression was 0.5%).

Signed-off-by: Jeff Moyer <jmoyer@redhat.com> Acked-by: Zach Brown <zach.brown@oracle.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
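A hedged sketch of the allocation pattern this describes: allocate without zeroing and clear only the leading bookkeeping fields, skipping the large page array. The layout below is invented, not the real struct dio.

	#include <stdlib.h>
	#include <string.h>
	#include <stddef.h>

	/* invented layout, standing in for struct dio */
	struct dio_like {
		int flags;
		int rw;
		/* ... more small fields that must start out zero ... */
		void *pages[64];	/* big array whose zeroing we want to skip */
	};

	static struct dio_like *dio_like_alloc(void)
	{
		struct dio_like *dio = malloc(sizeof(*dio));

		if (dio)
			/* zero everything up to, but not including, the page array */
			memset(dio, 0, offsetof(struct dio_like, pages));
		return dio;
	}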
-
- 14 Oct, 2009 1 commit
-
-
Roel Kluin authored
Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Cc: Geert Uytterhoeven <geert@linux-m68k.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 16 Oct, 2009 1 commit
-
-
Jan Beulich authored
comparison must also be done using the last valid byte of the buffer in question. Also fix the open-coded instances in lib/swiotlb.c. Signed-off-by: Jan Beulich <jbeulich@novell.com> Cc: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp> Cc: Becky Bruce <beckyb@kernel.crashing.org> Cc: "Luck, Tony" <tony.luck@intel.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Ingo Molnar <mingo@elte.hu> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: "H. Peter Anvin" <hpa@zytor.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
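In generic terms the rule is: when testing whether a buffer falls inside a region, compare against the address of its last valid byte, since start + size points one past the end and can even wrap at the top of the address space. A sketch of the pattern, not the swiotlb code itself:

	#include <stdbool.h>
	#include <stdint.h>
	#include <stddef.h>

	/* Does [addr, addr + size) lie entirely within the region whose
	 * first byte is 'lo' and whose last valid byte is 'hi'? */
	static bool buffer_within(uintptr_t addr, size_t size,
				  uintptr_t lo, uintptr_t hi)
	{
		uintptr_t last = addr + size - 1;	/* last valid byte */

		/* comparing 'last' instead of addr + size keeps the test
		 * correct for a buffer that ends exactly at 'hi' and for
		 * one that reaches the top of the address space */
		return size != 0 && addr >= lo && last <= hi;
	}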
-
- 30 Sep, 2009 3 commits
-
-
Nils Carlson authored
the patch as far as possible with correctable errors and things appear good. The DIMM mapping is correct for our board, but boards may differ. Signed-off-by: Nils Carlson <nils.carlson@ludd.ltu.se> Acked-by: Arthur Jones <ajones@riverbed.com> Signed-off-by: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Nils Carlson authored
scrubbing rate, which is not constant but dependent on memory load. The rate returned by this driver is an estimate based on some experimentation, but is substantially closer to the truth than the speed supplied in the documentation. Also, scrubbing is done once, and then a done-bit is set. This means that to accomplish continuous scrubbing a re-enabling mechanism must be used. I have created the simplest possible such mechanism in the form of a work-queue which will check every five minutes. This interval is quite arbitrary but should be sufficient for all sizes of system memory. Signed-off-by: Nils Carlson <nils.carlson@ludd.ltu.se> Signed-off-by: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
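A hedged sketch of such a re-enabling mechanism (the i5100_* helpers below are invented, not the driver's): a delayed work item that re-arms itself every five minutes and kicks off a new scrub pass whenever the previous one has completed.

	#include <linux/types.h>
	#include <linux/workqueue.h>
	#include <linux/jiffies.h>

	#define I5100_RESCRUB_INTERVAL	(5 * 60 * HZ)	/* five minutes */

	/* invented accessors standing in for the driver's register I/O */
	static bool i5100_scrub_done(void);
	static void i5100_start_scrub(void);

	static struct delayed_work i5100_scrub_work;

	static void i5100_scrub_worker(struct work_struct *work)
	{
		/* the hardware scrubs once and then sets a done bit, so
		 * restart it whenever the previous pass has finished */
		if (i5100_scrub_done())
			i5100_start_scrub();

		schedule_delayed_work(&i5100_scrub_work, I5100_RESCRUB_INTERVAL);
	}

	static void i5100_scrub_setup(void)
	{
		INIT_DELAYED_WORK(&i5100_scrub_work, i5100_scrub_worker);
		schedule_delayed_work(&i5100_scrub_work, I5100_RESCRUB_INTERVAL);
	}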
-
Nils Carlson authored
places, this is simply a cleanup of the patch. Signed-off-by: Nils Carlson <nils.carlson@ludd.ltu.se> Signed-off-by: Doug Thompson <dougthompson@xmission.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 16 Oct, 2009 1 commit
-
-
Roel Kluin authored
Signed-off-by: Roel Kluin <roel.kluin@gmail.com> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Hendrik Brueckner <brueckner@linux.vnet.ibm.com> Cc: Christian Borntraeger <borntraeger@de.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 09 Oct, 2009 1 commit
-
-
Joe Perches authored
Convert printks to pr_<level>
Convert some embedded function names to %s...__func__
Remove a period after exclamation points
Remove #define pr_dbg which could be used by future kernel.h includes
Signed-off-by: Joe Perches <joe@perches.com> Acked-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
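The flavour of conversion being described, shown on an invented line rather than one taken from the patch:

	#include <linux/kernel.h>

	static void report_alloc_failure(void)
	{
		/* before: level prefix and function name hard-coded, plus a
		 * stray period after the exclamation point */
		printk(KERN_ERR "mydrv: report_alloc_failure: allocation failed!.\n");

		/* after: pr_<level> wrapper, %s/__func__, no trailing period */
		pr_err("%s: allocation failed!\n", __func__);
	}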
-
- 30 Sep, 2009 1 commit
-
-
Jiri Slaby authored
tty is dereferenced earlier, the test is superfluous. Remove it. Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Cc: Greg KH <greg@kroah.com> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
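The pattern in generic form (a sketch, not the tty driver code): once the pointer has been dereferenced above, a later NULL test can never trigger and only misleads readers and static checkers.

	#include <stddef.h>

	struct widget {
		int count;
	};

	static int widget_count(struct widget *w)
	{
		int n = w->count;	/* 'w' is dereferenced here ... */

		if (w == NULL)		/* ... so this test is dead code */
			return 0;

		return n;
	}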
-
- 24 Aug, 2009 1 commit
-
-
Bartlomiej Zolnierkiewicz authored
Signed-off-by: Jiri Slaby <jirislaby@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 15 Oct, 2009 1 commit
-
-
Christoph Hellwig authored
USE_ELF_CORE_DUMP. The microblaze omission seems like an error to me, so let's kill this ifdef and make sure we are the same everywhere. Signed-off-by: Christoph Hellwig <hch@lst.de> Acked-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Cc: <linux-arch@vger.kernel.org> Cc: Michal Simek <michal.simek@petalogix.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 29 Sep, 2009 1 commit
-
-
Julia Lawall authored
constants ending in _STATE are compared to the state variable. Signed-off-by: Julia Lawall <julia@diku.dk> Cc: Corey Minyard <minyard@acm.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 06 Oct, 2009 7 commits
-
-
Manfred Spraul authored
current code scans all decrement operations, even if the semaphore value is already 0. The patch optimizes that: if the semaphore value is 0, then there is no need to scan the q->alter entries. Note that this is a common case: it happens if 100 decrements by one are pending and an increment by one now raises the semaphore value from 0 to 1. Without this patch, all 100 entries are scanned. With the patch, only one entry is scanned and woken up; then the new rule triggers and the scanning is aborted, without looking at the remaining 99 tasks. With this patch, single-sop increments/decrements by 1 are now O(1) (the same as with Nick's patch). Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Manfred Spraul authored
semaphores. Atomic operations that affect multiple semaphores are supported. The patch optimizes single semaphore operation calls that affect only one semaphore: it's not necessary to scan all pending operations, it is sufficient to scan the per-semaphore list. The idea is from Nick Piggin's version of an ipc sem improvement; the implementation is different: the code tries to keep as much common code as possible. As a result, the patch is simpler, but optimizes fewer cases. Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Manfred Spraul authored
sysv sem has the concept of semaphore arrays that consist of multiple semaphores. Atomic operations that affect multiple semaphores are supported. The patch is the first step toward optimizing simple, single-semaphore operations: in addition to the global list of all pending operations, a second, per-semaphore list with the simple operations is added. Note: this patch does not make sense by itself; the new list is not yet used anywhere. Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
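A simplified, hedged sketch of the data-structure idea (types and field names are invented, not the kernel's): alongside the array-wide pending list, each semaphore carries its own list so single-semaphore waiters can later be found without scanning everything.

	#include <linux/list.h>

	struct sem_entry {
		int semval;
		struct list_head simple_pending;	/* ops touching only this sem */
	};

	struct sem_array_like {
		struct list_head complex_pending;	/* multi-semaphore ops */
		struct sem_entry *sems;
		int nsems;
	};

	struct queued_op {
		struct list_head node;
		int sem_num;				/* valid for simple ops */
	};

	/* A single-semaphore op only needs to be rechecked when that one
	 * semaphore changes, so park it on the per-semaphore list. */
	static void queue_simple_op(struct sem_array_like *sma, struct queued_op *q)
	{
		list_add_tail(&q->node, &sma->sems[q->sem_num].simple_pending);
	}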
-
Manfred Spraul authored
If try_atomic_semop failed, then no changes were applied, thus there is no need to restart. Additionally, this patch corrects an incorrect comment: it is possible to wait for arbitrary semaphore values (do a dec by <x>, wait-for-zero, inc by <x> in one atomic operation). Both changes are from Nick Piggin; the patch is the result of a different split of the individual changes. Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Nick Piggin authored
involved, which could deadlock if preemption is enabled during the "lock". It is an implementation detail (due to a spinlock being held) that this is actually the case. However, if "spinlocks" are made preemptible, or if the sem lock is changed to a sleeping lock, for example, then the wakeup would become buggy. So this might be a bugfix for -rt kernels. Imagine the waker being preempted by the wakee and never clearing IN_WAKEUP: if the wakee has a higher RT priority, there is a priority-inversion deadlock. Even if there is no priority inversion to cause a deadlock, there is still time wasted spinning. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Nick Piggin authored
list_for_each_entry macros. list_for_each_entry_safe() must be used, because list entries can disappear immediately upon the wakeup event. Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
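The general rule being applied, as a sketch rather than the sem code itself: when the loop body can unlink (and indirectly free) the current entry, the _safe iterator must be used because it caches the next pointer before the body runs.

	#include <linux/list.h>

	struct waiter {
		struct list_head node;
		/* ... */
	};

	static void wake_one(struct waiter *w);		/* invented wake helper */

	static void wake_all(struct list_head *pending)
	{
		struct waiter *w, *tmp;

		/* 'tmp' already points at the next entry, so iteration stays
		 * valid even if wake_one() leads to 'w' disappearing */
		list_for_each_entry_safe(w, tmp, pending, node) {
			list_del(&w->node);
			wake_one(w);
		}
	}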
-
Nick Piggin authored
sysv sem algorithm: most (at least some important) users only use simple semaphore operations, therefore it's worthwhile to optimize this use case. This patch: move the last looked-up sem_undo struct to the head of the task's undo list. Attempt to move common entries to the front of the list so search time is reduced. This reduces lookup_undo in an oprofile of a problematic SAP workload by 30% (see patch 4 for a description of the SAP workload). Signed-off-by: Nick Piggin <npiggin@suse.de> Signed-off-by: Manfred Spraul <manfred@colorfullife.com> Cc: Pierre Peiffer <peifferp@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
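The move-to-front idea in isolation, sketched with the generic list helpers rather than the ipc/sem.c code: after a hit, splice the entry to the head so repeated lookups of the same undo structure stop paying for a full walk.

	#include <linux/list.h>

	struct undo_like {
		struct list_head node;
		int semid;
	};

	static struct undo_like *lookup_mru(struct list_head *head, int semid)
	{
		struct undo_like *u;

		list_for_each_entry(u, head, node) {
			if (u->semid == semid) {
				/* keep frequently used entries near the head */
				list_move(&u->node, head);
				return u;
			}
		}
		return NULL;
	}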
-
- 13 Oct, 2009 1 commit
-
-
Serge E. Hallyn authored
7ca7e564 "ipc: store ipcs into IDRs" in 2007. The idr of which 3 exist for each ipc namespace is never freed. This patch simply frees them when the ipcns is freed. I don't believe any idr_remove() are done from rcu (and could therefore be delayed until after this idr_destroy()), so the patch should be safe. Some quick testing showed no harm, and the memory leak fixed. Caught by kmemleak. Signed-off-by: Serge E. Hallyn <serue@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
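A hedged sketch of the corresponding teardown call, assuming each of the three per-namespace ID sets embeds a struct idr (the field name below is a guess):

	#include <linux/idr.h>

	struct ipc_ids_like {
		struct idr ipcs_idr;
		/* ... */
	};

	static void free_ipc_id_set(struct ipc_ids_like *ids)
	{
		/* releases the idr's cached internal layers; skipping this
		 * leaks them every time an ipc namespace is torn down */
		idr_destroy(&ids->ipcs_idr);
	}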
-
- 24 Sep, 2009 1 commit
-
-
Oleg Nesterov authored
Currently we add sub-threads to the ->real_parent->children list. This buys nothing but slows down do_wait(). With this patch ->children contains only main threads (group leaders). The only complication is that forget_original_parent() should iterate over sub-threads by hand, and de_thread() needs another list_replace() when it changes ->group_leader. Henceforth do_wait_thread() can never see task_detached() && !EXIT_DEAD tasks, so we can remove this check (and we can unify do_wait_thread() and ptrace_do_wait()). This change can confuse the optimistic search in mm_update_next_owner(), but this is fixable and minor. Perhaps badness() and oom_kill_process() should be updated, but they should be fixed in any case. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Roland McGrath <roland@redhat.com> Cc: Ingo Molnar <mingo@elte.hu> Cc: Ratan Nalumasu <rnalumasu@gmail.com> Cc: Vitaly Mayatskikh <vmayatsk@redhat.com> Cc: David Rientjes <rientjes@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 25 Sep, 2009 1 commit
-
-
Roland McGrath authored
implementing user thread tracing and debugging. This fits on top of the tracehook_* layer, so the new code is well-isolated. The new interface is in <linux/utrace.h> and the DocBook utrace book describes it. It allows for multiple separate tracing engines to work in parallel without interfering with each other. Higher-level tracing facilities can be implemented as loadable kernel modules using this layer. The new facility is made optional under CONFIG_UTRACE. When this is not enabled, no new code is added. It can only be enabled on machines that have all the prerequisites and select CONFIG_HAVE_ARCH_TRACEHOOK. In this initial version, utrace and ptrace do not play together at all. If ptrace is attached to a thread, the attach calls in the utrace kernel API return -EBUSY. If utrace is attached to a thread, the PTRACE_ATTACH or PTRACE_TRACEME request will return EBUSY to userland. The old ptrace code is otherwise unchanged and nothing using ptrace should be affected by this patch as long as utrace is not used at the same time. In the future we can clean up the ptrace implementation and rework it to use the utrace API. [oleg@redhat.com: kill exclude_xtrace logic] Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 30 Oct, 2009 1 commit
-
-
Oleg Nesterov authored
->group_stop_count condition visible to tracers before do_signal_stop() will participate in this group-stop. Currently the patch has no effect, tracehook_get_signal() always returns 0. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Acked-by: Roland McGrath <roland@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
- 16 Oct, 2009 3 commits
-
-
Oleg Nesterov authored
Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Roland McGrath <roland@redhat.com> Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Oleg Nesterov authored
This is a bit confusing, we don't know the source of this signal. But we don't care, and "info->si_code = 0" is imho worse. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Roland McGrath <roland@redhat.com> Cc: Sukadev Bhattiprolu <sukadev@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-
Oleg Nesterov authored
triggers the "from_ancestor_ns" check. This fixes reparent_thread()->group_send_sig_info(pdeath_signal) behaviour, before this patch send_signal() does not detect the cross-namespace case when the child of the dying parent belongs to the sub-namespace. This patch can affect the behaviour of send_sig(), kill_pgrp() and kill_pid() when the caller sends the signal to the sub-namespace with "priv == 0" but surprisingly all callers seem to use them correctly, including disassociate_ctty(on_exit). Except: drivers/staging/comedi/drivers/addi-data/*.c incorrectly use send_sig(priv => 0). But his is minor and should be fixed anyway. Reported-by: Daniel Lezcano <dlezcano@fr.ibm.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Cc: Roland McGrath <roland@redhat.com> Reviewed-by: Sukadev Bhattiprolu <sukadev@us.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
-