- 14 Sep, 2009 37 commits
-
-
Nick Piggin authored
Sebastian discovered that SLQB can enable interrupts early in boot due to taking the rwsem. It should not be contended here, so XADD algorithm implementations should not be affected; however, spinlock algorithm implementations do a spin_lock_irq in the down_write fastpath and would be affected. Move the lock out of the early init path, and comment why.

This also covers a very small (and basically insignificant) race where the kmem_cache_create_ok checks succeed but kmem_cache_create still creates a cache with a duplicate name because the lock was dropped and retaken.

Reported-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Tested-by: Sebastian Andrzej Siewior <sebastian@breakpoint.cc>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
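A minimal userspace C sketch of the duplicate-name race described above (all names are invented; this illustrates the locking pattern, not SLQB code): the check for an existing cache name and the registration of the new cache happen under a single hold of the same lock, so no second caller can slip in between them.

    /* Hypothetical model: check-and-register under one lock hold. */
    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct cache {
        const char *name;
        struct cache *next;
    };

    static struct cache *cache_list;
    static pthread_mutex_t cache_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Must be called with cache_lock held. */
    static int name_in_use(const char *name)
    {
        struct cache *c;

        for (c = cache_list; c; c = c->next)
            if (!strcmp(c->name, name))
                return 1;
        return 0;
    }

    struct cache *cache_create(const char *name)
    {
        struct cache *c = NULL;

        pthread_mutex_lock(&cache_lock);
        if (!name_in_use(name)) {        /* check ...                      */
            c = calloc(1, sizeof(*c));
            if (c) {
                c->name = name;
                c->next = cache_list;    /* ... and insert, same lock hold */
                cache_list = c;
            }
        }
        pthread_mutex_unlock(&cache_lock);
        return c;
    }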
-
Nick Piggin authored
SLQB would return ZERO_SIZE_PTR rather than NULL if the requested size is too large. Debugged by Heiko Carstens. Fix this by checking size edge cases up front rather than in the slab index calculation.

Additionally, if the size parameter was non-constant and too large, then the checks may not have been performed at all, which could cause corruption. Next, ARCH_KMALLOC_MINALIGN may not be obeyed if size is non-constant, so test for KMALLOC_MIN_SIZE in that case. Finally, if KMALLOC_SHIFT_SLQB_HIGH is larger than 2MB, then kmalloc_index could silently run off the end of its precomputed table and return a -1 index into the kmalloc slab array, which could result in corruption. Extend this to allow up to 32MB (to match SLAB), and add a compile-time error in the case that the table is exceeded (also like SLAB).

Tested-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
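A sketch in plain C of the approach described (invented names and constants, not the kernel code): handle the size edge cases before the index lookup, so a zero or over-large size can never produce a bogus slab index.

    /* Hypothetical model: validate the size before any index lookup. */
    #include <stddef.h>
    #include <stdio.h>

    #define MODEL_ZERO_SIZE_PTR ((void *)16)     /* stand-in for ZERO_SIZE_PTR */
    #define MODEL_MAX_SHIFT     25               /* largest cache: 32MB        */

    /* Power-of-two slab index for 'size'; caller guarantees 0 < size <= max. */
    static int model_index(size_t size)
    {
        int shift;

        for (shift = 3; shift <= MODEL_MAX_SHIFT; shift++)
            if (size <= ((size_t)1 << shift))
                return shift;
        return -1;                               /* not reached if callers check */
    }

    static void *model_kmalloc(size_t size)
    {
        if (size == 0)
            return MODEL_ZERO_SIZE_PTR;          /* zero size: sentinel, not NULL */
        if (size > ((size_t)1 << MODEL_MAX_SHIFT))
            return NULL;                         /* too large: fail cleanly       */

        printf("size %zu -> kmalloc slab index %d\n", size, model_index(size));
        return NULL;                             /* the allocation itself is not modeled */
    }

    int main(void)
    {
        model_kmalloc(0);                        /* sentinel pointer       */
        model_kmalloc(100);                      /* index 7 (128 bytes)    */
        model_kmalloc((size_t)1 << 30);          /* rejected, returns NULL */
        return 0;
    }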
-
Pekka Enberg authored
Commit dcce284a ("mm: Extend gfp masking to the page allocator") introduced a global 'gfp_allowed_mask' for masking GFP flags. Use that in SLQB to fix a compilation error. Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Pekka Enberg authored
The slab allocator is set up much earlier in the boot sequence now, so the allocator must be able to support GFP_KERNEL allocations while interrupts are not yet enabled.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
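The mechanism can be illustrated with a small standalone C model (simplified flag names, not the kernel's gfp bits): while the system cannot sleep yet, a global "allowed" mask strips the may-sleep bit from every allocation request, and the full mask is restored once boot is far enough along.

    /* Hypothetical model of boot-time allocation-flag masking. */
    #include <stdio.h>

    #define FLAG_WAIT   0x1u                      /* "may sleep" */
    #define FLAG_IO     0x2u
    #define FLAG_FS     0x4u
    #define FLAG_KERNEL (FLAG_WAIT | FLAG_IO | FLAG_FS)

    /* During early boot, only flags that never sleep are allowed. */
    static unsigned int allowed_mask = FLAG_IO | FLAG_FS;

    static unsigned int mask_alloc_flags(unsigned int flags)
    {
        return flags & allowed_mask;              /* silently drop forbidden bits */
    }

    int main(void)
    {
        printf("early boot: %#x\n", mask_alloc_flags(FLAG_KERNEL)); /* WAIT stripped */

        allowed_mask = ~0u;                       /* boot done: allow everything */
        printf("after boot: %#x\n", mask_alloc_flags(FLAG_KERNEL)); /* unchanged   */
        return 0;
    }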
-
Pekka Enberg authored
As explained by Minchan Kim, we need to hold ->page_lock while updating l->nr_partial in slab_alloc_page() and __cache_list_get_page().

Reported-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Tested-by: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
Analyzed-by: Minchan Kim <minchan.kim@gmail.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
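A small userspace model of the rule being enforced (hypothetical types, pthread mutex standing in for the kernel lock; not SLQB code): nr_partial mirrors the length of the partial list, so both are only ever updated with the same lock held.

    /* Hypothetical model: list and its counter change under one lock. */
    #include <pthread.h>

    struct list_head { struct list_head *next, *prev; };

    struct cache_list {
        pthread_mutex_t page_lock;    /* protects 'partial' and 'nr_partial' */
        struct list_head partial;
        long nr_partial;
    };

    void cache_list_init(struct cache_list *l)
    {
        pthread_mutex_init(&l->page_lock, NULL);
        l->partial.next = l->partial.prev = &l->partial;
        l->nr_partial = 0;
    }

    static void list_add_front(struct list_head *entry, struct list_head *head)
    {
        entry->next = head->next;
        entry->prev = head;
        head->next->prev = entry;
        head->next = entry;
    }

    void add_partial(struct cache_list *l, struct list_head *page)
    {
        pthread_mutex_lock(&l->page_lock);
        list_add_front(page, &l->partial);
        l->nr_partial++;              /* counter only changes under the lock */
        pthread_mutex_unlock(&l->page_lock);
    }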
-
Randy Dunlap authored
Documentation/vm/slqbinfo.c:386: warning: unused variable 'total'
Documentation/vm/slqbinfo.c:512: warning: format '%5d' expects type 'int', but argument 9 has type 'long unsigned int'
Documentation/vm/slqbinfo.c:520: warning: format '%4ld' expects type 'long int', but argument 9 has type 'int'
Documentation/vm/slqbinfo.c:646: warning: unused variable 'total_partial'
Documentation/vm/slqbinfo.c:646: warning: unused variable 'avg_partial'
Documentation/vm/slqbinfo.c:645: warning: unused variable 'max_partial'
Documentation/vm/slqbinfo.c:645: warning: unused variable 'min_partial'
Documentation/vm/slqbinfo.c:860: warning: unused variable 'count'
Documentation/vm/slqbinfo.c:858: warning: unused variable 'p'

Signed-off-by: Randy Dunlap <randy.dunlap@oracle.com>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Andrew Morton authored
x86_64 allnoconfig:

mm/slqb.c: In function 'slab_alloc':
mm/slqb.c:1546: warning: passing argument 3 of 'alloc_debug_processing' makes pointer from integer without a cast
mm/slqb.c: In function 'slab_free':
mm/slqb.c:1764: warning: passing argument 3 of 'free_debug_processing' makes pointer from integer without a cast

Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
SLQB does not correctly account reclaim_state.reclaimed_slab, so it will break memory reclaim. Account it like SLAB does.

Cc: stable@kernel.org
Cc: linux-mm@kvack.org
Cc: Matt Mackall <mpm@selenic.com>
Acked-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
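A standalone C sketch of the accounting idea (invented names, not the kernel's reclaim_state): when slab pages go back to the page allocator during reclaim, the freed page count is credited to the caller's reclaim bookkeeping so the VM can see that shrinking the cache made progress.

    /* Hypothetical model of crediting freed slab pages to reclaim state. */
    #include <stdio.h>

    struct reclaim_state_model {
        unsigned long reclaimed_slab;   /* pages freed via slab shrinking */
    };

    /* Set by the reclaim path before it starts shrinking caches. */
    static struct reclaim_state_model *current_reclaim_state;

    static void free_slab_pages(unsigned int order)
    {
        unsigned int pages = 1u << order;

        /* ... return the pages to the page allocator here ... */

        if (current_reclaim_state)
            current_reclaim_state->reclaimed_slab += pages;
    }

    int main(void)
    {
        struct reclaim_state_model rs = { 0 };

        current_reclaim_state = &rs;
        free_slab_pages(0);
        free_slab_pages(1);
        printf("reclaimed %lu pages\n", rs.reclaimed_slab);
        return 0;
    }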
-
Pekka Enberg authored
This patch cleans up SLQB to make the code checkpatch clean.

Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Pekka Enberg authored
This patch changes SLQB to use the shorter _RET_IP_ form for consistency with SLUB and to prepare SLQB for eventual kmemtrace hooks.

Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
The dumb early allocation cache had a bug where it could allow allocation to go past the end of a page, which could cause crashes or random memory corruption. Fix this and simplify the logic.

[ penberg@cs.helsinki.fi: fix whitespace issues pointed out by Anton Vorontsov ]
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Tested-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
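The class of bug is easy to picture with a standalone C model of a "dumb" bump allocator (invented names, not the SLQB code): the request must be checked against the remaining space before the cursor is advanced, otherwise allocations run off the end of the backing page.

    /* Hypothetical model: bounds check before advancing the cursor. */
    #include <stddef.h>
    #include <stdio.h>

    #define EARLY_POOL_SIZE 4096u     /* stands in for one page */

    static unsigned char early_pool[EARLY_POOL_SIZE];
    static size_t early_used;

    static void *early_alloc(size_t size)
    {
        void *p;

        /* round up to pointer alignment */
        size = (size + sizeof(void *) - 1) & ~(sizeof(void *) - 1);

        if (size > EARLY_POOL_SIZE - early_used)
            return NULL;              /* would run off the end of the pool */

        p = early_pool + early_used;
        early_used += size;
        return p;
    }

    int main(void)
    {
        printf("%p\n", early_alloc(100));
        printf("%p\n", early_alloc(EARLY_POOL_SIZE));   /* rejected */
        return 0;
    }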
-
Nick Piggin authored
SLQB didn't consider MAX_ORDER when defining which sizes of kmalloc slabs to create. It panics at boot if it tries to create a cache which exceeds MAX_ORDER-1. The patch fixes PowerPC boot failures discussed here:

http://lkml.org/lkml/2009/4/28/198

Reported-by: Sachin Sant <sachinp@in.ibm.com>
Tested-by: Sachin Sant <sachinp@in.ibm.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
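A small C model of the constraint (made-up constants, not the kernel's definitions): a kmalloc cache of size (1 << shift) needs a page-allocator order of (shift - PAGE_SHIFT), and orders of MAX_ORDER or more are not available, so the largest shift has to be clamped rather than hard-coded.

    /* Hypothetical model: clamp the top kmalloc shift by MAX_ORDER. */
    #include <stdio.h>

    #define MODEL_PAGE_SHIFT 12   /* 4K pages */
    #define MODEL_MAX_ORDER  11   /* largest usable order is MAX_ORDER - 1 */

    /* Desired upper limit, e.g. a 32MB cache ... */
    #define WANTED_HIGH_SHIFT 25

    /* ... clamped so the backing allocation stays below MAX_ORDER. */
    #define HIGH_SHIFT_LIMIT (MODEL_PAGE_SHIFT + MODEL_MAX_ORDER - 1)
    #define HIGH_SHIFT (WANTED_HIGH_SHIFT < HIGH_SHIFT_LIMIT ? \
                        WANTED_HIGH_SHIFT : HIGH_SHIFT_LIMIT)

    int main(void)
    {
        printf("largest kmalloc cache: 1 << %d (order %d pages)\n",
               HIGH_SHIFT, HIGH_SHIFT - MODEL_PAGE_SHIFT);
        return 0;
    }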
-
Nick Piggin authored
SLQB fails to build without -O, which trips up some external code. This BUILD_BUG_ON isn't very useful anyway, because it is trivial to see from the callers that size will be constant.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
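One common shape of this kind of -O-dependent build check, sketched standalone (not necessarily the exact SLQB check): an undefined "trap" symbol guards a branch the optimizer is expected to delete, so the program only links when constant folding actually runs.

    /* Hypothetical sketch of a check that only works with optimization:
     * with -O the inline call constant-folds and the trap call disappears;
     * at -O0 the call survives and the final link fails even though the
     * logic is fine. */
    extern void size_is_not_constant(void);   /* never defined anywhere */

    static inline int index_for(unsigned long size)
    {
        if (!__builtin_constant_p(size))
            size_is_not_constant();            /* meant to be optimized away */
        return size <= 32 ? 5 : 6;             /* toy index calculation */
    }

    int main(void)
    {
        return index_for(16);                  /* links only with -O enabled */
    }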
-
David Rientjes authored
Slabs may not be allocated at MAX_ORDER or higher.

Cc: Nick Piggin <npiggin@suse.de>
Signed-off-by: David Rientjes <rientjes@google.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nobuhiro Iwamatsu authored
This patch fixes the following build breakage, which happens when CONFIG_NUMA is enabled but CONFIG_SMP is disabled:

  CC      mm/slqb.o
mm/slqb.c: In function '__slab_free':
mm/slqb.c:1735: error: implicit declaration of function 'slab_free_to_remote'
mm/slqb.c: In function 'kmem_cache_open':
mm/slqb.c:2274: error: implicit declaration of function 'kmem_cache_dyn_array_free'
mm/slqb.c:2275: warning: label 'error_cpu_array' defined but not used
mm/slqb.c: In function 'kmem_cache_destroy':
mm/slqb.c:2395: error: implicit declaration of function 'claim_remote_free_list'
mm/slqb.c: In function 'kmem_cache_init':
mm/slqb.c:2885: error: 'per_cpu__kmem_cpu_nodes' undeclared (first use in this function)
mm/slqb.c:2885: error: (Each undeclared identifier is reported only once
mm/slqb.c:2885: error: for each function it appears in.)
mm/slqb.c:2886: error: 'kmem_cpu_cache' undeclared (first use in this function)
make[1]: *** [mm/slqb.o] Error 1
make: *** [mm] Error 2

As x86 Kconfig doesn't even allow this combination, one is tempted to think it's an architecture Kconfig bug. But as it turns out, it's a perfectly valid configuration. Tony Luck explains:

  UP + NUMA is a special case of memory-only nodes. There are some (crazy?) customers with problems that require very large amounts of memory, but not very much cpu horse power. They buy large multi-node systems and populate all the nodes with as much memory as they can afford, but most nodes get zero cpus.

So let's fix that up.

[ tony.luck@intel.com: #ifdef cleanups ]
Signed-off-by: Nobuhiro Iwamatsu <iwamatsu.nobuhiro@renesas.com>
Acked-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
Yanmin Zhang had reported performance increases in a routing stress test with SLUB using gigantic slab sizes. The theory is either increased TLB efficiency or reduced page allocator costs. Anyway, it is trivial and adds basically no overhead to provide similar parameters in SLQB to experiment with.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
SLQB has a design flaw noticed by Yanmin Zhang when doing network packet stress testing. My intention with the lockless slab lists had been to try to balance producer/consumer type activity on the object queues, handling producer-side work at allocation time and consumer-side work at free time. But that breaks down if you have a huge number of objects in flight and then free them after activity is reduced on the producer side. Basically, objects allocated on CPU0 are then freed by CPU1, but they then rely on activity from CPU0 (or periodic trimming) to be freed back to the page allocator. If there is infrequent activity on CPU0, it can take a long time for the periodic trimming to free up unused objects.

Fix this by adding a lock to the page list queues and allowing CPU1 to do the freeing work synchronously if queues get too large. This still allows "nice" producer/consumer type patterns to fit within the fast object queues, without the possibility of building up a lot of objects. The spinlock should not be a big problem for nice workloads, as it is taken at least an order of magnitude less frequently than an object allocation/free operation.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
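A userspace C sketch of the fix described (invented names, pthread mutex standing in for the kernel spinlock; not SLQB code): remote frees are queued under a lock, and once the queue passes a watermark the freeing side drains it synchronously instead of waiting for the owning CPU or periodic trimming.

    /* Hypothetical model: bounded remote-free queue with synchronous drain. */
    #include <pthread.h>
    #include <stdlib.h>

    struct object { struct object *next; };

    struct remote_queue {
        pthread_mutex_t lock;
        struct object *head;
        unsigned long count;
        unsigned long watermark;     /* flush synchronously above this */
    };

    static void drain(struct remote_queue *q)   /* called with q->lock held */
    {
        while (q->head) {
            struct object *obj = q->head;

            q->head = obj->next;
            q->count--;
            free(obj);               /* stands in for returning the object to its slab */
        }
    }

    void remote_free(struct remote_queue *q, struct object *obj)
    {
        pthread_mutex_lock(&q->lock);
        obj->next = q->head;
        q->head = obj;
        q->count++;
        if (q->count > q->watermark)
            drain(q);                /* producer pays the cost, queue stays bounded */
        pthread_mutex_unlock(&q->lock);
    }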
-
Nick Piggin authored
gather_stats is not used if CONFIG_SLQB_SYSFS is not selected. Make it conditional and avoid the warning.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
flush_free_list can be called with interrupts enabled, from kmem_cache_destroy. Fix this.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Lai Jiangshan authored
struct slqb_page defines a struct rcu_head rcu_head for RCU; the RCU callback should use it.

Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Acked-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
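The pattern behind the fix, sketched as standalone C (model types, not the kernel's struct rcu_head or struct slqb_page): the callback receives a pointer to the embedded rcu_head and uses container_of()-style arithmetic to recover the enclosing page structure.

    /* Hypothetical model of recovering the page from its embedded rcu_head. */
    #include <stddef.h>
    #include <stdio.h>

    struct rcu_head_model { void (*func)(struct rcu_head_model *); };

    struct slqb_page_model {
        unsigned long flags;
        struct rcu_head_model rcu_head;   /* used for deferred freeing */
    };

    #define container_of_model(ptr, type, member) \
        ((type *)((char *)(ptr) - offsetof(type, member)))

    static void rcu_free_slab(struct rcu_head_model *head)
    {
        struct slqb_page_model *page =
            container_of_model(head, struct slqb_page_model, rcu_head);

        printf("freeing page with flags %#lx\n", page->flags);
    }

    int main(void)
    {
        struct slqb_page_model page = { .flags = 0x42 };

        page.rcu_head.func = rcu_free_slab;
        page.rcu_head.func(&page.rcu_head);   /* stands in for the RCU callback */
        return 0;
    }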
-
Nick Piggin authored
My virt_to_page_fast speedup for x86-64 snuck into the slqb patch, where it does not belong. Oops.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
Fix the lockdep error reported by Fengguang, where down_read is taken twice on slqb_lock when reading /proc/slabinfo, with the possibility of a deadlock.

Reported-by: Wu Fengguang <fengguang.wu@intel.com>
Tested-by: Wu Fengguang <fengguang.wu@intel.com>
Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Pekka Enberg authored
As reported by Stephen Rothwell, today's linux-next build (powerpc allnoconfig) produced this warning:

mm/slqb.c: In function 'kmem_cache_open':
mm/slqb.c:2180: warning: label 'error_lock' defined but not used
mm/slqb.c:2176: warning: label 'error_cpu_array' defined but not used

Caused by commit 8b9ffd9d52479bd17b5729c9f3acaefa90c7e585 ("slqb: dynamic array allocations"). Clearly neither CONFIG_SMP nor CONFIG_NUMA is set. Fix those up by wrapping the labels in #ifdef CONFIG_SMP and #ifdef CONFIG_NUMA where appropriate.

Cc: Nick Piggin <npiggin@suse.de>
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
Implement dynamic allocation for SLQB per-cpu and per-node arrays. This should hopefully have minimal runtime performance impact, because although there is an extra level of indirection to do allocations, the pointer should be in the cache hot area of the struct kmem_cache.

It's not quite possible to use the dynamic percpu allocator for this: firstly, that subsystem uses the slab allocator; secondly, it doesn't have good support for per-node data. If those problems were improved, we could use it. For now, just implement a very simple allocator until the kmalloc caches are up.

On x86-64 with a NUMA MAXCPUS config, sizes look like this:

   text    data     bss      dec     hex filename
  29960  259565     100   289625   46b59 mm/slab.o
  34130  497130     696   531956   81df4 mm/slub.o
  24575 1634267  111136  1769978  1b01fa mm/slqb.o
  24845   13959     712    39516    9a5c mm/slqb.o (+ this patch)

SLQB is now 2 orders of magnitude smaller than it was, and an order of magnitude smaller than SLAB or SLUB (in total size -- text size has always been smaller). So it should now be very suitable for distro-type configs in this respect.

As a side-effect, the UP version of cpu_slab (which is embedded directly in the kmem_cache struct) moves up to the hot cachelines, so it no longer needs to be cacheline aligned on UP. The overall result should be a reduction in cacheline footprint on UP kernels.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
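A standalone C sketch of the "very simple allocator until the kmalloc caches are up" idea (invented names, calloc standing in for the real allocator; not the SLQB code): early array allocations are carved from a static pool, and everything after initialization takes the normal path.

    /* Hypothetical model of a bootstrap allocator for the early arrays. */
    #include <stdlib.h>
    #include <string.h>

    static unsigned char boot_pool[8192];
    static size_t boot_used;
    static int slab_is_up;            /* set once the kmalloc caches exist */

    void *alloc_array(size_t size)
    {
        if (!slab_is_up) {
            void *p;

            size = (size + 15) & ~(size_t)15;      /* keep alignment */
            if (size > sizeof(boot_pool) - boot_used)
                return NULL;
            p = boot_pool + boot_used;
            boot_used += size;
            return memset(p, 0, size);
        }
        return calloc(1, size);       /* normal path after initialization */
    }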
-
Nick Piggin authored
Fix a problem where SLQB did not correctly return ZERO_SIZE_PTR for a zero sized allocation.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
-
Nick Piggin authored
Introducing the SLQB slab allocator. SLQB takes code and ideas from all other slab allocators in the tree.

The primary method for keeping lists of free objects within the allocator is a singly-linked list, storing a pointer within the object memory itself (or in a small additional space in the case of RCU-destroyed slabs). This is like SLOB and SLUB, as opposed to SLAB, which uses arrays of objects and metadata. This reduces memory consumption and makes smaller sized objects more realistic, as there is less overhead.

Using lists rather than arrays can reduce the cacheline footprint. When moving objects around, SLQB can move a list of objects from one CPU to another by simply manipulating a head pointer, whereas SLAB needs to memcpy arrays. Some SLAB per-CPU arrays can be up to 1K in size, which is a lot of cachelines that can be touched during alloc/free. Newly freed objects tend to be cache hot, and newly allocated ones tend to be touched soon anyway, so often there is little cost to using metadata in the objects.

SLQB has a per-CPU LIFO freelist of objects like SLAB (but using lists rather than arrays). Freed objects are returned to this freelist if they belong to the node which our CPU belongs to. So objects allocated on one CPU can be added to the freelist of another CPU on the same node.

When LIFO freelists need to be refilled or trimmed, SLQB takes or returns objects from a list of slabs. SLQB has per-CPU lists of slabs (which use struct page as their metadata, including the list head for this list). Each slab contains a singly-linked list of objects that are free in that slab (free, and not on a LIFO freelist). Slabs are freed as soon as all their objects are freed, and only allocated when there are no slabs remaining. They are taken off this slab list if there are no free objects left. So the slab lists always contain only "partial" slabs: those which are not completely full and not completely empty.

SLQB slabs can be manipulated with no locking, unlike other allocators, which tend to use per-node locks. As the number of threads per socket increases, this should help improve the scalability of slab operations.

Freeing objects to remote slab lists first batches up the objects on the freeing CPU, then moves them over at once to a list on the allocating CPU. The allocating CPU will then notice those objects and pull them onto the end of its freelist. This remote freeing scheme is designed to minimise the number of cross-CPU cachelines touched, short of going to a "crossbar" arrangement like SLAB has. SLAB has "crossbars" of arrays of objects; that is, NR_CPUS*MAX_NUMNODES type arrays, which can become very bloated on huge systems (this could be hundreds of GBs for kmem caches on 4096 CPU, 1024 node systems).

SLQB also has similar freelist and slab list structures per node, which are protected by a lock and usable by any CPU in order to do node-specific allocations. These allocations tend not to be too frequent (short-lived allocations should be node-local, long-lived allocations should not be too frequent).

There is a good overview and illustration of the design here: http://lwn.net/Articles/311502/

By using LIFO freelists like SLAB, SLQB tries to be very page-size agnostic. It tries very hard to use order-0 pages. This is good for both page allocator fragmentation and slab fragmentation.

SLQB initialisation code attempts to be as simple and un-clever as possible. There are no multiple phases where different things come up. There is no weird self-bootstrapping stuff. It just statically allocates the structures required to create the slabs that allocate other slab structures.

SLQB uses much of the debugging infrastructure and fine-grained sysfs statistics from SLUB. There is also a Documentation/vm/slqbinfo.c, derived from slabinfo.c, which can query the sysfs data.

Signed-off-by: Nick Piggin <npiggin@suse.de>
Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi>
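The core freelist idea can be shown with a tiny standalone C model (not SLQB code): a free object's first word stores the pointer to the next free object, so push and pop are a couple of pointer operations, and an entire list can be handed to another CPU by passing only the head pointer.

    /* Hypothetical model of an in-object, singly-linked LIFO freelist. */
    #include <stdio.h>
    #include <stdlib.h>

    struct freelist { void *head; unsigned long nr; };

    static void freelist_push(struct freelist *fl, void *object)
    {
        *(void **)object = fl->head;   /* link through the object itself */
        fl->head = object;
        fl->nr++;
    }

    static void *freelist_pop(struct freelist *fl)
    {
        void *object = fl->head;

        if (object) {
            fl->head = *(void **)object;
            fl->nr--;
        }
        return object;
    }

    int main(void)
    {
        struct freelist fl = { NULL, 0 };
        void *a = malloc(64), *b = malloc(64);

        freelist_push(&fl, a);
        freelist_push(&fl, b);

        void *first = freelist_pop(&fl);
        void *second = freelist_pop(&fl);
        printf("pop %p then %p (LIFO: most recently freed comes back first)\n",
               first, second);
        free(a);
        free(b);
        return 0;
    }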
-
Linus Torvalds authored
Merge branch 'x86-setup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-setup-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86, e820: Guard against array overflowed in __e820_add_region() x86, setup: remove obsolete pre-Kconfig CONFIG_VIDEO_ variables
-
Linus Torvalds authored
Merge branch 'x86-percpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-percpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86, percpu: Collect hot percpu variables into one cacheline x86, percpu: Fix DECLARE/DEFINE_PER_CPU_PAGE_ALIGNED() x86, percpu: Add 'percpu_read_stable()' interface for cacheable accesses
-
Linus Torvalds authored
* 'x86-mm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86, highmem_32.c: Clean up comment x86, pgtable.h: Clean up types x86: Clean up dump_pagetable()
-
Linus Torvalds authored
Merge branch 'x86-kbuild-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-kbuild-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86: Simplify the Makefile in a minor way through use of cc-ifversion
-
Linus Torvalds authored
* 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86-64: move clts into batch cpu state updates when preloading fpu x86-64: move unlazy_fpu() into lazy cpu state part of context switch x86-32: make sure clts is batched during context switch x86: split out core __math_state_restore
-
Linus Torvalds authored
Merge branch 'x86-debug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-debug-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86: Decrease the level of some NUMA messages to KERN_DEBUG
-
Linus Torvalds authored
* 'x86-cpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (22 commits) x86: Fix code patching for paravirt-alternatives on 486 x86, msr: change msr-reg.o to obj-y, and export its symbols x86: Use hard_smp_processor_id() to get apic id for AMD K8 cpus x86, sched: Workaround broken sched domain creation for AMD Magny-Cours x86, mcheck: Use correct cpumask for shared bank4 x86, cacheinfo: Fixup L3 cache information for AMD multi-node processors x86: Fix CPU llc_shared_map information for AMD Magny-Cours x86, msr: Fix msr-reg.S compilation with gas 2.16.1, on 32-bit too x86: Move kernel_fpu_using to irq_fpu_usable in asm/i387.h x86, msr: fix msr-reg.S compilation with gas 2.16.1 x86, msr: Export the register-setting MSR functions via /dev/*/msr x86, msr: Create _on_cpu helpers for {rw,wr}msr_safe_regs() x86, msr: Have the _safe MSR functions return -EIO, not -EFAULT x86, msr: CFI annotations, cleanups for msr-reg.S x86, asm: Make _ASM_EXTABLE() usable from assembly code x86, asm: Add 32-bit versions of the combined CFI macros x86, AMD: Disable wrongly set X86_FEATURE_LAHF_LM CPUID bit x86, msr: Rewrite AMD rd/wrmsr variants x86, msr: Add rd/wrmsr interfaces with preset registers x86: add specific support for Intel Atom architecture ...
-
Linus Torvalds authored
Merge branch 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-cleanups-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86: Make memtype_seq_ops const x86: uv: Clean up uv_ptc_init(), use proc_create() x86: Use printk_once() x86/cpu: Clean up various files a bit x86: Remove duplicated #include x86, ipi: Clean up safe_smp_processor_id() by using the cpu_has_apic() macro helper x86: Clean up idt_descr and idt_tableby using NR_VECTORS instead of hardcoded number x86: Further clean up of mtrr/generic.c x86: Clean up mtrr/main.c x86: Clean up mtrr/state.c x86: Clean up mtrr/mtrr.h x86: Clean up mtrr/if.c x86: Clean up mtrr/generic.c x86: Clean up mtrr/cyrix.c x86: Clean up mtrr/cleanup.c x86: Clean up mtrr/centaur.c x86: Clean up mtrr/amd.c: x86: ds.c fix invalid assignment
-
Linus Torvalds authored
Merge branch 'x86-asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-asm-generic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86: remove all now-duplicate header files x86: convert termios.h to the asm-generic version x86: convert almost generic headers to asm-generic version x86: convert trivial headers to asm-generic version x86: add copies of some headers to convert to asm-generic
-
Linus Torvalds authored
* 'x86-asm-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: x86/i386: Put aligned stack-canary in percpu shared_aligned section x86/i386: Make sure stack-protector segment base is cache aligned x86: Detect stack protector for i386 builds on x86_64 x86: allow "=rm" in native_save_fl() x86: properly annotate alternatives.c x86: Introduce GDT_ENTRY_INIT(), initialize bad_bios_desc statically x86, 32-bit: Use generic sys_pipe() x86: Introduce GDT_ENTRY_INIT(), fix APM x86: Introduce GDT_ENTRY_INIT() x86: Introduce set_desc_base() and set_desc_limit() x86: Remove unused patch_espfix_desc() x86: Use get_desc_base()
-
Linus Torvalds authored
Merge branch 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'x86-apic-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (24 commits) ACPI, x86: expose some IO-APIC routines when CONFIG_ACPI=n x86, apic: Slim down stack usage in early_init_lapic_mapping() x86, ioapic: Get rid of needless check and simplify ioapic_setup_resources() x86, ioapic: Define IO_APIC_DEFAULT_PHYS_BASE constant x86: Fix x86_model test in es7000_apic_is_cluster() x86, apic: Move dmar_table_init() out of enable_IR() x86, ioapic: Panic on irq-pin binding only if needed x86/apic: Enable x2APIC without interrupt remapping under KVM x86, apic: Drop redundant bit assignment x86, ioapic: Throw BUG instead of NULL dereference x86, ioapic: Introduce for_each_irq_pin() helper x86: Remove superfluous NULL pointer check in destroy_irq() x86/ioapic.c: unify ioapic_retrigger_irq() x86/ioapic.c: convert __target_IO_APIC_irq to conventional for() loop x86/ioapic.c: clean up replace_pin_at_irq_node logic and comments x86/ioapic.c: convert replace_pin_at_irq_node to conventional for() loop x86/ioapic.c: simplify add_pin_to_irq_node() x86/ioapic.c: convert io_apic_level_ack_pending loop to normal for() loop x86/ioapic.c: move lost comment to what seems like appropriate place x86/ioapic.c: remove redundant declaration of irq_pin_list ...
-
- 11 Sep, 2009 3 commits
-
-
Linus Torvalds authored
* git://git.linux-nfs.org/projects/trondmy/nfs-2.6: (87 commits) NFSv4: Disallow 'mount -t nfs4 -overs=2' and 'mount -t nfs4 -overs=3' NFS: Allow the "nfs" file system type to support NFSv4 NFS: Move details of nfs4_get_sb() to a helper NFS: Refactor NFSv4 text-based mount option validation NFS: Mount option parser should detect missing "port=" NFS: out of date comment regarding O_EXCL above nfs3_proc_create() NFS: Handle a zero-length auth flavor list SUNRPC: Ensure that sunrpc gets initialised before nfs, lockd, etc... nfs: fix compile error in rpc_pipefs.h nfs: Remove reference to generic_osync_inode from a comment SUNRPC: cache must take a reference to the cache detail's module on open() NFS: Use the DNS resolver in the mount code. NFS: Add a dns resolver for use with NFSv4 referrals and migration SUNRPC: Fix a typo in cache_pipefs_files nfs: nfs4xdr: optimize low level decoding nfs: nfs4xdr: get rid of READ_BUF nfs: nfs4xdr: simplify decode_exchange_id by reusing decode_opaque_inline nfs: nfs4xdr: get rid of COPYMEM nfs: nfs4xdr: introduce decode_sessionid helper nfs: nfs4xdr: introduce decode_verifier helper ...
-
Linus Torvalds authored
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/libata-dev: (25 commits) pata_rz1000: use printk_once ahci: kill @force_restart and refine CLO for ahci_kick_engine() pata_cs5535: add pci id for AMD based CS5535 controllers ahci: Add AMD SB900 SATA/IDE controller device IDs drivers/ata: use resource_size sata_fsl: Defer non-ncq commands when ncq commands active libata: add SATA PMP revision information for spec 1.2 libata: fix off-by-one error in ata_tf_read_block() ahci: Gigabyte GA-MA69VM-S2 can't do 64bit DMA ahci: make ahci_asus_m2a_vm_32bit_only() quirk more generic dmi: extend dmi_get_year() to dmi_get_date() dmi: fix date handling in dmi_get_year() libata: unbreak TPM filtering by reorganizing ata_scsi_pass_thru() sata_sis: convert to slave_link sata_sil24: always set protocol override for non-ATAPI data commands libata: Export AHCI capabilities libata: Delegate nonrot flag setting to SCSI [libata] Add pata_rdc driver for RDC ATA devices drivers/ata: Remove unnecessary semicolons libata: remove spindown skipping and warning ...
-
Linus Torvalds authored
Merge branch 'tracing-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip * 'tracing-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip: (105 commits) ring-buffer: only enable ring_buffer_swap_cpu when needed ring-buffer: check for swapped buffers in start of committing tracing: report error in trace if we fail to swap latency buffer tracing: add trace_array_printk for internal tracers to use tracing: pass around ring buffer instead of tracer tracing: make tracing_reset safe for external use tracing: use timestamp to determine start of latency traces tracing: Remove mentioning of legacy latency_trace file from documentation tracing/filters: Defer pred allocation, fix memory leak tracing: remove users of tracing_reset tracing: disable buffers and synchronize_sched before resetting tracing: disable update max tracer while reading trace tracing: print out start and stop in latency traces ring-buffer: disable all cpu buffers when one finds a problem ring-buffer: do not count discarded events ring-buffer: remove ring_buffer_event_discard ring-buffer: fix ring_buffer_read crossing pages ring-buffer: remove unnecessary cpu_relax ring-buffer: do not swap buffers during a commit ring-buffer: do not reset while in a commit ...
-