Commit e00946fe authored by Christoph Lameter, committed by Linus Torvalds

[PATCH] slab: Bypass free lists for __drain_alien_cache()

__drain_alien_cache() currently drains objects by freeing them to the
(remote) freelists of the original node.  However, each node also has a
shared list containing objects to be used on any processor of that node.
We can avoid a number of remote node accesses by copying the pointers to
the free objects directly into the remote shared array.
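
For illustration, here is a minimal user-space model of that bulk pointer transfer. It paraphrases the transfer_objects() helper introduced by the parent commit (3ded175a); the AC_CAP bound, the trimmed struct layout, and the main() driver are simplifications for this sketch, not kernel code.

#include <stdio.h>
#include <string.h>

#define AC_CAP 16

/* Simplified stand-in for the kernel's struct array_cache. */
struct array_cache {
	unsigned int avail;	/* number of cached object pointers */
	unsigned int limit;	/* capacity of entry[] */
	void *entry[AC_CAP];	/* pointers to free objects */
};

/* Move up to max object pointers from 'from' to 'to'; returns count moved. */
static int transfer_objects(struct array_cache *to,
			    struct array_cache *from, unsigned int max)
{
	unsigned int nr = from->avail;

	if (nr > max)
		nr = max;
	if (nr > to->limit - to->avail)
		nr = to->limit - to->avail;
	if (!nr)
		return 0;

	/* Bulk-copy pointers from the tail of from->entry[] in one memcpy(). */
	memcpy(to->entry + to->avail, from->entry + from->avail - nr,
	       sizeof(void *) * nr);
	from->avail -= nr;
	to->avail += nr;
	return nr;
}

int main(void)
{
	static int objs[4];
	struct array_cache alien = { .avail = 4, .limit = AC_CAP };
	struct array_cache shared = { .avail = 0, .limit = AC_CAP };
	int i, moved;

	for (i = 0; i < 4; i++)
		alien.entry[i] = &objs[i];

	/* Drain: transfer pointers in bulk instead of freeing one by one. */
	moved = transfer_objects(&shared, &alien, alien.limit);
	printf("moved %d objects, %u left for free_block()\n",
	       moved, alien.avail);
	return 0;
}

Whatever does not fit into the shared array is still handed back object by object, which is why the patched __drain_alien_cache() below keeps the free_block() call after the transfer.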

And while we are at it: Skip alien draining if the alien cache spinlock is
already taken.
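
A compile-only sketch of that skip-if-busy pattern follows, with a pthreads mutex standing in for the kernel spinlock (an assumption for illustration; the actual patch uses spin_trylock_irq(), as the diff below shows).

#include <pthread.h>

static pthread_mutex_t alien_lock = PTHREAD_MUTEX_INITIALIZER;

/* Model of reap_alien(): drain only if nobody else holds the lock. */
static void reap_alien_model(void (*drain)(void))
{
	/* Old behaviour: an unconditional lock would wait here. */
	if (pthread_mutex_trylock(&alien_lock) == 0) {
		drain();	/* stands in for __drain_alien_cache() */
		pthread_mutex_unlock(&alien_lock);
	}
	/* Lock busy: another CPU is already draining; retry on the next reap. */
}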

Kiran reported a performance benefit from this change.
Signed-off-by: Christoph Lameter <clameter@sgi.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Signed-off-by: Andrew Morton <akpm@osdl.org>
Signed-off-by: Linus Torvalds <torvalds@osdl.org>
parent 3ded175a
mm/slab.c
@@ -971,6 +971,13 @@ static void __drain_alien_cache(struct kmem_cache *cachep,
 
 	if (ac->avail) {
 		spin_lock(&rl3->list_lock);
+		/*
+		 * Stuff objects into the remote nodes shared array first.
+		 * That way we could avoid the overhead of putting the objects
+		 * into the free lists and getting them back later.
+		 */
+		transfer_objects(rl3->shared, ac, ac->limit);
+
 		free_block(cachep, ac->entry, ac->avail, node);
 		ac->avail = 0;
 		spin_unlock(&rl3->list_lock);
@@ -986,8 +993,8 @@ static void reap_alien(struct kmem_cache *cachep, struct kmem_list3 *l3)
 	if (l3->alien) {
 		struct array_cache *ac = l3->alien[node];
-		if (ac && ac->avail) {
-			spin_lock_irq(&ac->lock);
+
+		if (ac && ac->avail && spin_trylock_irq(&ac->lock)) {
 			__drain_alien_cache(cachep, ac, node);
 			spin_unlock_irq(&ac->lock);
 		}
 	}