Commit ba4d40bb authored by Eric W. Biederman, committed by Andi Kleen

[PATCH] Auto size the per cpu area.

Now for a completely different but trivial approach.
I just boot-tested it with 255 CPUs and everything worked.

Currently, everything we place in the per-cpu area (except module data)
is known at compile time.  So instead of allocating a fixed size for
the per-cpu area, allocate the number of bytes we actually need plus a
fixed constant reserved for modules.

It isn't perfect, but it is much less of a pain to work with than what
we are doing now.

AK: fixed warning
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andi Kleen <ak@suse.de>
parent 522e93e3
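The sizing rule the patch introduces can be illustrated with a small stand-alone sketch. This is not the kernel code: the data size, cache-line size, and module reserve below are made-up stand-ins for the real __per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES, and PERCPU_MODULE_RESERVE.

#include <stdio.h>

/* Hypothetical stand-ins for the linker-provided per-cpu section size,
 * the cache line size, and the fixed headroom kept for module data. */
#define PER_CPU_DATA_BYTES    3500UL
#define SMP_CACHE_BYTES         64UL
#define PERCPU_MODULE_RESERVE 8192UL

/* Round x up to a multiple of a (a must be a power of two). */
#define ALIGN(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Mirrors the new PERCPU_ENOUGH_ROOM: the compile-time per-cpu footprint,
 * rounded up to a cache line, plus a fixed reserve for modules. */
#define PERCPU_ENOUGH_ROOM \
	(ALIGN(PER_CPU_DATA_BYTES, SMP_CACHE_BYTES) + PERCPU_MODULE_RESERVE)

int main(void)
{
	printf("per cpu area: %lu bytes\n", (unsigned long)PERCPU_ENOUGH_ROOM);
	return 0;
}

With these placeholder numbers the area comes out to 3520 + 8192 = 11712 bytes per CPU, instead of a large fixed constant chosen up front.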
@@ -95,12 +95,9 @@ void __init setup_per_cpu_areas(void)
 #endif

 	/* Copy section for each CPU (we discard the original) */
-	size = ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES);
-#ifdef CONFIG_MODULES
-	if (size < PERCPU_ENOUGH_ROOM)
-		size = PERCPU_ENOUGH_ROOM;
-#endif
+	size = PERCPU_ENOUGH_ROOM;
+	printk(KERN_INFO "PERCPU: Allocating %lu bytes of per cpu data\n", size);

 	for_each_cpu_mask (i, cpu_possible_map) {
 		char *ptr;
@@ -11,6 +11,16 @@
 #include <asm/pda.h>

+#ifdef CONFIG_MODULES
+# define PERCPU_MODULE_RESERVE 8192
+#else
+# define PERCPU_MODULE_RESERVE 0
+#endif
+
+#define PERCPU_ENOUGH_ROOM \
+	(ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES) + \
+	 PERCPU_MODULE_RESERVE)
+
 #define __per_cpu_offset(cpu) (cpu_pda(cpu)->data_offset)
 #define __my_cpu_offset() read_pda(data_offset)
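For context, setup_per_cpu_areas() uses this size to give every possible CPU its own over-sized copy of the static per-cpu section; the slack beyond the compile-time data is what module per-cpu variables are later carved out of. Below is a minimal userspace analogue of that allocate-and-copy pattern, not the kernel's implementation: the CPU count, template size, and area size are assumed values matching the sketch above.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NR_CPUS 4

/* Hypothetical image of the static per-cpu section and the total area
 * size computed as ALIGN(section size, cache line) + module reserve. */
static char per_cpu_template[3500];
#define PERCPU_AREA_SIZE 11712UL

static char *per_cpu_area[NR_CPUS];

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		/* One over-sized area per possible CPU: the static data is
		 * copied in, the remainder stays free for modules to claim. */
		per_cpu_area[cpu] = malloc(PERCPU_AREA_SIZE);
		if (!per_cpu_area[cpu]) {
			fprintf(stderr, "cannot allocate cpu data for CPU %d\n", cpu);
			return 1;
		}
		memcpy(per_cpu_area[cpu], per_cpu_template,
		       sizeof(per_cpu_template));
	}
	printf("allocated %lu bytes for each of %d CPUs\n",
	       PERCPU_AREA_SIZE, NR_CPUS);
	return 0;
}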