    [PATCH] powerpc/64: per cpu data optimisations · 7a0268fa
    The current ppc64 per cpu data implementation is quite slow, e.g.:
    
            lhz 11,18(13)           /* smp_processor_id() */
            ld 9,.LC63-.LCTOC1(30)  /* per_cpu__variable_name */
            ld 8,.LC61-.LCTOC1(30)  /* __per_cpu_offset */
            sldi 11,11,3            /* form index into __per_cpu_offset */
            mr 10,9
            ldx 9,11,8              /* __per_cpu_offset[smp_processor_id()] */
            ldx 0,10,9              /* load per cpu data */
    
    Five loads for something that is supposed to be fast is pretty awful. One
    reason for the large number of loads is that we have to synthesize two
    64-bit constants (per_cpu__variable_name and __per_cpu_offset).
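
    For reference, the listing above is roughly what the generic
    asm-generic/percpu.h definitions of this era expand to (a sketch from
    memory, not the exact header):

            extern unsigned long __per_cpu_offset[NR_CPUS];

            /* this cpu's copy lives at &per_cpu__var + __per_cpu_offset[cpu] */
            #define per_cpu(var, cpu) \
                    (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
            #define __get_cpu_var(var) per_cpu(var, smp_processor_id())

    Both &per_cpu__variable_name and __per_cpu_offset are 64-bit address
    constants, which is where the TOC loads in the listing come from.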
    
    By putting __per_cpu_offset into the paca we can avoid the 2 loads
    associated with it:
    
            ld 11,56(13)            /* paca->data_offset */
            ld 9,.LC59-.LCTOC1(30)  /* per_cpu__variable_name */
            ldx 0,9,11              /* load per cpu data */
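
    On the C side this corresponds to something like the following in the
    ppc64 percpu.h (again a sketch, assuming a data_offset field in the paca
    and the existing get_paca() accessor):

            /* the per cpu offset now comes straight out of the paca (r13) */
            #define __per_cpu_offset(cpu) (paca[cpu].data_offset)
            #define __my_cpu_offset() get_paca()->data_offset

            #define per_cpu(var, cpu) \
                    (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset(cpu)))
            #define __get_cpu_var(var) \
                    (*RELOC_HIDE(&per_cpu__##var, __my_cpu_offset()))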
    
    Longer term we should be able to do even better than three loads.
    If per_cpu__variable_name wasn't a 64-bit constant and paca->data_offset
    was in a register we could cut it down to one load. A suggestion from
    Rusty is to use gcc's __thread extension here. In order to do this we
    would need to free up r13 (the __thread register, and where the paca
    currently lives). So far I've had a few unsuccessful attempts at doing that :)
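
    For illustration only, the __thread idea would look something like this
    (hypothetical code with made-up names, since r13 would first have to be
    freed up for TLS use):

            /* hypothetical: a per cpu counter addressed via the TLS register */
            static __thread unsigned long pcpu_counter;

            static inline void pcpu_count(void)
            {
                    /* one r13-relative access, no offset table lookup */
                    pcpu_counter++;
            }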
    
    The patch also allocates per cpu memory node-local on NUMA machines.
    This part, from Rusty, has been sitting in my queue _forever_ but stalled
    when I hit the compiler bug. Sorry about that.
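
    The node-local allocation (together with the possible-cpu restriction
    described below) boils down to a setup_per_cpu_areas() along these lines
    in setup_64.c; a rough reconstruction from the description, not the
    literal patch:

            void __init setup_per_cpu_areas(void)
            {
                    unsigned long size;
                    char *ptr;
                    int i;

                    /* one copy of the .data.percpu section per cpu */
                    size = ALIGN(__per_cpu_end - __per_cpu_start, SMP_CACHE_BYTES);

                    for_each_cpu(i) {       /* possible cpus only */
                            /* place the copy on the cpu's home node */
                            ptr = alloc_bootmem_node(NODE_DATA(cpu_to_node(i)), size);
                            if (!ptr)
                                    panic("Cannot allocate cpu data for CPU %d\n", i);

                            /* record where __get_cpu_var() will find this cpu's copy */
                            paca[i].data_offset = ptr - __per_cpu_start;
                            memcpy(ptr, __per_cpu_start, __per_cpu_end - __per_cpu_start);
                    }
            }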
    
    Finally I also only allocate per cpu data for possible cpus, which comes
    straight out of the x86-64 port. On a pseries kernel (with NR_CPUS == 128)
    and 4 possible cpus we see some nice gains:
    
    before:
                 total       used       free     shared    buffers     cached
    Mem:       4012228     212860    3799368          0          0     162424

    after:
                 total       used       free     shared    buffers     cached
    Mem:       4016200     212984    3803216          0          0     162424
    
    A saving of 3.75MB. Quite nice for smaller machines. Note: we now have
    to be careful of per cpu users that touch data for !possible cpus.
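
    A hedged example of the kind of user that now needs auditing (variable
    names are made up; for_each_cpu() iterated the possible map at the time):

            /* unsafe now: slots for !possible cpus were never allocated */
            for (i = 0; i < NR_CPUS; i++)
                    total += per_cpu(counter, i);

            /* safe: only visit cpus that actually have a per cpu copy */
            for_each_cpu(i)
                    total += per_cpu(counter, i);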
    
    At this stage it might be worth making the NUMA and possible cpu
    optimisations generic, but per cpu init is done so early that we have to
    be careful that all architectures have their possible map set up correctly.
    Signed-off-by: Anton Blanchard <anton@samba.org>
    Signed-off-by: Paul Mackerras <paulus@samba.org>