Commit 73f10281 authored by Nick Piggin, committed by Linus Torvalds

read_barrier_depends arch fixlets

read_barrier_depends has always been a no-op (not a compiler barrier) on all
architectures except SMP alpha. This brings UP alpha and frv into line with
all other architectures, and fixes the incorrect documentation.
Signed-off-by: Nick Piggin <npiggin@suse.de>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
parent 4ef7e3e9
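For context, a minimal sketch (not part of the commit) of the pattern that
smp_read_barrier_depends() exists for: a reader whose second load depends on
the value returned by its first load. The names struct foo, publish() and
consume() are invented for illustration; the pattern mirrors the
dependent-load example in Documentation/memory-barriers.txt.

    /* Illustrative sketch only -- struct foo, publish() and consume()
     * are hypothetical names, not from this commit. */
    #include <asm/system.h>     /* smp_wmb(), smp_read_barrier_depends() */

    struct foo { int a; };
    struct foo *global_ptr;

    /* Writer: initialize the object, then publish the pointer. */
    void publish(struct foo *p)
    {
            p->a = 42;
            smp_wmb();          /* order the store to p->a before the
                                 * store to global_ptr */
            global_ptr = p;
    }

    /* Reader: the load of q->a depends on the load of global_ptr. */
    int consume(void)
    {
            struct foo *q = global_ptr;
            smp_read_barrier_depends(); /* a real barrier on SMP alpha only */
            return q->a;
    }

On every architecture except SMP alpha the address dependency alone orders
the two loads in hardware, which is why the macro can safely be an empty
statement everywhere else.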
@@ -994,7 +994,17 @@ The Linux kernel has eight basic CPU memory barriers:
 	DATA DEPENDENCY	read_barrier_depends()	smp_read_barrier_depends()
 
-All CPU memory barriers unconditionally imply compiler barriers.
+All memory barriers except the data dependency barriers imply a compiler
+barrier.  Data dependencies do not impose any additional compiler ordering.
+
+Aside: In the case of data dependencies, the compiler would be expected to
+issue the loads in the correct order (eg. `a[b]` would have to load the value
+of b before loading a[b]), however there is no guarantee in the C specification
+that the compiler may not speculate the value of b (eg. is equal to 1) and load
+a before b (eg. tmp = a[1]; if (b != 1) tmp = a[b]; ).  There is also the
+problem of a compiler reloading b after having loaded a[b], thus having a newer
+copy of b than a[b].  A consensus has not yet been reached about these problems,
+however the ACCESS_ONCE macro is a good place to start looking.
 
 SMP memory barriers are reduced to compiler barriers on uniprocessor compiled
 systems because it is assumed that a CPU will appear to be self-consistent,
...
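The hazard described in the aside above can be made concrete with a short
sketch (not from the commit; the reader_* names are invented).
reader_as_compiled() shows the value-speculating transformation the aside
says the C specification permits, and reader_with_access_once() shows the
defence the text points to; the ACCESS_ONCE() definition below is the
kernel's own from linux/compiler.h of this era.

    #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

    extern int a[16];
    extern int b;

    /* What the programmer wrote: loading a[b] depends on first loading b. */
    int reader_as_written(void)
    {
            return a[b];
    }

    /* A transformation the C specification permits: speculate that b == 1
     * and issue the load of a[1] before the load of b. */
    int reader_as_compiled(void)
    {
            int tmp = a[1];
            if (b != 1)
                    tmp = a[b];
            return tmp;
    }

    /* Forcing exactly one ordered load of b through a volatile access
     * rules out both the speculation above and a later reload of b. */
    int reader_with_access_once(void)
    {
            int idx = ACCESS_ONCE(b);
            return a[idx];
    }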
@@ -24,7 +24,7 @@ __asm__ __volatile__("mb": : :"memory")
 #define smp_mb()	barrier()
 #define smp_rmb()	barrier()
 #define smp_wmb()	barrier()
-#define smp_read_barrier_depends()	barrier()
+#define smp_read_barrier_depends()	do { } while (0)
 #endif
 
 #define set_mb(var, value) \
...
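The substance of this hunk and the frv one below is the same, and the
difference is worth spelling out: barrier() is a compiler memory barrier,
while do { } while (0) expands to no code and imposes no ordering of any
kind. For reference, the kernel defines barrier() (in
include/linux/compiler-gcc.h) as an empty asm with a memory clobber:

    /* Compiler barrier: the "memory" clobber forbids the compiler from
     * reordering or caching memory accesses across this point. */
    #define barrier() __asm__ __volatile__("": : :"memory")

    /* The replacement used by this commit: a true no-op that generates
     * no code and constrains neither the compiler nor the CPU. */
    #define smp_read_barrier_depends()	do { } while (0)

Demoting these macros from barrier() to an empty statement is safe precisely
because, as the documentation hunk above now states, data dependency barriers
do not imply a compiler barrier.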
@@ -179,7 +179,7 @@ do { \
 #define mb()			asm volatile ("membar" : : :"memory")
 #define rmb()			asm volatile ("membar" : : :"memory")
 #define wmb()			asm volatile ("membar" : : :"memory")
-#define read_barrier_depends()	barrier()
+#define read_barrier_depends()	do { } while (0)
 
 #ifdef CONFIG_SMP
 #define smp_mb()		mb()
...