Commit 5f865151 authored by Lai Jiangshan, committed by Ingo Molnar

rcupdate: fix bug of rcu_barrier*()

The current rcu_barrier_bh() looks like this:

void rcu_barrier_bh(void)
{
	BUG_ON(in_interrupt());
	/* Take cpucontrol mutex to protect against CPU hotplug */
	mutex_lock(&rcu_barrier_mutex);
	init_completion(&rcu_barrier_completion);
	atomic_set(&rcu_barrier_cpu_count, 0);
	/*
	 * The queueing of callbacks in all CPUs must be atomic with
	 * respect to RCU, otherwise one CPU may queue a callback,
	 * wait for a grace period, decrement barrier count and call
	 * complete(), while other CPUs have not yet queued anything.
	 * So, we need to make sure that grace periods cannot complete
	 * until all the callbacks are queued.
	 */
	rcu_read_lock();
	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
	rcu_read_unlock();
	wait_for_completion(&rcu_barrier_completion);
	mutex_unlock(&rcu_barrier_mutex);
}

The inconsistency between the code and the comment reveals a bug here.
rcu_read_lock() cannot ensure that "grace periods for RCU_BH
cannot complete until all the callbacks are queued"; it only
ensures that grace periods for RCU cannot complete until all
the callbacks are queued.

So rcu_barrier_bh() would have to use rcu_read_lock_bh(), like this:

void rcu_barrier_bh(void)
{
	......
	rcu_read_lock_bh();
	on_each_cpu(rcu_barrier_func, (void *)RCU_BARRIER_BH, 1);
	rcu_read_unlock_bh();
	......
}

rcu_barrier() and rcu_barrier_sched() would need the same treatment,
which would introduce a lot of duplicated code. This patch instead
fixes the bug in a different way; see the comment in the patch for
details. Thanks to Paul E. McKenney, who rewrote the comment.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
parent 8cf7d362
@@ -119,18 +119,19 @@ static void _rcu_barrier(enum rcu_barrier type)
 	/* Take cpucontrol mutex to protect against CPU hotplug */
 	mutex_lock(&rcu_barrier_mutex);
 	init_completion(&rcu_barrier_completion);
-	atomic_set(&rcu_barrier_cpu_count, 0);
 	/*
-	 * The queueing of callbacks in all CPUs must be atomic with
-	 * respect to RCU, otherwise one CPU may queue a callback,
-	 * wait for a grace period, decrement barrier count and call
-	 * complete(), while other CPUs have not yet queued anything.
-	 * So, we need to make sure that grace periods cannot complete
-	 * until all the callbacks are queued.
+	 * Initialize rcu_barrier_cpu_count to 1, then invoke
+	 * rcu_barrier_func() on each CPU, so that each CPU also has
+	 * incremented rcu_barrier_cpu_count.  Only then is it safe to
+	 * decrement rcu_barrier_cpu_count -- otherwise the first CPU
+	 * might complete its grace period before all of the other CPUs
+	 * did their increment, causing this function to return too
+	 * early.
 	 */
-	rcu_read_lock();
+	atomic_set(&rcu_barrier_cpu_count, 1);
 	on_each_cpu(rcu_barrier_func, (void *)type, 1);
-	rcu_read_unlock();
+	if (atomic_dec_and_test(&rcu_barrier_cpu_count))
+		complete(&rcu_barrier_completion);
 	wait_for_completion(&rcu_barrier_completion);
 	mutex_unlock(&rcu_barrier_mutex);
 }