rcu: Consolidate last open-coded expedited memory barrier

One of the requirements on RCU grace periods is that if there is a
causal chain of operations that starts after one grace period and
ends before another grace period, then the two grace periods must
be serialized.  There has been (and might still be) code that relies
on this, for example, certain types of reference-counting code that
does a call_rcu() within an RCU callback function.
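
For illustration only (this sketch and its names, such as struct two_stage,
free_outer_cb(), and free_inner_cb(), are made up and not from this patch), one
such causal chain arises when an RCU callback posts another call_rcu(): the
second grace period begins after the first one ended, and so must be
serialized after it:

	#include <linux/kernel.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct two_stage {
		struct rcu_head rh;	/* Reused for both call_rcu() invocations. */
		void *outer;
		void *inner;
	};

	static void free_inner_cb(struct rcu_head *rh)
	{
		struct two_stage *p = container_of(rh, struct two_stage, rh);

		kfree(p->inner);
		kfree(p);
	}

	static void free_outer_cb(struct rcu_head *rh)
	{
		struct two_stage *p = container_of(rh, struct two_stage, rh);

		kfree(p->outer);
		/*
		 * This call_rcu() starts after the first grace period has
		 * ended, so the grace period it waits for must come strictly
		 * after that first one.
		 */
		call_rcu(&p->rh, free_inner_cb);
	}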

This requirement is why there is an smp_mb() at the end of both
synchronize_sched_expedited() and synchronize_rcu_expedited().
However, these are the last such open-coded barriers, and both
functions already call rcu_exp_gp_seq_end(), so it is cleaner to
consolidate the barrier there.  This commit does just that.
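
For reference, the removed open-coded barriers were commented "ensure
subsequent action seen after grace period", and the guarantee they (and now
the single smp_mb() in rcu_exp_gp_seq_end()) give callers looks roughly like
the following sketch (illustrative only; remove_item() is a made-up name):

	remove_item(p);			/* Unlink p so new readers cannot find it. */
	synchronize_rcu_expedited();	/* Wait for pre-existing readers. */
	/*
	 * The smp_mb() now in rcu_exp_gp_seq_end() orders the grace-period
	 * machinery before anything the caller does next, so this subsequent
	 * action cannot be observed before the grace period.
	 */
	kfree(p);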

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Paul E. McKenney 2015-06-27 09:36:29 -07:00
Parent 4f525a528b
Commit 704dd435ac
2 changed files with 1 addition and 2 deletions


@@ -3299,6 +3299,7 @@ static void rcu_exp_gp_seq_start(struct rcu_state *rsp)
 static void rcu_exp_gp_seq_end(struct rcu_state *rsp)
 {
 	rcu_seq_end(&rsp->expedited_sequence);
+	smp_mb(); /* Ensure that consecutive grace periods serialize. */
 }
 static unsigned long rcu_exp_gp_seq_snap(struct rcu_state *rsp)
 {
@@ -3431,7 +3432,6 @@ void synchronize_sched_expedited(void)
 	rcu_exp_gp_seq_end(rsp);
 	mutex_unlock(&rnp->exp_funnel_mutex);
-	smp_mb(); /* ensure subsequent action seen after grace period. */
 	put_online_cpus();
 }


@@ -735,7 +735,6 @@ void synchronize_rcu_expedited(void)
 	/* Clean up and exit. */
 	rcu_exp_gp_seq_end(rsp);
 	mutex_unlock(&rnp_unlock->exp_funnel_mutex);
-	smp_mb(); /* ensure subsequent action seen after grace period. */
 }
 EXPORT_SYMBOL_GPL(synchronize_rcu_expedited);