author		Linus Torvalds <torvalds@woody.linux-foundation.org>	2008-02-23 18:05:03 -0800
committer	Linus Torvalds <torvalds@woody.linux-foundation.org>	2008-02-23 18:05:03 -0800
commit		04e2f1741d235ba599037734878d72e57cb302b5 (patch)
tree		5d4b05f9645370a40ce436aa6da18dc0b885d19c
parent		0a3716eb04ccfdbef6e872a343ba7ce309237e79 (diff)
Add memory barrier semantics to wake_up() & co
Oleg Nesterov and others have pointed out that on some architectures,
the traditional sequence of

	set_current_state(TASK_INTERRUPTIBLE);
	if (CONDITION)
		return;
	schedule();

is racy wrt another CPU doing

	CONDITION = 1;
	wake_up_process(p);
because while set_current_state() has a memory barrier separating
setting of the TASK_INTERRUPTIBLE state from reading of the CONDITION
variable, there is no such memory barrier on the wakeup side.
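
To make the race concrete, here is a minimal sketch of the two sides in
kernel-style C; the names my_condition, sleeper, wait_for_condition()
and signal_condition() are illustrative, not part of the patch:

	static int my_condition;		/* CONDITION */
	static struct task_struct *sleeper;	/* the waiting task */

	/* Sleeping side: set_current_state() implies a full memory
	 * barrier, so the state write is ordered before the read of
	 * my_condition. */
	static void wait_for_condition(void)
	{
		set_current_state(TASK_INTERRUPTIBLE);
		if (my_condition) {
			__set_current_state(TASK_RUNNING);
			return;
		}
		schedule();
	}

	/* Waking side: before this patch, nothing ordered the write of
	 * my_condition against the read of sleeper->state done inside
	 * wake_up_process(). */
	static void signal_condition(void)
	{
		my_condition = 1;
		wake_up_process(sleeper);
	}
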
Now, wake_up_process() does actually take a spinlock before it reads and
sets the task state on the waking side, and on x86 (and many other
architectures) that spinlock happens to be equivalent to a full memory
barrier, but that is not guaranteed in general: acquiring a spinlock
only keeps accesses inside the critical region from leaking out, it
does not keep earlier accesses from leaking in. The write that sets
CONDITION could thus move into the critical region protected by the
runqueue spinlock and end up ordered after the read of the old task
state, so the waker may see the task still running while the sleeper
sees CONDITION still clear - a missed wakeup.
However, adding an smp_wmb() before taking the spinlock should order the
write of CONDITION wrt the lock write itself, which in turn is ordered
wrt the accesses within the critical region (which include the reading
of the old task state).
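
Schematically, the ordering on the waking side after the patch is as
follows (a simplified view of the relevant steps of try_to_wake_up(),
not the full function):

	CONDITION = 1;			/* the caller's write */
	smp_wmb();			/* orders the write above ... */
	rq = task_rq_lock(p, &flags);	/* ... before the lock write */
	old_state = p->state;		/* read within the critical region,
					 * so ordered wrt the lock and hence
					 * wrt the CONDITION write */
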
This should thus close the race (which has probably never been seen in
practice; and since smp_wmb() is a no-op on x86, it will not make
anything worse on the most common architecture, where the spinlock
already gave the required protection).
Acked-by: Oleg Nesterov <oleg@tv-sign.ru>
Acked-by: Dmitry Adamushko <dmitry.adamushko@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Nick Piggin <nickpiggin@yahoo.com.au>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r--	kernel/sched.c	1
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index c4bc8c21095..b387a8de26a 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1831,6 +1831,7 @@ static int try_to_wake_up(struct task_struct *p, unsigned int state, int sync)
 	long old_state;
 	struct rq *rq;
 
+	smp_wmb();
 	rq = task_rq_lock(p, &flags);
 	old_state = p->state;
 	if (!(old_state & state))