From 70a4228ad3c22ce449833a10a651c3ae25422d32 Mon Sep 17 00:00:00 2001
From: Thomas Gleixner <tglx@linutronix.de>
Date: Fri, 1 Mar 2013 11:17:42 +0100
Subject: [PATCH 168/352] futex: Ensure lock/unlock symmetry versus pi_lock and
 hash bucket lock

In exit_pi_state_list() we have the following locking construct:

   spin_lock(&hb->lock);
   raw_spin_lock_irq(&curr->pi_lock);

   ...
   spin_unlock(&hb->lock);

In !RT this works, but on RT the migrate_enable() function, which is
called from spin_unlock(), sees atomic context due to the held pi_lock
and just decrements the migrate_disable_atomic counter of the task.
The next call to migrate_disable() then sees the counter being
negative and issues a warning. That check should be in
migrate_enable() already.

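On RT the sequence above therefore amounts to the following (an
annotated sketch based purely on the description in this changelog,
not on the -rt implementation details):

   spin_lock(&hb->lock);              /* migrate_disable()                 */
   raw_spin_lock_irq(&curr->pi_lock); /* irqs off -> atomic context        */
   ...
   spin_unlock(&hb->lock);            /* migrate_enable() sees the atomic
                                         context and only decrements the
                                         migrate_disable_atomic counter;
                                         the next migrate_disable() then
                                         sees a negative counter and warns */
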
Fix this by dropping pi_lock before unlocking hb->lock and reacquiring
pi_lock afterwards. This is safe as the loop code reevaluates head
again under the pi_lock.

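With the change applied, the retry path becomes (sketch of the
resulting code, taken directly from the hunk further down):

   if (head->next != next) {
           raw_spin_unlock_irq(&curr->pi_lock);
           spin_unlock(&hb->lock);
           raw_spin_lock_irq(&curr->pi_lock);
           continue;
   }
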
Reported-by: Yong Zhang <yong.zhang@windriver.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
 kernel/futex.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/futex.c b/kernel/futex.c
index 601490b..ee718c3 100644
--- a/kernel/futex.c
+++ b/kernel/futex.c
@@ -910,7 +910,9 @@ void exit_pi_state_list(struct task_struct *curr)
 		 * task still owns the PI-state:
 		 */
 		if (head->next != next) {
+			raw_spin_unlock_irq(&curr->pi_lock);
 			spin_unlock(&hb->lock);
+			raw_spin_lock_irq(&curr->pi_lock);
 			continue;
 		}
 
--
2.7.4