x86/nmi/64: Reorder nested NMI checks
Check the repeat_nmi .. end_repeat_nmi special case first. The next
patch will rework the RSP check and, as a side effect, the RSP check
will no longer detect repeat_nmi .. end_repeat_nmi, so we'll need this
ordering of the checks.

Note: this is more subtle than it appears. The check for
repeat_nmi .. end_repeat_nmi jumps straight out of the NMI code instead
of adjusting the "iret" frame to force a repeat. This is necessary,
because the code between repeat_nmi and end_repeat_nmi sets "NMI
executing" and then writes to the "iret" frame itself. If a nested NMI
comes in and modifies the "iret" frame while repeat_nmi is also
modifying it, we'll end up with garbage. The old code got this right,
as does the new code, but the new code is a bit more explicit.

If we were to move the check right after the "NMI executing" check,
then we'd get it wrong and have random crashes.

( Because the "NMI executing" check would jump to the code that would
  modify the "iret" frame without checking if the interrupted NMI was
  currently modifying it. )

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Reviewed-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Borislav Petkov <bp@suse.de>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Parent: 0b22930eba
Commit: a27507ca2d
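Not part of the patch: a minimal C sketch of the triage order this commit establishes, under made-up names (classify_nmi, rip, nmi_executing, repeat_nmi_addr and end_repeat_nmi_addr are illustrative, not kernel symbols, and the real code also performs an NMI-stack RSP check that is omitted here). The point is only the ordering: the repeat_nmi .. end_repeat_nmi range test runs first and resumes the outer NMI without touching the "iret" frame; only then does the "NMI executing" test decide whether to force a repeat.

#include <stdbool.h>
#include <stdint.h>

enum nmi_action {
	NMI_RESUME_OUTER,	/* leave the "iret" frame alone; the outer NMI calls do_nmi */
	NMI_FORCE_REPEAT,	/* nested: rewrite the "iret" frame to point at repeat_nmi */
	NMI_NOT_NESTED,		/* ordinary first-level NMI */
};

/* Triage order after this patch: range check first, flag check second. */
static enum nmi_action classify_nmi(uint64_t rip, bool nmi_executing,
				    uint64_t repeat_nmi_addr,
				    uint64_t end_repeat_nmi_addr)
{
	/*
	 * The outer NMI is currently writing the "iret" frame between
	 * repeat_nmi and end_repeat_nmi, so we must not modify it.
	 */
	if (rip >= repeat_nmi_addr && rip < end_repeat_nmi_addr)
		return NMI_RESUME_OUTER;

	/* Safe to rewrite the frame: nobody else is touching it. */
	if (nmi_executing)
		return NMI_FORCE_REPEAT;

	return NMI_NOT_NESTED;
}

Reversing these two tests is exactly the "random crashes" case described in the message above: the flag check alone would rewrite an "iret" frame that the outer NMI is still in the middle of writing.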
@@ -1361,7 +1361,24 @@ ENTRY(nmi)
 	/*
 	 * Determine whether we're a nested NMI.
 	 *
-	 * First check "NMI executing".  If it's set, then we're nested.
+	 * If we interrupted kernel code between repeat_nmi and
+	 * end_repeat_nmi, then we are a nested NMI.  We must not
+	 * modify the "iret" frame because it's being written by
+	 * the outer NMI.  That's okay; the outer NMI handler is
+	 * about to about to call do_nmi anyway, so we can just
+	 * resume the outer NMI.
+	 */
+
+	movq	$repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	1f
+	movq	$end_repeat_nmi, %rdx
+	cmpq	8(%rsp), %rdx
+	ja	nested_nmi_out
+1:
+
+	/*
+	 * Now check "NMI executing".  If it's set, then we're nested.
 	 * This will not detect if we interrupted an outer NMI just
 	 * before IRET.
 	 */
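For readers decoding the AT&T-syntax compares in the hunk above: cmpq 8(%rsp), %rdx followed by ja branches when the constant loaded into %rdx is unsigned-above the interrupted RIP saved at 8(%rsp) of the "iret" frame. A hedged C rendering of the two compare-and-branch pairs (function and parameter names are illustrative, not kernel identifiers):

#include <stdbool.h>
#include <stdint.h>

/*
 * Mirrors the two cmpq/ja pairs above.  "ja" is an unsigned "above"
 * branch, so together they test repeat_nmi <= rip < end_repeat_nmi,
 * where rip is the interrupted instruction pointer at 8(%rsp).
 */
static bool rip_in_repeat_nmi_range(uint64_t rip,
				    uint64_t repeat_nmi_addr,
				    uint64_t end_repeat_nmi_addr)
{
	if (repeat_nmi_addr > rip)	/* "ja 1f": RIP is below the range */
		return false;
	if (end_repeat_nmi_addr > rip)	/* "ja nested_nmi_out": inside it */
		return true;
	return false;			/* at or past end_repeat_nmi */
}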
@@ -1386,21 +1403,6 @@ ENTRY(nmi)
 	/* Ah, it is within the NMI stack, treat it as nested */
 
 nested_nmi:
-	/*
-	 * If we interrupted an NMI that is between repeat_nmi and
-	 * end_repeat_nmi, then we must not modify the "iret" frame
-	 * because it's being written by the outer NMI.  That's okay;
-	 * the outer NMI handler is about to call do_nmi anyway,
-	 * so we can just resume the outer NMI.
-	 */
-	movq	$repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	1f
-	movq	$end_repeat_nmi, %rdx
-	cmpq	8(%rsp), %rdx
-	ja	nested_nmi_out
-
-1:
 	/*
 	 * Modify the "iret" frame to point to repeat_nmi, forcing another
 	 * iteration of NMI handling.