Source-Changes-HG archive
[src/netbsd-7-0]: src/sys/kern Pull up following revision(s) (requested by oz...
details: https://anonhg.NetBSD.org/src/rev/ec5152e6b2bd
branches: netbsd-7-0
changeset: 801250:ec5152e6b2bd
user: snj <snj%NetBSD.org@localhost>
date: Mon Dec 12 07:30:20 2016 +0000
description:
Pull up following revision(s) (requested by ozaki-r in ticket #1306):
sys/kern/subr_xcall.c: revision 1.19
Fix a race condition of low priority xcall
xc_lowpri and xc_thread are racy: xc_wait may return before (or while)
all xcall callbacks execute, resulting in a kernel panic at worst.
xc_lowpri serializes multiple jobs with a mutex and a condition variable.
Once all xcall callbacks are done, xc_wait returns and xc_lowpri accepts the next job.
The problem is that the counter of finished xcall callbacks is incremented
*before* the xcall callback actually executes (see xc_tailp++ in xc_thread).
So xc_lowpri accepts the next job before all xcall callbacks of the current
job complete, and the next job begins running its own xcall callbacks.
Even worse, the counter is global and shared between jobs, so when an xcall
callback of the next job completes, the shared counter is incremented,
which makes xc_wait of the previous job conclude that all of its xcall
callbacks are done; xc_wait of the previous job then returns before (or
while) its xcall callbacks execute.
How to fix: there are actually two counters that track the number of finished
xcall callbacks for low priority xcall, presumably for historical reasons:
xc_tailp and xc_low_pri.xc_donep. xc_low_pri.xc_donep is incremented correctly
(after a callback runs), while xc_tailp is incremented wrongly, i.e., before
executing an xcall callback.
We can fix the issue by dropping xc_tailp and using only xc_low_pri.xc_donep.
PR kern/51632
diffstat:
sys/kern/subr_xcall.c | 13 +++++--------
1 files changed, 5 insertions(+), 8 deletions(-)
diffs (69 lines):
diff -r d28aa8216636 -r ec5152e6b2bd sys/kern/subr_xcall.c
--- a/sys/kern/subr_xcall.c Mon Dec 12 07:25:16 2016 +0000
+++ b/sys/kern/subr_xcall.c Mon Dec 12 07:30:20 2016 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: subr_xcall.c,v 1.18 2013/11/26 21:13:05 rmind Exp $ */
+/* $NetBSD: subr_xcall.c,v 1.18.8.1 2016/12/12 07:30:20 snj Exp $ */
/*-
* Copyright (c) 2007-2010 The NetBSD Foundation, Inc.
@@ -74,7 +74,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: subr_xcall.c,v 1.18 2013/11/26 21:13:05 rmind Exp $");
+__KERNEL_RCSID(0, "$NetBSD: subr_xcall.c,v 1.18.8.1 2016/12/12 07:30:20 snj Exp $");
#include <sys/types.h>
#include <sys/param.h>
@@ -105,7 +105,6 @@
/* Low priority xcall structures. */
static xc_state_t xc_low_pri __cacheline_aligned;
-static uint64_t xc_tailp __cacheline_aligned;
/* High priority xcall structures. */
static xc_state_t xc_high_pri __cacheline_aligned;
@@ -134,7 +133,6 @@
memset(xclo, 0, sizeof(xc_state_t));
mutex_init(&xclo->xc_lock, MUTEX_DEFAULT, IPL_NONE);
cv_init(&xclo->xc_busy, "xclocv");
- xc_tailp = 0;
memset(xchi, 0, sizeof(xc_state_t));
mutex_init(&xchi->xc_lock, MUTEX_DEFAULT, IPL_SOFTSERIAL);
@@ -256,7 +254,7 @@
uint64_t where;
mutex_enter(&xc->xc_lock);
- while (xc->xc_headp != xc_tailp) {
+ while (xc->xc_headp != xc->xc_donep) {
cv_wait(&xc->xc_busy, &xc->xc_lock);
}
xc->xc_arg1 = arg1;
@@ -277,7 +275,7 @@
ci->ci_data.cpu_xcall_pending = true;
cv_signal(&ci->ci_data.cpu_xcall);
}
- KASSERT(xc_tailp < xc->xc_headp);
+ KASSERT(xc->xc_donep < xc->xc_headp);
where = xc->xc_headp;
mutex_exit(&xc->xc_lock);
@@ -302,7 +300,7 @@
mutex_enter(&xc->xc_lock);
for (;;) {
while (!ci->ci_data.cpu_xcall_pending) {
- if (xc->xc_headp == xc_tailp) {
+ if (xc->xc_headp == xc->xc_donep) {
cv_broadcast(&xc->xc_busy);
}
cv_wait(&ci->ci_data.cpu_xcall, &xc->xc_lock);
@@ -312,7 +310,6 @@
func = xc->xc_func;
arg1 = xc->xc_arg1;
arg2 = xc->xc_arg2;
- xc_tailp++;
mutex_exit(&xc->xc_lock);
KASSERT(func != NULL);