Source-Changes-HG archive
[src/trunk]: src/sys/dev/dkwedge dk(4): Fix callout detach race.
details: https://anonhg.NetBSD.org/src/rev/79e523a67310
branches: trunk
changeset: 374388:79e523a67310
user: riastradh <riastradh%NetBSD.org@localhost>
date: Fri Apr 21 18:30:32 2023 +0000
description:
dk(4): Fix callout detach race.
1. Set a flag sc_iostop under the lock sc_iolock so dkwedge_detach
and dkstart don't race over it.
2. Decline to schedule the callout if sc_iostop is set. The callout
is already only ever scheduled while the lock is held.
3. Use callout_halt to wait for any concurrent callout to complete.
At this point, it can't reschedule itself.
Without this change, the callout could be concurrently rescheduling
itself as we issue callout_stop, leading to use-after-free later.
diffstat:
sys/dev/dkwedge/dk.c | 15 ++++++++++-----
1 files changed, 10 insertions(+), 5 deletions(-)
diffs (57 lines):
diff -r 9769c16662c4 -r 79e523a67310 sys/dev/dkwedge/dk.c
--- a/sys/dev/dkwedge/dk.c Fri Apr 21 18:30:21 2023 +0000
+++ b/sys/dev/dkwedge/dk.c Fri Apr 21 18:30:32 2023 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: dk.c,v 1.141 2023/04/21 18:30:21 riastradh Exp $ */
+/* $NetBSD: dk.c,v 1.142 2023/04/21 18:30:32 riastradh Exp $ */
/*-
* Copyright (c) 2004, 2005, 2006, 2007 The NetBSD Foundation, Inc.
@@ -30,7 +30,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: dk.c,v 1.141 2023/04/21 18:30:21 riastradh Exp $");
+__KERNEL_RCSID(0, "$NetBSD: dk.c,v 1.142 2023/04/21 18:30:32 riastradh Exp $");
#ifdef _KERNEL_OPT
#include "opt_dkwedge.h"
@@ -93,6 +93,7 @@ struct dkwedge_softc {
kmutex_t sc_iolock;
kcondvar_t sc_dkdrn;
u_int sc_iopend; /* I/Os pending */
+ bool sc_iostop; /* don't schedule restart */
int sc_mode; /* parent open mode */
};
@@ -724,7 +725,10 @@ dkwedge_detach(device_t self, int flags)
cmaj = cdevsw_lookup_major(&dk_cdevsw);
/* Kill any pending restart. */
- callout_stop(&sc->sc_restart_ch);
+ mutex_enter(&sc->sc_iolock);
+ sc->sc_iostop = true;
+ mutex_exit(&sc->sc_iolock);
+ callout_halt(&sc->sc_restart_ch, NULL);
/*
* dkstart() will kill any queued buffers now that the
@@ -1473,7 +1477,7 @@ dkstart(struct dkwedge_softc *sc)
/* Do as much work as has been enqueued. */
while ((bp = bufq_peek(sc->sc_bufq)) != NULL) {
- if (sc->sc_state != DKW_STATE_RUNNING) {
+ if (sc->sc_iostop) {
(void) bufq_get(sc->sc_bufq);
if (--sc->sc_iopend == 0)
cv_broadcast(&sc->sc_dkdrn);
@@ -1495,7 +1499,8 @@ dkstart(struct dkwedge_softc *sc)
* buffer queued up, and schedule a timer to
* restart the queue in 1/2 a second.
*/
- callout_schedule(&sc->sc_restart_ch, hz/2);
+ if (!sc->sc_iostop)
+ callout_schedule(&sc->sc_restart_ch, hz/2);
break;
}