Source-Changes-HG archive
[src/netbsd-9]: src/sys/kern Pull up following revision(s) (requested by rias...
details: https://anonhg.NetBSD.org/src/rev/41964125243a
branches: netbsd-9
changeset: 1002137:41964125243a
user: martin <martin%NetBSD.org@localhost>
date: Mon Jan 25 14:12:50 2021 +0000
description:
Pull up following revision(s) (requested by riastradh in ticket #1187):
sys/kern/kern_threadpool.c: revision 1.23
threadpool(9): Fix synchronization between cancel and dispatch.
- threadpool_cancel_job_async tried to prevent
threadpool_dispatcher_thread from taking the job by setting
job->job_thread = NULL and then removing the job from the queue.
- But threadpool_dispatcher_thread removed the job from the queue
before taking the job lock, so it didn't notice job->job_thread was
null until after it had already removed the job from the queue =>
double-remove, *boom*.
The solution is to teach threadpool_dispatcher_thread to wait until
it has acquired the job lock to test whether job->job_thread is still
valid before it decides to remove the job from the queue.
Fixes PR kern/55948.
XXX pullup-9
diffstat:
sys/kern/kern_threadpool.c | 7 ++++---
1 files changed, 4 insertions(+), 3 deletions(-)
diffs (35 lines):
diff -r 8e81828fa666 -r 41964125243a sys/kern/kern_threadpool.c
--- a/sys/kern/kern_threadpool.c Sat Jan 23 13:01:59 2021 +0000
+++ b/sys/kern/kern_threadpool.c Mon Jan 25 14:12:50 2021 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: kern_threadpool.c,v 1.15 2019/01/17 10:18:52 hannken Exp $ */
+/* $NetBSD: kern_threadpool.c,v 1.15.6.1 2021/01/25 14:12:50 martin Exp $ */
/*-
* Copyright (c) 2014, 2018 The NetBSD Foundation, Inc.
@@ -81,7 +81,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_threadpool.c,v 1.15 2019/01/17 10:18:52 hannken Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_threadpool.c,v 1.15.6.1 2021/01/25 14:12:50 martin Exp $");
#include <sys/types.h>
#include <sys/param.h>
@@ -947,7 +947,7 @@
/* There are idle threads, so try giving one a job. */
struct threadpool_job *const job = TAILQ_FIRST(&pool->tp_jobs);
- TAILQ_REMOVE(&pool->tp_jobs, job, job_entry);
+
/*
* Take an extra reference on the job temporarily so that
* it won't disappear on us while we have both locks dropped.
@@ -959,6 +959,7 @@
/* If the job was cancelled, we'll no longer be its thread. */
if (__predict_true(job->job_thread == overseer)) {
mutex_spin_enter(&pool->tp_lock);
+ TAILQ_REMOVE(&pool->tp_jobs, job, job_entry);
if (__predict_false(
TAILQ_EMPTY(&pool->tp_idle_threads))) {
/*