tech-kern archive
Re: Deadlock on fragmented memory?
> Date: Mon, 23 Oct 2017 02:57:19 +0000
> From: Taylor R Campbell <campbell+netbsd-tech-kern%mumble.net@localhost>
>
> > Date: Sun, 22 Oct 2017 22:32:40 +0200
> > From: Manuel Bouyer <bouyer%antioche.eu.org@localhost>
> >
> > With a pullup of kern_exec.c 1.448-1.449, to netbsd-6, we're still seeing
> > hangs on vmem.
>
> chuq inspired me to reexamine an idea I mentioned in passing a couple
> of times: allow each pool to have only one call to the backing
> allocator active at any given time. It doesn't solve the root of the
> problem, but it may mitigate the fragmentation damage from bursts of
> load, and it's easy to cook up a patch!
And paulg inspired me to write a patch that actually does something,
namely *setting* the PR_GROWING flag, instead of merely testing an
always-clear PR_GROWING flag.
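The single-grower gate the patch below adds to pool_grow() can be sketched
in userland terms, with pthreads standing in for the kernel mutex/condvar
and a plain bool standing in for PR_GROWING. This is just an illustration
of the pattern, not the kernel code; "struct gate", "gate_enter", and
"gate_leave" are made-up names:

```c
#include <errno.h>
#include <pthread.h>
#include <stdbool.h>

/* Hypothetical stand-in for struct pool: only what the gate needs. */
struct gate {
	pthread_mutex_t	lock;		/* analogue of pr_lock */
	pthread_cond_t	cv;		/* analogue of pr_cv */
	bool		growing;	/* analogue of PR_GROWING */
};

/*
 * Enter the grow path.  Returns 0 if the caller is now the sole
 * grower, ERESTART if it waited out another grower and must retry
 * from the top, or EWOULDBLOCK if it cannot wait.  Must be called
 * with gate->lock held.
 */
int
gate_enter(struct gate *g, bool canwait)
{

	if (g->growing) {
		if (!canwait)
			return EWOULDBLOCK;
		do {
			pthread_cond_wait(&g->cv, &g->lock);
		} while (g->growing);
		return ERESTART;
	}
	g->growing = true;
	return 0;
}

/* Leave the grow path and wake any waiters.  Lock must be held. */
void
gate_leave(struct gate *g)
{

	g->growing = false;
	pthread_cond_broadcast(&g->cv);
}
```

The point of returning ERESTART rather than 0 after the wait is that any
free-list state the caller observed before sleeping is stale, so the only
safe thing is to re-examine the pool from the top.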
Index: sys/sys/pool.h
===================================================================
RCS file: /cvsroot/src/sys/sys/pool.h,v
retrieving revision 1.79
diff -p -u -r1.79 pool.h
--- sys/sys/pool.h 29 Jul 2015 00:10:25 -0000 1.79
+++ sys/sys/pool.h 23 Oct 2017 03:26:12 -0000
@@ -147,6 +147,7 @@ struct pool {
#define PR_NOTOUCH 0x400 /* don't use free items to keep internal state*/
#define PR_NOALIGN 0x800 /* don't assume backend alignment */
#define PR_LARGECACHE 0x1000 /* use large cache groups */
+#define PR_GROWING 0x2000 /* pool_grow in progress */
/*
* `pr_lock' protects the pool's data structures when removing
Index: sys/kern/subr_pool.c
===================================================================
RCS file: /cvsroot/src/sys/kern/subr_pool.c,v
retrieving revision 1.208
diff -p -u -r1.208 subr_pool.c
--- sys/kern/subr_pool.c 8 Jun 2017 04:00:01 -0000 1.208
+++ sys/kern/subr_pool.c 23 Oct 2017 03:26:12 -0000
@@ -1052,6 +1052,22 @@ pool_grow(struct pool *pp, int flags)
struct pool_item_header *ph = NULL;
char *cp;
+ /*
+ * If there's a pool_grow in progress, wait for it to complete
+ * and try again from the top.
+ */
+ if (pp->pr_flags & PR_GROWING) {
+ if (flags & PR_WAITOK) {
+ do {
+ cv_wait(&pp->pr_cv, &pp->pr_lock);
+ } while (pp->pr_flags & PR_GROWING);
+ return ERESTART;
+ } else {
+ return EWOULDBLOCK;
+ }
+ }
+ pp->pr_flags |= PR_GROWING;
+
mutex_exit(&pp->pr_lock);
cp = pool_allocator_alloc(pp, flags);
if (__predict_true(cp != NULL)) {
@@ -1066,6 +1082,8 @@ pool_grow(struct pool *pp, int flags)
}
mutex_enter(&pp->pr_lock);
+ KASSERT(pp->pr_flags & PR_GROWING);
+ pp->pr_flags &= ~PR_GROWING;
pool_prime_page(pp, cp, ph);
pp->pr_npagealloc++;
return 0;
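For what the ERESTART return means to a caller, here is a toy model of a
pool_get()-style loop consuming it. Everything here is hypothetical
(fake_pool_get, fake_pool_grow, the counters); the real pool_get() in
subr_pool.c is more involved, but the shape of the retry is the same:

```c
#include <errno.h>

/* Toy stand-in for the pool: a counter of free items. */
static int nitems;
static int grows_in_flight;	/* models PR_GROWING held by another thread */

static int
fake_pool_grow(void)
{

	if (grows_in_flight > 0) {
		/* Another grower was active; pretend we slept it out. */
		grows_in_flight--;
		nitems++;	/* the other grower primed a page */
		return ERESTART;	/* caller re-checks from the top */
	}
	nitems++;		/* we grew the pool ourselves */
	return 0;
}

/* Simplified pool_get(): take an item, growing the pool as needed. */
static int
fake_pool_get(void)
{
	int error;

	for (;;) {
		if (nitems > 0) {
			nitems--;
			return 0;
		}
		error = fake_pool_grow();
		if (error != 0 && error != ERESTART)
			return error;
		/* On 0 or ERESTART, loop back and re-check the free list. */
	}
}
```

Either way the caller loops: whether it grew the pool itself (0) or
waited for somebody else's grow (ERESTART), it re-checks the free list
rather than assuming an item was left for it.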