Source-Changes-HG archive
[src/netbsd-6]: src Pull up revisions:
details: https://anonhg.NetBSD.org/src/rev/a04103996093
branches: netbsd-6
changeset: 774276:a04103996093
user: jdc <jdc%NetBSD.org@localhost>
date: Mon Jul 02 19:04:42 2012 +0000
description:
Pull up revisions:
src/sys/kern/subr_pool.c revision 1.196
src/share/man/man9/pool_cache.9 patch
(requested by jym in ticket #366).
As pool reclaiming is unlikely to happen at interrupt or softint
context, re-enable the portion of code that allows invalidation of
CPU-bound pool caches.
Two reasons:
- With CPU-cached objects invalidated, the probability of fetching an
obsolete object from the pool_cache(9) is greatly reduced. This speeds
up pool_cache_get() quite a bit, as it does not have to keep destroying
objects until it finds an up-to-date one while an invalidation is in progress.
- for situations where we have to ensure that no obsolete object remains
after a state transition (canonical example: pmap mappings across a Xen
VM restoration), invalidating all pool_cache(9) caches is the safest way to go.
As it uses xcall(9) to broadcast the execution of pool_cache_transfer(),
pool_cache_invalidate() cannot be called from interrupt or softint
context (scheduling an xcall(9) can put an LWP to sleep).
Rename pool_cache_xcall() => pool_cache_transfer() to reflect its use.
Invalidation being a costly process (1000s of objects may be destroyed),
all places where pool_cache_invalidate() may be called from
interrupt/softint context will now get caught by the proper KASSERT(),
and fixed. Ping me when you see one.
Tested under i386 and amd64 by running ATF suite within 64MiB HVM
domains (tried triggering pgdaemon a few times).
No objection on tech-kern@.
XXX a similar fix has to be pulled up to NetBSD-6, but with a more
conservative approach.
See http://mail-index.netbsd.org/tech-kern/2012/05/29/msg013245.html
diffstat:
share/man/man9/pool_cache.9 | 12 +++++++++++-
sys/kern/subr_pool.c | 33 +++++++++++----------------------
2 files changed, 22 insertions(+), 23 deletions(-)
diffs (77 lines):
diff -r aebfc7af5d7e -r a04103996093 share/man/man9/pool_cache.9
--- a/share/man/man9/pool_cache.9 Mon Jul 02 18:50:11 2012 +0000
+++ b/share/man/man9/pool_cache.9 Mon Jul 02 19:04:42 2012 +0000
@@ -1,4 +1,4 @@
-.\" $NetBSD: pool_cache.9,v 1.19 2011/11/15 00:32:34 jym Exp $
+.\" $NetBSD: pool_cache.9,v 1.19.2.1 2012/07/02 19:04:42 jdc Exp $
.\"
.\" Copyright (c)2003 YAMAMOTO Takashi,
.\" All rights reserved.
@@ -360,3 +360,13 @@
.Xr memoryallocators 9 ,
.Xr percpu 9 ,
.Xr pool 9
+.\" ------------------------------------------------------------
+.Sh CAVEATS
+.Fn pool_cache_invalidate
+only affects objects safely accessible by the local CPU.
+On multiprocessor systems this function should be called by each CPU to
+invalidate their local caches.
+See
+.Xr xcall 9
+for an interface to schedule the execution of arbitrary functions
+to any other CPU.
diff -r aebfc7af5d7e -r a04103996093 sys/kern/subr_pool.c
--- a/sys/kern/subr_pool.c Mon Jul 02 18:50:11 2012 +0000
+++ b/sys/kern/subr_pool.c Mon Jul 02 19:04:42 2012 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: subr_pool.c,v 1.194 2012/02/04 22:11:42 para Exp $ */
+/* $NetBSD: subr_pool.c,v 1.194.2.1 2012/07/02 19:04:42 jdc Exp $ */
/*-
* Copyright (c) 1997, 1999, 2000, 2002, 2007, 2008, 2010
@@ -32,7 +32,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.194 2012/02/04 22:11:42 para Exp $");
+__KERNEL_RCSID(0, "$NetBSD: subr_pool.c,v 1.194.2.1 2012/07/02 19:04:42 jdc Exp $");
#include "opt_ddb.h"
#include "opt_pool.h"
@@ -2245,26 +2245,15 @@
pool_cache_invalidate(pool_cache_t pc)
{
pcg_t *full, *empty, *part;
-#if 0
- uint64_t where;
-
- if (ncpu < 2 || !mp_online) {
- /*
- * We might be called early enough in the boot process
- * for the CPU data structures to not be fully initialized.
- * In this case, simply gather the local CPU's cache now
- * since it will be the only one running.
- */
- pool_cache_xcall(pc);
- } else {
- /*
- * Gather all of the CPU-specific caches into the
- * global cache.
- */
- where = xc_broadcast(0, (xcfunc_t)pool_cache_xcall, pc, NULL);
- xc_wait(where);
- }
-#endif
+
+ /*
+ * Transfer the content of the local CPU's cache back into global
+ * cache. Note that this does not handle objects cached for other CPUs.
+ * A xcall(9) must be scheduled to take care of them.
+ */
+ pool_cache_xcall(pc);
+
+ /* Invalidate the global cache. */
mutex_enter(&pc->pc_lock);
full = pc->pc_fullgroups;
empty = pc->pc_emptygroups;