Subject: Small scheduler tweak for MP systems
To: None <tech-kern@netbsd.org>
From: Jason R Thorpe <thorpej@wasabisystems.com>
List: tech-kern
Date: 12/28/2002 18:51:30
--uAKRQypu60I7Lcqm
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
The following fixes a problem in the not-quite-right very-weak-affinity
handling in setrunnable(), and also encapsulates the operation in a
new inline so that resetpriority() can share the same code.

I realize the affinity handling is still not right, but this at least
improves the situation (compare the test in the old code with the test
in the new to see what I mean :-)

I'm going to check this in after I test it some more, but thought I'd
throw it out there for discussion.
--
-- Jason R. Thorpe <thorpej@wasabisystems.com>
--uAKRQypu60I7Lcqm
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=resched-patch
Index: kern_synch.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_synch.c,v
retrieving revision 1.118
diff -c -r1.118 kern_synch.c
*** kern_synch.c 2002/12/29 02:08:39 1.118
--- kern_synch.c 2002/12/29 02:38:47
***************
*** 581,589 ****
/*
* Since curpriority is a user priority, p->p_priority
! * is always better than curpriority.
*
! * XXX See affinity comment in setrunnable().
*/
if (p->p_flag & P_INMEM) {
setrunqueue(p);
--- 581,590 ----
/*
* Since curpriority is a user priority, p->p_priority
! * is always better than curpriority on the last CPU on
! * which it ran.
*
! * XXXSMP See affinity comment in resched_proc().
*/
if (p->p_flag & P_INMEM) {
setrunqueue(p);
***************
*** 918,923 ****
--- 919,956 ----
(struct proc *)&sched_qs[i];
}
+ static __inline void
+ resched_proc(struct proc *p)
+ {
+ struct cpu_info *ci;
+
+ /*
+ * XXXSMP
+ * Since p->p_cpu persists across a context switch,
+ * this gives us *very weak* processor affinity, in
+ * that we notify the CPU on which the process last
+ * ran that it should try to switch.
+ *
+ * This does not guarantee that the process will run on
+ * that processor next, because another processor might
+ * grab it the next time it performs a context switch.
+ *
+ * This also does not handle the case where its last
+ * CPU is running a higher-priority process, but every
+ * other CPU is running a lower-priority process. There
+ * are ways to handle this situation, but they're not
+ * currently very pretty, and we also need to weigh the
+ * cost of moving a process from one CPU to another.
+ *
+ * XXXSMP
+ * There is also the issue of locking the other CPU's
+ * sched state, which we currently do not do.
+ */
+ ci = (p->p_cpu != NULL) ? p->p_cpu : curcpu();
+ if (p->p_priority < ci->ci_schedstate.spc_curpriority)
+ need_resched(ci);
+ }
+
/*
* Change process state to be runnable,
* placing it on the run queue if it is in memory,
***************
*** 962,978 ****
p->p_slptime = 0;
if ((p->p_flag & P_INMEM) == 0)
sched_wakeup((caddr_t)&proc0);
! else if (p->p_priority < curcpu()->ci_schedstate.spc_curpriority) {
! /*
! * XXXSMP
! * This is not exactly right. Since p->p_cpu persists
! * across a context switch, this gives us some sort
! * of processor affinity. But we need to figure out
! * at what point it's better to reschedule on a different
! * CPU than the last one.
! */
! need_resched((p->p_cpu != NULL) ? p->p_cpu : curcpu());
! }
}
/*
--- 995,1002 ----
p->p_slptime = 0;
if ((p->p_flag & P_INMEM) == 0)
sched_wakeup((caddr_t)&proc0);
! else
! resched_proc(p);
}
/*
***************
*** 990,1002 ****
newpriority = PUSER + p->p_estcpu + NICE_WEIGHT * (p->p_nice - NZERO);
newpriority = min(newpriority, MAXPRI);
p->p_usrpri = newpriority;
! if (newpriority < curcpu()->ci_schedstate.spc_curpriority) {
! /*
! * XXXSMP
! * Same applies as in setrunnable() above.
! */
! need_resched((p->p_cpu != NULL) ? p->p_cpu : curcpu());
! }
}
/*
--- 1014,1020 ----
newpriority = PUSER + p->p_estcpu + NICE_WEIGHT * (p->p_nice - NZERO);
newpriority = min(newpriority, MAXPRI);
p->p_usrpri = newpriority;
! resched_proc(p);
}
/*
--uAKRQypu60I7Lcqm--