NetBSD-Bugs archive
port-sparc/41372: inline functions writing psr need __insn_barrier()
>Number: 41372
>Category: port-sparc
>Synopsis: inline functions writing psr need __insn_barrier()
>Confidential: no
>Severity: serious
>Priority: medium
>Responsible: port-sparc-maintainer
>State: open
>Class: sw-bug
>Submitter-Id: net
>Arrival-Date: Wed May 06 20:30:01 +0000 2009
>Originator: Manuel Bouyer
>Release: NetBSD 5.0
>Organization:
>Environment:
5.0_STABLE sources
Architecture: sparc
Machine: sparc
>Description:
If the splfoo() functions are inline functions or macros, the compiler
may wrongly optimise code around them. An example is softint_schedule(),
where the second check of SOFTINT_PENDING is optimised out. See PR
kern/38637 for details.
To avoid this, splx(), splraiseipl(), spl0() and, I guess,
setpsr() should act as compiler barriers;
a __insn_barrier() will do it.
>How-To-Repeat:
code inspection
disassembly of softint_schedule(). See also kern/38637.
>Fix:
The patch below should fix it (I have confirmed that the
softint_schedule() disassembly is correct with this patch). A better
way may be to add a "memory" clobber to the inline assembly
statements themselves, but I'm not familiar enough with sparc
assembly to fix it this way.
Index: include/psl.h
===================================================================
RCS file: /cvsroot/src/sys/arch/sparc/include/psl.h,v
retrieving revision 1.44
diff -u -p -r1.44 psl.h
--- include/psl.h 19 Feb 2007 02:57:40 -0000 1.44
+++ include/psl.h 6 May 2009 20:23:24 -0000
@@ -254,6 +254,7 @@ setpsr(int newpsr)
{
__asm volatile("wr %0,0,%%psr" : : "r" (newpsr));
__asm volatile("nop; nop; nop");
+ __insn_barrier();
}
static __inline void
@@ -261,6 +262,7 @@ spl0(void)
{
int psr, oldipl;
+ __insn_barrier();
/*
* wrpsr xors two values: we choose old psr and old ipl here,
* which gives us the same value as the old psr but with all
@@ -286,11 +288,13 @@ spl0(void)
static __inline void name(void) \
{ \
int psr; \
+ __insn_barrier(); \
__asm volatile("rd %%psr,%0" : "=r" (psr)); \
psr &= ~PSR_PIL; \
__asm volatile("wr %0,%1,%%psr" : : \
"r" (psr), "n" ((newipl) << 8)); \
__asm volatile("nop; nop; nop"); \
+ __insn_barrier(); \
}
_SPLSET(spllowerschedclock, IPL_SCHED)
@@ -318,14 +322,16 @@ splraiseipl(ipl_cookie_t icookie)
oldipl = psr & PSR_PIL;
newipl <<= 8;
- if (newipl <= oldipl)
+ if (newipl <= oldipl) {
return (oldipl);
+ }
psr = (psr & ~oldipl) | newipl;
__asm volatile("wr %0,0,%%psr" : : "r" (psr));
__asm volatile("nop; nop; nop");
+ __insn_barrier();
return (oldipl);
}
@@ -344,7 +350,8 @@ static __inline void
splx(int newipl)
{
int psr;
-
+
+ __insn_barrier();
__asm volatile("rd %%psr,%0" : "=r" (psr));
__asm volatile("wr %0,%1,%%psr" : : \
"r" (psr & ~PSR_PIL), "rn" (newipl));