Re: fast softint support for sparc64
>>> Eduardo Horvath <eeh%NetBSD.org@localhost> wrote
> On Fri, 17 Jun 2011, Takeshi Nakayama wrote:
>
> > Hi folks,
> >
> > I have implemented a fast softint support for sparc64 as attached.
> >
> > It works fine on my Ultra60 and Fire V100 in the last two weeks,
> > and fixes a periodic noise on uaudio while playing MP3s.
> >
> > Patch for the netbsd-5 branch is also available at:
> >
> > ftp://ftp.netbsd.org/pub/NetBSD/misc/nakayama/sparc64-fast-softint-5.diff
> >
> > Comment?
>
> A bit more context in the diff would be useful for the assembly code.
Ah, yes. I will add more comments where possible.
> @@ -1171,11 +1172,16 @@
>  	xor	%g7, WSTATE_KERN, %g3;	/* Are we on the user stack ? */ \
>  	\
>  	sra	%g5, 0, %g5;		/* Sign extend the damn thing */ \
> -	or	%g3, %g4, %g4;		/* Definitely not off the interrupt stack */ \
> +	orcc	%g3, %g4, %g0;		/* Definitely not off the interrupt stack */ \
>  	\
> -	movrz	%g4, %sp, %g6; \
> +	bz,a,pt	%xcc, 1f; \
> +	 mov	%sp, %g6; \
>  	\
> -	add	%g6, %g5, %g5;		/* Allocate a stack frame */ \
> +	sethi	%hi(CPUINFO_VA + CI_EINTSTACK), %g4; \
> +	ldx	[%g4 + %lo(CPUINFO_VA + CI_EINTSTACK)], %g4; \
> +	movrnz	%g4, %g4, %g6;		/* Use saved intr stack if exists */ \
>
> You have 3 dependent instructions here. See if you can spread them around
> a bit to improve instruction scheduling.
Free registers are restricted around there. I will think about it a bit more.
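The only improvement I can see so far is to start the sethi before the branch,
so that the ldx can issue as soon as the branch resolves.  Untested sketch, and
it assumes %g4 is dead on the branch-taken path, which I still have to verify:

	sra	%g5, 0, %g5;		/* Sign extend the damn thing */ \
	orcc	%g3, %g4, %g0;		/* Definitely not off the interrupt stack */ \
	sethi	%hi(CPUINFO_VA + CI_EINTSTACK), %g4; /* start the lookup early */ \
	bz,a,pt	%xcc, 1f; \
	 mov	%sp, %g6; \
	\
	ldx	[%g4 + %lo(CPUINFO_VA + CI_EINTSTACK)], %g4; \
	movrnz	%g4, %g4, %g6;		/* Use saved intr stack if exists */ \

The ldx -> movrnz dependency remains, though, since there is no spare register
to load into.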
> +/*
> + * Trampoline function that gets returned to by cpu_switchto() when
> + * an interrupt handler blocks.
> + *
> + * Arguments:
> + * o0 old lwp from cpu_switchto()
> + *
> + * from softint_fastintr():
> + * l0 CPUINFO_VA
> + * l6 saved ci_eintstack
> + * l7 saved ipl
> + */
> +softint_fastintr_ret:
> + /* re-adjust after mi_switch() */
> + ld [%l0 + CI_MTX_COUNT], %o1
> + inc %o1 ! ci_mtx_count++
> + st %o1, [%l0 + CI_MTX_COUNT]
> + membar #LoadLoad ! membar_exit()
>
> LoadLoad? All loads are completed before the next load? This doesn't
> sound right. What are you trying to sync here?
>
> + st %g0, [%o0 + L_CTXSWTCH] ! prev->l_ctxswtch = 0
This code is borrowed from mi_switch() in kern_synch.c:
http://nxr.netbsd.org/xref/src/sys/kern/kern_synch.c#797
and membar_exit() for sparc64 is "membar #LoadLoad":
http://nxr.netbsd.org/xref/src/common/lib/libc/arch/sparc64/atomic/membar_ops.S#42
Should it be "membar #StoreStore"?
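In the SPARC V9 ISA, #LoadLoad only orders earlier loads against later loads,
while #StoreStore is the one that orders earlier stores before a later store.
If the intent of membar_exit() here is that all the stores updating the old
lwp are visible before l_ctxswtch is cleared, the sequence would look like
this (untested sketch, and only if that reading of membar_exit() is right):

	membar	#StoreStore		! order prior stores before the flag store
	st	%g0, [%o0 + L_CTXSWTCH]	! prev->l_ctxswtch = 0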
> +
> + ld [%l0 + CI_IDEPTH], %l1
> + STPTR %l6, [%l0 + CI_EINTSTACK] ! restore ci_eintstack
> + inc %l1
> + st %l1, [%l0 + CI_IDEPTH] ! re-adjust ci_idepth
> + wrpr %g0, %l7, %pil ! restore ipl
> + ret
> + restore %g0, 1, %o0
> +
> +#endif /* __HAVE_FAST_SOFTINTS */
> +
> /*
> * Snapshot the current process so that stack frames are up to date.
> * Only used just before a crash dump.
Thanks,
Takeshi Nakayama