tech-net archive
Re: RFC: gif(4) MP-ify
> Date: Wed, 6 Jan 2016 14:19:01 +0900
> From: Kengo NAKAHARA <k-nakahara%iij.ad.jp@localhost>
>
> On 2016/01/06 3:40, Taylor R Campbell wrote:
> > There are a couple other things I'm concerned about in the patch -- I
> > notice you're doing softint_establish with a lock held. Maybe that's
> > OK, but it makes me a little nervous. I don't have time right now but
> > I would like to take a closer look in the next few days at some of the
> > changes like that that your patch makes.
>
> It works in my environment for now, but I see your point.
> Thank you for your help. I'm looking forward to your review.
I took a closer look at your patch, and at your fix for PR 50522. I'm
concerned about a few things:
- missing memory barriers in handling the reference counts,
- (future) performance impact of atomics in gifintr and gif_output,
and
- global invariants for the gif list, and overcomplexity of the
locking scheme.
In particular:
- The following fragment of gifintr probably needs a memory barrier on
non-x86 platforms:
	atomic_inc_uint(&sc->gif_si_refs);
	if (sc->gif_pdst == NULL || sc->gif_psrc == NULL) {
		...
	}
Otherwise, gifintr on CPU 0 might load nonnull values from
sc->gif_pdst and sc->gif_psrc before gif_set_tunnel on CPU 1 sees
positive sc->gif_si_refs. I'm a little unclear on the rest of the
protocol -- but I suspect it can be done without atomics or mutexes
anyway in the packet-processing path.
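For example (a sketch only, not tested -- since the ordering needed
here is store-before-load on both sides, I'd use the full barrier
membar_sync(3) from <sys/atomic.h>):

	/* gifintr, CPU 0: publish the reference before loading the
	 * tunnel pointers. */
	atomic_inc_uint(&sc->gif_si_refs);
	membar_sync();
	if (sc->gif_pdst == NULL || sc->gif_psrc == NULL) {
		...
	}

	/* gif_set_tunnel, CPU 1: clear the tunnel pointers before
	 * loading the reference count. */
	sc->gif_pdst = NULL;
	sc->gif_psrc = NULL;
	membar_sync();
	while (sc->gif_si_refs > 0)
		...;	/* wait for gifintr to drain */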
- Atomics are expensive interprocessor synchronization operations, so
we should avoid them where unnecessary for MP-scalability --
particularly in the packet-processing path. In this case I think it
shouldn't be too hard to avoid: we simply need to prevent calls to
gif_output before we do softint_disestablish.
- With your proposed patch, if CPU 0 and CPU 1 both run gif_set_tunnel
with the same address, then they might
(a) first find no existing gif(4) having that address;
(b) next configure gif0 and gif1 with the same address; and then
(c) finally leave the system in a state where gif0 and gif1 have the
same address,
which violates the global invariant that gif_set_tunnel tries to
enforce. So we really need (a), (b), and (c) to happen atomically.
I think the following protocol should be simpler and safer:
- One global interruptible lock for the system's gif(4) configuration.
- No per-gif(4) locks at all.
- Split gif_encap_detach (and in/in6_gif_detach) into two parts
(sketched after these lists):
. gif_encap_pause: prevent further use of gif_output (encap_detach)
. gif_encap_detach: destroy data structures associated with it (rtcache_free)
- Make gif_set_tunnel do:
1. acquire global lock
2. check invariant -- reject duplicate gif src/dst addresses
3. gif_encap_pause
4. softint_disestablish
5. gif_encap_detach, sockaddr_free
6. gif_encap_attach
7. softint_establish
8. release global lock
- Make gif_delete_tunnel do:
1. acquire global lock
2. gif_encap_pause
3. softint_disestablish
4. gif_encap_detach, sockaddr_free
5. release global lock
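To make the pause/detach split concrete, here is a rough sketch. The
encap cookies are the fields I remember in gif_softc; the route cache
name sc->gif_ro is a guess:

	static void
	gif_encap_pause(struct gif_softc *sc)
	{

		/* Unhook from the encap framework so that, per the
		 * split above, further use of gif_output and the
		 * input path is prevented. */
		if (sc->encap_cookie4 != NULL) {
			(void)encap_detach(sc->encap_cookie4);
			sc->encap_cookie4 = NULL;
		}
		if (sc->encap_cookie6 != NULL) {
			(void)encap_detach(sc->encap_cookie6);
			sc->encap_cookie6 = NULL;
		}
	}

	static void
	gif_encap_detach(struct gif_softc *sc)
	{

		/* Safe to destroy: we are called only after
		 * gif_encap_pause and softint_disestablish, so
		 * nothing can be using the route cache any more. */
		rtcache_free(&sc->gif_ro);
	}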
The global lock, for the moment, can just be KERNEL_LOCK: it is
acquired only during ifconfig(8), not in the packet-processing path.
Or you could make an interruptible lock out of a mutex, condvar, and
flag. It should be interruptible so that if `ifconfig gif0 tunnel
...' hangs for any reason, then another process trying to do the same
can handle ^C.
Here's a quick sketch of an interruptible lock:
static struct {
	kmutex_t lock;
	kcondvar_t cv;
	struct lwp *busy;	/* lwp holding the config lock, or NULL */
	LIST_HEAD(, gif_softc) head; /* protected by gif_list_enter/exit */
} gif_list __cacheline_aligned;
/*
 * Acquire exclusive access to the gif configuration, or fail with an
 * error if interrupted by a signal.
 */
static int
gif_list_enter(void)
{
	int error;

	mutex_enter(&gif_list.lock);
	while (gif_list.busy != NULL) {
		error = cv_wait_sig(&gif_list.cv, &gif_list.lock);
		if (error) {
			mutex_exit(&gif_list.lock);
			return error;
		}
	}
	KASSERT(gif_list.busy == NULL);
	gif_list.busy = curlwp;
	mutex_exit(&gif_list.lock);
	return 0;
}

/*
 * Release exclusive access and wake anyone waiting in gif_list_enter.
 */
static void
gif_list_exit(void)
{

	mutex_enter(&gif_list.lock);
	KASSERT(gif_list.busy == curlwp);
	gif_list.busy = NULL;
	cv_broadcast(&gif_list.cv);
	mutex_exit(&gif_list.lock);
}
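And here is roughly how gif_set_tunnel would use it, following steps
1-8 above. Just a sketch: the list entry name gif_list and
gif_encap_attach's exact signature are guesses; sockaddr_dup,
sockaddr_cmp, and sockaddr_free are the usual sockaddr(9) routines:

static int
gif_set_tunnel(struct ifnet *ifp, struct sockaddr *src,
    struct sockaddr *dst)
{
	struct gif_softc *sc = ifp->if_softc;
	struct gif_softc *sc2;
	int error;

	/* 1. Acquire the global lock, interruptibly. */
	error = gif_list_enter();
	if (error)
		return error;

	/* 2. Check the invariant: no other gif may have the same
	 * src/dst pair. */
	LIST_FOREACH(sc2, &gif_list.head, gif_list) {
		if (sc2 == sc || sc2->gif_pdst == NULL ||
		    sc2->gif_psrc == NULL)
			continue;
		if (sockaddr_cmp(sc2->gif_pdst, dst) == 0 &&
		    sockaddr_cmp(sc2->gif_psrc, src) == 0) {
			error = EADDRNOTAVAIL;
			goto out;
		}
	}

	/* 3.-5. Quiesce the old tunnel, then tear it down. */
	gif_encap_pause(sc);
	if (sc->gif_si != NULL) {
		softint_disestablish(sc->gif_si);
		sc->gif_si = NULL;
	}
	gif_encap_detach(sc);
	if (sc->gif_psrc != NULL)
		sockaddr_free(sc->gif_psrc);
	if (sc->gif_pdst != NULL)
		sockaddr_free(sc->gif_pdst);

	/* 6.-7. Set up the new tunnel. */
	sc->gif_psrc = sockaddr_dup(src, M_WAITOK);
	sc->gif_pdst = sockaddr_dup(dst, M_WAITOK);
	error = gif_encap_attach(sc);
	if (error == 0)
		sc->gif_si = softint_establish(SOFTINT_NET, gifintr, sc);

out:	/* 8. Release the global lock. */
	gif_list_exit();
	return error;
}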