Source-Changes-HG archive
[src/trunk]: src/sys/arch/xen/xen xennet(4): Membar audit.
details: https://anonhg.NetBSD.org/src/rev/a783c92a8885
branches: trunk
changeset: 373712:a783c92a8885
user: riastradh <riastradh%NetBSD.org@localhost>
date: Sat Feb 25 00:32:49 2023 +0000
description:
xennet(4): Membar audit.
- xennet_tx_complete: Other side owns rsp_prod, giving us responses
to tx commands. We own rsp_cons, recording which responses we've
processed already. (A consolidated sketch of this ordering follows
item 3 below.)
1. Other side initializes responses before advancing rsp_prod, so
we must observe rsp_prod before trying to examine the responses.
Hence load from rsp_prod must be followed by xen_rmb.
(Can this just use atomic_load_acquire?)
2. As soon as other side observes rsp_event, it may start to
overwrite now-unused response slots, so we must finish using the
response before advancing rsp_cons. Hence we must issue xen_wmb
before store to rsp_event.
(Can this just use atomic_store_release?)
(Should this use RING_FINAL_CHECK_FOR_RESPONSES?)
3. When loop is done and we set rsp_event, we must ensure the other
side has had a chance to see that we want more before we check
whether there is more to consume; otherwise the other side might
not bother to send us an interrupt. Hence after setting
rsp_event, we must issue xen_mb (store-before-load) before
re-checking rsp_prod.
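Taken together, items 1-3 amount to the consumer-side shape sketched
below. This is illustrative only, not the driver code: the struct
layouts and names (sring_sketch, front_ring_sketch, tx_complete_sketch)
are simplified stand-ins for the Xen shared-ring types, and
<sys/types.h> plus the xen_*mb definitions are assumed to be in scope.

/*
 * Simplified stand-ins for the shared ring (visible to the other
 * side) and our private front ring; the real layouts come from the
 * Xen netif/ring headers.
 */
struct sring_sketch {			/* shared with the other side */
	volatile uint32_t req_prod, req_event;
	volatile uint32_t rsp_prod, rsp_event;
	/* ... ring slots ... */
};
struct front_ring_sketch {		/* private to this side */
	uint32_t rsp_cons;
	struct sring_sketch *sring;
};

static void
tx_complete_sketch(struct front_ring_sketch *ring)
{
	struct sring_sketch *sring = ring->sring;
	uint32_t resp_prod;

again:
	resp_prod = sring->rsp_prod;
	xen_rmb();	/* (1) observe rsp_prod before the responses */

	while (ring->rsp_cons != resp_prod) {
		/* ... use the response at slot ring->rsp_cons ... */
		ring->rsp_cons++;
	}

	/* (2) finish using the responses before exposing rsp_event */
	xen_wmb();
	sring->rsp_event =
	    resp_prod + ((sring->req_prod - resp_prod) >> 1) + 1;

	/* (3) store rsp_event before re-checking rsp_prod */
	xen_mb();
	if (resp_prod != sring->rsp_prod)
		goto again;
}

This is the ordering the diff below establishes: the xen_wmb moves to
just before the rsp_event store, and an xen_mb is added before the
rsp_prod re-check.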
- xennet_handler (rx): Same deal, except the xen_mb is buried in
RING_FINAL_CHECK_FOR_RESPONSES. Unclear why xennet_tx_complete has
this open-coded while xennet_handler (rx) uses the macro.
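For reference, the final-check macro works roughly as sketched below
(paraphrased from the Xen shared-ring header rather than quoted
verbatim from NetBSD's copy; RING_HAS_UNCONSUMED_RESPONSES is the
companion macro that compares rsp_prod against rsp_cons):

#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {		\
	/* anything produced that we have not yet consumed? */		\
	(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);		\
	if (_work_to_do)						\
		break;							\
	/* no: request an event at the next response ... */		\
	(_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
	/* ... and make that store visible before re-checking */	\
	xen_mb();							\
	(_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);		\
} while (0)

The open-coded tail of xennet_tx_complete above does the equivalent by
hand, which is why the xen_mb there has to be issued explicitly.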
XXX pullup-8 (at least the xen_mb part; requires patch)
XXX pullup-9 (at least the xen_mb part; requires patch)
XXX pullup-10
diffstat:
sys/arch/xen/xen/if_xennet_xenbus.c | 10 +++++-----
1 files changed, 5 insertions(+), 5 deletions(-)
diffs (43 lines):
diff -r 63a0ec8467d4 -r a783c92a8885 sys/arch/xen/xen/if_xennet_xenbus.c
--- a/sys/arch/xen/xen/if_xennet_xenbus.c Sat Feb 25 00:32:38 2023 +0000
+++ b/sys/arch/xen/xen/if_xennet_xenbus.c Sat Feb 25 00:32:49 2023 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: if_xennet_xenbus.c,v 1.128 2020/08/26 15:54:10 riastradh Exp $ */
+/* $NetBSD: if_xennet_xenbus.c,v 1.129 2023/02/25 00:32:49 riastradh Exp $ */
/*
* Copyright (c) 2006 Manuel Bouyer.
@@ -81,7 +81,7 @@
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: if_xennet_xenbus.c,v 1.128 2020/08/26 15:54:10 riastradh Exp $");
+__KERNEL_RCSID(0, "$NetBSD: if_xennet_xenbus.c,v 1.129 2023/02/25 00:32:49 riastradh Exp $");
#include "opt_xen.h"
#include "opt_nfs_boot.h"
@@ -951,12 +951,12 @@
SLIST_INSERT_HEAD(&sc->sc_txreq_head, req, txreq_next);
sc->sc_free_txreql++;
}
-
sc->sc_tx_ring.rsp_cons = resp_prod;
/* set new event and check for race with rsp_cons update */
+ xen_wmb();
sc->sc_tx_ring.sring->rsp_event =
resp_prod + ((sc->sc_tx_ring.sring->req_prod - resp_prod) >> 1) + 1;
- xen_wmb();
+ xen_mb();
if (resp_prod != sc->sc_tx_ring.sring->rsp_prod)
goto again;
}
@@ -1060,8 +1060,8 @@
if_statinc(ifp, if_iqdrops);
m_freem(m0);
}
- xen_rmb();
sc->sc_rx_ring.rsp_cons = i;
+ xen_wmb();
RING_FINAL_CHECK_FOR_RESPONSES(&sc->sc_rx_ring, more_to_do);
mutex_exit(&sc->sc_rx_lock);