Port-xen archive
Increasing RX buffers for xennet(4)
Hello.  For some time I've been seeing messages of the form:
[ 68670.674458] xennet0: rx no cluster
[ 71559.876225] xennet0: rx no cluster
[ 72917.029633] xennet0: rx no cluster
on several busy domUs we run.  In looking into the issue, it appears the xennet(4) driver
doesn't pre-allocate enough ring entries for the RX buffer to keep up with the input flow from
the network.  To try to solve this, I applied the following patch to if_xennet_xenbus.c,
but it results in a GNTST_bad_gntref error on the back end, at which point packets are no
longer received until things reset.
Is there a limit on the size a shared ring can be?
Also, how does one set up the grant references for the shared ring? I see the call to the
FRONT_RING_INIT macro, but that doesn't seem to make the backend happy. Am I correct that the
issue is that the backend is getting a reference to a ring item it doesn't think has been
initialized?
Is there a document that describes how one goes about setting up a shared ring and making it
available to the back end?
-thanks
-Brian
--- if_xennet_xenbus.c.good 2024-10-28 10:19:58.798553030 -0700
+++ if_xennet_xenbus.c 2024-10-29 08:34:58.981668220 -0700
@@ -147,8 +147,11 @@
#define GRANT_INVALID_REF -1 /* entry is free */
+/* Create more than 256 RX buffers */
+#define RX_MEMSIZE (2 * PAGE_SIZE)
+
#define NET_TX_RING_SIZE __CONST_RING_SIZE(netif_tx, PAGE_SIZE)
-#define NET_RX_RING_SIZE __CONST_RING_SIZE(netif_rx, PAGE_SIZE)
+#define NET_RX_RING_SIZE __CONST_RING_SIZE(netif_rx, RX_MEMSIZE)
struct xennet_txreq {
SLIST_ENTRY(xennet_txreq) txreq_next;
@@ -350,6 +353,9 @@
return;
}
+ aprint_normal_dev(self, "Using %lu TX buffers, %lu RX buffers\n",
+ NET_TX_RING_SIZE, NET_RX_RING_SIZE);
+
/* read mac address */
err = xenbus_read(NULL, sc->sc_xbusd->xbusd_path, "mac",
mac, sizeof(mac));
@@ -409,7 +415,7 @@
/* alloc shared rings */
tx_ring = (void *)uvm_km_alloc(kernel_map, PAGE_SIZE, 0,
UVM_KMF_WIRED);
- rx_ring = (void *)uvm_km_alloc(kernel_map, PAGE_SIZE, 0,
+ rx_ring = (void *)uvm_km_alloc(kernel_map, RX_MEMSIZE, 0,
UVM_KMF_WIRED);
if (tx_ring == NULL || rx_ring == NULL)
panic("%s: can't alloc rings", device_xname(self));
@@ -445,7 +451,7 @@
if (xennet_xenbus_resume(self, PMF_Q_NONE) == false) {
uvm_km_free(kernel_map, (vaddr_t)tx_ring, PAGE_SIZE,
UVM_KMF_WIRED);
- uvm_km_free(kernel_map, (vaddr_t)rx_ring, PAGE_SIZE,
+ uvm_km_free(kernel_map, (vaddr_t)rx_ring, RX_MEMSIZE,
UVM_KMF_WIRED);
return;
}
@@ -514,7 +520,7 @@
kpause("xnrxref", true, hz/2, &sc->sc_rx_lock);
xengnt_revoke_access(sc->sc_rx_ring_gntref);
mutex_exit(&sc->sc_rx_lock);
- uvm_km_free(kernel_map, (vaddr_t)sc->sc_rx_ring.sring, PAGE_SIZE,
+ uvm_km_free(kernel_map, (vaddr_t)sc->sc_rx_ring.sring, RX_MEMSIZE,
UVM_KMF_WIRED);
pmf_device_deregister(self);
@@ -552,9 +558,9 @@
SHARED_RING_INIT(tx_ring);
FRONT_RING_INIT(&sc->sc_tx_ring, tx_ring, PAGE_SIZE);
- memset(rx_ring, 0, PAGE_SIZE);
+ memset(rx_ring, 0, RX_MEMSIZE);
SHARED_RING_INIT(rx_ring);
- FRONT_RING_INIT(&sc->sc_rx_ring, rx_ring, PAGE_SIZE);
+ FRONT_RING_INIT(&sc->sc_rx_ring, rx_ring, RX_MEMSIZE);
(void)pmap_extract_ma(pmap_kernel(), (vaddr_t)tx_ring, &ma);
error = xenbus_grant_ring(sc->sc_xbusd, ma, &sc->sc_tx_ring_gntref);