tech-net archive
Re: if_txtimer API to replace (*if_watchdog)()
> On Jan 20, 2020, at 2:48 PM, Jason Thorpe <thorpej%me.com@localhost> wrote:
>
> Folks --
>
> The legacy (*if_watchdog)() interface has a couple of problems:
>
> 1- It does not have any way to represent multiple hardware-managed transmit queues.
>
> 2- It's not easy to make MP-safe because it relies on modifying the ifnet structure periodically outside of the normal locking mechanisms.
>
> The wm(4) driver solved the problem in a reasonable way, and to make it easier for the other drivers in the tree to adopt its strategy, I refactored it into a new if_txtimer structure and API.
>
> To save typing, I'll paste the relevant section of <net/if.h>:
I spent some time thinking about this a little more, especially around making it easier to convert drivers that don't already have a periodic tick (mainly drivers that don't use MII). I added some extra support for such drivers and, as a proof of concept, converted the SGI "sq" driver. You can see that it's a very easy mechanical conversion for legacy drivers that still gives them a simple NET_MPSAFE migration path (there's a provision for passing a tx queue interlock to the timer expiration callback).
If there is consensus that this is a reasonable direction, then I'll start migrating drivers and, when complete, remove the legacy (*if_watchdog)() and if_timer fields from struct ifnet.
For completeness, here's the big block comment:
/*
* if_txtimer --
*
* Transmission timer (to replace the legacy ifnet watchdog timer).
*
* The driver should allocate one if_txtimer per hardware-managed
* transmission queue. There are two different ways to allocate
 * and use the timer, based on the driver's structure.
*
* DRIVERS THAT PROVIDE THEIR OWN PERIODIC CALLOUT
* ===============================================
*
* ==> Allocate timers using if_txtimer_alloc().
* ==> In the periodic callout, check for txtimer expiration using
* if_txtimer_is_expired() or if_txtimer_is_expired_explicit()
* (see below).
*
* DRIVERS THAT DO NOT PROVIDE THEIR OWN PERIODIC CALLOUT
* ======================================================
*
* ==> Allocate timers using if_txtimer_alloc_with_callback().
* This variant allocates a callout and provides a facility
 * for the callout to invoke a driver-provided callback when
* the timer expires, with an optional interlock (typically
* the transmit queue mutex).
*
 * If an interlock is provided, it will be acquired
 * before checking for timer expiration, and the callback will be
 * invoked with the interlock held if the timer has expired.
* NOTE: the callback is responsible for releasing the interlock.
* If an interlock is NOT provided, then IPL will be raised to
* splnet() before checking for timer expiration and invoking
* the callback. In this case, the IPL will be restored on
* the callback's behalf when it returns.
* ==> In the driver's (*if_init)() routine, the timer's callout
* should be started with if_txtimer_start(). In the driver's
* (*if_stop)() routine, the timer's callout should be stopped
* with if_txtimer_stop() or if_txtimer_halt() (see callout(9)
* for the difference between stop and halt).
*
* In both cases, all locking of the if_txtimer is the responsibility
* of the driver. The if_txtimer should be protected by the same lock
* that protects the associated transmission queue. The queue
* associated with the timer should be locked when arming and disarming
* the timer and when checking the timer for expiration.
*
* When the driver gives packets to the hardware to transmit, it should
* arm the timer by calling if_txtimer_arm(). When it is sweeping up
* completed transmit jobs, it should disarm the timer by calling
* if_txtimer_disarm() if there are no outstanding jobs remaining.
*
* If a driver needs to check multiple transmission queues, an
* optimization is available that avoids repeated calls to fetch
* the compare time. In this case, the driver can get the compare
* time by calling if_txtimer_now() and can check for timer expiration
* using if_txtimer_is_expired_explicit().
*
* The granularity of the if_txtimer is 1 second.
*/
Index: arch/sgimips/hpc/if_sq.c
===================================================================
RCS file: /cvsroot/src/sys/arch/sgimips/hpc/if_sq.c,v
retrieving revision 1.52
diff -u -p -r1.52 if_sq.c
--- arch/sgimips/hpc/if_sq.c 23 May 2019 13:10:51 -0000 1.52
+++ arch/sgimips/hpc/if_sq.c 21 Jan 2020 23:31:04 -0000
@@ -104,9 +104,9 @@ static void sq_attach(device_t, device_t
static int sq_init(struct ifnet *);
static void sq_start(struct ifnet *);
static void sq_stop(struct ifnet *, int);
-static void sq_watchdog(struct ifnet *);
static int sq_ioctl(struct ifnet *, u_long, void *);
+static void sq_txtimer_expired(void *);
static void sq_set_filter(struct sq_softc *);
static int sq_intr(void *);
static int sq_rxintr(struct sq_softc *);
@@ -319,8 +319,9 @@ sq_attach(device_t parent, device_t self
ifp->if_stop = sq_stop;
ifp->if_start = sq_start;
ifp->if_ioctl = sq_ioctl;
- ifp->if_watchdog = sq_watchdog;
ifp->if_flags = IFF_BROADCAST | IFF_MULTICAST;
+ sc->sc_txtimer = if_txtimer_alloc_with_callback(ifp,
+ sq_txtimer_expired, sc, NULL, 5);
IFQ_SET_READY(&ifp->if_snd);
if_attach(ifp);
@@ -457,6 +458,7 @@ sq_init(struct ifnet *ifp)
ifp->if_flags |= IFF_RUNNING;
ifp->if_flags &= ~IFF_OACTIVE;
+ if_txtimer_start(sc->sc_txtimer);
return 0;
}
@@ -815,7 +817,7 @@ sq_start(struct ifnet *ifp)
}
/* Set a watchdog timer in case the chip flakes out. */
- ifp->if_timer = 5;
+ if_txtimer_arm(sc->sc_txtimer);
}
}
@@ -840,15 +842,16 @@ sq_stop(struct ifnet *ifp, int disable)
sq_reset(sc);
ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
- ifp->if_timer = 0;
+ if_txtimer_stop(sc->sc_txtimer);
}
/* Device timeout/watchdog routine. */
-void
-sq_watchdog(struct ifnet *ifp)
+static void
+sq_txtimer_expired(void *arg)
{
uint32_t status;
- struct sq_softc *sc = ifp->if_softc;
+ struct sq_softc *sc = arg;
+ struct ifnet *ifp = &sc->sc_ethercom.ec_if;
status = sq_hpc_read(sc, sc->hpc_regs->enetx_ctl);
log(LOG_ERR, "%s: device timeout (prev %d, next %d, free %d, "
@@ -1105,7 +1108,7 @@ sq_txintr(struct sq_softc *sc)
/* If all packets have left the coop, cancel watchdog */
if (sc->sc_nfreetx == SQ_NTXDESC)
- ifp->if_timer = 0;
+ if_txtimer_disarm(sc->sc_txtimer);
SQ_TRACE(SQ_TXINTR_EXIT, sc, sc->sc_prevtx, status);
if_schedule_deferred_start(ifp);
@@ -1184,7 +1187,7 @@ sq_txring_hpc1(struct sq_softc *sc)
* Set a watchdog timer in case the chip
* flakes out.
*/
- ifp->if_timer = 5;
+ if_txtimer_arm(sc->sc_txtimer);
}
sc->sc_prevtx = i;
@@ -1236,7 +1239,7 @@ sq_txring_hpc3(struct sq_softc *sc)
* Set a watchdog timer in case the chip
* flakes out.
*/
- ifp->if_timer = 5;
+ if_txtimer_arm(sc->sc_txtimer);
} else
SQ_TRACE(SQ_TXINTR_BUSY, sc, i, status);
break;
Index: arch/sgimips/hpc/sqvar.h
===================================================================
RCS file: /cvsroot/src/sys/arch/sgimips/hpc/sqvar.h,v
retrieving revision 1.15
diff -u -p -r1.15 sqvar.h
--- arch/sgimips/hpc/sqvar.h 13 Apr 2015 21:18:42 -0000 1.15
+++ arch/sgimips/hpc/sqvar.h 21 Jan 2020 23:31:04 -0000
@@ -142,6 +142,7 @@ struct sq_softc {
int sc_nexttx;
int sc_prevtx;
int sc_nfreetx;
+ struct if_txtimer *sc_txtimer;
/* DMA structures for TX packet data */
bus_dma_segment_t sc_txseg[SQ_NTXDESC];
Index: dev/pci/if_pcn.c
===================================================================
RCS file: /cvsroot/src/sys/dev/pci/if_pcn.c,v
retrieving revision 1.72
diff -u -p -r1.72 if_pcn.c
--- dev/pci/if_pcn.c 11 Oct 2019 14:22:46 -0000 1.72
+++ dev/pci/if_pcn.c 21 Jan 2020 23:31:05 -0000
@@ -296,6 +296,8 @@ struct pcn_softc {
int sc_flags; /* misc. flags; see below */
int sc_swstyle; /* the software style in use */
+ struct if_txtimer *sc_txtimer; /* transmit watchdog timer */
+
int sc_txfree; /* number of free Tx descriptors */
int sc_txnext; /* next ready Tx descriptor */
@@ -381,7 +383,6 @@ do { \
} while(/*CONSTCOND*/0)
static void pcn_start(struct ifnet *);
-static void pcn_watchdog(struct ifnet *);
static int pcn_ioctl(struct ifnet *, u_long, void *);
static int pcn_init(struct ifnet *);
static void pcn_stop(struct ifnet *, int);
@@ -806,9 +807,9 @@ pcn_attach(device_t parent, device_t sel
ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
ifp->if_ioctl = pcn_ioctl;
ifp->if_start = pcn_start;
- ifp->if_watchdog = pcn_watchdog;
ifp->if_init = pcn_init;
ifp->if_stop = pcn_stop;
+ sc->sc_txtimer = if_txtimer_alloc(ifp, 5);
IFQ_SET_READY(&ifp->if_snd);
/* Attach the interface. */
@@ -1140,37 +1141,8 @@ pcn_start(struct ifnet *ifp)
if (sc->sc_txfree != ofree) {
/* Set a watchdog timer in case the chip flakes out. */
- ifp->if_timer = 5;
- }
-}
-
-/*
- * pcn_watchdog: [ifnet interface function]
- *
- * Watchdog timer handler.
- */
-static void
-pcn_watchdog(struct ifnet *ifp)
-{
- struct pcn_softc *sc = ifp->if_softc;
-
- /*
- * Since we're not interrupting every packet, sweep
- * up before we report an error.
- */
- pcn_txintr(sc);
-
- if (sc->sc_txfree != PCN_NTXDESC) {
- printf("%s: device timeout (txfree %d txsfree %d)\n",
- device_xname(sc->sc_dev), sc->sc_txfree, sc->sc_txsfree);
- ifp->if_oerrors++;
-
- /* Reset the interface. */
- (void) pcn_init(ifp);
+ if_txtimer_arm(sc->sc_txtimer);
}
-
- /* Try to get more packets going. */
- pcn_start(ifp);
}
/*
@@ -1412,7 +1384,7 @@ pcn_txintr(struct pcn_softc *sc)
* timer.
*/
if (sc->sc_txsfree == PCN_TXQUEUELEN)
- ifp->if_timer = 0;
+ if_txtimer_disarm(sc->sc_txtimer);
}
/*
@@ -1553,9 +1525,39 @@ pcn_rxintr(struct pcn_softc *sc)
}
/*
+ * pcn_txtimer_expired:
+ *
+ * Tx timer to check for stalled transmitter.
+ */
+static void
+pcn_txtimer_expired(struct pcn_softc * const sc)
+{
+ struct ifnet * const ifp = &sc->sc_ethercom.ec_if;
+
+ /*
+ * Since we're not interrupting every packet, sweep
+ * up before we report an error.
+ */
+ pcn_txintr(sc);
+
+ if (__predict_false(sc->sc_txfree != PCN_NTXDESC)) {
+ printf("%s: device timeout (txfree %d txsfree %d)\n",
+ device_xname(sc->sc_dev), sc->sc_txfree, sc->sc_txsfree);
+ ifp->if_oerrors++;
+
+ /* Reset the interface. */
+ (void) pcn_init(ifp);
+ }
+
+ /* Try to get more packets going. */
+ pcn_start(ifp);
+}
+
+/*
* pcn_tick:
*
- * One second timer, used to tick the MII.
+ * One second timer, used to tick the MII and check transmit
+ * timer expiration.
*/
static void
pcn_tick(void *arg)
@@ -1564,7 +1566,15 @@ pcn_tick(void *arg)
int s;
s = splnet();
- mii_tick(&sc->sc_mii);
+
+ if (__predict_true(sc->sc_flags & PCN_F_HAS_MII)) {
+ mii_tick(&sc->sc_mii);
+ }
+
+ if (__predict_false(if_txtimer_is_expired(sc->sc_txtimer))) {
+ pcn_txtimer_expired(sc);
+ }
+
splx(s);
callout_reset(&sc->sc_tick_ch, hz, pcn_tick, sc);
@@ -1809,10 +1819,8 @@ pcn_init(struct ifnet *ifp)
/* Enable interrupts and external activity (and ACK IDON). */
pcn_csr_write(sc, LE_CSR0, LE_C0_INEA | LE_C0_STRT | LE_C0_IDON);
- if (sc->sc_flags & PCN_F_HAS_MII) {
- /* Start the one second MII clock. */
- callout_reset(&sc->sc_tick_ch, hz, pcn_tick, sc);
- }
+ /* Start the one second MII clock. */
+ callout_reset(&sc->sc_tick_ch, hz, pcn_tick, sc);
/* ...all done! */
ifp->if_flags |= IFF_RUNNING;
@@ -1857,10 +1865,10 @@ pcn_stop(struct ifnet *ifp, int disable)
struct pcn_txsoft *txs;
int i;
- if (sc->sc_flags & PCN_F_HAS_MII) {
- /* Stop the one second clock. */
- callout_stop(&sc->sc_tick_ch);
+ /* Stop the one second clock. */
+ callout_stop(&sc->sc_tick_ch);
+ if (sc->sc_flags & PCN_F_HAS_MII) {
/* Down the MII. */
mii_down(&sc->sc_mii);
}
@@ -1880,7 +1888,7 @@ pcn_stop(struct ifnet *ifp, int disable)
/* Mark the interface as down and cancel the watchdog timer. */
ifp->if_flags &= ~(IFF_RUNNING | IFF_OACTIVE);
- ifp->if_timer = 0;
+ if_txtimer_disarm(sc->sc_txtimer);
if (disable)
pcn_rxdrain(sc);
Index: dev/pci/if_wm.c
===================================================================
RCS file: /cvsroot/src/sys/dev/pci/if_wm.c,v
retrieving revision 1.659
diff -u -p -r1.659 if_wm.c
--- dev/pci/if_wm.c 20 Jan 2020 19:45:27 -0000 1.659
+++ dev/pci/if_wm.c 21 Jan 2020 23:31:05 -0000
@@ -370,8 +370,7 @@ struct wm_txqueue {
bool txq_stopping;
- bool txq_sending;
- time_t txq_lastsent;
+ struct if_txtimer *txq_txtimer;
uint32_t txq_packets; /* for AIM */
uint32_t txq_bytes; /* for AIM */
@@ -704,9 +703,9 @@ static bool wm_suspend(device_t, const p
static bool wm_resume(device_t, const pmf_qual_t *);
static void wm_watchdog(struct ifnet *);
static void wm_watchdog_txq(struct ifnet *, struct wm_txqueue *,
- uint16_t *);
+ const time_t, uint16_t *);
static void wm_watchdog_txq_locked(struct ifnet *, struct wm_txqueue *,
- uint16_t *);
+ uint16_t *);
static void wm_tick(void *);
static int wm_ifflags_cb(struct ethercom *);
static int wm_ioctl(struct ifnet *, u_long, void *);
@@ -1807,6 +1806,19 @@ wm_attach(device_t parent, device_t self
sc->sc_rev = PCI_REVISION(pci_conf_read(pc, pa->pa_tag,PCI_CLASS_REG));
pci_aprint_devinfo_fancy(pa, "Ethernet controller", wmp->wmp_name, 1);
+ /*
+ * Set up some parts of the ifnet now; it will be needed when we
+ * initialize the transmit queues.
+ */
+ ifp = &sc->sc_ethercom.ec_if;
+ xname = device_xname(sc->sc_dev);
+ strlcpy(ifp->if_xname, xname, IFNAMSIZ);
+ ifp->if_softc = sc;
+ ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
+#ifdef WM_MPSAFE
+ ifp->if_extflags = IFEF_MPSAFE;
+#endif
+
sc->sc_type = wmp->wmp_type;
/* Set default function pointers */
@@ -2828,14 +2840,7 @@ alloc_retry:
else
wm_tbi_mediainit(sc); /* All others */
- ifp = &sc->sc_ethercom.ec_if;
- xname = device_xname(sc->sc_dev);
- strlcpy(ifp->if_xname, xname, IFNAMSIZ);
- ifp->if_softc = sc;
- ifp->if_flags = IFF_BROADCAST | IFF_SIMPLEX | IFF_MULTICAST;
-#ifdef WM_MPSAFE
- ifp->if_extflags = IFEF_MPSAFE;
-#endif
+ /* See above for first part of ifnet initialization... */
ifp->if_ioctl = wm_ioctl;
if ((sc->sc_flags & WM_F_NEWQUEUE) != 0) {
ifp->if_start = wm_nq_start;
@@ -3148,10 +3153,12 @@ wm_watchdog(struct ifnet *ifp)
struct wm_softc *sc = ifp->if_softc;
uint16_t hang_queue = 0; /* Max queue number of wm(4) is 82576's 16. */
+ const time_t txq_now = if_txtimer_now();
+
for (qid = 0; qid < sc->sc_nqueues; qid++) {
struct wm_txqueue *txq = &sc->sc_queue[qid].wmq_txq;
- wm_watchdog_txq(ifp, txq, &hang_queue);
+ wm_watchdog_txq(ifp, txq, txq_now, &hang_queue);
}
/* IF any of queues hanged up, reset the interface. */
@@ -3169,14 +3176,13 @@ wm_watchdog(struct ifnet *ifp)
static void
-wm_watchdog_txq(struct ifnet *ifp, struct wm_txqueue *txq, uint16_t *hang)
+wm_watchdog_txq(struct ifnet *ifp, struct wm_txqueue *txq,
+ const time_t txq_now, uint16_t *hang)
{
mutex_enter(txq->txq_lock);
- if (txq->txq_sending &&
- time_uptime - txq->txq_lastsent > wm_watchdog_timeout)
+ if (if_txtimer_is_expired_explicit(txq->txq_txtimer, txq_now))
wm_watchdog_txq_locked(ifp, txq, hang);
-
mutex_exit(txq->txq_lock);
}
@@ -3195,7 +3201,7 @@ wm_watchdog_txq_locked(struct ifnet *ifp
*/
wm_txeof(txq, UINT_MAX);
- if (txq->txq_sending)
+ if (if_txtimer_is_armed(txq->txq_txtimer))
*hang |= __BIT(wmq->wmq_id);
if (txq->txq_free == WM_NTXDESC(txq)) {
@@ -6356,7 +6362,7 @@ wm_stop_locked(struct ifnet *ifp, int di
struct wm_queue *wmq = &sc->sc_queue[qidx];
struct wm_txqueue *txq = &wmq->wmq_txq;
mutex_enter(txq->txq_lock);
- txq->txq_sending = false; /* Ensure watchdog disabled */
+ if_txtimer_disarm(txq->txq_txtimer);
for (i = 0; i < WM_TXQUEUELEN(txq); i++) {
txs = &txq->txq_soft[i];
if (txs->txs_mbuf != NULL) {
@@ -6769,6 +6775,8 @@ wm_alloc_txrx_queues(struct wm_softc *sc
struct wm_txqueue *txq = &sc->sc_queue[i].wmq_txq;
txq->txq_sc = sc;
txq->txq_lock = mutex_obj_alloc(MUTEX_DEFAULT, IPL_NET);
+ txq->txq_txtimer = if_txtimer_alloc(&sc->sc_ethercom.ec_if,
+ wm_watchdog_timeout);
error = wm_alloc_tx_descs(sc, txq);
if (error)
@@ -6956,6 +6964,8 @@ wm_free_txrx_queues(struct wm_softc *sc)
wm_free_tx_descs(sc, txq);
if (txq->txq_lock)
mutex_obj_free(txq->txq_lock);
+ if (txq->txq_txtimer)
+ if_txtimer_free(txq->txq_txtimer);
}
kmem_free(sc->sc_queue, sizeof(struct wm_queue) * sc->sc_nqueues);
@@ -7058,7 +7068,7 @@ wm_init_tx_queue(struct wm_softc *sc, st
wm_init_tx_buffer(sc, txq);
txq->txq_flags = 0; /* Clear WM_TXQ_NO_SPACE */
- txq->txq_sending = false;
+ if_txtimer_disarm(txq->txq_txtimer);
}
static void
@@ -7833,8 +7843,7 @@ retry:
if (txq->txq_free != ofree) {
/* Set a watchdog timer in case the chip flakes out. */
- txq->txq_lastsent = time_uptime;
- txq->txq_sending = true;
+ if_txtimer_arm(txq->txq_txtimer);
}
}
@@ -8417,8 +8426,7 @@ retry:
if (sent) {
/* Set a watchdog timer in case the chip flakes out. */
- txq->txq_lastsent = time_uptime;
- txq->txq_sending = true;
+ if_txtimer_arm(txq->txq_txtimer);
}
}
@@ -8574,7 +8582,7 @@ wm_txeof(struct wm_txqueue *txq, u_int l
* timer.
*/
if (txq->txq_sfree == WM_TXQUEUELEN(txq))
- txq->txq_sending = false;
+ if_txtimer_disarm(txq->txq_txtimer);
return more;
}
Index: net/if.c
===================================================================
RCS file: /cvsroot/src/sys/net/if.c,v
retrieving revision 1.468
diff -u -p -r1.468 if.c
--- net/if.c 20 Jan 2020 18:38:18 -0000 1.468
+++ net/if.c 21 Jan 2020 23:31:06 -0000
@@ -1,7 +1,7 @@
/* $NetBSD: if.c,v 1.468 2020/01/20 18:38:18 thorpej Exp $ */
/*-
- * Copyright (c) 1999, 2000, 2001, 2008 The NetBSD Foundation, Inc.
+ * Copyright (c) 1999, 2000, 2001, 2008, 2020 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
@@ -3774,6 +3774,283 @@ if_mcast_op(ifnet_t *ifp, const unsigned
return rc;
}
+struct if_txtimer {
+ time_t txt_armtime;
+ unsigned int txt_timeout;
+ bool txt_armed;
+ bool txt_has_callback;
+};
+
+struct if_txtimer_callout {
+ struct if_txtimer itc_txtimer;
+ callout_t itc_callout;
+ void (*itc_cb)(void *);
+ void *itc_cb_arg;
+ kmutex_t *itc_interlock;
+};
+
+/*
+ * Acquire the transmit timer interlock (usually the driver's transmit
+ * queue lock) if provided. If one wasn't provided, then we assume a
+ * legacy driver and go to splnet().
+ */
+#define ITC_INTERLOCK_ACQUIRE(itc, s) \
+do { \
+ if ((itc)->itc_interlock) { \
+ mutex_enter((itc)->itc_interlock); \
+ s = 0xdeadbeef; /* XXX -Wuninitialized */ \
+ } else { \
+ s = splnet(); \
+ } \
+} while (/*CONSTCOND*/0)
+
+#define ITC_INTERLOCK_RELEASE(itc, s) \
+do { \
+ if ((itc)->itc_interlock) { \
+ mutex_exit((itc)->itc_interlock); \
+ } else { \
+ KASSERT(s != 0xdeadbeef); \
+ splx(s); \
+ } \
+} while (/*CONSTCOND*/0)
+
+static void
+if_txtimer_tick(void *arg)
+{
+ struct if_txtimer_callout * const itc = arg;
+ int s;
+
+ ITC_INTERLOCK_ACQUIRE(itc, s);
+
+ if (__predict_false(if_txtimer_is_expired(&itc->itc_txtimer))) {
+ (*itc->itc_cb)(itc->itc_cb_arg);
+ /*
+ * If one was provided, the callback is responsible for
+ * dropping the interlock. However, if no interlock
+ * was given, then we have to drop the IPL.
+ */
+ if (itc->itc_interlock == NULL) {
+ KASSERT(s != 0xdeadbeef);
+ splx(s);
+ }
+ /*
+ * The callback is also responsible for calling
+ * if_txtimer_start() once it is done recovering
+ * from the timeout error.
+ */
+ } else {
+ ITC_INTERLOCK_RELEASE(itc, s);
+ callout_schedule(&itc->itc_callout, hz);
+ }
+}
+
+/*
+ * if_txtimer_alloc --
+ * Allocate a network interface transmit timer.
+ */
+struct if_txtimer *
+if_txtimer_alloc(ifnet_t *ifp __unused, unsigned int timeout)
+{
+
+ KASSERT(timeout != 0);
+
+ struct if_txtimer * const txt = kmem_zalloc(sizeof(*txt), KM_SLEEP);
+ txt->txt_timeout = timeout;
+
+ return txt;
+}
+
+/*
+ * if_txtimer_alloc_with_callback --
+ * Allocate a network interface transmit timer, callback variant.
+ */
+struct if_txtimer *
+if_txtimer_alloc_with_callback(ifnet_t *ifp, void (*expired_cb)(void *),
+ void *arg, kmutex_t *interlock,
+ unsigned int timeout)
+{
+
+ KASSERT(expired_cb != NULL);
+ KASSERT(timeout != 0);
+
+ struct if_txtimer_callout * const itc =
+ kmem_zalloc(sizeof(*itc), KM_SLEEP);
+
+ itc->itc_txtimer.txt_timeout = timeout;
+ itc->itc_txtimer.txt_has_callback = true;
+
+ callout_init(&itc->itc_callout,
+ if_is_mpsafe(ifp) ? CALLOUT_MPSAFE : 0);
+ callout_setfunc(&itc->itc_callout, if_txtimer_tick, itc);
+
+ itc->itc_cb = expired_cb;
+ itc->itc_cb_arg = arg;
+ itc->itc_interlock = interlock;
+
+ return &itc->itc_txtimer;
+}
+
+/*
+ * if_txtimer_free --
+ * Free a network interface transmit timer.
+ */
+void
+if_txtimer_free(struct if_txtimer * const txt)
+{
+
+ if (txt->txt_has_callback) {
+ struct if_txtimer_callout * const itc =
+ container_of(txt, struct if_txtimer_callout, itc_txtimer);
+
+ callout_halt(&itc->itc_callout, NULL);
+ callout_destroy(&itc->itc_callout);
+ kmem_free(itc, sizeof(*itc));
+ return;
+ }
+
+ kmem_free(txt, sizeof(*txt));
+}
+
+/*
+ * if_txtimer_start --
+ * Start the periodic callout for a network interface
+ * transmit timer.
+ */
+void
+if_txtimer_start(struct if_txtimer * const txt)
+{
+
+ if (__predict_true(txt->txt_has_callback)) {
+ struct if_txtimer_callout * const itc =
+ container_of(txt, struct if_txtimer_callout, itc_txtimer);
+ callout_schedule(&itc->itc_callout, hz);
+ }
+}
+
+/*
+ * if_txtimer_stop --
+ * Stop the periodic callout for a network interface
+ * transmit timer.
+ *
+ * This is a basic wrapper around callout_stop().
+ */
+bool
+if_txtimer_stop(struct if_txtimer * const txt)
+{
+
+ if_txtimer_disarm(txt);
+ if (__predict_true(txt->txt_has_callback)) {
+ struct if_txtimer_callout * const itc =
+ container_of(txt, struct if_txtimer_callout, itc_txtimer);
+ return callout_stop(&itc->itc_callout);
+ }
+
+ return false;
+}
+
+/*
+ * if_txtimer_halt --
+ * Halt the periodic callout for a network interface
+ * transmit timer.
+ *
+ * This is a basic wrapper around callout_halt().
+ */
+bool
+if_txtimer_halt(struct if_txtimer * const txt)
+{
+
+ if_txtimer_disarm(txt);
+ if (__predict_true(txt->txt_has_callback)) {
+ struct if_txtimer_callout * const itc =
+ container_of(txt, struct if_txtimer_callout, itc_txtimer);
+ int s;
+
+ ITC_INTERLOCK_ACQUIRE(itc, s);
+ const bool rv = callout_halt(&itc->itc_callout,
+ itc->itc_interlock);
+ ITC_INTERLOCK_RELEASE(itc, s);
+ return rv;
+ }
+
+ return false;
+}
+
+/*
+ * if_txtimer_arm --
+ * Arm a network interface transmit timer.
+ */
+void
+if_txtimer_arm(struct if_txtimer * const txt)
+{
+ const time_t current_time = time_uptime;
+
+ txt->txt_armtime = current_time;
+ txt->txt_armed = true;
+}
+
+/*
+ * if_txtimer_disarm --
+ * Disarm a network interface transmit timer.
+ */
+void
+if_txtimer_disarm(struct if_txtimer * const txt)
+{
+
+ txt->txt_armed = false;
+}
+
+/*
+ * if_txtimer_is_armed --
+ * Return if a network interface transmit timer is armed.
+ */
+bool
+if_txtimer_is_armed(const struct if_txtimer * const txt)
+{
+
+ return txt->txt_armed;
+}
+
+/*
+ * if_txtimer_now --
+ * Return the current value of "now" for the purpose of
+ * checking for transmit timer expiration.
+ */
+time_t
+if_txtimer_now(void)
+{
+
+ return time_uptime;
+}
+
+/*
+ * if_txtimer_is_expired_explicit --
+ * Return if a network interface transmit timer has expired,
+ * using an explicit time.
+ */
+bool
+if_txtimer_is_expired_explicit(const struct if_txtimer * const txt,
+ const time_t current_time)
+{
+
+ return txt->txt_armed &&
+ (current_time - txt->txt_armtime) > txt->txt_timeout;
+}
+
+/*
+ * if_txtimer_is_expired --
+ * Return if a network interface transmit timer has expired.
+ */
+bool
+if_txtimer_is_expired(const struct if_txtimer * const txt)
+{
+ const time_t current_time = time_uptime;
+
+ return if_txtimer_is_expired_explicit(txt, current_time);
+}
+
+#undef ITC_INTERLOCK_ACQUIRE
+#undef ITC_INTERLOCK_RELEASE
+
static void
sysctl_sndq_setup(struct sysctllog **clog, const char *ifname,
struct ifaltq *ifq)
Index: net/if.h
===================================================================
RCS file: /cvsroot/src/sys/net/if.h,v
retrieving revision 1.277
diff -u -p -r1.277 if.h
--- net/if.h 19 Sep 2019 06:07:24 -0000 1.277
+++ net/if.h 21 Jan 2020 23:31:06 -0000
@@ -1,7 +1,7 @@
/* $NetBSD: if.h,v 1.277 2019/09/19 06:07:24 knakahara Exp $ */
/*-
- * Copyright (c) 1999, 2000, 2001 The NetBSD Foundation, Inc.
+ * Copyright (c) 1999, 2000, 2001, 2020 The NetBSD Foundation, Inc.
* All rights reserved.
*
* This code is derived from software contributed to The NetBSD Foundation
@@ -1182,6 +1182,80 @@ bool ifa_is_destroying(struct ifaddr *);
void ifaref(struct ifaddr *);
void ifafree(struct ifaddr *);
+/*
+ * if_txtimer --
+ *
+ * Transmission timer (to replace the legacy ifnet watchdog timer).
+ *
+ * The driver should allocate one if_txtimer per hardware-managed
+ * transmission queue. There are two different ways to allocate
+ * and use the timer, based on the driver's structure.
+ *
+ * DRIVERS THAT PROVIDE THEIR OWN PERIODIC CALLOUT
+ * ===============================================
+ *
+ * ==> Allocate timers using if_txtimer_alloc().
+ * ==> In the periodic callout, check for txtimer expiration using
+ * if_txtimer_is_expired() or if_txtimer_is_expired_explicit()
+ * (see below).
+ *
+ * DRIVERS THAT DO NOT PROVIDE THEIR OWN PERIODIC CALLOUT
+ * ======================================================
+ *
+ * ==> Allocate timers using if_txtimer_alloc_with_callback().
+ * This variant allocates a callout and provides a facility
+ * for the callout to invoke a driver-provided callback when
+ * the timer expires, with an optional interlock (typically
+ * the transmit queue mutex).
+ *
+ * If an interlock is provided, it will be acquired
+ * before checking for timer expiration, and the callback will be
+ * invoked with the interlock held if the timer has expired.
+ * NOTE: the callback is responsible for releasing the interlock.
+ * If an interlock is NOT provided, then IPL will be raised to
+ * splnet() before checking for timer expiration and invoking
+ * the callback. In this case, the IPL will be restored on
+ * the callback's behalf when it returns.
+ * ==> In the driver's (*if_init)() routine, the timer's callout
+ * should be started with if_txtimer_start(). In the driver's
+ * (*if_stop)() routine, the timer's callout should be stopped
+ * with if_txtimer_stop() or if_txtimer_halt() (see callout(9)
+ * for the difference between stop and halt).
+ *
+ * In both cases, all locking of the if_txtimer is the responsibility
+ * of the driver. The if_txtimer should be protected by the same lock
+ * that protects the associated transmission queue. The queue
+ * associated with the timer should be locked when arming and disarming
+ * the timer and when checking the timer for expiration.
+ *
+ * When the driver gives packets to the hardware to transmit, it should
+ * arm the timer by calling if_txtimer_arm(). When it is sweeping up
+ * completed transmit jobs, it should disarm the timer by calling
+ * if_txtimer_disarm() if there are no outstanding jobs remaining.
+ *
+ * If a driver needs to check multiple transmission queues, an
+ * optimization is available that avoids repeated calls to fetch
+ * the compare time. In this case, the driver can get the compare
+ * time by calling if_txtimer_now() and can check for timer expiration
+ * using if_txtimer_is_expired_explicit().
+ *
+ * The granularity of the if_txtimer is 1 second.
+ */
+struct if_txtimer;
+struct if_txtimer *if_txtimer_alloc(ifnet_t *, unsigned int);
+struct if_txtimer *if_txtimer_alloc_with_callback(ifnet_t *,
+ void (*)(void *), void *, kmutex_t *, unsigned int);
+void if_txtimer_start(struct if_txtimer *);
+bool if_txtimer_stop(struct if_txtimer *);
+bool if_txtimer_halt(struct if_txtimer *);
+void if_txtimer_free(struct if_txtimer *);
+void if_txtimer_arm(struct if_txtimer *);
+void if_txtimer_disarm(struct if_txtimer *);
+bool if_txtimer_is_armed(const struct if_txtimer *);
+bool if_txtimer_is_expired_explicit(const struct if_txtimer *, const time_t);
+bool if_txtimer_is_expired(const struct if_txtimer *);
+time_t if_txtimer_now(void);
+
struct ifaddr *ifa_ifwithaddr(const struct sockaddr *);
struct ifaddr *ifa_ifwithaddr_psref(const struct sockaddr *, struct psref *);
struct ifaddr *ifa_ifwithaf(int);
-- thorpej