DHDAXCW-Rockchip-OpenWrt/target/linux/generic/pending-5.4/770-03-net-ethernet-mtk_eth_soc-fix-unnecessary-tx-queue-st.patch
AmadeusGhost 86bc29e4a8
kernel: bump 5.4 to 5.4.68 (#5555)
[mac80211]
 ca5ee6e mac80211: Fix potential endless loop
 2c14710 mac80211: add more AQL fixes/improvements
 91fb3ce mac80211: remove an obsolete patch that is no longer doing anything useful
 acf1733 mac80211: add preliminary support for enabling 802.11ax in config
 d717343 mac80211: update encap offload patches to the latest version
 673062f mac80211: allow bigger A-MSDU sizes in VHT, even if HT is limited
 caf7277 mac80211: do not allow bigger VHT MPDUs than the hardware supports
 cd36c0d mac80211: select the first available channel for 5GHz interfaces
 1c6d456 mac80211: fix regression in station connection monitor optimization
 4bd7689 mac80211: update sta connection monitor regression fix

[target]
 Sync: at91, ath25, ath79, lantiq, mediatek, mvebu.
2020-10-03 00:36:16 +08:00

From: Felix Fietkau <nbd@nbd.name>
Date: Wed, 26 Aug 2020 16:55:54 +0200
Subject: [PATCH] net: ethernet: mtk_eth_soc: fix unnecessary tx queue
stops

When running short on descriptors, only stop the queue for the netdev that tx
was attempted for. By the time something tries to send on the other netdev,
the ring might have some more room already.

Signed-off-by: Felix Fietkau <nbd@nbd.name>
---
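[Illustrative note, not part of the upstream patch: a minimal sketch of the
per-netdev flow-control pattern this change switches to, using the standard
netif_stop_queue()/netif_wake_queue() helpers. The structures and names below
(my_eth, my_mac, my_start_xmit, my_tx_done, MY_TX_WAKE_THRESH) are hypothetical
and only stand in for the driver's real ones.]

#include <linux/atomic.h>
#include <linux/netdevice.h>
#include <linux/skbuff.h>

#define MY_TX_WAKE_THRESH 32		/* hypothetical wake threshold */

/* One TX ring shared by several netdevs, as in mtk_eth_soc. */
struct my_eth {
	atomic_t free_desc;
};

struct my_mac {
	struct my_eth *eth;
};

static netdev_tx_t my_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct my_mac *mac = netdev_priv(dev);
	int needed = skb_shinfo(skb)->nr_frags + 1;

	if (unlikely(atomic_read(&mac->eth->free_desc) < needed)) {
		/* Stop only this netdev's queue: other netdevs sharing the
		 * ring may still find room by the time they transmit.
		 */
		netif_stop_queue(dev);
		return NETDEV_TX_BUSY;
	}

	/* ... map the skb and post descriptors to the shared ring ... */
	return NETDEV_TX_OK;
}

/* TX completion path: descriptors were freed, wake this queue if stopped. */
static void my_tx_done(struct net_device *dev, int freed)
{
	struct my_mac *mac = netdev_priv(dev);

	atomic_add(freed, &mac->eth->free_desc);
	if (netif_queue_stopped(dev) &&
	    atomic_read(&mac->eth->free_desc) > MY_TX_WAKE_THRESH)
		netif_wake_queue(dev);
}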
--- a/drivers/net/ethernet/mediatek/mtk_eth_soc.c
+++ b/drivers/net/ethernet/mediatek/mtk_eth_soc.c
@@ -1126,17 +1126,6 @@ static void mtk_wake_queue(struct mtk_et
 	}
 }
 
-static void mtk_stop_queue(struct mtk_eth *eth)
-{
-	int i;
-
-	for (i = 0; i < MTK_MAC_COUNT; i++) {
-		if (!eth->netdev[i])
-			continue;
-		netif_stop_queue(eth->netdev[i]);
-	}
-}
-
 static int mtk_start_xmit(struct sk_buff *skb, struct net_device *dev)
 {
 	struct mtk_mac *mac = netdev_priv(dev);
@@ -1157,7 +1146,7 @@ static int mtk_start_xmit(struct sk_buff
 
 	tx_num = mtk_cal_txd_req(skb);
 	if (unlikely(atomic_read(&ring->free_count) <= tx_num)) {
-		mtk_stop_queue(eth);
+		netif_stop_queue(dev);
 		netif_err(eth, tx_queued, dev,
 			  "Tx Ring full when queue awake!\n");
 		spin_unlock(&eth->page_lock);
@@ -1183,7 +1172,7 @@ static int mtk_start_xmit(struct sk_buff
 		goto drop;
 
 	if (unlikely(atomic_read(&ring->free_count) <= ring->thresh))
-		mtk_stop_queue(eth);
+		netif_stop_queue(dev);
 
 	spin_unlock(&eth->page_lock);
 