This was added to mitigate the effects of too much sampling on devices that
use a static global fallback table instead of configurable multi-rate retry.
Now that the sampling algorithm has been improved, this code path no longer
performs any better than standard probing on the affected devices.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210127055735.78599-6-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The biggest flaw in the current minstrel_ht code is that it needs far too
many probing packets to quickly find the best rate.
Depending on the wifi hardware and operating mode, this can significantly
reduce throughput when not operating at the highest available data rate.
In order to be able to significantly reduce the amount of rate sampling,
we need a much smarter selection of probing rates.
The new approach introduced by this patch maintains a limited set of
available rates to be tested during a statistics window.
They are split into distinct categories:
- MINSTREL_SAMPLE_TYPE_INC - incremental rate upgrade:
Pick the next rate group and find the first rate that is faster than
the current max. throughput rate
- MINSTREL_SAMPLE_TYPE_JUMP - random testing of higher rates:
Pick a random rate from the next group that is faster than the current
max throughput rate. This allows faster adaptation when the link changes
significantly
- MINSTREL_SAMPLE_TYPE_SLOW - test a rate between max_prob, max_tp2 and
max_tp in order to reduce the gap between them
In order to prioritize sampling, every 6 attempts are split into 3x INC,
2x JUMP, 1x SLOW.
Available rates are checked and refilled on every stats window update.
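The rotation described above can be sketched as a small lookup table that the
sampling code cycles through; the type names below follow the categories listed
in this patch, but the layout and helper are illustrative, not the actual
implementation:
    enum sample_type {
            MINSTREL_SAMPLE_TYPE_INC,
            MINSTREL_SAMPLE_TYPE_JUMP,
            MINSTREL_SAMPLE_TYPE_SLOW,
    };

    /* 6-slot rotation: 3x INC, 2x JUMP, 1x SLOW */
    static const unsigned char sample_seq[6] = {
            MINSTREL_SAMPLE_TYPE_INC,
            MINSTREL_SAMPLE_TYPE_JUMP,
            MINSTREL_SAMPLE_TYPE_INC,
            MINSTREL_SAMPLE_TYPE_JUMP,
            MINSTREL_SAMPLE_TYPE_INC,
            MINSTREL_SAMPLE_TYPE_SLOW,
    };

    /* return the sample type for the next attempt and advance the cursor */
    static enum sample_type next_sample_type(unsigned char *seq_idx)
    {
            enum sample_type type = sample_seq[*seq_idx];

            *seq_idx = (*seq_idx + 1) % 6;
            return type;
    }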
With this approach, we finally get a very small throughput delta when
comparing the optimal data rate set as a fixed rate against normal rate
control operation.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210127055735.78599-4-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
In order to be able to fall back to lower rates more gracefully without too
much throughput fluctuation, initialize all untested rates below tested ones
to the maximum probability of the higher rates.
Usually this leads to untested lower rates getting initialized with a
probability value of 100%, making them better candidates for fallback without
having to rely on random probing.
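A minimal sketch of this initialization, assuming a simple per-rate stats
array with hypothetical field names: walk the rates from highest to lowest,
carry the best probability seen among tested rates, and seed every untested
rate below them with it:
    struct rate_stats_sketch {
            unsigned int attempts;          /* tx attempts in the stats window */
            unsigned int prob_avg;          /* smoothed success probability */
    };

    /* seed untested lower rates with the max probability seen on higher,
     * tested rates, so they become good fallback candidates without
     * having to wait for random probing (sketch)
     */
    static void init_untested_rates(struct rate_stats_sketch *rs, int n_rates)
    {
            unsigned int max_prob = 0;
            int i;

            /* walk from the highest rate down to the lowest */
            for (i = n_rates - 1; i >= 0; i--) {
                    if (rs[i].attempts) {
                            if (rs[i].prob_avg > max_prob)
                                    max_prob = rs[i].prob_avg;
                            continue;
                    }

                    if (max_prob)
                            rs[i].prob_avg = max_prob;
            }
    }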
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210127055735.78599-3-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Get rid of a lot of division and modulo operations.
This reduces code size and improves performance.
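One common way to achieve this, shown here only as a hedged sketch and not
necessarily the encoding used by the patch, is to pack the group and the
in-group rate index into a single value so that extraction becomes a shift
and a mask instead of a division and a modulo:
    /* hypothetical packed rate index: high bits = group, low bits = rate */
    #define RATE_IDX_BITS   4
    #define RATE_IDX_MASK   ((1 << RATE_IDX_BITS) - 1)

    static inline unsigned int rate_group(unsigned int packed)
    {
            return packed >> RATE_IDX_BITS;    /* replaces idx / MCS_GROUP_RATES */
    }

    static inline unsigned int rate_in_group(unsigned int packed)
    {
            return packed & RATE_IDX_MASK;     /* replaces idx % MCS_GROUP_RATES */
    }

    static inline unsigned int rate_pack(unsigned int group, unsigned int idx)
    {
            return (group << RATE_IDX_BITS) | idx;
    }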
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210127055735.78599-1-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The new noise filter has been the default for a while now with no reported
downside and significant improvement compared to the old code.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210115120242.89616-5-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
The legacy minstrel code is essentially unmaintained and receives only very
little testing. In order to bring the significant algorithm improvements from
minstrel_ht to legacy clients, this patch adds support for OFDM rates to
minstrel_ht and removes the fallback to the legacy codepath.
This also makes it work much better on hardware with rate selection constraints,
e.g. mt76.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210115120242.89616-3-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
- move ACK overhead out of the rate duration table
- remove cck_supported, cck_supported_short
This is preparation for adding support for OFDM legacy rates.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20210115120242.89616-2-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
On some devices that only support static rate fallback tables, sending rate
control probing packets can be really expensive.
Probing lower rates can already hurt throughput quite a bit. What hurts even
more is the fact that on mt76x0/mt76x2, single probing packets can only be
forced by directing packets at a different internal hardware queue, which
causes some heavy reordering and extra latency.
The reordering issue is mainly problematic while pushing lots of packets to
a particular station. If there is little activity, the overhead of probing is
negligible.
The static fallback behavior is really only designed for rate control
algorithms that use a very limited set of rates, switching up/down based on
packet error rate.
In order to better support that kind of hardware, this patch implements a
different approach to rate probing where it switches to a slightly higher rate,
waits for tx status feedback, then updates the stats and switches back to
the new max throughput rate. This only triggers above a packet rate of 100
per stats interval (~50ms).
For that kind of probing, the code has to reduce the set of probing rates
a lot more compared to single packet probing, so it uses only one packet
per MCS group which is either slightly faster, or as close as possible to
the max throughput rate.
This allows switching between similar rates with different numbers of
streams. The algorithm assumes that the hardware will work its way lower
within an MCS group in case of retransmissions, so that lower rates don't
have to be probed by the high-packet-rate probing code.
To further reduce the search space, it also does not probe rates with lower
channel bandwidth than the max throughput rate.
At the moment, these changes will only affect mt76x0/mt76x2.
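A rough sketch of the candidate selection described above, with illustrative
helper names and data layout: keep at most one candidate per MCS group, here
simplified to the rate whose throughput is closest to the current max
throughput rate, and skip groups with a narrower channel bandwidth:
    /* absolute throughput difference */
    static unsigned int tp_diff(unsigned int a, unsigned int b)
    {
            return a > b ? a - b : b - a;
    }

    /* pick at most one probe rate per MCS group for high-activity probing:
     * the rate closest to the max throughput rate; returns -1 if the group
     * should not be probed at all
     */
    static int select_group_probe_rate(const unsigned int *group_tp, int n_rates,
                                       unsigned int max_tp,
                                       unsigned int group_bw, unsigned int max_tp_bw)
    {
            int i, best = -1;

            /* do not probe rates with lower bandwidth than the max_tp rate */
            if (group_bw < max_tp_bw)
                    return -1;

            for (i = 0; i < n_rates; i++) {
                    if (best < 0 ||
                        tp_diff(group_tp[i], max_tp) < tp_diff(group_tp[best], max_tp))
                            best = i;
            }

            return best;
    }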
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Link: https://lore.kernel.org/r/20190820095449.45255-4-nbd@nbd.name
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Based on 2 normalized pattern(s):
  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation
  this program is free software you can redistribute it and or modify
  it under the terms of the gnu general public license version 2 as
  published by the free software foundation #
Extracted by the scancode license scanner, the SPDX license identifier
GPL-2.0-only
has been chosen to replace the boilerplate/reference in 4122 file(s).
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Enrico Weigelt <info@metux.net>
Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org>
Reviewed-by: Allison Randal <allison@lohutok.net>
Cc: linux-spdx@vger.kernel.org
Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
This is needed for the upcoming driver for MT7615 4x4 802.11ac chipsets.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
By storing a shift value for all duration values of a group, we can
reduce precision by a negligible amount to make them fit into a u16 value.
This improves the cache footprint and reduces size:
Before:
text data bss dec hex filename
10024 116 0 10140 279c rc80211_minstrel_ht.o
After:
text data bss dec hex filename
9368 116 0 9484 250c rc80211_minstrel_ht.o
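A hedged sketch of the idea, with illustrative field names: store one shift
value per group and keep the per-rate durations right-shifted so that they
fit into a u16, shifting back when the value is consumed:
    struct mcs_group_sketch {
            unsigned char shift;            /* per-group precision shift */
            unsigned short duration[10];    /* u16: tx duration >> shift */
    };

    /* recover the (approximate) full-precision duration */
    static inline unsigned int
    group_duration(const struct mcs_group_sketch *g, int rate)
    {
            return (unsigned int)g->duration[rate] << g->shift;
    }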
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Legacy-only devices are not very common and the overhead of the extra
code for HT and VHT rates is not big enough to justify all those extra
lines of code to make it optional.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
debugfs entries are cleaned up by debugfs_remove_recursive already.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Improves dcache footprint by ensuring that fewer cache lines need to be
touched.
Signed-off-by: Felix Fietkau <nbd@nbd.name>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This patch adds the new statistic "maximum possible lossless
throughput" to Minstrel's and Minstrel-HT's rc_stats (in debugfs). This
enables a comprehensive comparison between the current per-rate throughput
and the maximum achievable per-rate throughput.
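The new column can be thought of as the throughput a rate would reach at 100%
success probability. A simplified sketch of that calculation, assuming a
hypothetical per-rate airtime in nanoseconds for an average-sized frame:
    /* max. possible lossless throughput in kbit/s (sketch) */
    static unsigned int max_lossless_tp_kbps(unsigned long long airtime_ns,
                                             unsigned int payload_bytes)
    {
            unsigned long long bits = (unsigned long long)payload_bytes * 8;

            /* kbit/s = bits per frame * 1e6 / airtime in ns */
            return (unsigned int)(bits * 1000000 / airtime_ns);
    }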
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This patch moves Minstrel's and Minstrel-HT's per-rate throughput
calculation (EWMA(thr)) into a dedicated function.
As a result, the variable "unsigned int cur_tp" within struct
"minstrel_rate_stats" becomes obsolete and is removed to free
up its space.
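For illustration only, with constants and names that are assumptions rather
than the real ones, such a dedicated helper derives the per-rate throughput
from the smoothed (exponentially weighted) success probability and the
per-frame airtime:
    #define EWMA_DIV        128
    #define EWMA_LEVEL      96      /* smoothing weight out of EWMA_DIV */

    /* exponentially weighted moving average (sketch) */
    static unsigned int ewma(unsigned int old, unsigned int new)
    {
            return (new * (EWMA_DIV - EWMA_LEVEL) + old * EWMA_LEVEL) / EWMA_DIV;
    }

    /* per-rate throughput in kbit/s from the smoothed success probability
     * (in percent) and the airtime of one frame (in microseconds)
     */
    static unsigned int rate_tp(unsigned int prob_percent,
                                unsigned int airtime_usecs,
                                unsigned int payload_bits)
    {
            return payload_bits * prob_percent * 10 / airtime_usecs;
    }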
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This patch ensures consistent variable naming across both Minstrel and
Minstrel-HT: variables of type "minstrel_rate_stats" are named "mrs" and
those of type "minstrel_rate" are named "mr". In addition, some variable
and function names were changed to more meaningful ones.
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This patch adds a new debugfs file "rc_stats_csv" to output
Minstrel-HT's statistics in a common CSV format that is easy
to parse.
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Signed-off-by: Stefan Venz <ikstream86@gmail.com>
Acked-by: Felix Fietkau <nbd@openwrt.org>
[remove printing current time of day]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
When the new CONFIG_MAC80211_RC_MINSTREL_VHT is not set (default 'N'),
there is no behavioral change, including in sampling, and MCS_GROUP_RATES
remains 8.
Otherwise MCS_GROUP_RATES is 10, and a module parameter *vht_only*
(default 'true') restricts the rate selection to VHT when VHT is
supported.
Regarding the debugfs stats buffer:
It is explicitly increased from 8k to 32k to fit all rates, including when
both HT and VHT rates are enabled. As for the format, before:
type rate tpt eprob *prob ret *ok(*cum) ok( cum)
HT20/LGI ABCDP MCS0 0.0 0.0 0.0 1 0( 0) 0( 0)
after:
type rate tpt eprob *prob ret *ok(*cum) ok( cum)
HT20/LGI ABCDP MCS0 0.0 0.0 0.0 1 0( 0) 0( 0)
VHT40/LGI MCS5/2 0.0 0.0 0.0 0 0( 0) 0( 0)
Signed-off-by: Karl Beldan <karl.beldan@rivierawaves.com>
Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
No functional change.
Signed-off-by: Karl Beldan <karl.beldan@rivierawaves.com>
Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Since 5935839ad7 ("mac80211: improve minstrel_ht rate sorting by
throughput & probability"), the rate indexes are manipulated via u8's
and hence allow for a maximum of 256 mcs_group entries in
minstrel_mcs_groups.
At the moment, minstrel_ht advertises support up to 3HTSS@40MHz, consuming:
8(MCS_GROUP_RATES) * (3(SS)*2(GI)*2(BW)+1(CCK)), i.e. 104 entries.
Support for 3VHTSS@80MHz will require:
10(MCS_GROUP_RATES) * (3(SS)*2(GI)*2(BW)+1(CCK)) +
10(MCS_GROUP_RATES) * (3(SS)*2(GI)*3(BW)), i.e. 130 + 180 entries.
This change moves from u8s to u16s where necessary.
Signed-off-by: Karl Beldan <karl.beldan@rivierawaves.com>
Cc: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
This patch improves the way minstrel_ht sorts rates according to throughput
and success probability. Three for-loops across the entire rate and MCS group
set in minstrel_ht_update_stats(), which were used to determine the fastest,
second fastest and most robust rate, are reduced to two for-loops.
The sorted list of rates according to throughput is extended to the best four
rates, as we need them in the upcoming joint rate and power control. The sorting
is done via the new function minstrel_ht_sort_best_tp_rates(). The annotation
of those 4 best throughput rates in the debugfs file rc-stats is changed to
"A,B,C,D", where A is the fastest rate and D the 4th fastest.
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Tested-by: Stefan Venz <ikstream86@gmail.com>
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Minstrel and Minstrel_HT used their own structs to keep track of rate
statistics. Unify those variables in struct minstrel_rate_stats and
move it to rc80211_minstrel.h for common usage. This is a clean-up
patch to prepare the Minstrel and Minstrel_HT codebase for upcoming TPC.
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Pass the rate selection table to mac80211 from minstrel_ht_update_stats.
Only rates for sample attempts are set in info->control.rates.
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Both minstrel versions use individual ways to scale up integer values
to perform calculations. Merge minstrel_ht's scaling macros into
minstrel's header file and use them in both minstrel versions.
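As an illustration of what such shared fixed-point scaling macros look like
(the exact scale factor here is an assumption):
    /* fixed-point helpers shared by both minstrel variants (sketch) */
    #define MINSTREL_SCALE          12
    #define MINSTREL_FRAC(val, div) (((val) << MINSTREL_SCALE) / (div))
    #define MINSTREL_TRUNC(val)     ((val) >> MINSTREL_SCALE)

    /* example: MINSTREL_FRAC(3, 4)      == 3072 (0.75 in Q12) */
    /*          MINSTREL_TRUNC(3072 * 8) == 6    (0.75 * 8)    */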
Acked-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Thomas Huehn <thomas@net.t-labs.tu-berlin.de>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
If a rate is below the max_tp_rate, sample it frequently if:
- it is above max_tp_rate2, or
- it is above max_prob_rate and is a candidate for max_prob_rate
(has fewer streams than max_tp_rate).
This helps the retry chain recover more quickly from bad statistics
caused by collisions or interference, and slightly reduces throughput
fluctuations with higher rates.
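Expressed as a condition, as a sketch with hypothetical parameters rather than
the exact code, the rule reads roughly as follows:
    /* sketch: should this lower rate be sampled frequently? */
    static int sample_frequently(unsigned int tp, unsigned int streams,
                                 unsigned int tp_max, unsigned int streams_max,
                                 unsigned int tp_max2, unsigned int tp_prob)
    {
            if (tp >= tp_max)
                    return 0;               /* only rates below max_tp_rate */

            if (tp > tp_max2)
                    return 1;               /* above max_tp_rate2 */

            /* above max_prob_rate and a candidate for max_prob_rate */
            return tp > tp_prob && streams < streams_max;
    }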
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
When MCS rates start to get bad in 2.4 GHz because of long range or
strong interference, CCK rates can be a lot more robust.
This patch adds a pseudo MCS group containing CCK rates (long preamble
in the lower 4 slots, short preamble in the upper slots).
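A rough illustration of the slot layout described above; the values are
placeholders and not taken from the patch:
    /* pseudo MCS group for CCK rates (sketch): slots 0-3 hold the long
     * preamble variants, slots 4-7 the short preamble variants; the 1M
     * short preamble slot exists only to keep the layout uniform
     */
    static const unsigned int cck_rates_kbps[8] = {
            1000, 2000, 5500, 11000,        /* long preamble  */
            1000, 2000, 5500, 11000,        /* short preamble */
    };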
Signed-off-by: Felix Fietkau <nbd@openwrt.org>
[make minstrel_ht_get_stats static]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>