mm/page_alloc.c: fix a misleading comment
The comment says that the per-cpu batchsize and zone watermarks are determined by present_pages, which is definitely wrong: they are both calculated from managed_pages. Fix it.

Signed-off-by: Yaowei Bai <bywxiaobai@163.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Parent: c9d13f5fc7
Commit: 013110a73d
@@ -349,7 +349,7 @@ zone[i]'s protection[j] is calculated by following expression.
 (i < j):
   zone[i]->protection[j]
-  = (total sums of present_pages from zone[i+1] to zone[j] on the node)
+  = (total sums of managed_pages from zone[i+1] to zone[j] on the node)
     / lowmem_reserve_ratio[i];
 (i = j):
    (should not be protected. = 0;
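To make the documented formula concrete, here is a small user-space sketch (not kernel code): the zone layout, the managed_pages figures and the program itself are assumed for illustration, and it only loosely mirrors what setup_per_zone_lowmem_reserve() in mm/page_alloc.c computes.

/* Illustrative sketch of the documented protection[] formula (assumed data). */
#include <stdio.h>

#define NR_ZONES 3 /* assumed layout: DMA, DMA32, Normal */

int main(void)
{
	/* hypothetical managed_pages per zone on one node */
	unsigned long managed_pages[NR_ZONES] = { 3976, 1037824, 7124480 };
	/* default ratios from the text: 256 for DMA/DMA32, 32 for others */
	unsigned long lowmem_reserve_ratio[NR_ZONES] = { 256, 256, 32 };
	unsigned long protection[NR_ZONES][NR_ZONES] = { { 0 } };

	for (int i = 0; i < NR_ZONES; i++) {
		for (int j = 0; j < NR_ZONES; j++) {
			if (i < j) {
				/* total sums of managed_pages from zone[i+1] to zone[j] */
				unsigned long sum = 0;
				for (int k = i + 1; k <= j; k++)
					sum += managed_pages[k];
				protection[i][j] = sum / lowmem_reserve_ratio[i];
			} else {
				/* i == j: not protected; i > j: stays 0 */
				protection[i][j] = 0;
			}
			printf("zone[%d]->protection[%d] = %lu\n",
			       i, j, protection[i][j]);
		}
	}
	return 0;
}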
@@ -360,7 +360,7 @@ The default values of lowmem_reserve_ratio[i] are
     256 (if zone[i] means DMA or DMA32 zone)
     32  (others).
 As above expression, they are reciprocal number of ratio.
-256 means 1/256. # of protection pages becomes about "0.39%" of total present
+256 means 1/256. # of protection pages becomes about "0.39%" of total managed
 pages of higher zones on the node.
 
 If you would like to protect more pages, smaller values are effective.
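A worked example of that ratio, with figures assumed purely for illustration: a node with 1,048,576 managed pages (4 GiB at a 4 KiB page size) in the zones above zone[i] and the default ratio of 256 gets 1,048,576 / 256 = 4,096 protected pages, i.e. 16 MiB, which is the roughly 0.39% (1/256 = 0.390625%) the text refers to.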
@@ -6022,7 +6022,7 @@ void __init mem_init_print_info(const char *str)
  * set_dma_reserve - set the specified number of pages reserved in the first zone
  * @new_dma_reserve: The number of pages to mark reserved
  *
- * The per-cpu batchsize and zone watermarks are determined by present_pages.
+ * The per-cpu batchsize and zone watermarks are determined by managed_pages.
  * In the DMA zone, a significant percentage may be consumed by kernel image
  * and other unfreeable allocations which can skew the watermarks badly. This
  * function may optionally be used to account for unfreeable pages in the
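A rough user-space illustration of why the corrected comment matters (assumed figures and a deliberately simplified split, not the kernel's own watermark code): the min watermark budget is divided among zones in proportion to managed_pages, and low/high are derived from min, so reducing a zone's managed_pages, e.g. by pages accounted via set_dma_reserve(), also lowers that zone's watermarks.

/* Simplified sketch: watermarks scaled by each zone's managed_pages share. */
#include <stdio.h>

int main(void)
{
	/* hypothetical per-zone managed_pages on one node */
	unsigned long managed[] = { 3976, 1037824, 7124480 };
	const char *name[] = { "DMA", "DMA32", "Normal" };
	unsigned long total = 0, pages_min = 16384; /* assumed global minimum */

	for (int i = 0; i < 3; i++)
		total += managed[i];

	for (int i = 0; i < 3; i++) {
		/* each zone's share of pages_min, proportional to managed_pages */
		unsigned long min = (unsigned long)((double)pages_min * managed[i] / total);
		printf("%-6s min=%lu low=%lu high=%lu\n",
		       name[i], min, min + min / 4, min + min / 2);
	}
	return 0;
}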