mm: ksm: fix a typo in comment "alreaady"->"already"
There is a typo in comment, fix it.

Signed-off-by: Ethon Paul <ethp@qq.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Ralph Campbell <rcampbell@nvidia.com>
Link: http://lkml.kernel.org/r/20200410162427.13927-1-ethp@qq.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
This commit is contained in:
Parent: 52cfc24578
Commit: 457aef949d

mm/ksm.c | 10
@@ -612,7 +612,7 @@ static struct stable_node *alloc_stable_node_chain(struct stable_node *dup,
  * Move the old stable node to the second dimension
  * queued in the hlist_dup. The invariant is that all
  * dup stable_nodes in the chain->hlist point to pages
- * that are wrprotected and have the exact same
+ * that are write protected and have the exact same
  * content.
  */
 stable_node_chain_add_dup(dup, chain);
@@ -1148,7 +1148,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
 /*
  * No need to check ksm_use_zero_pages here: we can only have a
- * zero_page here if ksm_use_zero_pages was enabled alreaady.
+ * zero_page here if ksm_use_zero_pages was enabled already.
  */
 if (!is_zero_pfn(page_to_pfn(kpage))) {
 	get_page(kpage);
@@ -1608,7 +1608,7 @@ again:
  * continue. All KSM pages belonging to the
  * stable_node dups in a stable_node chain
  * have the same content and they're
- * wrprotected at all times. Any will work
+ * write protected at all times. Any will work
  * fine to continue the walk.
  */
 tree_page = get_ksm_page(stable_node_any,
@@ -1843,7 +1843,7 @@ again:
  * continue. All KSM pages belonging to the
  * stable_node dups in a stable_node chain
  * have the same content and they're
- * wrprotected at all times. Any will work
+ * write protected at all times. Any will work
  * fine to continue the walk.
  */
 tree_page = get_ksm_page(stable_node_any,
@@ -2001,7 +2001,7 @@ static void stable_tree_append(struct rmap_item *rmap_item,
  * duplicate. page_migration could break later if rmap breaks,
  * so we can as well crash here. We really need to check for
  * rmap_hlist_len == STABLE_NODE_CHAIN, but we can as well check
- * for other negative values as an undeflow if detected here
+ * for other negative values as an underflow if detected here
  * for the first time (and not when decreasing rmap_hlist_len)
  * would be sign of memory corruption in the stable_node.
  */