sparc64: Sharpen address space randomization calculations.

A recent patch to the x86 randomization code caused me to take
a quick look at what we do on sparc64, and in doing so I noticed
that we sometimes calculate a non-page-aligned randomization value
and stick it into mmap_base.

I also noticed that, in the time since I copied this logic over from
PowerPC, the powerpc code has tweaked its randomization ranges in
ways that would benefit us as well.

For one thing, we should allow at least 8MB of randomization;
otherwise huge-page regions never randomize at all when HPAGE_SIZE
is 4MB, since huge-page mappings are HPAGE_SIZE-aligned and any
offset smaller than that alignment leaves their placement unchanged.

And on the 64-bit side we were using up to 4GB.  Tone it down to
1GB, as 4GB can result in a lot of address space wastage.
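
As a quick sanity check of those two figures: the patch below caps the
random page count at 2^(22 - PAGE_SHIFT) for 32-bit tasks and
2^(29 - PAGE_SHIFT) for 64-bit tasks, shifts back up by PAGE_SHIFT, and
doubles the result. A minimal standalone sketch of that arithmetic,
assuming sparc64's 8KB base pages (PAGE_SHIFT == 13 is an assumption
here; this is an illustration, not kernel code):

#include <stdio.h>

/* Assumed for illustration: sparc64 uses 8KB base pages. */
#define PAGE_SHIFT 13UL

int main(void)
{
        /* Exclusive upper bounds of the randomization spans in the patch. */
        unsigned long span32 = ((1UL << (22UL - PAGE_SHIFT)) << PAGE_SHIFT) * 2;
        unsigned long span64 = ((1UL << (29UL - PAGE_SHIFT)) << PAGE_SHIFT) * 2;

        printf("32-bit span: %lu MB\n", span32 >> 20);  /* 8 MB */
        printf("64-bit span: %lu MB\n", span64 >> 20);  /* 1024 MB, i.e. 1GB */
        return 0;
}

The 2^23-byte span is the 8MB needed to actually move 4MB huge-page
regions, and 2^30 bytes is the new 1GB cap on the 64-bit side.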

Finally, make sure all computations are unsigned.

Signed-off-by: David S. Miller <davem@davemloft.net>
David S. Miller, 2011-02-18 14:06:47 -08:00
Parent: fd49bf48ca
Commit: 5a0efea09f
1 changed file with 13 additions and 8 deletions

arch/sparc/kernel/sys_sparc_64.c
@@ -360,20 +360,25 @@ unsigned long get_fb_unmapped_area(struct file *filp, unsigned long orig_addr, u
 }
 EXPORT_SYMBOL(get_fb_unmapped_area);
 
-/* Essentially the same as PowerPC... */
-void arch_pick_mmap_layout(struct mm_struct *mm)
+/* Essentially the same as PowerPC. */
+static unsigned long mmap_rnd(void)
 {
-        unsigned long random_factor = 0UL;
-        unsigned long gap;
+        unsigned long rnd = 0UL;
 
         if (current->flags & PF_RANDOMIZE) {
-                random_factor = get_random_int();
+                unsigned long val = get_random_int();
                 if (test_thread_flag(TIF_32BIT))
-                        random_factor &= ((1 * 1024 * 1024) - 1);
+                        rnd = (val % (1UL << (22UL-PAGE_SHIFT)));
                 else
-                        random_factor = ((random_factor << PAGE_SHIFT) &
-                                         0xffffffffUL);
+                        rnd = (val % (1UL << (29UL-PAGE_SHIFT)));
         }
+        return (rnd << PAGE_SHIFT) * 2;
+}
+
+void arch_pick_mmap_layout(struct mm_struct *mm)
+{
+        unsigned long random_factor = mmap_rnd();
+        unsigned long gap;
+
         /*
          * Fall back to the standard layout if the personality
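
Both message points (page alignment and unsigned math) can be checked
with a userspace port of the new mmap_rnd(). In this sketch, rand()
stands in for the kernel's get_random_int() and PAGE_SHIFT == 13 is an
assumed sparc64 value; it is an illustration, not the kernel source:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Assumptions for illustration: 8KB pages, rand() as the entropy source. */
#define PAGE_SHIFT 13UL
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

static unsigned long mmap_rnd(int is_32bit)
{
        unsigned long val = (unsigned long)rand();
        unsigned long rnd;

        /* All-unsigned arithmetic, mirroring the patch. */
        if (is_32bit)
                rnd = (val % (1UL << (22UL - PAGE_SHIFT)));
        else
                rnd = (val % (1UL << (29UL - PAGE_SHIFT)));
        return (rnd << PAGE_SHIFT) * 2;
}

int main(void)
{
        for (int i = 0; i < 1000000; i++) {
                unsigned long r = mmap_rnd(i & 1);
                /* The shift by PAGE_SHIFT guarantees page alignment. */
                assert((r & (PAGE_SIZE - 1)) == 0);
        }
        puts("all sampled offsets are page-aligned");
        return 0;
}

Because rnd is shifted left by PAGE_SHIFT before the final doubling,
every offset comes out a multiple of two pages; the old 32-bit path
masked the raw random value with (1MB - 1) without shifting, which is
how unaligned values reached mmap_base.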