ANDROID: binder: change down_write to down_read

binder_update_page_range needs down_write of mmap_sem because
vm_insert_page needs to modify vma->vm_flags to set VM_MIXEDMAP
unless it is already set. However, when I profiled binder in
operation, it turned out every binder buffer is mapped in advance
by binder_mmap. That means we can set VM_MIXEDMAP at binder_mmap
time, which already holds mmap_sem as down_write, so
binder_update_page_range no longer needs to hold mmap_sem as
down_write. Use the proper API, down_read. It helps the mmap_sem
contention problem as well as fixing the down_write abuse.

Ganesh Mahendran tested app launching and binder throughput and
could not find any problem. I ran a binder latency test per
Greg KH's request (thanks to Martijn for teaching me how to do it)
and could not find any problem either.

Cc: Ganesh Mahendran <opensource.ganesh@gmail.com>
Cc: Joe Perches <joe@perches.com>
Cc: Arve Hjønnevåg <arve@android.com>
Cc: Todd Kjos <tkjos@google.com>
Reviewed-by: Martijn Coenen <maco@android.com>
Signed-off-by: Minchan Kim <minchan@kernel.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Author: Minchan Kim, 2018-05-07 23:15:37 +09:00; committed by Greg Kroah-Hartman
Parent 838d556566
Commit 720c241924
2 changed files: 6 additions and 4 deletions


@@ -4727,7 +4727,9 @@ static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
 		failure_string = "bad vm_flags";
 		goto err_bad_arg;
 	}
-	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
+	vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP;
+	vma->vm_flags &= ~VM_MAYWRITE;
 	vma->vm_ops = &binder_vm_ops;
 	vma->vm_private_data = proc;


@@ -219,7 +219,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 	mm = alloc->vma_vm_mm;
 	if (mm) {
-		down_write(&mm->mmap_sem);
+		down_read(&mm->mmap_sem);
 		vma = alloc->vma;
 	}
@@ -288,7 +288,7 @@ static int binder_update_page_range(struct binder_alloc *alloc, int allocate,
 		/* vm_insert_page does not seem to increment the refcount */
 	}
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return 0;
@@ -321,7 +321,7 @@ err_page_ptr_cleared:
 	}
 err_no_vma:
 	if (mm) {
-		up_write(&mm->mmap_sem);
+		up_read(&mm->mmap_sem);
 		mmput(mm);
 	}
 	return vma ? -ENOMEM : -ESRCH;