Performance: large files on Linux

When running an upload from Linux that involves just a small number of large files (e.g. one 500 GB file), AzCopy can perform slowly.

The root cause is that AzCopy reads files sequentially... but some Linux distros aren't tuned for optimal sequential read performance of very large files. In particular, Linux distros are commonly configured to pre-read only 128 KB. That's not enough for the kind of work AzCopy does. The solution is simply to increase Linux's pre-read size for the device in question.

So, for disk /dev/sdc for example, here's how you could do it:

echo 8192 | sudo tee -a /sys/block/sdc/queue/read_ahead_kb
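
To check that the new value is in effect, you can read the setting back (same device as in the example above):

cat /sys/block/sdc/queue/read_ahead_kb

It should now report 8192, rather than the 128 that is typical on an untuned system.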

We've tested several values. 4096 is also OK, but was slightly slower.

Note that if you only do the above, the setting won't persist after reboots. To make it persistent, you have to script it to run at startup, typically with systemd on recent distros (such as recent Ubuntu versions) and with /etc/rc.d/rc.local on older ones. (Remove "sudo" from the script when automating it for use at startup.)
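
If you go the systemd route, one option is a small one-shot unit that writes the value at boot. The sketch below is only illustrative: the unit name (readahead-sdc.service) is made up for this example, and you would adjust the device name to match your own disk.

# /etc/systemd/system/readahead-sdc.service  (hypothetical file name)
[Unit]
Description=Set read-ahead size for /dev/sdc
After=local-fs.target

[Service]
Type=oneshot
# A shell is used here because ExecStart does not perform output redirection itself.
ExecStart=/bin/sh -c 'echo 8192 > /sys/block/sdc/queue/read_ahead_kb'

[Install]
WantedBy=multi-user.target

After saving the file, enable it with sudo systemctl enable readahead-sdc.service so it runs on every boot. As mentioned above, the command inside the unit does not use sudo, because it already runs as root at startup.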

For additional tips relevant to jobs with small numbers of large files, see Performance: Large files in general.