block: allow 'chunk_sectors' to be non-power-of-2

It is possible, albeit unlikely, for a block device to have a
non-power-of-2 chunk_sectors (e.g. a 10+2 RAID6 with 128K chunk_sectors,
which results in a full-stripe size of 1280K). This causes the RAID6's
io_opt to be advertised as 1280K, and a stacked device _could_ then be
made to use a blocksize, aka chunk_sectors, that matches the
non-power-of-2 io_opt of the underlying RAID6 -- resulting in the
stacked device's chunk_sectors being a non-power-of-2.

Update blk_queue_chunk_sectors() and blk_max_size_offset() to
accommodate drivers that need a non-power-of-2 chunk_sectors.

Reviewed-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Author: Mike Snitzer, 2020-09-21 22:32:49 -04:00; committed by Jens Axboe
Parent: 22ada802ed
Commit: 07d098e6bb
2 changed files, 13 insertions and 9 deletions
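
To make the arithmetic in the message concrete: a 128K chunk is 256 sectors
(of 512 bytes each), so a 10+2 RAID6 has a full stripe of 10 * 256 = 2560
sectors (1280K), which is not a power of 2. A minimal userspace sketch (not
kernel code; the values simply mirror the example above):

/*
 * Userspace sketch: why a stacked device can end up with a
 * non-power-of-2 chunk_sectors. Values mirror the commit message's
 * 10+2 RAID6 example.
 */
#include <stdio.h>
#include <stdbool.h>

static bool is_power_of_2(unsigned int n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

int main(void)
{
	unsigned int data_disks = 10;         /* 10+2 RAID6: 10 data disks */
	unsigned int chunk_sectors = 128 * 2; /* 128K chunk = 256 sectors of 512b */
	unsigned int full_stripe = data_disks * chunk_sectors;

	/* full stripe = 2560 sectors = 1280K, advertised as io_opt */
	printf("full stripe: %u sectors (%uK)\n", full_stripe, full_stripe / 2);
	printf("power of 2? %s\n", is_power_of_2(full_stripe) ? "yes" : "no");
	return 0;
}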

block/blk-settings.c

@@ -172,15 +172,13 @@ EXPORT_SYMBOL(blk_queue_max_hw_sectors);
  *
  * Description:
  *    If a driver doesn't want IOs to cross a given chunk size, it can set
- *    this limit and prevent merging across chunks. Note that the chunk size
- *    must currently be a power-of-2 in sectors. Also note that the block
- *    layer must accept a page worth of data at any offset. So if the
- *    crossing of chunks is a hard limitation in the driver, it must still be
- *    prepared to split single page bios.
+ *    this limit and prevent merging across chunks. Note that the block layer
+ *    must accept a page worth of data at any offset. So if the crossing of
+ *    chunks is a hard limitation in the driver, it must still be prepared
+ *    to split single page bios.
  **/
 void blk_queue_chunk_sectors(struct request_queue *q, unsigned int chunk_sectors)
 {
-	BUG_ON(!is_power_of_2(chunk_sectors));
 	q->limits.chunk_sectors = chunk_sectors;
 }
 EXPORT_SYMBOL(blk_queue_chunk_sectors);

include/linux/blkdev.h

@@ -1063,11 +1063,17 @@ static inline unsigned int blk_queue_get_max_sectors(struct request_queue *q,
 static inline unsigned int blk_max_size_offset(struct request_queue *q,
 					       sector_t offset)
 {
-	if (!q->limits.chunk_sectors)
+	unsigned int chunk_sectors = q->limits.chunk_sectors;
+
+	if (!chunk_sectors)
 		return q->limits.max_sectors;
 
-	return min(q->limits.max_sectors, (unsigned int)(q->limits.chunk_sectors -
-			(offset & (q->limits.chunk_sectors - 1))));
+	if (likely(is_power_of_2(chunk_sectors)))
+		chunk_sectors -= offset & (chunk_sectors - 1);
+	else
+		chunk_sectors -= sector_div(offset, chunk_sectors);
+
+	return min(q->limits.max_sectors, chunk_sectors);
 }
 
 static inline unsigned int blk_rq_get_max_sectors(struct request *rq,
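
For reference, the new blk_max_size_offset() logic can be modeled in
userspace as below. This is a sketch, not kernel code: plain % stands in
for the kernel's sector_div() (a 64-bit-safe divide that leaves the
quotient in its first argument and returns the remainder), and the
max_sectors/offset values are made up:

/* Userspace model of the reworked blk_max_size_offset(). */
#include <stdio.h>
#include <stdbool.h>

static bool is_power_of_2(unsigned int n)
{
	return n != 0 && (n & (n - 1)) == 0;
}

static unsigned int max_size_offset(unsigned int max_sectors,
				    unsigned int chunk_sectors,
				    unsigned long long offset)
{
	if (!chunk_sectors)
		return max_sectors;

	if (is_power_of_2(chunk_sectors))
		chunk_sectors -= offset & (chunk_sectors - 1); /* fast mask path */
	else
		chunk_sectors -= offset % chunk_sectors; /* sector_div() in-kernel */

	return chunk_sectors < max_sectors ? chunk_sectors : max_sectors;
}

int main(void)
{
	/* power-of-2 chunk: 100 sectors into a 256-sector chunk, 156 left */
	printf("%u\n", max_size_offset(4096, 256, 100));
	/* non-power-of-2 chunk (RAID6 full stripe of 2560 sectors):
	 * 3000 % 2560 = 440, so 2560 - 440 = 2120 sectors left */
	printf("%u\n", max_size_offset(4096, 2560, 3000));
	return 0;
}

The mask and modulo paths agree whenever chunk_sectors is a power of 2;
the likely() hint in the kernel version keeps the common power-of-2 case
on the cheap bitwise path.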