Redirecting stream config to BlockCache (#1445)

* Auto convert streaming config to block-cache config
This commit is contained in:
ashruti-msft 2024-11-05 11:38:27 +05:30 committed by GitHub
Parent ba739b6206
Commit 46a557ce18
No known key found for this signature
GPG key ID: B5690EEEBB952194
22 changed files: 169 additions and 3670 deletions

View file

@@ -2,6 +2,9 @@
**Bug Fixes**
- [#1426](https://github.com/Azure/azure-storage-fuse/issues/1426) Read panic in block-cache due to boundary conditions.
**Other Changes**
- Stream config is implicitly converted to block-cache config; the 'stream' component is no longer used from this release onwards.
## 2.3.2 (2024-09-03)
**Bug Fixes**
- Fixed the case where file creation using SAS on HNS accounts returned the wrong error code.

View file

@@ -13,7 +13,7 @@ Please submit an issue [here](https://github.com/azure/azure-storage-fuse/issues
## NOTICE
- Due to known data consistency issues when using Blobfuse2 in `block-cache` mode, it is strongly recommended that all Blobfuse2 installations be upgraded to version 2.3.2. For more information, see [this](https://github.com/Azure/azure-storage-fuse/wiki/Blobfuse2-Known-issues).
- As of version 2.3.0, blobfuse has updated its authentication methods. For Managed Identity, Object-ID based OAuth is solely accessible via CLI-based login, requiring Azure CLI on the system. For a dependency-free option, users may utilize Application/Client-ID or Resource ID based authentication.
- `streaming` mode is being deprecated. This is the older option and is replaced with the `block-cache` mode which is the more performant streaming option.
- `streaming` mode is deprecated. Blobfuse2 will implicitly convert your streaming config to block-cache.
## Limitations in Block Cache
- Concurrent write operations on the same file using multiple handles are not checked for data consistency and may lead to incorrect data being written.
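As an illustration of the implicit conversion, a legacy `stream` section like the one below is treated as if the equivalent `block_cache` section had been written (values taken from the conversion test in this PR; key names from the block-cache component, exact defaults may vary):

```yaml
# Legacy (deprecated) stream config:
stream:
  block-size-mb: 16
  max-buffers: 80
  buffer-size-mb: 8

# Implicit block-cache equivalent:
block_cache:
  block-size-mb: 16
  mem-size-mb: 640   # buffer-size-mb * max-buffers
```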
@@ -38,7 +38,7 @@ Visit [this](https://github.com/Azure/azure-storage-fuse/wiki/Blobfuse2-Supporte
- Basic file system operations such as mkdir, opendir, readdir, rmdir, open,
read, create, write, close, unlink, truncate, stat, rename
- Local caching to improve subsequent access times
- Streaming/Block-Cache to support reading AND writing large files
- Block-Cache to support reading AND writing large files
- Parallel downloads and uploads to improve access time for large files
- Multiple mounts to the same container for read-only workloads
@@ -65,7 +65,7 @@ One of the biggest BlobFuse2 features is our brand new health monitor. It allows
- CLI to check or update a parameter in the encrypted config
- Set MD5 sum of a blob while uploading
- Validate MD5 sum on download and fail file open on mismatch
- Large file writing through write streaming/Block-Cache
- Large file writing through Block-Cache
## Blobfuse2 performance compared to blobfuse(v1.x.x)
- 'git clone' operation is 25% faster (tested with vscode repo cloning)
@@ -154,8 +154,6 @@ To learn about a specific command, just include the name of the command (For exa
* `--high-disk-threshold=<PERCENTAGE>`: If local cache usage exceeds this, start early eviction of files from cache.
* `--low-disk-threshold=<PERCENTAGE>`: If local cache usage drops below this threshold, stop early eviction.
* `--sync-to-flush=false` : Sync call will force upload a file to storage container if this is set to true, otherwise it just evicts file from local cache.
- Stream options
* `--block-size-mb=<SIZE IN MB>`: Size of a block to be downloaded during streaming.
- Block-Cache options
* `--block-cache-block-size=<SIZE IN MB>`: Size of a block to be downloaded as a unit.
* `--block-cache-pool-size=<SIZE IN MB>`: Size of pool to be used for caching. This limits total memory used by block-cache. Default - 80% of free memory available.
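The high/low disk thresholds above form a simple hysteresis loop: eviction starts when cache usage crosses the high watermark and stops only once usage falls below the low one. A minimal sketch of that decision (illustrative only, not blobfuse2's actual eviction code):

```go
package main

import "fmt"

// evictionActive applies high/low disk-threshold hysteresis: start early
// eviction when cache usage crosses the high watermark, stop once it falls
// below the low watermark, and otherwise keep the current state.
func evictionActive(usagePct, highPct, lowPct float64, active bool) bool {
	switch {
	case usagePct >= highPct:
		return true
	case usagePct <= lowPct:
		return false
	default:
		return active
	}
}

func main() {
	fmt.Println(evictionActive(92, 90, 60, false)) // true: above high watermark
	fmt.Println(evictionActive(75, 90, 60, true))  // true: hysteresis keeps evicting
	fmt.Println(evictionActive(50, 90, 60, true))  // false: below low watermark
}
```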
@@ -230,7 +228,6 @@ Below diagrams guide you to choose the right configuration for your workloads.
<br/><br/>
- [Sample File Cache Config](./sampleFileCacheConfig.yaml)
- [Sample Block-Cache Config](./sampleBlockCacheConfig.yaml)
- [Sample Stream Config](./sampleStreamingConfig.yaml)
- [All Config options](./setup/baseConfig.yaml)
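The conversion arithmetic itself is small; a sketch in Go (types and function here are illustrative, not blobfuse2's internals — the actual mapping is done inside the block_cache component's Configure):

```go
package main

import "fmt"

// streamOpts mirrors the legacy 'stream' config section (illustrative subset).
type streamOpts struct {
	BlockSizeMB      uint64 // stream: block-size-mb
	MaxBuffers       uint64 // stream: max-buffers
	BufferSizeMB     uint64 // stream: buffer-size-mb
	MaxBlocksPerFile uint64 // stream: max-blocks-per-file
}

// blockCacheOpts mirrors the 'block_cache' config section (illustrative subset).
type blockCacheOpts struct {
	BlockSizeMB uint64 // block_cache: block-size-mb
	MemSizeMB   uint64 // block_cache: mem-size-mb
	Prefetch    uint64 // block_cache: prefetch
}

// convert sketches the mapping applied when a stream config is found:
// block size carries over, the memory pool is buffer size times buffer
// count, and max-blocks-per-file becomes the prefetch count.
func convert(s streamOpts) blockCacheOpts {
	return blockCacheOpts{
		BlockSizeMB: s.BlockSizeMB,
		MemSizeMB:   s.BufferSizeMB * s.MaxBuffers,
		Prefetch:    s.MaxBlocksPerFile,
	}
}

func main() {
	bc := convert(streamOpts{BlockSizeMB: 16, MaxBuffers: 80, BufferSizeMB: 8})
	fmt.Printf("block=%dMB mem=%dMB\n", bc.BlockSizeMB, bc.MemSizeMB) // block=16MB mem=640MB
}
```

These are the same figures the conversion unit test in this PR asserts (16 MB blocks, 640 MB memory from an 8 MB buffer times 80 buffers).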

View file

@@ -76,7 +76,12 @@ steps:
displayName: 'Unmount RW mount'
- script: |
$(WORK_DIR)/blobfuse2 gen-test-config --config-file=$(WORK_DIR)/testdata/config/azure_key_bc.yaml --container-name=${{ parameters.container }} --temp-path=${{ parameters.temp_dir }} --output-file=${{ parameters.config_file }}
if [ "${{ parameters.idstring }}" = "Stream" ]; then
CONFIG_FILE=$(WORK_DIR)/testdata/config/azure_stream.yaml
else
CONFIG_FILE=$(WORK_DIR)/testdata/config/azure_key_bc.yaml
fi
$(WORK_DIR)/blobfuse2 gen-test-config --config-file=$CONFIG_FILE --container-name=${{ parameters.container }} --temp-path=${{ parameters.temp_dir }} --output-file=${{ parameters.config_file }}
displayName: 'Create Config File for RO mount'
env:
NIGHTLY_STO_ACC_NAME: ${{ parameters.account_name }}

View file

@@ -1596,6 +1596,77 @@ stages:
mount_dir: $(MOUNT_DIR)
block_size_mb: "8"
- stage: StreamDataValidation
jobs:
# Ubuntu Tests
- job: Set_1
timeoutInMinutes: 300
strategy:
matrix:
Ubuntu-22:
AgentName: 'blobfuse-ubuntu22'
containerName: 'test-cnt-ubn-22'
adlsSas: $(AZTEST_ADLS_CONT_SAS_UBN_22)
fuselib: 'libfuse3-dev'
tags: 'fuse3'
pool:
name: "blobfuse-ubuntu-pool"
demands:
- ImageOverride -equals $(AgentName)
variables:
- group: NightlyBlobFuse
- name: ROOT_DIR
value: "/usr/pipeline/workv2"
- name: WORK_DIR
value: "/usr/pipeline/workv2/go/src/azure-storage-fuse"
- name: skipComponentGovernanceDetection
value: true
- name: MOUNT_DIR
value: "/usr/pipeline/workv2/blob_mnt"
- name: TEMP_DIR
value: "/usr/pipeline/workv2/temp"
- name: BLOBFUSE2_CFG
value: "/usr/pipeline/workv2/blobfuse2.yaml"
- name: GOPATH
value: "/usr/pipeline/workv2/go"
steps:
- template: 'azure-pipeline-templates/setup.yml'
parameters:
tags: $(tags)
installStep:
script: |
sudo apt-get update --fix-missing
sudo apt update
sudo apt-get install cmake gcc $(fuselib) git parallel -y
if [ $(tags) == "fuse2" ]; then
sudo apt-get install fuse -y
else
sudo apt-get install fuse3 -y
fi
displayName: 'Install fuse'
- template: 'azure-pipeline-templates/e2e-tests-block-cache.yml'
parameters:
conf_template: azure_stream.yaml
config_file: $(BLOBFUSE2_CFG)
container: $(containerName)
idstring: Stream
adls: false
account_name: $(NIGHTLY_STO_BLOB_ACC_NAME)
account_key: $(NIGHTLY_STO_BLOB_ACC_KEY)
account_type: block
account_endpoint: https://$(NIGHTLY_STO_BLOB_ACC_NAME).blob.core.windows.net
distro_name: $(AgentName)
quick_test: false
verbose_log: ${{ parameters.verbose_log }}
clone: true
# TODO: These can be removed one day and replace all instances of ${{ parameters.temp_dir }} with $(TEMP_DIR) since it is a global variable
temp_dir: $(TEMP_DIR)
mount_dir: $(MOUNT_DIR)
- stage: FNSDataValidation
jobs:
# Ubuntu Tests

View file

@@ -40,5 +40,4 @@ import (
_ "github.com/Azure/azure-storage-fuse/v2/component/file_cache"
_ "github.com/Azure/azure-storage-fuse/v2/component/libfuse"
_ "github.com/Azure/azure-storage-fuse/v2/component/loopback"
_ "github.com/Azure/azure-storage-fuse/v2/component/stream"
)

View file

@@ -47,9 +47,9 @@ import (
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/attr_cache"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/component/block_cache"
"github.com/Azure/azure-storage-fuse/v2/component/file_cache"
"github.com/Azure/azure-storage-fuse/v2/component/libfuse"
"github.com/Azure/azure-storage-fuse/v2/component/stream"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -96,7 +96,7 @@ type PipelineConfig struct {
NonEmptyMountOption bool `yaml:"nonempty,omitempty"`
LogOptions `yaml:"logging,omitempty"`
libfuse.LibfuseOptions `yaml:"libfuse,omitempty"`
stream.StreamOptions `yaml:"stream,omitempty"`
block_cache.StreamOptions `yaml:"stream,omitempty"`
file_cache.FileCacheOptions `yaml:"file_cache,omitempty"`
attr_cache.AttrCacheOptions `yaml:"attr_cache,omitempty"`
azstorage.AzStorageOptions `yaml:"azstorage,omitempty"`
@@ -113,7 +113,7 @@ var bfv2FuseConfigOptions libfuse.LibfuseOptions
var bfv2FileCacheConfigOptions file_cache.FileCacheOptions
var bfv2AttrCacheConfigOptions attr_cache.AttrCacheOptions
var bfv2ComponentsConfigOptions ComponentsConfig
var bfv2StreamConfigOptions stream.StreamOptions
var bfv2StreamConfigOptions block_cache.StreamOptions
var bfv2ForegroundOption bool
var bfv2ReadOnlyOption bool
var bfv2NonEmptyMountOption bool
@@ -132,7 +132,7 @@ func resetOptions() {
bfv2FileCacheConfigOptions = file_cache.FileCacheOptions{}
bfv2AttrCacheConfigOptions = attr_cache.AttrCacheOptions{}
bfv2ComponentsConfigOptions = ComponentsConfig{}
bfv2StreamConfigOptions = stream.StreamOptions{}
bfv2StreamConfigOptions = block_cache.StreamOptions{}
bfv2ForegroundOption = false
bfv2ReadOnlyOption = false
bfv2NonEmptyMountOption = false

View file

@@ -46,8 +46,8 @@ import (
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/component/attr_cache"
"github.com/Azure/azure-storage-fuse/v2/component/azstorage"
"github.com/Azure/azure-storage-fuse/v2/component/block_cache"
"github.com/Azure/azure-storage-fuse/v2/component/file_cache"
"github.com/Azure/azure-storage-fuse/v2/component/stream"
"github.com/spf13/cobra"
"github.com/spf13/pflag"
@@ -607,7 +607,7 @@ func (suite *generateConfigTestSuite) TestCLIParamStreaming() {
suite.assert.Nil(err)
// Read the generated v2 config file
options := stream.StreamOptions{}
options := block_cache.StreamOptions{}
viper.SetConfigType("yaml")
config.ReadFromConfigFile(v2ConfigFile.Name())

View file

@@ -55,6 +55,7 @@ import (
var RootMount bool
var ForegroundMount bool
var IsStream bool
// IsDirectoryMounted is a utility function that returns true if the directory is already mounted using fuse
func IsDirectoryMounted(path string) bool {

View file

@@ -83,9 +83,9 @@ type BlockCache struct {
maxDiskUsageHit bool // Flag to indicate if we have hit max disk usage
noPrefetch bool // Flag to indicate if prefetch is disabled
prefetchOnOpen bool // Start prefetching on file open call instead of waiting for first read
lazyWrite bool // Flag to indicate if lazy write is enabled
fileCloseOpt sync.WaitGroup // Wait group to wait for all async close operations to complete
stream *Stream
lazyWrite bool // Flag to indicate if lazy write is enabled
fileCloseOpt sync.WaitGroup // Wait group to wait for all async close operations to complete
}
// Structure defining your config parameters
@@ -175,7 +175,13 @@ func (bc *BlockCache) Stop() error {
// Return failure if any config is not valid to exit the process
func (bc *BlockCache) Configure(_ bool) error {
log.Trace("BlockCache::Configure : %s", bc.Name())
if common.IsStream {
err := bc.stream.Configure(true)
if err != nil {
log.Err("BlockCache:Stream::Configure : config error [invalid config attributes]")
return fmt.Errorf("config error in %s [%s]", bc.Name(), err.Error())
}
}
defaultMemSize := false
conf := BlockCacheOptions{}
err := config.UnmarshalKey(bc.Name(), &conf)

View file

@@ -163,6 +163,7 @@ func (tobj *testObj) cleanupPipeline() error {
os.RemoveAll(tobj.fake_storage_path)
os.RemoveAll(tobj.disk_cache_path)
common.IsStream = false
return nil
}
@@ -2597,6 +2598,18 @@ func (suite *blockCacheTestSuite) TestReadWriteBlockInParallel() {
suite.assert.Equal(fs.Size(), int64(62*_1MB))
}
func (suite *blockCacheTestSuite) TestZZZZZStreamToBlockCacheConfig() {
common.IsStream = true
config := "read-only: true\n\nstream:\n block-size-mb: 16\n max-buffers: 80\n buffer-size-mb: 8\n"
tobj, err := setupPipeline(config)
defer tobj.cleanupPipeline()
suite.assert.Nil(err)
suite.assert.Equal(tobj.blockCache.Name(), "block_cache")
suite.assert.EqualValues(tobj.blockCache.blockSize, 16*_1MB)
suite.assert.EqualValues(tobj.blockCache.memSize, 8*_1MB*80)
}
// In order for 'go test' to run this suite, we need to create
// a normal test function and pass our suite to suite.Run
func TestBlockCacheTestSuite(t *testing.T) {

View file

@@ -31,29 +31,24 @@
SOFTWARE
*/
package stream
package block_cache
import (
"context"
"errors"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/pbnjay/memory"
)
type Stream struct {
internal.BaseComponent
cache StreamConnection
BlockSize int64
BufferSize uint64 // maximum number of blocks allowed to be stored for a file
CachedObjLimit int32
CachedObjects int32
StreamOnly bool // parameter used to check if its pure streaming
}
type StreamOptions struct {
@@ -69,38 +64,21 @@ type StreamOptions struct {
}
const (
compName = "stream"
mb = 1024 * 1024
compStream = "stream"
mb = 1024 * 1024
)
var _ internal.Component = &Stream{}
func (st *Stream) Name() string {
return compName
}
func (st *Stream) SetName(name string) {
st.BaseComponent.SetName(name)
}
func (st *Stream) SetNextComponent(nc internal.Component) {
st.BaseComponent.SetNextComponent(nc)
}
func (st *Stream) Priority() internal.ComponentPriority {
return internal.EComponentPriority.LevelMid()
}
func (st *Stream) Start(ctx context.Context) error {
log.Trace("Starting component : %s", st.Name())
return nil
return compStream
}
func (st *Stream) Configure(_ bool) error {
log.Trace("Stream::Configure : %s", st.Name())
conf := StreamOptions{}
err := config.UnmarshalKey(compName, &conf)
err := config.UnmarshalKey(compStream, &conf)
if err != nil {
log.Err("Stream::Configure : config error [invalid config attributes]")
return fmt.Errorf("config error in %s [%s]", st.Name(), err.Error())
@@ -112,11 +90,11 @@ func (st *Stream) Configure(_ bool) error {
return fmt.Errorf("config error in %s [%s]", st.Name(), err.Error())
}
if config.IsSet(compName + ".max-blocks-per-file") {
if config.IsSet(compStream + ".max-blocks-per-file") {
conf.BufferSize = conf.BlockSize * uint64(conf.MaxBlocksPerFile)
}
if config.IsSet(compName+".stream-cache-mb") && conf.BufferSize > 0 {
if config.IsSet(compStream+".stream-cache-mb") && conf.BufferSize > 0 {
conf.CachedObjLimit = conf.StreamCacheMb / conf.BufferSize
if conf.CachedObjLimit == 0 {
conf.CachedObjLimit = 1
@@ -127,93 +105,32 @@ func (st *Stream) Configure(_ bool) error {
log.Err("Stream::Configure : config error, not enough free memory for provided configuration")
return errors.New("not enough free memory for provided stream configuration")
}
st.cache = NewStreamConnection(conf, st)
log.Info("Stream::Configure : Buffer size %v, Block size %v, Handle limit %v, FileCaching %v, Read-only %v, StreamCacheMb %v, MaxBlocksPerFile %v",
log.Info("Stream to Block Cache::Configure : Buffer size %v, Block size %v, Handle limit %v, FileCaching %v, Read-only %v, StreamCacheMb %v, MaxBlocksPerFile %v",
conf.BufferSize, conf.BlockSize, conf.CachedObjLimit, conf.FileCaching, conf.readOnly, conf.StreamCacheMb, conf.MaxBlocksPerFile)
if conf.BlockSize > 0 {
config.Set(compName+".block-size-mb", fmt.Sprint(conf.BlockSize))
}
if conf.MaxBlocksPerFile > 0 {
config.Set(compName+".prefetch", fmt.Sprint(conf.MaxBlocksPerFile))
}
if conf.BufferSize*conf.CachedObjLimit > 0 {
config.Set(compName+".mem-size-mb", fmt.Sprint(conf.BufferSize*conf.CachedObjLimit))
}
return nil
}
// Stop : Stop the component functionality and kill all threads started
func (st *Stream) Stop() error {
log.Trace("Stopping component : %s", st.Name())
return st.cache.Stop()
}
func (st *Stream) CreateFile(options internal.CreateFileOptions) (*handlemap.Handle, error) {
return st.cache.CreateFile(options)
}
func (st *Stream) OpenFile(options internal.OpenFileOptions) (*handlemap.Handle, error) {
return st.cache.OpenFile(options)
}
func (st *Stream) ReadInBuffer(options internal.ReadInBufferOptions) (int, error) {
return st.cache.ReadInBuffer(options)
}
func (st *Stream) WriteFile(options internal.WriteFileOptions) (int, error) {
return st.cache.WriteFile(options)
}
func (st *Stream) FlushFile(options internal.FlushFileOptions) error {
return st.cache.FlushFile(options)
}
func (st *Stream) CloseFile(options internal.CloseFileOptions) error {
return st.cache.CloseFile(options)
}
func (st *Stream) DeleteFile(options internal.DeleteFileOptions) error {
return st.cache.DeleteFile(options)
}
func (st *Stream) RenameFile(options internal.RenameFileOptions) error {
return st.cache.RenameFile(options)
}
func (st *Stream) DeleteDir(options internal.DeleteDirOptions) error {
return st.cache.DeleteDirectory(options)
}
func (st *Stream) RenameDir(options internal.RenameDirOptions) error {
return st.cache.RenameDirectory(options)
}
func (st *Stream) TruncateFile(options internal.TruncateFileOptions) error {
return st.cache.TruncateFile(options)
}
func (st *Stream) GetAttr(options internal.GetAttrOptions) (*internal.ObjAttr, error) {
return st.cache.GetAttr(options)
}
func (st *Stream) SyncFile(options internal.SyncFileOptions) error {
return st.cache.SyncFile(options)
}
// ------------------------- Factory -------------------------------------------
// Pipeline will call this method to create your object, initialize your variables here
// << DO NOT DELETE ANY AUTO GENERATED CODE HERE >>
func NewStreamComponent() internal.Component {
comp := &Stream{}
comp.SetName(compName)
return comp
}
// On init register this component to pipeline and supply your constructor
func init() {
internal.AddComponent(compName, NewStreamComponent)
blockSizeMb := config.AddUint64Flag("block-size-mb", 0, "Size (in MB) of a block to be downloaded during streaming.")
config.BindPFlag(compName+".block-size-mb", blockSizeMb)
config.BindPFlag(compStream+".block-size-mb", blockSizeMb)
maxBlocksMb := config.AddIntFlag("max-blocks-per-file", 0, "Maximum number of blocks to be cached in memory for streaming.")
config.BindPFlag(compName+".max-blocks-per-file", maxBlocksMb)
config.BindPFlag(compStream+".max-blocks-per-file", maxBlocksMb)
maxBlocksMb.Hidden = true
streamCacheSize := config.AddUint64Flag("stream-cache-mb", 0, "Limit total amount of data being cached in memory to conserve memory footprint of blobfuse.")
config.BindPFlag(compName+".stream-cache-mb", streamCacheSize)
config.BindPFlag(compStream+".stream-cache-mb", streamCacheSize)
streamCacheSize.Hidden = true
}

View file

@@ -1,77 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
type StreamConnection interface {
RenameDirectory(options internal.RenameDirOptions) error
DeleteDirectory(options internal.DeleteDirOptions) error
RenameFile(options internal.RenameFileOptions) error
DeleteFile(options internal.DeleteFileOptions) error
CreateFile(options internal.CreateFileOptions) (*handlemap.Handle, error) //TODO TEST THIS
Configure(cfg StreamOptions) error
ReadInBuffer(internal.ReadInBufferOptions) (int, error)
OpenFile(internal.OpenFileOptions) (*handlemap.Handle, error)
WriteFile(options internal.WriteFileOptions) (int, error)
TruncateFile(internal.TruncateFileOptions) error
FlushFile(internal.FlushFileOptions) error
GetAttr(internal.GetAttrOptions) (*internal.ObjAttr, error)
CloseFile(options internal.CloseFileOptions) error
SyncFile(options internal.SyncFileOptions) error
Stop() error
}
// NewAzStorageConnection : Based on account type create respective AzConnection Object
func NewStreamConnection(cfg StreamOptions, stream *Stream) StreamConnection {
if cfg.readOnly {
r := ReadCache{}
r.Stream = stream
_ = r.Configure(cfg)
return &r
}
if cfg.FileCaching {
rw := ReadWriteFilenameCache{}
rw.Stream = stream
_ = rw.Configure(cfg)
return &rw
}
rw := ReadWriteCache{}
rw.Stream = stream
_ = rw.Configure(cfg)
return &rw
}

View file

@@ -1,246 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"io"
"sync/atomic"
"syscall"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
)
type ReadCache struct {
*Stream
StreamConnection
}
func (r *ReadCache) Configure(conf StreamOptions) error {
if conf.BufferSize <= 0 || conf.BlockSize <= 0 || conf.CachedObjLimit <= 0 {
r.StreamOnly = true
log.Info("ReadCache::Configure : Streamonly set to true")
}
r.BlockSize = int64(conf.BlockSize) * mb
r.BufferSize = conf.BufferSize * mb
r.CachedObjLimit = int32(conf.CachedObjLimit)
r.CachedObjects = 0
return nil
}
// Stop : Stop the component functionality and kill all threads started
func (r *ReadCache) Stop() error {
log.Trace("Stopping component : %s", r.Name())
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil {
handle.CacheObj.Lock()
handle.CacheObj.Purge()
handle.CacheObj.Unlock()
}
return true
})
return nil
}
func (r *ReadCache) unlockBlock(block *common.Block, exists bool) {
if exists {
block.RUnlock()
} else {
block.Unlock()
}
}
func (r *ReadCache) OpenFile(options internal.OpenFileOptions) (*handlemap.Handle, error) {
log.Trace("Stream::OpenFile : name=%s, flags=%d, mode=%s", options.Name, options.Flags, options.Mode)
handle, err := r.NextComponent().OpenFile(options)
if err != nil {
log.Err("Stream::OpenFile : error %s [%s]", options.Name, err.Error())
return handle, err
}
if handle == nil {
handle = handlemap.NewHandle(options.Name)
}
if !r.StreamOnly {
handlemap.CreateCacheObject(int64(r.BufferSize), handle)
if r.CachedObjects >= r.CachedObjLimit {
log.Trace("Stream::OpenFile : file handle limit exceeded - switch handle to stream only mode %s [%s]", options.Name, handle.ID)
handle.CacheObj.StreamOnly = true
return handle, nil
}
atomic.AddInt32(&r.CachedObjects, 1)
block, exists, err := r.getBlock(handle, 0)
if err != nil {
log.Err("Stream::OpenFile : error failed to get block on open %s [%s]", options.Name, err.Error())
return handle, err
}
// if it exists then we can just RUnlock since we didn't manipulate its data buffer
r.unlockBlock(block, exists)
}
return handle, err
}
func (r *ReadCache) getBlock(handle *handlemap.Handle, offset int64) (*common.Block, bool, error) {
blockSize := r.BlockSize
blockKeyObj := offset
handle.CacheObj.Lock()
block, found := handle.CacheObj.Get(blockKeyObj)
if !found {
if (offset + blockSize) > handle.Size {
blockSize = handle.Size - offset
}
block = &common.Block{
StartIndex: offset,
EndIndex: offset + blockSize,
Data: make([]byte, blockSize),
}
block.Lock()
handle.CacheObj.Put(blockKeyObj, block)
handle.CacheObj.Unlock()
// if the block does not exist fetch it from the next component
options := internal.ReadInBufferOptions{
Handle: handle,
Offset: block.StartIndex,
Data: block.Data,
}
_, err := r.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
return nil, false, err
}
return block, false, nil
} else {
block.RLock()
handle.CacheObj.Unlock()
return block, true, nil
}
}
func (r *ReadCache) copyCachedBlock(handle *handlemap.Handle, offset int64, data []byte) (int, error) {
dataLeft := int64(len(data))
// counter to track how much we have copied into our request buffer thus far
dataRead := 0
// covers the case if we get a call that is bigger than the file size
for dataLeft > 0 && offset < handle.Size {
// round all offsets to the specific blocksize offsets
cachedBlockStartIndex := (offset - (offset % r.BlockSize))
// Lock on requested block and fileName to ensure it is not being rerequested or manipulated
block, exists, err := r.getBlock(handle, cachedBlockStartIndex)
if err != nil {
r.unlockBlock(block, exists)
log.Err("Stream::ReadInBuffer : failed to download block of %s with offset %d: [%s]", handle.Path, block.StartIndex, err.Error())
return dataRead, err
}
dataCopied := int64(copy(data[dataRead:], block.Data[offset-cachedBlockStartIndex:]))
r.unlockBlock(block, exists)
dataLeft -= dataCopied
offset += dataCopied
dataRead += int(dataCopied)
}
return dataRead, nil
}
func (r *ReadCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, error) {
// if we're only streaming then avoid using the cache
if r.StreamOnly || options.Handle.CacheObj.StreamOnly {
data, err := r.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
log.Err("Stream::ReadInBuffer : error failed to download requested data for %s: [%s]", options.Handle.Path, err.Error())
}
return data, err
}
return r.copyCachedBlock(options.Handle, options.Offset, options.Data)
}
func (r *ReadCache) CloseFile(options internal.CloseFileOptions) error {
log.Trace("Stream::CloseFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
err := r.NextComponent().CloseFile(options)
if err != nil {
log.Err("Stream::CloseFile : error closing file %s [%s]", options.Handle.Path, err.Error())
}
if !r.StreamOnly && !options.Handle.CacheObj.StreamOnly {
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
options.Handle.CacheObj.Purge()
options.Handle.CacheObj.StreamOnly = true
atomic.AddInt32(&r.CachedObjects, -1)
}
return nil
}
func (r *ReadCache) GetAttr(options internal.GetAttrOptions) (*internal.ObjAttr, error) {
// log.Trace("AttrCache::GetAttr : %s", options.Name)
return r.NextComponent().GetAttr(options)
}
func (r *ReadCache) WriteFile(options internal.WriteFileOptions) (int, error) {
return 0, syscall.ENOTSUP
}
func (r *ReadCache) FlushFile(options internal.FlushFileOptions) error {
// log.Trace("Stream::FlushFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
return nil
}
func (r *ReadCache) TruncateFile(options internal.TruncateFileOptions) error {
return syscall.ENOTSUP
}
func (r *ReadCache) RenameFile(options internal.RenameFileOptions) error {
return syscall.ENOTSUP
}
func (r *ReadCache) DeleteFile(options internal.DeleteFileOptions) error {
return syscall.ENOTSUP
}
func (r *ReadCache) DeleteDirectory(options internal.DeleteDirOptions) error {
return syscall.ENOTSUP
}
func (r *ReadCache) RenameDirectory(options internal.RenameDirOptions) error {
return syscall.ENOTSUP
}
func (r *ReadCache) CreateFile(options internal.CreateFileOptions) (*handlemap.Handle, error) {
return nil, syscall.ENOTSUP
}
func (r *ReadCache) SyncFile(_ internal.SyncFileOptions) error {
return nil
}

View file

@@ -1,715 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"context"
"crypto/rand"
"os"
"strings"
"sync"
"syscall"
"testing"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/config"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/golang/mock/gomock"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/suite"
)
type streamTestSuite struct {
suite.Suite
assert *assert.Assertions
stream *Stream
mockCtrl *gomock.Controller
mock *internal.MockComponent
}
var wg = sync.WaitGroup{}
var emptyConfig = ""
// The four file keys to be tested against
var fileNames [4]string = [4]string{"file1", "file2"}
const MB = 1024 * 1024
// Helper methods for setup and getting options/data ========================================
func newTestStream(next internal.Component, configuration string, ro bool) (*Stream, error) {
_ = config.ReadConfigFromReader(strings.NewReader(configuration))
// we must be in read-only mode for read stream
config.SetBool("read-only", ro)
stream := NewStreamComponent()
stream.SetNextComponent(next)
err := stream.Configure(true)
return stream.(*Stream), err
}
func (suite *streamTestSuite) setupTestHelper(config string, ro bool) {
var err error
suite.assert = assert.New(suite.T())
suite.mockCtrl = gomock.NewController(suite.T())
suite.mock = internal.NewMockComponent(suite.mockCtrl)
suite.stream, err = newTestStream(suite.mock, config, ro)
suite.assert.Nil(err)
_ = suite.stream.Start(context.Background())
}
func (suite *streamTestSuite) SetupTest() {
err := log.SetDefaultLogger("silent", common.LogConfig{})
if err != nil {
panic("Unable to set silent logger as default.")
}
suite.setupTestHelper(emptyConfig, true)
}
func (suite *streamTestSuite) cleanupTest() {
_ = suite.stream.Stop()
suite.mockCtrl.Finish()
}
func (suite *streamTestSuite) getRequestOptions(fileIndex int, handle *handlemap.Handle, overwriteEndIndex bool, fileSize, offset, endIndex int64) (internal.OpenFileOptions, internal.ReadInBufferOptions, *[]byte) {
var data []byte
openFileOptions := internal.OpenFileOptions{Name: fileNames[fileIndex], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
if !overwriteEndIndex {
data = make([]byte, suite.stream.BlockSize)
} else {
data = make([]byte, endIndex-offset)
}
readInBufferOptions := internal.ReadInBufferOptions{Handle: handle, Offset: offset, Data: data}
return openFileOptions, readInBufferOptions, &data
}
// return data buffer populated with data of the given size
func getBlockData(suite *streamTestSuite, size int) *[]byte {
dataBuffer := make([]byte, size)
_, _ = rand.Read(dataBuffer)
return &dataBuffer
}
// return the block
func getCachedBlock(suite *streamTestSuite, offset int64, handle *handlemap.Handle) *common.Block {
bk := offset
blk, _ := handle.CacheObj.Get(bk)
return blk
}
// Concurrency helpers with wait group terminations ========================================
func asyncReadInBuffer(suite *streamTestSuite, readInBufferOptions internal.ReadInBufferOptions) {
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
wg.Done()
}
func asyncOpenFile(suite *streamTestSuite, openFileOptions internal.OpenFileOptions) {
_, _ = suite.stream.OpenFile(openFileOptions)
wg.Done()
}
func asyncCloseFile(suite *streamTestSuite, closeFileOptions internal.CloseFileOptions) {
_ = suite.stream.CloseFile(closeFileOptions)
wg.Done()
}
// Assertion helpers ========================================================================
// assert that the block is cached
func assertBlockCached(suite *streamTestSuite, offset int64, handle *handlemap.Handle) {
_, found := handle.CacheObj.Get(offset)
suite.assert.Equal(found, true)
}
// assert that the block is not cached
func assertBlockNotCached(suite *streamTestSuite, offset int64, handle *handlemap.Handle) {
_, found := handle.CacheObj.Get(offset)
suite.assert.Equal(found, false)
}
func assertHandleNotStreamOnly(suite *streamTestSuite, handle *handlemap.Handle) {
suite.assert.Equal(handle.CacheObj.StreamOnly, false)
}
func assertHandleStreamOnly(suite *streamTestSuite, handle *handlemap.Handle) {
suite.assert.Equal(handle.CacheObj.StreamOnly, true)
}
func assertNumberOfCachedFileBlocks(suite *streamTestSuite, numOfBlocks int, handle *handlemap.Handle) {
suite.assert.Equal(numOfBlocks, len(handle.CacheObj.Keys()))
}
// ====================================== End of helper methods =================================
// ====================================== Unit Tests ============================================
func (suite *streamTestSuite) TestDefault() {
defer suite.cleanupTest()
suite.assert.Equal("stream", suite.stream.Name())
suite.assert.EqualValues(true, suite.stream.StreamOnly)
}
func (suite *streamTestSuite) TestConfig() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
suite.assert.Equal("stream", suite.stream.Name())
suite.assert.Equal(16*MB, int(suite.stream.BufferSize))
suite.assert.Equal(4, int(suite.stream.CachedObjLimit))
suite.assert.EqualValues(false, suite.stream.StreamOnly)
suite.assert.EqualValues(4*MB, suite.stream.BlockSize)
// assert streaming is on if any of the values is 0
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
suite.assert.EqualValues(true, suite.stream.StreamOnly)
}
func (suite *streamTestSuite) TestReadWriteFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
_, err := suite.stream.WriteFile(internal.WriteFileOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestReadTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
err := suite.stream.TruncateFile(internal.TruncateFileOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestReadRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
err := suite.stream.RenameFile(internal.RenameFileOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestReadDeleteFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
err := suite.stream.DeleteFile(internal.DeleteFileOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestFlushFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
flushFileOptions := internal.FlushFileOptions{Handle: handle1}
err := suite.stream.FlushFile(flushFileOptions)
suite.assert.Equal(nil, err)
}
func (suite *streamTestSuite) TestSyncFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
syncFileOptions := internal.SyncFileOptions{Handle: handle1}
err := suite.stream.SyncFile(syncFileOptions)
suite.assert.Equal(nil, err)
}
func (suite *streamTestSuite) TestReadDeleteDir() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
err := suite.stream.DeleteDir(internal.DeleteDirOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestReadRenameDir() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
err := suite.stream.RenameDir(internal.RenameDirOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestReadCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// setting block-size to 0 forces stream-only mode
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
_, err := suite.stream.CreateFile(internal.CreateFileOptions{})
suite.assert.Equal(syscall.ENOTSUP, err)
}
func (suite *streamTestSuite) TestStreamOnlyError() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
// assert streaming is on if any of the values is 0
suite.assert.EqualValues(true, suite.stream.StreamOnly)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
_, readInBufferOptions, _ := suite.getRequestOptions(0, handle, true, int64(100*MB), 0, 5)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(0, syscall.ENOENT)
_, err := suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.Equal(err, syscall.ENOENT)
}
// Test file key gets cached on open and first block is prefetched
func (suite *streamTestSuite) TestCacheOnOpenFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 3\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
}
// If open file returns error ensure nothing is cached and error is returned
func (suite *streamTestSuite) TestCacheOnOpenFileError() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 3\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, _, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, syscall.ENOENT)
_, err := suite.stream.OpenFile(openFileOptions)
suite.assert.Equal(err, syscall.ENOENT)
}
// When we evict/remove all blocks of a given file, the file should no longer be referenced in the cache
func (suite *streamTestSuite) TestFileKeyEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
// our config only fits one cached file - therefore every open purges the previously cached file
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle_1 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle_2 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[1]}
for i, handle := range []*handlemap.Handle{handle_1, handle_2} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(i, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
}
// since our configuration limits us to one cached file at a time, the first file key should no longer be present
assertBlockCached(suite, 0, handle_2)
assertNumberOfCachedFileBlocks(suite, 1, handle_2)
}
func (suite *streamTestSuite) TestBlockEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
_, readInBufferOptions, _ = suite.getRequestOptions(0, handle, false, int64(100*MB), 16*MB, 0)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
// we expect our first block to have been evicted
assertBlockNotCached(suite, 0, handle)
assertBlockCached(suite, 16*MB, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
}
// Test handle tracking by opening/closing a file multiple times
func (suite *streamTestSuite) TestHandles() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
closeFileOptions := internal.CloseFileOptions{Handle: handle}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
// we expect to call read in buffer again since we cleaned the cache after the file was closed
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
}
func (suite *streamTestSuite) TestStreamOnlyHandleLimit() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 1\n"
suite.setupTestHelper(config, true)
handle1 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle2 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle3 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle1, false, int64(100*MB), 0, 0)
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle1)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle2, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleStreamOnly(suite, handle2)
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
// we expect to call read in buffer again since we cleaned the cache after the file was closed
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle3, nil)
readInBufferOptions.Handle = handle3
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle3)
}
// Get data that spans two blocks - we expect to have two blocks stored at the end
func (suite *streamTestSuite) TestBlockDataOverlap() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
// options of our request from the stream component
_, userReadInBufferOptions, _ := suite.getRequestOptions(0, handle, true, int64(100*MB), 1*MB, 17*MB)
// options the stream component should request for the second block
_, streamMissingBlockReadInBufferOptions, _ := suite.getRequestOptions(0, handle, false, int64(100*MB), 16*MB, 0)
suite.mock.EXPECT().ReadInBuffer(streamMissingBlockReadInBufferOptions).Return(int(16*MB), nil)
_, _ = suite.stream.ReadInBuffer(userReadInBufferOptions)
// we expect 0-16MB and 16MB-32MB to be cached since our second request is at offset 1MB
assertBlockCached(suite, 0, handle)
assertBlockCached(suite, 16*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
}
func (suite *streamTestSuite) TestFileSmallerThanBlockSize() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: int64(15 * MB), Path: fileNames[0]}
// case1: we know the size of the file from the get go, 15MB - smaller than our block size
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, true, int64(15*MB), 0, 15*MB)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
// we expect our request to be 15MB
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(15*MB), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
blk := getCachedBlock(suite, 0, handle)
suite.assert.Equal(int64(15*MB), blk.EndIndex)
// TODO: case2: file size changed in next component without stream being updated and therefore we get EOF
}
func (suite *streamTestSuite) TestEmptyFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle := &handlemap.Handle{Size: 0, Path: fileNames[0]}
// case1: we know the size of the file from the get go, 0
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle, true, int64(0), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
// we expect our request to be 0
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(0), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
blk := getCachedBlock(suite, 0, handle)
suite.assert.Equal(int64(0), blk.EndIndex)
}
// When we stop the component we expect everything to be deleted
func (suite *streamTestSuite) TestCachePurge() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle_1 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle_2 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[1]}
for i, handle := range []*handlemap.Handle{handle_1, handle_2} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(i, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
}
_ = suite.stream.Stop()
assertBlockNotCached(suite, 0, handle_1)
assertBlockNotCached(suite, 0, handle_2)
}
// Data sanity check
func (suite *streamTestSuite) TestCachedData() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
var dataBuffer *[]byte
var readInBufferOptions internal.ReadInBufferOptions
handle_1 := &handlemap.Handle{Size: int64(32 * MB), Path: fileNames[0]}
data := *getBlockData(suite, 32*MB)
for _, off := range []int64{0, 16} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle_1, false, int64(32*MB), off*MB, 0)
if off == 0 {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle_1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
} else {
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
}
assertBlockCached(suite, off*MB, handle_1)
block := getCachedBlock(suite, off*MB, handle_1)
block.Data = data[off*MB : off*MB+suite.stream.BlockSize]
}
// now let's assert that it doesn't call next component and that the data retrieved is accurate
// case1: data within a cached block
_, readInBufferOptions, dataBuffer = suite.getRequestOptions(0, handle_1, true, int64(32*MB), int64(2*MB), int64(3*MB))
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.Equal(data[2*MB:3*MB], *dataBuffer)
// case2: data cached within two blocks
_, readInBufferOptions, dataBuffer = suite.getRequestOptions(0, handle_1, true, int64(32*MB), int64(14*MB), int64(20*MB))
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.Equal(data[14*MB:20*MB], *dataBuffer)
}
// This test does a data sanity check in the case where a concurrent read causes evictions
func (suite *streamTestSuite) TestAsyncReadAndEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
var blockOneDataBuffer *[]byte
var blockTwoDataBuffer *[]byte
var readInBufferOptions internal.ReadInBufferOptions
handle_1 := &handlemap.Handle{Size: int64(16 * MB), Path: fileNames[0]}
// Even though our file size is 16MB, below we only check against 8MB of the data (two blocks)
data := *getBlockData(suite, 8*MB)
for _, off := range []int64{0, 4} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(0, handle_1, false, int64(16*MB), off*MB, 0)
if off == 0 {
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle_1, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
} else {
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
}
assertBlockCached(suite, off*MB, handle_1)
block := getCachedBlock(suite, off*MB, handle_1)
block.Data = data[off*MB : off*MB+suite.stream.BlockSize]
}
// test concurrent data access to the same file
// call 1: data within a cached block
_, readInBufferOptions, blockOneDataBuffer = suite.getRequestOptions(0, handle_1, true, int64(16*MB), int64(2*MB), int64(3*MB))
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
wg.Add(2)
// call 2: data cached within two blocks
_, readInBufferOptions, blockTwoDataBuffer = suite.getRequestOptions(0, handle_1, true, int64(16*MB), int64(3*MB), int64(6*MB))
go asyncReadInBuffer(suite, readInBufferOptions)
// wait a little to give the read time to run, so block offset 0 gets evicted
time.Sleep(2 * time.Second)
// call 3: get missing block causing an eviction to block 1 with offset 0 - this ensures our data from block 1 is still copied correctly
_, readInBufferOptions, _ = suite.getRequestOptions(0, handle_1, false, int64(16*MB), int64(12*MB), 0)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
go asyncReadInBuffer(suite, readInBufferOptions)
wg.Wait()
// assert data within first block is correct
suite.assert.Equal(data[2*MB:3*MB], *blockOneDataBuffer)
// assert data between two blocks is correct
suite.assert.Equal(data[3*MB:6*MB], *blockTwoDataBuffer)
// assert we did in fact evict the first block and have added the third block
assertBlockCached(suite, 0, handle_1)
assertBlockCached(suite, 12*MB, handle_1)
}
// This tests concurrent opens, ensuring the number of handles and cached blocks is tracked correctly
func (suite *streamTestSuite) TestAsyncOpen() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle_1 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle_2 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[1]}
// Open two files concurrently - each doing a readInBuffer call to store the first block
for i, handle := range []*handlemap.Handle{handle_1, handle_2} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(i, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
wg.Add(1)
go asyncOpenFile(suite, openFileOptions)
}
wg.Wait()
for _, handle := range []*handlemap.Handle{handle_1, handle_2} {
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
}
}
func (suite *streamTestSuite) TestAsyncClose() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, true)
handle_1 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[0]}
handle_2 := &handlemap.Handle{Size: int64(100 * MB), Path: fileNames[1]}
for i, handle := range []*handlemap.Handle{handle_1, handle_2} {
openFileOptions, readInBufferOptions, _ := suite.getRequestOptions(i, handle, false, int64(100*MB), 0, 0)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(int(suite.stream.BlockSize), nil)
wg.Add(1)
go asyncOpenFile(suite, openFileOptions)
}
wg.Wait()
for _, handle := range []*handlemap.Handle{handle_1, handle_2} {
closeFileOptions := internal.CloseFileOptions{Handle: handle}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
wg.Add(1)
go asyncCloseFile(suite, closeFileOptions)
}
wg.Wait()
}
func TestStreamTestSuite(t *testing.T) {
suite.Run(t, new(streamTestSuite))
}

@ -1,535 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"encoding/base64"
"errors"
"io"
"os"
"sync/atomic"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/pbnjay/memory"
)
type ReadWriteCache struct {
*Stream
StreamConnection
}
func (rw *ReadWriteCache) Configure(conf StreamOptions) error {
if conf.BufferSize <= 0 || conf.BlockSize <= 0 || conf.CachedObjLimit <= 0 {
rw.StreamOnly = true
log.Info("ReadWriteCache::Configure : Streamonly set to true")
}
rw.BlockSize = int64(conf.BlockSize) * mb
rw.BufferSize = conf.BufferSize * mb
rw.CachedObjLimit = int32(conf.CachedObjLimit)
rw.CachedObjects = 0
return nil
}
func (rw *ReadWriteCache) CreateFile(options internal.CreateFileOptions) (*handlemap.Handle, error) {
log.Trace("Stream::CreateFile : name=%s, mode=%s", options.Name, options.Mode)
handle, err := rw.NextComponent().CreateFile(options)
if err != nil {
log.Err("Stream::CreateFile : error failed to create file %s: [%s]", options.Name, err.Error())
return handle, err
}
if !rw.StreamOnly {
err = rw.createHandleCache(handle)
if err != nil {
log.Err("Stream::CreateFile : error creating cache object %s [%s]", options.Name, err.Error())
}
}
return handle, err
}
func (rw *ReadWriteCache) OpenFile(options internal.OpenFileOptions) (*handlemap.Handle, error) {
log.Trace("Stream::OpenFile : name=%s, flags=%d, mode=%s", options.Name, options.Flags, options.Mode)
handle, err := rw.NextComponent().OpenFile(options)
if err != nil {
log.Err("Stream::OpenFile : error failed to open file %s [%s]", options.Name, err.Error())
return handle, err
}
if options.Flags&os.O_TRUNC != 0 {
handle.Size = 0
}
if !rw.StreamOnly {
err = rw.createHandleCache(handle)
if err != nil {
log.Err("Stream::OpenFile : error failed to create cache object %s [%s]", options.Name, err.Error())
}
}
return handle, err
}
func (rw *ReadWriteCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, error) {
// log.Trace("Stream::ReadInBuffer : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createHandleCache(options.Handle)
if err != nil {
log.Err("Stream::ReadInBuffer : error failed to create cache object %s [%s]", options.Handle.Path, err.Error())
return 0, err
}
}
if rw.StreamOnly || options.Handle.CacheObj.StreamOnly {
data, err := rw.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
log.Err("Stream::ReadInBuffer : error failed to download requested data for %s: [%s]", options.Handle.Path, err.Error())
}
return data, err
}
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
if atomic.LoadInt64(&options.Handle.Size) == 0 {
return 0, nil
}
read, err := rw.readWriteBlocks(options.Handle, options.Offset, options.Data, false)
if err != nil {
log.Err("Stream::ReadInBuffer : error failed to download requested data for %s: [%s]", options.Handle.Path, err.Error())
}
return read, err
}
func (rw *ReadWriteCache) WriteFile(options internal.WriteFileOptions) (int, error) {
// log.Trace("Stream::WriteFile : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createHandleCache(options.Handle)
if err != nil {
log.Err("Stream::WriteFile : error failed to create cache object %s [%s]", options.Handle.Path, err.Error())
return 0, err
}
}
if rw.StreamOnly || options.Handle.CacheObj.StreamOnly {
data, err := rw.NextComponent().WriteFile(options)
if err != nil && err != io.EOF {
log.Err("Stream::WriteFile : error failed to write data to %s: [%s]", options.Handle.Path, err.Error())
}
return data, err
}
options.Handle.CacheObj.Lock()
defer options.Handle.CacheObj.Unlock()
written, err := rw.readWriteBlocks(options.Handle, options.Offset, options.Data, true)
if err != nil {
log.Err("Stream::WriteFile : error failed to write data to %s: [%s]", options.Handle.Path, err.Error())
}
options.Handle.Flags.Set(handlemap.HandleFlagDirty)
return written, err
}
func (rw *ReadWriteCache) TruncateFile(options internal.TruncateFileOptions) error {
log.Trace("Stream::TruncateFile : name=%s, size=%d", options.Name, options.Size)
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Name {
// err := rw.purge(handle, options.Size, true)
// if err != nil {
// log.Err("Stream::TruncateFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().TruncateFile(options)
if err != nil {
log.Err("Stream::TruncateFile : error truncating file %s [%s]", options.Name, err.Error())
}
return err
}
func (rw *ReadWriteCache) RenameFile(options internal.RenameFileOptions) error {
log.Trace("Stream::RenameFile : name=%s", options.Src)
// if !rw.StreamOnly {
// var err error
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Src {
// err := rw.purge(handle, -1, true)
// if err != nil {
// log.Err("Stream::RenameFile : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().RenameFile(options)
if err != nil {
log.Err("Stream::RenameFile : error renaming file %s [%s]", options.Src, err.Error())
}
return err
}
func (rw *ReadWriteCache) FlushFile(options internal.FlushFileOptions) error {
log.Trace("Stream::FlushFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
if rw.StreamOnly || options.Handle.CacheObj.StreamOnly {
return nil
}
if options.Handle.Dirty() {
err := rw.NextComponent().FlushFile(options)
if err != nil {
log.Err("Stream::FlushFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
options.Handle.Flags.Clear(handlemap.HandleFlagDirty)
}
return nil
}
func (rw *ReadWriteCache) CloseFile(options internal.CloseFileOptions) error {
log.Trace("Stream::CloseFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
// try to flush again to make sure it's cleaned up
err := rw.FlushFile(internal.FlushFileOptions{Handle: options.Handle})
if err != nil {
log.Err("Stream::CloseFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
if !rw.StreamOnly && !options.Handle.CacheObj.StreamOnly {
err = rw.purge(options.Handle, -1)
if err != nil {
log.Err("Stream::CloseFile : error purging file %s [%s]", options.Handle.Path, err.Error())
}
}
err = rw.NextComponent().CloseFile(options)
if err != nil {
log.Err("Stream::CloseFile : error closing file %s [%s]", options.Handle.Path, err.Error())
}
return err
}
func (rw *ReadWriteCache) DeleteFile(options internal.DeleteFileOptions) error {
log.Trace("Stream::DeleteFile : name=%s", options.Name)
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if handle.Path == options.Name {
// err := rw.purge(handle, -1, false)
// if err != nil {
// log.Err("Stream::DeleteFile : failed to purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// }
err := rw.NextComponent().DeleteFile(options)
if err != nil {
log.Err("Stream::DeleteFile : error deleting file %s [%s]", options.Name, err.Error())
}
return err
}
func (rw *ReadWriteCache) DeleteDirectory(options internal.DeleteDirOptions) error {
log.Trace("Stream::DeleteDirectory : name=%s", options.Name)
// if !rw.StreamOnly {
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if strings.HasPrefix(handle.Path, options.Name) {
// err := rw.purge(handle, -1, false)
// if err != nil {
// log.Err("Stream::DeleteDirectory : failed to purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// }
err := rw.NextComponent().DeleteDir(options)
if err != nil {
log.Err("Stream::DeleteDirectory : error deleting directory %s [%s]", options.Name, err.Error())
}
return err
}
func (rw *ReadWriteCache) RenameDirectory(options internal.RenameDirOptions) error {
log.Trace("Stream::RenameDirectory : name=%s", options.Src)
// if !rw.StreamOnly {
// var err error
// handleMap := handlemap.GetHandles()
// handleMap.Range(func(key, value interface{}) bool {
// handle := value.(*handlemap.Handle)
// if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
// if strings.HasPrefix(handle.Path, options.Src) {
// err := rw.purge(handle, -1, true)
// if err != nil {
// log.Err("Stream::RenameDirectory : failed to flush and purge handle cache %s [%s]", handle.Path, err.Error())
// return false
// }
// }
// }
// return true
// })
// if err != nil {
// return err
// }
// }
err := rw.NextComponent().RenameDir(options)
if err != nil {
log.Err("Stream::RenameDirectory : error renaming directory %s [%s]", options.Src, err.Error())
}
return err
}
// Stop : Stop the component functionality and kill all threads started
func (rw *ReadWriteCache) Stop() error {
log.Trace("Stream::Stop : stopping component : %s", rw.Name())
if !rw.StreamOnly {
handleMap := handlemap.GetHandles()
handleMap.Range(func(key, value interface{}) bool {
handle := value.(*handlemap.Handle)
if handle.CacheObj != nil && !handle.CacheObj.StreamOnly {
err := rw.purge(handle, -1)
if err != nil {
log.Err("Stream::Stop : failed to purge handle cache %s [%s]", handle.Path, err.Error())
return false
}
}
return true
})
}
return nil
}
func (rw *ReadWriteCache) GetAttr(options internal.GetAttrOptions) (*internal.ObjAttr, error) {
// log.Trace("AttrCache::GetAttr : %s", options.Name)
return rw.NextComponent().GetAttr(options)
}
func (rw *ReadWriteCache) purge(handle *handlemap.Handle, size int64) error {
handle.CacheObj.Lock()
defer handle.CacheObj.Unlock()
handle.CacheObj.Purge()
// if size isn't -1 then we're resizing
if size != -1 {
atomic.StoreInt64(&handle.Size, size)
}
handle.CacheObj.StreamOnly = true
atomic.AddInt32(&rw.CachedObjects, -1)
return nil
}
func (rw *ReadWriteCache) createHandleCache(handle *handlemap.Handle) error {
handlemap.CreateCacheObject(int64(rw.BufferSize), handle)
// if we hit the handle limit then stream only on this new handle
if atomic.LoadInt32(&rw.CachedObjects) >= rw.CachedObjLimit {
handle.CacheObj.StreamOnly = true
return nil
}
opts := internal.GetFileBlockOffsetsOptions{
Name: handle.Path,
}
var offsets *common.BlockOffsetList
var err error
if handle.Size == 0 {
offsets = &common.BlockOffsetList{}
offsets.Flags.Set(common.SmallFile)
} else {
offsets, err = rw.NextComponent().GetFileBlockOffsets(opts)
if err != nil {
return err
}
}
handle.CacheObj.BlockOffsetList = offsets
// if it's a small file, download it in its entirety when memory is available; otherwise stream only
if handle.CacheObj.SmallFile() {
if uint64(atomic.LoadInt64(&handle.Size)) > memory.FreeMemory() {
handle.CacheObj.StreamOnly = true
return nil
}
block, _, err := rw.getBlock(handle, &common.Block{StartIndex: 0, EndIndex: handle.Size})
if err != nil {
return err
}
block.Id = base64.StdEncoding.EncodeToString(common.NewUUID().Bytes())
// our handle will consist of a single block locally for simpler logic
handle.CacheObj.BlockList = append(handle.CacheObj.BlockList, block)
handle.CacheObj.BlockIdLength = common.GetIdLength(block.Id)
// the file now consists of a block - clear the small-file flag
handle.CacheObj.Flags.Clear(common.SmallFile)
}
atomic.AddInt32(&rw.CachedObjects, 1)
return nil
}
func (rw *ReadWriteCache) putBlock(handle *handlemap.Handle, block *common.Block) error {
ok := handle.CacheObj.Put(block.StartIndex, block)
// if the cache is full and we couldn't evict - we need to do a flush
if !ok {
err := rw.NextComponent().FlushFile(internal.FlushFileOptions{Handle: handle})
if err != nil {
return err
}
ok = handle.CacheObj.Put(block.StartIndex, block)
if !ok {
return errors.New("flushed and still unable to put block in cache")
}
}
return nil
}
func (rw *ReadWriteCache) getBlock(handle *handlemap.Handle, block *common.Block) (*common.Block, bool, error) {
cached_block, found := handle.CacheObj.Get(block.StartIndex)
if !found {
block.Data = make([]byte, block.EndIndex-block.StartIndex)
err := rw.putBlock(handle, block)
if err != nil {
return block, false, err
}
options := internal.ReadInBufferOptions{
Handle: handle,
Offset: block.StartIndex,
Data: block.Data,
}
// skip the remote read for zero-length blocks (e.g. a freshly created file)
if len(block.Data) != 0 {
_, err = rw.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
return nil, false, err
}
}
return block, false, nil
}
return cached_block, true, nil
}
func (rw *ReadWriteCache) readWriteBlocks(handle *handlemap.Handle, offset int64, data []byte, write bool) (int, error) {
// if it's not a small file then we look at the blocks it consists of
blocks, found := handle.CacheObj.FindBlocks(offset, int64(len(data)))
if !found && !write {
return 0, nil
}
dataLeft := int64(len(data))
dataRead, blk_index, dataCopied := 0, 0, int64(0)
lastBlock := handle.CacheObj.BlockList[len(handle.CacheObj.BlockList)-1]
for dataLeft > 0 {
if offset < int64(lastBlock.EndIndex) {
block, _, err := rw.getBlock(handle, blocks[blk_index])
if err != nil {
return dataRead, err
}
if write {
dataCopied = int64(copy(block.Data[offset-blocks[blk_index].StartIndex:], data[dataRead:]))
block.Flags.Set(common.DirtyBlock)
} else {
dataCopied = int64(copy(data[dataRead:], block.Data[offset-blocks[blk_index].StartIndex:]))
}
dataLeft -= dataCopied
offset += dataCopied
dataRead += int(dataCopied)
blk_index += 1
// if appending to the file
} else if write {
emptyByteLength := offset - lastBlock.EndIndex
// if the appended data plus the last block's existing data fits within the block size, just extend the last block
if (lastBlock.EndIndex-lastBlock.StartIndex)+(emptyByteLength+dataLeft) <= rw.BlockSize || lastBlock.EndIndex == 0 {
_, _, err := rw.getBlock(handle, lastBlock)
if err != nil {
return dataRead, err
}
// pure append past the end of the block: pad the gap in between with zero bytes
if emptyByteLength > 0 {
truncated := make([]byte, emptyByteLength)
lastBlock.Data = append(lastBlock.Data, truncated...)
}
lastBlock.Data = append(lastBlock.Data, data[dataRead:]...)
newLastBlockEndIndex := lastBlock.EndIndex + dataLeft + emptyByteLength
handle.CacheObj.Resize(lastBlock.StartIndex, newLastBlockEndIndex)
lastBlock.Flags.Set(common.DirtyBlock)
atomic.StoreInt64(&handle.Size, lastBlock.EndIndex)
dataRead += int(dataLeft)
return dataRead, nil
}
blk := &common.Block{
StartIndex: lastBlock.EndIndex,
EndIndex: lastBlock.EndIndex + dataLeft + emptyByteLength,
Id: base64.StdEncoding.EncodeToString(common.NewUUIDWithLength(handle.CacheObj.BlockIdLength)),
}
blk.Data = make([]byte, blk.EndIndex-blk.StartIndex)
dataCopied = int64(copy(blk.Data[offset-blk.StartIndex:], data[dataRead:]))
blk.Flags.Set(common.DirtyBlock)
handle.CacheObj.BlockList = append(handle.CacheObj.BlockList, blk)
err := rw.putBlock(handle, blk)
if err != nil {
return dataRead, err
}
atomic.StoreInt64(&handle.Size, blk.EndIndex)
dataRead += int(dataCopied)
return dataRead, nil
} else {
return dataRead, nil
}
}
return dataRead, nil
}
func (rw *ReadWriteCache) SyncFile(options internal.SyncFileOptions) error {
log.Trace("ReadWriteCache::SyncFile : handle=%d, path=%s", options.Handle.ID, options.Handle.Path)
err := rw.FlushFile(internal.FlushFileOptions{Handle: options.Handle})
if err != nil {
log.Err("Stream::SyncFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
return nil
}


@ -1,492 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"encoding/base64"
"errors"
"io"
"strings"
"sync"
"sync/atomic"
"time"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/pbnjay/memory"
)
type ReadWriteFilenameCache struct {
sync.RWMutex
*Stream
StreamConnection
fileCache map[string]*handlemap.Cache
}
func (rw *ReadWriteFilenameCache) Configure(conf StreamOptions) error {
if conf.BufferSize <= 0 || conf.BlockSize <= 0 || conf.CachedObjLimit <= 0 {
rw.StreamOnly = true
}
rw.BlockSize = int64(conf.BlockSize) * mb
rw.BufferSize = conf.BufferSize * mb
rw.CachedObjLimit = int32(conf.CachedObjLimit)
rw.fileCache = make(map[string]*handlemap.Cache)
rw.CachedObjects = 0
return nil
}
func (rw *ReadWriteFilenameCache) CreateFile(options internal.CreateFileOptions) (*handlemap.Handle, error) {
log.Trace("Stream::CreateFile : name=%s, mode=%s", options.Name, options.Mode)
handle, err := rw.NextComponent().CreateFile(options)
if err != nil {
log.Err("Stream::CreateFile : error failed to create file %s: [%s]", options.Name, err.Error())
}
if !rw.StreamOnly {
err = rw.createFileCache(handle)
if err != nil {
log.Err("Stream::CreateFile : error creating cache object %s [%s]", options.Name, err.Error())
}
}
return handle, err
}
func (rw *ReadWriteFilenameCache) OpenFile(options internal.OpenFileOptions) (*handlemap.Handle, error) {
log.Trace("Stream::OpenFile : name=%s, flags=%d, mode=%s", options.Name, options.Flags, options.Mode)
handle, err := rw.NextComponent().OpenFile(options)
if err != nil {
log.Err("Stream::OpenFile : error failed to open file %s [%s]", options.Name, err.Error())
return handle, err
}
if !rw.StreamOnly {
err = rw.createFileCache(handle)
if err != nil {
log.Err("Stream::OpenFile : error failed to create cache object %s [%s]", options.Name, err.Error())
}
}
return handle, err
}
func (rw *ReadWriteFilenameCache) ReadInBuffer(options internal.ReadInBufferOptions) (int, error) {
// log.Trace("Stream::ReadInBuffer : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createFileCache(options.Handle)
if err != nil {
log.Err("Stream::ReadInBuffer : error failed to create cache object %s [%s]", options.Handle.Path, err.Error())
return 0, err
}
}
if rw.StreamOnly || options.Handle.CacheObj.StreamOnly {
data, err := rw.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
log.Err("Stream::ReadInBuffer : error failed to download requested data for %s: [%s]", options.Handle.Path, err.Error())
}
return data, err
}
if atomic.LoadInt64(&options.Handle.CacheObj.Size) == 0 {
return 0, nil
}
read, err := rw.readWriteBlocks(options.Handle, options.Offset, options.Data, false)
if err != nil {
log.Err("Stream::ReadInBuffer : error failed to download requested data for %s: [%s]", options.Handle.Path, err.Error())
}
return read, err
}
func (rw *ReadWriteFilenameCache) WriteFile(options internal.WriteFileOptions) (int, error) {
// log.Trace("Stream::WriteFile : name=%s, handle=%d, offset=%d", options.Handle.Path, options.Handle.ID, options.Offset)
if !rw.StreamOnly && options.Handle.CacheObj.StreamOnly {
err := rw.createFileCache(options.Handle)
if err != nil {
log.Err("Stream::WriteFile : error failed to create cache object %s [%s]", options.Handle.Path, err.Error())
return 0, err
}
}
if rw.StreamOnly || options.Handle.CacheObj.StreamOnly {
data, err := rw.NextComponent().WriteFile(options)
if err != nil && err != io.EOF {
log.Err("Stream::WriteFile : error failed to write data to %s: [%s]", options.Handle.Path, err.Error())
}
return data, err
}
written, err := rw.readWriteBlocks(options.Handle, options.Offset, options.Data, true)
if err != nil {
log.Err("Stream::WriteFile : error failed to write data to %s: [%s]", options.Handle.Path, err.Error())
}
options.Handle.Flags.Set(handlemap.HandleFlagDirty)
return written, err
}
// TODO: truncate in cache
func (rw *ReadWriteFilenameCache) TruncateFile(options internal.TruncateFileOptions) error {
log.Trace("Stream::TruncateFile : name=%s, size=%d", options.Name, options.Size)
err := rw.NextComponent().TruncateFile(options)
if err != nil {
log.Err("Stream::TruncateFile : error truncating file %s [%s]", options.Name, err.Error())
return err
}
if !rw.StreamOnly {
rw.purge(options.Name, false)
}
return nil
}
func (rw *ReadWriteFilenameCache) RenameFile(options internal.RenameFileOptions) error {
log.Trace("Stream::RenameFile : name=%s", options.Src)
err := rw.NextComponent().RenameFile(options)
if err != nil {
log.Err("Stream::RenameFile : error renaming file %s [%s]", options.Src, err.Error())
return err
}
if !rw.StreamOnly {
rw.purge(options.Src, false)
}
return nil
}
func (rw *ReadWriteFilenameCache) CloseFile(options internal.CloseFileOptions) error {
log.Trace("Stream::CloseFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
// try to flush again to make sure it's cleaned up
err := rw.FlushFile(internal.FlushFileOptions{Handle: options.Handle})
if err != nil {
log.Err("Stream::CloseFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
if !rw.StreamOnly {
rw.purge(options.Handle.Path, true)
}
err = rw.NextComponent().CloseFile(options)
if err != nil {
log.Err("Stream::CloseFile : error closing file %s [%s]", options.Handle.Path, err.Error())
}
return err
}
func (rw *ReadWriteFilenameCache) FlushFile(options internal.FlushFileOptions) error {
log.Trace("Stream::FlushFile : name=%s, handle=%d", options.Handle.Path, options.Handle.ID)
if options.Handle.Dirty() {
err := rw.NextComponent().FlushFile(options)
if err != nil {
log.Err("Stream::FlushFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
options.Handle.Flags.Clear(handlemap.HandleFlagDirty)
}
return nil
}
func (rw *ReadWriteFilenameCache) DeleteFile(options internal.DeleteFileOptions) error {
log.Trace("Stream::DeleteFile : name=%s", options.Name)
err := rw.NextComponent().DeleteFile(options)
if err != nil {
log.Err("Stream::DeleteFile : error deleting file %s [%s]", options.Name, err.Error())
return err
}
if !rw.StreamOnly {
rw.purge(options.Name, false)
}
return nil
}
func (rw *ReadWriteFilenameCache) DeleteDirectory(options internal.DeleteDirOptions) error {
log.Trace("Stream::DeleteDirectory : name=%s", options.Name)
for fileName := range rw.fileCache {
if strings.HasPrefix(fileName, options.Name) {
rw.purge(fileName, false)
}
}
err := rw.NextComponent().DeleteDir(options)
if err != nil {
log.Err("Stream::DeleteDirectory : error deleting directory %s [%s]", options.Name, err.Error())
return err
}
return nil
}
func (rw *ReadWriteFilenameCache) RenameDirectory(options internal.RenameDirOptions) error {
log.Trace("Stream::RenameDirectory : name=%s", options.Src)
for fileName := range rw.fileCache {
if strings.HasPrefix(fileName, options.Src) {
rw.purge(fileName, false)
}
}
err := rw.NextComponent().RenameDir(options)
if err != nil {
log.Err("Stream::RenameDirectory : error renaming directory %s [%s]", options.Src, err.Error())
return err
}
return nil
}
// Stop : Stop the component functionality and kill all threads started
func (rw *ReadWriteFilenameCache) Stop() error {
log.Trace("Stopping component : %s", rw.Name())
if !rw.StreamOnly {
rw.Lock()
defer rw.Unlock()
for fileName, buffer := range rw.fileCache {
delete(rw.fileCache, fileName)
buffer.Lock()
defer buffer.Unlock()
buffer.Purge()
atomic.AddInt32(&rw.CachedObjects, -1)
}
}
return nil
}
// GetAttr : Get attributes from the next component and overlay the cached size and mtime when the file is buffered
func (rw *ReadWriteFilenameCache) GetAttr(options internal.GetAttrOptions) (*internal.ObjAttr, error) {
// log.Trace("AttrCache::GetAttr : %s", options.Name)
attrs, err := rw.NextComponent().GetAttr(options)
if err != nil {
log.Err("Stream::GetAttr : error getting attributes %s [%s]", options.Name, err.Error())
return nil, err
}
rw.RLock()
defer rw.RUnlock()
buffer, found := rw.fileCache[options.Name]
if !found {
return attrs, err
}
attrs.Mtime = buffer.Mtime
attrs.Size = buffer.Size
return attrs, nil
}
func (rw *ReadWriteFilenameCache) purge(fileName string, close bool) {
// check if this file is cached
rw.Lock()
defer rw.Unlock()
buffer, found := rw.fileCache[fileName]
if found {
// if it is a close operation then decrement the handle count on the buffer
if close {
atomic.AddInt64(&buffer.HandleCount, -1)
}
// rw.RUnlock()
// if the handle count is 0 (no open handles) purge the buffer
if atomic.LoadInt64(&buffer.HandleCount) <= 0 || !close {
delete(rw.fileCache, fileName)
buffer.Lock()
defer buffer.Unlock()
buffer.Purge()
buffer.StreamOnly = true
atomic.AddInt32(&rw.CachedObjects, -1)
}
}
}
func (rw *ReadWriteFilenameCache) createFileCache(handle *handlemap.Handle) error {
// check if file is cached
rw.Lock()
defer rw.Unlock()
buffer, found := rw.fileCache[handle.Path]
if found && !buffer.StreamOnly {
// this file is already cached; point the handle's buffer at the cached object
handle.CacheObj = buffer
atomic.AddInt64(&handle.CacheObj.HandleCount, 1)
return nil
} else {
// if the file is not cached then try to create a buffer for it
handlemap.CreateCacheObject(int64(rw.BufferSize), handle)
if atomic.LoadInt32(&rw.CachedObjects) >= rw.CachedObjLimit {
handle.CacheObj.StreamOnly = true
return nil
} else {
opts := internal.GetFileBlockOffsetsOptions{
Name: handle.Path,
}
offsets, err := rw.NextComponent().GetFileBlockOffsets(opts)
if err != nil {
return err
}
handle.CacheObj.BlockOffsetList = offsets
atomic.StoreInt64(&handle.CacheObj.Size, handle.Size)
handle.CacheObj.Mtime = handle.Mtime
if handle.CacheObj.SmallFile() {
if uint64(atomic.LoadInt64(&handle.Size)) > memory.FreeMemory() {
handle.CacheObj.StreamOnly = true
return nil
}
block, _, err := rw.getBlock(handle, &common.Block{StartIndex: 0, EndIndex: handle.CacheObj.Size})
if err != nil {
return err
}
block.Id = base64.StdEncoding.EncodeToString(common.NewUUID().Bytes())
// our handle will consist of a single block locally for simpler logic
handle.CacheObj.BlockList = append(handle.CacheObj.BlockList, block)
handle.CacheObj.BlockIdLength = common.GetIdLength(block.Id)
// the file now consists of a block - clear the small-file flag
handle.CacheObj.Flags.Clear(common.SmallFile)
}
rw.fileCache[handle.Path] = handle.CacheObj
atomic.AddInt32(&rw.CachedObjects, 1)
atomic.AddInt64(&handle.CacheObj.HandleCount, 1)
return nil
}
}
}
func (rw *ReadWriteFilenameCache) putBlock(handle *handlemap.Handle, buffer *handlemap.Cache, block *common.Block) error {
ok := buffer.Put(block.StartIndex, block)
// if the cache is full and we couldn't evict - we need to do a flush
if !ok {
err := rw.NextComponent().FlushFile(internal.FlushFileOptions{Handle: handle})
if err != nil {
return err
}
// retry the put once after the flush
ok = handle.CacheObj.Put(block.StartIndex, block)
if !ok {
return errors.New("flushed and still unable to put block in cache")
}
}
return nil
}
func (rw *ReadWriteFilenameCache) getBlock(handle *handlemap.Handle, block *common.Block) (*common.Block, bool, error) {
cached_block, found := handle.CacheObj.Get(block.StartIndex)
if !found {
block.Data = make([]byte, block.EndIndex-block.StartIndex)
// put the newly created block into the cache
err := rw.putBlock(handle, handle.CacheObj, block)
if err != nil {
return block, false, err
}
options := internal.ReadInBufferOptions{
Handle: handle,
Offset: block.StartIndex,
Data: block.Data,
}
// skip the remote read for zero-length blocks (e.g. a freshly created file)
if len(block.Data) != 0 {
_, err = rw.NextComponent().ReadInBuffer(options)
if err != nil && err != io.EOF {
return nil, false, err
}
}
return block, false, nil
}
return cached_block, true, nil
}
func (rw *ReadWriteFilenameCache) readWriteBlocks(handle *handlemap.Handle, offset int64, data []byte, write bool) (int, error) {
// if it's not a small file then we look at the blocks it consists of
handle.CacheObj.Lock()
defer handle.CacheObj.Unlock()
blocks, found := handle.CacheObj.FindBlocks(offset, int64(len(data)))
if !found && !write {
return 0, nil
}
dataLeft := int64(len(data))
dataRead, blk_index, dataCopied := 0, 0, int64(0)
lastBlock := handle.CacheObj.BlockList[len(handle.CacheObj.BlockList)-1]
for dataLeft > 0 {
if offset < int64(lastBlock.EndIndex) {
block, _, err := rw.getBlock(handle, blocks[blk_index])
if err != nil {
return dataRead, err
}
if write {
dataCopied = int64(copy(block.Data[offset-blocks[blk_index].StartIndex:], data[dataRead:]))
block.Flags.Set(common.DirtyBlock)
} else {
dataCopied = int64(copy(data[dataRead:], block.Data[offset-blocks[blk_index].StartIndex:]))
}
dataLeft -= dataCopied
offset += dataCopied
dataRead += int(dataCopied)
blk_index += 1
// if appending to the file
} else if write {
emptyByteLength := offset - lastBlock.EndIndex
// if the appended data plus the last block's existing data fits within the block size, just extend the last block
if (lastBlock.EndIndex-lastBlock.StartIndex)+(emptyByteLength+dataLeft) <= rw.BlockSize || lastBlock.EndIndex == 0 {
_, _, err := rw.getBlock(handle, lastBlock)
if err != nil {
return dataRead, err
}
// pure append past the end of the block: pad the gap in between with zero bytes
if emptyByteLength > 0 {
truncated := make([]byte, emptyByteLength)
lastBlock.Data = append(lastBlock.Data, truncated...)
}
lastBlock.Data = append(lastBlock.Data, data[dataRead:]...)
newLastBlockEndIndex := lastBlock.EndIndex + dataLeft + emptyByteLength
handle.CacheObj.Resize(lastBlock.StartIndex, newLastBlockEndIndex)
lastBlock.Flags.Set(common.DirtyBlock)
atomic.StoreInt64(&handle.Size, lastBlock.EndIndex)
atomic.StoreInt64(&handle.CacheObj.Size, lastBlock.EndIndex)
handle.CacheObj.Mtime = time.Now()
dataRead += int(dataLeft)
return dataRead, nil
}
blk := &common.Block{
StartIndex: lastBlock.EndIndex,
EndIndex: lastBlock.EndIndex + dataLeft + emptyByteLength,
Id: base64.StdEncoding.EncodeToString(common.NewUUIDWithLength(handle.CacheObj.BlockIdLength)),
}
blk.Data = make([]byte, blk.EndIndex-blk.StartIndex)
dataCopied = int64(copy(blk.Data[offset-blk.StartIndex:], data[dataRead:]))
blk.Flags.Set(common.DirtyBlock)
handle.CacheObj.BlockList = append(handle.CacheObj.BlockList, blk)
err := rw.putBlock(handle, handle.CacheObj, blk)
if err != nil {
return dataRead, err
}
atomic.StoreInt64(&handle.Size, blk.EndIndex)
atomic.StoreInt64(&handle.CacheObj.Size, blk.EndIndex)
handle.CacheObj.Mtime = time.Now()
dataRead += int(dataCopied)
return dataRead, nil
} else {
return dataRead, nil
}
}
return dataRead, nil
}
func (rw *ReadWriteFilenameCache) SyncFile(options internal.SyncFileOptions) error {
log.Trace("ReadWriteFilenameCache::SyncFile : handle=%d, path=%s", options.Handle.ID, options.Handle.Path)
err := rw.FlushFile(internal.FlushFileOptions{Handle: options.Handle})
if err != nil {
log.Err("Stream::SyncFile : error flushing file %s [%s]", options.Handle.Path, err.Error())
return err
}
return nil
}


@ -1,738 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"os"
"syscall"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/stretchr/testify/suite"
)
func (suite *streamTestSuite) TestWriteFilenameConfig() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
suite.assert.Equal("stream", suite.stream.Name())
suite.assert.Equal(16*MB, int(suite.stream.BufferSize))
suite.assert.Equal(4, int(suite.stream.CachedObjLimit))
suite.assert.EqualValues(false, suite.stream.StreamOnly)
suite.assert.EqualValues(4*MB, suite.stream.BlockSize)
// assert streaming is on if any of the values is 0
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
suite.assert.EqualValues(true, suite.stream.StreamOnly)
}
// ============================================== stream only tests ========================================
func (suite *streamTestSuite) TestStreamOnlyFilenameOpenFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 0\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameCloseFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameFlushFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
flushFileOptions := internal.FlushFileOptions{Handle: handle1}
_ = suite.stream.FlushFile(flushFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameSyncFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
syncFileOptions := internal.SyncFileOptions{Handle: handle1}
_ = suite.stream.SyncFile(syncFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
_, _ = suite.stream.CreateFile(createFileoptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestCreateFilenameFileError() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, syscall.ENOENT)
_, err := suite.stream.CreateFile(createFileoptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameDeleteFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
deleteFileOptions := internal.DeleteFileOptions{Name: handle1.Path}
suite.mock.EXPECT().DeleteFile(deleteFileOptions).Return(nil)
_ = suite.stream.DeleteFile(deleteFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFilenameTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
// ============================================================================ read tests ====================================================
// test small file caching
func (suite *streamTestSuite) TestCacheSmallFileFilenameOnOpen() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
// make the file very large to confirm the handle falls back to stream-only
handle := &handlemap.Handle{Size: int64(100000000 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
assertHandleStreamOnly(suite, handle)
// small file that should get cached on open
handle = &handlemap.Handle{Size: int64(1), Path: fileNames[1]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[1], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
getFileBlockOffsetsOptions = internal.GetFileBlockOffsetsOptions{Name: fileNames[1]}
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
}
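TestCacheSmallFileFilenameOnOpen exercises two paths: a file too large for the buffer is marked stream-only per handle, while a small file is read into the cache in one shot at open time. A rough sketch of that branch (the threshold and function name are assumptions for illustration, not the component's real logic):

```go
package main

import "fmt"

const mb = 1024 * 1024

// cacheOnOpen decides, per handle, whether a file's contents are fetched
// into the buffer at open time. A file that cannot fit in the configured
// buffer is served in stream-only mode instead.
func cacheOnOpen(fileSize, bufferSize int64) bool {
	return fileSize <= bufferSize
}

func main() {
	fmt.Println(cacheOnOpen(1, 32*mb))            // tiny file: cached on open
	fmt.Println(cacheOnOpen(100000000*mb, 32*mb)) // huge file: stream-only
}
```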
func (suite *streamTestSuite) TestFilenameReadInBuffer() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
// file consists of two blocks
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
// read the first block; the mock returns an error
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 2*MB),
}
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), syscall.ENOENT)
_, err := suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.NotEqual(nil, err)
}
// test large files don't cache block on open
func (suite *streamTestSuite) TestFilenameOpenLargeFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
// file consists of two blocks
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
assertHandleNotStreamOnly(suite, handle)
}
// test if handle limit met to stream only next handles
func (suite *streamTestSuite) TestFilenameStreamOnly() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle)
// open new file
handle = &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[1]}
getFileBlockOffsetsOptions = internal.GetFileBlockOffsetsOptions{Name: fileNames[1]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[1], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol = &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
// confirm new handle is stream only since limit is exceeded
assertHandleStreamOnly(suite, handle)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, syscall.ENOENT)
_, err := suite.stream.OpenFile(openFileOptions)
suite.assert.NotEqual(nil, err)
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().WriteFile(writeFileOptions).Return(0, syscall.ENOENT)
_, err = suite.stream.WriteFile(writeFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestFilenameReadLargeFileBlocks() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 1 * MB}, {StartIndex: 1 * MB, EndIndex: 2 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
assertHandleNotStreamOnly(suite, handle1)
// data spans two blocks
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle1,
Offset: 1*MB - 2,
Data: make([]byte, 7),
}
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle1,
Offset: 0,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle1,
Offset: 1 * MB,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle1)
assertBlockCached(suite, 1*MB, handle1)
assertNumberOfCachedFileBlocks(suite, 2, handle1)
}
func (suite *streamTestSuite) TestFilenamePurgeOnClose() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(1), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
suite.mock.EXPECT().CloseFile(internal.CloseFileOptions{Handle: handle}).Return(nil)
_ = suite.stream.CloseFile(internal.CloseFileOptions{Handle: handle})
assertBlockNotCached(suite, 0, handle)
}
// ========================================================= Write tests =================================================================
// TODO: need to add an assertion on the blocks for their start and end indices as we append to them
// test appending to small file evicts older block if cache capacity full
func (suite *streamTestSuite) TestFilenameWriteToSmallFileEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 1\n buffer-size-mb: 1\n max-buffers: 4\n file-caching: true\n"
suite.setupTestHelper(config, false)
// create small file and confirm it gets cached
handle := &handlemap.Handle{Size: int64(1 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
// append new block and confirm old gets evicted
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockNotCached(suite, 0, handle)
assertBlockCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
}
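The eviction tests above assert that when the cache is at capacity, appending a new block pushes out the stalest cached block. The behavior can be sketched as a toy LRU keyed by block start offset; this is only an illustration of the asserted behavior, not the component's real cache:

```go
package main

import "fmt"

// blockCache is a toy LRU keyed by block start offset, evicting the
// least-recently-used block once maxBlocks is exceeded.
type blockCache struct {
	maxBlocks int
	order     []int64 // LRU order: front = oldest
}

// touch records an access to the block at offset, returning the offset of
// any block evicted to make room.
func (c *blockCache) touch(offset int64) (evicted int64, didEvict bool) {
	// if already cached, move the block to the most-recent position
	for i, o := range c.order {
		if o == offset {
			c.order = append(append(c.order[:i:i], c.order[i+1:]...), offset)
			return 0, false
		}
	}
	c.order = append(c.order, offset)
	if len(c.order) > c.maxBlocks {
		evicted = c.order[0]
		c.order = c.order[1:]
		return evicted, true
	}
	return 0, false
}

func main() {
	const mb = 1024 * 1024
	c := &blockCache{maxBlocks: 1}
	c.touch(0)                // block at offset 0 cached
	ev, ok := c.touch(1 * mb) // appending a new block evicts the old one
	fmt.Println(ev, ok)       // 0 true
}
```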
// get block 1, get block 2, mod block 2, mod block 1, create new block - expect block 2 to be removed
func (suite *streamTestSuite) TestFilenameLargeFileEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 1\n buffer-size-mb: 2\n max-buffers: 2\n file-caching: true\n"
suite.setupTestHelper(config, false)
// file consists of two blocks
block1 := &common.Block{StartIndex: 0, EndIndex: 1 * MB}
block2 := &common.Block{StartIndex: 1 * MB, EndIndex: 2 * MB}
handle := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{block1, block2},
BlockIdLength: 10,
}
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
// get second block
readInBufferOptions = internal.ReadInBufferOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
// write to second block
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1*MB + 2,
Data: make([]byte, 2),
}
_, _ = suite.stream.WriteFile(writeFileOptions)
// write to first block
writeFileOptions.Offset = 2
_, _ = suite.stream.WriteFile(writeFileOptions)
// append to file
writeFileOptions.Offset = 2*MB + 4
// when we get the first flush - it means we're clearing out our cache
callbackFunc := func(options internal.FlushFileOptions) {
block1.Flags.Clear(common.DirtyBlock)
block2.Flags.Clear(common.DirtyBlock)
handle.Flags.Set(handlemap.HandleFlagDirty)
}
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Do(callbackFunc).Return(nil)
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockCached(suite, 0, handle)
assertBlockCached(suite, 2*MB, handle)
assertBlockNotCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
suite.assert.Equal(handle.Size, int64(2*MB+6))
}
// test stream only file becomes cached buffer
func (suite *streamTestSuite) TestFilenameStreamOnly2() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
getFileBlockOffsetsOptions2 := internal.GetFileBlockOffsetsOptions{Name: fileNames[1]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 1 * MB}, {StartIndex: 1 * MB, EndIndex: 2 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
assertHandleNotStreamOnly(suite, handle1)
handle2 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[1]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[1], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle2, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 0, handle2)
// confirm the new handle is stream only
assertHandleStreamOnly(suite, handle2)
// close the first handle
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
// get block for second handle and confirm it gets cached
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle2,
Offset: 0,
Data: make([]byte, 4),
}
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions2).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle2,
Offset: 0,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 1, handle2)
assertHandleNotStreamOnly(suite, handle2)
}
func (suite *streamTestSuite) TestFilenameCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.CreateFile(createFileoptions)
assertHandleNotStreamOnly(suite, handle1)
}
func (suite *streamTestSuite) TestFilenameTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 1, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(syscall.ENOENT)
err := suite.stream.TruncateFile(truncateFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestFilenameRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(syscall.ENOENT)
err := suite.stream.RenameFile(renameFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestFilenameRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(syscall.ENOENT)
err := suite.stream.RenameDir(renameDirOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestFilenameDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n file-caching: true\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(syscall.ENOENT)
err := suite.stream.DeleteDir(deleteDirOptions)
suite.assert.NotEqual(nil, err)
}
// func (suite *streamTestSuite) TestFlushFile() {
// }
func TestFilenameWriteStreamTestSuite(t *testing.T) {
suite.Run(t, new(streamTestSuite))
}

Просмотреть файл

@ -1,735 +0,0 @@
/*
_____ _____ _____ ____ ______ _____ ------
| | | | | | | | | | | | |
| | | | | | | | | | | | |
| --- | | | | |-----| |---- | | |-----| |----- ------
| | | | | | | | | | | | |
| ____| |_____ | ____| | ____| | |_____| _____| |_____ |_____
Licensed under the MIT License <http://opensource.org/licenses/MIT>.
Copyright © 2020-2024 Microsoft Corporation. All rights reserved.
Author : <blobfusedev@microsoft.com>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE
*/
package stream
import (
"os"
"syscall"
"testing"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/internal"
"github.com/Azure/azure-storage-fuse/v2/internal/handlemap"
"github.com/stretchr/testify/suite"
)
func (suite *streamTestSuite) TestWriteConfig() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
suite.assert.Equal("stream", suite.stream.Name())
suite.assert.Equal(16*MB, int(suite.stream.BufferSize))
suite.assert.Equal(4, int(suite.stream.CachedObjLimit))
suite.assert.EqualValues(false, suite.stream.StreamOnly)
suite.assert.EqualValues(4*MB, suite.stream.BlockSize)
// assert stream-only mode is enabled if any of the values is 0
suite.cleanupTest()
config = "stream:\n block-size-mb: 0\n buffer-size-mb: 16\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
suite.assert.EqualValues(true, suite.stream.StreamOnly)
}
// ============================================== stream only tests ========================================
func (suite *streamTestSuite) TestStreamOnlyOpenFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set max-buffers to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 0\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyCloseFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyFlushFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
flushFileOptions := internal.FlushFileOptions{Handle: handle1}
_ = suite.stream.FlushFile(flushFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlySyncFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set buffer size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 0\n max-buffers: 10\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 2, Path: fileNames[0]}
syncFileOptions := internal.SyncFileOptions{Handle: handle1}
_ = suite.stream.SyncFile(syncFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
_, _ = suite.stream.CreateFile(createFileoptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestCreateFileError() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, syscall.ENOENT)
_, err := suite.stream.CreateFile(createFileoptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestStreamOnlyDeleteFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
deleteFileOptions := internal.DeleteFileOptions{Name: handle1.Path}
suite.mock.EXPECT().DeleteFile(deleteFileOptions).Return(nil)
_ = suite.stream.DeleteFile(deleteFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set block size to 0 to force stream-only mode
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
func (suite *streamTestSuite) TestStreamOnlyTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 0\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, true)
}
// ============================================================================ read tests ====================================================
// test small file caching
func (suite *streamTestSuite) TestCacheSmallFileOnOpen() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
// make small file very large to confirm it would be stream only
handle := &handlemap.Handle{Size: int64(100000000 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
assertHandleStreamOnly(suite, handle)
// small file that should get cached on open
handle = &handlemap.Handle{Size: int64(1), Path: fileNames[1]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[1], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
getFileBlockOffsetsOptions = internal.GetFileBlockOffsetsOptions{Name: fileNames[1]}
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
}
func (suite *streamTestSuite) TestReadInBuffer() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
// file consists of two blocks
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
// read first block
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 2*MB),
}
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), syscall.ENOENT)
_, err := suite.stream.ReadInBuffer(readInBufferOptions)
suite.assert.NotEqual(nil, err)
}
// test large files don't cache block on open
func (suite *streamTestSuite) TestOpenLargeFile() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
// file consists of two blocks
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
assertHandleNotStreamOnly(suite, handle)
}
// test if handle limit met to stream only next handles
func (suite *streamTestSuite) TestStreamOnly() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertHandleNotStreamOnly(suite, handle)
// create new handle
handle = &handlemap.Handle{Size: int64(4 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions = internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol = &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 2 * MB}, {StartIndex: 2 * MB, EndIndex: 4 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 0, handle)
// confirm new handle is stream only since limit is exceeded
assertHandleStreamOnly(suite, handle)
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, syscall.ENOENT)
_, err := suite.stream.OpenFile(openFileOptions)
suite.assert.NotEqual(nil, err)
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().WriteFile(writeFileOptions).Return(0, syscall.ENOENT)
_, err = suite.stream.WriteFile(writeFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestReadLargeFileBlocks() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 1 * MB}, {StartIndex: 1 * MB, EndIndex: 2 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
assertHandleNotStreamOnly(suite, handle1)
// data spans two blocks
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle1,
Offset: 1*MB - 2,
Data: make([]byte, 7),
}
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle1,
Offset: 0,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle1,
Offset: 1 * MB,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle1)
assertBlockCached(suite, 1*MB, handle1)
assertNumberOfCachedFileBlocks(suite, 2, handle1)
}
func (suite *streamTestSuite) TestPurgeOnClose() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 16\n buffer-size-mb: 32\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
handle := &handlemap.Handle{Size: int64(1), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
suite.mock.EXPECT().CloseFile(internal.CloseFileOptions{Handle: handle}).Return(nil)
_ = suite.stream.CloseFile(internal.CloseFileOptions{Handle: handle})
assertBlockNotCached(suite, 0, handle)
}
// ========================================================= Write tests =================================================================
// TODO: need to add an assertion on the blocks for their start and end indices as we append to them
// test appending to small file evicts older block if cache capacity full
func (suite *streamTestSuite) TestWriteToSmallFileEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 1\n buffer-size-mb: 1\n max-buffers: 4\n"
suite.setupTestHelper(config, false)
// create small file and confirm it gets cached
handle := &handlemap.Handle{Size: int64(1 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
// append new block and confirm old gets evicted
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockNotCached(suite, 0, handle)
assertBlockCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
assertHandleNotStreamOnly(suite, handle)
}
// get block 1, get block 2, mod block 2, mod block 1, create new block - expect block 2 to be removed
func (suite *streamTestSuite) TestLargeFileEviction() {
defer suite.cleanupTest()
suite.cleanupTest()
config := "stream:\n block-size-mb: 1\n buffer-size-mb: 2\n max-buffers: 2\n"
suite.setupTestHelper(config, false)
// file consists of two blocks
block1 := &common.Block{StartIndex: 0, EndIndex: 1 * MB}
block2 := &common.Block{StartIndex: 1 * MB, EndIndex: 2 * MB}
handle := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{block1, block2},
BlockIdLength: 10,
}
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle,
Offset: 0,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle)
assertNumberOfCachedFileBlocks(suite, 1, handle)
// get second block
readInBufferOptions = internal.ReadInBufferOptions{
Handle: handle,
Offset: 1 * MB,
Data: make([]byte, 1*MB),
}
suite.mock.EXPECT().ReadInBuffer(readInBufferOptions).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
// write to second block
writeFileOptions := internal.WriteFileOptions{
Handle: handle,
Offset: 1*MB + 2,
Data: make([]byte, 2),
}
_, _ = suite.stream.WriteFile(writeFileOptions)
// write to first block
writeFileOptions.Offset = 2
_, _ = suite.stream.WriteFile(writeFileOptions)
// append to file
writeFileOptions.Offset = 2*MB + 4
// when we get the first flush - it means we're clearing out our cache
callbackFunc := func(options internal.FlushFileOptions) {
block1.Flags.Clear(common.DirtyBlock)
block2.Flags.Clear(common.DirtyBlock)
handle.Flags.Set(handlemap.HandleFlagDirty)
}
suite.mock.EXPECT().FlushFile(internal.FlushFileOptions{Handle: handle}).Do(callbackFunc).Return(nil)
_, _ = suite.stream.WriteFile(writeFileOptions)
assertBlockCached(suite, 0, handle)
assertBlockCached(suite, 2*MB, handle)
assertBlockNotCached(suite, 1*MB, handle)
assertNumberOfCachedFileBlocks(suite, 2, handle)
suite.assert.Equal(handle.Size, int64(2*MB+6))
}
// test stream only handle becomes cached handle
func (suite *streamTestSuite) TestStreamOnlyHandle() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
getFileBlockOffsetsOptions := internal.GetFileBlockOffsetsOptions{Name: fileNames[0]}
openFileOptions := internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{{StartIndex: 0, EndIndex: 1 * MB}, {StartIndex: 1 * MB, EndIndex: 2 * MB}},
}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle1, nil)
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle1)
assertNumberOfCachedFileBlocks(suite, 0, handle1)
assertHandleNotStreamOnly(suite, handle1)
handle2 := &handlemap.Handle{Size: int64(2 * MB), Path: fileNames[0]}
openFileOptions = internal.OpenFileOptions{Name: fileNames[0], Flags: os.O_RDONLY, Mode: os.FileMode(0777)}
suite.mock.EXPECT().OpenFile(openFileOptions).Return(handle2, nil)
_, _ = suite.stream.OpenFile(openFileOptions)
assertBlockNotCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 0, handle2)
// confirm new handle is stream only
assertHandleStreamOnly(suite, handle2)
//close the first handle
closeFileOptions := internal.CloseFileOptions{Handle: handle1}
suite.mock.EXPECT().CloseFile(closeFileOptions).Return(nil)
_ = suite.stream.CloseFile(closeFileOptions)
// get block for second handle and confirm it gets cached
readInBufferOptions := internal.ReadInBufferOptions{
Handle: handle2,
Offset: 0,
Data: make([]byte, 4),
}
suite.mock.EXPECT().GetFileBlockOffsets(getFileBlockOffsetsOptions).Return(bol, nil)
suite.mock.EXPECT().ReadInBuffer(internal.ReadInBufferOptions{
Handle: handle2,
Offset: 0,
Data: make([]byte, 1*MB)}).Return(len(readInBufferOptions.Data), nil)
_, _ = suite.stream.ReadInBuffer(readInBufferOptions)
assertBlockCached(suite, 0, handle2)
assertNumberOfCachedFileBlocks(suite, 1, handle2)
assertHandleNotStreamOnly(suite, handle2)
}
func (suite *streamTestSuite) TestCreateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
createFileoptions := internal.CreateFileOptions{Name: handle1.Path, Mode: 0777}
bol := &common.BlockOffsetList{
BlockList: []*common.Block{},
}
bol.Flags.Set(common.SmallFile)
suite.mock.EXPECT().CreateFile(createFileoptions).Return(handle1, nil)
_, _ = suite.stream.CreateFile(createFileoptions)
assertHandleNotStreamOnly(suite, handle1)
}
func (suite *streamTestSuite) TestTruncateFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 1, Path: fileNames[0]}
truncateFileOptions := internal.TruncateFileOptions{Name: handle1.Path}
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(nil)
_ = suite.stream.TruncateFile(truncateFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().TruncateFile(truncateFileOptions).Return(syscall.ENOENT)
err := suite.stream.TruncateFile(truncateFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestRenameFile() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
handle1 := &handlemap.Handle{Size: 0, Path: fileNames[0]}
renameFileOptions := internal.RenameFileOptions{Src: handle1.Path, Dst: handle1.Path + "new"}
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(nil)
_ = suite.stream.RenameFile(renameFileOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().RenameFile(renameFileOptions).Return(syscall.ENOENT)
err := suite.stream.RenameFile(renameFileOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestRenameDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
renameDirOptions := internal.RenameDirOptions{Src: "/test/path", Dst: "/test/path_new"}
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(nil)
_ = suite.stream.RenameDir(renameDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().RenameDir(renameDirOptions).Return(syscall.ENOENT)
err := suite.stream.RenameDir(renameDirOptions)
suite.assert.NotEqual(nil, err)
}
func (suite *streamTestSuite) TestDeleteDirectory() {
defer suite.cleanupTest()
suite.cleanupTest()
// set handle limit to 1
config := "stream:\n block-size-mb: 4\n buffer-size-mb: 32\n max-buffers: 1\n"
suite.setupTestHelper(config, false)
deleteDirOptions := internal.DeleteDirOptions{Name: "/test/path"}
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(nil)
_ = suite.stream.DeleteDir(deleteDirOptions)
suite.assert.Equal(suite.stream.StreamOnly, false)
suite.mock.EXPECT().DeleteDir(deleteDirOptions).Return(syscall.ENOENT)
err := suite.stream.DeleteDir(deleteDirOptions)
suite.assert.NotEqual(nil, err)
}
// func (suite *streamTestSuite) TestFlushFile() {
// }
func TestWriteStreamTestSuite(t *testing.T) {
suite.Run(t, new(streamTestSuite))
}


@@ -37,6 +37,7 @@ import (
"context"
"fmt"
"github.com/Azure/azure-storage-fuse/v2/common"
"github.com/Azure/azure-storage-fuse/v2/common/log"
)
@@ -57,6 +58,10 @@ func NewPipeline(components []string, isParent bool) (*Pipeline, error) {
comps := make([]Component, 0)
lastPriority := EComponentPriority.Producer()
for _, name := range components {
if name == "stream" {
common.IsStream = true
name = "block_cache"
}
// Search component exists in our registered map or not
compInit, ok := registeredComponents[name]
if ok {


@@ -77,6 +77,26 @@ func NewComponentC() Component {
return &ComponentC{}
}
type ComponentStream struct {
BaseComponent
}
func NewComponentStream() Component {
comp := &ComponentStream{}
comp.SetName("stream")
return comp
}
type ComponentBlockCache struct {
BaseComponent
}
func NewComponentBlockCache() Component {
comp := &ComponentBlockCache{}
comp.SetName("block_cache")
return comp
}
/////////////////////////////////////////
type pipelineTestSuite struct {
@@ -88,6 +108,8 @@ func (suite *pipelineTestSuite) SetupTest() {
AddComponent("ComponentA", NewComponentA)
AddComponent("ComponentB", NewComponentB)
AddComponent("ComponentC", NewComponentC)
AddComponent("stream", NewComponentStream)
AddComponent("block_cache", NewComponentBlockCache)
suite.assert = assert.New(suite.T())
}
@@ -111,7 +133,7 @@ func (s *pipelineTestSuite) TestInvalidComponent() {
func (s *pipelineTestSuite) TestStartStopCreateNewPipeline() {
p, err := NewPipeline([]string{"ComponentA", "ComponentB"}, false)
s.assert.Nil(err)
print(p.components[0].Name())
err = p.Start(nil)
s.assert.Nil(err)
@@ -119,6 +141,12 @@
s.assert.Nil(err)
}
func (s *pipelineTestSuite) TestStreamToBlockCacheConfig() {
p, err := NewPipeline([]string{"stream"}, false)
s.assert.Nil(err)
s.assert.Equal(p.components[0].Name(), "block_cache")
}
func TestPipelineTestSuite(t *testing.T) {
suite.Run(t, new(pipelineTestSuite))
}


@@ -3,7 +3,7 @@
# 1. All boolean configs (true|false config) (except ignore-open-flags, virtual-directory) are set to 'false' by default.
# No need to mention them in your config file unless you are setting them to true.
# 2. 'loopbackfs' is purely for testing and shall not be used in production configuration.
# 3. 'stream' and 'file_cache' can not co-exist and config file shall have only one of them based on your use case.
# 3. 'block-cache' and 'file_cache' can not co-exist and config file shall have only one of them based on your use case.
# 4. By default log level is set to 'log_warning' level and are redirected to syslog.
# Either use 'base' logging or syslog filters to redirect logs to separate file.
# To install syslog filter follow below steps:
@@ -67,14 +67,6 @@ libfuse:
extension: <physical path to extension library>
direct-io: true|false <enable to bypass the kernel cache>
# Streaming configuration – remove and redirect to block-cache
stream:
# If block-size-mb, max-buffers or buffer-size-mb are 0, the stream component will not cache blocks.
block-size-mb: <for read only mode size of each block to be cached in memory while streaming (in MB). For read/write size of newly created blocks. Default - 0 MB>
max-buffers: <total number of buffers to store blocks in. Default - 0>
buffer-size-mb: <size for each buffer. Default - 0 MB>
file-caching: <read/write mode file level caching or handle level caching. Default - false (handle level caching ON)>
# Block cache related configuration
block_cache:
block-size-mb: <size of each block to be cached in memory (in MB). Default - 16 MB>
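
For reference, a deprecated stream config maps onto block-cache roughly as follows. This is an illustrative sketch: the values are made up, and the exact mapping (in particular how the memory budget is derived) is defined by the implicit-conversion code, not by this example.

```yaml
# Deprecated stream config:
#   stream:
#     block-size-mb: 8
#     buffer-size-mb: 8
#     max-buffers: 80
# Roughly equivalent block-cache config after implicit conversion:
block_cache:
  block-size-mb: 8
  mem-size-mb: 640    # assumed to be buffer-size-mb * max-buffers
```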

testdata/config/azure_stream.yaml

@@ -16,7 +16,7 @@ libfuse:
ignore-open-flags: true
stream:
block-size-mb: 4
block-size-mb: 16
max-buffers: 80
buffer-size-mb: 8